
fix: modify bug in skill memory #1060

Closed
PolarisLiu1 wants to merge 1 commit into MemTensor:dev-20260202-v2.0.5 from PolarisLiu1:dev-20260202-v2.0.5

Conversation

@PolarisLiu1

Description

Please include a summary of the change, the problem it solves, the implementation approach, and relevant context. List any dependencies required for this change.

Related Issue (Required): Fixes @issue_number

Type of change

Please delete options that are not relevant.

  • Bug fix (non-breaking change which fixes an issue)

How Has This Been Tested?

Please describe the tests that you ran to verify your changes. Provide instructions so we can reproduce. Please also list any relevant details for your test configuration.

  • Unit Test
  • Test Script Or Test Steps (please provide)
  • Pipeline Automated API Test (please provide)

Checklist

  • I have performed a self-review of my own code
  • I have commented my code in hard-to-understand areas
  • I have added tests that prove my fix is effective or that my feature works
  • I have created related documentation issue/PR in MemOS-Docs (if applicable)
  • I have linked the issue to this PR (if applicable)
  • I have mentioned the person who will review this PR

Reviewer Checklist

  • closes #xxxx (Replace xxxx with the GitHub issue number)
  • Made sure Checks passed
  • Tests have been provided

Copilot AI review requested due to automatic review settings February 9, 2026 06:14
Copilot AI (Contributor) left a comment


Pull request overview

This PR adjusts the skill-memory extraction prompts and the LLM extraction pipeline to improve “update vs create” behavior when existing skill memories are present, and to reduce incorrect update outputs when no prior skills exist.

Changes:

  • Refines EN/ZH prompt instructions to update an existing skill only when the topic matches an existing skill memory.
  • Updates _extract_skill_memory_by_llm_md to include all SkillMemory items (removing the relativity threshold filter).
  • Adds language-dependent prompt headers and a post-parse guard to force update=false when no old skills exist.

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 4 comments.

File | Description
src/memos/templates/skill_mem_prompt.py | Refines similarity/update instructions in EN/ZH prompt text to better control update vs create behavior.
src/memos/mem_reader/read_skill_memory/process_skill_memory.py | Modifies how prior skill/context is embedded into the LLM prompt and adds a hallucination guard for update/old_memory_id.


Comment on lines 531 to 536
  old_skill_content = (
-     ("Exsit Skill Schemas: \n" + json.dumps(old_skill_content, ensure_ascii=False, indent=2))
+     "已有技能列表: \n" if lang == "zh" else "Exsit Skill Schemas: \n" +
+     json.dumps(old_skill_content, ensure_ascii=False, indent=2)
      if old_skill_content
      else ""
  )

Copilot AI Feb 9, 2026


The conditional expression building old_skill_content is parsed such that when lang == "zh" it always returns only the header ("已有技能列表") and never appends the JSON payload (and it will even return the header when the skill list is empty). Restructure this so the language-specific header is combined with json.dumps(...) only when the original skill list is non-empty (e.g., compute a header first, then header + dumps), otherwise use an empty string.
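
A minimal restructuring along those lines (a sketch only, reusing the names from the snippet above — lang, old_skill_content, and the module's existing json import — and keeping the current label spellings):

    # Build the language-specific header first, then attach the JSON payload
    # only when the original skill list is non-empty.
    header = "已有技能列表: \n" if lang == "zh" else "Exsit Skill Schemas: \n"
    old_skill_content = (
        header + json.dumps(old_skill_content, ensure_ascii=False, indent=2)
        if old_skill_content
        else ""
    )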

Comment on lines +538 to 540
old_memories_context = "相关历史对话:\n" if lang == "zh" else "Relavant Context:\n" + "\n".join(
[f"{k}:\n{v}" for k, v in old_memories_context.items()]
)

Copilot AI Feb 9, 2026


old_memories_context currently drops the actual joined history content when lang == "zh" (it becomes only "相关历史对话:\n"). This makes the prompt miss the recalled context for Chinese conversations. Build the prefix based on language, then always append the joined content (and consider returning an empty string if old_memories_context is empty).
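
One way to keep the recalled history in both languages (a sketch under the same assumptions as above; the existing label spelling is left untouched here):

    # Pick the prefix by language, then always append the joined history;
    # fall back to an empty string when nothing was recalled.
    prefix = "相关历史对话:\n" if lang == "zh" else "Relavant Context:\n"
    old_memories_context = (
        prefix + "\n".join(f"{k}:\n{v}" for k, v in old_memories_context.items())
        if old_memories_context
        else ""
    )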

Comment on lines +575 to +578
    # If no old skill content, set update to False (for llm hallucination)
    if not old_skill_content:
        skill_memory["old_memory_id"] = ''
        skill_memory["update"] = False

Copilot AI Feb 9, 2026


The hallucination guard if not old_skill_content: is now checking the post-formatted string, but old_skill_content is reassigned from a list to a string earlier. With the current formatting logic, this can become non-empty even when there are no skills (or no JSON payload), so the guard may not fire. Keep the original list in a separate variable (or check it before formatting) to decide whether to force update=False/empty old_memory_id.
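
A sketch of one way to keep that guard reliable: snapshot the raw list before it is replaced by the formatted prompt string, and check the snapshot instead. The raw_old_skills name is illustrative (not from the PR), and header is assumed from the first sketch above.

    raw_old_skills = old_skill_content  # still the original list at this point
    old_skill_content = (
        header + json.dumps(raw_old_skills, ensure_ascii=False, indent=2)
        if raw_old_skills
        else ""
    )
    # ... LLM call and response parsing happen here ...
    # Drive the hallucination guard off the untouched list, not the prompt string.
    if not raw_old_skills:
        skill_memory["old_memory_id"] = ""
        skill_memory["update"] = False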

Comment on lines +532 to 539
"已有技能列表: \n" if lang == "zh" else "Exsit Skill Schemas: \n" +
json.dumps(old_skill_content, ensure_ascii=False, indent=2)
if old_skill_content
else ""
)

old_memories_context = "Relavant Context:\n" + "\n".join(
old_memories_context = "相关历史对话:\n" if lang == "zh" else "Relavant Context:\n" + "\n".join(
[f"{k}:\n{v}" for k, v in old_memories_context.items()]

Copilot AI Feb 9, 2026


Spelling in the prompt labels: Exsit Skill Schemas / Relavant Context are misspelled. Typos in prompt text can reduce LLM instruction clarity; please correct them (e.g., Existing Skill Schemas, Relevant Context).

PolarisLiu1 closed this Feb 9, 2026
