
`add_observations` tool in Knowledge Graph Memory Server frequently fails when invoked by GitHub Copilot Chat (Claude) in VS Code

Open · sascharo opened this issue 7 months ago · 1 comment

Describe the bug When using the Knowledge Graph Memory Server (installed via npx) in Visual Studio Code with GitHub Copilot Chat and Claude, the `add_observations` tool often fails silently or with an error, whereas other tools such as `create_entities` work consistently. This suggests a reliability or integration issue specific to `add_observations`.

To Reproduce Steps to reproduce the behavior:

  1. Open VS Code Insiders (version details below).
  2. Launch GitHub Copilot Chat (with Claude selected as the agent, if applicable).
  3. Interact with a tool using the add_observations method.
  4. Observe that add_observations often fails or is not executed correctly.
  5. In contrast, call create_entities – observe it works as expected.
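For comparison, the argument shapes the two tools take are different. A minimal sketch of both, with key names assumed from the reference `@modelcontextprotocol/server-memory` README (other builds may differ):

```python
# create_entities: each entity carries name / entityType / observations
# (key names assumed from the reference memory server README)
create_entities_args = {
    "entities": [
        {"name": "TestEntity", "entityType": "person", "observations": ["initial note"]}
    ]
}

# add_observations: each item carries entityName / contents,
# and the named entity must already exist in the graph
add_observations_args = {
    "observations": [
        {"entityName": "TestEntity", "contents": ["later note"]}
    ]
}
```

If a client sends `add_observations` items with keys other than `entityName`, the server has no entity name to look up, which could explain why this tool fails while `create_entities` succeeds.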

Expected behavior The add_observations tool should succeed reliably when invoked by GitHub Copilot Chat in the same way as other tools like create_entities.

Logs Unfortunately, in some cases no explicit errors are printed to the terminal, but the failure can be confirmed through:

  • Incomplete or missing results from memory updates.
  • Lack of entity enrichment that would normally follow add_observations.

Additional context

  • VS Code Version:

    Version: 1.101.0-insider (user setup)  
    Commit: 921786be45c46b54e727aa4f210819bbc6430a6d  
    Date: 2025-06-06T00:25:36.239Z  
    Electron: 35.5.1  
    ElectronBuildId: 11708675  
    Chromium: 134.0.6998.205  
    Node.js: 22.15.1  
    V8: 13.4.114.21-electron.0  
    OS: Windows_NT x64 10.0.26100  
    
  • Knowledge Graph Memory Server is started via npx.

  • Issue seems specific to GitHub Copilot Chat + Claude integration, possibly due to malformed requests or unexpected payloads.

  • A minimal reproduction may involve a manual call to add_observations via an equivalent API endpoint to compare success/failure modes.
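One way to sketch such a manual reproduction: build the raw JSON-RPC 2.0 `tools/call` request a client would send over the server's stdio transport, then compare it against what Copilot Chat actually emits. The argument shape below (`entityName` plus a `contents` list) is an assumption taken from the reference `@modelcontextprotocol/server-memory` README, not verified against every build:

```python
import json

def make_add_observations_request(request_id, entity_name, contents):
    """Build a JSON-RPC 2.0 tools/call request for add_observations.

    The argument shape (entityName / contents) follows the reference
    @modelcontextprotocol/server-memory README; other builds may differ.
    """
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": "add_observations",
            "arguments": {
                "observations": [
                    {"entityName": entity_name, "contents": list(contents)}
                ]
            },
        },
    }

req = make_add_observations_request(1, "TestEntity", ["first observation"])
# One request per line; this could be piped to the npx server's stdin
# (after the usual initialize handshake) to compare success/failure modes.
print(json.dumps(req))
```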

sascharo avatar Jun 06 '25 14:06 sascharo

I am seeing the same; I get errors like this:


2025-10-19T18:56:32.758852+02:00 server[299545]: 2025-10-19 18:56:32 | ERROR    | src.open_llm_vtuber.mcpp.tool_executor:run_single_tool:377 | Unexpected error executing tool 'add_observations': Entity with name undefined not found
2025-10-19T18:56:32.758951+02:00 server[299545]: Traceback (most recent call last):
2025-10-19T18:56:32.759031+02:00 server[299545]:
2025-10-19T18:56:32.759131+02:00 server[299545]:   File "/app/run_server.py", line 178, in <module>
2025-10-19T18:56:32.759257+02:00 server[299545]:     run(console_log_level=console_log_level)
2025-10-19T18:56:32.759340+02:00 server[299545]:     │                     └ 'INFO'
2025-10-19T18:56:32.759430+02:00 server[299545]:     └ <function run at 0x7bb2bdffc670>
2025-10-19T18:56:32.759503+02:00 server[299545]:
2025-10-19T18:56:32.759573+02:00 server[299545]:   File "/app/run_server.py", line 159, in run
2025-10-19T18:56:32.759628+02:00 server[299545]:     uvicorn.run(
2025-10-19T18:56:32.759685+02:00 server[299545]:     │       └ <function run at 0x7bb2e0bd6830>
2025-10-19T18:56:32.759735+02:00 server[299545]:     └ <module 'uvicorn' from '/app/.venv/lib/python3.10/site-packages/uvicorn/__init__.py'>
2025-10-19T18:56:32.759796+02:00 server[299545]:
2025-10-19T18:56:32.759850+02:00 server[299545]:   File "/app/.venv/lib/python3.10/site-packages/uvicorn/main.py", line 579, in run
2025-10-19T18:56:32.759908+02:00 server[299545]:     server.run()
2025-10-19T18:56:32.759969+02:00 server[299545]:     │      └ <function Server.run at 0x7bb2e0aac9d0>
2025-10-19T18:56:32.760028+02:00 server[299545]:     └ <uvicorn.server.Server object at 0x7bb2bc7d1a80>
2025-10-19T18:56:32.760079+02:00 server[299545]:   File "/app/.venv/lib/python3.10/site-packages/uvicorn/server.py", line 66, in run
2025-10-19T18:56:32.760155+02:00 server[299545]:     return asyncio.run(self.serve(sockets=sockets))
2025-10-19T18:56:32.760232+02:00 server[299545]:            │       │   │    │             └ None
2025-10-19T18:56:32.760306+02:00 server[299545]:            │       │   │    └ <function Server.serve at 0x7bb2e0aaca60>
2025-10-19T18:56:32.760487+02:00 server[299545]:            │       │   └ <uvicorn.server.Server object at 0x7bb2bc7d1a80>
2025-10-19T18:56:32.760555+02:00 server[299545]:            │       └ <function run at 0x7bb2e0db2950>
2025-10-19T18:56:32.760633+02:00 server[299545]:            └ <module 'asyncio' from '/root/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/asyncio/__init__.py'>
2025-10-19T18:56:32.760712+02:00 server[299545]:   File "/root/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/asyncio/runners.py", line 44, in run
2025-10-19T18:56:32.760779+02:00 server[299545]:     return loop.run_until_complete(main)
2025-10-19T18:56:32.760851+02:00 server[299545]:            │    │                  └ <coroutine object Server.serve at 0x7bb2b92ccc10>
2025-10-19T18:56:32.760917+02:00 server[299545]:            │    └ <cyfunction Loop.run_until_complete at 0x7bb2bc782cf0>
2025-10-19T18:56:32.760986+02:00 server[299545]:            └ <uvloop.Loop running=True closed=False debug=False>
2025-10-19T18:56:32.761063+02:00 server[299545]:
2025-10-19T18:56:32.761145+02:00 server[299545]:   File "/app/src/open_llm_vtuber/conversations/single_conversation.py", line 92, in process_single_conversation
2025-10-19T18:56:32.761232+02:00 server[299545]:     async for output_item in agent_output_stream:
2025-10-19T18:56:32.761296+02:00 server[299545]:               │              └ <async_generator object BasicMemoryAgent._chat_function_factory.<locals>.chat_with_memory at 0x7bb2bc836d40>
2025-10-19T18:56:32.761369+02:00 server[299545]:               └ {'type': 'tool_call_status', 'tool_id': 'call_1c3c8cc8-a91e-41ab-b6bb-4cdc13cc9d61', 'tool_name': 'add_observations', 'status...
2025-10-19T18:56:32.761434+02:00 server[299545]:
2025-10-19T18:56:32.761497+02:00 server[299545]:   File "/app/src/open_llm_vtuber/agent/transformers.py", line 182, in wrapper
2025-10-19T18:56:32.761567+02:00 server[299545]:     async for item in stream:
2025-10-19T18:56:32.761638+02:00 server[299545]:               │       └ <async_generator object BasicMemoryAgent._chat_function_factory.<locals>.chat_with_memory at 0x7bb2be0642c0>
2025-10-19T18:56:32.761705+02:00 server[299545]:               └ {'type': 'tool_call_status', 'tool_id': 'call_1c3c8cc8-a91e-41ab-b6bb-4cdc13cc9d61', 'tool_name': 'add_observations', 'status...
2025-10-19T18:56:32.761768+02:00 server[299545]:
2025-10-19T18:56:32.761831+02:00 server[299545]:   File "/app/src/open_llm_vtuber/agent/transformers.py", line 126, in wrapper
2025-10-19T18:56:32.761903+02:00 server[299545]:     async for item in stream:
2025-10-19T18:56:32.761965+02:00 server[299545]:               │       └ <async_generator object BasicMemoryAgent._chat_function_factory.<locals>.chat_with_memory at 0x7bb2be064a40>
2025-10-19T18:56:32.762032+02:00 server[299545]:               └ {'type': 'tool_call_status', 'tool_id': 'call_1c3c8cc8-a91e-41ab-b6bb-4cdc13cc9d61', 'tool_name': 'add_observations', 'status...
2025-10-19T18:56:32.762108+02:00 server[299545]:
2025-10-19T18:56:32.762194+02:00 server[299545]:   File "/app/src/open_llm_vtuber/agent/transformers.py", line 77, in wrapper
2025-10-19T18:56:32.762275+02:00 server[299545]:     async for item in stream:
2025-10-19T18:56:32.762350+02:00 server[299545]:               │       └ <async_generator object BasicMemoryAgent._chat_function_factory.<locals>.chat_with_memory at 0x7bb2bc7dbbc0>
2025-10-19T18:56:32.762430+02:00 server[299545]:               └ {'type': 'tool_call_status', 'tool_id': 'call_1c3c8cc8-a91e-41ab-b6bb-4cdc13cc9d61', 'tool_name': 'add_observations', 'status...
2025-10-19T18:56:32.762491+02:00 server[299545]:
2025-10-19T18:56:32.762556+02:00 server[299545]:   File "/app/src/open_llm_vtuber/agent/transformers.py", line 45, in wrapper
2025-10-19T18:56:32.762626+02:00 server[299545]:     async for item in divider.process_stream(stream_from_func):
2025-10-19T18:56:32.762702+02:00 server[299545]:               │       │       │              └ <async_generator object BasicMemoryAgent._chat_function_factory.<locals>.chat_with_memory at 0x7bb2bc7db5c0>
2025-10-19T18:56:32.762764+02:00 server[299545]:               │       │       └ <function SentenceDivider.process_stream at 0x7bb2df7709d0>
2025-10-19T18:56:32.762832+02:00 server[299545]:               │       └ <src.open_llm_vtuber.utils.sentence_divider.SentenceDivider object at 0x7bb2b92bfd90>
2025-10-19T18:56:32.762902+02:00 server[299545]:               └ {'type': 'tool_call_status', 'tool_id': 'call_1c3c8cc8-a91e-41ab-b6bb-4cdc13cc9d61', 'tool_name': 'add_observations', 'status...
2025-10-19T18:56:32.762954+02:00 server[299545]:
2025-10-19T18:56:32.763015+02:00 server[299545]:   File "/app/src/open_llm_vtuber/utils/sentence_divider.py", line 565, in process_stream
2025-10-19T18:56:32.763082+02:00 server[299545]:     async for item in segment_stream:
2025-10-19T18:56:32.763166+02:00 server[299545]:               │       └ <async_generator object BasicMemoryAgent._chat_function_factory.<locals>.chat_with_memory at 0x7bb2bc7db5c0>
2025-10-19T18:56:32.763247+02:00 server[299545]:               └ {'type': 'tool_call_status', 'tool_id': 'call_1c3c8cc8-a91e-41ab-b6bb-4cdc13cc9d61', 'tool_name': 'add_observations', 'status...
2025-10-19T18:56:32.763314+02:00 server[299545]:
2025-10-19T18:56:32.763376+02:00 server[299545]:   File "/app/src/open_llm_vtuber/agent/agents/basic_memory_agent.py", line 639, in chat_with_memory
2025-10-19T18:56:32.763438+02:00 server[299545]:     async for output in self._openai_tool_interaction_loop(
2025-10-19T18:56:32.763510+02:00 server[299545]:               │         │    └ <function BasicMemoryAgent._openai_tool_interaction_loop at 0x7bb2df7716c0>
2025-10-19T18:56:32.763581+02:00 server[299545]:               │         └ <src.open_llm_vtuber.agent.agents.basic_memory_agent.BasicMemoryAgent object at 0x7bb2df7cce20>
2025-10-19T18:56:32.763641+02:00 server[299545]:               └ {'type': 'tool_call_status', 'tool_id': 'call_1c3c8cc8-a91e-41ab-b6bb-4cdc13cc9d61', 'tool_name': 'add_observations', 'status...
2025-10-19T18:56:32.763727+02:00 server[299545]:
2025-10-19T18:56:32.763778+02:00 server[299545]:   File "/app/src/open_llm_vtuber/agent/agents/basic_memory_agent.py", line 561, in _openai_tool_interaction_loop
2025-10-19T18:56:32.763847+02:00 server[299545]:     update = await anext(tool_executor_iterator)
2025-10-19T18:56:32.763911+02:00 server[299545]:                          └ <async_generator object ToolExecutor.execute_tools at 0x7bb2bc7ea5c0>
2025-10-19T18:56:32.763982+02:00 server[299545]:
2025-10-19T18:56:32.764044+02:00 server[299545]:   File "/app/src/open_llm_vtuber/mcpp/tool_executor.py", line 223, in execute_tools
2025-10-19T18:56:32.764109+02:00 server[299545]:     ) = await self.run_single_tool(tool_name, tool_id, tool_input)
2025-10-19T18:56:32.764181+02:00 server[299545]:               │    │               │          │        └ {'observations': [{'entity': 'Normen', 'observation': 'Uses AMD Ryzen AI Max+ 395 as primary hardware for a 64\u202fGB LLM, a...
2025-10-19T18:56:32.764257+02:00 server[299545]:               │    │               │          └ 'call_1c3c8cc8-a91e-41ab-b6bb-4cdc13cc9d61'
2025-10-19T18:56:32.764315+02:00 server[299545]:               │    │               └ 'add_observations'
2025-10-19T18:56:32.764381+02:00 server[299545]:               │    └ <function ToolExecutor.run_single_tool at 0x7bb2bf5f48b0>
2025-10-19T18:56:32.764443+02:00 server[299545]:               └ <src.open_llm_vtuber.mcpp.tool_executor.ToolExecutor object at 0x7bb2bc776080>
2025-10-19T18:56:32.764505+02:00 server[299545]:
2025-10-19T18:56:32.764567+02:00 server[299545]: > File "/app/src/open_llm_vtuber/mcpp/tool_executor.py", line 334, in run_single_tool
2025-10-19T18:56:32.764623+02:00 server[299545]:     result_dict = await self._mcp_client.call_tool(
2025-10-19T18:56:32.764698+02:00 server[299545]:                         │    │           └ <function MCPClient.call_tool at 0x7bb2bf5f4790>
2025-10-19T18:56:32.764763+02:00 server[299545]:                         │    └ <src.open_llm_vtuber.mcpp.mcp_client.MCPClient object at 0x7bb2bdf93c10>
2025-10-19T18:56:32.764823+02:00 server[299545]:                         └ <src.open_llm_vtuber.mcpp.tool_executor.ToolExecutor object at 0x7bb2bc776080>
2025-10-19T18:56:32.764894+02:00 server[299545]:
2025-10-19T18:56:32.764956+02:00 server[299545]:   File "/app/src/open_llm_vtuber/mcpp/mcp_client.py", line 111, in call_tool
2025-10-19T18:56:32.765074+02:00 server[299545]:     response = await session.call_tool(tool_name, tool_args)
2025-10-19T18:56:32.765164+02:00 server[299545]:                      │       │         │          └ {'observations': [{'entity': 'Normen', 'observation': 'Uses AMD Ryzen AI Max+ 395 as primary hardware for a 64\u202fGB LLM, a...
2025-10-19T18:56:32.765236+02:00 server[299545]:                      │       │         └ 'add_observations'
2025-10-19T18:56:32.765308+02:00 server[299545]:                      │       └ <function ClientSession.call_tool at 0x7bb2c3732830>
2025-10-19T18:56:32.765372+02:00 server[299545]:                      └ <mcp.client.session.ClientSession object at 0x7bb2bf307940>
2025-10-19T18:56:32.765430+02:00 server[299545]:
2025-10-19T18:56:32.765491+02:00 server[299545]:   File "/app/.venv/lib/python3.10/site-packages/mcp/client/session.py", line 256, in call_tool
2025-10-19T18:56:32.765552+02:00 server[299545]:     return await self.send_request(
2025-10-19T18:56:32.765609+02:00 server[299545]:                  │    └ <function BaseSession.send_request at 0x7bb2c3730ee0>
2025-10-19T18:56:32.765675+02:00 server[299545]:                  └ <mcp.client.session.ClientSession object at 0x7bb2bf307940>
2025-10-19T18:56:32.765741+02:00 server[299545]:   File "/app/.venv/lib/python3.10/site-packages/mcp/shared/session.py", line 266, in send_request
2025-10-19T18:56:32.765802+02:00 server[299545]:     raise McpError(response_or_error.error)
2025-10-19T18:56:32.765870+02:00 server[299545]:           │        │                 └ ErrorData(code=-32603, message='Entity with name undefined not found', data=None)
2025-10-19T18:56:32.765939+02:00 server[299545]:           │        └ JSONRPCError(jsonrpc='2.0', id=29, error=ErrorData(code=-32603, message='Entity with name undefined not found', data=None))
2025-10-19T18:56:32.766008+02:00 server[299545]:           └ <class 'mcp.shared.exceptions.McpError'>
2025-10-19T18:56:32.766077+02:00 server[299545]:
2025-10-19T18:56:32.766147+02:00 server[299545]: mcp.shared.exceptions.McpError: Entity with name undefined not found
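For what it's worth, the payload visible in the traceback above uses `entity`/`observation` keys, while the reference memory server's README documents `entityName` and a `contents` array per item. A server that reads a missing `entityName` field would naturally report "Entity with name undefined not found". A minimal sketch of that mismatch (the expected key names are an assumption based on the reference implementation, not verified against this exact server build):

```python
# Shape observed in the traceback (keys as logged by tool_executor;
# the observation text is truncated in the log, so a placeholder is used here):
sent = {"observations": [{"entity": "Normen", "observation": "..."}]}

# Keys the reference @modelcontextprotocol/server-memory documents
# for add_observations items (assumed; other builds may differ):
expected_keys = {"entityName", "contents"}

def missing_keys(payload, required):
    """Return, per observation item, the required keys it lacks."""
    return [required - item.keys() for item in payload["observations"]]

print(missing_keys(sent, expected_keys))
# Both required keys are absent, so the server would look up an
# undefined entityName, consistent with the logged error message.
```

If that is what is happening, the fix would be on the client side (emit `entityName`/`contents`) or in the server's input validation (reject unknown keys with a clearer error than an internal -32603).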

normen avatar Oct 19 '25 17:10 normen