Bug in LLMConfig and OpenAIEmbedderConfig instantiation: default models always loaded
Hello 👋 I'm adding the Graphiti MCP server (Docker Compose setup) to my Pydantic AI based agent, using locally hosted Ollama models: mistral-small3.1 and snowflake-arctic-embed2. I passed the env vars to Compose like so:
```yaml
environment:
  - OPENAI_API_KEY=None
  - OPENAI_BASE_URL=http://host.docker.internal:11434/v1
  - MODEL_NAME=mistral-small3.1
  - EMBEDDER_MODEL_NAME=snowflake-arctic-embed2
  # ... (extra variables)
```
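(As a quick sanity check before wiring anything into Graphiti — this snippet is my own, not part of the MCP server — you can ask Ollama's OpenAI-compatible endpoint what it actually serves. It assumes the `openai` package and Ollama listening on localhost:11434.)

```python
# List the models the Ollama OpenAI-compatible endpoint exposes.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="None")
for m in client.models.list():
    print(m.id)  # expect mistral-small3.1 and snowflake-arctic-embed2
```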
However, gpt-4.1-nano and text-embedding-3-small were chosen instead, which caused this error:

```
Error code: 404 - {'error': {'message': 'model "gpt-4.1-nano" not found, try pulling it first', 'type': 'api_error', 'param': None, 'code': None}}
```
So I did some digging and found that:
- In `graphiti_mcp_server.py` at line 304,

  ```python
  llm_client_config = LLMConfig(api_key=self.api_key, model=self.model)
  ```

  should be

  ```python
  llm_client_config = LLMConfig(api_key=self.api_key, model=self.model, small_model=self.model)
  ```

  because if `small_model` is not set, it defaults to `gpt-4.1-nano` (see the sketch after this list).
- On line 411,

  ```python
  embedder_config = OpenAIEmbedderConfig(api_key=self.api_key, model=self.model)
  ```

  should be

  ```python
  embedder_config = OpenAIEmbedderConfig(api_key=self.api_key, embedding_model=self.model)
  ```

  because `OpenAIEmbedderConfig` has no `model` field, only `embedding_model`.
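To make the defaulting concrete, here is a minimal sketch with simplified stand-ins for the two config classes. The field sets are my assumptions, not Graphiti's actual definitions, and the embedder stand-in assumes Pydantic's default behavior of silently ignoring unknown keyword arguments:

```python
from pydantic import BaseModel

# Simplified stand-ins, only to illustrate the defaulting described above.
class LLMConfig(BaseModel):
    api_key: str | None = None
    model: str | None = None
    small_model: str = 'gpt-4.1-nano'  # default survives unless overridden

class OpenAIEmbedderConfig(BaseModel):
    api_key: str | None = None
    embedding_model: str = 'text-embedding-3-small'

# 1) Passing only `model` leaves `small_model` at its default:
llm = LLMConfig(api_key=None, model='mistral-small3.1')
print(llm.small_model)  # -> gpt-4.1-nano, which Ollama can't serve

# 2) `model=` matches no field on the embedder config; Pydantic ignores
#    unknown kwargs by default, so the hardcoded default wins:
emb = OpenAIEmbedderConfig(api_key=None, model='snowflake-arctic-embed2')
print(emb.embedding_model)  # -> text-embedding-3-small

# The patched calls pin both values explicitly:
llm = LLMConfig(api_key=None, model='mistral-small3.1', small_model='mistral-small3.1')
emb = OpenAIEmbedderConfig(api_key=None, embedding_model='snowflake-arctic-embed2')
```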
After both patches, it works. Thanks :)
Thanks for pointing out this issue in the MCP server! Would you like to open a PR with these fixes? Otherwise, I'm happy to fix them myself.
@prasmussen15 Sure