
examples/gemini_example.py mentioned in the docs doesn't exist

Open CaliLuke opened this issue 9 months ago • 7 comments

I'm having issues running Graphiti using Gemini and the example on how to do that seems to be missing, so I'm stuck. Could you please update the docs with a working procedure we can look at? This is the snippet from the readme doc.

Make sure to replace the placeholder value with your actual Google API key. You can find more details in the example file at examples/gemini_example.py. Thanks!

CaliLuke avatar Apr 08 '25 17:04 CaliLuke

We had a misconfiguration in the dependencies. Please see: #336

The instructions here do work for Gemini: https://github.com/getzep/graphiti?tab=readme-ov-file#using-graphiti-with-google-gemini

danielchalef avatar Apr 08 '25 22:04 danielchalef

Yeah, but the docs are still wrong; please update them so other people (or LLMs) don't get confused. Also, your MCP instructions only work for OpenAI.

CaliLuke avatar Apr 08 '25 23:04 CaliLuke

Please upgrade to the latest version >v0.9.3 and run:

poetry add "graphiti-core[google-genai]"

# or

uv add "graphiti-core[google-genai]"

I've tested our example code for configuring Gemini and it works:

from graphiti_core import Graphiti
from graphiti_core.llm_client.gemini_client import GeminiClient, LLMConfig
from graphiti_core.embedder.gemini import GeminiEmbedder, GeminiEmbedderConfig

# Google API key configuration
api_key = "<your-google-api-key>"

# Initialize Graphiti with Gemini clients
graphiti = Graphiti(
    "bolt://localhost:7687",
    "neo4j",
    "password",
    llm_client=GeminiClient(
        config=LLMConfig(
            api_key=api_key,
            model="gemini-2.0-flash"
        )
    ),
    embedder=GeminiEmbedder(
        config=GeminiEmbedderConfig(
            api_key=api_key,
            embedding_model="embedding-001"
        )
    )
)

We don't currently support the cross-encoder reranker with LLM providers other than OpenAI. We'll investigate adding support for Gemini, and would welcome a contribution.

Please let me know if the above still doesn't work for you.

danielchalef avatar Apr 09 '25 16:04 danielchalef

I think it's pretty self-evident that whoever is interested in using Gemini with Graphiti is not going to have a separate paid OpenAI account just for reranking, or at least that's going to be a very limited subset of users.

CaliLuke avatar Apr 09 '25 16:04 CaliLuke

There are many different search strategies that do not rely on a cross-encoder. Please see: https://help.getzep.com/graphiti/graphiti/searching
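For intuition, the hybrid search strategies that skip the cross-encoder can fuse multiple ranked result lists with reciprocal rank fusion (RRF). The sketch below is a minimal, self-contained illustration of that idea, not Graphiti's actual implementation:

```python
from collections import defaultdict

def rrf(rankings, k=60):
    """Fuse several ranked result lists (best-first) into one ordering.

    Each item scores 1 / (k + rank) in every list it appears in, so items
    ranked well across lists rise to the top without any model-based reranker.
    k is the smoothing constant commonly used with RRF.
    """
    scores = defaultdict(float)
    for ranking in rankings:
        for rank, item in enumerate(ranking, start=1):
            scores[item] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# "b" is ranked highly by both lists, so it beats items favored by only one.
fused = rrf([["a", "b", "c"], ["b", "c", "a"]])
```

The same fusion idea applies whether the input lists come from full-text search, vector similarity, or graph traversal.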

danielchalef avatar Apr 09 '25 16:04 danielchalef

@CaliLuke Thanks for the feedback. As Daniel mentioned, the cross_encoder is only used when performing advanced searches and selecting the cross_encoder as the reranker method.

To make this work out of the box for OpenAI (by far the most popular use case, especially for people newer to GenAI), I hacked together a cross-encoder using gpt-4o-mini with logprobs, logit-bias, and max tokens set to 1. A similar thing can be done with Gemini or Anthropic, but it isn't a top priority for us currently, as we recommend using an actual reranker for production use cases.

We use the open source bge-m3 reranker for our implementation, but the BGERerankerClient can be used as a template for any open source TEI-compatible reranker simply by changing the URI.

Cohere and Voyage AI are also popular cross-encoder reranker providers. We currently don't have pre-made clients for them as we haven't gotten requests from the community for more reranker support. We will likely eventually implement these though.

If there is a particular cross-encoder you want to use, we would happily work with you if you want to contribute. You can use the existing cross-encoder clients as a template.

If, on the other hand, you don't want to use the cross-encoder at all for reranking, you can provide an OpenAI key from a free account, and it won't be used as long as you don't use the reranker. Another workaround is to create a dummy class that inherits from CrossEncoderClient and implements a default behavior for rank. This will work as long as you don't use the cross-encoder option in search.

Hope this helps with understanding how and why Graphiti works the way it does.

prasmussen15 avatar Apr 09 '25 17:04 prasmussen15

I am still getting openai.OpenAIError: The api_key client option must be set with graphiti_core 0.10.5 in /usr/local/lib/python3.11/dist-packages. The workaround that worked for me was setting os.environ['OPENAI_API_KEY'] = "<GEMINI_KEY>".
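For reference, that workaround just needs to run before Graphiti constructs its default OpenAI client; the key value itself is a placeholder and is never actually used for API calls:

```python
import os

# Set a placeholder key before importing or constructing Graphiti, so the
# default OpenAI client doesn't raise at initialization time.
os.environ["OPENAI_API_KEY"] = "<GEMINI_KEY>"
```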

faisal00813 avatar Apr 30 '25 06:04 faisal00813

@CaliLuke Is this still an issue? Please confirm within 14 days or this issue will be closed.

claude[bot] avatar Oct 03 '25 00:10 claude[bot]