
Invalid JSON response when using memory.add with gemini-2.5-flash

Open federicoschiaffino opened this issue 4 months ago • 19 comments

🐛 Describe the bug

When I run the following command: memory.add(messages, user_id="rick")

I sometimes get this error: Invalid JSON response: Expecting value: line 1 column 1 (char 0)

When this happens, the memory is neither updated nor added. Is anyone facing the same issue? I'm using gemini-2.5-flash.

federicoschiaffino avatar Sep 03 '25 14:09 federicoschiaffino

@federicoschiaffino Can you share the version, config and the example message so that I can recreate this issue. Thanks!

parshvadaftari avatar Sep 04 '25 09:09 parshvadaftari

@parshvadaftari SDK version is 1.33.0

config = {
    "embedder": {
        "provider": "gemini",
        "config": {
            "model": "models/text-embedding-004",
        }
    },
        "llm": {
        "provider": "gemini",
        "config": {
            "model": "gemini-2.5-flash",
            "temperature": 0.0,
            "max_tokens": 2000,
        }
    },
    "vector_store": {
        "provider": "pgvector",
        "config": {
            "dbname": "memory_db",
            "host": "localhost",
            "port": "5432",
            "embedding_model_dims": 768,
        }    
    }
}

I get 2 types of errors: with messages = [{'role': 'user', 'content': 'my brother Will has another daughter called Paula!'}, {'role': 'assistant', 'content': "Thank you for letting me know! So, your brother Will has two daughters, Lia and Paula. I've updated my memory."}] I got: Invalid JSON response: Expecting value: line 1 column 1 (char 0)

And with messages = [{'role': 'user', 'content': 'my brother Will just had a daughter named Lia'}, {'role': 'assistant', 'content': "That's wonderful news! Congratulations to your brother Will and his family on the arrival of Lia!"}, {'role': 'user', 'content': "Will's wife is Emma from Sweden "}, {'role': 'assistant', 'content': "Thank you for sharing that! It's nice to know a bit more about Will's family. So, Lia's mother is Emma, who is from Sweden."}] I got this error: Invalid JSON response: Expecting ',' delimiter: line 11 column 28 (char 259)
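For what it's worth, the first error is exactly what Python's `json.loads` raises when the reply doesn't begin with a JSON token, e.g. when the model wraps its answer in a Markdown code fence. A minimal illustrative sketch (the `parse_llm_json` helper is hypothetical, not mem0's actual code):

```python
import json

def parse_llm_json(raw: str) -> dict:
    """Parse an LLM reply as JSON, tolerating a Markdown code fence."""
    text = raw.strip()
    if text.startswith("```"):
        # Keep only what sits between the opening and closing fences,
        # dropping an optional "json" language tag after the first fence.
        text = text.split("```", 2)[1]
        text = text.removeprefix("json")
    return json.loads(text)

fenced = '```json\n{"facts": ["Will has a daughter named Paula"]}\n```'
try:
    json.loads(fenced)
except json.JSONDecodeError as e:
    print(e)  # Expecting value: line 1 column 1 (char 0)

print(parse_llm_json(fenced)["facts"])  # ['Will has a daughter named Paula']
```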

Thank you!

federicoschiaffino avatar Sep 04 '25 10:09 federicoschiaffino

I’ve also faced the same issue — sometimes I get the Invalid JSON response: Expecting value: line 1 column 1 (char 0) error when calling memory.add(...). It’s not consistently reproducible, but I noticed it only happens with gemini-2.5-flash. At first, I thought it might be related to the default “thinking” feature, but later the exact same input worked fine without the error.

Here’s my config for reference (same as when the error occurred):

mem0_config = {
    "embedder": {
        "provider": "gemini",
        "config": {
            "model": "gemini-embedding-001",
        }
    },
    "vector_store": {
        "provider": "qdrant",
        "config": {
            "url": os.getenv("QDRANT_URL"),
            "api_key": os.getenv("QDRANT_API_KEY"),
            "embedding_model_dims": 768,
            "collection_name": collection_name,
        }
    },
    "llm": {
        "provider": "gemini",
        "config": {
            "model": "gemini-2.5-flash",
            "temperature": 0.2,
            "max_tokens": 2000,
        }
    }
}

RO-HIT17 avatar Sep 06 '25 22:09 RO-HIT17

@parshvadaftari The reason this happens is that the Google LLM provider used in mem0 doesn't use structured output; it just asks the LLM to "please return in JSON format", which will not work 100% of the time.

Adding support for structured output should fix this issue and make it much more stable. I'll try to create a PR for this.
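To illustrate the difference: with a prompt-only "please return JSON" there is no hard guarantee, so the caller has to validate and retry, whereas structured output constrains decoding on the API side (in `google-generativeai` this is roughly `generation_config={"response_mime_type": "application/json"}`). A sketch of the validate-and-retry pattern, with a stub standing in for the real provider call:

```python
import json

def generate_json(call_llm, prompt, max_retries=2):
    """Call an LLM and insist the reply parses as JSON, retrying on failure.

    `call_llm` is a stand-in for the real provider call; without structured
    output the model is free to return prose or fenced Markdown instead.
    """
    last_err = None
    for _ in range(max_retries + 1):
        raw = call_llm(prompt)
        try:
            return json.loads(raw)
        except json.JSONDecodeError as err:
            last_err = err
    raise ValueError(f"Invalid JSON response: {last_err}")

# Stub provider: returns prose on the first call, valid JSON on the second.
replies = iter(["Sure! Here are the facts you asked for.",
                '{"facts": ["Will has a daughter named Lia"]}'])
result = generate_json(lambda prompt: next(replies), "extract facts")
print(result["facts"])  # ['Will has a daughter named Lia']
```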

ranjithkumar8352 avatar Sep 08 '25 08:09 ranjithkumar8352

Also facing this issue when I attempt to add memories using Gemini. @ranjithkumar8352 could this be fixed in the next release? I see there is a conditional line config_params["response_mime_type"] = "application/json" in mem0/llms/gemini.py, perhaps this needs to be set by default.

shuklak13 avatar Sep 17 '25 21:09 shuklak13

Really sorry for the inconvenience you're facing. We will fix this in the next release; expect it in a couple of days. We'll also be releasing new features.

parshvadaftari avatar Sep 17 '25 21:09 parshvadaftari

Hey all, we recently released 1.0.0 beta, which addresses the issue you are facing. Feel free to try it out and share your feedback. 🚀

parshvadaftari avatar Sep 18 '25 21:09 parshvadaftari

Hi @parshvadaftari thank you for the new release! I have been testing it and it seems that the former error Expecting value isn't occurring anymore.

However, I now get a warning message: [mem0.memory.main] - WARNING - Empty response from LLM, no memories to extract. Each time this warning occurs, the LLM fails to memorise the new info and the memory_db isn't updated.

Does anyone else get this?
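The warning suggests the extraction step received a blank reply and bailed out gracefully instead of raising. A sketch of what such a guard presumably looks like (the function name and behaviour here are assumptions, not mem0's actual code):

```python
import json
import logging

logger = logging.getLogger("mem0.memory.main")

def extract_facts(raw):
    """Return extracted facts, treating a missing or blank LLM reply
    as 'nothing to remember' rather than an error. Illustrative only."""
    if raw is None or not raw.strip():
        logger.warning("Empty response from LLM, no memories to extract")
        return []
    return json.loads(raw).get("facts", [])

print(extract_facts(""))                                    # []
print(extract_facts('{"facts": ["Emma is from Sweden"]}'))  # ['Emma is from Sweden']
```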

federicoschiaffino avatar Sep 19 '25 09:09 federicoschiaffino

Is this with Gemini itself?

parshvadaftari avatar Sep 19 '25 09:09 parshvadaftari

yes this is with gemini-2.5-flash

federicoschiaffino avatar Sep 19 '25 09:09 federicoschiaffino

@federicoschiaffino Can you recreate the venv and start from a fresh one? I wasn't able to reproduce the issue you're facing. The config below works for me.

config = {
    "embedder": {
        "provider": "gemini",
        "config": {
            "model": "text-embedding-004",
        }
    },
    "vector_store": {
        "provider": "pgvector",
        "config": {
            "user": "postgres",
            "dbname": "memory_db",
            "host": "127.0.0.1",
            "port": "5432",
            "embedding_model_dims": 768,
        }    
    },
    "llm": {
        "provider": "gemini",
        "config": {
            "model": "gemini-2.5-flash",
            "temperature": 0.2,
            "max_tokens": 2000,
        }
    }
}

parshvadaftari avatar Sep 20 '25 07:09 parshvadaftari

I’m also facing the same issue with the new release. Initially I was hitting the Expecting value error, but after upgrading that’s gone. Now I intermittently see:

Empty response from LLM, no memories to extract

Whenever this happens, the new info isn’t memorized and my memory_db stays unchanged.

For context, here’s my config:

mem0_config = {
    "embedder": {
        "provider": "gemini",
        "config": {
            "model": "models/gemini-embedding-001",
        }
    },
    "vector_store": {
        "provider": "qdrant",
        "config": {
            "url": QDRANT_URL,
            "api_key": QDRANT_API_KEY,
        }
    },
    "llm": {
        "provider": "gemini",
        "config": {
            "model": "gemini-2.5-flash",
            "temperature": 0.2,
            "max_tokens": 2000,
        }
    }
}

It doesn’t happen all the time — sometimes the LLM returns valid memories, but other times the response is completely empty. So it looks like either Gemini occasionally returns a blank response, or Mem0 drops it if the format isn’t exactly what it expects.

RO-HIT17 avatar Sep 20 '25 10:09 RO-HIT17

Testing out the new version 1.0.0b0, I am also seeing inconsistent behavior where Gemini appears to return a different format from what mem0 expects, triggering an error. Here are the errors I get.

This is my config:

mem_config = {
  "llm": {
      "provider": "gemini",
      "config": {
          "model": "gemini-2.0-flash",
      }
  },
  "embedder": {
      "provider": "gemini",
      "config": {
          "model": "models/text-embedding-004",
      }
  },
  "vector_store": {
      "provider": "qdrant",
      "config": {
          "collection_name": "test",
          "embedding_model_dims":768,
          "path": "./qdrant_storage2"
      }
  },
}

shuklak13 avatar Sep 22 '25 06:09 shuklak13

@parshvadaftari SDK version is 1.33.0

I haven't changed my config since. Thank you for your time @parshvadaftari

federicoschiaffino avatar Sep 22 '25 08:09 federicoschiaffino

Hey @shuklak13 Sorry for the trouble; we will fix it in the next minor release.

parshvadaftari avatar Sep 25 '25 19:09 parshvadaftari

Hey @federicoschiaffino Sorry for the trouble, but we don't seem to have a 1.33.0 release of the mem0 SDK. Are you referring to a different SDK?

parshvadaftari avatar Sep 25 '25 20:09 parshvadaftari

Thanks @parshvadaftari. I've tried the latest version of mem0 (0.1.118). I'm seeing a different Gemini response error now - AttributeError: 'NoneType' object has no attribute 'parts' (stack trace). This happens at line 42 of mem0/llms/gemini.py. It seems to be persistent for me - I've had this same error show up multiple times on repeat attempts.
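That traceback is consistent with `response.candidates[0].content` being `None`, which the Gemini API can return when a candidate is blocked or truncated (e.g. a SAFETY or MAX_TOKENS finish reason) before any text is produced. A defensive-access sketch, using stand-in classes rather than the real `google-generativeai` response types:

```python
from dataclasses import dataclass, field
from typing import Optional

# Minimal stand-ins for the SDK's response objects (not the real types).
@dataclass
class Content:
    parts: list = field(default_factory=list)

@dataclass
class Candidate:
    content: Optional[Content] = None
    finish_reason: str = "STOP"

@dataclass
class Response:
    candidates: list = field(default_factory=list)

def response_text(resp: Response) -> str:
    """Join the text parts of the first candidate, returning "" instead of
    raising AttributeError when content is None or there are no parts."""
    if not resp.candidates:
        return ""
    cand = resp.candidates[0]
    if cand.content is None or not cand.content.parts:
        # e.g. the candidate was safety-blocked or ran out of tokens
        return ""
    return "".join(str(p) for p in cand.content.parts)

blocked = Response(candidates=[Candidate(content=None, finish_reason="SAFETY")])
ok = Response(candidates=[Candidate(content=Content(parts=["hello"]))])
print(repr(response_text(blocked)))  # ''
print(response_text(ok))             # hello
```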

shuklak13 avatar Oct 01 '25 01:10 shuklak13

Hey @shuklak13 It is the same config, right? Can you share the messages you're trying to add and, if possible, a code snippet? Also, which version of the Google ADK is installed/used?

parshvadaftari avatar Oct 01 '25 07:10 parshvadaftari

Hi guys, do you have any updates on this issue?

Sprechen avatar Oct 20 '25 14:10 Sprechen