Michelle Avery
I'm running into a similar issue with streamGenerateContent, and I suspect the cause is the same. In [this](https://github.com/BerriAI/litellm/blob/b82af5b826553b5d35864c924af3b368a235fd6d/litellm/llms/vertex_ai/vertex_llm_base.py#L173) method:

```
def _check_custom_proxy(
    self,
    api_base: Optional[str],
    custom_llm_provider: str,
    gemini_api_key: Optional[str],
    endpoint: ...
```
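For reference, here is a minimal sketch of the kind of call I believe exercises that path. The `api_base` URL is a made-up placeholder for a custom proxy, and `stream=True` is what routes the request through streamGenerateContent:

```
from litellm import completion
import os

# Hypothetical repro: a custom api_base plus stream=True should pass through
# _check_custom_proxy before the streamGenerateContent URL is built.
response = completion(
    model="gemini/gemini-1.5-flash-latest",
    messages=[{"role": "user", "content": "What model are you?"}],
    api_key=os.getenv("GEMINI_API_KEY"),
    api_base="https://my-proxy.example.com",  # placeholder custom proxy
    stream=True,
)
for chunk in response:
    print(chunk)
```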
[This](https://github.com/BerriAI/litellm/issues/8772) seems to be the same issue too.
Here you go:

```
from litellm import completion
import os

GEMINI_API_KEY = os.getenv("GEMINI_API_KEY")  # your api key

# This works
response = completion(
    model="gemini/gemini-1.5-flash-latest",
    messages=[{"role": "user", "content": "What model are you?"}],
    api_key=GEMINI_API_KEY,
)
...
```