
ollama: 'llama2' not found, try pulling it first

R3verseIN opened this issue 1 year ago • 16 comments

============== STEP 3

Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.

    AGENT ERROR:
    OpenAIException - Traceback (most recent call last):
      File "/home/r3versein/.local/lib/python3.10/site-packages/litellm/llms/openai.py", line 376, in completion
        raise e
      File "/home/r3versein/.local/lib/python3.10/site-packages/litellm/llms/openai.py", line 312, in completion
        openai_client = OpenAI(
      File "/home/r3versein/.local/lib/python3.10/site-packages/openai/_client.py", line 98, in __init__
        raise OpenAIError(
    openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable

Traceback (most recent call last):
  File "/home/r3versein/.local/lib/python3.10/site-packages/litellm/llms/openai.py", line 376, in completion
    raise e
  File "/home/r3versein/.local/lib/python3.10/site-packages/litellm/llms/openai.py", line 312, in completion
    openai_client = OpenAI(
  File "/home/r3versein/.local/lib/python3.10/site-packages/openai/_client.py", line 98, in __init__
    raise OpenAIError(
openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/r3versein/.local/lib/python3.10/site-packages/litellm/main.py", line 989, in completion
    raise e
  File "/home/r3versein/.local/lib/python3.10/site-packages/litellm/main.py", line 962, in completion
    response = openai_chat_completions.completion(
  File "/home/r3versein/.local/lib/python3.10/site-packages/litellm/llms/openai.py", line 384, in completion
    raise OpenAIError(status_code=500, message=traceback.format_exc())
litellm.llms.openai.OpenAIError: Traceback (most recent call last):
  File "/home/r3versein/.local/lib/python3.10/site-packages/litellm/llms/openai.py", line 376, in completion
    raise e
  File "/home/r3versein/.local/lib/python3.10/site-packages/litellm/llms/openai.py", line 312, in completion
    openai_client = OpenAI(
  File "/home/r3versein/.local/lib/python3.10/site-packages/openai/_client.py", line 98, in __init__
    raise OpenAIError(
openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/r3versein/OpenDevin/opendevin/controller/__init__.py", line 85, in step
    action = self.agent.step(state)
  File "/home/r3versein/OpenDevin/agenthub/langchains_agent/langchains_agent.py", line 172, in step
    resp = self.llm.completion(messages=messages)
  File "/home/r3versein/.local/lib/python3.10/site-packages/litellm/utils.py", line 2796, in wrapper
    raise e
  File "/home/r3versein/.local/lib/python3.10/site-packages/litellm/utils.py", line 2693, in wrapper
    result = original_function(*args, **kwargs)
  File "/home/r3versein/.local/lib/python3.10/site-packages/litellm/main.py", line 2093, in completion
    raise exception_type(
  File "/home/r3versein/.local/lib/python3.10/site-packages/litellm/utils.py", line 8283, in exception_type
    raise e
  File "/home/r3versein/.local/lib/python3.10/site-packages/litellm/utils.py", line 7069, in exception_type
    raise AuthenticationError(
litellm.exceptions.AuthenticationError: OpenAIException - Traceback (most recent call last):
  File "/home/r3versein/.local/lib/python3.10/site-packages/litellm/llms/openai.py", line 376, in completion
    raise e
  File "/home/r3versein/.local/lib/python3.10/site-packages/litellm/llms/openai.py", line 312, in completion
    openai_client = OpenAI(
  File "/home/r3versein/.local/lib/python3.10/site-packages/openai/_client.py", line 98, in __init__
    raise OpenAIError(
openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable

    OBSERVATION:
    OpenAIException - Traceback (most recent call last):
      File "/home/r3versein/.local/lib/python3.10/site-packages/litellm/llms/openai.py", line 376, in completion
        raise e
      File "/home/r3versein/.local/lib/python3.10/site-packages/litellm/llms/openai.py", line 312, in completion
        openai_client = OpenAI(
      File "/home/r3versein/.local/lib/python3.10/site-packages/openai/_client.py", line 98, in __init__
        raise OpenAIError(
    openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable

============== STEP 4

(STEP 4 repeats the identical OpenAIException / AuthenticationError output shown in STEP 3.)

The executed commands are:

    export LLM_EMBEDDING_MODEL="llama2"
    export LLM_BASE_URL="http://localhost:11434"
    export LLM_API_KEY=""
    export WORKSPACE_DIR="/home/r3versein/work/"
    uvicorn opendevin.server.listen:app --port 3000

R3verseIN avatar Mar 28 '24 16:03 R3verseIN

It's defaulting to using OpenAI for the core model. Can you set LLM_MODEL="ollama/llama2" and see if that fixes it?
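
For reference, a minimal sketch of a full environment setup that routes the core model to Ollama (values taken from this thread; adjust the workspace path to your machine):

    export LLM_MODEL="ollama/llama2"              # core model; without this, OpenDevin defaults to OpenAI
    export LLM_EMBEDDING_MODEL="llama2"
    export LLM_BASE_URL="http://localhost:11434"  # Ollama's default port
    export LLM_API_KEY=""                         # not used by Ollama, but the variable must be set
    export WORKSPACE_DIR="/home/r3versein/work/"
    uvicorn opendevin.server.listen:app --port 3000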

rbren avatar Mar 28 '24 16:03 rbren

> It's defaulting to using OpenAI for the core model. Can you set LLM_MODEL="ollama/llama2" and see if that fixes it?

============== STEP 99

Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.

    AGENT ERROR:
    {"error":"model 'llama2' not found, try pulling it first"}

Traceback (most recent call last):
  File "/home/r3versein/.local/lib/python3.10/site-packages/litellm/main.py", line 1878, in completion
    generator = ollama.get_ollama_response(
  File "/home/r3versein/.local/lib/python3.10/site-packages/litellm/llms/ollama.py", line 198, in get_ollama_response
    raise OllamaError(status_code=response.status_code, message=response.text)
litellm.llms.ollama.OllamaError: {"error":"model 'llama2' not found, try pulling it first"}

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/r3versein/OpenDevin/opendevin/controller/__init__.py", line 85, in step
    action = self.agent.step(state)
  File "/home/r3versein/OpenDevin/agenthub/langchains_agent/langchains_agent.py", line 172, in step
    resp = self.llm.completion(messages=messages)
  File "/home/r3versein/.local/lib/python3.10/site-packages/litellm/utils.py", line 2796, in wrapper
    raise e
  File "/home/r3versein/.local/lib/python3.10/site-packages/litellm/utils.py", line 2693, in wrapper
    result = original_function(*args, **kwargs)
  File "/home/r3versein/.local/lib/python3.10/site-packages/litellm/main.py", line 2093, in completion
    raise exception_type(
  File "/home/r3versein/.local/lib/python3.10/site-packages/litellm/utils.py", line 8283, in exception_type
    raise e
  File "/home/r3versein/.local/lib/python3.10/site-packages/litellm/utils.py", line 8251, in exception_type
    raise APIConnectionError(
litellm.exceptions.APIConnectionError: {"error":"model 'llama2' not found, try pulling it first"}

    OBSERVATION:
    {"error":"model 'llama2' not found, try pulling it first"}

Exited before finishing

But when I run `ollama list` it gives:

    r3versein@DESKTOP-IL31CM9:~$ ollama list
    NAME            ID              SIZE    MODIFIED
    llama2:latest   78e26419b446    3.8 GB  3 hours ago
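
(A quick way to confirm what the server itself sees, assuming the default port; `/api/tags` and `/api/generate` are standard Ollama HTTP endpoints:)

    # List the models the running server can see (may differ from the CLI if another instance is up)
    curl http://localhost:11434/api/tags

    # Ask the server for a completion directly
    curl http://localhost:11434/api/generate -d '{"model": "llama2", "prompt": "hello"}'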

R3verseIN avatar Mar 28 '24 17:03 R3verseIN

Oh I bet it's related to this: https://github.com/OpenDevin/OpenDevin/issues/285

rbren avatar Mar 28 '24 19:03 rbren

Actually according to the docs, you should be fine: https://docs.litellm.ai/docs/providers/ollama

So it seems like an ollama issue...

rbren avatar Mar 28 '24 19:03 rbren

It looks like litellm thinks the model name is just llama2. Did you set LLM_MODEL=llama2, or LLM_MODEL=ollama/llama2? It should be the latter.

rbren avatar Mar 28 '24 19:03 rbren

> It looks like litellm thinks the model name is just llama2. Did you set LLM_MODEL=llama2, or LLM_MODEL=ollama/llama2? It should be the latter.

Yes, both done. Still the same issue.

R3verseIN avatar Mar 29 '24 01:03 R3verseIN

I got it running using

export LLM_MODEL=ollama/llama2
export LLM_API_KEY=
export LLM_BASE_URL=http://localhost:11434
PYTHONPATH=`pwd` python opendevin/main.py -d ./workspace/ -i 100 -t "Write a bash script that prints 'hello world'"

@R3verseIN, can you check your open ports to see if ollama is listening? If not, can you run `ollama serve`? It might be that you pulled the model but didn't serve it.
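
(One way to check, assuming Ollama's default port 11434:)

    # See whether anything is listening on Ollama's default port
    ss -ltnp | grep 11434

    # If nothing is listening, start the server, then verify the model responds
    ollama serve &
    ollama run llama2 "say hello"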

goudbor avatar Mar 29 '24 03:03 goudbor

> I got it running using
>
> export LLM_MODEL=ollama/llama2
> export LLM_API_KEY=
> export LLM_BASE_URL=http://localhost:11434
> PYTHONPATH=`pwd` python opendevin/main.py -d ./workspace/ -i 100 -t "Write a bash script that prints 'hello world'"
>
> @R3verseIN, can you check your open ports to see if ollama is listening? If not, can you run `ollama serve`? It might be that you pulled the model but didn't serve it.

Fixed the issue.

R3verseIN avatar Mar 29 '24 03:03 R3verseIN

I'm running llama2 in LMStudio and I'm running these:

$Env:LLM_API_KEY="lm-studio"
$Env:LLM_MODEL="ollama/llama2"
$Env:LLM_BASE_URL="http://localhost:1234/v1"
$Env:LLM_EMBEDDING_MODEL="local" # can be "llama2", "openai", "azureopenai", or "local"

$Env:WORKSPACE_DIR = "C:\Users\Bob\Desktop\OpenDevin\OpenDevin\opendevin\Project01"
python -m pip install -r requirements.txt
python -m uvicorn opendevin.server.listen:app --port 3000

And I get this error:

Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.


        AGENT ERROR:
        'response'
Traceback (most recent call last):
  File "C:\Users\Bob\miniconda3\Lib\site-packages\litellm\main.py", line 1878, in completion
    generator = ollama.get_ollama_response(
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Bob\miniconda3\Lib\site-packages\litellm\llms\ollama.py", line 228, in get_ollama_response
    model_response["choices"][0]["message"]["content"] = response_json["response"]
                                                         ~~~~~~~~~~~~~^^^^^^^^^^^^
KeyError: 'response'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\Bob\Desktop\OpenDevin\OpenDevin\opendevin\controller\__init__.py", line 85, in step
    action = self.agent.step(state)
             ^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Bob\Desktop\OpenDevin\OpenDevin\agenthub\langchains_agent\__init__.py", line 172, in step
    resp = self.llm.completion(messages=messages)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Bob\miniconda3\Lib\site-packages\litellm\utils.py", line 2796, in wrapper
    raise e
  File "C:\Users\Bob\miniconda3\Lib\site-packages\litellm\utils.py", line 2693, in wrapper
    result = original_function(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Bob\miniconda3\Lib\site-packages\litellm\main.py", line 2093, in completion
  File "C:\Users\Bob\miniconda3\Lib\site-packages\litellm\utils.py", line 8258, in exception_type
    raise APIConnectionError(
litellm.exceptions.APIConnectionError: 'response'

        OBSERVATION:
        'response'
Been hitting my head for the last 2 hours without any kind of progress.

I also tried to run this:

    python opendevin/main.py -d ./workspace/ -i 100 -t "Write a bash script that prints 'hello world'"

Traceback (most recent call last):
  File "C:\Users\Bob\Desktop\OpenDevin\OpenDevin\opendevin\main.py", line 7, in <module>
    import agenthub # noqa F401 (we import this to get the agents registered)
    ^^^^^^^^^^^^^^^
ModuleNotFoundError: No module named 'agenthub'

If anyone has ideas on what to do, I'm all ears. So far, everything works until I send the command in the UI, where I get this: Oops. Something went wrong: 'response'

And I get this in LMStudio:

 [ERROR] Unexpected endpoint or method. (POST /v1/api/generate). Returning 200 anyway
 [ERROR] Unexpected endpoint or method. (POST /v1/api/generate). Returning 200 anyway
...

10htts avatar Mar 29 '24 05:03 10htts

For

 python opendevin/main.py -d ./workspace/ -i 100 -t "Write a bash script that prints 'hello world'"

Try running

PYTHONPATH=`pwd`  python opendevin/main.py -d ./workspace/ -i 100 -t "Write a bash script that prints 'hello world'"

Otherwise, it seems like your ollama server isn't behaving as expected. LiteLLM expects to get results from the endpoint /v1/api/generate, but LMStudio doesn't recognize that endpoint.

I'm not familiar with LMStudio, but my guess is you have to run ollama without LMStudio for it to work.
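
(To make the mismatch concrete, a hedged sketch: with model=ollama/..., LiteLLM appends Ollama's /api/generate path to LLM_BASE_URL, while LMStudio only serves OpenAI-style routes. Both endpoints below are the standard ones for their respective servers.)

    # What LiteLLM calls for model=ollama/llama2 with LLM_BASE_URL=http://localhost:1234/v1
    curl http://localhost:1234/v1/api/generate -d '{"model": "llama2", "prompt": "hi"}'
    # -> LMStudio logs "Unexpected endpoint or method"

    # What LMStudio actually serves (OpenAI-compatible)
    curl http://localhost:1234/v1/chat/completions -H "Content-Type: application/json" \
      -d '{"model": "llama2", "messages": [{"role": "user", "content": "hi"}]}'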

rbren avatar Mar 29 '24 14:03 rbren

Think I found why here: https://github.com/BerriAI/litellm?tab=readme-ov-file#supported-providers-docs

Llama 2 does not support embeddings?

10htts avatar Mar 29 '24 19:03 10htts

Hmm...I think we're using llamaindex for the embeddings, not litellm

rbren avatar Mar 29 '24 19:03 rbren

I get this error now: [ERROR] Unexpected endpoint or method. (POST /v1/api/embeddings). Returning 200 anyway

Started from scratch with LMStudio (running nitsuai/llama-2-70b-Guanaco-QLoRA-GGUF/llama-2-70b-guanaco-qlora.Q3_K_S.gguf) and these parameters for LiteLLM:

    $Env:LLM_API_KEY="lm-studio"
    $Env:LLM_MODEL="nitsuai/llama-2-70b-Guanaco-QLoRA-GGUF/llama-2-70b-guanaco-qlora.Q3_K_S.gguf" # Doesn't seem to change anything. Tried ollama/llama2 too.
    $Env:LLM_BASE_URL="http://localhost:1234/v1"
    $Env:LLM_EMBEDDING_MODEL="llama2"

I'm not taking any more of your time since I do not have sufficient knowledge to have any constructive contribution to the project but if you wish to know more about my setup, do not hesitate.

Thanks for the help and great project!

10htts avatar Mar 29 '24 19:03 10htts

> So it seems like an ollama issue.

This is an ollama issue; it means the model has not started running on the ollama server.

ishaan-jaff avatar Mar 29 '24 21:03 ishaan-jaff

> I'm running llama2 in LMStudio and I'm running these:

@10htts, LMStudio is already OpenAI-compatible, so pass model=openai/llama2; litellm will then route the request to the OpenAI-style /chat/completions endpoint.
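
(In terms of the environment variables used earlier in this thread, that would look roughly like this, shown in bash; PowerShell users would set $Env: variables as above. Values are illustrative.)

    export LLM_MODEL="openai/llama2"               # openai/ prefix -> OpenAI-style /chat/completions
    export LLM_BASE_URL="http://localhost:1234/v1" # LMStudio's local server
    export LLM_API_KEY="lm-studio"                 # placeholder value taken from this thread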

ishaan-jaff avatar Mar 29 '24 21:03 ishaan-jaff

Closing in favor of https://github.com/OpenDevin/OpenDevin/issues/417

rbren avatar Apr 01 '24 15:04 rbren

Run `ollama run <model name>` in your environment's terminal. This should help; you can find the model names on the Ollama website.
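
(For example, with the llama2 model from this thread:)

    ollama pull llama2   # download the model if it isn't already present
    ollama run llama2    # load it and open an interactive prompt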

mertsengil avatar Jul 30 '24 08:07 mertsengil