
In a new installation, the Ollama integration doesn't work correctly

Open pimhakkert opened this issue 11 months ago • 4 comments

⚠️ Search for existing issues first ⚠️

  • [X] I have searched the existing issues, and there is no existing issue for my problem

Which Operating System are you using?

Windows Subsystem for Linux (WSL)

Which version of AutoGPT are you using?

Master (branch)

What LLM Provider do you use?

Other (detail in issue)

Which area covers your issue best?

Agents

What commit or version are you using?

06b403f2b05090546ac6edc96e96b2f74602fd9c

Describe your issue.

After doing a full install of AutoGPT (advanced setup), I'm trying to follow the documentation to set up the integration with Ollama. I do everything according to the documentation, but when I select the appropriate model from the block dropdown menu, I get an error saying that no credentials have been entered.

See the attached image for confirmation that I know which models to select from the dropdown.

Upload Activity Log Content

No response

Upload Error Log Content

No response

pimhakkert avatar Nov 30 '24 21:11 pimhakkert

Hi, can I work on this issue? Please assign it to me :) @ntindle

deepchanddc22 avatar Dec 06 '24 10:12 deepchanddc22

Hi @deepchanddc22,

Thanks for your interest, that would be great, thank you! Looking forward to your contribution.

Torantulino avatar Dec 10 '24 17:12 Torantulino

Hey @deepchanddc22, just checking in to see if you were able to get to this issue or ran into any problems?

itsababseh avatar Feb 12 '25 17:02 itsababseh

I need to update the docs for Ollama. I found a setup issue: we say to use 0.0.0.0 and then the port, but we can't do that because everything runs in Docker, so it has to use the external IP of the system running Ollama to connect. I made a little guide here.

But it should not be asking for credentials? If it is, let me know.

  1. Stop Ollama if it is currently running.
  2. Open Command Prompt (CMD) and run:
     set OLLAMA_HOST=0.0.0.0:11434
  3. Start the Ollama server with:
     ollama serve
     This will make Ollama accessible on the local network.
  4. Open a new CMD window and find your local IP using:
     ipconfig
     Look for your IPv4 address (e.g., 192.168.0.39).
  5. Set the Ollama host to your local IP with port 11434:
     192.168.0.39:11434
  6. Now, try running your graph or model again.
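
A quick way to confirm the server is actually reachable on that address before retrying the block (just a sketch; 192.168.0.39 is only the example IP from step 4, substitute your own IPv4 address):

# Run from WSL or any other machine on the network; Ollama's /api/tags
# endpoint lists the models the server has available, so a JSON response
# here means the host/port combination is reachable.
curl http://192.168.0.39:11434/api/tags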

Note: Not all models work with Ollama. Here are the models that do:

  • llama3.2
  • llama3
  • llama3.1:405b
  • dolphin-mistral:latest
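
For reference, each of these still has to be pulled onto the machine running Ollama before the server can answer requests for it; assuming a standard Ollama install, that looks like:

# Download one of the supported models so "ollama serve" can respond with it
ollama pull llama3.2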

Bentlybro avatar Mar 07 '25 08:03 Bentlybro

Bentlybro's fix works brilliantly; the only adjustment I required was to enter

export OLLAMA_HOST=0.0.0.0:11434

instead of set.
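
One extra note (my own assumption, not something from Bentlybro's guide): export only applies to the current shell session, so the setting is lost when the terminal is closed. A minimal way to make it persistent in a bash-based environment such as WSL:

# Persist the host binding for future shells, then start the server
echo 'export OLLAMA_HOST=0.0.0.0:11434' >> ~/.bashrc
source ~/.bashrc
ollama serve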

Bton123 avatar May 28 '25 21:05 Bton123

I'm happy with this solution, thanks!

pimhakkert avatar Jun 22 '25 11:06 pimhakkert