Local model
Running the server with --model ollama/xxx does not seem to work. It's still using GPT; in Open Interpreter it works.
Same. Would love to see a list of recommended models
I pushed a fix for this yesterday. You can run the server with --local, select ollama, then select your ollama model. Please let me know if it works for you @kennydd0 or @beamercola!
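For anyone finding this later, here is a rough sketch of the two invocations being discussed. The entry point name `01` and the example model name are assumptions for illustration; only the --model and --local flags come from this thread, so adjust for however you launch the server.

```shell
# What was failing: passing an Ollama model directly via --model
# (model name after ollama/ is just a placeholder)
01 --model ollama/xxx

# What works after the fix: start in local mode, then choose "ollama"
# and your model from the interactive prompts
01 --local
```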
@tyfiero thanks for the fix! it worked for me :)
Yay! I'm going to close this for now, but anyone can open up a new issue if they have more --local issues.