
[Bug]: Unable To Use LiteLLM with Dolphin Coder

Open TheMindExpansionNetwork opened this issue 1 year ago • 3 comments

Version

Command-line (Python) version

Operating System

Windows 10

What happened?

This is what is happening:

I installed everything for gpt-pilot to work and switched out the GPT key.

Now I start the server in WSL.

So I run this:

litellm --model ollama/dolphincoder

It prints the following:

Welcome to Ubuntu 22.04.2 LTS (GNU/Linux 5.15.146.1-microsoft-standard-WSL2 x86_64)

root@DESKTOP-PVDFNBF:~# litellm --model ollama/dolphincoder
INFO: Started server process [555]
INFO: Waiting for application startup.

#------------------------------------------------------------#

'This product would be better if...'

https://github.com/BerriAI/litellm/issues/new

#------------------------------------------------------------#

Thank you for using LiteLLM! - Krrish & Ishaan

Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new

Docs: https://docs.litellm.ai/docs/simple_proxy

LiteLLM: Test your local endpoint with: "litellm --test" [In a new terminal tab]

INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)

I check the IP address in cmd, run gpt-pilot, and get this error:

(base) PS Z:\GIT\gpt-pilot\pilot> python main.py

------------------ STARTING NEW PROJECT ----------------------
If you wish to continue with this project in future run: python main.py app_id=c198b76a-a773-4be8-af50-7f8089557df5

There was a problem with request to openai API: HTTPConnectionPool(host='172.31.128.1', port=8000): Max retries exceeded with url: /chat/completions (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x0000027A061B9210>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it'))
? Do you want to try make the same request again? If yes, just press ENTER. Otherwise, type "no".

Not sure what more to do. Does anyone have tips for using this with LiteLLM or Ollama?
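The WinError 10061 above means the TCP connection itself is being refused before any API call happens. One thing worth checking is whether the proxy port is reachable from Windows at all: with WSL2, a server bound to 0.0.0.0 inside WSL is typically reachable from Windows as localhost, and 172.31.128.1 may not be the right address. A minimal sketch (the hosts in the loop are assumptions based on the logs above):

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# gpt-pilot is pointed at 172.31.128.1:8000; with WSL2 the proxy is
# often reachable from Windows as localhost instead. Run this from
# Windows (not inside WSL) and see which host answers:
for host in ("127.0.0.1", "172.31.128.1"):
    print(host, port_open(host, 8000))
```

Whichever host prints True is the one gpt-pilot's endpoint setting should use.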

TheMindExpansionNetwork avatar Mar 16 '24 18:03 TheMindExpansionNetwork

Please check your LiteLLM setup. It seems the server is not set up correctly. Write a Python script to check whether your LiteLLM server is working.
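Such a check could look like the sketch below: it sends one minimal chat completion request to the proxy and reports whether it got a 200 back. The base URL and model name are assumptions taken from the logs earlier in this thread, not verified values:

```python
import json
import urllib.error
import urllib.request

def check_litellm(base_url: str, timeout: float = 5.0) -> bool:
    """POST a minimal chat request to a LiteLLM proxy; True on HTTP 200."""
    payload = json.dumps({
        "model": "ollama/dolphincoder",  # model name assumed from the logs above
        "messages": [{"role": "user", "content": "ping"}],
    }).encode()
    req = urllib.request.Request(
        base_url.rstrip("/") + "/chat/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

# Address assumed from the Uvicorn startup log in this thread:
print(check_litellm("http://localhost:8000"))
```

If this prints False from Windows but True from inside WSL, the problem is the host address gpt-pilot is configured with, not LiteLLM itself.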

SuperMalinge avatar Mar 19 '24 20:03 SuperMalinge

ollama now provides an OpenAI compatible API. So you don't need to use litellm any longer.

That said, using local models comes with a different problem, i.e. they don't produce clean JSON, and the extra text makes gpt-pilot go into infinite loops trying to correct JSON.
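Since Ollama speaks the OpenAI chat-completions format, any OpenAI-style client can be pointed straight at it. A minimal sketch with only the standard library, assuming a default local Ollama install (the endpoint URL and model name below are assumptions, not values from this thread's setup):

```python
import json
import urllib.request

# Ollama's OpenAI-compatible endpoint on a default local install (assumed).
OLLAMA_URL = "http://localhost:11434/v1/chat/completions"

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat completion request aimed at Ollama."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        OLLAMA_URL,
        data=body,
        headers={
            "Content-Type": "application/json",
            # Ollama does not check the key, but OpenAI-style clients send one.
            "Authorization": "Bearer ollama",
        },
    )

req = build_chat_request("dolphincoder", "Say hello")
# urllib.request.urlopen(req) would send it once Ollama is running.
```

Pointing gpt-pilot's OpenAI endpoint setting at the same /v1 base URL removes the LiteLLM hop entirely.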

phalexo avatar Mar 20 '24 17:03 phalexo

> ollama now provides an OpenAI compatible API. So you don't need to use litellm any longer.
>
> That said, using local models comes with a different problem, i.e. they don't produce clean JSON, and the extra text makes gpt-pilot go into infinite loops trying to correct JSON.

Facing the same issue. I've tried playing around with prompt formatting to get correct JSON, but it's very unlikely that one format will work for every possible open-source model.
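Rather than fighting the prompt, one workaround is to salvage the JSON from the noisy output before parsing. A sketch (not part of gpt-pilot; `extract_first_json` is a hypothetical helper) that scans the reply for the first parseable JSON object, skipping any surrounding prose or markdown fences:

```python
import json

def extract_first_json(text: str):
    """Return the first valid JSON object embedded in text, or None.

    Local models often wrap their JSON in prose or ```json fences;
    raw_decode lets us parse from any '{' and ignore the extra text.
    """
    decoder = json.JSONDecoder()
    for start in range(len(text)):
        if text[start] != "{":
            continue
        try:
            obj, _ = decoder.raw_decode(text, start)
            return obj
        except json.JSONDecodeError:
            continue  # this '{' did not start valid JSON; keep scanning
    return None

noisy = 'Sure! Here is the plan:\n```json\n{"steps": [1, 2]}\n```\nHope that helps.'
print(extract_first_json(noisy))  # -> {'steps': [1, 2]}
```

This is model-agnostic, so it sidesteps per-model prompt tweaking, though it cannot fix output where the JSON itself is malformed.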

kuldeepluvani avatar Apr 01 '24 05:04 kuldeepluvani