[Bug]: xai/grok-beta model getting litellm.BadRequestError
Is there an existing issue for the same bug?
- [X] I have checked the existing issues.
Describe the bug and reproduction steps
litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=xai/grok-beta Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..) Learn more: https://docs.litellm.ai/docs/providers
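For reference, a minimal reproduction sketch outside of OpenHands (this is an assumption about how the call is made, not the exact code path: it assumes litellm is installed and XAI_API_KEY is set in the environment; on litellm versions that do not yet recognize the xai provider prefix, the call fails with the error above):

```python
# Minimal reproduction sketch (assumptions: litellm installed, XAI_API_KEY set).
# On litellm versions that do not know the "xai/" provider prefix, this raises
# litellm.BadRequestError: "LLM Provider NOT provided ...".
import litellm

response = litellm.completion(
    model="xai/grok-beta",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```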
OpenHands Installation
Docker command in README
OpenHands Version
No response
Operating System
None
Logs, Errors, Screenshots, and Additional Context
No response
Are you planning to enable using Ollama too?
No immediate plans.
Hi @Shpigford, curious about how the AI feature will work. Do you have any estimate yet for when the first release will be out?
Will this work only with OpenAI, or with other LLMs as well?
I just found this interesting Ruby gem:
https://github.com/icebaker/ruby-nano-bots
It's a library that provides a single interface to multiple chatbots, among them ChatGPT and Ollama. It also seems to make the whole configuration of assistants very easy. I think it's a good idea not to concentrate only on ChatGPT (a quick sketch of the multi-provider idea follows the list below):
- No hard dependency on one provider (good for the hosted version if OpenAI raises prices)
- It's one thing to share the data with you, but not everyone wants to hand it to OpenAI on top of that.
- With a bit more effort, hardcore self-hosters could also run it fully independently.
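A rough Python sketch of the same multi-provider idea using litellm rather than the Ruby gem itself; the model names (gpt-4o-mini, ollama/llama3) and the prompt are placeholders, and this is only an illustration of swapping backends, not a proposed implementation:

```python
# Sketch: provider-agnostic chat calls through one interface (here litellm,
# used purely for illustration of the "don't lock in to one provider" point).
import litellm

def ask(model: str, prompt: str) -> str:
    response = litellm.completion(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Hosted provider (assumes OPENAI_API_KEY is set in the environment).
print(ask("gpt-4o-mini", "Summarize my spending this month."))

# Self-hosted alternative (assumes a local Ollama server with the model pulled).
print(ask("ollama/llama3", "Summarize my spending this month."))
```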