Natan Yellin
Thanks, you're right! If you're open to contributing, we'd love to accept a PR improving the docs on this.
I think we're hitting this too as a downstream user of litellm - see https://github.com/robusta-dev/holmesgpt/issues/246 Any ETA on getting it fixed?
We're in favor! Would you be interested in opening a PR for this?
@arikalon1 was this fixed?
@kuzaxak any update on this? If it works, I'll close.
Thanks, it should be fixed now. If you're open to trying again, we've made huge leaps in accuracy in the past month. If something doesn't work well please share and...
I am also interested in this. @octadion is there any workaround for this today? E.g. does OCI expose OpenAI compatible endpoints that you can configure but are less convenient than...
Hi Mohan, I'm just another interested user like you! I'm not from the LiteLLM team, and unfortunately we don't have the resources to implement this ourselves right now.
@HEI204 are you running holmes on Windows in cmd? Right now we only support bash (which should work with WSL on Windows).
Hi, we're starting to clean this up - see https://github.com/robusta-dev/holmesgpt/pull/445 More improvements to come soon! Thanks for the feedback.