rpontual

7 comments by rpontual

Sidekiq-scheduler published a simple workaround; it may be useful until Sidekiq 7 is in use. https://github.com/sidekiq-scheduler/sidekiq-scheduler/pull/385

I have a similar experience: it works with GPT-4. I have tried using text-generation-webui with the openai extension and the model TheBloke_Mistral-7B-Instruct-v0.2-AWQ; the interface works and I am able to make...

I managed to get open-interpreter to talk to text-generation-webui as follows: I have text-generation-webui running on a separate machine with the openai extension activated and a model loaded. I installed open-interpreter...
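For anyone wanting to reproduce this, a minimal sketch of that setup (the host, port, and API key are placeholders, and the `interpreter.llm` attributes assume a recent 0.2.x-era open-interpreter release):

```python
# Sketch: point open-interpreter at a remote text-generation-webui instance
# that has the openai extension enabled. Host/port/key are placeholders.
from interpreter import interpreter

interpreter.llm.api_base = "http://192.168.1.50:5000/v1"  # TGW openai endpoint
interpreter.llm.api_key = "dummy"  # the extension does not validate the key
# The "openai/" prefix tells the underlying router to treat this as an
# OpenAI-compatible endpoint at api_base.
interpreter.llm.model = "openai/TheBloke_Mistral-7B-Instruct-v0.2-AWQ"

interpreter.chat()
```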

> I just get the same error. Are you running the most up-to-date Oobabooga? Can you send what is being sent in terms of JSON to ooga? > ...
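Regarding the JSON question: text-generation-webui's openai extension accepts OpenAI-style chat-completion bodies, so a request to it might look like the following sketch (host, port, and model name are placeholders):

```python
import requests

# Hypothetical address of a text-generation-webui instance with the
# openai extension enabled; adjust to your setup.
API_BASE = "http://192.168.1.50:5000/v1"

# The extension exposes an OpenAI-compatible /chat/completions route,
# so the JSON body follows the OpenAI chat format.
payload = {
    "model": "TheBloke_Mistral-7B-Instruct-v0.2-AWQ",
    "messages": [{"role": "user", "content": "Hello"}],
    "stream": False,
}

resp = requests.post(f"{API_BASE}/chat/completions", json=payload, timeout=60)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```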

I am not getting this error. This is what I see on the OI side from the time I launch it (I am using the -v option for the first...

Here is the continuation:

```
self.optional_params: {}
kwargs[caching]: False; litellm.cache: None
self.optional_params: {'stream': True, 'extra_body': {}}
PROCESSED CHUNK PRE CHUNK CREATOR: ChatCompletionChunk(id='chatcmpl-1707148482206359808', choices=[Choice(delta=ChoiceDelta(content='', function_call=None, role='assistant', tool_calls=None), finish_reason=None, index=0, logprobs=None, message={'role': 'assistant', ...
```

To be clear: (1) I am not using LiteLLM directly (open-interpreter calls it internally, which is why it appears in the log above), and (2) I realize that the model I am using may require a different prompt syntax or different parameters.
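On point (2): Mistral-7B-Instruct models expect the [INST] instruction template, so a template mismatch is one plausible source of odd output. A minimal sketch of the expected formatting (the helper is hypothetical, not part of any of the tools above):

```python
def format_mistral_instruct(user_message: str) -> str:
    # Mistral-7B-Instruct (v0.1/v0.2) expects instructions wrapped in
    # [INST] ... [/INST] tags; the BOS token <s> is normally added by the
    # tokenizer rather than included in the text itself.
    return f"[INST] {user_message} [/INST]"

print(format_mistral_instruct("List the files in the current directory."))
```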