Nitan Alexandru Marcel
@dnakov :)
> Omg this pr couldn't be bigger hihi. I'm not done
Can someone tell me what's a token limit? Because I don't get any =))) 
Oh, this one -_-. I was close enough tho:

```
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 8192 tokens. However, you requested 8226 tokens...
```
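One way to avoid that 400 error is to trim older messages until the request fits the context window. A minimal sketch, assuming a crude 4-characters-per-token heuristic; a real implementation would count with an actual tokenizer (e.g. tiktoken) instead:

```python
# Rough sketch: keep the most recent messages that fit a token budget.
# The 4-chars-per-token estimate is a heuristic, not a real token count.
MAX_CONTEXT_TOKENS = 8192

def estimate_tokens(text):
    # Crude approximation; swap in a proper tokenizer for production use.
    return max(1, len(text) // 4)

def trim_history(messages, budget=MAX_CONTEXT_TOKENS):
    kept, used = [], 0
    # Walk from newest to oldest so recent context survives.
    for msg in reversed(messages):
        cost = estimate_tokens(msg["content"])
        if used + cost > budget:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))
```

This drops the oldest messages first, which is the usual choice since the latest turns carry the most relevant context.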
> @nitanmarcel before you get too far..
> Re-Implement OpenAI
> Re-implement Anthropic
> Re-Implement Llama
> Re-Implement Bedrock
> Re-Implement Groq
> Re-Implement Google
> Re-Implement NousResearch
>
> All of these can just be served...
Would be something like:

```
@process_response(processor=function_to_convert_response)
def unsupported_model_call(...):
```
> Would be something like:
>
> ```
> @process_response(processor=function_to_convert_response)
> def unsupported_model_call(...):
> ```

Tho it implies that it supports the same tool format. Can create a pre_processor argument...
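The decorator idea sketched above could look something like this. A minimal sketch only: `process_response`, `pre_processor`, `to_common_format`, and `unsupported_model_call` are illustrative names from the discussion, not an existing API.

```python
import functools

# Hypothetical decorator: runs an optional pre_processor on the call's
# arguments, then pipes the raw provider response through a processor
# that converts it into a shared format.
def process_response(processor, pre_processor=None):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            if pre_processor is not None:
                args, kwargs = pre_processor(*args, **kwargs)
            raw = func(*args, **kwargs)
            return processor(raw)
        return wrapper
    return decorator

def to_common_format(raw):
    # Illustrative converter: map a provider-specific dict to a shared shape.
    return {"content": raw.get("text", ""), "raw": raw}

@process_response(processor=to_common_format)
def unsupported_model_call(prompt):
    # Stand-in for the actual provider call.
    return {"text": f"echo: {prompt}"}
```

The `pre_processor` hook is where input (e.g. a differing tool format) could be translated before the call, matching the suggestion above.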
> Can someone tell me what's a token limit? Because I don't get any =)))

Anyway, I have this to figure out. The chunking...
> > > @nitanmarcel before you get too far..
> > > Re-Implement OpenAI
> > > Re-implement Anthropic
> > > Re-Implement Llama
> > > Re-Implement Bedrock
> > > Re-Implement Groq
> > > Re-Implement Google
> > > Re-Implement NousResearch
> >
> > All of these can...
> Yeah, I've used instructor. But with litellm, you don't need to parse anything raw. I have no interest in maintaining transformations for so many models when it already exists,...