o1-mini use. Is it possible? (The o1 models currently lack vision and the ability to set token limits, correct?)
Is your feature request related to a problem? Please describe.
Using o1-mini with OI is limited because the o1 models lack vision and the ability to set token limits, which means we can't really use the latest and best LLM.
Describe the solution you'd like
Is it possible to implement a hybrid approach in Open Interpreter to leverage the strengths of different OpenAI models? Specifically:
1. Use o1 (latest version) as the primary model for most tasks.
2. Address o1's limitations:
   a. Lack of vision capabilities
   b. Inability to set token limits
3. When image analysis is required, automatically switch to GPT-4o (not "GPT-4 zero"), which has vision capabilities.
4. For tasks requiring specific token limits, provide a fallback mechanism or integration with models that support this setting.
This approach would utilize o1's advanced features while compensating for its current limitations. If this functionality isn't currently available in Open Interpreter, is it being considered for future development?
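To make the idea concrete, here is a minimal sketch of capability-based routing: prefer the primary model and fall back to a vision/token-capable model only when the request needs something the primary lacks. The model names, capability flags, and `choose_model` function are all hypothetical illustrations, not Open Interpreter internals.

```python
# Hypothetical per-request model router (sketch, not Open Interpreter code).
# Capability flags below are assumptions about the o1 / GPT-4o models.

from dataclasses import dataclass


@dataclass(frozen=True)
class ModelProfile:
    name: str
    supports_vision: bool
    supports_max_tokens: bool


PRIMARY = ModelProfile("o1", supports_vision=False, supports_max_tokens=False)
FALLBACK = ModelProfile("gpt-4o", supports_vision=True, supports_max_tokens=True)


def choose_model(needs_vision: bool, needs_token_limit: bool) -> str:
    """Return the model name to use: prefer the primary model, but
    switch to the fallback when the request needs a capability
    (vision or a token limit) that the primary model lacks."""
    if needs_vision and not PRIMARY.supports_vision:
        return FALLBACK.name
    if needs_token_limit and not PRIMARY.supports_max_tokens:
        return FALLBACK.name
    return PRIMARY.name
```

For example, a text-only request would route to `o1`, while a request that includes an image, or one that must cap output tokens, would route to `gpt-4o`.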
Describe alternatives you've considered
No response
Additional context
No response
Just an outside observer here. Love this line of thinking!
-Mac
Open Interpreter is at the edge of what AI can do for humans. Keep up the good work.
Having multiple LLMs power an Open Interpreter session is a cool idea and requires a lot of thinking. We're always open to hearing proposals on how to best accomplish this!