CodeGPT
Vision support across OpenAI-style providers (e.g. Azure, custom OpenAI endpoint)
Describe the need of your request
Models with vision support are becoming more common, but as of now vision can only be used with OpenAI and Claude in CodeGPT.
It would be nice to enable vision support for other models as well.
Proposed solution
When configuring models for other providers, add a checkbox that enables vision support for the configured model.
Not all models support vision, so it makes sense to configure this directly per model.
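As a rough illustration of what the per-model checkbox could control, here is a minimal Kotlin sketch. The names (`CustomModelSettings`, `supportsVision`, `buildUserMessage`) are hypothetical and not part of the actual CodeGPT or llm-client API; the message shape follows the common OpenAI-style vision format.

```kotlin
// Hypothetical sketch only: names are illustrative, not the real CodeGPT settings classes.
data class CustomModelSettings(
    val baseUrl: String,
    val modelName: String,
    val supportsVision: Boolean = false  // value of the proposed checkbox
)

fun buildUserMessage(
    settings: CustomModelSettings,
    text: String,
    imageBase64: String?
): Map<String, Any> {
    // Attach image content only when the user enabled vision for this model.
    return if (settings.supportsVision && imageBase64 != null) {
        mapOf(
            "role" to "user",
            "content" to listOf(
                mapOf("type" to "text", "text" to text),
                mapOf(
                    "type" to "image_url",
                    "image_url" to mapOf("url" to "data:image/png;base64,$imageBase64")
                )
            )
        )
    } else {
        // Plain text message for models without vision enabled.
        mapOf("role" to "user", "content" to text)
    }
}
```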
Alternative: enable vision support everywhere llm-client supports vision-capable API requests, and let the REST endpoint respond with a suitable error or warning message when the model does not support it.
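For the alternative approach, the plugin would mainly need to surface the provider's error in a readable way. A minimal sketch, assuming an OpenAI-style endpoint that rejects image content for non-vision models with an HTTP 400 (the function name and behavior are assumptions, not existing CodeGPT code):

```kotlin
// Hypothetical error-mapping sketch, not the actual CodeGPT or llm-client API.
fun describeCompletionError(statusCode: Int, errorBody: String): String {
    // OpenAI-style endpoints typically return 400 with a JSON error body
    // when a model that lacks vision support receives image content.
    return if (statusCode == 400 && errorBody.contains("image", ignoreCase = true)) {
        "The selected model does not appear to support vision input: $errorBody"
    } else {
        "Request failed ($statusCode): $errorBody"
    }
}
```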
Additional context
I am willing to work on this issue if it is clear how you want it to be configurable in the settings/UI.