Adam Dougal
I'm going to close this. It looks like the openai version bump has brought significant changes, so a fresh PR/branch would probably be a better approach.
Heya Zsolt, that's great news, thanks!
**Update 22nd April:** After spiking possible technology choices, I believe the best way forward is to:
- Use Azure Computer Vision to generate embeddings of the image
- Use `GPT-4-vision`...
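Roughly the shape of the Computer Vision call I have in mind, as a sketch only: it assumes the multimodal embeddings `retrieval:vectorizeImage` REST endpoint, and the API version, model version and env var names here are placeholders rather than anything already in the repo.

```python
import os

import requests

# Placeholder env var names, for illustration only
endpoint = os.environ["AZURE_COMPUTER_VISION_ENDPOINT"].rstrip("/")
key = os.environ["AZURE_COMPUTER_VISION_KEY"]


def vectorize_image(image_url: str) -> list[float]:
    """Generate an embedding for an image via the multimodal embeddings API."""
    response = requests.post(
        f"{endpoint}/computervision/retrieval:vectorizeImage",
        params={"api-version": "2024-02-01", "model-version": "2023-04-15"},
        headers={"Ocp-Apim-Subscription-Key": key},
        json={"url": image_url},
    )
    response.raise_for_status()
    return response.json()["vector"]
```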
**Update 23rd April:**
- The computer vision and gpt-4-vision model deployment resources are now being provisioned
- This is applied if `USE_GPT4_VISION=true`
- Unfortunately, `gpt-4-vision` does not support function calling,...
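For context, a minimal sketch of how the flag could gate the model choice; the deployment names and function are illustrative, not the actual code paths.

```python
import os

# Illustrative only: real deployment names and orchestration will differ
USE_GPT4_VISION = os.environ.get("USE_GPT4_VISION", "false").lower() == "true"


def pick_chat_deployment() -> str:
    if USE_GPT4_VISION:
        # gpt-4-vision has no function calling, so the function-calling
        # orchestration path can't be used with this deployment
        return "gpt-4-vision"
    return "gpt-4"
```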
**Update 26th April:**
- I'm investigating what changes are going to be required to the app code and index to be able to store and query an additional image embedding...
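On the index side, something along these lines is what I'd expect. This is a sketch assuming `azure-search-documents` >= 11.4.0; the field and profile names are made up, and 1024 dimensions assumes the Computer Vision multimodal embeddings model.

```python
# Assumes azure-search-documents >= 11.4.0; all names below are placeholders
from azure.search.documents.indexes.models import (
    HnswAlgorithmConfiguration,
    SearchField,
    SearchFieldDataType,
    VectorSearch,
    VectorSearchProfile,
)

# Extra vector field to hold the Computer Vision image embedding
# (1024 dimensions assumed for the multimodal embeddings model)
image_vector_field = SearchField(
    name="image_vector",
    type=SearchFieldDataType.Collection(SearchFieldDataType.Single),
    searchable=True,
    vector_search_dimensions=1024,
    vector_search_profile_name="image-vector-profile",
)

vector_search = VectorSearch(
    algorithms=[HnswAlgorithmConfiguration(name="image-hnsw")],
    profiles=[
        VectorSearchProfile(
            name="image-vector-profile",
            algorithm_configuration_name="image-hnsw",
        )
    ],
)
```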
**Update 1st May:**
- We are proceeding with removing LangChain from the tools
- Before starting to make this change, I am going to expand the current functional tests to...
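The idea is to pin down the current externally visible behaviour first, so the same tests keep passing once LangChain is gone. Very roughly, something like the test below; the base URL, route and response shape are assumptions here, not the actual contract.

```python
import requests

BASE_URL = "http://localhost:5000"  # assumed local dev address


def test_conversation_returns_expected_shape():
    payload = {"messages": [{"role": "user", "content": "What is in this document?"}]}
    response = requests.post(f"{BASE_URL}/api/conversation", json=payload, timeout=30)

    assert response.status_code == 200
    body = response.json()
    # Assert on the externally visible contract rather than LangChain
    # internals, so the test still holds after the refactor
    assert "choices" in body
    assert body["choices"][0]["messages"]
```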
**13th May:** Mini code review:
- [x] Include original exception when catching and throwing
- [x] Move inline patches to annotations
- [x] Make sure new files are pep8 compliant...
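Generic illustrations of the first two points (not repo code):

```python
import json
import os.path
from unittest.mock import patch


# 1. Include the original exception when catching and re-raising, so the
#    traceback keeps the root cause (`raise ... from e`)
def load_config(path: str) -> dict:
    try:
        with open(path) as f:
            return json.load(f)
    except OSError as e:
        raise RuntimeError(f"Could not load config from {path}") from e


# 2. Move inline `with patch(...)` blocks to decorator ("annotation") form
@patch("os.path.exists", return_value=True)
def test_path_is_checked(mock_exists):
    assert os.path.exists("anything")
    mock_exists.assert_called_once()
```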
**14th May Update:**
- A PR has been raised to add the call to computer vision to generate embeddings of the images
- Next steps are to take these embeddings...
@ferrari-leo Heya, I'm not 100% familiar with the history, so I'm not sure if that only started to be supported recently. However, given it is now supported, the only thing stopping...
Can we make these configurable via env vars to save the user needing to modify code?