self-operating-computer
Open source large language model support
Is it possible to run this and point it not at OpenAI but at a self-hosted large language model to do the same thing?
@andzejsp this would be possible. It would likely require some slight changes to prompting and some adjustments to functions in the repo. If someone finds a good provider hosting a model with a good API and key access, go ahead and add it to the project as a PR
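As a rough illustration of what those adjustments might look like: many self-hosted servers expose an OpenAI-compatible endpoint, so the switch can come down to a base URL. A minimal sketch, assuming the project uses the openai Python client; the function and provider names here are hypothetical, not the repo's actual code:

```python
import os
from openai import OpenAI

def get_client(provider: str) -> OpenAI:
    """Return a client for the chosen provider (hypothetical helper)."""
    if provider == "openai":
        return OpenAI(api_key=os.environ["OPENAI_API_KEY"])
    # Many self-hosted servers (llama.cpp, vLLM, etc.) speak the OpenAI
    # API, so often only the base URL needs to change.
    return OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")
```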
@joshbickett would you consider starting with Ollama?
I created a feature request like this: #35
Yeah, if someone could get a PR of a vision model working locally on the project that'd be great I think
Would this work? https://llava-vl.github.io/
https://simonwillison.net/2023/Nov/29/llamafile/
@Andy1996247 it sounds like it may work based on what you mentioned in #101
could this work?
https://github.com/petals-infra/chat.petals.dev#apis
Not very familiar with Petals Chat. It may work, but I think llama.cpp is the most promising
@Andy1996247 @orkutmuratyilmaz @norzog wanted to mention that we added support for the Gemini model in case you're interested. It was merged in PR #110
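In case it helps anyone try it, the underlying call is roughly of this shape. A minimal sketch using the google-generativeai package; the prompt and screenshot path are illustrative, not the repo's actual code:

```python
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_GEMINI_API_KEY")  # key from Google AI Studio

# gemini-pro-vision accepts interleaved text and images
model = genai.GenerativeModel("gemini-pro-vision")
screenshot = Image.open("screenshot.png")  # illustrative path
response = model.generate_content(
    ["Describe the next UI action to take.", screenshot]
)
print(response.text)
```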
@joshbickett thanks for the update. We're one step closer to open source LLM support🤘🏻
would be cool if Ollama was supported https://github.com/jmorganca/ollama
Simply point it at an Ollama instance and Bob's your uncle. Not sure how it all works, but Ollama was no pain to set up and very usable, just one command to get it running :).
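For reference, pointing at a local Ollama instance is mostly a single HTTP call. A minimal sketch, assuming a default install listening on localhost:11434 and a model already pulled with `ollama pull`:

```python
import requests

# Ollama's default local endpoint; adjust if you run it elsewhere
OLLAMA_URL = "http://localhost:11434/api/generate"

payload = {
    "model": "llama2",  # any model you've pulled, e.g. `ollama pull llama2`
    "prompt": "Say hello in one sentence.",
    "stream": False,    # return one JSON object instead of a stream
}
resp = requests.post(OLLAMA_URL, json=payload, timeout=120)
resp.raise_for_status()
print(resp.json()["response"])
```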
Currently working on LLaVA support through Ollama as we speak :)
Obviously accuracy will be low, but I think it'll be great to finally have support for an open-source model!
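A rough sketch of what the LLaVA call could look like, assuming Ollama's `/api/generate` endpoint and its base64 `images` field for multimodal models; this is illustrative, not the PR's actual code:

```python
import base64
import requests

# Encode a screenshot as base64, which Ollama expects for multimodal models
with open("screenshot.png", "rb") as f:  # illustrative path
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "model": "llava",  # pulled beforehand with `ollama pull llava`
    "prompt": "Which UI element should be clicked next?",
    "images": [image_b64],
    "stream": False,
}
resp = requests.post("http://localhost:11434/api/generate", json=payload, timeout=300)
print(resp.json()["response"])
```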
Heads up, I think you should be able to stand up your own OpenAI-compatible API here:
https://llama-cpp-python.readthedocs.io/en/latest/server/#multimodal-models
Then this project can point to your self-hosted API instead of OpenAI.
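To illustrate: once the llama-cpp-python server is running, pointing an OpenAI client at it is mostly a base-URL change. A sketch, assuming the server's default port 8000 and the openai Python package; the model file paths in the comment are placeholders:

```python
# Start the server separately, e.g. (see the llama-cpp-python docs for flags):
#   python -m llama_cpp.server --model ./llava.gguf \
#       --clip_model_path ./mmproj.gguf --chat_format llava-1-5
from openai import OpenAI

# The local server speaks the OpenAI API, so only base_url changes
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="local-model",  # name is mostly ignored by the local server
    messages=[{"role": "user", "content": "Hello from a self-hosted model!"}],
)
print(resp.choices[0].message.content)
```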
We now have LLaVA available in the project thanks to a PR from @michaelhhogue!
Thanks for the LLaVA support :)