Magentic-One local LLM (Ollama) support?
What feature would you like to be added?
How can Magentic-One be used with local LLMs or Ollama?
Why is this needed?
This would enable users to run Magentic-One with open-source LLMs rather than only the OpenAI API.
This video appears to demonstrate using Ollama with Magentic-One:
https://www.youtube.com/watch?v=-WqHY3uE_K0
That is my video posted above :) I have a semi-functional fork of this that works with ollama and was tested with llama-3.2-11b-vision. Here is a link to the repo: https://github.com/OminousIndustries/autogen-llama3.2
The install steps should be the same as the regular Magentic-One install. You can ignore the "Environment Configuration for Chat Completion Client" section, since the model info is hard-coded into utils.py in my repo (a current limitation, as it ties the fork to llama-3.2-11b-vision), but since that was the model I was testing with, it worked for my purposes!
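For anyone adapting the fork, here is a rough, hypothetical sketch (not the actual utils.py from the repo) of how the hard-coded model info could instead be read from environment variables so other Ollama models can be swapped in. The variable names, model tag, and capability flags are assumptions; set them to match whatever model you actually run.

```python
# Hypothetical sketch only -- not the fork's actual utils.py.
# Reads the Ollama model name and endpoint from environment variables
# instead of hard-coding llama-3.2-11b-vision.
import os

from autogen_ext.models.openai import OpenAIChatCompletionClient


def create_model_client() -> OpenAIChatCompletionClient:
    return OpenAIChatCompletionClient(
        model=os.environ.get("OLLAMA_MODEL", "llama3.2-vision"),
        base_url=os.environ.get("OLLAMA_BASE_URL", "http://localhost:11434/v1"),
        api_key="ollama",  # placeholder; Ollama does not check the key
        model_info={
            "vision": True,            # set these to the model's real capabilities
            "function_calling": True,
            "json_output": True,
            "family": "unknown",
        },
    )
```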
That's legendary! I really appreciate your effort and the video!
Thanks for the kind words!
From the looks of the fork, this doesn't seem like a big change, and we should allow for model parametrization too. Can this be made part of the project? Now more than ever, given the large number of OpenAI-API-compatible models out there.
It is already supported. There is a recent bug fix: #5983.
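For reference, a minimal sketch of what running Magentic-One against an Ollama-hosted model can look like, going through Ollama's OpenAI-compatible endpoint. The model tag, capability flags, and task string are assumptions; adjust them for your setup.

```python
# Minimal sketch, assuming a local Ollama server and an OpenAI-compatible model tag.
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.teams import MagenticOneGroupChat
from autogen_agentchat.ui import Console
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main() -> None:
    client = OpenAIChatCompletionClient(
        model="llama3.2-vision",               # assumed Ollama model tag
        base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible API
        api_key="ollama",                      # placeholder; not checked by Ollama
        model_info={
            "vision": True,
            "function_calling": True,
            "json_output": True,
            "family": "unknown",
        },
    )
    # Single-agent team for brevity; add MultimodalWebSurfer, FileSurfer, etc. as needed.
    assistant = AssistantAgent("Assistant", model_client=client)
    team = MagenticOneGroupChat([assistant], model_client=client)
    await Console(team.run_stream(task="Write a short summary of what Magentic-One does."))


if __name__ == "__main__":
    asyncio.run(main())
```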