
Magentic-One local LLM (Ollama) support?

Open mehulgupta2016154 opened this issue 1 year ago • 4 comments

What feature would you like to be added?

How can Magentic-One be used with local LLMs or Ollama?

Why is this needed?

This will enable users to use Magentic-One with open-source LLMs rather than only the OpenAI API.

mehulgupta2016154 · Nov 11 '24

This video seems to demonstrate the usage of Ollama with Magentic-One:

https://www.youtube.com/watch?v=-WqHY3uE_K0

009topersky · Nov 13 '24

That is my video posted above :) I have a semi-functional fork of this that works with Ollama and was tested with llama-3.2-11b-vision. Here is a link to the repo: https://github.com/OminousIndustries/autogen-llama3.2

The install steps should be the same as the regular Magentic-One install. You can ignore the "Environment Configuration for Chat Completion Client" section, since the model info is hard-coded into utils.py in my repo (a current limitation, as it ties the setup to llama-3.2-11b-vision), but since I was using that model for testing, it worked for my purposes!
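
For anyone adapting this without the fork: what the hard-coded model info in utils.py boils down to is pointing an OpenAI-compatible client at Ollama's local endpoint and declaring the model's capabilities by hand. A minimal sketch, assuming autogen-ext 0.4.x; the endpoint, model tag, and capability flags below are illustrative assumptions, not values from the fork:

```python
# Sketch, not the fork's actual code: point AutoGen's OpenAI-compatible
# client at a local Ollama server and declare the model's capabilities
# by hand. Endpoint and model tag assume a default local Ollama install.
from autogen_ext.models.openai import OpenAIChatCompletionClient

client = OpenAIChatCompletionClient(
    model="llama3.2-vision:11b",           # whatever tag you pulled with `ollama pull`
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
    api_key="ollama",                      # Ollama ignores the key, but the client requires one
    model_info={                           # declared by hand, since local models are
        "vision": True,                    # not in the client's built-in model list
        "function_calling": True,
        "json_output": False,
        "family": "unknown",               # exact required keys vary by autogen-ext version
    },
)
```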

OminousIndustries · Nov 13 '24

That's legendary! I really appreciate your effort and the video!

SlistInc · Nov 17 '24

Thanks for the kind words!

OminousIndustries · Nov 17 '24

From the looks of the fork, this doesn't seem like a big deal, and we should allow for model parametrization too. Can we get this into the project? Now more than ever, given the crazy number of API-compatible models out there.

erodrig · Mar 17 '25

It is already supported. There is a recent bug fix: #5983.
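
For reference, a minimal sketch of the supported path, assuming a recent autogen-ext with the Ollama extra installed (pip install "autogen-ext[ollama]") and a model already pulled locally; the model name below is an assumption:

```python
# Minimal sketch of the supported Ollama client in recent autogen-ext
# releases. Assumes a default local Ollama server and that
# `ollama pull llama3.2` has already been run.
import asyncio

from autogen_core.models import UserMessage
from autogen_ext.models.ollama import OllamaChatCompletionClient


async def main() -> None:
    client = OllamaChatCompletionClient(model="llama3.2")
    result = await client.create([UserMessage(content="Hello!", source="user")])
    print(result.content)


asyncio.run(main())
```

The same client instance can then be passed wherever Magentic-One expects a model client.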

ekzhu · Mar 17 '25