Documentation: Using Fara-7b
I'm running macOS Tahoe 26.1 with Docker Desktop. How do I run Fara-7B as a Computer Use Agent (CUA), as announced in the article below?
https://www.microsoft.com/en-us/research/blog/fara-7b-an-efficient-agentic-model-for-computer-use/
I've gotten the magentic-ui service up and running, with the magentic-ui[ollama] package, and I can access it locally from my web browser.
Now what? Is there a reference document for configuration? There's very little guidance in the README.
Do I run the Fara-7b model in Ollama somehow, or is Ollama just for using a text model? I can't find Fara-7b in the Ollama model library. Do I have to download it from Hugging Face?
Any documentation would be helpful.
Hi @pcgeek86, I will have an update for this very soon! Sorry for the delay.
The instructions are here https://github.com/microsoft/magentic-ui/blob/main/README.md#fara-7b
Please update to the latest Magentic-UI version and install the fara extra.
Thanks so much for your patience!
Just confirming that it is possible to run Fara on Mac with Magentic UI.
However, for GPU usage, you need LM Studio, so the fara_config.yaml becomes:

```yaml
model_config_local_surfer: &client_surfer
  provider: OpenAIChatCompletionClient
  config:
    model: "microsoft_fara-7b"
    base_url: http://<Local IP>:1234/v1
    api_key: not-needed
    model_info:
      vision: true
      function_calling: true
      json_output: false
      family: "unknown"
      structured_output: false
      multiple_system_messages: false

orchestrator_client: *client_surfer
coder_client: *client_surfer
web_surfer_client: *client_surfer
file_surfer_client: *client_surfer
action_guard_client: *client_surfer
model_client: *client_surfer
```
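As a quick sanity check before wiring this into Magentic-UI, you can hit LM Studio's OpenAI-compatible endpoint directly. This is a minimal sketch, assuming the server is reachable on localhost:1234, that the model is exposed as "microsoft_fara-7b", and that you have some local screenshot to exercise the vision path; adjust all of these to your setup:

```python
# Minimal sketch: verify the LM Studio endpoint serves the Fara model with vision
# before pointing Magentic-UI at it. Host, port, model id and image path are
# assumptions about a typical LM Studio setup; adjust to match yours.
import base64
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")

# Confirm the identifier LM Studio actually exposes for the loaded model.
print([m.id for m in client.models.list().data])

# Send a screenshot to confirm the vision path works end to end.
with open("screenshot.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="microsoft_fara-7b",  # must match the id printed above
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe what is on this page."},
            {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```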
I am still having some issues with it recognising the browser content, e.g. after a search, and it seems to get stuck in a loop. Using the demo query, it gets to a captcha; if you click on the captcha to allow it to continue, it then goes into a loop.
Can you share your LM Studio settings for Fara? I was not able to make it work with Magentic UI because of JSON compatibility issues.
I will try LM Studio myself today to see if there are any differences hosting there vs vLLM that may be causing issues. I will post instructions in the README if I find a fix.
Sorry I did not post the fix. It seems it was just because my context window was too small; it needs more than 8k. I tried the settings of the person who created fara-agent, referenced in another issue: context 8k+, temp 0, topP 0.9.
Then it works.
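The context length is something you set when loading the model in LM Studio, but if you want to confirm the sampling side independently of Magentic-UI, a minimal sketch against the same endpoint (assuming the setup above) would be:

```python
# Sketch only: the >8k context is configured when loading the model in LM Studio,
# but temperature and top_p can also be pinned per request like this.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")
response = client.chat.completions.create(
    model="microsoft_fara-7b",
    messages=[{"role": "user", "content": "Open the search results and summarize them."}],
    temperature=0,
    top_p=0.9,
)
print(response.choices[0].message.content)
```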
There are five agents for different purposes (listed in the config). Is Fara a good choice for all of them or just some? So far, I have seen configurations that really do not leverage this separation of purpose: all agents using the same model...
I was wondering the same.
When you launch magentic-ui with the --fara flag, only the [Fara]WebSurfer agent will be present. Fara-7B was trained to act on its own, without an orchestrator, so we wanted the experience here to mimic how Fara-7B was trained and let users experience Fara's capabilities on its own.
This is why the config points all the other agents at the FaraWebSurfer config, simply so that there is no issue with loading Magentic-UI.
To have different models powering different agents, and Fara working with the orchestrator, I need to do some additional testing to make sure Fara-7B interacts well with the other agents. I will likely make a version update integrating this in the next week or two.
@ManuInNZ which version of Fara are you using in LM Studio? This one: bartowski/microsoft_Fara-7B-GGUF?
Yes, https://huggingface.co/bartowski/microsoft_Fara-7B-GGUF at Q5_K_S. The L version did not work, but I'm not sure why; it was not considered vision-enabled, even though it is flagged as such in the UI. I tried Q4_K_S and it worked too.
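If you prefer to fetch the GGUF yourself rather than through the LM Studio UI, a minimal sketch with huggingface_hub would look like the following; the exact filename is an assumption based on bartowski's usual naming, so check the repository's file list first:

```python
# Hypothetical sketch: download the Q5_K_S GGUF directly from Hugging Face.
# The filename is assumed from bartowski's usual naming convention and may differ.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="bartowski/microsoft_Fara-7B-GGUF",
    filename="microsoft_Fara-7B-Q5_K_S.gguf",  # verify against the repo's file list
)
print(path)  # local path to point your GGUF runtime at
```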
Why not use the original model from Microsoft rather than a third-party variant?
Because it was not available as a GGUF, which I need to run it on my Mac, and I did not want to use Olive since it would still run on CPU and not GPU.
Bartowski is a known model provider in the LM Studio community.