JARVIS
So Macs can't use this?
Macs don't come with NVIDIA graphics cards, so Macs can't use this, right?
I guess you still can, but using hybrid mode only. https://github.com/microsoft/JARVIS#configuration
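For context, the linked configuration section chooses where models run via a single key. A sketch of the idea (key names and values should be checked against the current README):

```yaml
# inference_mode selects where the models execute:
#   local       - everything runs on your own (NVIDIA) GPU
#   huggingface - everything runs remotely via the Hugging Face Inference API
#   hybrid      - small models run locally, the rest remotely
inference_mode: hybrid
```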
But the server needs an NVIDIA graphics card.

There are various ways to configure this package depending on your resource limitations. I am using it on a Mac right now:
(screenshot not preserved)
I am also a Mac user and I encountered this issue while running this line of code. Could you please tell me what I should do if it is convenient?
Here's my issue:

The answer to your issue is on line 3 of your screenshot. Install git-lfs and try the model download step again.
Thank you. Your solution is very helpful, but after downloading so many files, the progress is still 0%. Is this a normal situation?
Yes, the LFS objects are rather large. My models folder is 275 GB personally.
Are the LFS objects absolutely necessary? Tryna run this on my macbook air lol (16gb ram, 500gb ssd)
No, you can run the lite.yaml configuration to use remote models only, although this is quite limited at the moment. I suggest using an external hard drive or SSD to manage these large models.
@Fermain So if we deploy JARVIS on macOS, we can only use lite.yaml (that is, inference_mode: huggingface), right? Because if we use inference_mode: local (or inference_mode: hybrid), we need an NVIDIA graphics card, and Macs have no NVIDIA graphics card. Is that right?
Comment out lines 298-300 (approximately, if you haven't reformatted the file) in models_server.py:
"midas-control": {some model here}
Then you can run without an NVIDIA device.
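As a sketch of the edit (the exact line numbers and path vary between JARVIS versions, and the variable name here is a stand-in for the server's model table), it amounts to commenting out that one dictionary entry:

```python
# Hypothetical excerpt from models_server.py: the model table with the
# CUDA-only "midas-control" entry commented out so the server can start
# on machines without an NVIDIA GPU. Names are illustrative, not exact.
pipes = {
    # "midas-control": {
    #     "model": MidasDetector(
    #         model_path=f"{local_fold}/lllyasviel/ControlNet/annotator/ckpts/dpt_hybrid-midas-501f0c75.pt"
    #     )
    # },
    # ...the remaining model entries stay as they are...
}

# With the entry commented out, nothing tries to load the CUDA checkpoint.
assert "midas-control" not in pipes
```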
I have just downloaded the models on my Mac; I don't have an NVIDIA card.
I started the server with: python models_server.py --config lite.yaml
I got this error message:
AssertionError: Torch not compiled with CUDA enabled
After commenting out
"midas-control": {
"model": MidasDetector(model_path=f"{local_fold}/lllyasviel/ControlNet/annotator/ckpts/dpt_hybrid-midas-501f0c75.pt")
}
the models_server started.
did you run git lfs install?
Yes, git-lfs is installed; the version is 3.3.0.
I mean that after you installed git-lfs, you need to run git lfs install first.
If you did that already, run sh download.sh again.
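Putting the whole sequence together (assuming Homebrew on macOS, and the models directory layout from the repo):

```shell
brew install git-lfs   # install the git-lfs binary (it is not a pip package)
git lfs install        # set up the git hooks so LFS objects actually download
cd models              # the JARVIS models directory
sh download.sh         # re-run the download; LFS objects should now fetch
```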
Thanks, I'll try it
@Fermain @ethanye77 Did you encounter this error? https://github.com/microsoft/JARVIS/issues/67
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Hello, have you resolved this issue? I also reported the same error.

I executed the following commands but still got an error: pip install git-lfs, then cd models and sh download.sh
git-lfs is not a pip package. You can use Homebrew to install it:
brew install git-lfs
The error message states that this is not installed.
OK, thank you!
My device is a MacBook M1; how do I solve this problem?
Without Nvidia hardware, there is no solution to this particular issue. This system is not designed to run on Apple hardware and can only be used in limited ways on this platform.
How can I use it in that limited way?
The readme contains instructions for running with the lite.yaml config file instead of the full config.yaml. Add your API keys to the lite file, and run with it instead of config.yaml.
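Sketched from the README (key names may differ between versions, and the keys shown are placeholders, not real values):

```yaml
# lite.yaml sketch: remote-only inference, no local GPU required
inference_mode: huggingface
openai:
  api_key: YOUR_OPENAI_KEY   # placeholder - use your own key
huggingface:
  token: YOUR_HF_TOKEN       # placeholder
```

Then start the server against this file: python models_server.py --config lite.yaml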
My device is a MacBook M1; how do I solve this problem?
Check out my first post in this issue:
https://github.com/microsoft/JARVIS/issues/39#issuecomment-1499319851
You don't need to change config.yaml to lite.yaml.
Did it work successfully?
it did
@sirlaurie I missed that comment, very helpful - thanks