taozhiyuai
I'm not sure whether `pip install onnxruntime` can be used as a replacement for `onnxruntime-gpu` here.
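Not sure this applies to every setup, but on macOS (no CUDA) a common workaround is to install the CPU wheel in place of `onnxruntime-gpu` and then check which execution providers are actually available. A minimal sketch, assuming the environment currently pins `onnxruntime-gpu`:

```bash
# Swap the CUDA build of onnxruntime for the CPU build
pip uninstall -y onnxruntime-gpu
pip install onnxruntime

# Verify which execution providers the installed wheel exposes; on macOS this
# typically includes 'CPUExecutionProvider' (and possibly 'CoreMLExecutionProvider')
python -c "import onnxruntime; print(onnxruntime.get_available_providers())"
```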
> Our project was developed on Linux. For deploying AniPortrait on Windows, see #37.

Please add macOS support too~
@BearSolitarily did you manage to install it on macOS? I keep hitting errors; is there something I need to adjust?
This is the error I get:

```
(aniportrait) taozhiyu@TAOZHIYUs-MBP aniportrait % python -m scripts.audio2vid --config ./configs/prompts/animation_audio.yaml -W 512 -H 512
Traceback (most recent call last):
  File "/Users/taozhiyu/miniconda3/envs/aniportrait/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File ...
```
> I also can't get decord to install on my Mac, not sure why.

This has already been addressed above.
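For anyone still stuck on decord: one workaround is building it from source instead of installing the prebuilt wheel. This is a sketch based on the upstream build instructions, not something verified on every macOS setup:

```bash
# Sketch: build decord from source on macOS (CUDA disabled), roughly following
# https://github.com/dmlc/decord
brew install cmake ffmpeg
git clone --recursive https://github.com/dmlc/decord
cd decord && mkdir build && cd build
cmake .. -DUSE_CUDA=0 -DCMAKE_BUILD_TYPE=Release
make
cd ../python
pip install .
```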
> I had the same error the first time I tried creating my own model from gguf. Then I tried this other modified version that someone else created, and it...
```
# Modelfile generated by "ollama show"
# To build a new Modelfile based on this one, replace the FROM line with:
# FROM phi-3-mini-128K-Instruct_q8_0

FROM /Users/taozhiyu/Downloads/M-GGUF/Phi-3-mini-128K-Instruct/Phi-3-mini-128K-Instruct_Q8_0.gguf
TEMPLATE """{{ if .System...
```
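For context, a Modelfile like the one above is built into a local model and run with the standard ollama CLI (the name `phi3-local` below is just a placeholder):

```bash
# Build a local model from the Modelfile in the current directory
ollama create phi3-local -f Modelfile

# Run it interactively
ollama run phi3-local
```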
This happens when there isn't enough VRAM to run the model entirely on the GPU, so ollama falls back to running it on the CPU with system RAM (and disk).
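One way to check where a loaded model actually ended up is `ollama ps`; its PROCESSOR column reports the CPU/GPU split (for example "100% GPU" versus a mixed CPU/GPU percentage):

```bash
# Show loaded models and how they are scheduled (CPU, GPU, or a mix)
ollama ps
```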
> Is it possible to utilize RAM + VRAM?
>
> I'm trying to run a ~40G model locally on a 4090 (24GB) and I have 128GB of RAM, from which...
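Ollama already handles this kind of split automatically: layers that do not fit in VRAM stay in system RAM and run on the CPU. If you want to steer the split yourself, one option is the `num_gpu` parameter, which caps how many layers are offloaded to the GPU. A minimal sketch (the model path, the name `model-partial-offload`, and the value 20 are placeholders, not from this thread):

```bash
# Sketch: cap GPU offload with the num_gpu parameter; layers beyond the cap
# stay in system RAM and run on the CPU.
cat > Modelfile <<'EOF'
FROM /path/to/model_q4_0.gguf
PARAMETER num_gpu 20
EOF

ollama create model-partial-offload -f Modelfile
ollama run model-partial-offload
```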