taozhiyuai

Results 117 comments of taozhiyuai

I'm not sure whether `pip install onnxruntime` and `onnxruntime-gpu` can be swapped for each other.
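For what it's worth, a quick way to tell which build ends up active after swapping the packages is to check the execution providers. A minimal sketch, assuming the package imports cleanly:

```python
# Minimal check: which onnxruntime build is installed, and does it
# expose a GPU execution provider?
import onnxruntime as ort

print(ort.__version__)
print(ort.get_available_providers())
# The CPU-only "onnxruntime" wheel lists just CPUExecutionProvider;
# "onnxruntime-gpu" additionally lists CUDAExecutionProvider.
```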

> Our project was developed on Linux. You can also refer to #37 for deploying AniPortrait on Windows.

Please add Mac support~

@BearSolitarily Did you manage to install it on your Mac? I keep getting errors; is there something I need to adjust?

My error looks like this:

```
(aniportrait) taozhiyu@TAOZHIYUs-MBP aniportrait % python -m scripts.audio2vid --config ./configs/prompts/animation_audio.yaml -W 512 -H 512
Traceback (most recent call last):
  File "/Users/taozhiyu/miniconda3/envs/aniportrait/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File...
```

> I also can't install decord on my Mac; no idea why.

This issue has already been covered above.

> I had the same error the first time I tried creating my own model from gguf. Then I tried this other modified version that someone else created, and it...

```
# Modelfile generated by "ollama show"
# To build a new Modelfile based on this one, replace the FROM line with:
# FROM phi-3-mini-128K-Instruct_q8_0

FROM /Users/taozhiyu/Downloads/M-GGUF/Phi-3-mini-128K-Instruct/Phi-3-mini-128K-Instruct_Q8_0.gguf
TEMPLATE """{{ if .System...
```
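To make that concrete, here is a hedged sketch of building and smoke-testing a model from such a Modelfile, shelling out to the ollama CLI from Python; the model name `phi3-mini-128k-q8` is just a placeholder:

```python
# Hedged sketch: create a model from the Modelfile above and send it a
# quick prompt via the ollama CLI. The model name is a placeholder.
import subprocess

subprocess.run(
    ["ollama", "create", "phi3-mini-128k-q8", "-f", "Modelfile"],
    check=True,  # raise if "ollama create" fails (e.g. bad GGUF path)
)
subprocess.run(
    ["ollama", "run", "phi3-mini-128k-q8", "Say hello in one sentence."],
    check=True,
)
```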

This happens when there is not enough VRAM to run the model entirely on the GPU, so ollama falls back to running it on the CPU with system RAM.
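One way to confirm where the weights actually landed is the server's `/api/ps` endpoint (the same data `ollama ps` prints). A minimal sketch, assuming the default port 11434:

```python
# Hedged sketch: query the local ollama server for running models and
# compare total model size against the portion resident in VRAM.
import json
import urllib.request

with urllib.request.urlopen("http://localhost:11434/api/ps") as resp:
    for m in json.load(resp).get("models", []):
        # size_vram == 0 means the model fell back to CPU + system RAM.
        print(m["name"], "total:", m["size"], "in VRAM:", m.get("size_vram", 0))
```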

> Is it possible to utilize RAM + VRAM? [screenshot]
>
> I'm trying to run ~40G model locally on 4090 (24GB) and I have 128GB of RAM from which...
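A hedged sketch of partial offload, assuming ollama's `num_gpu` option (the number of layers pushed to the GPU) and a hypothetical model name; the remaining layers should then sit in system RAM:

```python
# Hedged sketch: request a generation with only part of the model on
# the GPU, letting the rest spill into system RAM. The model name and
# layer count are placeholders.
import json
import urllib.request

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps({
        "model": "some-40g-model",   # hypothetical
        "prompt": "Hello",
        "stream": False,
        "options": {"num_gpu": 20},  # offload only 20 layers to the GPU
    }).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["response"])
```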
