fish-speech
Does this project support multi-gpu inference?
Self Checks
- [X] I have thoroughly reviewed the project documentation (installation, training, inference) but couldn't find any relevant information that meets my needs.
- [X] I have searched for existing issues, including closed ones.
- [X] I confirm that I am using English to submit this report (我已阅读并同意 Language Policy).
- [X] [FOR CHINESE USERS] Please submit issues in English, or they will be closed. Thank you! :)
- [X] Please do not modify this template :) and fill in all the required fields.
1. Is this request related to a challenge you're experiencing? Tell us your story.
Does this project support multi-gpu inference? Can you give me some potential solutions?
2. What is your suggested solution?
NO
3. Additional context or comments
No response
4. Can you help us with this feature?
- [X] I am interested in contributing to this feature.
Currently, we don't support multi-GPU inference. However, you can still deploy multiple instances (one per card) and run a load balancer in front of them to achieve parallel inference.
You can also rewrite the inference script yourself to use subprocess and run inference across multiple GPUs.
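The subprocess approach above can be sketched as follows. This is a minimal illustration, not the project's actual CLI: the serving script path (`tools/api_server.py`) and the `--port` flag are assumptions you would replace with the real entry point. Each worker is pinned to one card via `CUDA_VISIBLE_DEVICES`.

```python
import os
import sys

def build_worker_cmds(num_gpus, script="tools/api_server.py", base_port=8000):
    """Build one (command, env) pair per GPU.

    Each worker sees only its own card through CUDA_VISIBLE_DEVICES and
    listens on its own port. Script path and --port flag are hypothetical
    placeholders for the real inference entry point.
    """
    jobs = []
    for gpu in range(num_gpus):
        env = os.environ.copy()
        env["CUDA_VISIBLE_DEVICES"] = str(gpu)  # pin this worker to one GPU
        cmd = [sys.executable, script, "--port", str(base_port + gpu)]
        jobs.append((cmd, env))
    return jobs

# Launch each pair with subprocess.Popen(cmd, env=env), then put a load
# balancer (e.g. nginx round-robin) in front of ports 8000..8000+N-1.
```

With this layout, requests are distributed across GPUs by the load balancer rather than by the model itself, which avoids any need for tensor- or pipeline-parallel changes to the inference code.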
This issue is stale because it has been open for 30 days with no activity.
This issue was closed because it has been inactive for 14 days since being marked as stale.