
Does this project support multi-gpu inference?

Open Kingdroper opened this issue 1 year ago • 3 comments

Self Checks

  • [X] I have thoroughly reviewed the project documentation (installation, training, inference) but couldn't find any relevant information that meets my needs.
  • [X] I have searched for existing issues, including closed ones.
  • [X] I confirm that I am using English to submit this report (I have read and agree to the Language Policy).
  • [X] [FOR CHINESE USERS] Please submit issues in English, otherwise they will be closed. Thank you! :)
  • [X] Please do not modify this template :) and fill in all the required fields.

1. Is this request related to a challenge you're experiencing? Tell us your story.

Does this project support multi-gpu inference? Can you give me some potential solutions?

2. What is your suggested solution?

NO

3. Additional context or comments

No response

4. Can you help us with this feature?

  • [X] I am interested in contributing to this feature.

Kingdroper avatar Oct 29 '24 09:10 Kingdroper

Currently, we don't support multi-GPU inference. However, you can still deploy multiple instances (one per card) and run a load balancer in front of them to achieve parallel inference.
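The multi-instance approach above can be sketched as follows. This is a minimal, hypothetical sketch: the server entry point (`tools/api_server.py`), its `--listen` flag, and the port layout are assumptions you would adapt to your actual deployment; the round-robin iterator stands in for a real load balancer such as nginx.

```python
# Sketch: one inference server per GPU, pinned via CUDA_VISIBLE_DEVICES,
# plus a trivial round-robin "load balancer" over the instance URLs.
import itertools
import os
import subprocess


def launch_instances(n_gpus, base_port=8000,
                     server_cmd=("python", "tools/api_server.py")):
    """Start one server per GPU; each process sees only its own card."""
    procs = []
    for gpu in range(n_gpus):
        env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(gpu))  # pin to one card
        procs.append(subprocess.Popen(
            [*server_cmd, "--listen", f"127.0.0.1:{base_port + gpu}"],
            env=env,
        ))
    return procs


def round_robin_urls(n_gpus, base_port=8000):
    """Endless iterator cycling over instance URLs -- a minimal balancer."""
    urls = [f"http://127.0.0.1:{base_port + gpu}" for gpu in range(n_gpus)]
    return itertools.cycle(urls)
```

A client would pull the next URL from `round_robin_urls(...)` before each request, so load spreads evenly across the cards.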

Stardust-minus avatar Oct 30 '24 08:10 Stardust-minus

You can also rewrite the inference script yourself to use subprocesses and run inference across multiple GPUs.
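The subprocess idea above could look roughly like this: shard the batch of inputs, then spawn one inference subprocess per GPU, each pinned with `CUDA_VISIBLE_DEVICES`. The script name `infer.py` and its `--texts` flag are placeholders, not the project's real CLI.

```python
# Sketch: split a batch across GPUs and run one infer subprocess per shard.
import os
import subprocess


def shard(items, n_shards):
    """Round-robin split so each GPU gets a near-equal share of the work."""
    return [items[i::n_shards] for i in range(n_shards)]


def run_parallel(texts, n_gpus, script="infer.py"):
    """Launch one subprocess per GPU, each seeing only its assigned card."""
    procs = []
    for gpu, chunk in enumerate(shard(texts, n_gpus)):
        env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(gpu))
        procs.append(subprocess.Popen(
            ["python", script, "--texts", *chunk], env=env))
    for p in procs:
        p.wait()  # block until every shard has finished
```

Because each subprocess only sees one card, the unmodified single-GPU inference code runs unchanged inside each worker.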

Whale-Dolphin avatar Nov 04 '24 15:11 Whale-Dolphin

This issue is stale because it has been open for 30 days with no activity.

github-actions[bot] avatar Dec 05 '24 00:12 github-actions[bot]

This issue was closed because it has been inactive for 14 days since being marked as stale.

github-actions[bot] avatar Dec 20 '24 00:12 github-actions[bot]