Dependency issues with fairseq2==0.2.1 and CUDA sm_120 (RTX 5090 Blackwell) for Seamless M4T TTS
Hi team,
I am trying to run Seamless M4T TTS on a system with an NVIDIA RTX 5090 (Blackwell) GPU. The repository requires fairseq2==0.2.1, which is only compatible with torch==2.2.2. However, torch 2.2.2 appears to be incompatible with the sm_120 CUDA architecture used by the 5090, which leads to dependency conflicts and failed builds.
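For reference, here is a minimal sketch of the check I am using to confirm the mismatch (assuming a standard pip install of the CUDA-enabled torch 2.2.2 wheel); it compares the architectures the wheel was compiled for against what the GPU reports:

```python
import torch

# Report the installed torch build and the CUDA architectures it was compiled for.
print("torch:", torch.__version__, "| CUDA:", torch.version.cuda)
print("compiled arch list:", torch.cuda.get_arch_list())

if torch.cuda.is_available():
    # Compute capability of the installed GPU; the RTX 5090 reports (12, 0), i.e. sm_120.
    major, minor = torch.cuda.get_device_capability(0)
    device_arch = f"sm_{major}{minor}"
    print("device:", torch.cuda.get_device_name(0), "->", device_arch)
    if device_arch not in torch.cuda.get_arch_list():
        print("WARNING: this torch build was not compiled for", device_arch)
else:
    print("CUDA device not usable with this torch build")
```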
Could you please advise:
- Is there a recommended workaround or patch for running Seamless M4T TTS on recent Blackwell GPUs?
- Can fairseq2 be upgraded for newer torch/CUDA compatibility without breaking the integration?
- What steps would you suggest to make the current pipeline work on sm_120 CUDA?
Thanks in advance for your help!
Hi, has this issue been resolved? Thank you.