seamless_communication

Dependency issues with fairseq2==0.2.1 and CUDA sm_120 (RTX 5090 Blackwell) for Seamless M4T TTS

Open · aishwary-intellifAI opened this issue 3 months ago • 1 comment

Hi team,

I am trying to run Seamless M4T TTS on a system with an NVIDIA RTX 5090 (Blackwell) GPU. The repository requires fairseq2==0.2.1, which is only compatible with torch==2.2.2. However, torch 2.2.2 appears to be incompatible with the CUDA sm_120 architecture used by the 5090, which results in dependency conflicts and failed builds.
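
For reference, a minimal diagnostic like the following (assuming only that torch is importable) shows whether the installed torch build lists the GPU's compute capability, i.e. sm_120 on a 5090, among the architectures it was compiled for:

```python
# Check whether this torch build was compiled for the GPU's architecture.
import torch

print("torch:", torch.__version__)
print("built against CUDA:", torch.version.cuda)
print("CUDA available:", torch.cuda.is_available())

if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability(0)
    gpu_arch = f"sm_{major}{minor}"        # expected "sm_120" on an RTX 5090
    compiled = torch.cuda.get_arch_list()  # architectures this wheel was built for
    print("device:", torch.cuda.get_device_name(0))
    print("device arch:", gpu_arch)
    print("compiled archs:", compiled)
    print("arch supported by this build:", gpu_arch in compiled)
```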

Could you please advise:

  • If there is a recommended workaround or patch for running Seamless M4T TTS on recent Blackwell GPUs?
  • Whether fairseq2 can be upgraded for newer torch/CUDA compatibility without breaking the integration?
  • Any suggested steps for making the current pipeline work on sm_120 CUDA? (A rough verification sketch I would run against any suggested fix is included below.)
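
For context, the smoke test below is roughly what I would run to validate any suggested fix. It assumes a newer torch wheel that advertises sm_120 support (for example a cu128 build) and a locally relaxed fairseq2 pin; both are assumptions on my side rather than anything the repo currently supports, and I am also assuming fairseq2 exposes a __version__ attribute:

```python
# Hypothetical smoke test: assumes a newer torch wheel that includes sm_120
# (e.g. a cu128 build) has been installed and the fairseq2 pin has been
# relaxed locally; this is not an officially supported configuration.
import torch

x = torch.ones(8, device="cuda")   # raises a kernel/arch error on an unsupported build
print("basic CUDA op OK:", torch.all(x + x == 2).item())

import fairseq2                    # surfaces any torch/fairseq2 ABI or version mismatch
print("fairseq2 imported:", fairseq2.__version__)
```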

Thanks in advance for your help!

aishwary-intellifAI · Sep 24 '25 11:09

Hi, has this issue been resolved? Thank you.

zhp2018 · Oct 10 '25 09:10