Kokoro-FastAPI

fix: downgrade PyTorch CUDA version from cu129 to cu126

Open ryan-steed-usa opened this pull request 1 month ago • 3 comments

This change might restore support for Maxwell and Pascal architectures.

  • Updated GPU dependency from torch==2.8.0+cu129 to torch==2.8.0+cu126 in pyproject.toml
  • Changed PyTorch CUDA index URL from https://download.pytorch.org/whl/cu129 to https://download.pytorch.org/whl/cu126
  • This change ensures compatibility with the CUDA 12.6 runtime while keeping the same PyTorch version (2.8.0); the relevant diff is sketched below
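
In diff form, the pyproject.toml change looks roughly like the following. This is a sketch: surrounding keys are omitted, and the index URL is shown as a bare `url` key, while the exact table it lives under depends on how the project wires the PyTorch package index.

```diff
-"torch==2.8.0+cu129"
+"torch==2.8.0+cu126"

-url = "https://download.pytorch.org/whl/cu129"
+url = "https://download.pytorch.org/whl/cu126"
```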

ryan-steed-usa avatar Oct 30 '25 06:10 ryan-steed-usa

Closes #406

ryan-steed-usa avatar Oct 30 '25 20:10 ryan-steed-usa

Hey @ryan-steed-usa, is this still a draft?

remsky avatar Nov 05 '25 03:11 remsky

Hi @remsky, I was hoping for confirmation from a Maxwell or Pascal CUDA user, but everything seems to work containerized with my Ada Lovelace GPUs. Otherwise, I think it's ready to go.
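
For anyone on a Maxwell or Pascal card who wants to sanity-check before this merges, something like the snippet below, run inside the GPU container, should confirm the wheel's CUDA build and that kernels actually execute on the device. It is a rough sketch, not part of the repo or its tests.

```python
# Quick sanity check inside the GPU container: reports the wheel's CUDA build
# and runs a tiny kernel to confirm the card's architecture is supported.
import torch

print("torch:", torch.__version__)            # expect 2.8.0+cu126 with this change
print("built for CUDA:", torch.version.cuda)  # expect "12.6"
print("cuda available:", torch.cuda.is_available())

if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability(0)
    print(f"device: {torch.cuda.get_device_name(0)} (compute capability {major}.{minor})")
    # Maxwell is compute capability 5.x, Pascal is 6.x. A small matmul fails fast
    # if the installed wheel has no kernels for this architecture.
    x = torch.randn(8, 8, device="cuda")
    print("matmul ok:", torch.allclose(x @ torch.eye(8, device="cuda"), x))
```

On a working cu126 install this should report a 12.6 CUDA build and complete the matmul; on the cu129 wheels a Maxwell or Pascal card would be expected to fail around the kernel launch instead.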

ryan-steed-usa avatar Nov 05 '25 03:11 ryan-steed-usa

Pascal user here. I can confirm my container crashes on remsky/kokoro-fastapi-gpu:latest-amd64 and works fine on ryan-steed-usa/kokoro-fastapi-gpu:latest. It could be useful to have a dedicated build tag for those legacy GPU architectures to keep the latest CUDA version by default.

jtabet avatar Dec 14 '25 12:12 jtabet

Thanks for the feedback.

It could be useful to have a dedicated build tag for those legacy GPU architectures to keep the latest CUDA version by default

I agree, unless @remsky prefers to maintain a unified image, in which case this workaround should accommodate everyone (for a while, anyway). If we want to maintain a separate tag, we might also consider downgrading the entire base image.

ryan-steed-usa avatar Dec 14 '25 17:12 ryan-steed-usa

That's a great idea. I have an optimization to the build stages on the nvidia image that I was planning to push; I can take a look at tagging by torch version and roll this in.

remsky avatar Dec 15 '25 09:12 remsky