fix: downgrade PyTorch CUDA version from cu129 to cu126
This change might restore support for Maxwell and Pascal architectures.
- Updated GPU dependency from torch==2.8.0+cu129 to torch==2.8.0+cu126 in pyproject.toml
- Changed PyTorch CUDA index URL from https://download.pytorch.org/whl/cu129 to https://download.pytorch.org/whl/cu126
- This change ensures compatibility with the CUDA 12.6 runtime while keeping the same PyTorch version (2.8.0)
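For reference, the change described above might look roughly like this in pyproject.toml. The section layout, extra name, and uv index configuration are assumptions for illustration; the repo's actual file may be structured differently:

```toml
# Hypothetical sketch of the pyproject.toml change; the actual section
# names and layout in this repo may differ.
[project.optional-dependencies]
gpu = [
    "torch==2.8.0+cu126",  # was torch==2.8.0+cu129
]

# uv-style index entry pointing at the cu126 wheel index
# (was https://download.pytorch.org/whl/cu129)
[[tool.uv.index]]
name = "pytorch-cuda"
url = "https://download.pytorch.org/whl/cu126"
```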
Closes #406
Hey @ryan-steed-usa, is this still a draft?
Hi @remsky, I was hoping for confirmation from a Maxwell or Pascal CUDA user but everything seems to work containerized with my Ada Lovelace GPUs. Otherwise I think it's ready to go.
Pascal user here. I can confirm my container crashes on remsky/kokoro-fastapi-gpu:latest-amd64 and works fine on ryan-steed-usa/kokoro-fastapi-gpu:latest. It could be useful to have a dedicated build tag for those legacy GPU architectures, so the default image can keep the latest CUDA version.
Thanks for the feedback.
> It could be useful to have a dedicated build tag for those legacy GPU architectures to keep the latest CUDA version by default
I agree, unless @remsky prefers to maintain a unified image in which case this workaround should accommodate everyone (for a while anyway). If we want to maintain a separate tag, we might also consider downgrading the entire base image.
That's a great idea. I have an optimization to the build stages of the nvidia image that I was planning to push; I can take a look at tagging by torch version and roll this change in.
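As a rough illustration of the variant-tag idea, one Dockerfile could take a build arg that selects the PyTorch CUDA flavor, producing a default (cu129) tag and a legacy-GPU (cu126) tag from the same file. All names, base images, and tags below are hypothetical, not the project's actual build setup:

```dockerfile
# Hypothetical sketch, not this repo's actual Dockerfile: a build arg
# selects the PyTorch CUDA flavor per image tag.
FROM nvidia/cuda:12.6.2-runtime-ubuntu22.04

# Assumes pip is needed on top of the runtime base image.
RUN apt-get update && apt-get install -y --no-install-recommends python3-pip

ARG CUDA_FLAVOR=cu129
RUN pip install torch==2.8.0+${CUDA_FLAVOR} \
    --index-url https://download.pytorch.org/whl/${CUDA_FLAVOR}
```

A legacy tag could then be built with something like `docker build --build-arg CUDA_FLAVOR=cu126 -t kokoro-fastapi-gpu:latest-cu126 .`, while the default build keeps the newest CUDA wheels.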