Model loading fails silently on Python 3.12 with PyTorch Nightly
Hello, the node is failing to load the joycaption-beta-one model on my system.
My Environment:
- OS: Linux
- GPU: NVIDIA GeForce RTX 5070 Ti
- Python Version: 3.12.3
- PyTorch Version: Latest Nightly Build (`torch-2.9.0+cu128` or similar)
- Transformers Version: Latest (`4.57.1` or similar)
The Problem: When I queue a prompt, the terminal shows "Loading checkpoint shards: 0%" and then the prompt immediately finishes in about 1.5 seconds. The node in the UI shows the error: "Error loading model: 'JC_Models' object has no attribute 'model'".
Important Details:
- This happens in both normal GPU mode and in CPU-only mode (`--cpu`).
- All my model files and paths are correct. The model shards are fully downloaded.
- My Python environment has been fully updated to resolve all other issues (`setuptools`, etc.).
This seems to be a code-level incompatibility between the node and the latest versions of the core AI libraries required for modern hardware.
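In case it helps narrow things down, here is a minimal, self-contained sketch of how this symptom can arise. The loader function and method names below are hypothetical, not the node's actual code: if the checkpoint-shard load raises under the newer torch/transformers stack and the exception is only logged, the holder object never gets a `model` attribute, and the next access produces exactly the error shown in the UI.

```python
# Hypothetical sketch of the suspected failure mode; this is NOT the node's code.

def load_checkpoint_shards(path: str):
    """Stand-in for the real transformers loader; here it simply fails,
    as the real call appears to do under torch nightly + Python 3.12."""
    raise RuntimeError(f"could not load checkpoint shards from {path}")


class JC_Models:
    """Illustrative model holder; only the class name matches the error message."""

    def load(self, model_path: str) -> None:
        try:
            self.model = load_checkpoint_shards(model_path)
        except Exception as exc:
            # If the failure is only logged here, `self.model` is never assigned.
            print(f"load failed: {exc}")

    def caption(self, prompt: str) -> str:
        # The first use of the missing attribute reproduces the UI error.
        return self.model.generate(prompt)


holder = JC_Models()
holder.load("models/joycaption-beta-one")
try:
    holder.caption("Describe the image.")
except AttributeError as err:
    print(err)  # 'JC_Models' object has no attribute 'model'
```

If the node's loader is roughly shaped like this, re-raising (or surfacing) the original exception would at least expose the real traceback instead of the downstream AttributeError.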
Thanks for the detailed report! This seems related to Python 3.12 and the latest PyTorch nightly build. I haven’t tested JoyCaption with that combination yet.
Could you please try with a stable PyTorch release (e.g. 2.8.0) and see if the issue still happens? If it works there, I’ll add compatibility updates for 3.12 + nightly in the next patch.
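If it helps, running something like the snippet below from the same Python environment ComfyUI uses will confirm which builds are actually active; the exact printed strings are what I'd like to see in the report.

```python
import sys
import torch
import transformers

# Report the interpreter and library builds actually loaded by this environment.
print("python      :", sys.version.split()[0])
print("torch       :", torch.__version__)       # e.g. 2.8.0+cu128 (stable) vs 2.9.0.dev... (nightly)
print("cuda (torch):", torch.version.cuda)      # None in a CPU-only build
print("transformers:", transformers.__version__)
```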
The same thing happens to me on PyTorch 2.9 stable, cu130.
Testing was done on CUDA 12.8 and PyTorch 2.8 with Python 3.12; execution is smooth there.