2 comments of Greyrabbit
### Issue: Out of Memory Error with Qwen2VL Model on RTX 3060 (12GB VRAM)

#### Environment:
- **GPU**: NVIDIA RTX 3060 (12GB VRAM)
- **PyTorch Version**: 2.0 (CUDA-compatible)
- **Transformers...
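A back-of-the-envelope memory estimate shows why a 12GB card runs out of memory here. Assuming a 7B-parameter Qwen2-VL checkpoint (the exact model size is not stated in the issue), fp16 weights alone exceed the card's VRAM, before activations and the KV cache are counted; 4-bit quantization (e.g. via `BitsAndBytesConfig` in Transformers) is one common workaround:

```python
# Rough VRAM estimate for a 7B-parameter model.
# The 7B figure is an assumption; the issue does not state the checkpoint size.
params = 7e9
weights_fp16_gb = params * 2 / 1e9   # fp16: 2 bytes per parameter
weights_int4_gb = params * 0.5 / 1e9 # 4-bit: 0.5 bytes per parameter

print(weights_fp16_gb)  # 14.0 -> already above 12GB, before activations
print(weights_int4_gb)  # 3.5  -> leaves headroom for activations/KV cache
```

This is only a weights-size sketch; real usage is higher because activations, the vision encoder's image tokens, and the KV cache also occupy VRAM.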
Same here. I tried generating config.json and model.safetensors.index.json, but it still doesn't work. If this bug gets fixed, we won't need separate model files for Invoke and ComfyUI; downloading multiple...
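For anyone hand-writing the index file the comment mentions: a `model.safetensors.index.json` for a sharded Hugging Face checkpoint maps each tensor name to the shard file that contains it. A minimal sketch of the expected shape (the shard filenames and tensor names below are illustrative, not from the actual model):

```python
import json

# Minimal shape of model.safetensors.index.json for a sharded checkpoint.
# "total_size" is the total byte count across all shards; "weight_map"
# maps each tensor name to the shard file holding it.
index = {
    "metadata": {"total_size": 0},  # placeholder; fill in the real byte total
    "weight_map": {
        "model.embed_tokens.weight": "model-00001-of-00002.safetensors",
        "lm_head.weight": "model-00002-of-00002.safetensors",
    },
}

index_json = json.dumps(index, indent=2)
print(index_json)
```

A hand-generated index only helps if every tensor the loader expects is listed and the shard files actually contain those tensors; a mismatch produces exactly the kind of load failure described above.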