faster-whisper
limit pytorch version to cudnn8 for `pip install`
Note: Version 9+ of nvidia-cudnn-cu12 appears to cause issues due to its reliance on cuDNN 9 (faster-whisper does not currently support cuDNN 9). Ensure your version of the Python package is built for cuDNN 8.
All pytorch>=2.4 builds on conda are now compiled against cuDNN 9, so this PR keeps new users who just ran `pip install faster-whisper` from hitting this error:
```
Performing transcription...
Could not locate cudnn_ops_infer64_8.dll. Please make sure it is in your library path!
```
just after installing.
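As a quick sanity check (not part of this PR), you can inspect which cuDNN wheel is installed in your environment before blaming anything else; `nvidia-cudnn-cu12` is the package name the CUDA 12 wheels use. A minimal sketch (the `cudnn_wheel_major` helper name is mine, for illustration):

```python
from importlib.metadata import version, PackageNotFoundError

def cudnn_wheel_major(pkg="nvidia-cudnn-cu12"):
    """Return the major version of the installed cuDNN wheel, or None if absent."""
    try:
        return int(version(pkg).split(".")[0])
    except PackageNotFoundError:
        return None

major = cudnn_wheel_major()
if major is None:
    print("nvidia-cudnn-cu12 is not installed in this environment")
elif major >= 9:
    print("cuDNN 9 wheel found; faster-whisper currently expects cuDNN 8")
else:
    print("cuDNN 8 wheel found; should be compatible")
```

If this reports a 9.x wheel, you have exactly the mismatch this PR works around.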
I personally don't know why faster-whisper isn't compatible with cuDNN 9+ yet, but if for some reason they don't want to support it, they could add something like this to their code:
1. Add NVIDIA cuDNN to the installation process, e.g.:
```shell
pip install nvidia-cudnn-cu12==8.9.7.29
```
2. In the library's entry point, add the following, which prepends to the path variables (but does not replace them):
- Tip: You can change "CUDA_PATH_V12_1" to another version of CUDA if you want as well.
```python
import logging
import os
import sys
import traceback
from pathlib import Path

def set_cuda_paths():
    try:
        # Point the CUDA-related env vars at the pip-installed nvidia wheels
        venv_base = Path(sys.executable).parent
        nvidia_base_path = venv_base / 'Lib' / 'site-packages' / 'nvidia'
        for env_var in ['CUDA_PATH', 'CUDA_PATH_V12_1', 'PATH']:
            current_path = os.environ.get(env_var, '')
            os.environ[env_var] = os.pathsep.join(filter(None, [str(nvidia_base_path), current_path]))
        logging.info("CUDA paths set successfully")
    except Exception as e:
        logging.error(f"Error setting CUDA paths: {str(e)}")
        logging.debug(traceback.format_exc())
```
3. Take it a step further and add cuBLAS, the CUDA runtime, or whatever else, e.g.:
```shell
pip install nvidia-cuda-runtime-cu12==12.1.105
pip install nvidia-cublas-cu12==12.1.3.1
pip install nvidia-cuda-nvrtc-cu12==12.1.105
pip install [fill in the blank with nvidia library]
```
That way users wouldn't have to worry about installing CUDA/CUDNN globally at all.
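To make the path-handling in step 2 concrete: the `os.pathsep.join(filter(None, ...))` idiom prepends an entry while keeping whatever is already there, and avoids a dangling separator when the variable was unset. A self-contained sketch (the `prepend_path` helper name is mine, not faster-whisper's):

```python
import os

def prepend_path(env_var, new_entry, env):
    """Prepend new_entry to env[env_var], keeping any existing entries intact."""
    current = env.get(env_var, '')
    # filter(None, ...) drops the empty string when the variable was unset,
    # so no leading or trailing path separator is emitted.
    env[env_var] = os.pathsep.join(filter(None, [new_entry, current]))
    return env[env_var]

env = {'PATH': '/usr/bin'}
print(prepend_path('PATH', '/opt/nvidia', env))       # existing value kept after the new entry
print(prepend_path('CUDA_PATH', '/opt/nvidia', env))  # unset variable: no stray separator
```

This is why the snippet above adds to `PATH` rather than clobbering a user's global CUDA install.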
Again, I'm not sure why faster-whisper chooses not to update compatibility - I know it's a hassle - but perhaps this is a more elegant solution that would work regardless of whether the user has CUDA/cuDNN installed globally.
According to https://www.github.com/pytorch/pytorch/issues/100974, `pip install torch` automatically installs CUDA/cuDNN. (In my testing it failed to on Windows, but it may work on Linux.)
So until faster-whisper is compatible with cuDNN 9, this PR should be a decent workaround.
@BBC-Esq faster-whisper is not compatible with cuDNN 9 because ctranslate2 is not. It cannot be made compatible without a custom build of ctranslate2, since ctranslate2 is the core of all the CUDA functionality.
I wish this would be merged as a workaround until https://github.com/OpenNMT/CTranslate2/issues/1780 is fixed.
When you install faster-whisper in a completely new environment, the latest torch gets installed and faster-whisper is unusable because of this bug.
I hope this workaround will be merged for now until CTranslate2 really supports the cuDNN 9 build.
Since https://github.com/OpenNMT/CTranslate2/pull/1803 is merged and the ctranslate2 version is now bumped to 4.5.0,
it seems faster-whisper now needs torch >= 2.4.0.
(I got another bug with torch==2.3.1: #1080)
See this link to a workaround and alternative method of running faster-whisper in general, with the newest ctranslate2==4.5.0 and torch==2.5.0:
https://github.com/SYSTRAN/faster-whisper/issues/1080#issuecomment-2429688038
CTranslate2 just released a new version 5 hours ago (https://github.com/OpenNMT/CTranslate2/releases/tag/v4.5.0) with cuDNN 9 support.
The current latest PyTorch build is [win-64/pytorch-2.5.0-py3.12_cuda12.4_cudnn9_0.tar.bz2], so faster-whisper now works immediately after installing the latest PyTorch,
and there's no need to cap the PyTorch version to match the old cuDNN 8 requirement of ctranslate2.
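For anyone landing here now, a fresh install on that basis might look like the following. Treat this as a sketch, not an officially tested matrix: the versions are the ones mentioned in this thread, and it assumes a CUDA 12 machine.

```shell
# ctranslate2 4.5.0+ links against cuDNN 9, matching recent PyTorch wheels
pip install "ctranslate2>=4.5.0" faster-whisper
# PyTorch 2.5.0 CUDA 12.4 wheels bundle cuDNN 9
pip install torch==2.5.0 --index-url https://download.pytorch.org/whl/cu124
```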
This PR can probably be closed now.