
Fixed error in install.py on Mac using MPS

JingchengYang4 opened this issue 9 months ago · 0 comments

100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 416M/416M [01:00<00:00, 7.22MB/s]
Loading /Users/user/Projects/easywav2lip/checkpoints/Wav2Lip_GAN.pth
Traceback (most recent call last):
  File "/Users/user/Projects/easywav2lip/install.py", line 58, in <module>
    model = load_model(os.path.join(working_directory, "checkpoints", "Wav2Lip_GAN.pth"))
  File "/Users/user/Projects/easywav2lip/easy_functions.py", line 102, in load_model
    checkpoint = _load(path)
  File "/Users/user/Projects/easywav2lip/easy_functions.py", line 83, in _load
    checkpoint = torch.load(checkpoint_path)
  File "/Users/user/miniforge3/envs/wav2lip/lib/python3.9/site-packages/torch/serialization.py", line 1028, in load
    return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
  File "/Users/user/miniforge3/envs/wav2lip/lib/python3.9/site-packages/torch/serialization.py", line 1256, in _legacy_load
    result = unpickler.load()
  File "/Users/user/miniforge3/envs/wav2lip/lib/python3.9/site-packages/torch/serialization.py", line 1193, in persistent_load
    wrap_storage=restore_location(obj, location),
  File "/Users/user/miniforge3/envs/wav2lip/lib/python3.9/site-packages/torch/serialization.py", line 381, in default_restore_location
    result = fn(storage, location)
  File "/Users/user/miniforge3/envs/wav2lip/lib/python3.9/site-packages/torch/serialization.py", line 274, in _cuda_deserialize
    device = validate_cuda_device(location)
  File "/Users/user/miniforge3/envs/wav2lip/lib/python3.9/site-packages/torch/serialization.py", line 258, in validate_cuda_device
    raise RuntimeError('Attempting to deserialize object on a CUDA '
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.

It seems torch.load tries to restore the checkpoint onto the CUDA device it was saved from, which fails on a Mac where only MPS or CPU is available; passing an explicit map_location fixes it.
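The issue doesn't include the patch itself, but below is a minimal sketch of the kind of change that resolves this error, assuming the `_load` helper in easy_functions.py shown in the traceback is the place being patched (the device check and fallback are my own illustration, not the author's exact fix):

```python
import torch

def _load(checkpoint_path):
    # The checkpoint was saved from a CUDA device, so torch.load tries to
    # restore its storages onto CUDA by default. When CUDA isn't available
    # (e.g. on a Mac), remap the storages with map_location; 'cpu' is always
    # safe, as the error message itself suggests.
    if torch.cuda.is_available():
        return torch.load(checkpoint_path)
    return torch.load(checkpoint_path, map_location=torch.device("cpu"))
```

Once the model is constructed from the loaded weights, it can still be moved to MPS afterwards (e.g. `model.to("mps")`); the important part is that deserialization itself no longer assumes CUDA is present.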

JingchengYang4 · May 17 '24