
gpu

Open MadeByKit opened this issue 1 year ago • 1 comment

```
(a2p_env) C:\Users\kit\audio2photoreal>python -m train.train_diffusion --save_dir checkpoints/diffusion/c1_face_test --data_root ./dataset/RLW104/ --batch_size 4 --dataset social --data_format face --layers 8 --heads 8 --timestep_respacing '' --max_seq_length 600
using 0 gpus
Traceback (most recent call last):
  File "C:\Users\kit\miniconda3\envs\a2p_env\lib\runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Users\kit\miniconda3\envs\a2p_env\lib\runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "C:\Users\kit\audio2photoreal\train\train_diffusion.py", line 83, in <module>
    main(rank=0, world_size=1)
  File "C:\Users\kit\audio2photoreal\train\train_diffusion.py", line 36, in main
    raise FileExistsError("save_dir [{}] already exists.".format(args.save_dir))
FileExistsError: save_dir [checkpoints/diffusion/c1_face_test] already exists.
```
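This first failure is unrelated to the GPU: the script refuses to reuse an existing `save_dir`. A minimal sketch of clearing the old run directory before rerunning (the path is taken from the command above; adapt as needed, and note this discards any previous checkpoints there):

```python
import os
import shutil

# train.train_diffusion raises FileExistsError when save_dir already exists,
# so remove (or rename) the old run directory before starting a new run.
save_dir = "checkpoints/diffusion/c1_face_test"
if os.path.isdir(save_dir):
    shutil.rmtree(save_dir)  # discards previous checkpoints in this directory
os.makedirs(save_dir, exist_ok=False)
```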

```
(a2p_env) C:\Users\kit\audio2photoreal>python -m train.train_diffusion --save_dir checkpoints/diffusion/c1_face_test --data_root ./dataset/RLW104/ --batch_size 4 --dataset social --data_format face --layers 8 --heads 8 --timestep_respacing '' --max_seq_length 600
using 0 gpus
creating data loader...
[dataset.py] training face only model
['[dataset.py] sequences of 600']
C:\Users\kit\miniconda3\envs\a2p_env\lib\site-packages\numpy\core\fromnumeric.py:43: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray.
  result = getattr(asarray(obj), method)(*args, **kwds)
C:\Users\kit\miniconda3\envs\a2p_env\lib\site-packages\numpy\core\fromnumeric.py:43: FutureWarning: The input object of type 'Tensor' is an array-like implementing one of the corresponding protocols (__array__, __array_interface__ or __array_struct__); but not a sequence (or 0-D). In the future, this object will be coerced as if it was first converted using np.array(obj). To retain the old behaviour, you have to either modify the type 'Tensor', or assign to an empty array created with np.empty(correct_shape, dtype=object).
  result = getattr(asarray(obj), method)(*args, **kwds)
[dataset.py] loading from... ./dataset/RLW104/data_stats.pth
[dataset.py] train | 18 sequences ((8989, 256)) | total len 160523
creating logger...
creating model and diffusion...
Traceback (most recent call last):
  File "C:\Users\kit\miniconda3\envs\a2p_env\lib\runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Users\kit\miniconda3\envs\a2p_env\lib\runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "C:\Users\kit\audio2photoreal\train\train_diffusion.py", line 83, in <module>
    main(rank=0, world_size=1)
  File "C:\Users\kit\audio2photoreal\train\train_diffusion.py", line 54, in main
    model, diffusion = create_model_and_diffusion(args, split_type="train")
  File "C:\Users\kit\audio2photoreal\utils\model_util.py", line 42, in create_model_and_diffusion
    model = FiLMTransformer(**get_model_args(args, split_type=split_type)).to(
  File "C:\Users\kit\audio2photoreal\model\diffusion.py", line 157, in __init__
    self.setup_lip_models()
  File "C:\Users\kit\audio2photoreal\model\diffusion.py", line 276, in setup_lip_models
    cp = torch.load(cp_path, map_location=torch.device(self.device))
  File "C:\Users\kit\miniconda3\envs\a2p_env\lib\site-packages\torch\serialization.py", line 809, in load
    return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
  File "C:\Users\kit\miniconda3\envs\a2p_env\lib\site-packages\torch\serialization.py", line 1172, in _load
    result = unpickler.load()
  File "C:\Users\kit\miniconda3\envs\a2p_env\lib\site-packages\torch\serialization.py", line 1142, in persistent_load
    typed_storage = load_tensor(dtype, nbytes, key, _maybe_decode_ascii(location))
  File "C:\Users\kit\miniconda3\envs\a2p_env\lib\site-packages\torch\serialization.py", line 1116, in load_tensor
    wrap_storage=restore_location(storage, location),
  File "C:\Users\kit\miniconda3\envs\a2p_env\lib\site-packages\torch\serialization.py", line 1086, in restore_location
    return default_restore_location(storage, str(map_location))
  File "C:\Users\kit\miniconda3\envs\a2p_env\lib\site-packages\torch\serialization.py", line 217, in default_restore_location
    result = fn(storage, location)
  File "C:\Users\kit\miniconda3\envs\a2p_env\lib\site-packages\torch\serialization.py", line 182, in _cuda_deserialize
    device = validate_cuda_device(location)
  File "C:\Users\kit\miniconda3\envs\a2p_env\lib\site-packages\torch\serialization.py", line 166, in validate_cuda_device
    raise RuntimeError('Attempting to deserialize object on a CUDA '
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.
```

MadeByKit avatar Jan 10 '24 10:01 MadeByKit

Hmm, it seems that you are attempting to load the model onto a device without GPU support (CPU only). I see that when loading the model, the `map_location` is already specified: `cp = torch.load(cp_path, map_location=torch.device(self.device))`. Could you double-check whether `self.device == 'cpu'` in your case, please?
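For reference, a minimal sketch of a CPU-safe load (the checkpoint here is a tiny stand-in written to a temp file, not the repo's actual lip-model checkpoint):

```python
import os
import tempfile

import torch

# Resolve the device up front; fall back to CPU when CUDA is unavailable.
device = "cuda" if torch.cuda.is_available() else "cpu"

# Stand-in checkpoint: save a tiny state dict, then load it with map_location
# so storages serialized on a CUDA device are remapped instead of raising.
path = os.path.join(tempfile.mkdtemp(), "ckpt.pth")
torch.save({"w": torch.zeros(3)}, path)
cp = torch.load(path, map_location=torch.device(device))
print(cp["w"].device.type)
```

On a CPU-only machine this prints `cpu`; the key point is that `map_location` must actually resolve to `'cpu'` when `torch.cuda.is_available()` is `False`.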

evonneng avatar Jan 10 '24 17:01 evonneng

Closing for now due to inactivity, but please feel free to reopen as needed!

evonneng avatar Jan 17 '24 18:01 evonneng