AniTalker

I got an error

Open · wildboy2arthur opened this issue 7 months ago · 4 comments

Thanks for sharing, this is very interesting. I also want to make a .npy file. I followed your instructions and performed the installation step by step without any mistakes until the last step. My error message is as follows:

0%|          | 0/1 [00:06<?, ?it/s]
Traceback (most recent call last):
  File "D:\AniTalker\talking_face_preprocessing_back\extract_audio_features.py", line 52, in <module>
    main(args)
  File "D:\AniTalker\talking_face_preprocessing_back\extract_audio_features.py", line 34, in main
    outputs = model(input_values, output_hidden_states=True)
  File "C:\Users\Fadawan\.conda\envs\tfpw\lib\site-packages\torch\nn\modules\module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\Fadawan\.conda\envs\tfpw\lib\site-packages\transformers\models\hubert\modeling_hubert.py", line 1074, in forward
    encoder_outputs = self.encoder(
  File "C:\Users\Fadawan\.conda\envs\tfpw\lib\site-packages\torch\nn\modules\module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\Fadawan\.conda\envs\tfpw\lib\site-packages\transformers\models\hubert\modeling_hubert.py", line 800, in forward
    layer_outputs = layer(
  File "C:\Users\Fadawan\.conda\envs\tfpw\lib\site-packages\torch\nn\modules\module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\Fadawan\.conda\envs\tfpw\lib\site-packages\transformers\models\hubert\modeling_hubert.py", line 630, in forward
    hidden_states, attn_weights, _ = self.attention(
  File "C:\Users\Fadawan\.conda\envs\tfpw\lib\site-packages\torch\nn\modules\module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\Fadawan\.conda\envs\tfpw\lib\site-packages\transformers\models\hubert\modeling_hubert.py", line 504, in forward
    attn_weights = nn.functional.softmax(attn_weights, dim=-1)
  File "C:\Users\Fadawan\.conda\envs\tfpw\lib\site-packages\torch\nn\functional.py", line 1818, in softmax
    ret = input.softmax(dim)
RuntimeError: CUDA out of memory. Tried to allocate 8.12 GiB (GPU 0; 12.00 GiB total capacity; 9.68 GiB already allocated; 0 bytes free; 10.74 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
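The allocation fails inside HuBERT's self-attention softmax, which usually means the whole audio clip is being pushed through the encoder in a single forward pass (attention memory grows roughly quadratically with sequence length) and/or gradients are being tracked during inference. Below is a minimal sketch of one common workaround, chunking the audio and running under torch.no_grad(); the checkpoint name, file paths, and 20-second chunk length are placeholders and not values taken from the AniTalker repo:

```python
# Sketch (not the repo's actual extract_audio_features.py): run HuBERT feature
# extraction without autograd and in fixed-size chunks so the attention matrices
# fit on a 12 GiB GPU. Checkpoint, paths, and chunk length are assumptions.
import numpy as np
import torch
import torchaudio
from transformers import Wav2Vec2FeatureExtractor, HubertModel

device = "cuda" if torch.cuda.is_available() else "cpu"
model_name = "facebook/hubert-large-ls960-ft"          # placeholder checkpoint
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_name)
model = HubertModel.from_pretrained(model_name).to(device).eval()

waveform, sr = torchaudio.load("example.wav")                       # placeholder path
waveform = torchaudio.functional.resample(waveform, sr, 16000).mean(dim=0)  # 16 kHz mono

chunk_samples = 16000 * 20                              # ~20 s of audio per forward pass
features = []
with torch.no_grad():                                   # no autograd buffers -> much less VRAM
    for start in range(0, waveform.numel(), chunk_samples):
        chunk = waveform[start:start + chunk_samples]
        inputs = feature_extractor(chunk.numpy(), sampling_rate=16000,
                                   return_tensors="pt").input_values.to(device)
        out = model(inputs, output_hidden_states=True)
        features.append(out.last_hidden_state.squeeze(0).cpu().numpy())

np.save("example_hubert.npy", np.concatenate(features, axis=0))
```

Setting the PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128 environment variable before launching, as the error text itself suggests, can also help when the failure comes from fragmentation rather than raw capacity, but chunking (or running the extraction on CPU) is usually the more reliable fix for long clips.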

wildboy2arthur · Aug 02 '24 06:08