MINTIME-Multi-Identity-size-iNvariant-TIMEsformer-for-Video-Deepfake-Detection

Error when predicting a video from examples

robingong opened this issue · 1 comment

Thanks for your kindness! When I run:

python3 predict.py --video_path ./examples/fake_1_face_0.mp4 --model_weights ./MINTIME_XC_Model_checkpoint30 --extractor_weights ./MINTIME_XC_Extractor_checkpoint30 --config config/size_invariant_timesformer.yaml

the following error occurs:

Custom features extractor weights loaded.
/opt/data2/p/fake_face/MINTIME/predict.py:352: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at ../torch/csrc/utils/tensor_new.cpp:210.)
  return torch.tensor([sequence]).float(), torch.tensor([size_embeddings]).int(), torch.tensor([mask]).bool(), torch.tensor([identities_mask]).bool(), torch.tensor([positions]), tokens_per_identity
Traceback (most recent call last):
  File "/opt/data2/p/fake_face/MINTIME/predict.py", line 555, in <module>
    pred, identity_attentions, aggregated_attentions, identities, frames_per_identity = predict(opt.video_path, clustered_faces, config, opt)
  File "/opt/data2/p/fake_face/MINTIME/predict.py", line 406, in predict
    test_pred, attentions = model(features, mask=mask, size_embedding=size_embeddings, identities_mask=identities_mask, positions=positions)
  File "/opt/data2/condaEnvs/MINTIME/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/opt/data2/condaEnvs/MINTIME/lib/python3.9/site-packages/torch/nn/parallel/data_parallel.py", line 168, in forward
    outputs = self.parallel_apply(replicas, inputs, kwargs)
  File "/opt/data2/condaEnvs/MINTIME/lib/python3.9/site-packages/torch/nn/parallel/data_parallel.py", line 178, in parallel_apply
    return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
  File "/opt/data2/condaEnvs/MINTIME/lib/python3.9/site-packages/torch/nn/parallel/parallel_apply.py", line 86, in parallel_apply
    output.reraise()
  File "/opt/data2/condaEnvs/MINTIME/lib/python3.9/site-packages/torch/_utils.py", line 457, in reraise
    raise exception
RuntimeError: Caught RuntimeError in replica 0 on device 0.
Original Traceback (most recent call last):
  File "/opt/data2/condaEnvs/MINTIME/lib/python3.9/site-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker
    output = module(*input, **kwargs)
  File "/opt/data2/condaEnvs/MINTIME/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/opt/data2/p/fake_face/MINTIME/models/size_invariant_timesformer.py", line 228, in forward
    tokens = self.to_patch_embedding(x)  # B x 877 x dim
  File "/opt2/data2/condaEnvs/MINTIME/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/opt/data2/condaEnvs/MINTIME/lib/python3.9/site-packages/torch/nn/modules/linear.py", line 103, in forward
    return F.linear(input, self.weight, self.bias)
RuntimeError: mat1 and mat2 shapes cannot be multiplied (784x1280 and 2048x512)

robingong · Mar 21 '24
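For context, the final RuntimeError is a plain shape mismatch in the model's first projection layer (to_patch_embedding): the second operand in the message (2048x512) corresponds to a Linear expecting 2048-dimensional face features, while the features actually reaching it are 1280-dimensional (mat1 is 784x1280), which suggests the loaded extractor/config combination does not match the MINTIME_XC checkpoint. A minimal sketch that reproduces the same error, assuming an ordinary nn.Linear as a stand-in for the real patch-embedding module (the shapes are taken from the message, everything else is illustrative):

```python
import torch
import torch.nn as nn

# Stand-in for the model's to_patch_embedding: a Linear that maps 2048-dim
# extractor features to 512-dim tokens (this matches "mat2" = 2048x512).
to_patch_embedding = nn.Linear(in_features=2048, out_features=512)

# Features actually fed to the layer are 1280-dimensional ("mat1" = 784x1280),
# i.e. a different extractor output size than the checkpoint expects.
features = torch.randn(784, 1280)

try:
    to_patch_embedding(features)
except RuntimeError as e:
    print(e)  # mat1 and mat2 shapes cannot be multiplied (784x1280 and 2048x512)
```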

Hi, follow this: https://github.com/davide-coccomini/MINTIME-Multi-Identity-size-iNvariant-TIMEsformer-for-Video-Deepfake-Detection/issues/4

davide-coccomini · Jul 17 '24
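Side note: the UserWarning at predict.py:352 is unrelated to the shape error and only affects speed. PyTorch's own suggestion applies, i.e. collapse the list of numpy arrays into a single ndarray before building the tensor. A small illustration with made-up shapes (the real sequence and the other variables in predict.py differ):

```python
import numpy as np
import torch

# Dummy stand-in for `sequence`: a list of equally shaped numpy arrays
# (placeholder shapes, not the ones predict.py actually produces).
sequence = [np.zeros((8, 2048), dtype=np.float32) for _ in range(4)]

slow = torch.tensor([sequence]).float()            # triggers the UserWarning
fast = torch.tensor(np.array([sequence])).float()  # same values, no warning

assert torch.equal(slow, fast)
```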