meshed-memory-transformer
Problems while running test.py
I tried to test the model, but I ran into the problem below. Does anybody know how to solve it? I used the environment provided here.
python test.py --features_path data/coco_detections.hdf5 --annotation_folder annotations
```
Meshed-Memory Transformer Evaluation
Evaluation:   0%|          | 0/500 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "test.py", line 77, in
```
Hi, I wonder how long training takes. Is it fast?
When running test.py, in which folder should the pretrained model "mesh_memory_transformer.pth" be placed?
The reason may be that your torch version doesn't match your CUDA version.
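As a quick check, here is a minimal sketch for printing which PyTorch build is installed and whether it can actually see CUDA (nothing here is specific to this repo):

```python
import torch

# Print the installed PyTorch build and the CUDA toolkit it was compiled against.
print("torch version:", torch.__version__)
print("built for CUDA:", torch.version.cuda)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))
```

If `torch.version.cuda` doesn't match the CUDA runtime installed on the machine, reinstalling torch for the right CUDA version usually fixes errors of this kind.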
> When running test.py, in which folder should the pretrained model "mesh_memory_transformer.pth" be placed?
It should be in the root directory, at the same level as test.py.
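For reference, a minimal sketch for verifying the checkpoint location before running the evaluation (the filename follows this thread; your copy may be named differently):

```python
import os
import torch

# Hypothetical check: the checkpoint is expected next to test.py (the repo root).
ckpt_path = "mesh_memory_transformer.pth"  # filename as written in this thread
if not os.path.isfile(ckpt_path):
    raise FileNotFoundError(f"checkpoint not found at {os.path.abspath(ckpt_path)}")

# Load on CPU first so a CUDA/driver mismatch doesn't mask a bad path.
checkpoint = torch.load(ckpt_path, map_location="cpu")
print(type(checkpoint))
```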
Hi. When I run test.py to evaluate, the model just generates <unk> without any other token. Has anyone else had this problem?
> Hi. When I run test.py to evaluate, the model just generates <unk> without any other token. Has anyone else had this problem?
No, I can generate the captions fine. It seems like you failed to load the vocab file into the M2 model.
Thanks for your reply. But I load the 'vocab.pkl' file from the original repo, in which index 0 is '<unk>'. When I run test.py, the model first generates many 0 values, and these are then translated into the '<unk>' token.
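As a sanity check, here is a minimal sketch for inspecting the pickled vocab (the `itos`/`stoi` attribute names assume a torchtext-style object, which this thread does not confirm; adjust them if your vocab object differs):

```python
import pickle

# Load the pickled vocabulary shipped with the repo.
with open("vocab.pkl", "rb") as f:
    vocab = pickle.load(f)

# Assumed layout: a torchtext-style object with itos/stoi mappings.
if hasattr(vocab, "itos"):
    print("vocab size:", len(vocab.itos))
    print("first tokens:", vocab.itos[:10])  # index 0 should be '<unk>'
else:
    print("unexpected vocab type:", type(vocab))
```

If every predicted index is 0, the model weights probably weren't loaded at all, so the untrained output decodes to '<unk>' everywhere.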
> Thanks for your reply. But I load the 'vocab.pkl' file from the original repo, in which index 0 is '<unk>'. When I run test.py, the model first generates many 0 values, and these are then translated into the '<unk>' token.
I met the same problem during testing. Besides, I had already loaded the .pth file, the COCO detections, and the .pkl file. I am still confused by this weird behavior.