shapeunity
Test custom images with pre-trained model
I have downloaded the pre-trained models and the huge dataset. Running the vectorization & visualization command with the SU3 dataset works. Now, I would like to test some of my own images with the pre-trained model to see how well it works. However, I'm not sure how to do that. How do I need to organize custom images, and which command do I need to run to evaluate the pre-trained model on these images?
I also tried to run the prediction command with the SU3 dataset.
python train.py --eval -d 0 -i default --from logs/pretrained-wireframe/checkpoint_latest.pth.tar logs/pretrained-wireframe/config.yaml
However, I'm getting an error that it cannot use a CUDA device. My older Nvidia graphics card is not compatible with cudatoolkit 10.x, but it looks like the pre-trained checkpoint has the CUDA device baked in somewhere, so loading it does not fall back to the CPU.
Error log
Traceback (most recent call last):
File "train.py", line 204, in
main()
File "train.py", line 124, in main
checkpoint = torch.load(resume_from)
File "/home/brakebein/anaconda3/envs/shapeunity/lib/python3.7/site-packages/torch/serialization.py", line 608, in load
return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
File "/home/brakebein/anaconda3/envs/shapeunity/lib/python3.7/site-packages/torch/serialization.py", line 787, in _legacy_load
result = unpickler.load()
File "/home/brakebein/anaconda3/envs/shapeunity/lib/python3.7/site-packages/torch/serialization.py", line 743, in persistent_load
deserialized_objects[root_key] = restore_location(obj, location)
File "/home/brakebein/anaconda3/envs/shapeunity/lib/python3.7/site-packages/torch/serialization.py", line 175, in default_restore_location
result = fn(storage, location)
File "/home/brakebein/anaconda3/envs/shapeunity/lib/python3.7/site-packages/torch/serialization.py", line 151, in _cuda_deserialize
device = validate_cuda_device(location)
File "/home/brakebein/anaconda3/envs/shapeunity/lib/python3.7/site-packages/torch/serialization.py", line 135, in validate_cuda_device
raise RuntimeError('Attempting to deserialize object on a CUDA '
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.
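The error message itself points at the workaround: pass `map_location` to `torch.load` (in `train.py`, the call at line 124 of the traceback) so every storage in the checkpoint is deserialized onto the CPU. A minimal self-contained sketch of the pattern (the dummy checkpoint here only stands in for `checkpoint_latest.pth.tar`):

```python
import os
import tempfile

import torch

# Stand-in for the real checkpoint file; in the actual fix you would
# load logs/pretrained-wireframe/checkpoint_latest.pth.tar instead.
ckpt_path = os.path.join(tempfile.mkdtemp(), "checkpoint_latest.pth.tar")
torch.save({"step": 600, "weight": torch.zeros(3)}, ckpt_path)

# The key change: map_location forces all tensors in the checkpoint
# onto the CPU, regardless of the device they were saved from.
checkpoint = torch.load(ckpt_path, map_location=torch.device("cpu"))
print(checkpoint["step"])
```

In `train.py` this would mean changing `torch.load(resume_from)` to `torch.load(resume_from, map_location=torch.device("cpu"))` (or making it conditional on `torch.cuda.is_available()`).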
So, my next thought was that I have to train the model myself on the CPU, which means running the training command:
python ./train.py -d 0,1,2,3 --identifier baseline config/hourglass.yaml
But there is the next error, which I cannot resolve. I tried it several times, but it always errors at the same image or data chunk: in the folder created in logs, 3 npz files have been generated, and in logs/.../viz/000000600/, 000002_img.jpg has been generated but not the other files (see screenshot).

Error log
progress | sum | jmap | jdir | lmap | joff | ldir | dpth | jdep | speed
Running validation... Traceback (most recent call last):
File "./train.py", line 204, in
main()
File "./train.py", line 200, in main
trainer.train()
File "/media/brakebein/Daten4TB/projects/shapeunity/wireframe/trainer.py", line 260, in train
self.train_epoch()
File "/media/brakebein/Daten4TB/projects/shapeunity/wireframe/trainer.py", line 200, in train_epoch
self.validate()
File "/media/brakebein/Daten4TB/projects/shapeunity/wireframe/trainer.py", line 131, in validate
self._plot_samples(i, index, H, target, f"{viz}/{index:06}")
File "/media/brakebein/Daten4TB/projects/shapeunity/wireframe/trainer.py", line 225, in _plot_samples
mask_target = target["jmap"][i].cpu()
IndexError: index 2 is out of bounds for dimension 0 with size 2
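The traceback shows `_plot_samples` asking for sample index 2 while `target["jmap"]` only holds 2 samples. My guess (an assumption, not a confirmed diagnosis) is that the number of samples the validation code tries to plot is tied to the configured per-GPU batch size, which no longer matches when running with fewer devices. A defensive workaround, illustrated on a stand-in batch, is to clamp the plotting loop to the actual batch size:

```python
import torch

# Hypothetical stand-in for one validation batch with a batch dimension
# of 2, mirroring the tensor the traceback complains about.
target = {"jmap": torch.zeros(2, 64, 64)}

# The code tries to plot a fixed number of samples (index 2 fails above).
n_plot = 3

# Workaround sketch: never index past the real batch size.
batch_size = target["jmap"].shape[0]
plotted = []
for i in range(min(n_plot, batch_size)):
    plotted.append(target["jmap"][i].cpu())  # no IndexError now
print(len(plotted))
```

The equivalent change would go into the loop in `wireframe/trainer.py` that calls `_plot_samples`; `n_plot` is a hypothetical name for whatever constant that loop uses.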
Of course, it would be better to avoid the training process altogether. I just want to test some custom images with your pre-trained model, but it doesn't seem to be trivial. Any help is appreciated.
I still couldn't figure out how to test my own images with the pre-trained model. I would like to input a single image and see what wireframe it predicts.
Any hints or help would be appreciated.
Sorry for missing the issue. Currently, testing third-party images is not supported. If you really want to test it, you need to modify the evaluation script to add a new dataset that includes your images, and zero-fill the ground-truth entries that are not available.
Thanks for your response. So I guess you won't be implementing support for testing third-party images anytime soon.
So, I will try to modify the script. To zero-fill the ground-truth entries: do I just need to follow the format of the json files that accompany each image, or do I need to take care of the npz files as well?
I think you just need to fill in the image; all other entries can be zero. You can either modify the dataloader or provide a new json/npz. If the image intrinsics are known, you can also put them into https://github.com/zhou13/shapeunity/blob/f04615d6f688fa7eeec47fba344046ee24d67b99/eval_2d3d_metric.py#L168
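To make the zero-filling concrete, here is a minimal sketch of writing an empty annotation pair for one custom image. The field names (`junctions`, `lines`) and the npz key/shape are illustrative assumptions only; check the real SU3 json/npz files for the keys and shapes the dataloader actually expects:

```python
import json

import numpy as np

# Hypothetical annotation record for a custom image: the filename is
# real, every ground-truth field is left empty/zero.
record = {
    "filename": "0000.png",
    "junctions": [],  # no annotated junctions (assumed key name)
    "lines": [],      # no annotated lines (assumed key name)
}
with open("0000.json", "w") as f:
    json.dump(record, f)

# Companion npz filled with zero arrays; the key and shape here are
# placeholders for whatever the dataloader reads.
np.savez("0000.npz", jmap=np.zeros((1, 128, 128), dtype=np.float32))
```

With files like these in place, only the image content drives the prediction; the zeroed entries just keep the dataloader from crashing on missing ground truth.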
Hi @Brakebein may I know if you managed to get the testing on custom images to work?
Not yet. But that's mainly because I was working on a different part in my project for the last 2 weeks. If this is done, I will continue with this issue (maybe in 2 weeks). If there is any progress in this matter until then, I would be interested as well.
@Brakebein Any updates on testing on custom images?
No, sorry. I finally tested their 2D approach (https://github.com/zhou13/lcnn) which was actually sufficient for me.
Hello, I managed to run the test with my own data. The approach: first, create a data/SU1/001 folder and put your custom 512*512 png images in it, renamed following the 0000/0001/0002 scheme; second, copy the other correspondingly named non-png files from the authors' provided data (SU3) into that folder; finally, just run the command "python train.py --eval -d 0 -i default --from logs/pretrained-wireframe/checkpoint_latest.pth.tar logs/pretrained-wireframe/config.yaml" to get the results.
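The first step above (collecting and renaming the custom images) can be sketched as follows; `my_images` is a hypothetical source folder, and the `data/SU1/001` path is taken from the comment:

```python
import shutil
from pathlib import Path

# Hypothetical folder holding your own 512x512 png images.
src_dir = Path("my_images")

# New dataset folder, as described in the comment above.
dst_dir = Path("data/SU1/001")
dst_dir.mkdir(parents=True, exist_ok=True)

# Copy the images in under the 0000.png, 0001.png, ... naming scheme.
for i, img in enumerate(sorted(src_dir.glob("*.png"))):
    shutil.copy(img, dst_dir / f"{i:04d}.png")
```

After that, the remaining steps are manual: copy the matching non-png files (json/npz) for the same indices from the SU3 data into data/SU1/001, then run the eval command quoted above.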