Trained model
Hi,
I read your paper with great excitement. I would like to ask whether there is any chance you could provide the trained model for MVExoNet. I'm a student working on action recognition, and your MVExoNet seems very promising for detecting hand keypoints from multiple exocentric views. Of course, the results after refinement with SVEgoNet look even better, but most likely I won't be able to use those egocentric views. I tried to train the model myself with the code and data you provided here, but ran into quite a few errors. If it's not possible to provide the trained model, maybe you could add a dependency list for the Python packages? The error messages I get suggest there might be a problem with the dependencies.
Thank you very much in advance!
Thank you for your interest. The current code only supports single-view network training on ego data for now. The dependencies follow the original InterHand code; please find the details in that repo. We will prepare the multi-view exo images and MVExoNet soon.
Thank you very much for your reply, I really appreciate it. I don't mean to be impolite, but out of curiosity: could you clarify when a release might be expected? Thanks again!
hi @CrazyMD, thanks for your interest and patience. I am in the process of preparing the exo images + MVExoNet, targeting a release this week or early next week, so that people get a chance to experiment before the CVPR deadline.
A bit more detail: we will have to release the exo images in the form of videos, since individual images take up too much space. Along with them, we will release a video-based dataloader implementation.
hi @CrazyMD, we have just released the exo videos (feel free to request access), and visualization/video_visualizer.py, which shows how to load frames from the videos. Unfortunately, releasing the model and the multi-view dataloader for MVExoNet will take a bit longer, but in theory you can already get started with the available info/data.
Please let me know if these are useful to you, and/or what things are missing from your perspective.
Hi @kunhe, thank you very much for releasing the videos and the visualizer. I really appreciate the effort! :)
I briefly tried the new version along with the previously released material earlier today, but unfortunately ran into runtime errors (e.g. the ResNet weights couldn't be loaded; I think that was in src/common/nets/resnet.py, at torch.utils.model_zoo.load_url(model_urls[self.name]), but I'm writing that from memory). This may be a problem with a newer version of torchvision on my machine. At some point they changed the API: https://github.com/pytorch/vision/blob/release/2.0/torchvision/models/resnet.py#L356 But I haven't tried that yet (I will when I'm back at the office). Generally speaking, I have the feeling that there may be problems with differing versions of the Python packages used.
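For reference, the API change described above can often be bridged with a small compatibility shim. This is only a sketch, not the toolkit's actual fix: load_imagenet_resnet_weights is a hypothetical helper name, and the exact torchvision version at which model_urls disappeared is an assumption; newer torchvision uses weight enums instead of a module-level URL table.

```python
# Sketch of a version-tolerant weight loader (hypothetical helper, not part
# of the toolkit). Older torchvision exposes `model_urls`; newer releases
# removed it in favour of weight enums looked up via get_model_weights().
def load_imagenet_resnet_weights(name="resnet50"):
    try:
        # old API (as used in src/common/nets/resnet.py): URL table + model zoo
        from torchvision.models.resnet import model_urls
        from torch.utils import model_zoo
        return model_zoo.load_url(model_urls[name])
    except ImportError:
        # newer API: resolve the default weight enum by model name
        from torchvision.models import get_model_weights
        return get_model_weights(name).DEFAULT.get_state_dict(progress=False)
```

One could then call load_imagenet_resnet_weights(self.name) in place of the failing load_url line; either branch returns an ImageNet state dict.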
Due to some other deadlines, I can only look at the project again on Monday. I'm really looking forward to experimenting further and will give you feedback as soon as possible.
Thanks again,
Veltin
Hi @CrazyMD, I've tested the code with these versions: torch 1.11.0, torchvision 0.12.0. Please let me know if you have further problems.
Hi @kunhe & @tkhkaeio, thanks for the hint about the versions. I've installed those packages, along with cv2, numpy, pycocotools, etc., in a fresh conda env on Python 3.10.12 / Ubuntu 22, set the PYTHONPATH to the toolkit directory, and indeed got rid of the error.
As I understand it, the first thing to do is to run dataset.py. This is where I get the next error. I haven't added the exocentric images yet.
(AssemblyHands) gieseke@RS249:~/assemblyhands-toolkit$ /home/gieseke/miniconda3/envs/AssemblyHands/bin/python /home/gieseke/assemblyhands-toolkit/src/dataset/AssemblyHands-Ego/dataset.py
data_dir: /home/gieseke/assemblyhands-toolkit/data/assemblyhands
output_dir: /home/gieseke/assemblyhands-toolkit/output/assemblyhands-ego
dataset_dir: /home/gieseke/assemblyhands-toolkit/src/dataset/AssemblyHands-Ego
Using GPU: 0
Configuration:
input_img_shape: (256, 256)
output_hm_shape: (64, 64, 64)
sigma: 2.5
bbox_3d_size: 400
bbox_3d_size_root: 400
output_root_hm_shape: 64
resnet_type: 50
dataset: AssemblyHands-Ego
lr_dec_epoch: [15, 17]
end_epoch: 20
lr: 0.0001
lr_dec_factor: 10
train_batch_size: 8
test_batch_size: 8
trans_test: gt
bbox_scale: 1.75
cur_dir: /home/gieseke/assemblyhands-toolkit/src/main
root_dir: /home/gieseke/assemblyhands-toolkit
src_dir: /home/gieseke/assemblyhands-toolkit/src
data_dir: /home/gieseke/assemblyhands-toolkit/data/assemblyhands
dataset_dir: /home/gieseke/assemblyhands-toolkit/src/dataset
output_dir: /home/gieseke/assemblyhands-toolkit/output/assemblyhands-ego
model_dir: /home/gieseke/assemblyhands-toolkit/output/assemblyhands-ego/model_dump
vis_dir: /home/gieseke/assemblyhands-toolkit/output/assemblyhands-ego/vis
log_dir: /home/gieseke/assemblyhands-toolkit/output/assemblyhands-ego/log
result_dir: /home/gieseke/assemblyhands-toolkit/output/assemblyhands-ego/result
num_thread: 4
gpu_ids: 0
num_gpus: 1
continue_train: False
print_freq: 500
11-13 13:34:20 Creating val dataset...
Load annotation from data/assemblyhands/annotations, mode: val
loading annotations into memory...
Done (t=4.21s)
creating index...
index created!
Get bbox and root depth from groundtruth annotation
DEBUG MODE: Loaded 200 samples
Number of annotations in single hand sequences: 204
Number of annotations in interacting hand sequences: 0
Number of skipped annotations: 25
11-13 13:34:28 Creating graph...
11-13 13:34:33 Load checkpoint not found /home/gieseke/assemblyhands-toolkit/output/assemblyhands-ego/model_dump/snapshot_20.pth.tar
Visualize GT...
Traceback (most recent call last):
  File "/home/gieseke/assemblyhands-toolkit/src/dataset/AssemblyHands-Ego/dataset.py", line 647, in <module>
    tester.testset.view_samples()
  File "/home/gieseke/assemblyhands-toolkit/src/dataset/AssemblyHands-Ego/dataset.py", line 407, in view_samples
    vis_keypoints(
  File "/home/gieseke/assemblyhands-toolkit/src/common/utils/vis.py", line 158, in vis_keypoints
    _img.save(osp.join(vis_dir, filename))
  File "/home/gieseke/miniconda3/envs/AssemblyHands/lib/python3.10/posixpath.py", line 76, in join
    a = os.fspath(a)
TypeError: expected str, bytes or os.PathLike object, not Config
The error suggests that a Config object is being passed to osp.join(). In config.py I saw that a lot of paths are set and added. Is there maybe a bug in how I set my PYTHONPATH?
Thanks again.
I got dataset.py to work by replacing vis_dir with vis_dir.vis_dir in lines 158, 160, 234, and 236 of vis.py. However, I can't say whether this has any consequences for other parts of the program that use these functions. It may be a better solution to change the argument in the calling function.
Edit: Yes, it also works when I pass self.cfg.vis_dir instead of self.cfg in lines 413 and 419 of dataset.py! So no changes in vis.py are necessary.
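For what it's worth, the failure mode can be reproduced with plain os.path: join() calls os.fspath() on every argument, so any non-path object fails with exactly the message from the traceback. The Config class below is a simplified stand-in for the toolkit's config object, not its real definition.

```python
import os.path as osp

class Config:                        # stand-in for the toolkit's Config class
    vis_dir = "/tmp/assemblyhands/vis"

cfg = Config()

try:
    osp.join(cfg, "sample.png")      # what the original call effectively did
except TypeError as err:
    print(err)                       # expected str, bytes or os.PathLike object, not Config

fixed = osp.join(cfg.vis_dir, "sample.png")   # the fix: pass the string attribute
print(fixed)                         # /tmp/assemblyhands/vis/sample.png
```

This is why passing self.cfg.vis_dir (a string) instead of self.cfg resolves the error without touching vis.py.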
My problem now is that when running train.py from src/main/ as stated in the InterHand instructions, I get a ModuleNotFoundError:
Traceback (most recent call last):
File "/home/gieseke/assemblyhands-toolkit/src/main/train.py", line 131, in
I'm confused since echo $PYTHONPATH gives "/home/gieseke/assemblyhands-toolkit/", the base directory.
Edit: I can get rid of this error by performing the following steps:
- rename src/dataset/AssemblyHands-Ego to src/dataset/AssemblyHandsEgo
- put an empty __init__.py in src/dataset/AssemblyHandsEgo/
- put "from dataset.AssemblyHandsEgo.dataset import Dataset" in the __init__.py of src/dataset
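To illustrate why the rename is needed: a directory name containing a hyphen is not a valid Python identifier, so the import machinery can never reach it as a package. The steps above can be sketched with a throwaway package built at runtime (the file contents here are simplified stand-ins for the toolkit's real modules):

```python
import importlib, os, sys, tempfile

root = tempfile.mkdtemp()
pkg = os.path.join(root, "dataset", "AssemblyHandsEgo")   # hyphen-free name
os.makedirs(pkg)

# dataset/__init__.py re-exports the Dataset class, as in the workaround
with open(os.path.join(root, "dataset", "__init__.py"), "w") as f:
    f.write("from dataset.AssemblyHandsEgo.dataset import Dataset\n")
with open(os.path.join(pkg, "__init__.py"), "w") as f:
    pass                                                  # empty package marker
with open(os.path.join(pkg, "dataset.py"), "w") as f:
    f.write("class Dataset:\n    pass\n")                 # simplified stand-in

sys.path.insert(0, root)                                  # like setting PYTHONPATH
mod = importlib.import_module("dataset")
print(mod.Dataset.__name__)                               # Dataset
```

After the re-export, train.py can simply do `from dataset import Dataset`, regardless of which sub-package the class lives in.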
So now that train.py runs, the next error is:
Traceback (most recent call last):
File "/home/gieseke/assemblyhands-toolkit/src/main/train.py", line 133, in
which is strange, since "cat data/assemblyhands/annotations/skeleton.txt", executed from the base directory, prints the content of skeleton.txt, so the file is there.
Edit: This can be solved by changing self.annot_path in the __init__() of dataset.py. There is a similar problem with self.img_path. I replaced them with:
self.img_path = "../../data/assemblyhands/images"
self.annot_path = "../../data/assemblyhands/annotations"
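The relative paths above only work when dataset.py is launched from its own directory. A slightly more robust variant (an assumption on my part, not the maintainers' fix; data_paths is a hypothetical helper) resolves them from the file's own location, so the script works regardless of the current working directory:

```python
import os.path as osp

def data_paths(dataset_file):
    # climb from src/dataset/AssemblyHands-Ego/dataset.py up to the repo root
    root = osp.normpath(osp.join(osp.dirname(osp.abspath(dataset_file)),
                                 "..", "..", ".."))
    return (osp.join(root, "data", "assemblyhands", "images"),
            osp.join(root, "data", "assemblyhands", "annotations"))

# in dataset.py one would pass __file__; a literal path is used here for illustration
img_path, annot_path = data_paths(
    "/home/user/assemblyhands-toolkit/src/dataset/AssemblyHands-Ego/dataset.py")
print(annot_path)   # /home/user/assemblyhands-toolkit/data/assemblyhands/annotations
```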