Fred Fang
Hi, you need to prepare a .h5 file with image_names, xmin, ymin, xmax, ymax (bounding box), and x1,y1, x2,y2, ..., x16,y16 (pose). You can have a look at https://github.com/Fang-Haoshu/multi-human-pose/blob/master/train/data/mpii-box/annot.h5 for more details.
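A minimal sketch of writing such a file with h5py. The dataset names ('imgname', 'bndbox', 'part') are my assumptions based on the description above; check the linked annot.h5 for the actual layout.

```python
import h5py
import numpy as np

num_images = 2
with h5py.File('annot.h5', 'w') as f:
    # Image file names, one per sample.
    names = np.array([b'000001.jpg', b'000002.jpg'])
    f.create_dataset('imgname', data=names)
    # Bounding boxes: (N, 4) as xmin, ymin, xmax, ymax.
    bboxes = np.zeros((num_images, 4), dtype=np.float32)
    f.create_dataset('bndbox', data=bboxes)
    # Poses: (N, 16, 2) as x_i, y_i for the 16 joints.
    parts = np.zeros((num_images, 16, 2), dtype=np.float32)
    f.create_dataset('part', data=parts)
```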
Hi, I take the xmin/ymin/xmax/ymax of the keypoints as the bbox and extend it by 20%.
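A quick sketch of that computation. Whether the 20% is applied per side and whether the result is clamped to the image bounds are my assumptions, not details from the reply.

```python
import numpy as np

def keypoints_to_bbox(kpts, margin=0.2):
    """Tight box around the keypoints, expanded by `margin` of its size on each side."""
    xmin, ymin = kpts.min(axis=0)
    xmax, ymax = kpts.max(axis=0)
    w, h = xmax - xmin, ymax - ymin
    return (xmin - margin * w, ymin - margin * h,
            xmax + margin * w, ymax + margin * h)

kpts = np.array([[120.0, 80.0], [180.0, 90.0], [150.0, 200.0]])
print(keypoints_to_bbox(kpts))
```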
Yes, it's 256x256.
Hi, the full network training and testing code is now available at https://github.com/graspnet/graspnet-baseline. Please check it out there. Best regards.
Yes, we also train for only 10 epochs.
Hi, this error occurs because the version of Torch you are using differs from the one that created the model file. You can modify the file 'torch/install/share/lua/5.1/cudnn/BatchNormalization.lua' and change the variable...
Since it may be a common problem, I will keep this issue open in case someone needs it.
Hi, we project the grasps onto the point cloud. For reference: https://github.com/Fang-Haoshu/graspnetAPI/blob/master/examples/exam_vis.py#L27
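A rough sketch of loading a scene cloud and rendering grasp labels on it with graspnetAPI and Open3D. The call signatures here follow my reading of the linked exam_vis.py and may not match exactly; defer to that example, and the dataset root path is a placeholder.

```python
import open3d as o3d
from graspnetAPI import GraspNet

graspnet_root = '/path/to/graspnet'  # placeholder dataset root
g = GraspNet(graspnet_root, camera='kinect', split='train')

# Scene point cloud and 6-DoF grasp labels for one annotation frame.
cloud = g.loadScenePointCloud(sceneId=0, camera='kinect', annId=0)
grasps = g.loadGrasp(sceneId=0, annId=0, format='6d', camera='kinect',
                     fric_coef_thresh=0.2)

# Keep a handful of grasps so the view stays readable, then draw the
# gripper geometries together with the cloud.
grippers = grasps.random_sample(numGrasp=20).to_open3d_geometry_list()
o3d.visualization.draw_geometries([cloud, *grippers])
```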
For training, sampling 20,000 points per scene is enough. Yes, the grasp-visualization code in the dataset part of the API is in fact most of the processing code.
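A minimal sketch of subsampling a scene cloud to 20,000 points; uniform random sampling is my assumption here, and the actual training pipeline may sample differently.

```python
import numpy as np

def sample_points(cloud, num_points=20000):
    """cloud: (N, 3) array; returns (num_points, 3). Samples with
    replacement only when the scene has fewer points than requested."""
    n = cloud.shape[0]
    idx = np.random.choice(n, num_points, replace=n < num_points)
    return cloud[idx]

scene = np.random.rand(50000, 3).astype(np.float32)  # stand-in scene cloud
print(sample_points(scene).shape)  # (20000, 3)
```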