robotic-grasping
Antipodal Robotic Grasping using GR-ConvNet. IROS 2020.
Hello, your default settings include split=0.9. Does this mean the Cornell dataset is split 9:1, with five-fold cross-validation performed on top of that? I don't quite understand.
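For context, here is a minimal sketch of how a split fraction such as 0.9 typically partitions a dataset into training and validation indices; the helper name split_indices and its arguments are illustrative assumptions, not the repository's exact API.

```python
import numpy as np

def split_indices(num_samples, split=0.9, shuffle=True, seed=0):
    """Partition sample indices into train/val sets using a split fraction.

    With split=0.9, 90% of the samples go to training and 10% to validation.
    Repeating the split with different shuffles (or shifted offsets) is one
    way to approximate k-fold cross-validation.
    """
    indices = np.arange(num_samples)
    if shuffle:
        rng = np.random.default_rng(seed)
        rng.shuffle(indices)
    cut = int(np.floor(split * num_samples))
    return indices[:cut], indices[cut:]

# Example: 885 Cornell grasp images -> 796 train / 89 val
train_idx, val_idx = split_indices(885, split=0.9)
print(len(train_idx), len(val_idx))
```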
Hi there, thank you for open-sourcing this repository. I am in the process of evaluating the trained models on my own image data so I can include them in a...
pip uninstall numpy
pip install numpy==1.22
How is the highest accuracy reported in the paper obtained? Is it produced by running the evaluation code, or is it simply the highest accuracy observed during training?
The paper mentions that a difference of less than 30 degrees between the angle of the predicted grasp rectangle and the ground truth is considered a successful grasp; however, this...
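For reference, a hedged sketch of the angle part of the rectangle metric as it is usually stated (a grasp counts as matching if the predicted and ground-truth angles differ by less than 30 degrees, with grasp rectangles treated as symmetric under a 180-degree rotation); this is not copied from the repository's evaluation code.

```python
import numpy as np

def angle_match(pred_angle, gt_angle, threshold_deg=30.0):
    """Return True if two grasp angles (in radians) differ by < threshold.

    Grasp rectangles are symmetric under a 180-degree rotation, so the
    difference is wrapped into [-90, 90) degrees before comparison.
    """
    diff = np.degrees(pred_angle - gt_angle)
    diff = (diff + 90.0) % 180.0 - 90.0  # wrap into [-90, 90)
    return abs(diff) < threshold_deg

# Example: 170 deg vs 5 deg differ by only 15 deg once symmetry is applied
print(angle_match(np.radians(170), np.radians(5)))  # True
```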
I have no problems with the training process, but during validation with evaluate.py an error is reported: File "/home/amax/baiyong/robotic-grasping/utils/dataset_processing/grasp.py", line 309, in rotate, R = np.array( ... ValueError: setting an array...
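One likely cause (an assumption, since the traceback is truncated) is that recent NumPy versions refuse to build an array when one of the rotation-matrix entries is itself an array rather than a scalar. Besides downgrading NumPy as suggested above, a hedged workaround sketch is to force the angle to a Python float before assembling the matrix; the function below is illustrative, not the repository's rotate implementation.

```python
import numpy as np

def rotation_matrix(angle):
    """Build a 2x2 rotation matrix from an angle in radians.

    Casting to float guards against `angle` arriving as a 0-d or
    1-element array, which can trigger "setting an array element with
    a sequence" on recent NumPy releases.
    """
    angle = float(np.asarray(angle).squeeze())
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, s],
                     [-s, c]])

print(rotation_matrix(np.array([0.5])))
```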
The above figure shows the object image and the result of the grasp quality prediction (q_img in code). It can be seen that the red cube becomes a slightly larger...
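For context, a minimal sketch of how the quality map is commonly displayed next to the input image; the variable names rgb_img and q_img follow the snippet above, but the plotting code itself (and the dummy data) is illustrative.

```python
import matplotlib.pyplot as plt
import numpy as np

# Dummy data standing in for the network input and its quality prediction.
rgb_img = np.zeros((224, 224, 3), dtype=np.uint8)
q_img = np.random.rand(224, 224)  # per-pixel grasp quality in [0, 1]

fig, (ax0, ax1) = plt.subplots(1, 2, figsize=(8, 4))
ax0.imshow(rgb_img)
ax0.set_title('RGB input')
ax1.imshow(q_img, cmap='jet', vmin=0, vmax=1)
ax1.set_title('Grasp quality (q_img)')
for ax in (ax0, ax1):
    ax.axis('off')
plt.show()
```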
Hello, First, thank you for this impressive contribution to the robotics community and for making it open source! I had a couple of questions regarding setting up GR-Convnet on a...
Are the images shown during the visualization process feature maps? If not, what are these images?
In utils/dataset_processing/image.py, the class Image has a zoom function for zooming the RGB image and the TIFF image. Something here confuses me. I tried a factor value of 0.5. The two images before and after being...
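A hedged sketch of a typical center-crop-and-rescale zoom, to illustrate why a factor below 1 can make the object appear larger; this is an assumption about the behaviour, not a copy of the repository's Image.zoom.

```python
import numpy as np
from skimage.transform import resize

def zoom_center(img, factor):
    """Crop a central factor*H x factor*W region, then resize back.

    With factor=0.5 only the central quarter of the image is kept and
    stretched to the original resolution, so objects look roughly twice
    as large after the call.
    """
    h, w = img.shape[:2]
    sr = int(h * (1 - factor)) // 2  # rows trimmed from top and bottom
    sc = int(w * (1 - factor)) // 2  # cols trimmed from left and right
    cropped = img[sr:h - sr, sc:w - sc]
    return resize(cropped, (h, w), preserve_range=True).astype(img.dtype)

img = np.random.randint(0, 255, (300, 300, 3), dtype=np.uint8)
print(zoom_center(img, 0.5).shape)  # (300, 300, 3)
```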