# DOPE.pytorch

An unofficial PyTorch implementation of DOPE (Deep Object Pose Estimation for Semantic Robotic Grasping of Household Objects), trained on a self-created synthetic bottle dataset.
## Requirements

```
$ conda create -n DOPE python=3.6
$ conda activate DOPE
$ pip install -r requirement.txt
```
## Usage

### Train and Eval

[WIP]
### Demo

**Logs.** Download the pretrained checkpoints [Download] and place them under `logs/`:

```
DOPE.pytorch
- logs
  - Jack_Daniels-checkpoint.pth
  - Jose_Cuervo-checkpoint.pth
```

**Data.** Arrange the demo sequence under `data/`:

```
DOPE.pytorch
- data
  - Real_bottle_sequence
    - 000001.jpg
    - 000002.jpg
    ...
    - _object_settings.json
    - _camera_settings.json
```
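For reference, below is a minimal sketch of reading the pinhole intrinsics out of `_camera_settings.json`. It assumes the NDDS-style schema (`camera_settings` → `intrinsic_settings` with `fx`/`fy`/`cx`/`cy` keys) used by NVIDIA's synthetic-data tooling; verify it against the actual file before relying on it.

```python
import json

import numpy as np


def load_camera_matrix(path):
    """Build a 3x3 pinhole camera matrix from an NDDS-style settings file.

    Assumed schema (check your _camera_settings.json):
    {"camera_settings": [{"intrinsic_settings":
        {"fx": ..., "fy": ..., "cx": ..., "cy": ...}}]}
    """
    with open(path) as f:
        intr = json.load(f)["camera_settings"][0]["intrinsic_settings"]
    return np.array([
        [intr["fx"], 0.0, intr["cx"]],
        [0.0, intr["fy"], intr["cy"]],
        [0.0, 0.0, 1.0],
    ])


# Example:
# K = load_camera_matrix("./data/Real_bottle_sequence/_camera_settings.json")
```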
**Run.**

```
$ python demo.py \
    --path_to_data_dir ./data/Real_bottle_sequence \
    --class_name Jack_Daniels \
    --checkpoint ./logs/Jack_Daniels-checkpoint.pth \
    --plot
```
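For orientation, DOPE recovers the 6-DoF pose in two stages: the network predicts belief maps for the eight cuboid corners plus the centroid, and a PnP solver lifts those 2D peaks to a pose using the camera intrinsics and the object's cuboid dimensions (from `_object_settings.json`). The sketch below shows only the PnP step; the function name and array shapes are illustrative, not the actual `demo.py` API.

```python
import cv2
import numpy as np


def solve_pose(keypoints_2d, cuboid_3d, camera_matrix):
    """Lift 2D keypoint peaks to a 6-DoF pose via PnP.

    keypoints_2d:  (9, 2) pixel peaks from the belief maps
                   (8 cuboid corners + centroid).
    cuboid_3d:     (9, 3) matching model-space points; dimensions
                   come from _object_settings.json.
    camera_matrix: 3x3 intrinsics, e.g. from _camera_settings.json.
    """
    ok, rvec, tvec = cv2.solvePnP(
        cuboid_3d.astype(np.float64),
        keypoints_2d.astype(np.float64),
        camera_matrix,
        None,  # assume undistorted images (no distortion coefficients)
        flags=cv2.SOLVEPNP_ITERATIVE,
    )
    if not ok:
        return None  # degenerate keypoint configuration
    rotation, _ = cv2.Rodrigues(rvec)  # rotation vector -> 3x3 matrix
    return rotation, tvec              # object pose in the camera frame
```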
## References

- DOPE (Deep Object Pose Estimation for Semantic Robotic Grasping of Household Objects) [paper link]
- Some code is borrowed from the repositories below:
  - Official NVIDIA implementation (inference code with ROS): [Deep_Object_Pose]
  - Realtime Multi-Person Pose Estimation: [pytorch_Realtime_Multi-Person_Pose_Estimation]