DexterousHands
This library provides dual dexterous hand manipulation tasks built on Isaac Gym.
Hi folks, kudos on a great benchmark! Do you happen to have success metrics for each task?
When I tried the command `python train.py --task=ShadowHandOver --algo=happo`, a `ModuleNotFoundError` occurred: `Traceback (most recent call last): File "train.py", line 12, in <module> from bidexhands.utils.config import set_np_formatting, set_seed, get_args,...`
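This error usually means the `bidexhands` package is not importable from the current working directory; installing the package in editable mode (e.g. `pip install -e .` from the repo root, if a setup script is provided) or putting the repo root on `sys.path` typically resolves it. A minimal sketch of the latter, assuming `train.py` sits one level below the repo root:

```python
# Hypothetical workaround: make the bidexhands package importable before
# the failing import. Assumes the repo root (the directory containing
# the bidexhands/ package) is the parent of this script's directory.
import os
import sys

repo_root = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
if repo_root not in sys.path:
    sys.path.insert(0, repo_root)

from bidexhands.utils.config import set_np_formatting, set_seed, get_args
```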
Hi, I'm wondering how you got the offline training dataset. Did you train an expert policy and then copy the replay buffer, or did you create it from human demonstrations?
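For reference, one common way to build such a dataset (not necessarily what the authors did) is to roll out a trained expert policy and log its transitions. A minimal sketch, where `env` and `expert_policy` are assumed stand-ins and the classic single-environment Gym step API is assumed:

```python
# Hypothetical sketch: build an offline dataset by rolling out a trained
# expert policy. `env` and `expert_policy` are assumptions, not part of
# the DexterousHands API.
import numpy as np

def collect_offline_dataset(env, expert_policy, num_steps):
    buffer = {"obs": [], "actions": [], "rewards": [], "next_obs": [], "dones": []}
    obs = env.reset()
    for _ in range(num_steps):
        action = expert_policy(obs)
        next_obs, reward, done, _ = env.step(action)
        buffer["obs"].append(obs)
        buffer["actions"].append(action)
        buffer["rewards"].append(reward)
        buffer["next_obs"].append(next_obs)
        buffer["dones"].append(done)
        obs = env.reset() if done else next_obs
    return {k: np.asarray(v) for k, v in buffer.items()}
```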
Thanks for your contribution. The dataset link for Shadow hand open outward can no longer be opened.
The object files downloaded from the YCB dataset are in SDF format, but the YCB object files you provide in DexterousHands are URDF. How did you obtain the ycb file...
How does HAPPO train heterogeneous environments?
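For background, HAPPO handles heterogeneous agents by keeping a separate policy per agent and updating the agents sequentially, re-weighting each agent's advantage by the compounded policy ratios of the agents updated before it. A rough sketch of that scheme, with illustrative names (`agents`, `batch`, `ppo_update`) that are not this repo's API:

```python
# Hypothetical sketch of HAPPO's sequential update scheme. Each agent can
# have its own network, so heterogeneous agents are handled naturally.
import torch

def happo_update(agents, batch):
    weight = torch.ones_like(batch["advantages"])   # compounded ratio, starts at 1
    for agent in agents:                            # fixed or shuffled order
        adv = weight * batch["advantages"]          # re-weighted advantage
        new_logp = agent.ppo_update(batch, adv)     # standard clipped PPO step
        # Fold this agent's policy ratio into the weight for the next agent.
        ratio = torch.exp(new_logp - batch["old_logp"][agent.id]).detach()
        weight = weight * ratio
    return agents
```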
How can I change the code so that training resumes from pre-trained weights?
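One common pattern (not necessarily how this codebase structures its checkpoints) is to load saved model and optimizer state before the training loop and continue from the stored step count. A minimal sketch with an assumed checkpoint layout:

```python
# Hypothetical resume logic: load a saved checkpoint into the policy and
# optimizer before training continues. The checkpoint layout
# ("model", "optimizer", "step") is an assumption, not this repo's format.
import os
import torch

def maybe_resume(model, optimizer, checkpoint_path):
    start_step = 0
    if os.path.isfile(checkpoint_path):
        ckpt = torch.load(checkpoint_path, map_location="cpu")
        model.load_state_dict(ckpt["model"])
        optimizer.load_state_dict(ckpt["optimizer"])
        start_step = ckpt.get("step", 0)
    return start_step
```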
At the [data collection example](https://github.com/PKU-MARL/DexterousHands#data-collection), the first command line uses `-algo=ppo_collection`, which leads to a wrong path when retrieving the file.
In `README.md`, under the [Plotting](https://github.com/PKU-MARL/DexterousHands#Plotting) subsection, the comment should read "generate", not "geenrate".
In the [paper](https://arxiv.org/pdf/2206.08686.pdf) you provide, it is stated that "Each agent i follows a shared policy". However, in the codebase, I only found implementations that resemble [MAPPO](https://github.com/marlbenchmark/on-policy)'s "SeperatedBuffer" and "SeperatedRunner",...
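For clarity on the terminology, parameter sharing typically means every agent queries a single network (often with an agent ID appended to the observation), whereas the "separated" pattern keeps an independent network, and buffer, per agent. A small illustration of the difference, with `PolicyNet` as a hypothetical actor class rather than anything from the repo:

```python
# Hypothetical illustration of shared vs. separated policies.
import torch
import torch.nn as nn

class PolicyNet(nn.Module):
    def __init__(self, obs_dim, act_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, act_dim))

    def forward(self, obs):
        return self.net(obs)

# Shared policy: every agent uses the same parameters.
shared = PolicyNet(obs_dim=32, act_dim=8)
shared_actions = [shared(torch.randn(32)) for _ in range(2)]

# Separated policies: one independent network per agent (the
# "SeperatedBuffer"/"SeperatedRunner" style keeps per-agent storage too).
separated = [PolicyNet(obs_dim=32, act_dim=8) for _ in range(2)]
separated_actions = [pi(torch.randn(32)) for pi in separated]
```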