ManiSkill-Learn
How to get reproducible deterministic evaluation results?
I evaluated the example pre-trained models on 100 trajectories with the seed set to 0, running the following command twice:
python -m tools.run_rl configs/bc/mani_skill_point_cloud_transformer.py --gpu-ids=0 --evaluation \
--work-dir=./test/OpenCabinetDrawer_1045_link_0-v0_pcd \
--resume-from=./example_mani_skill_data/OpenCabinetDrawer_1045_link_0-v0_PN_Transformer.ckpt \
--cfg-options "env_cfg.env_name=OpenCabinetDrawer_1045_link_0-v0" \
"eval_cfg.save_video=False" \
"eval_cfg.num=100" \
"eval_cfg.num_procs=10" \
"eval_cfg.use_log=True" \
--seed=0
For the first run, the Success or Early Stop Rate is 0.81; for the second run, it is 0.84.
It seems that the generated seed (produced by the following code) differs across runs even though I set the seed to 0 explicitly:
https://github.com/haosulab/ManiSkill-Learn/blob/9742da932448a5234222cf94381ca0f861dc83fd/mani_skill_learn/env/evaluation.py#L72-L74
So how can I control determinism through the seed?
In addition, I have a question about the ManiSkill environment. I notice that there are shadows of objects and robots in the rendered images in the first version of your arXiv paper, like this:
But the world-frame image I get looks like this (I changed the resolution to 256×256). How can I make the image more realistic, like the one shown above?
> For the first run, the Success or Early Stop Rate is 0.81. For the second time, the result is 0.84. It seems that the generated seed (using the following code) is different although I set the seed to 0 explicitly.
os.getpid() is not deterministic between different runs, so any seed derived from the process ID will change from run to run even when you pass --seed=0.
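If you want fully reproducible evaluation, one option is to derive each worker's seed from the base seed and the worker index instead of the process ID. Below is a minimal sketch, not the repository's code: np.random.SeedSequence is standard NumPy, but base_seed, num_procs, and make_worker_seeds are illustrative names.

```python
import numpy as np

def make_worker_seeds(base_seed, num_procs):
    """Derive one independent, reproducible seed per evaluation worker.

    SeedSequence.spawn() yields statistically independent child streams,
    and the result depends only on base_seed and the worker index,
    never on os.getpid().
    """
    ss = np.random.SeedSequence(base_seed)
    return [int(child.generate_state(1)[0]) for child in ss.spawn(num_procs)]

# The same base seed always yields the same worker seeds, run after run.
print(make_worker_seeds(0, 10))
```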
> I notice that there are shadows of objects and robots in the rendered images in the first version of your arXiv paper, like this
We used a special renderer to improve the aesthetics in our arXiv paper. For the actual environment, however, we intentionally used a simple renderer to accelerate training and minimize rendering time. Well-rendered scenes (1) significantly slow down FPS and (2) still have a large domain gap from real scenes, requiring sim2real vision modules such as CycleGANs.
The key to this rendering includes the following modifications (a rough code sketch follows the list):
- Add environment map (https://github.com/haosulab/SAPIEN/blob/12a83f9fd83b81a6211d8b4b6146c80b74fea93f/python/pysapien_content.hpp#L857-L858)
- Enable shadows and tune their parameters when adding lights (https://github.com/haosulab/SAPIEN/blob/12a83f9fd83b81a6211d8b4b6146c80b74fea93f/python/pysapien_content.hpp#L816-L829)
- Fine-tune the material parameters of the object and the ground.
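For reference, here is a rough sketch of what these modifications might look like through SAPIEN's Python API. This is not code from ManiSkill-Learn; it assumes a SAPIEN 2.x-style API (the linked bindings are from an older commit, so exact names and signatures may differ between releases), and the environment-map path and material values are placeholders.

```python
import sapien.core as sapien

engine = sapien.Engine()
renderer = sapien.SapienRenderer()  # older releases use sapien.VulkanRenderer()
engine.set_renderer(renderer)
scene = engine.create_scene()

# 1. Environment map for image-based lighting (placeholder asset path).
scene.set_environment_map("assets/env_map.ktx")

# 2. Shadow-casting lights; parameters such as the shadow map size and
#    light extent are what get tuned per scene.
scene.set_ambient_light([0.3, 0.3, 0.3])
scene.add_directional_light([0, -1, -1], [1.0, 1.0, 1.0], shadow=True)

# 3. Physically-based material parameters for objects and the ground.
ground_material = renderer.create_material()
ground_material.base_color = [0.5, 0.5, 0.5, 1.0]
ground_material.metallic = 0.0
ground_material.roughness = 0.6
scene.add_ground(altitude=0.0, render_material=ground_material)
```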
Thanks for your reply; I will try the well-rendered scenes.
For the first question, I know that os.getpid() is not deterministic between runs. However, I find that the number generated by np.random.randint(0, 10000) is also different across runs, even though I set the NumPy seed to 0 explicitly.
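A quick way to narrow this down (a minimal check, not the repository's code; it assumes the default NumPy global RNG): in a single process, randint is fully determined by the seed, so the run-to-run variation most likely enters through the evaluation workers, which are separate processes whose NumPy state is not governed by the seed set in the parent.

```python
import numpy as np

# Single process: the draw is fully determined by the seed,
# so this prints the same value on every run.
np.random.seed(0)
print(np.random.randint(0, 10000))

# With eval_cfg.num_procs > 1, each worker is a separate process.
# Unless a worker explicitly reseeds its own NumPy state (e.g. from the
# base seed plus a worker index, as sketched earlier), its randint() draw
# is not controlled by the parent's seed, and mixing in os.getpid()
# adds further run-to-run variation.
```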