
Issue on Ant-v2 expert data and Humanoid-v2 random seed experiments

Open · XizoB opened this issue 2 years ago · 1 comment

Hi~ Thank you very much for sharing your paper and source code! I am new to inverse RL and recently want to implement your method on a robot.

About Ant-v2

  1. I found that the reward for each step in your Ant-v2 expert data is 1. Why is the reward set like this? And how do I run SQIL correctly with your code?

About random seeds

  1. I found that the results with different random seeds in the Humanoid experiments are very different; some results are around 1500 points. Is this because the number of learning steps is only 50000, or because only 1 expert demo is used?

I ran with:

python train_iq.py env=humanoid agent=sac expert.demos=1 method.loss=v0 method.regularize=True agent.actor_lr=3e-05 agent.init_temp=1 seed=0/1/2/3/4/5

Your work is very valuable and I look forward to your help in resolving my doubts.
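(Here seed=0/1/2/3/4/5 means one run per seed; a small launcher along these lines, which is not part of the repo, reproduces the sweep.)

```python
# Hypothetical launcher sketch: runs train_iq.py once per seed with the
# flags quoted above. Not part of the IQ-Learn repo.
import subprocess

for seed in range(6):  # seeds 0..5
    subprocess.run(
        [
            "python", "train_iq.py",
            "env=humanoid", "agent=sac", "expert.demos=1",
            "method.loss=v0", "method.regularize=True",
            "agent.actor_lr=3e-05", "agent.init_temp=1",
            f"seed={seed}",
        ],
        check=True,
    )
```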

XizoB · Sep 22 '22 02:09

Hi, we use the expert rewards only for SQIL, where the expert gets a reward of 1 and the policy gets a reward of 0. Storing fake rewards of 1 in the expert data makes this easy to implement. For IQ-Learn, however, we don't use expert rewards, and this field is never used.
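Roughly, the convention amounts to something like this sketch (the buffer names and helpers are hypothetical, not the repo's actual code):

```python
import random

# Sketch of the SQIL reward convention: expert transitions are stored with a
# fake reward of 1, the policy's own transitions with a reward of 0.
expert_buffer = []  # transitions loaded from the expert data
policy_buffer = []  # transitions collected online by the learner

def store_expert(state, action, next_state, done):
    # Expert transitions carry the fake reward of 1 stored in the data files.
    expert_buffer.append((state, action, 1.0, next_state, done))

def store_policy(state, action, next_state, done):
    # The policy's own transitions carry a reward of 0.
    policy_buffer.append((state, action, 0.0, next_state, done))

def sample_batch(batch_size):
    # SQIL trains a standard soft-Q / SAC critic on a half-expert,
    # half-policy batch; IQ-Learn never reads these reward fields.
    half = batch_size // 2
    return random.sample(expert_buffer, half) + random.sample(policy_buffer, half)
```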

The stochasticity you observe is likely due to training on only 1 expert demo, which leads to high variance across seeds. Reducing the temperature to, say, 0.5 could help with this.
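For example, the command from the question could be rerun with only the temperature lowered:

python train_iq.py env=humanoid agent=sac expert.demos=1 method.loss=v0 method.regularize=True agent.actor_lr=3e-05 agent.init_temp=0.5 seed=0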

div-garg · Nov 01 '22 18:11