curiosity-driven-exploration-pytorch
Curiosity-driven Exploration by Self-supervised Prediction
Hello, I trained the model with the parameters given at the beginning, but it never converged. What could be the problem? Can you provide the parameters...
Intrinsic reward values are anomalously small in sparse-reward environments. Is this normal? If so, why? Venture example:
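For context, here is a minimal sketch of how an ICM-style intrinsic reward is typically computed (Pathak et al., 2017). This is an illustration, not the exact code from this repo, and `eta` is a hypothetical scaling coefficient. Because the reward is the forward model's prediction error in feature space, it shrinks toward zero as the forward model fits, which can make the values look very small:

```python
import torch
import torch.nn.functional as F

# Sketch of the ICM intrinsic reward: eta/2 * ||phi(s_{t+1}) - f(phi(s_t), a_t)||^2.
# The reward is the forward model's prediction error in feature space, so it
# decreases as the forward model improves -- small values are expected over time.
def intrinsic_reward(phi_next, phi_next_pred, eta=0.01):
    # phi_next:      encoded next state, shape (batch, feature_dim)
    # phi_next_pred: forward-model prediction of phi_next, same shape
    per_sample_error = F.mse_loss(phi_next_pred, phi_next, reduction='none').sum(dim=1)
    return 0.5 * eta * per_sample_error

phi = torch.randn(8, 288)                     # encoded s_{t+1}
phi_hat = phi + 0.01 * torch.randn_like(phi)  # near-perfect prediction
print(intrinsic_reward(phi, phi_hat))         # tiny rewards once the model fits
```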
There was an error when I ran eval.py. Can you tell me why?
Hi, I can only see that you optimize the intrinsic loss in your code. Can you point me to the line where you add the intrinsic rewards to the actual...
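For readers with the same question: in the paper the agent maximizes r_t = r_t^e + r_t^i, so one common pattern is to sum the two reward streams before computing returns and advantages. The sketch below illustrates that pattern under this assumption; it is a hypothetical helper, not a pointer to a specific line in this repo:

```python
import torch

# Hypothetical helper: combine extrinsic and intrinsic rewards per the paper's
# objective r_t = r_t^e + r_t^i; the coefficients are illustrative knobs.
def combine_rewards(extrinsic, intrinsic, ext_coef=1.0, int_coef=1.0):
    # extrinsic, intrinsic: tensors of shape (num_steps, num_envs)
    return ext_coef * extrinsic + int_coef * intrinsic

total = combine_rewards(torch.zeros(128, 16), torch.rand(128, 16) * 1e-3)
```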
In the following locations of the code:
https://github.com/jcwleo/curiosity-driven-exploration-pytorch/blob/master/envs.py#L188-L189
https://github.com/jcwleo/curiosity-driven-exploration-pytorch/blob/master/envs.py#L286-L287
history is updated assuming the history size is 4. Shouldn't it instead be
```
self.history[:self.history_size - 1, :, :] = self.history[1:, :, :]
```
...
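For illustration, a size-agnostic version of that frame-history update could look like the sketch below. This is a hypothetical class, assuming the history is a NumPy array of shape `(history_size, H, W)` as in typical Atari preprocessing:

```python
import numpy as np

class FrameHistory:
    """Minimal sketch of a frame stack whose update does not hard-code size 4."""
    def __init__(self, history_size, height=84, width=84):
        self.history_size = history_size
        self.history = np.zeros((history_size, height, width), dtype=np.float32)

    def push(self, frame):
        # Shift every stored frame one slot toward the front, then append
        # the newest frame at the end; works for any history_size.
        self.history[:-1] = self.history[1:]
        self.history[-1] = frame
```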
I got many errors like:
```
File ".../curiosity-driven-exploration-pytorch/envs.py", line 266, in run
    obs, reward, done, info = self.env.step(action)
File ".../envs/p3-torch10/lib/python3.6/site-packages/nes_py/wrappers/binary_to_discrete_space_env.py", line 67, in step
    return self.env.step(self._action_map[action])
File ".../envs/p3-torch10/lib/python3.6/site-packages/gym/wrappers/time_limit.py", line 31, in...
```
Hello, I would like to ask whether you have tested your implementation in the ViZDoom environment, as was done in the original paper. Thanks!

Hello, could you please list the version of each package you are using?
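While waiting for the exact versions, a quick way to compare against a known-working setup is to print the versions of the core packages. This is standard Python, not code from the repo:

```python
# Print versions of the core dependencies to compare environments.
import torch, gym
print("torch:", torch.__version__)
print("gym:", gym.__version__)
```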