
ChainerRL is a deep reinforcement learning library built on top of Chainer.

65 chainerrl issues

I am a user of chainerrl, and my code breaks if I use the same agent for evaluation (note that between training and evaluation, the env could be different), so I request to...

I would like to use DQN to control a multi-degree-of-freedom robot arm. However, the DQN in this library has only one output if the action space is...
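(A common workaround for multi-joint control with a standard single-head DQN is to enumerate the Cartesian product of per-joint discretizations, so each combination becomes one discrete action. A minimal sketch; the joint and bin counts below are illustrative assumptions, not taken from the issue:)

```python
import itertools
import numpy as np

# Assumed example: 5 joints, each discretized into 3 torque levels.
JOINT_BINS = [(-1.0, 0.0, 1.0)] * 5

# Flatten all joint combinations into one discrete action set,
# so a DQN with a single Q-value head per action can cover them.
ACTIONS = list(itertools.product(*JOINT_BINS))  # 3**5 = 243 actions

def index_to_action(index):
    """Map a DQN output index back to a per-joint torque vector."""
    return np.array(ACTIONS[index])
```

The obvious drawback is that the action count grows exponentially in the number of joints, which is why continuous-action methods (e.g. DDPG or TRPO) are usually preferred for arms with many degrees of freedom.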

DeepMind Lab is becoming more frequently used in DRL research. We should try to add support for DeepMind Lab if possible.

enhancement

Current examples don't specify in what configuration they work well, except the newer ones (train_pcl_gym.py and train_reinforce_gym.py). Such instructions are important because they let users easily confirm that the implementations actually work....

enhancement

I am doing research on reinforcement learning using MuJoCo. I am doing reinforcement learning on my own robot arm, and I want the data on the distortion of the link...

I use agent.TRPO in my program, but I don't know the δ value. Do you know which part of the code I should check?
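(For context: in TRPO, δ is the trust-region bound on the KL divergence between the old and new policy per update; ChainerRL appears to expose it as the `max_kl` constructor argument of `chainerrl.agents.TRPO`, but check the constructor signature to confirm. Below is a toy sketch of how such a bound can be enforced by backtracking line search, assuming diagonal Gaussian policies purely for illustration:)

```python
import numpy as np

def kl_gaussians(mu0, sigma0, mu1, sigma1):
    """KL( N(mu0, sigma0^2) || N(mu1, sigma1^2) ) for diagonal Gaussians."""
    return np.sum(np.log(sigma1 / sigma0)
                  + (sigma0 ** 2 + (mu0 - mu1) ** 2) / (2 * sigma1 ** 2)
                  - 0.5)

def backtracking_step(mu, step, delta=0.01, sigma=1.0, max_backtracks=10):
    """Shrink a proposed mean update until KL(old || new) <= delta."""
    for i in range(max_backtracks):
        new_mu = mu + step * (0.5 ** i)
        if kl_gaussians(mu, sigma, new_mu, sigma) <= delta:
            return new_mu
    return mu  # reject the update entirely if the bound is never met
```

With δ = 0.01 and a unit-variance Gaussian, a proposed mean shift of 1.0 gets halved three times (to 0.125) before the KL constraint is satisfied, which is exactly the role δ plays in TRPO's update.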

I've encountered a segmentation fault when using chainerrl with a GPU in Docker. The error occurs if I `import chainerrl` first and then call `cuda.get_device(args).use()`. The quick fix my colleague and...

Hello, I have one question. In the IQN paper, the quantile Huber loss uses the indicator δ_{ij} < 0, but the chainerrl IQN code uses δ_{ij} > 0. I think this inequality...

bug
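(For reference, the quantile Huber loss ρ^κ_τ(δ) = |τ − 1{δ < 0}| · L_κ(δ)/κ from the IQN paper can be sketched as follows, written with the paper's δ_{ij} < 0 indicator. This is a minimal NumPy sketch, not the chainerrl implementation:)

```python
import numpy as np

def quantile_huber_loss(delta, tau, kappa=1.0):
    """Elementwise quantile Huber loss rho^kappa_tau(delta).

    delta: TD errors delta_{ij}, shape (N, N')
    tau:   quantile fractions tau_i, broadcastable to delta's shape
    """
    abs_delta = np.abs(delta)
    # Huber loss L_kappa(delta): quadratic near zero, linear beyond kappa.
    huber = np.where(abs_delta <= kappa,
                     0.5 * delta ** 2,
                     kappa * (abs_delta - 0.5 * kappa))
    # Asymmetric weight uses the indicator delta < 0, as in the paper.
    return np.abs(tau - (delta < 0)) * huber / kappa
```

Note the loss is asymmetric: with τ = 0.25, a negative TD error of −0.5 is penalized three times as heavily (weight 0.75) as a positive error of +0.5 (weight 0.25), so flipping the indicator to δ > 0 would invert that asymmetry.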

Resolves #573

- [ ] Compare its performance on Atari to the paper