dreamer-pytorch
Exploding KL Divergence Loss
I tried running the agent on the Walker Walk environment, and the KL divergence loss grows exponentially until it produces NaNs. I have not made any changes to the codebase before running the agent. Do you know what might be causing this issue?
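For context, Dreamer's world-model objective typically clips the KL term with "free nats" so the loss cannot push the posterior arbitrarily close to the prior, which also helps keep gradients bounded. Below is a minimal sketch of that computation using `torch.distributions`; the shapes and distribution parameters are illustrative, not taken from this repository:

```python
import torch
import torch.distributions as td

# Illustrative posterior/prior statistics for a latent state
# (batch of 8, 30-dimensional stochastic state; hypothetical values).
mean_post, std_post = torch.zeros(8, 30), torch.ones(8, 30)
mean_prior, std_prior = torch.zeros(8, 30), torch.ones(8, 30) * 2.0

posterior = td.Independent(td.Normal(mean_post, std_post), 1)
prior = td.Independent(td.Normal(mean_prior, std_prior), 1)

# KL per batch element; free-nats clipping keeps the effective loss
# from dropping below a floor, a common stabilizer in Dreamer-style models.
free_nats = 3.0
kl = td.kl_divergence(posterior, prior)
kl_loss = torch.clamp(kl, min=free_nats).mean()
print(kl_loss.item())
```

If the unclipped KL itself is blowing up, it is also worth checking that the predicted standard deviations are bounded away from zero (e.g. via a softplus plus a small minimum), since a collapsing `std_prior` makes the KL diverge.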