Chris Nota
Good to know. Will probably have to make some additional changes. Have you tried SAC/DDPG?
Blocked on #219 for now
This is the "DeepMind"-style testing. I've been thinking about the best way to handle this; there are a few options. I'm still not happy with the `experiments` package as...
Maybe. For most of the standard Gym environments, the environment isn't really the bottleneck, so it's not clear that you get much of a speedup. However, it could be useful...
While this is a useful thing to have, easy to implement, and quite a few papers try to use this, there are a few issues I haven't fully thought through....
The PPO implementation is a `ParallelAgent`/`ParallelPreset`, so it is not compatible with `SingleEnvExperiment`. Try using a `ParallelEnvExperiment` and setting `ppo.hyperparameters(n_envs=1)`.
I don't think this is a bug, but it would probably be useful for the experiment types to enforce the agent type and throw a helpful error message instead of...
Merged #241 to develop for now. It should allow `n_envs=1` to work.
I'm closing this on the basis that `MultiagentEnvironment` is the abstract class, and we might want implementations that are not PettingZoo-based. The current `MultiagentPettingZooEnvironment` is tightly coupled with the...
I think this is fine for single agent now. #288 will handle the multiagent case.