Federico Belotti
Hi @drblallo and @geranim0! One thing you could do to enable self-play is: * Create a new agent (inheriting from an already defined one, like Dreamer-V3 for example)...
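For anyone landing here, a minimal sketch of that first step. The base class here is a plain `nn.Module` stand-in; in practice you would inherit from the agent you already use (e.g. the Dreamer-V3 agent defined in sheeprl), so all names below are placeholders rather than the actual sheeprl API:

```python
# Hypothetical sketch of the self-play idea: keep a frozen copy of the
# learner that acts as the opponent. `SelfPlayAgent` and its constructor
# are placeholders, not real sheeprl classes.
import copy

import torch
from torch import nn


class SelfPlayAgent(nn.Module):
    def __init__(self, learner: nn.Module):
        super().__init__()
        self.learner = learner
        # Frozen snapshot of the learner used to produce the opponent's actions.
        self.opponent = copy.deepcopy(learner)
        for p in self.opponent.parameters():
            p.requires_grad_(False)

    def update_opponent(self) -> None:
        # Periodically sync the opponent with the learner's current weights.
        self.opponent.load_state_dict(self.learner.state_dict())
```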
> Hi @defrag-bambino, thank you for reporting this problem. > > Which action space are you using? Are they continuous actions? In this case, we assume that continuous actions have...
> I've tried to work around it using np.squeeze() and np.expand_dims() in relevant places of my env wrapper. This seems to work for now. However, after a few seconds it...
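A workaround along those lines might look like the following gymnasium wrapper. It is only a sketch of the idea, not the wrapper used above, and it assumes array observations where the axis being squeezed/expanded is a leading agent dimension of size 1:

```python
# Hypothetical sketch of the squeeze/expand workaround: present a
# single-agent view of an env that adds a leading agent axis to
# observations and expects it on actions.
import gymnasium as gym
import numpy as np


class SqueezeAgentDimWrapper(gym.Wrapper):
    def reset(self, **kwargs):
        obs, info = self.env.reset(**kwargs)
        # Drop the leading (agent) axis so the agent sees a single-agent shape.
        return np.squeeze(obs, axis=0), info

    def step(self, action):
        # Re-add the axis the underlying env expects before stepping.
        obs, reward, terminated, truncated, info = self.env.step(
            np.expand_dims(action, axis=0)
        )
        return np.squeeze(obs, axis=0), reward, terminated, truncated, info
```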
Hi @defrag-bambino, we're sorry but right now Multi-Agent RL (MARL) is not supported, so your action and observation spaces must be independent of the number of agents, which are considered...
Maybe there could be a solution as explained in #241
We could also test the environments without creating the expensive game underneath them: mock the game creation and test everything else.
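A sketch of that idea with `unittest.mock`; the `make_game` function, the `my_envs.my_env` import path and the `MyEnv` class are placeholders for whatever actually constructs the game, not real sheeprl modules:

```python
# Hypothetical sketch: patch out the expensive game construction so the
# rest of the env (spaces, wrappers, bookkeeping) can be tested in isolation.
from unittest import mock


def test_env_without_real_game():
    # "my_envs.my_env.make_game" is a placeholder import path.
    with mock.patch("my_envs.my_env.make_game") as fake_make_game:
        fake_make_game.return_value = mock.MagicMock()

        from my_envs.my_env import MyEnv  # placeholder env class

        env = MyEnv()
        # The expensive game was never built, but everything around it
        # can still be exercised.
        fake_make_game.assert_called_once()
```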
Hi @samlobel, and thank you for using sheeprl for your experiments! If you want, we can start with a standard DDQN agent: you can have a look at our [PPO...
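For the DDQN part, the core difference from vanilla DQN is only in how the bootstrap target is built. A generic PyTorch sketch of the textbook rule (not sheeprl code, and `q_online`/`q_target` are just whatever Q-networks you define):

```python
# Generic Double DQN target: the online network picks the greedy action,
# the target network evaluates it.
import torch


def ddqn_target(q_online, q_target, next_obs, reward, done, gamma=0.99):
    with torch.no_grad():
        # Action selection with the online network...
        next_actions = q_online(next_obs).argmax(dim=-1, keepdim=True)
        # ...action evaluation with the target network.
        next_q = q_target(next_obs).gather(-1, next_actions).squeeze(-1)
        return reward + gamma * (1.0 - done) * next_q
```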
@samlobel you can now try it out directly on the [main branch](https://github.com/Eclectic-Sheep/sheeprl/tree/main)
I've run an experiment on a [Lightning Studio](https://lightning.ai/studios) with an A10G GPU, using the following command:

```bash
python sheeprl.py exp=dreamer_v3_100k_ms_pacman fabric.devices=1 fabric.precision=32 fabric.accelerator=gpu
```

It ran in less than...