Multi-agent DreamerV3
Description
I saw that the DreamerV3 code in RLlib raises this error when the config is multi-agent, i.e. multi-agent setups are not supported currently. Is there any plan to support them in the future? What would the approach be to extend DreamerV3 to multi-agent?
if self.is_multi_agent():
    raise ValueError("DreamerV3 does NOT support multi-agent setups yet!")
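For reference, a minimal config like the one below trips that check when the algorithm is built. This is only a sketch: the import path of the example MultiAgentCartPole env differs across Ray versions, and the policy IDs and mapping function are placeholders.

from ray.rllib.algorithms.dreamerv3 import DreamerV3Config
# NOTE: the example env's import path differs across Ray versions; adjust as needed.
from ray.rllib.examples.envs.classes.multi_agent import MultiAgentCartPole

config = (
    DreamerV3Config()
    .environment(MultiAgentCartPole, env_config={"num_agents": 2})
    .multi_agent(
        policies={"p0", "p1"},
        # Placeholder mapping: agent 0 -> "p0", agent 1 -> "p1".
        policy_mapping_fn=lambda agent_id, *args, **kwargs: f"p{agent_id}",
    )
)
# Building the algorithm runs config validation, which hits the check above:
# ValueError: DreamerV3 does NOT support multi-agent setups yet!
algo = config.build()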
Use case
Multi-agent env and agent learning.
@janetwise Yes, we want to support all online algorithms in multi-agent mode, so DreamerV3 as well. We are still in the process of moving the off-policy algorithms over to the new stack, and DreamerV3 comes after these. We plan to be done with this move by the Summit.
What is the recommended technical approach if I were to work on extending DreamerV3 to multiple agents? Would the approach of wrapping PPO for multiple agents apply in a similar way?
@janetwise This approach (using the MultiAgentEnv) will not work for DreamerV3 unless you also implement a multi-agent algorithms.dreamerv3.utils.env_runner.EnvRunner that deals with multi-agent observations and actions, similar to our .env.multi_agent_env_runner.MultiAgentEnvRunner (see the sketch below).
In the future we also want to bring DreamerV3 onto our default env runners, but this will not happen before the 2024 Summit.
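To make that direction concrete, here is a rough, illustrative skeleton of what the sampling loop of a multi-agent DreamerV3 env runner would have to do: fan the per-agent observation dict returned by a MultiAgentEnv out to the per-policy modules, then merge the chosen actions back into a single action dict for env.step(). This is not RLlib API; dreamer_modules, policy_mapping_fn, and compute_action are hypothetical names. A real implementation would additionally have to carry each agent's recurrent world-model state between steps, the way DreamerV3's single-agent EnvRunner does.

from collections import defaultdict

def sample_multi_agent_episode(env, dreamer_modules, policy_mapping_fn, max_steps=1000):
    """Roll out one episode, routing each agent through its own (or a shared) module."""
    obs, _ = env.reset()          # {agent_id: observation}
    episodes = defaultdict(list)  # agent_id -> list of (obs, action, reward)
    for _ in range(max_steps):
        # Fan out: each agent's observation goes to the module its policy maps to.
        actions = {
            agent_id: dreamer_modules[policy_mapping_fn(agent_id)].compute_action(agent_obs)
            for agent_id, agent_obs in obs.items()
        }
        # Merge back: step the env with one multi-agent action dict.
        next_obs, rewards, terminateds, truncateds, _ = env.step(actions)
        for agent_id, agent_obs in obs.items():
            episodes[agent_id].append(
                (agent_obs, actions[agent_id], rewards.get(agent_id, 0.0))
            )
        if terminateds.get("__all__") or truncateds.get("__all__"):
            break
        obs = next_obs
    return episodes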