
[Error] ValueError when running examples/rllib/run_ppo.py

Open TongsYu opened this issue 1 year ago • 2 comments

Hello, thank you for contributing the RLlib version. When I run run_ppo.py and run_qmix.py, I get the ValueError below:

(RolloutWorker pid=3950203)   File "../python3.9/site-packages/ray/rllib/evaluation/rollout_worker.py", line 682, in __init__
(RolloutWorker pid=3950203)     self.policy_dict, self.is_policy_to_train = self.config.get_multi_agent_setup(
(RolloutWorker pid=3950203)   File "../python3.9/site-packages/ray/rllib/algorithms/algorithm_config.py", line 2728, in get_multi_agent_setup
(RolloutWorker pid=3950203)     raise ValueError(
(RolloutWorker pid=3950203) ValueError: observation_space not provided in PolicySpec for default_policy and env does not have an observation space OR no spaces received from other workers' env(s) OR no observation_space specified in config!

My environment: TensorFlow 2.12, Ray 2.4.0, Python 3.9.
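As a possible workaround (an untested sketch, not a confirmed fix), the observation and action spaces can be supplied explicitly in the multi-agent config via RLlib's `PolicySpec`, so that `get_multi_agent_setup` does not have to infer them from the env. The `obs_dim` and `n_actions` values below are placeholders and would have to match the actual per-agent shapes of the SMACv2 map being run:

```python
import gymnasium as gym
import numpy as np
from ray.rllib.policy.policy import PolicySpec

# Placeholder per-agent dimensions -- these must match the actual
# SMACv2 map/env being used; the numbers here are illustrative only.
obs_dim = 80
n_actions = 12

obs_space = gym.spaces.Box(low=-1.0, high=1.0, shape=(obs_dim,), dtype=np.float32)
act_space = gym.spaces.Discrete(n_actions)

config = {
    "multiagent": {
        # Giving the spaces directly in the PolicySpec sidesteps the
        # "observation_space not provided" ValueError, since RLlib no
        # longer needs to query the env (or other workers) for them.
        "policies": {
            "default_policy": PolicySpec(
                observation_space=obs_space,
                action_space=act_space,
            ),
        },
        "policy_mapping_fn": lambda agent_id, *args, **kwargs: "default_policy",
    },
}
```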

Waiting for your answer, thanks!

TongsYu avatar Jun 02 '23 09:06 TongsYu

The RLlib example hasn't been tested, so it likely doesn't work. I am going to do some maintenance later this week, though, and may have time to fix this then.

benellis3 avatar Jun 20 '23 11:06 benellis3

> The RLlib example hasn't been tested, so it likely doesn't work. I am going to do some maintenance later this week, though, and may have time to fix this then.

Thank you, looking forward to your reply.

TongsYu avatar Jul 30 '23 06:07 TongsYu