deep-rl-class
Unit 5 Hands-on
- First, the best way to get a fast response is to ask the community on #rl-study-group in our Discord server: https://www.hf.co/join/discord
- If you prefer, you can ask here, but please be specific.
https://huggingface.co/learn/deep-rl-course/unit5/hands-on
I followed the documentation exactly, but when I ran the sample, an error occurred at the beginning of training:
mlagents-learn ./config/ppo/SnowballTarget.yaml --env=./training-envs-executables/linux/SnowballTarget/SnowballTarget --run-id="SnowballTarget1" --no-graphics
Exception in thread Thread-2 (trainer_update_func):
Traceback (most recent call last):
  File "/root/miniconda3/envs/mlagents/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "/root/miniconda3/envs/mlagents/lib/python3.10/threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "/mnt/d/model/ml-agents-develop/ml-agents/ml-agents/mlagents/trainers/trainer_controller.py", line 297, in trainer_update_func
    trainer.advance()
  File "/mnt/d/model/ml-agents-develop/ml-agents/ml-agents/mlagents/trainers/trainer/rl_trainer.py", line 293, in advance
    self._process_trajectory(t)
  File "/mnt/d/model/ml-agents-develop/ml-agents/ml-agents/mlagents/trainers/ppo/trainer.py", line 91, in _process_trajectory
    ) = self.optimizer.get_trajectory_value_estimates(
  File "/mnt/d/model/ml-agents-develop/ml-agents/ml-agents/mlagents/trainers/optimizer/torch_optimizer.py", line 190, in get_trajectory_value_estimates
    value_estimates, next_memory = self.critic.critic_pass(
  File "/mnt/d/model/ml-agents-develop/ml-agents/ml-agents/mlagents/trainers/torch_entities/networks.py", line 487, in critic_pass
    value_outputs, critic_mem_out = self.forward(
  File "/mnt/d/model/ml-agents-develop/ml-agents/ml-agents/mlagents/trainers/torch_entities/networks.py", line 499, in forward
    encoding, memories = self.network_body(
  File "/root/miniconda3/envs/mlagents/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/root/miniconda3/envs/mlagents/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
  File "/mnt/d/model/ml-agents-develop/ml-agents/ml-agents/mlagents/trainers/torch_entities/networks.py", line 244, in forward
    encoding = self._body_endoder(encoded_self)
  File "/root/miniconda3/envs/mlagents/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/root/miniconda3/envs/mlagents/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
  File "/mnt/d/model/ml-agents-develop/ml-agents/ml-agents/mlagents/trainers/torch_entities/layers.py", line 169, in forward
    return self.seq_layers(input_tensor)
  File "/root/miniconda3/envs/mlagents/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/root/miniconda3/envs/mlagents/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/miniconda3/envs/mlagents/lib/python3.10/site-packages/torch/nn/modules/container.py", line 250, in forward
    input = module(input)
  File "/root/miniconda3/envs/mlagents/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/root/miniconda3/envs/mlagents/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/miniconda3/envs/mlagents/lib/python3.10/site-packages/torch/nn/modules/linear.py", line 125, in forward
    return F.linear(input, self.weight, self.bias)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument mat1 in method wrapper_CUDA_addmm)
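For context, the final RuntimeError is PyTorch's generic device-mismatch error: a Linear layer whose weights live on cuda:0 is being called with an input tensor that is still on the CPU. Here is a minimal sketch reproducing the same failure in plain PyTorch (the layer sizes are arbitrary, for illustration only; this is not the ML-Agents code path itself):

import torch
import torch.nn as nn

# The layer's weights are placed on the GPU...
layer = nn.Linear(4, 2).to("cuda:0")

# ...but the input tensor is created on the CPU, the default device.
x = torch.randn(1, 4)

# Raises: RuntimeError: Expected all tensors to be on the same device,
# but found at least two devices, cuda:0 and cpu!
out = layer(x)

# In plain PyTorch the fix is to move the input onto the layer's device:
out = layer(x.to("cuda:0"))

Inside mlagents-learn you do not move tensors by hand, but assuming your build exposes the --torch-device option, forcing training onto a single device may work around the mismatch, e.g.:

mlagents-learn ./config/ppo/SnowballTarget.yaml --env=./training-envs-executables/linux/SnowballTarget/SnowballTarget --run-id="SnowballTarget1" --no-graphics --torch-device cpu

Since the traceback points at a develop checkout (/mnt/d/model/ml-agents-develop), it may also be worth retrying with a tagged ML-Agents release in case the branch has a device-placement regression.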