Why is my test object not moving? Is ML-Agents not responding, or is there a code issue in my project?
I am a beginner with ML-Agents in Unity and my agent (the cube) is not moving. Specific error from cmd:
(venv) C:\Users\Ricardo\My project>mlagents-learn --run-id=Test1
[ML-Agents ASCII art banner]
Version information:
ml-agents: 0.29.0,
ml-agents-envs: 0.29.0,
Communicator API: 1.5.0,
PyTorch: 1.13.1+cpu
[INFO] Listening on port 5004. Start training by pressing the Play button in the Unity Editor.
Traceback (most recent call last):
File "C:\Users\Ricardo\AppData\Local\Programs\Python\Python37\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "C:\Users\Ricardo\AppData\Local\Programs\Python\Python37\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Users\Ricardo\My project\venv\Scripts\mlagents-learn.exe\__main__.py", line 7, in <module>
File "c:\users\ricardo\my project\venv\lib\site-packages\mlagents\trainers\learn.py", line 260, in main
run_cli(parse_command_line())
File "c:\users\ricardo\my project\venv\lib\site-packages\mlagents\trainers\learn.py", line 256, in run_cli
run_training(run_seed, options, num_areas)
File "c:\users\ricardo\my project\venv\lib\site-packages\mlagents\trainers\learn.py", line 132, in run_training
tc.start_learning(env_manager)
File "c:\users\ricardo\my project\venv\lib\site-packages\mlagents_envs\timers.py", line 305, in wrapped
return func(*args, **kwargs)
File "c:\users\ricardo\my project\venv\lib\site-packages\mlagents\trainers\trainer_controller.py", line 173, in start_learning
self._reset_env(env_manager)
File "c:\users\ricardo\my project\venv\lib\site-packages\mlagents_envs\timers.py", line 305, in wrapped
return func(*args, **kwargs)
File "c:\users\ricardo\my project\venv\lib\site-packages\mlagents\trainers\trainer_controller.py", line 105, in _reset_env
env_manager.reset(config=new_config)
File "c:\users\ricardo\my project\venv\lib\site-packages\mlagents\trainers\env_manager.py", line 68, in reset
self.first_step_infos = self._reset_env(config)
File "c:\users\ricardo\my project\venv\lib\site-packages\mlagents\trainers\subprocess_env_manager.py", line 446, in _reset_env
ew.previous_step = EnvironmentStep(ew.recv().payload, ew.worker_id, {}, {})
File "c:\users\ricardo\my project\venv\lib\site-packages\mlagents\trainers\subprocess_env_manager.py", line 101, in recv
raise env_exception
mlagents_envs.exception.UnityTimeOutException: The Unity environment took too long to respond. Make sure that :
The environment does not need user interaction to launch
The Agents' Behavior Parameters > Behavior Type is set to "Default"
The environment and the Python interface have compatible versions.
If you're running on a headless server without graphics support, turn off display by either passing --no-graphics option or build your Unity executable as server build.
I launch training with mlagents-learn --run-id=Test1 (or a similar command).
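For reference, I run mlagents-learn without a config file, so the default hyperparameters are used. A minimal trainer config for this setup would look something like the sketch below (the behavior name CharacterAgent is a guess; it has to match the Behavior Name field in the agent's Behavior Parameters):

```yaml
behaviors:
  CharacterAgent:
    trainer_type: ppo
    hyperparameters:
      batch_size: 64
      buffer_size: 12000
      learning_rate: 3.0e-4
    network_settings:
      hidden_units: 128
      num_layers: 2
    max_steps: 500000
    time_horizon: 64
    summary_freq: 10000
```

It would be passed as mlagents-learn config.yaml --run-id=Test1.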
Code from the agent:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using Unity.MLAgents;
using Unity.MLAgents.Sensors;
using Unity.MLAgents.Actuators;

public class CharacterAgent : Agent
{
    [SerializeField] private Transform target;
    Rigidbody rBody;

    public override void Initialize()
    {
        rBody = GetComponent<Rigidbody>();
    }

    public override void OnEpisodeBegin()
    {
        // Reset agent and target to fixed positions each episode.
        // transform.position = new Vector3(Random.Range(-6.0f, -2.0f), Random.Range(-2.5f, 0.0f), Random.Range(-4.0f, 4.0f));
        transform.position = new Vector3(-3.45f, 1.28f, 0);
        target.position = new Vector3(3.55f, 0.75f, 0.57f);
    }

    public override void CollectObservations(VectorSensor sensor)
    {
        sensor.AddObservation(transform.position);
        sensor.AddObservation(target.position);
    }

    public override void OnActionReceived(ActionBuffers actions)
    {
        float moveX = actions.ContinuousActions[0];
        float moveZ = actions.ContinuousActions[1];
        Debug.Log(moveX);
        Debug.Log(moveZ);

        float on_speed = 5f;
        rBody.position += new Vector3(moveX, 0, moveZ) * Time.deltaTime * on_speed;
    }

    public override void Heuristic(in ActionBuffers actionsOut)
    {
        ActionSegment<float> continuousActions = actionsOut.ContinuousActions;
        continuousActions[0] = Input.GetAxisRaw("Horizontal");
        continuousActions[1] = Input.GetAxisRaw("Vertical");
    }

    private void OnTriggerEnter(Collider collision)
    {
        if (collision.TryGetComponent<MyTarget>(out MyTarget target))
        {
            AddReward(10f);
            EndEpisode();
        }
        if (collision.TryGetComponent<MyWalls>(out MyWalls walls))
        {
            AddReward(-2f);
            EndEpisode();
        }
    }
}
Code of the goal (sphere):
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
public class MyTarget : MonoBehaviour
{
}
Code of the walls:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
public class MyWalls : MonoBehaviour
{
}
Unity logs when I debug:
[Adaptive Performance] No Provider was configured for use. Make sure you added at least one Provider in the Adaptive Performance Settings.
UnityEngine.AdaptivePerformance.AdaptivePerformanceInitializer:Initialize () (at ./Library/PackageCache/[email protected]/Runtime/Core/AdaptivePerformanceInit.cs:60)
Couldn't connect to trainer on port 5004 using API version 1.5.0. Will perform inference instead.
UnityEngine.Debug:Log (object)
Unity.MLAgents.Academy:InitializeEnvironment () (at ./Library/PackageCache/[email protected]/Runtime/Academy.cs:459)
Unity.MLAgents.Academy:LazyInitialize () (at ./Library/PackageCache/[email protected]/Runtime/Academy.cs:279)
Unity.MLAgents.Academy:.ctor () (at ./Library/PackageCache/[email protected]/Runtime/Academy.cs:248)
Unity.MLAgents.Academy/<>c:<.cctor>b__83_0 () (at ./Library/PackageCache/[email protected]/Runtime/Academy.cs:117)
System.Lazy`1<Unity.MLAgents.Academy>:get_Value ()
Unity.MLAgents.Academy:get_Instance () (at ./Library/PackageCache/[email protected]/Runtime/Academy.cs:132)
Unity.MLAgents.Agent:LazyInitialize () (at ./Library/PackageCache/[email protected]/Runtime/Agent.cs:451)
Unity.MLAgents.Agent:OnEnable () (at ./Library/PackageCache/[email protected]/Runtime/Agent.cs:365)
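To rule out something else on the trainer side, I also checked whether anything was actually listening on port 5004 before pressing Play. This is a generic TCP socket probe, nothing ML-Agents-specific:

```python
import socket

def is_listening(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if some process accepts TCP connections on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        # connect_ex returns 0 on a successful connection, an error code otherwise
        return s.connect_ex((host, port)) == 0

# After starting mlagents-learn, this should print True: the trainer binds
# port 5004 and waits for the editor. If it prints False, the trainer never
# started listening; if it prints True but Unity still times out, the editor
# side (e.g. Behavior Type not set to Default) is the more likely culprit.
print(is_listening(5004))
```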
I have the same problem.
Do you have a Decision Requester component?
I do. I found some solutions, but it's still not working and the same error appears... could it be a version issue with Python/PyTorch?
Your Behavior Type should be Default if you want to train it.
I saw your Behavior Type is set to Heuristic. Change it to Default if you want the agent controlled by Python.
I see. Then I will redo the project and set it to Default, because I've made a lot of changes while following tutorials and troubleshooting guides for similar issues, and I want to avoid any config conflicts.
This issue is stale because it has been open for 30 days with no activity.
This issue was closed because it has been inactive for 14 days since being marked as stale. Please open a new issue for related bugs.