problem with simulation
Hi, thanks for sharing your marvelous work! I have some issues with the simulation using GDAE, because I don't have an actual car.
I trained the model from your other work "DRL navigation" and changed the corresponding code in https://github.com/reiniscimurs/GDAE/blob/fc793eda8de23bed98ba3acd32908843c535510f/Code/GDAM.py, and I modified the topics in GDAM_env.py to align with my own robot's topics, as shown.
Meanwhile, slam_toolbox is used for map construction, but it seems that no goal is published and the robot cannot move at all. I checked self.nodes and found it was not empty.
Here are my RViz screenshot and rqt_graph. I tried to run `rostopic echo /global_goal_publisher`, but no response was received. Could you give me some advice about this problem?
Hi,
have you tried visualizing all the nodes in rviz? All node topics are published, and you should be able to visualize them as markers in rviz. Then you can see if there are any nodes being recorded and processed.
What are your log outputs? If you did not have any goal nodes, or any nodes to select goals from, the program would encounter an indexing error.
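For a quick check outside of rviz, a small standalone script along these lines (my sketch here; the topic name is taken from the publisher setup in GDAM_env.py) would print whether any node markers are arriving at all:

```python
import rospy
from visualization_msgs.msg import Marker

def marker_callback(msg):
    # each node marker message carries the recorded node positions
    rospy.loginfo("received node marker with %d points", len(msg.points))

rospy.init_node("node_marker_check")
rospy.Subscriber("vis_mark_array_node", Marker, marker_callback)
rospy.spin()
```

If nothing is printed here, the nodes are never published and the problem is upstream of the goal selection.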
Thanks for your patience and time.
In terminal 1, I run `roslaunch slam_toolbox online_sync.launch`, and here is the output.
In terminal 2, I run `python GDAM.py`. Here is the modified code:
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import numpy as np
from GDAM_env import ImplementEnv
from GDAM_args import d_args
import time


class Actor(nn.Module):
    def __init__(self):
        super(Actor, self).__init__()
        self.layer_1 = nn.Linear(24, 800)
        self.layer_2 = nn.Linear(800, 600)
        self.layer_3 = nn.Linear(600, 2)
        self.tanh = nn.Tanh()

    def forward(self, s):
        s = F.relu(self.layer_1(s))
        s = F.relu(self.layer_2(s))
        a = self.tanh(self.layer_3(s))
        return a


env = ImplementEnv(d_args)

# Set the parameters for the implementation
# device = torch.device("cuda" if torch.cuda.is_available() else "cpu")  # cuda or cpu

# Load the model checkpoint
file_name = "/home/zhengchen/DRL-robot-navigation/TD3/pytorch_models/TD3_velodyne_actor.pth"
actor = Actor()
try:
    actor.load_state_dict(torch.load(file_name))
except:
    raise ValueError("Could not load the stored model parameters")

print("running here")
print("running here")
print("running here")

while True:
    action = torch.tensor([0.0, 0.0])
    s2, toGoal = env.step(action)
    s = np.append(s2, toGoal)
    s = np.append(s, action.numpy())

    print("running here 222")
    print("running here 222")
    print("running here 222")
    print("running here 222")

    while True:
        a = actor(torch.tensor(s, dtype=torch.float).unsqueeze(0))
        aIn = a.clone()
        aIn[0, 0] = (aIn[0, 0] + 1) / 4
        print(aIn)
        print(aIn)
        print(aIn)
        s2, toGoal = env.step(aIn[0])
        s = np.append(s2, a[0].detach().numpy())
        s = np.append(s, toGoal)
```
and here is the output.
I didn't see any markers in Rviz like in the project https://github.com/reiniscimurs/DRL-robot-navigation, as shown.
I checked that I set the goal to x=4, y=-1.75 in GDAM_args.py, and I used the same gazebo control lib as the DRL-robot-navigation project.
I am confused that there is no output like "running here" or "running here 222" in terminal 1, but after I used Ctrl+C to stop the terminal, there are some outputs, as shown.
It seems that the number of lidar readings from the ROS topic /r1/front_laser/scan is 360. The slices laser_in[381:420], laser_in[421:460], and the further ones are empty in GDAM_env.py, line 211. I am sure that I didn't change the code of multi_robot_scenario.launch in the DRL-robot-navigation project. Could you give me some suggestions to run it correctly?
Thanks for the images and code snippets.
I am a bit confused at what point the `ValueError: min()...` appears. Does it appear when you execute the program, or only after killing it with Ctrl+C?
In any case, it looks like the program can load the weights and get into the step method of the environment. I assume it could be a sensor issue and some compatibility problem. Could you provide the full step method code from your implementation? In GDAM we expect two RpLidars as inputs with 1440 laser readings each. From these 1440 values we then take the 720 values that cover the 180-degree field of view in front of the laser. I assume your setup looks different, so there could be an issue with how the laser data is obtained, as this code would need to be adapted. It could just be that it is stuck in one of those while loops. What lidar are you using, and how many sensor values does it have?
Thanks for your time and patience. The `ValueError: min()...` only appears after I press Ctrl+C. The lidar sensor I use is the same as the one defined in the DRL-robot-navigation project; I didn't change anything at all. I just subscribe to the same topic, r1/front_laser/scan, for the two laser_in inputs. It is true that the lidar data only contains 380 values, which is why the code `min(laser_in[381:420]), min(laser_in[421:460]), ...` in line 201 cannot execute correctly. Does the lidar you use have a 360-degree field of view whose values you then reduce? I think the lidar used in the DRL-robot-navigation project only contains 360 values, which may be why it gets stuck.
So the lidars for GDAM are real physical RpLidar sensors. This means you automatically get a 360-degree range with 1440 samples and have to limit the range by using only the frontal 180 degrees. This is done by collecting only the front readings in the step method: https://github.com/reiniscimurs/GDAE/blob/main/Code/GDAM_env.py#L184-L196
In DRL-robot-navigation we use simulated lidars. There you can constrain the field of view in the sensor setup: https://github.com/reiniscimurs/DRL-robot-navigation/blob/main/catkin_ws/src/multi_robot_scenario/xacro/laser/hokuyo.xacro#L42-L43 Additionally, you can set the number of samples you get out of the simulated sensor. In order to use the simulated sensor in GDAM, you will have to properly update the sensor code so that it makes sense in the implementation. You can use a single lidar sensor and do not require both as in the current GDAM code. But you will have to make sure that the gaps are sliced properly.
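As a rough illustration of the slicing idea, here is a minimal sketch (my own example, not code from the repository; the sample count and gap count are assumptions to adapt to your sensor) of reducing the frontal readings of a single simulated lidar to per-gap distances, mirroring the `min(laser_in[a:b])` calls in GDAM_env.py:

```python
import numpy as np

def laser_to_gaps(laser_in, n_gaps=20):
    # laser_in: the frontal 180-degree readings of a single simulated lidar
    slices = np.array_split(np.asarray(laser_in), n_gaps)
    # keep one representative (closest) distance per gap,
    # analogous to min(laser_in[a:b]) in the step method
    return [float(np.min(s)) for s in slices]

# e.g. a 360-sample frontal scan becomes 20 gap distances of 18 samples each
gaps = laser_to_gaps(np.random.uniform(0.2, 10.0, 360))
```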
Hi, thanks for your suggestion.
I modified the lidar's sample count for the implementation, and there is no warning or error for that anymore.
But the robot still cannot move. I checked whether the goal was published with the command `rostopic echo /move_base/TrajectoryPlannerROS/global_plan`, and the output was: "WARNING: no messages received and simulated time is active. Is /clock being published?"
It seems no goal is generated, which I believe is the reason it is stuck.
In GDAM_env.py:
```python
self.vel_pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)
self.vel_pub_rob = rospy.Publisher('/r1/cmd_vel', Twist, queue_size=1)
topic_nodes_pub = 'vis_mark_array_node'
self.nodes_pub = rospy.Publisher(topic_nodes_pub, Marker, queue_size=3)
topic_nodes_pub_closed = 'vis_mark_array_node_closed'
self.nodes_pub_closed = rospy.Publisher(topic_nodes_pub_closed, Marker, queue_size=3)
topic_map_nodes = 'vis_mark_array_map_nodes'
self.map_nodes_viz = rospy.Publisher(topic_map_nodes, Marker, queue_size=3)
topic = 'vis_mark_array'
self.publisher = rospy.Publisher(topic, MarkerArray, queue_size=3)
self.global_goal_publisher = rospy.Publisher('global_goal_publisher', MarkerArray, queue_size=1)
self.navLaser = rospy.Subscriber('/r1/front_laser/scan', LaserScan, self.Laser_callback, queue_size=1)
self.navLaserTop = rospy.Subscriber('/r1/front_laser/scan', LaserScan, self.Laser_callback_Top, queue_size=1)
self.navOdom = rospy.Subscriber('/r1/odom', Odometry, self.Odom_callback, queue_size=1)
self.path = rospy.Subscriber('/move_base/TrajectoryPlannerROS/global_plan', Path, self.path_callback, queue_size=1)
self.client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
self.client.wait_for_server()
self.listener = tranf.TransformListener()
```
I don't understand the meaning of the subscriber `self.path` and the client `self.client`. Are they published by another ROS package? Also, I cannot echo any data from the `self.global_goal_publisher` topic, but I can echo the goal point published by the DRL-robot-navigation project (https://github.com/reiniscimurs/DRL-robot-navigation/blob/main/TD3/velodyne_env.py):
```python
self.publisher = rospy.Publisher("goal_point", MarkerArray, queue_size=3)
self.publisher2 = rospy.Publisher("linear_velocity", MarkerArray, queue_size=1)
self.publisher3 = rospy.Publisher("angular_velocity", MarkerArray, queue_size=1)
```
Can you give me some advice for this problem? Thanks.
Sorry for the messy code. Here is an image of the publishers and subscribers.
Ok, that is actually a good catch. Yes, move_base is a path planner that you need to launch beforehand: https://wiki.ros.org/move_base. It is part of the navigation stack with its own setup: https://wiki.ros.org/navigation/Tutorials/RobotSetup
Essentially, there is a callback in `self.path` to get the generated path from the move_base plugin. But in order to get it, you require a client in `self.client` that will communicate with this plugin. This client needs to establish communication with the move_base server via `self.client.wait_for_server()`. What could be happening is that you do not have a server running, so the client is perpetually waiting to establish communication.
There are 2 things you could try:
- Set the timeout for the `self.client.wait_for_server()` call with something similar to `self.client.wait_for_server(timeout=Duration(5))`. I do not recall the actual Python code for this, but see the C++ implementation as guidance here: https://wiki.ros.org/actionlib_tutorials/Tutorials/SimpleActionClient. This should let you see if the code is stuck waiting for the move_base server (see the sketch after this list).
- Remove the move_base and path planner code. It is not strictly necessary, as it only gives a couple more points of interest along the way the robot has already explored, so it is not critical to getting the code to run. You can do this by removing any use of `self.path` and `self.client` and the related functionality.
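For the first option, a minimal Python sketch could look something like this (my adaptation for illustration; inside GDAM_env the node and client already exist, so only the timeout argument would change):

```python
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction

rospy.init_node("move_base_check")  # not needed inside GDAM_env itself
client = actionlib.SimpleActionClient("move_base", MoveBaseAction)

# wait_for_server returns False if the server did not show up in time,
# instead of blocking forever like the no-argument call
if not client.wait_for_server(rospy.Duration(5)):
    rospy.logwarn("move_base action server not available - is move_base running?")
```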
Thank you so much for your advice! It finally worked!
I added a move_base plugin to my workspace with the supporting .yaml files. Now I can run it in simulation perfectly.
One remaining issue is that the trained model does not seem to be good, and the robot simply rotates in place. I will try to train it another time and see how it goes.
Thank you again for sharing your amazing work and for your help.
That is great!
I still do not see any other possible nodes though. These should be added based on laser scans and shown as blue dots in rviz. As for the network performance, I would still first suggest checking whether the laser input is actually what you expect. Without changing how the data is read and processed in the step method, I would not expect the network to work either.
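If it helps, a small standalone check like this (my sketch; the topic name is the one used earlier in this thread) would confirm the sample count and field of view that the step method actually receives:

```python
import math

import rospy
from sensor_msgs.msg import LaserScan

def scan_callback(scan):
    # print how many readings arrive and what angular range they cover
    rospy.loginfo("samples: %d, field of view: %.1f deg",
                  len(scan.ranges),
                  math.degrees(scan.angle_max - scan.angle_min))

rospy.init_node("scan_check")
rospy.Subscriber("/r1/front_laser/scan", LaserScan, scan_callback)
rospy.spin()
```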
At this point, I'm getting lots of bugs, and the GDAM.py link above isn't working. Could you tell me which part of the code needs to be modified? I've trained the model, but how do I use it with GDAE?