Autonomous-Driving-in-Carla-using-Deep-Reinforcement-Learning
Training exits before the while-loop condition is met
Hello Idree,
I am running a new training run and the code exits before the while-loop condition is met. Why is this happening, and how can I complete a full training run without it being stopped so frequently?
Thank you.
I have a similar problem: the training process is stopped too frequently. I'm using CARLA version 0.9.14.
A simple solution is to make x_driver.py and environment.py sleep for a longer time, although this decreases efficiency.
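For what it's worth, this is the kind of change I mean. A minimal sketch, assuming you connect to the server with the standard CARLA Python client; the sleep and timeout values are examples only, not the repo's actual settings:

```python
import time
import carla  # CARLA 0.9.14 Python API

CONNECT_SLEEP = 5.0  # example value: pause after connecting
RESET_SLEEP = 2.0    # example value: pause after every episode reset

def connect(host: str = "localhost", port: int = 2000) -> carla.World:
    """Connect to the CARLA server and give it time to settle."""
    client = carla.Client(host, port)
    client.set_timeout(20.0)   # raising the RPC timeout also helps
    world = client.get_world()
    time.sleep(CONNECT_SLEEP)  # let the server finish loading the map
    return world

# In environment.py / x_driver.py, the idea is simply to add a longer
# time.sleep(RESET_SLEEP) after each reset, so the server can finish
# destroying and respawning actors before the next client request.
```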
Can you explain the details? Thank you.
Sorry for seeing this message so late. I have solved the problem, so please ignore my earlier messages. The real solution is to retrain the VAE network, since it can output NaN values when you use the original network parameters.
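If you want to confirm this is what's happening before retraining, here is a minimal sketch that scans a saved checkpoint for NaN/Inf parameters, assuming the VAE is a PyTorch model; the file name below is only a placeholder for whatever you actually load from autoencoder/model/current. Even if the stored weights look clean, you would still want to run a forward pass on one of your training images and check the resulting latent vector with torch.isnan.

```python
import torch

# Placeholder path: point this at the file you load from autoencoder/model/current.
CHECKPOINT = "autoencoder/model/current/vae_weights.pth"

obj = torch.load(CHECKPOINT, map_location="cpu")
# The file may hold either a bare state_dict or a full nn.Module.
state_dict = obj.state_dict() if isinstance(obj, torch.nn.Module) else obj

bad = [name for name, t in state_dict.items()
       if torch.isnan(t).any() or torch.isinf(t).any()]
print("parameters with NaN/Inf:", bad or "none found")
```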
Hello, I have the same question; can you explain it in detail? Thank you!
Simply use vae.py to retrain the VAE network; don't use the model parameters under autoencoder/model/current.
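In case it helps the next person: a small sketch (the paths are examples, not necessarily the repo's exact layout) for backing up the shipped parameters before letting a fresh vae.py run overwrite the directory that the training scripts load from:

```python
import shutil
from pathlib import Path

# Example paths based on the directory mentioned above; adjust to your checkout.
model_dir = Path("autoencoder/model")
current = model_dir / "current"
backup = model_dir / "original_backup"

# Keep a copy of the original (possibly NaN-producing) parameters, then
# retrain with vae.py so it writes fresh weights into model/current.
if current.exists() and not backup.exists():
    shutil.copytree(current, backup)
    print(f"Backed up {current} -> {backup}; now rerun vae.py to retrain.")
```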
Thank you very much!
Can you please tell me what system configuration is needed to run this? My system is quite low-end :( Intel(R) Core(TM) i3-7020U CPU @ 2.30 GHz, 8.00 GB RAM, 64-bit operating system, x64-based processor.
@WangJuan6 @Oliverbihop @Michael-Fuu