Markus Hinsche
Sorry, I am afraid this issue has not been addressed yet. We are currently not planning to work on it actively, but we would be happy if somebody wants to contribute this...
Please refer to https://github.com/carla-simulator/imitation-learning/blob/master/agents/imitation/imitation_learning.py. This controller has to read the model/checkpoint file. This is how you can run it in the CARLA environment.
You then have to go to yet another repository (the CARLA main repository), https://github.com/carla-simulator/carla/blob/master/Deprecated/PythonClient/driving_benchmark_example.py, and swap out the agent for the one in imitation_learning.py.
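For reference, a rough sketch of what that swap could look like inside driving_benchmark_example.py. The ImitationLearning constructor arguments and the exact benchmark call are from memory, so treat them as assumptions and check the two files linked above:

```python
# Sketch only: replace the ForwardAgent used in driving_benchmark_example.py
# with the trained ImitationLearning agent from the imitation-learning repo.
# Constructor arguments are assumed; check
# agents/imitation/imitation_learning.py for the exact signature.
from carla.driving_benchmark import run_driving_benchmark
from carla.driving_benchmark.experiment_suites import CoRL2017
from agents.imitation.imitation_learning import ImitationLearning

city_name = 'Town01'
# The agent loads the model/checkpoint file internally when it is constructed.
agent = ImitationLearning(city_name, avoid_stopping=True)
experiment_suite = CoRL2017(city_name)

# Runs the benchmark with the imitation-learning agent instead of ForwardAgent.
run_driving_benchmark(agent, experiment_suite, city_name, 'imitation_benchmark')
```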
Currently the code only supports training a model from scratch. It shouldn't be too hard to support fine-tuning, e.g., by reading an old checkpoint and continuing training from there.
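As a rough illustration (not code from this repo), resuming TensorFlow 1.x training from an earlier checkpoint could look like the sketch below; the variable, the checkpoint directory, and the training loop are placeholders:

```python
import tensorflow as tf

# Minimal sketch: build (or import) the model graph, then restore an existing
# checkpoint before entering the usual training loop, instead of starting from
# freshly initialized weights. 'old_checkpoints/' is a placeholder directory.
weights = tf.get_variable('weights', shape=[10, 5])  # stand-in for the real network
saver = tf.train.Saver()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    ckpt = tf.train.latest_checkpoint('old_checkpoints/')
    if ckpt is not None:
        saver.restore(sess, ckpt)  # pick up training from the old weights
    # ... continue the regular training loop from here ...
```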
Hi Chris, it has happened to me too that the values of the outputs didn't match the distribution of the data. Some things that might work: - try different hyper-parameters...
> I realized that after 90000 iterations, the model started to have these values explode

I experienced the same behavior (it is very easy to see in TensorBoard). Every time...
We didn't add additional training images/sequences, so I would have to do some research to find that out myself.
> Hi, I noticed that for every training sample the network outputs predictions for all 5 output branches, but the loss is then (correctly) calculated using the output from the branch...
> So if, for example, we have a batch of 20 images, 10 of them are Right and 10 of them are Straight, does this mean the Left and Follow...
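For anyone puzzling over this, here is a generic sketch (not taken from this repo) of how conditional-branch loss masking is commonly implemented: every branch produces an output, but a one-hot mask built from the command zeroes out all branches except the one the sample belongs to, so the other branches contribute nothing to the loss or the gradients for that sample.

```python
import numpy as np

# Generic illustration of conditional-branch loss masking, with made-up data.
batch_size, num_branches, num_actions = 20, 4, 3  # e.g. Follow/Left/Right/Straight
predictions = np.random.randn(batch_size, num_branches, num_actions)
targets = np.random.randn(batch_size, num_actions)
commands = np.random.randint(0, num_branches, size=batch_size)  # branch index per sample

# One-hot mask: 1 for the branch matching the sample's command, 0 elsewhere.
mask = np.zeros((batch_size, num_branches))
mask[np.arange(batch_size), commands] = 1.0

# Squared error of every branch, then masked so only the active branch counts.
per_branch_error = ((predictions - targets[:, None, :]) ** 2).sum(axis=2)
loss = (per_branch_error * mask).sum() / batch_size
print(loss)
```

So in the 20-image example above, the Left and Follow branches still produce outputs, but their errors are multiplied by zero and therefore do not affect the loss for those samples.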