GRIP
GRIP++ on NGSIM
Hi Xin, thank you very much for your paper and code. It is very helpful and inspirational. I am running your code now; it works very well on ApolloScape, but I ran into trouble when testing it on the NGSIM I-80 and US-101 datasets. The errors are extremely high: [Test_Epoch0] All_All: 71.520 78.297 83.292 82.014 4.246 71.224 79.316 82.848 83.530 7.253 643.541. data_process.py is used to process the NGSIM dataset; the past 3 seconds are used to predict the next 5 seconds, at 2 frames per second. Can you give me some advice on modifying the code? Thank you for your time.
@KP-Zhang At first, when I used the NGSIM dataset, I also got results as high as yours. Here are some tips I can share:
- Check your downsampling method. You should downsample by frame (i.e., by Frame_ID value), not by row index, since the rows may be ordered by Frame_ID or Vehicle_ID.
- If you increase the feature dimension, you need to change some features index, such as [mask] index in main.py [line:195].
- Modify the RMSE calculation to convert feet to meters (NGSIM coordinates are in feet).
These are all the problems I encountered when working with the NGSIM dataset; I hope this helps.
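To illustrate the first and third tips, here is a minimal numpy sketch (the column layout, the 10 Hz recording rate, and the 2 Hz target are assumptions for illustration, not the repo's actual code):

```python
import numpy as np

# Toy NGSIM-style rows: [Vehicle_ID, Frame_ID, x, y]; assume a 10 Hz recording.
data = np.array([[1, f, f * 0.1, 0.0] for f in range(1, 21)], dtype=float)

# Correct: keep rows whose Frame_ID falls on the downsampled grid
# (every 5th frame -> 2 Hz), regardless of how the rows are ordered.
down_by_frame = data[data[:, 1] % 5 == 0]

# Wrong: slicing every 5th *row* only works if rows happen to be in time
# order; it breaks as soon as the table is sorted by Vehicle_ID.
down_by_index = data[::5]

# Third tip: convert feet to meters before computing RMSE.
FEET_TO_METERS = 0.3048
down_by_frame[:, 2:4] *= FEET_TO_METERS
```

With a single vehicle sorted by time the two slices look similar, but they select different frames; with multiple interleaved vehicles, index slicing mixes trajectories entirely.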
@chaosles Thank you for your first tip. I made a mistake while downsampling the data. Thank you for your help.
Hi Xin, thank you very much for your paper and code. Can you provide the processed NGSIM dataset? Thank you for your time.
Sorry, I am not Xin; he directly used the data processing code of conv-social-pooling (https://github.com/nachiket92/conv-social-pooling/blob/master/preprocess_data.m).
Thanks. Have you reproduced the NGSIM results reported in the paper?
Hi, @chaosles @xincoder ,
How do you calculate RMSE on NGSIM?
I noticed the calculation in line 53 of main.py. In my mind, if we compute RMSE, the equation should be
overall_loss_time = (overall_sum_time / overall_num_time)**0.5
instead of
overall_sum_time = np.sum(all_overall_sum_list**0.5, axis=0)
overall_num_time = np.sum(all_overall_num_list, axis=0)
overall_loss_time = overall_sum_time / overall_num_time
def compute_RMSE(pra_pred, pra_GT, pra_mask, pra_error_order=2):
    pred = pra_pred * pra_mask  # (N, C, T, V)=(N, 2, 6, 120)
    GT = pra_GT * pra_mask  # (N, C, T, V)=(N, 2, 6, 120)
    x2y2 = torch.sum(torch.abs(pred - GT)**pra_error_order, dim=1)  # x^2+y^2, (N, C, T, V) -> (N, T, V)=(N, 6, 120)
    overall_sum_time = x2y2.sum(dim=-1)  # (N, T, V) -> (N, T)=(N, 6)
    overall_mask = pra_mask.sum(dim=1).sum(dim=-1)  # (N, C, T, V) -> (N, T)=(N, 6)
    overall_num = overall_mask
    return overall_sum_time, overall_num, x2y2
def display_result(pra_results, pra_pref='Train_epoch'):
    all_overall_sum_list, all_overall_num_list = pra_results
    overall_sum_time = np.sum(all_overall_sum_list**0.5, axis=0)
    overall_num_time = np.sum(all_overall_num_list, axis=0)
    overall_loss_time = (overall_sum_time / overall_num_time)
    overall_log = '|{}|[{}] All_All: {}'.format(datetime.now(), pra_pref, ' '.join(['{:.3f}'.format(x) for x in list(overall_loss_time) + [np.sum(overall_loss_time)]]))
    my_print(overall_log)
    return overall_loss_time
Hi @KP-Zhang , thank you for your question. As I commented for another question (https://github.com/xincoder/GRIP/issues/6#issuecomment-737605154), the implementation of this function calculates the equation you mentioned above. Computing "sqrt" in the denominator has a numerical stability problem. Thus, in the released code, we only use it to monitor the training and validation performance of the model. The implementation does not impact the testing results. If we want to get the results on the testing set, we have to submit the predicted results to the Baidu Apolloscape website. For the NGSIM dataset, I calculated the RMSE once using the correct equation after training the model.
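For intuition, here is a small numpy sketch (the numbers are invented) of how the released monitoring formula and the standard RMSE differ:

```python
import numpy as np

# Toy per-sample squared errors at one prediction horizon,
# each sample containing exactly one valid point.
sq_err = np.array([1.0, 4.0, 9.0])
num = np.array([1.0, 1.0, 1.0])

# Released code: sqrt per sample first, then average -> a
# displacement-error-style metric, usable for monitoring training.
monitor = np.sum(sq_err**0.5) / np.sum(num)   # (1 + 2 + 3) / 3 = 2.0

# Standard RMSE: average the squared errors first, take sqrt last.
rmse = (np.sum(sq_err) / np.sum(num))**0.5    # sqrt(14 / 3) ~ 2.16
```

The two agree only in degenerate cases (e.g., all errors equal), so results computed with one formula cannot be compared directly against results computed with the other.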
@xincoder Thank you for your clarification.
Hi @xincoder, thank you for your patience. As you mentioned in https://github.com/xincoder/GRIP/issues/11: "After generating the training data with the conv-social-pool code, I did not use data_process.py; instead, I directly modified the data_loader to load the processed data." Is it possible for you to share your code for the NGSIM dataset, or the data loader code, with me? It would be very helpful. If possible, my email address is [email protected]. Thank you for your attention.
Hi @KP-Zhang, I am traveling internationally and waiting for my visa now. Before I started traveling, I shut down my servers, so I am not able to access GRIP's code at the moment. Sorry about that. I do not know how long the visa will take during the pandemic. I will send you the code once I get access to it. Thank you.
Hi @xincoder , thank you for your sharing.
Hi @chaosles @xincoder, is it possible for you to share your code for the NGSIM dataset or the data loader code with me? It would be very helpful. My email address is [email protected]. Thank you for your attention.
Hi @guoyage, as we mentioned in our paper, we use the conv-social-pooling code directly. We did not make any modification to the processing of the NGSIM dataset. Thank you very much.
Thank you for your clarification. But I ran into trouble when testing the code on the NGSIM I-80 and US-101 datasets: the errors are extremely high, even though I also use the conv-social-pooling code, so I don't know what is going wrong. Is it possible for you to share your code for the NGSIM dataset? This would help me a lot. Thanks anyway.
Hi @guoyage, as you may have already noticed, the U.S. embassy and consulates in China have been canceling immigrant and nonimmigrant visa appointments over the past few months. I am still stuck in China and am not able to access my own server.
@xincoder Hello! I want to ask: after you directly used the data processing code of conv-social-pool, how did you modify the relevant data_loader code? If it is convenient, could you send the data_loader code to my email [email protected]? Thanks very much!
Hi @Xiejc97, as I mentioned above, I am stuck in China because of the COVID-19 travel restrictions, so I am not able to access my server now. Sorry about that. Even so, if I remember correctly, I used scipy.io.loadmat to load the *.mat files generated by conv-social-pool.
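For reference, a minimal round-trip sketch of loading a conv-social-pooling-style .mat file this way (the 'traj' key follows that repository's own data loader; the file name and array shape here are toy placeholders, not the real preprocessed data):

```python
import numpy as np
import scipy.io

# Stand-in for the output of conv-social-pooling's preprocess_data.m:
# save a toy 'traj' array, then load it back with scipy.io.loadmat.
scipy.io.savemat('ToySet.mat', {'traj': np.zeros((5, 4))})

mat = scipy.io.loadmat('ToySet.mat')
traj = mat['traj']  # one row per sample in the real data
print(traj.shape)
```

A custom data loader would then index rows of traj (and the companion 'tracks' cell array in the real files) instead of going through data_process.py.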
@KP-Zhang Hello! I want to ask: after you directly used the data processing code of conv-social-pool, how did you modify the relevant data_loader code? If it is convenient, could you send the data_loader code to my email [email protected]? Thanks very much!
@xincoder Hello, I have encountered problems similar to other people's while training on NGSIM. Would it be convenient for you to send the relevant code to my email [email protected]? Thank you very much.
@xincoder Thank you for your wonderful sharing; I benefited a lot from reading your article. A recent issue I need to address is loading the NGSIM dataset into the GRIP network and completing the evaluation. If it is convenient, could you send me a copy of the data_loader code for NGSIM processing?
My email is [email protected]
Hello, I ran into a problem when reproducing the GRIP++ results on NGSIM. I changed the evaluation criteria in the source code, but the error is still large. Could you share a copy of your GRIP++ reproduction code with me? Thank you. My email is [[email protected]]
Hello, have you received a copy of the program that runs on the NGSIM dataset from the author? If so, could you send me a copy?
Hello, do you know how he loaded the .mat files processed by deo into the main model? Could you send me the modified code for reference? My email is [email protected]
Nice work! @xincoder
Now that COVID-19 is over, can you upload the dataloader.py for the NGSIM dataset? Looking forward to your reply.
I think there is a serious flaw in the way you calculate it: the overall_loss_time = (overall_sum_time / overall_num_time)**0.5 code should be replaced with overall_loss_time = (2*overall_sum_time / overall_num_time)**0.5. I also checked your latest article, "AI-TP: Attention-Based Interaction-Aware Trajectory Prediction for Autonomous Driving", and I am very confused about the RMSE calculation method used there. If it is your previous calculation method, the RMSE you report will be much smaller than the actual RMSE, which is a serious error. Maybe my understanding is wrong; I hope you can provide further explanation to clear up my confusion and confirm the reliability of the article's conclusions. Looking forward to your reply.
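To make the factor-of-2 argument concrete, here is a small numpy re-implementation of the compute_RMSE bookkeeping (shapes shrunk from the original (N, 2, 6, 120) for readability, values made up): because pra_mask is summed over both coordinate channels, overall_num counts every valid point twice.

```python
import numpy as np

# One sample, C=2 coordinates, T=1 step, V=2 vehicles, all points valid.
pred = np.array([[[[1.0, 2.0]], [[1.0, 2.0]]]])   # (N, C, T, V) = (1, 2, 1, 2)
gt = np.zeros_like(pred)
mask = np.ones_like(pred)

x2y2 = np.sum((pred - gt)**2, axis=1)         # (N, T, V): x^2 + y^2 per point
overall_sum = x2y2.sum(axis=-1)               # (N, T): sum over vehicles
overall_num = mask.sum(axis=1).sum(axis=-1)   # (N, T): counts each point TWICE (C=2)

naive = (overall_sum / overall_num)**0.5      # denominator is 2 * #points
fixed = (2 * overall_sum / overall_num)**0.5  # corrected, as proposed above

# Ground truth: mean of x^2+y^2 over the 2 points, then sqrt.
true_rmse = (x2y2.sum() / x2y2.size)**0.5
```

Here x2y2 is [2, 8], overall_num is 4 rather than 2, so the naive form yields sqrt(10/4) while the corrected form matches true_rmse = sqrt(5).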