argoverse-forecasting
Results of LSTM Social model on validation set are worse than the const velocity model.
Hi, @jagjeet-singh
Thanks for sharing the baseline code. I'm trying to train the LSTM Social model and evaluate it on the validation set, but the results of the LSTM Social model on the validation set are worse than those of the constant velocity model.
Results on LSTM Social Model:
------------------------------------------------
Prediction Horizon : 30, Max #guesses (K): 6
------------------------------------------------
{'minADE': 13.345417598687021, 'minFDE': 25.38952803770351, 'MR': 0.9912342926631537, 'DAC': 0.9880674908796109}
Results on Const Velocity Model:
------------------------------------------------
Prediction Horizon : 30, Max #guesses (K): 6
------------------------------------------------
{'minADE': 2.7151615658689465, 'minFDE': 6.05341305248324, 'MR': 0.742146331576814, 'DAC': 0.9222993514389948}
I'm using the default parameters for training. Could you please help me sort out this issue?
These are the scripts I'm using:
*Training*:
python lstm_train_test.py \
--train_features ../features/forecasting_features/forecasting_features_train.pkl \
--val_features ../features/forecasting_features/forecasting_features_val.pkl \
--test_features ../features/forecasting_features/forecasting_features_val.pkl \
--use_social --use_delta --normalize --obs_len 20 --pred_len 30 \
--model_path ./saved_models \
--traj_save_path ./saved_trajectories/lstm_social/rollout30_traj_sept.pkl
*Generating Forecast*:
python lstm_train_test.py \
--test_features ../features/forecasting_features/forecasting_features_val.pkl \
--use_social --use_delta --normalize --obs_len 20 --pred_len 30 --test \
--model_path ./saved_models/lstm_social/LSTM_rollout30.pth.tar \
--traj_save_path ./saved_trajectories/lstm_social/rollout30_traj_sept.pkl
*Metrics*:
python eval_forecasting_helper.py --metrics \
--gt ../features/dataset/ground_truth/ground_truth_val.pkl \
--forecast ./saved_trajectories/lstm_social/rollout30_traj_sept.pkl \
--horizon 30 --obs_len 20 \
--features ../features/forecasting_features/forecasting_features_val.pkl \
--miss_threshold 2 --max_n_guesses 6
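For reference, this is roughly how I understand the K-guess metrics reported above are computed (my own Python sketch, not the repo's eval code; the function and variable names are mine):

import numpy as np

def compute_min_metrics(forecasts, ground_truth, miss_threshold=2.0):
    """forecasts: seq_id -> list of K (pred_len, 2) arrays; ground_truth: seq_id -> (pred_len, 2) array."""
    ades, fdes, misses = [], [], []
    for seq_id, gt in ground_truth.items():
        candidates = forecasts[seq_id]
        # ADE: mean point-wise L2 error over the horizon; FDE: error at the final timestep
        ade_k = [np.mean(np.linalg.norm(c - gt, axis=1)) for c in candidates]
        fde_k = [np.linalg.norm(c[-1] - gt[-1]) for c in candidates]
        ades.append(min(ade_k))
        fdes.append(min(fde_k))
        # A sequence counts as a miss if even the best endpoint is off by more than the threshold
        misses.append(min(fde_k) > miss_threshold)
    return {'minADE': np.mean(ades), 'minFDE': np.mean(fdes), 'MR': np.mean(misses)}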
*Generating Ground Truth*:
import os
import pickle
import pandas as pd

# Load the validation features and save the future (x, y) positions as ground truth
df = pd.read_pickle("./forecasting_features/forecasting_features_val.pkl")
save_path = "./ground_truth_data"
if not os.path.exists(save_path):
    os.makedirs(save_path)

val_gt = {}
for i in range(len(df)):
    seq_id = df.iloc[i]['SEQUENCE']
    # FEATURES columns 3:5 hold the X, Y coordinates; rows 20: are the 30 future timesteps
    curr_arr = df.iloc[i]['FEATURES'][20:][:, 3:5]
    val_gt[seq_id] = curr_arr

with open(os.path.join(save_path, 'ground_truth_val.pkl'), 'wb') as f:
    pickle.dump(val_gt, f)
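As a quick sanity check on the generated ground truth (my own snippet, not from the repo), each saved array should have shape (30, 2), i.e. 30 future timesteps of (x, y):

import pickle

with open("./ground_truth_data/ground_truth_val.pkl", "rb") as f:
    val_gt = pickle.load(f)

print(len(val_gt), "sequences")
sample_id = next(iter(val_gt))
print(sample_id, val_gt[sample_id].shape)  # expected: (30, 2)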
Did you find a solution? I have the same issue.
I have the same issue. Does anybody know why?