Results: 10 comments of Jiageng Mao

I solved this issue by downgrading the Transformers version with `pip install git+https://github.com/huggingface/[email protected]`

@abhigoku10 @haomengz @Gear420 We have released the code and data for evaluation [here](https://drive.google.com/drive/folders/1NCqPtdK8agPi1q3sr9-8-vPdYj08OCAE?usp=sharing).

You may refer to the code [here](https://drive.google.com/drive/folders/1ycDtlH08DRwBsP9r23UmN4BQfI8b1ngU). Hope it helps.

Thanks for your interest in our work! We pre-processed the perception results from the modules in UniAD. The data caching process is deeply integrated into the UniAD repo, so it...

@abhigoku10 I mostly referred to the caching code in VAD, but it lives in another codebase. I currently don't have plans to migrate it to this repo.

@abhigoku10 @cherry956 The nuScenes data caching should be [here](https://drive.google.com/drive/folders/1ycDtlH08DRwBsP9r23UmN4BQfI8b1ngU?usp=sharing), if I recall correctly. For customized caching, you will need to adapt it yourself.

@frkmac3 The V2 version was tested under a setting similar to ST-P3 (average over average, but the GT occupancy map is different; the code is mainly from [here](https://github.com/E2E-AD/AD-MLP/blob/main/deps/stp3/stp3/planning_metrics.py)). In V3 we...
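For readers unfamiliar with the "average over average" convention: for a given planning horizon, the per-timestep L2 errors up to that horizon are averaged first, and that per-sample average is then averaged over the dataset. A minimal sketch of this idea (the function name and array layout are my own assumptions, not taken from the linked ST-P3 code):

```python
import numpy as np

def avg_over_avg_l2(pred, gt, horizon_steps):
    """ST-P3-style "average over average" planning L2 (illustrative sketch).

    pred, gt: (N, T, 2) arrays of predicted / ground-truth ego waypoints
              (x, y); timesteps assumed evenly spaced (e.g. 0.5 s apart).
    horizon_steps: number of future steps to include (e.g. 4 -> 2 s).
    """
    # Per-timestep Euclidean distance, shape (N, T)
    l2 = np.linalg.norm(pred - gt, axis=-1)
    # Average over the first `horizon_steps` timesteps, then over samples
    return l2[:, :horizon_steps].mean(axis=1).mean()
```

This differs from conventions that report the L2 error only at the final timestep of each horizon, which is one reason numbers across papers are not directly comparable.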

@abhigoku10
1. Yes, just change the train.json.
2. Yes. We use different codebases to fine-tune different LLMs. I've tried llama2-7b before and it is very easy to modify.
3. I've...