panacea
[CVPR2024] Official Repository of Paper "Panacea: Panoramic and Controllable Video Generation for Autonomous Driving"
```
ln -s /data/code/StreamPETR/data/nuscenes/data/Dataset/nuScenes/gen-nuscenes-val/ gen-nuscenes-val
ln -s /data/code/StreamPETR/data/nuscenes/data/Dataset/nuScenes/gen-nuscenes-train/ gen-nuscenes-train
```
I know this step creates soft links, but what does it do in the overall system?
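A likely reading (an assumption, not confirmed by the repo): the symlinks expose the generated nuScenes frames under the directory names the downstream StreamPETR pipeline expects, without duplicating the data. A minimal self-contained sketch with hypothetical demo paths:

```shell
# Hypothetical demo paths; a symlink makes existing data appear under the
# directory name downstream code expects, without copying anything.
mkdir -p /tmp/panacea_link_demo/real_data
echo "frame_0001" > /tmp/panacea_link_demo/real_data/sample.txt
cd /tmp/panacea_link_demo
rm -f gen-nuscenes-val
ln -s real_data gen-nuscenes-val    # alias to the real directory, not a copy
cat gen-nuscenes-val/sample.txt     # reads through the link
```

Any code that opens `gen-nuscenes-val/...` transparently reads the linked directory, so the generated dataset can live anywhere on disk.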
@wenyuqing Hi, thanks for your great work. The training code is not available. Do you have a schedule for releasing it? Thanks.
Thanks for your amazing work in enhancing the performance of the downstream AV tasks. Could you please elaborate on the process you used to generate your gen-nuScenes dataset? Additionally, what...
Hello, very nice work! I have a question: when I run the inference code with the default settings on 8 × 32 GB GPUs, I run out of memory,...
Hi, how can we reproduce the evaluation on StreamPETR-S (single frame), using only key frames generated? Please shed some light!
Thanks for your great work. However, in our experiment we tried different text prompts following your instructions (the dataset preparation and inference code), but the video generation results are almost...
Hi, thanks for making the code open source. I was able to run inference with frame length 4 on 2 A6000 GPUs. I wanted to ask how can I assign...
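The issue text is cut off, but if the question is how to pin inference to particular GPUs, a common approach (generic CUDA practice, not specific to this repo) is the `CUDA_VISIBLE_DEVICES` environment variable; the `inference.py` entry-point name below is an assumption:

```shell
# Hypothetical usage (entry-point name assumed):
#   CUDA_VISIBLE_DEVICES=0,1 python inference.py
# The variable is per-process: the child sees only the listed physical
# GPUs, which CUDA renumbers from 0. Demonstrated here with printenv:
CUDA_VISIBLE_DEVICES=0,1 printenv CUDA_VISIBLE_DEVICES
```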
The code you released does not include the training code. Is there a plan to release it?
Hello, thank you for your excellent work! I had a problem when trying to run inference.py. The model successfully output the video, but in this video, only...