ViDAR
Problems with image normalization parameter settings
https://github.com/OpenDriveLab/ViDAR/blob/4d773e696677cccbae3910ff114d6542725010a3/projects/configs/vidar_pretrain/nusc_fullset/vidar_full_nusc_1future.py#L46

Hi, great work on ViDAR! I have a quick question about the image normalization settings. In the config, the normalization for image inputs uses std=1, which means the pixel values are not scaled to unit variance; only mean subtraction is applied.
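To illustrate what I mean (a minimal sketch; the mean/std values below are typical mmdet-style numbers for a Caffe-pretrained backbone and are placeholders, not necessarily the exact values in the ViDAR config):

```python
import numpy as np

# Placeholder per-channel statistics (BGR order), assumed for illustration only.
mean = np.array([103.530, 116.280, 123.675], dtype=np.float32)
std_identity = np.array([1.0, 1.0, 1.0], dtype=np.float32)    # std=1: mean subtraction only
std_full = np.array([58.395, 57.120, 57.375], dtype=np.float32)  # typical full standardization

img = np.random.randint(0, 256, (4, 4, 3)).astype(np.float32)  # dummy image

# With std=1, normalized pixels keep the raw value scale (roughly -124..152);
# dividing by a per-channel std instead rescales them toward unit variance.
norm_mean_only = (img - mean) / std_identity
norm_standardized = (img - mean) / std_full

print(norm_mean_only.std(), norm_standardized.std())
```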
Could you clarify the reason behind this design choice? Was it found to be more effective during training, or is it related to the specific properties of the generative learning framework in latent space?
Thanks in advance for your explanation!