Inquiry Regarding LiM3D Training Efficiency and Resource Requirements
Thank you for your outstanding work on the LiM3D framework. I am currently reproducing your code and applying it to a dataset much smaller than SemanticKITTI for experimental purposes. During the first training stage I ran into several issues and would like to ask for your advice.

Due to limited resources (two A40 GPUs, 48 GB each), I frequently hit out-of-memory errors and training is very slow. According to your paper, you trained on four A100 GPUs (80 GB each). My questions are:

1. Does the LiM3D framework depend heavily on GPU memory or computational resources?
2. Are there recommended settings (e.g., batch size, feature dimensions, gradient accumulation) for training on smaller datasets or with limited hardware?
3. In the first training stage, is there a suggested mIoU threshold or other indicator for deciding when to stop training or consider it complete?

Thank you very much for your time; I look forward to your guidance.
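As a workaround I have been experimenting with gradient accumulation to emulate a larger effective batch on the A40s. A minimal framework-agnostic sketch of the idea (hypothetical 1-D linear model and made-up numbers, not LiM3D's actual training loop): splitting a batch into micro-batches, scaling each micro-batch gradient by the number of accumulation steps, and applying a single update reproduces the full-batch update while holding only one micro-batch in memory at a time.

```python
def grad(w, xs, ys):
    """Mean-squared-error gradient of a toy 1-D linear model y = w * x."""
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)

xs = [0.5, -1.0, 2.0, 1.5]   # full batch of 4 samples (made-up data)
ys = [1.0, -2.0, 4.0, 3.0]
w, lr = 0.0, 0.1

# Full-batch update: the memory-hungry baseline.
w_full = w - lr * grad(w, xs, ys)

# Accumulated update: two micro-batches of 2, gradients scaled by the
# number of accumulation steps so they average to the full-batch gradient.
accum_steps = 2
accum = 0.0
for i in range(0, len(xs), 2):
    accum += grad(w, xs[i:i + 2], ys[i:i + 2]) / accum_steps
w_accum = w - lr * accum

print(abs(w_full - w_accum) < 1e-12)  # True: both updates coincide
```

Would a setup like this (with a correspondingly reduced per-GPU batch size) be expected to preserve the behavior of your first-stage training, or are there batch-size-sensitive components (e.g., batch normalization statistics) that make plain accumulation a poor substitute?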