LMDrive
Minimum GPU memory to run evaluation?
Hi,
I'm trying to run the evaluation through run_evaluation.sh
on an RTX 3070 Ti GPU, but the GPU ran out of memory while loading the models. Can you share your evaluation setup? Do you have a rough sense of how much GPU memory is needed to run such experiments?
Thanks.
Hi!
I ran the evaluation on a 3090. I tested it just now and it needed 18567 MB of GPU memory. I suggest running Carla on a local PC (with a screen) and running the model on a remote GPU server; they can communicate with each other over the network. You can find the related settings in run_evaluation.sh
Thanks for your fast response. I have a quick follow-up question: does the current script allow using multiple GPUs if a single GPU's memory is not sufficient (e.g. I have 4 GPUs with 8 GB of memory each, but not one 3090 with ~20 GB)?
Sorry, the current script doesn't support this feature. But the model (without the Carla server) only consumes about 14 GB of memory, so a 16 GB V100 is enough. Maybe quantization can be applied to reduce the GPU memory requirement.
Thanks for your answer. Regarding "running Carla on the local PC (with a screen) and running the model on the remote GPU server", I looked at leaderboard/scripts/run_evaluation.sh
but could not find any related network settings for running the model on a remote GPU server. Can you give me some more pointers on where to set the configuration so the model and Carla can communicate over the network? Thanks!
You can add a HOST parameter in run_evaluation.sh and pass --host=${HOST} when calling the Python script.
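For concreteness, a minimal sketch of what that change might look like in run_evaluation.sh (the IP address is a placeholder, and the evaluator invocation is an assumption; adapt both to the script's existing call):

```shell
# Sketch only: the IP below stands in for the local PC running CarlaUE4.sh,
# and the evaluator path/arguments should match what run_evaluation.sh already uses.
export HOST=192.168.1.100   # IP of the machine running the Carla server
export PORT=2000            # Carla's default RPC port

python leaderboard/leaderboard_evaluator.py \
    --host=${HOST} \
    --port=${PORT}          # plus the script's other existing arguments
```

Make sure the Carla server's port is reachable from the GPU server (e.g. not blocked by a firewall) before starting the evaluation.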
Thanks for the instruction. Would that mean running the Carla server (CarlaUE4.sh) on a local PC and running run_evaluation.sh
on a remote GPU server (with HOST set to the IP of the local PC)?
I tried that setup, but what about the pygame
window that pops up during the evaluation of run_evaluation.sh
? I got some errors in that part:
========= Preparing RouteScenario_6 (repetition 0) =========
> Setting up the agent
ALSA lib confmisc.c:855:(parse_card) cannot find card '0'
ALSA lib conf.c:5178:(_snd_config_evaluate) function snd_func_card_inum returned error: No such file or directory
ALSA lib confmisc.c:422:(snd_func_concat) error evaluating strings
ALSA lib conf.c:5178:(_snd_config_evaluate) function snd_func_concat returned error: No such file or directory
ALSA lib confmisc.c:1334:(snd_func_refer) error evaluating name
ALSA lib conf.c:5178:(_snd_config_evaluate) function snd_func_refer returned error: No such file or directory
ALSA lib conf.c:5701:(snd_config_expand) Evaluate error: No such file or directory
ALSA lib pcm.c:2664:(snd_pcm_open_noupdate) Unknown PCM default
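Not an official answer, but the ALSA messages above just mean no sound card was found, and the pygame window needs a display. A common workaround on a headless server (my own suggestion using standard SDL environment variables, not something from the LMDrive scripts) is to point SDL at its dummy drivers before launching the evaluation:

```shell
# Workaround sketch for headless servers: SDL's dummy drivers avoid the need
# for a real display (pygame window) and a sound card (the ALSA errors above).
export SDL_VIDEODRIVER=dummy
export SDL_AUDIODRIVER=dummy
```

Note this suppresses the on-screen visualization entirely, so it only makes sense if you don't need to watch the pygame window on the remote machine.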
I had the same doubt about the pygame window popping up. How can we handle this?