LMDrive
Questions about evaluation.
Hi! I have 2 questions about evaluation. I'd be grateful for your help.
- Where should I set the dataset directory before evaluation?
- When running the following code, the pygame window crashes and the program gets stuck.
# leaderboard/team_code/lmdriver_agent.py: DisplayInterface.__init__
self._display = pygame.display.set_mode(
(self._width, self._height), pygame.HWSURFACE | pygame.DOUBLEBUF
)
When I tried executing the following code separately, the pygame window opens and stays open without crashing.
import pygame
pygame.init()
pygame.font.init()
pygame.display.set_mode((1200, 900), pygame.HWSURFACE | pygame.DOUBLEBUF)
Thanks in advance!
Hi!
- You can put the dataset anywhere you can access it. When you use the dataset to pretrain or finetune the model, just set the dataset path to wherever you placed it (see the sketch at the end of this reply).
- Could you provide more details or the log? BTW, the two code blocks look similar. I can't understand the context of the following line:
# leaderboard/team_code/lmdriver_agent.py: DisplayInterface.__init__
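For illustration only, a minimal sketch of what "set the dataset path" means in practice; the environment variable name and default path below are assumptions, not LMDrive's actual config options:
# Hypothetical sketch: point whichever pretraining/finetuning script consumes
# the dataset at your local copy, e.g. via an environment variable or config field.
import os

dataset_root = os.environ.get("LMDRIVE_DATASET_ROOT", "/data/lmdrive_dataset")
assert os.path.isdir(dataset_root), f"dataset not found at {dataset_root}"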
Hi!
- The input to the model during evaluation does not come from the dataset you provided, but from the simulator. Is that right?
- Sorry for not describing it clearly. Here is the detailed information about the problem I encountered during evaluation. Is there anything wrong with my procedure?
- My command: first, run CARLA in one console.
bash carla/CarlaUE4.sh --world-port=2000
Second, run evaluation in another console.
export PT=2000
export CARLA_ROOT=carla
export CARLA_SERVER=${CARLA_ROOT}/CarlaUE4.sh
export PYTHONPATH=$PYTHONPATH:${CARLA_ROOT}/PythonAPI
export PYTHONPATH=$PYTHONPATH:${CARLA_ROOT}/PythonAPI/carla
export PYTHONPATH=$PYTHONPATH:$CARLA_ROOT/PythonAPI/carla/dist/carla-0.9.10-py3.7-linux-x86_64.egg
export PYTHONPATH=$PYTHONPATH:leaderboard
export PYTHONPATH=$PYTHONPATH:leaderboard/team_code
export PYTHONPATH=$PYTHONPATH:scenario_runner
export LEADERBOARD_ROOT=leaderboard
export CHALLENGE_TRACK_CODENAME=SENSORS
export PORT=$PT # same as the carla server port
export TM_PORT=$(($PT+500)) # port for traffic manager, required when spawning multiple servers/clients
export DEBUG_CHALLENGE=0
export REPETITIONS=1 # multiple evaluation runs
export ROUTES=langauto/benchmark_long.xml
export TEAM_AGENT=leaderboard/team_code/lmdriver_agent.py # agent
export TEAM_CONFIG=leaderboard/team_code/lmdriver_config.py # model checkpoint, not required for expert
export CHECKPOINT_ENDPOINT=results/lmdrive_result.json # results file
#export SCENARIOS=leaderboard/data/scenarios/no_scenarios.json #town05_all_scenarios.json
export SCENARIOS=leaderboard/data/official/all_towns_traffic_scenarios_public.json
export SAVE_PATH=data/eval # path for saving episodes while evaluating
export RESUME=False
echo ${LEADERBOARD_ROOT}/leaderboard/leaderboard_evaluator.py
python3 -u ${LEADERBOARD_ROOT}/leaderboard/leaderboard_evaluator.py \
--scenarios=${SCENARIOS} \
--routes=${ROUTES} \
--repetitions=${REPETITIONS} \
--track=${CHALLENGE_TRACK_CODENAME} \
--checkpoint=${CHECKPOINT_ENDPOINT} \
--agent=${TEAM_AGENT} \
--agent-config=${TEAM_CONFIG} \
--debug=${DEBUG_CHALLENGE} \
--record=${RECORD_PATH} \
--resume=${RESUME} \
--port=${PORT} \
--trafficManagerPort=${TM_PORT}
- The program gets stuck, and the output is shown below.
leaderboard/leaderboard/leaderboard_evaluator.py:24: DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html
import pkg_resources
bc1
localhost 2000
bc2
bc3
leaderboard/leaderboard/leaderboard_evaluator.py:95: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
if LooseVersion(dist.version) < LooseVersion('0.9.10'):
bc4
/root/anaconda3/envs/lmdrive/lib/python3.8/site-packages/diffusers/models/cross_attention.py:30: FutureWarning: Importing from cross_attention is deprecated. Please import from diffusers.models.attention_processor instead.
deprecate(
pygame 2.5.2 (SDL 2.28.2, Python 3.8.13)
Hello from the pygame community. https://www.pygame.org/contribute.html
bc5
bc
========= Preparing RouteScenario_3 (repetition 0) =========
> Setting up the agent
- After checking, I found the program is stuck at the following position:
/media/samsung/samsung/mc.wei/code/LMDrive-main/leaderboard/leaderboard/leaderboard_evaluator.py(497)<module>()
-> main()
/media/samsung/samsung/mc.wei/code/LMDrive-main/leaderboard/leaderboard/leaderboard_evaluator.py(488)main()
-> leaderboard_evaluator.run(arguments)
/media/samsung/samsung/mc.wei/code/LMDrive-main/leaderboard/leaderboard/leaderboard_evaluator.py(423)run()
-> self._load_and_run_scenario(args, config)
/media/samsung/samsung/mc.wei/code/LMDrive-main/leaderboard/leaderboard/leaderboard_evaluator.py(277)_load_and_run_scenario()
-> self.agent_instance = getattr(self.module_agent, agent_class_name)(args.agent_config)
/media/samsung/samsung/mc.wei/code/LMDrive-main/leaderboard/leaderboard/autoagents/autonomous_agent.py(45)__init__()
-> self.setup(path_to_conf_file)
/media/samsung/samsung/mc.wei/code/LMDrive-main/leaderboard/team_code/lmdriver_agent.py(176)setup()
-> self._hic = DisplayInterface()
> /media/samsung/samsung/mc.wei/code/LMDrive-main/leaderboard/team_code/lmdriver_agent.py(81)__init__()
-> self._display = pygame.display.set_mode(
I find that when self._display = pygame.display.set_mode(...) is executed, a pygame window opens but immediately disappears. Afterwards, the program remains stuck here.
Hi!
- Yes. Our pipeline is based on a closed-loop, end-to-end setting. The evaluation needs to interact with the simulator in real time.
- Sorry, I can't reproduce the error and haven't encountered this problem. Did you run it in a docker/server environment? You could try a newer version of pygame, remove
pygame.HWSURFACE | pygame.DOUBLEBUF
when you initialize the display, or comment out the corresponding pygame code to run the evaluation without the GUI. A sketch of these workarounds is below.
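Here is a minimal sketch of those two workarounds, assuming you are editing DisplayInterface.__init__ in leaderboard/team_code/lmdriver_agent.py (or the small standalone test above):
import os
import pygame

# Workaround 1: on a headless docker/server machine, tell SDL to use its
# dummy video driver so pygame never tries to open a real window.
# This must be set before pygame.init().
os.environ.setdefault("SDL_VIDEODRIVER", "dummy")

pygame.init()
pygame.font.init()

# Workaround 2: drop the HWSURFACE / DOUBLEBUF flags, which can fail on
# machines without a working hardware-accelerated display.
display = pygame.display.set_mode((1200, 900))
Either change alone may be enough; the dummy driver is the usual fix when running over SSH or in docker without an X server.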
Thanks for your fast response. I will try your recommendations.