
Collect data

Open a1wj1 opened this issue 1 year ago • 13 comments

Can the software collect visual image data for reinforcement learning?

a1wj1 avatar Apr 16 '24 09:04 a1wj1

Thank you for your interest in LimSim&LimSim++. Currently, LimSim++ saves the panoramic image data in the database; you can find the relevant data in the imageINFO table. However, these are compressed images with a size of 560×315. If you need the original 1600×900 images, you can obtain the CameraImages.ORI_CAM_FRONT series at runtime and save them yourself.
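A minimal sketch of pulling the saved frames back out of the database. The real imageINFO schema may differ (inspect it with `.schema imageINFO` in the sqlite3 shell); the column names below are assumptions for illustration only.

```python
import sqlite3

# Replace ":memory:" with the path to the LimSim++ .db file.
conn = sqlite3.connect(":memory:")

# Hypothetical schema standing in for the real imageINFO table.
conn.execute("CREATE TABLE imageINFO (frame INTEGER, CAM_FRONT BLOB)")
conn.execute("INSERT INTO imageINFO VALUES (?, ?)", (0, b"\x89PNG\r\n"))

# Each stored blob is a compressed 560x315 frame; collect them by frame id.
frames = {frame: blob for frame, blob in
          conn.execute("SELECT frame, CAM_FRONT FROM imageINFO")}
```

From here the blobs can be written to disk or decoded with an image library for a reinforcement-learning pipeline.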

fudaocheng avatar Apr 16 '24 09:04 fudaocheng

Glad to hear from you! I am using the LLM version of LimSim++. If GPT-4 is used as the driver agent, is its input a text description of the BEV image scene? Also, how do I obtain the CameraImages.ORI_CAM_FRONT series?

a1wj1 avatar Apr 16 '24 11:04 a1wj1

Yes, if your LLM does not support image input, you can use the text description we provide. In lines 249 to 253 of ExampleVLMAgentCloseLoop.py, you can see the method of obtaining and using CameraImages. You can replace images[-1].CAM_FRONT with images[-1].ORI_CAM_FRONT. You can check the function model.getCARLAImage() to get more information.
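The attribute swap described above can be sketched as follows. The CameraImages class below is just a mock so the snippet runs outside the simulator; the real object comes from model.getCARLAImage(), and only the two field names follow the reply above.

```python
from dataclasses import dataclass

# Stand-in for the CameraImages object returned by model.getCARLAImage().
@dataclass
class CameraImages:
    CAM_FRONT: bytes       # compressed 560x315 front image
    ORI_CAM_FRONT: bytes   # original 1600x900 front image

images = [CameraImages(CAM_FRONT=b"compressed", ORI_CAM_FRONT=b"full-res")]

# ExampleVLMAgentCloseLoop.py uses the compressed frame (images[-1].CAM_FRONT);
# swapping the attribute yields the original-resolution frame instead.
frame = images[-1].ORI_CAM_FRONT
```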

fudaocheng avatar Apr 17 '24 01:04 fudaocheng

Thank you. My LLM cannot read images, so how does your code produce the text description? I saw in the paper's introduction that LimSim++ extracts road-network and vehicle information around the ego vehicle, and that this scenario description and task description are then packaged and passed to the driver agent in natural language.

But where in the code does this happen?

a1wj1 avatar Apr 17 '24 02:04 a1wj1

In lines 314 to 316 of ExampleLLMAgentCloseLoop.py, you can see how we get the navigation, action, and environment information.

navInfo = descriptor.getNavigationInfo(roadgraph, vehicles)
actionInfo = descriptor.getAvailableActionsInfo(roadgraph, vehicles)
envInfo = descriptor.getEnvPrompt(roadgraph, vehicles)

In fact, you can build your own driver agent by modifying ExampleLLMAgentCloseLoop.py directly.
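A sketch of how the three descriptor strings could be packaged into one natural-language prompt for the driver agent. The section titles and example texts below are illustrative, not the actual template used in ExampleLLMAgentCloseLoop.py.

```python
# Example outputs standing in for the descriptor calls above.
navInfo = "You are on lane 1 of a 3-lane road; your exit is 200 m ahead."
actionInfo = "Available actions: keep lane, change lane left, accelerate, brake."
envInfo = "A slower vehicle is 30 m ahead in your current lane."

# Package the three descriptions into a single prompt for the LLM.
prompt = "\n\n".join([
    "## Navigation\n" + navInfo,
    "## Available actions\n" + actionInfo,
    "## Environment\n" + envInfo,
])
```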

fudaocheng avatar Apr 17 '24 03:04 fudaocheng

Thanks for your reply. Excuse me, how do I display the images from the three camera perspectives in ExampleLLMAgentCloseLoop.py?

a1wj1 avatar Apr 17 '24 07:04 a1wj1

ExampleLLMAgentCloseLoop.py does not provide surround-view images; you can get camera images from ExampleVLMAgentCloseLoop.py.

fudaocheng avatar Apr 17 '24 08:04 fudaocheng

Is it possible to transfer the image-display code from ExampleVLMAgentCloseLoop.py to ExampleLLMAgentCloseLoop.py?

a1wj1 avatar Apr 17 '24 09:04 a1wj1

In fact, there is no big difference between the two in terms of interface calls; you can take the interfaces from the VLM example and use them in the LLM example to get the image information. However, the VLM example has different runtime requirements, so please refer to readme.md for how to run it.

fudaocheng avatar Apr 18 '24 02:04 fudaocheng

When I ran ExampleLLMAgentCloseLoop.py, I had already set the CARLA connection and opened CARLA, but no image was displayed. I compared ExampleLLMAgentCloseLoop.py with ExampleVLMAgentCloseLoop.py and, apart from the LLM interface, the other parts do not differ much, but I could not find the key code that displays the image.

a1wj1 avatar Apr 18 '24 02:04 a1wj1

Did you set CARLACosim=True when you initialized the model?

# init simulation
model = Model(
    egoID=ego_id, netFile=sumo_net_file, rouFile=sumo_rou_file,
    cfgFile=sumo_cfg_file, dataBase=database, SUMOGUI=sumo_gui,
    CARLACosim=True, carla_host=carla_host, carla_port=carla_port
)

However, I still recommend that you use VLMExample if you want to work with image data.

fudaocheng avatar Apr 18 '24 02:04 fudaocheng

Yes, I have CARLACosim=True.

a1wj1 avatar Apr 18 '24 03:04 a1wj1

So, can you run the VLM example successfully? You can run it just to check that your environment is installed correctly and the application runs properly, without having the VLM make any decisions.

fudaocheng avatar Apr 18 '24 03:04 fudaocheng

This issue has been marked stale due to inactivity.

github-actions[bot] avatar Jun 18 '24 01:06 github-actions[bot]