
Do different GPT API versions differ in performance with customized prompts?

Joan-YAO opened this issue 1 year ago • 1 comment

I tried GPT-4o-mini in the LLM agent but got a sub-optimal decision, as shown in the screenshot below. The decision (phase 2) provided by the embedded RL is more reasonable than the one (phase 1) provided by the LLM agent. Is this problem caused by the API version? Or is it caused by incomplete logical judgment in the chain-of-thought or the prompt engineering? Which file contains the prompt, so I can make some adjustments or modifications?

[screenshot]

Joan-YAO avatar Nov 27 '24 07:11 Joan-YAO

Hello,

Thank you very much for your interest in this project.

You can modify the prompt in the following file: https://github.com/Traffic-Alpha/LLM-Assisted-Light/blob/main/TSCPrompt/llm_prompt.py
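For context, adjusting a prompt in this kind of setup usually means editing a template string and the variables filled into it. A minimal sketch of what that looks like (the template wording, variable names, and `build_prompt` helper below are hypothetical illustrations, not the actual contents of `llm_prompt.py`):

```python
# Hypothetical sketch of a traffic-signal-control prompt template.
# The real template in TSCPrompt/llm_prompt.py will differ; the idea is
# that prompt-engineering changes are edits to strings like these.

SYSTEM_PROMPT = (
    "You are a traffic signal control assistant. "
    "Reason step by step about queue lengths and waiting times "
    "before choosing a phase."
)

DECISION_TEMPLATE = (
    "Current observation:\n"
    "- Queue lengths per incoming lane: {queues}\n"
    "- Current phase: {phase}\n\n"
    "Think through which movement has the highest pressure, "
    "then answer with exactly one phase id."
)

def build_prompt(queues: dict, phase: int) -> str:
    """Fill the template with the current observation.

    Tightening the wording of DECISION_TEMPLATE is where
    chain-of-thought adjustments would typically be made.
    """
    return DECISION_TEMPLATE.format(queues=queues, phase=phase)

prompt = build_prompt({"north": 5, "south": 2}, phase=1)
```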

I used GPT-4 for testing. I've reviewed most of the outputs, and the logic and final decisions are consistent.
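If you want to compare models directly, the choice is usually just the `model` field passed to the chat-completions API. A minimal sketch that only builds the request payload (no network call; the helper name and defaults are illustrative, and this project's own client code may differ):

```python
# Sketch of where the model choice enters an OpenAI-style chat request.
# Swapping "gpt-4" for "gpt-4o-mini" here is the only change needed to
# compare the two models on the same prompt.

def build_request(model: str, system_prompt: str, user_prompt: str) -> dict:
    """Assemble a chat-completions payload for the given model."""
    return {
        "model": model,  # e.g. "gpt-4" (used in the author's tests) vs "gpt-4o-mini"
        "temperature": 0.0,  # deterministic outputs make decisions easier to compare
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
    }

req = build_request("gpt-4", "You control a traffic signal.", "Choose a phase.")
```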

Best, Maonan

wmn7 avatar Dec 19 '24 22:12 wmn7