
TextualGradientDescent optimizer response could not be indexed

Open Shad0wSeven opened this issue 1 year ago • 2 comments

I keep getting this error

ERROR: TextualGradientDescent optimizer response could not be indexed. This can happen if the optimizer model cannot follow the instructions. You can try using a stronger model, or somehow reducing the context of the optimization. Response: <VARIABLE> [INPUT] description: Input to the chatbot, which may include questions, requests for information, or clarifications. The input should be clear and concise to ensure accurate responses. [OUTPUT] response: <Your output here> Guidelines for responses: 1. Provide clear, concise, and accurate answers. 2. If the question is ambiguous or unclear, ask for clarification politely. 3. Use a friendly and professional tone. 4. Ensure the response is relevant to the input. 5. If the information is not available, state that clearly and offer alternative suggestions if possible. </VARIABLE>

whenever using longer prompts or different types of data during prompt optimization. Is there any fix? Running GPT-4o.

Shad0wSeven avatar Jul 17 '24 20:07 Shad0wSeven

You can try reducing the batch size.

In optimizer.py, the prompt_update_parameter on line 177 accumulates long feedback for input prompts, whether those prompts are long or short. Even though the combined system prompt + prompt_update_parameter token length is below the gpt-4 max token limit, this error still comes up. However, reducing the batch size to 1 or 2 works. Tested with evaluator engine = gpt-4o and gpt-4.
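The workaround above can be sketched as follows. This is a minimal, self-contained illustration of the idea, not textgrad's actual training API: the helper `make_batches` and the loop structure are assumptions for illustration. The point is that with batches of 1-2 examples, each optimizer step carries far less accumulated feedback.

```python
# Sketch of the batch-size workaround: split the training data into small
# batches so each optimizer step sees feedback for only 1-2 examples,
# keeping the optimizer model's context short enough to follow the
# <VARIABLE> formatting instructions.

def make_batches(examples, batch_size=2):
    """Split a list of training examples into batches of at most batch_size."""
    return [examples[i:i + batch_size] for i in range(0, len(examples), batch_size)]

train_set = ["q1", "q2", "q3", "q4", "q5"]
for batch in make_batches(train_set, batch_size=2):
    # In a real textgrad loop you would compute the loss for `batch`,
    # call loss.backward(), then optimizer.step() -- each step now
    # aggregates feedback from at most 2 examples.
    pass
```

With `batch_size=2`, the five-example set above yields three optimizer steps instead of one large one.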

simra-shahid avatar Jul 28 '24 03:07 simra-shahid

I am running into the same issue with both FormattedLLMCall and BlackboxLLM. I am using very large variables for context (around 20,000 tokens). Reducing the context gets the optimizer to run, but doing so makes my loss very high. I hope there's a way around this; I'm looking to optimize only the prompt, not the context.

gl-donald avatar Aug 23 '24 19:08 gl-donald