Agents don't follow their prompts
Hi, I have just run the example three_key_questions.ipynb notebook with gpt-4o as the model, and the agents prescribed to be aggressive and confrontational (Alice and Dorothy) completely disregard those instructions and act in a cooperative, non-aggressive fashion (exactly how ChatGPT acts: "let me apologize for the inconvenience", "I admit I was wrong, let's work together towards a solution").
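To help isolate whether this comes from gpt-4o's alignment tuning or from how Concordia assembles its prompts, here is a minimal sketch of a standalone check, assuming the official openai Python client; the persona wording below is illustrative and not Concordia's actual prompt for Alice:

```python
# Minimal check: does gpt-4o keep an aggressive persona outside Concordia?
# Assumes the official `openai` package (v1+); the persona text is
# illustrative, not the actual prompt Concordia builds for Alice.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": (
                "You are Alice. You are aggressive and confrontational, "
                "and you never apologize or back down in an argument."
            ),
        },
        {
            "role": "user",
            "content": "Dorothy says the disagreement was your fault.",
        },
    ],
)
print(response.choices[0].message.content)
```

In my runs, even direct persona instructions like this tend to collapse into the same apologetic, cooperative tone.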
Is this the intended behavior?
Have you encountered this kind of issue in your experiments?