Fix incorrect arguments and resulting prompts in the prompts.py files for lessons 5, 6, and 9 of the prompt_evaluations course
In the prompt evaluations course, lessons 5, 6, and 9 use a prompts.py file so that promptfoo composes the prompts. The existing versions incorrectly assume that promptfoo passes the needed variable directly to the prompt function; in fact, promptfoo passes a context dict with a 'vars' key, from which the variable must be extracted. This affects the prompts actually sent to the models and the resulting eval scores. The prompt that is really passed can be seen by clicking the magnifying glass in any cell of the promptfoo viewer. This is a somewhat insidious error: everything still runs, and the problem isn't apparent unless you drill down to the final prompts passed to the models.
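A minimal sketch of the shape of the fix (the function and variable names here, `simple_prompt` and `animal_statement`, are illustrative; the actual course files use their own names and prompt text):

```python
# Incorrect: assumes promptfoo passes the template variable directly.
# def simple_prompt(animal_statement):
#     return f"... {animal_statement} ..."

# Corrected: promptfoo calls the prompt function with a context dict,
# and template variables are nested under the 'vars' key.
def simple_prompt(context):
    animal_statement = context['vars']['animal_statement']
    return f"Classify the following statement about an animal:\n\n{animal_statement}"
```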
In my runs of lesson 5, Haiku goes from 0% to 75% passing with the simple_prompt after the fix, and from 66.67% to 75% passing with the better_prompt. Lessons 6 and 9 also show changes in eval metrics after fixing the prompts, though less dramatic than in lesson 5. For lesson 5, the new outcomes would require a small change to the narrative in the notebook (happy to do this, but I assumed you'd prefer someone at Anthropic make those changes) and perhaps to the simple_prompt itself.