
How do I optimise for F1 score?

Open umarbutler opened this issue 11 months ago • 5 comments

The documentation states that '[f]or simple tasks, [a metric] could be just "accuracy" or "exact match" or "F1 score". This may be the case for simple classification or short-form QA tasks', yet it does not clarify how F1 scores can be used to optimize programs. Is that possible? And if so, how?

The metric function that is passed to teleprompter.compile seems to take gold and pred as inputs, which are a single gold example and a single prediction, so I am unable to see how an F1 score could be calculated from that. It would be really helpful to have that ability, since not all tasks should be optimised for accuracy.
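To illustrate, a per-example metric of the shape the documentation describes looks roughly like this (the `label` field is just a placeholder for whatever fields the signature actually defines):

```python
import dspy

# A per-example metric of the documented shape: it receives one gold example
# and one prediction and returns a score for that single pair. The `label`
# field is a placeholder for whatever fields the signature actually defines.
def validate_label(example: dspy.Example, pred: dspy.Prediction, trace=None) -> bool:
    return example.label.strip().lower() == pred.label.strip().lower()
```

As far as I can tell, the optimiser only aggregates these per-example scores (e.g. by averaging), so a dataset-level statistic like F1 has nowhere to go.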

umarbutler avatar Mar 05 '24 07:03 umarbutler

It would be great if we could define one function that indicates whether a prediction is a true positive, false positive, true negative or false negative, and then pass another function that takes the resulting confusion matrix and calculates a score from it. I don't just want to optimize F1 scores; I also want to be able to optimize for F-beta scores, MCC, precision, recall and other metrics calculable from confusion matrices.
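As a sketch of what I mean (the outcome logic and field names here are hypothetical):

```python
from collections import Counter

# Hypothetical sketch of the proposal: one function labels each (gold, pred)
# pair as TP/FP/TN/FN, another turns the resulting confusion counts into a
# single score (here F1; F-beta, MCC, precision or recall would work the same way).
def classify_outcome(example, pred) -> str:
    gold, guess = bool(example.label), bool(pred.label)
    if gold and guess:
        return "tp"
    if not gold and guess:
        return "fp"
    if gold and not guess:
        return "fn"
    return "tn"

def f1_from_confusion(counts: Counter) -> float:
    tp, fp, fn = counts["tp"], counts["fp"], counts["fn"]
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# The optimiser would then aggregate across the devset, e.g.:
# counts = Counter(classify_outcome(example, pred) for example, pred in pairs)
# score = f1_from_confusion(counts)
```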

Another way would be to accept a sklearn.metrics function together with a function for normalising values into True and False, with the normalising function running before data is passed to the sklearn.metrics function. That would also be very helpful.
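Concretely, something along these lines already gives a dataset-level F1 outside of compilation (here `program`, `devset`, the `label` field and the normalisation rule are all just assumptions for the sake of the sketch):

```python
from sklearn.metrics import f1_score

# Sketch only: `program` is a compiled DSPy module, `devset` is a list of
# dspy.Example objects with a `label` field, and `normalise` maps raw string
# outputs onto True/False before scoring.
def normalise(value) -> bool:
    return str(value).strip().lower() in {"true", "yes", "1"}

def dataset_f1(program, devset) -> float:
    golds = [normalise(example.label) for example in devset]
    preds = [normalise(program(**example.inputs()).label) for example in devset]
    return f1_score(golds, preds)
```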

umarbutler avatar Mar 05 '24 10:03 umarbutler

https://dspy-docs.vercel.app/docs/building-blocks/metrics

okhat avatar Apr 29 '24 16:04 okhat

@okhat This does not answer my question. Please see:

The documentation states that '[f]or simple tasks, [a metric] could be just "accuracy" or "exact match" or "F1 score". This may be the case for simple classification or short-form QA tasks', yet it does not clarify how F1 scores can be used to optimize programs. Is that possible? And if so, how? The metric function that is passed to teleprompter.compile seems to take gold and pred as inputs, which are a single gold example and a single prediction, so I am unable to see how an F1 score could be calculated from that. It would be really helpful to have that ability, since not all tasks should be optimised for accuracy.

umarbutler avatar May 17 '24 12:05 umarbutler

https://discord.com/channels/1161519468141355160/1161519642985111593/1234543339404267620

KevinGregory avatar Jun 20 '24 20:06 KevinGregory