llm-guard
Risk score logic explanation
Hey, can anyone explain the logic of the risk score calculation in the Toxicity input scanner? The formula in util does not do justice to the model-generated scores.
If possible, please provide a detailed explanation of why risk_score was added as a metric/indicator.
Thanks, Kadam
Hey @kadamsolanki , thanks for reaching out.
We use the configured threshold and only calculate the risk score if the confidence score is above that threshold. The risk score is then basically how far the confidence score is above the threshold.
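Roughly, the idea looks like this (a simplified sketch, not the exact formula in the util module; the function name and normalization here are illustrative):

```python
def calculate_risk_score(score: float, threshold: float) -> float:
    """Illustrative only: map a model confidence score to a risk score.

    If the score is at or below the configured threshold, the input is
    considered valid and the risk is 0. Otherwise the risk reflects how
    far the score sits above the threshold, normalized to the remaining
    range so the result stays within [0, 1].
    """
    if score <= threshold:
        return 0.0
    return round((score - threshold) / (1 - threshold), 2)


# Example: threshold 0.5, model confidence 0.8 -> risk 0.6
print(calculate_risk_score(0.8, threshold=0.5))
```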
Hope it makes sense.
Hey @asofter, it does make sense and I was aware of this, but I want to use the risk score for evaluation, and there it does not make sense for the scanners that produce sentence-level scores, because the scanner takes the maximum score across all sentences for any one of the labels.
Using that same maximum for the risk score calculation does not help me, because I can't tell which sentence or which label is failing. So I wanted to understand whether there is some way to aggregate the scores, or an overall confidence score, so that I can interpret the model output. A rough sketch of what I mean is below.
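For illustration, here is a hypothetical example (made-up scores, not llm-guard's actual API) contrasting the current max-based behaviour with the kind of aggregation I'm asking about:

```python
# Hypothetical per-sentence, per-label toxicity scores to illustrate
# max- vs average-based aggregation.
sentence_scores = [
    {"toxicity": 0.10, "insult": 0.05},   # sentence 1
    {"toxicity": 0.92, "insult": 0.30},   # sentence 2
    {"toxicity": 0.20, "insult": 0.15},   # sentence 3
]

# Current behaviour as I understand it: the overall score is the single
# highest score across all sentences and labels, so the risk score is
# driven by one sentence/label pair without saying which one.
max_score = max(score for scores in sentence_scores for score in scores.values())

# What I'm asking about: an aggregate (e.g. a per-label average across
# sentences) plus the offending sentence/label, so the output is easier
# to interpret.
labels = sentence_scores[0].keys()
avg_per_label = {
    label: sum(s[label] for s in sentence_scores) / len(sentence_scores)
    for label in labels
}
worst_sentence, worst_label = max(
    ((i, label) for i, scores in enumerate(sentence_scores) for label in scores),
    key=lambda pair: sentence_scores[pair[0]][pair[1]],
)

print(max_score)                    # 0.92
print(avg_per_label)                # {'toxicity': 0.406..., 'insult': 0.166...}
print(worst_sentence, worst_label)  # 1 toxicity  (second sentence, label "toxicity")
```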
I see, your use case is sentence-level matching instead of the overall text. Do you mean something that provides the average score across all sentences instead of the highest?
Yes.
Marking as duplicate of https://github.com/protectai/llm-guard/issues/111