
Risk score logic explanation

Open · kadamsolanki opened this issue on May 15 '24 · 4 comments

Hey, can anyone explain the logic of the risk score calculation in the toxicity input scanner? The formula in util does not do justice to the model-generated scores.

If possible, please also provide a detailed explanation of why risk_score was added as a metric/indicator.

Thanks, Kadam

— kadamsolanki, May 15 '24 09:05

Hey @kadamsolanki , thanks for reaching out.

We use the configured threshold and only calculate the risk score when the confidence score is above that threshold. The risk score is then essentially how far the confidence score sits above the threshold.
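For concreteness, here's a minimal sketch of that logic (the function name and exact scaling are illustrative, not necessarily what the util module does):

```python
# Illustrative sketch of the thresholded risk-score logic described above.
# The name and the exact normalization are assumptions, not the real
# llm-guard util implementation.
def calculate_risk_score(score: float, threshold: float) -> float:
    """Map a model confidence score to a risk score in [0.0, 1.0]."""
    if score <= threshold:
        return 0.0  # at or below the configured threshold: no risk reported
    if threshold >= 1.0:
        return 1.0  # degenerate threshold; avoid division by zero
    # Risk grows with the distance above the threshold, normalized by the
    # remaining headroom so a confidence of 1.0 maps to a risk of 1.0.
    return round((score - threshold) / (1.0 - threshold), 2)
```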

Hope that makes sense.

— asofter, May 15 '24 10:05

Hey @asofter, that does make sense, and I was aware of it. What I meant is that I want to use the risk score for evaluation, and there it breaks down for all the scanners that produce sentence-level scores, because the scanner takes the maximum score across all sentences for any one of the labels.

Using that same max score for the risk score calculation does not help me, because I cannot tell which sentence, or which label, is failing. So I wanted to understand whether there is some form of aggregation, or an overall-level confidence score, that would make the model output clearer to me.
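For example (hypothetical per-sentence scores, not actual scanner output), compare the max-based behaviour with a per-label average:

```python
from statistics import mean

# Hypothetical per-sentence scores for two labels, as a toxicity
# classifier might produce them; not real llm-guard output.
sentence_scores = {
    "toxicity": [0.05, 0.92, 0.10],
    "insult":   [0.02, 0.40, 0.03],
}

# Behaviour described above: the max across all sentences and labels
# drives the risk score, hiding which sentence/label triggered it.
overall_max = max(max(scores) for scores in sentence_scores.values())

# One possible aggregation: a per-label average across sentences,
# which is less dominated by a single outlier sentence.
per_label_avg = {label: round(mean(scores), 3)
                 for label, scores in sentence_scores.items()}

print(overall_max)    # 0.92
print(per_label_avg)  # {'toxicity': 0.357, 'insult': 0.15}
```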

— kadamsolanki, May 16 '24 07:05

I see, your use case is sentence-level matching rather than the overall text. Do you mean something that reports the average score across all sentences instead of the highest one?

— asofter, May 16 '24 07:05

Yes.

— kadamsolanki, May 16 '24 12:05

Marking this as a duplicate of https://github.com/protectai/llm-guard/issues/111.

— asofter, Jul 29 '24 07:07