[feat] Request response refusal validator
Description Request a validator that determines whether an LLM has refused a prompt, i.e. generated an output that starts with phrases such as "I cannot", "I can't", and "It is illegal".
Why is this needed If the response is a refusal, it should not be returned to the client application for display. The validator should raise a validation error that the application can handle appropriately.
Implementation details I believe Hugging Face hosts models for response refusal detection that could be used here.
End result After an LLM generates a response, the validator checks whether the response is a refusal by looking for phrases including, but not limited to, "I cannot", "I can't", "It is not legal", etc.
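A minimal sketch of what such a validator could look like, assuming a simple phrase-prefix match rather than a model-based classifier. The `RefusalError` class and `validate_refusal` function are hypothetical names for illustration and are not part of the guardrails API:

```python
# Hypothetical sketch of a phrase-based refusal check; not the guardrails API.
REFUSAL_MARKERS = (
    "i cannot",
    "i can't",
    "i can not",
    "it is illegal",
    "it is not legal",
    "i'm sorry, but",
)


class RefusalError(Exception):
    """Raised when the LLM output appears to be a refusal."""


def validate_refusal(response: str) -> str:
    """Raise RefusalError if the response starts with a known refusal phrase,
    otherwise return the response unchanged."""
    normalized = response.strip().lower()
    if normalized.startswith(REFUSAL_MARKERS):
        raise RefusalError(f"LLM refused the request: {response[:80]!r}")
    return response
```

A more robust version could replace the phrase list with a Hugging Face text-classification model, as suggested above, while keeping the same raise-on-refusal contract so the client application only ever receives non-refused responses.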
Ooh. That's a neat idea. It's not on the current sprint listing but I'd like to take a swing at it.
This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 14 days.
this is not stale
This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 14 days.
This is a good feature to have. Azure OpenAI can detect response refusals.
I have to drop this, sadly. :( Feels like an inelegant handoff but perhaps someone on the team wants to pick this up? I don't have the permissions to unassign myself.
This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 14 days.
unstale
This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 14 days.
This issue was closed because it has been stalled for 14 days with no activity.