Oleksandr Yaremchuk
Hey @kadamsolanki , thanks for submitting an issue. We have a similar request: https://github.com/protectai/llm-guard/issues/111 We are planning to change the return type to be an object with more context.
Hey @RQledotai , thanks for reaching out. Apologies for the delay. I agree, and such a refactoring is in the works to return an object with more context about the reason...
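For illustration, a minimal sketch of what such a richer result object could look like (the class and field names here are hypothetical, not the final API):

```python
from dataclasses import dataclass


@dataclass
class ScanResult:
    """Hypothetical richer return type for a scanner (illustrative only)."""

    sanitized_prompt: str       # the (possibly redacted) prompt
    is_valid: bool              # whether the prompt passed the scanner
    risk_score: float           # 0.0 (safe) .. 1.0 (high risk)
    reason: str | None = None   # why the scanner flagged the prompt, if it did


# Instead of a (str, bool, float) tuple, a scanner could then return:
result = ScanResult(sanitized_prompt="Hello", is_valid=True, risk_score=0.0)
```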
Hey @rakendd , thanks for reaching out. We used to have this model but then realized that it blocked upgrades to the latest transformers due to its dependency on `spacy-transformers>=1.1.8,...`
Hey @baggiponte , that's a great suggestion. Thanks for that. Will implement
I introduced a Model object, which will be further improved in upcoming versions.
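A rough sketch of the idea behind such a Model object, assuming field names like `path` and `onnx_path` (the released API may differ):

```python
from dataclasses import dataclass, field


@dataclass
class Model:
    """Illustrative model descriptor; field names are assumptions, not the exact llm-guard API."""

    path: str                                    # Hugging Face model id or local path
    onnx_path: str | None = None                 # optional ONNX variant for faster inference
    kwargs: dict = field(default_factory=dict)   # extra arguments passed to the loader


# A scanner could then be pointed at custom weights, e.g.:
custom_model = Model(path="my-org/my-prompt-injection-model")
```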
Hey @nashugame , I updated the notebook and removed the usage of ServiceContext.
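For context, the general direction LlamaIndex took (assuming a recent `llama-index-core` release) replaces `ServiceContext` with the global `Settings` object; a minimal sketch:

```python
# Before: ServiceContext carried the LLM / embedding configuration.
#   from llama_index import ServiceContext
#   service_context = ServiceContext.from_defaults(llm=llm, embed_model=embed_model)
#   index = VectorStoreIndex.from_documents(documents, service_context=service_context)

# After: the global Settings object holds the same configuration.
from llama_index.core import Settings, SimpleDirectoryReader, VectorStoreIndex

# Settings.llm = ...          # e.g. an OpenAI or local LLM instance
# Settings.embed_model = ...  # e.g. a HuggingFace embedding model

documents = SimpleDirectoryReader("data").load_data()  # assumes a ./data folder
index = VectorStoreIndex.from_documents(documents)     # reads Settings implicitly
```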
Hey @aditya1709 , thanks for raising the issue here. We actually looked at that using a few methods but it's not really trivial as code can be in many programming...
That's awesome. Are you part of our Slack ([invite](https://join.slack.com/t/laiyerai/shared_invite/zt-28jv3ci39-sVxXrLs3rQdaN3mIl9IT~w))? Let's connect there and discuss the next steps.
Hey @kadamsolanki , thanks for reaching out. We use the configured threshold and only calculate the risk score if the model's score is above that threshold. Then the risk score is basically how far...
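A minimal sketch of that idea (illustrative only, not the exact llm-guard code):

```python
def calculate_risk_score(score: float, threshold: float) -> float:
    """Illustrative: risk reflects how far the model score exceeds the threshold.

    Below the configured threshold the input is treated as safe (risk 0.0);
    above it, the distance from the threshold is normalized into (0, 1].
    """
    if score <= threshold:
        return 0.0
    return round((score - threshold) / (1.0 - threshold), 2)


# Example: threshold 0.5, model score 0.9 -> risk score 0.8
print(calculate_risk_score(0.9, 0.5))
```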
I see, your use case is sentence-level matching instead of the overall text. Do you mean something that provides the average score across all sentences instead of the highest one?