EddieBot
[FEATURE] Improve notifications by context checking
Description
Before sending a notification, EddieBot could check the message context with the TensorFlow.js toxicity model, which detects whether text contains toxic content such as threatening language, insults, obscenities, identity-based hate, or sexually explicit language:
https://github.com/tensorflow/tfjs-models/blob/master/toxicity/README.md
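A minimal sketch of how such a check could work, assuming a Node runtime with `@tensorflow/tfjs-node` and `@tensorflow-models/toxicity` installed; the `isToxic` and `maybeNotify` helpers, the 0.9 threshold, and the suppression logic are illustrative assumptions, not existing EddieBot code:

```ts
import '@tensorflow/tfjs-node';
import * as toxicity from '@tensorflow-models/toxicity';

// Confidence threshold: predictions below it report `match: null`.
const THRESHOLD = 0.9;

// Load the model once at startup; an empty label list means all labels.
const modelPromise = toxicity.load(THRESHOLD, []);

// Hypothetical helper: true when any toxicity label matches the text.
async function isToxic(text: string): Promise<boolean> {
  const model = await modelPromise;
  const predictions = await model.classify([text]);
  // Each prediction covers one label; results[0] is our single input.
  return predictions.some((p) => p.results[0].match === true);
}

// Illustrative usage: suppress a notification for flagged content.
async function maybeNotify(message: string): Promise<void> {
  if (await isToxic(message)) {
    console.log('suppressed: toxic content detected');
    return;
  }
  console.log(`notify: ${message}`);
}

maybeNotify('hello there!').catch(console.error);
```

Loading the model once at startup avoids re-fetching the weights for every message; raising the threshold reduces false positives at the cost of missed matches (labels scoring below it report `match: null`).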
Screenshots
No response
Additional information
No response