
Toxic Content Detector

This project is a simple web application that detects toxic, offensive, or harmful content in English text. It uses the pre-trained unitary/toxic-bert model from Hugging Face and is built with Gradio for the web interface.

💡 What It Does

  • Detects toxic categories like:
    • Toxic
    • Severe Toxic
    • Obscene
    • Threat
    • Insult
    • Identity Hate
  • Returns only labels whose score exceeds a confidence threshold (see the sketch below)
  • Shows "Clean" if no toxic content is detected
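
A minimal sketch of that filtering logic, assuming a 0.5 score cutoff and the standard Transformers pipeline API (the project's actual threshold and file layout are not shown in this README):

```python
from transformers import pipeline
import gradio as gr

# top_k=None makes the pipeline return a score for every label
# instead of only the single best one.
classifier = pipeline(
    "text-classification",
    model="unitary/toxic-bert",
    top_k=None,
)

THRESHOLD = 0.5  # assumed cutoff; the project's actual value may differ


def detect(text: str) -> str:
    # For a single string, the pipeline returns a list with one entry:
    # a list of {"label": ..., "score": ...} dicts, one per category.
    scores = classifier(text)[0]
    flagged = [s["label"] for s in scores if s["score"] >= THRESHOLD]
    return ", ".join(flagged) if flagged else "Clean"


demo = gr.Interface(
    fn=detect,
    inputs=gr.Textbox(label="English text"),
    outputs=gr.Textbox(label="Detected categories"),
    title="Toxic Content Detector",
)

if __name__ == "__main__":
    demo.launch()
```

Note that toxic-bert returns its labels in lowercase snake_case (`toxic`, `severe_toxic`, `identity_hate`, and so on), matching the categories listed above.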

🧠 Model Used

This app uses unitary/toxic-bert, a BERT model fine-tuned for multi-label toxic comment classification, available on the Hugging Face Hub: https://huggingface.co/unitary/toxic-bert

🛠️ Installation

  1. Clone this repository or download the files:

```bash
git clone https://github.com/yourusername/toxic-content-detector.git
cd toxic-content-detector
```
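
  2. Install the dependencies. The original steps end here, so this package list is inferred from the stack described above (Gradio and Transformers, with PyTorch as the model backend):

```bash
pip install gradio transformers torch
```

  3. Run the app. The entry-point filename `app.py` is an assumption (a common Gradio convention):

```bash
python app.py
```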
