Toxicity Detection for Platform Moderation

To improve the quality of online discussions, an AI system was introduced to detect harmful or inappropriate content in real time, including subtle forms such as sarcasm and implicit threats. It classifies multiple types of toxic language and improves over time through moderator feedback, reducing the burden on human moderators while preserving meaningful engagement.
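The description above combines two mechanisms: multi-category classification of incoming comments, and a feedback loop that adjusts the model from moderator decisions. A minimal sketch of that loop, assuming a simple weighted-keyword baseline (the category names, keywords, and function names here are illustrative, not the system's actual implementation):

```python
from collections import defaultdict

# Illustrative categories and trigger keywords (hypothetical, not the
# production taxonomy).
CATEGORIES = {
    "insult": {"idiot", "stupid"},
    "threat": {"hurt", "destroy"},
}

# Per-(category, keyword) weights, adjusted by moderator feedback.
weights = defaultdict(lambda: 1.0)

def classify(comment, threshold=1.0):
    """Return the set of categories whose keyword score meets the threshold."""
    tokens = set(comment.lower().split())
    flagged = set()
    for category, keywords in CATEGORIES.items():
        score = sum(weights[(category, kw)] for kw in keywords & tokens)
        if score >= threshold:
            flagged.add(category)
    return flagged

def apply_feedback(comment, category, is_toxic, step=0.5):
    """Moderator feedback: reinforce or dampen the keywords that fired."""
    tokens = set(comment.lower().split())
    for kw in CATEGORIES[category] & tokens:
        weights[(category, kw)] += step if is_toxic else -step
```

In a real deployment the keyword scorer would be replaced by a learned classifier (which is what makes sarcasm and implicit threats tractable), but the feedback loop has the same shape: moderator confirmations and reversals become training signal that shifts future decisions.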

  • Company: Undisclosed
  • Industry: Healthcare & Global Policy
  • Revenue: N/A (Publicly Funded)