To improve online discussions, an AI system was introduced to detect harmful or inappropriate content in real time, including subtle forms such as sarcasm and implicit threats. It classifies toxic language by type and improves over time through moderator feedback, reducing the burden on human moderators while preserving meaningful engagement.
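The described pipeline, a multi-label classifier over toxicity categories combined with a retraining loop driven by moderator corrections, can be sketched as follows. This is a minimal illustration under stated assumptions, not the deployed system: the category names, example texts, and the TF-IDF plus logistic regression model are all placeholders chosen for brevity.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

# Hypothetical label set; real deployments define their own taxonomy.
CATEGORIES = ["insult", "implicit_threat", "sarcasm", "harassment"]

# Illustrative seed examples standing in for real moderation logs.
texts = [
    "You are a complete idiot",
    "Nice try, genius",
    "You'd better watch what you say around here",
    "Stop posting, nobody here wants you",
    "Great point, thanks for sharing",
]
labels = [
    ["insult"],
    ["sarcasm", "insult"],
    ["implicit_threat"],
    ["harassment"],
    [],  # benign comment, no toxicity labels
]

# Encode the label lists as a binary indicator matrix, one column per category.
binarizer = MultiLabelBinarizer(classes=CATEGORIES)
y = binarizer.fit_transform(labels)

# One independent binary classifier per category over TF-IDF n-gram features.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    OneVsRestClassifier(LogisticRegression(max_iter=1000)),
)
model.fit(texts, y)


def classify(comment: str) -> list[str]:
    """Return the toxicity categories predicted for a single comment."""
    flags = model.predict([comment])[0]
    return [cat for cat, flag in zip(binarizer.classes_, flags) if flag]


def incorporate_feedback(comment: str, corrected: list[str]) -> None:
    """Fold a moderator correction into the training set and retrain.

    Refitting on every single correction only works at sketch scale; a real
    system would batch feedback and retrain on a schedule.
    """
    texts.append(comment)
    labels.append(corrected)
    model.fit(texts, binarizer.fit_transform(labels))


print(classify("You'd better watch what you say around here"))
incorporate_feedback("Oh sure, another brilliant idea", ["sarcasm"])
```

In practice, the linear model above would be replaced by one that can use conversational context, since sarcasm and implicit threats depend on tone and situation rather than surface vocabulary alone, and moderator corrections would be accumulated and applied in periodic retraining batches rather than one at a time.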