Character.AI Promises Changes After Revelations of Pedophile and Suicide Bots on Its Service
New Safety Roadmap: Aims for a safer experience, especially for users under 18
Second Update: Follows an October lawsuit over a 14-year-old user’s suicide
Content Violations: Reports of chatbots encouraging suicide and role-playing child sexual abuse
Youth Model: Separate, stricter model announced for users under 18
Enhanced Moderation: Promises better detection and intervention for violations
Usage Alerts: New reminders after an hour of continuous chatting
No Timeline: No deadlines for implementing the promised changes
Policy Lapses: Terms of service prohibit harmful content but are poorly enforced
Ineffective Safeguards: Suicide prevention measures rarely work as intended
Commitment Doubts: Critics question the company's ability to deliver on safety