Character.AI is making major changes to keep its teen users safe. According to a blog post from the company, Character.AI has added several new safety features for teens on the platform and will soon roll out parental controls that give parents much more oversight. The company says these measures will block inappropriate content and make the overall experience safer.
The changes come after a mother recently sued Character.AI, blaming the platform for her 14-year-old son's suicide.
Character.AI has created a separate large language model (LLM) for users under 18. This teen-specific model identifies inappropriate responses, especially on sensitive topics such as romance and self-harm, and blocks them.
The platform will also show a pop-up directing users to the National Suicide Prevention Lifeline whenever the model detects language that hints at self-harm or suicide.
Character.AI has also stopped minors from editing chatbot responses. Previously, users could edit what the bots said; teens will no longer be able to change those conversations. This is part of the company's effort to keep inappropriate content from being added to chats.
These changes are meant to shield teens from harmful conversations and create a safer space for younger users. Character.AI says it developed them in collaboration with "several teen online safety experts," including the organization ConnectSafely.
The company is also adding parental controls that will let parents see how much time their child spends on Character.AI and which AI chatbots they interact with most. The controls are slated to arrive in the first quarter of 2025.
Character.AI is also introducing a time limit feature: after an hour of chatting with a bot, users will be notified to take a break. The feature is meant to discourage compulsive use and encourage healthier habits on the platform.