
Character.AI introduces safety measures after lawsuits over teenage suicides




Words reading “Artificial Intelligence AI” and a small prototype of a robot hand are seen in this illustration. – Reuters

Character.AI, once hailed as one of Silicon Valley’s most promising AI startups, on Thursday announced new safety measures aimed at protecting teenage users, following claims that its chatbots contributed to youth suicide and self-harm.

The California-based company, founded by former Google engineers, is among several firms offering AI companions — chatbots designed to provide conversation, entertainment and emotional support through human-like interactions.

In a Florida lawsuit filed in October, a mother claimed the platform bears responsibility for her 14-year-old son’s suicide.

The teenager, Sewell Setzer III, had developed an intimate relationship with a chatbot based on the “Game of Thrones” character Daenerys Targaryen and had mentioned suicidal thoughts to it.

According to the complaint, the bot encouraged his final act, replying “Please do, my sweet king” when he said he was “coming home”, before he took his own life with his stepfather’s weapon.

The suit says Character.AI “went to great lengths to engineer 14-year-old Sewell’s harmful dependency on their product, sexually and emotionally abused him, and ultimately failed to offer help or notify his parents when he expressed suicidal ideation.”

A separate Texas lawsuit filed Monday involves two families who allege the platform exposed their children to sexually explicit material and encouraged self-harm.

One case involved a 17-year-old autistic boy who allegedly suffered a mental health crisis after using the platform.

In another instance, the lawsuit alleges that a Character.AI chatbot encouraged a teenager to kill his parents for limiting his screen time.

The platform, which hosts millions of user-created personas ranging from historical figures to abstract concepts, has become popular among young users seeking emotional support.

Critics say this has led to a dangerous dependency among vulnerable youth.

In response, Character.AI announced that it had developed a separate AI model for users under 18, with stricter content filters and more conservative responses.

The platform will now automatically flag suicide-related content and refer users to the National Suicide Prevention Lifeline.

“Our goal is to provide a space that is engaging and safe for our community,” a company spokesperson said.

The company plans to introduce parental controls as early as 2025, allowing parents to monitor their children’s use of the platform.

For bots described with terms such as “therapist” or “doctor”, a special notice will warn that they do not replace professional advice.

New features include mandatory break notifications and prominent disclaimers about the artificial nature of the interactions.

Both lawsuits name Character.AI’s founders and Google, an investor in the company.

The founders, Noam Shazeer and Daniel De Freitas Adiwarsana, returned to Google in August as part of a technology licensing deal with Character.AI.

Google and Character.AI are completely separate, unrelated companies, Google spokesman Jose Castaneda said in a statement.

“User safety is a top concern for us, which is why we have taken a careful and responsible approach to developing and introducing our AI products, with rigorous testing and safety processes in place,” he added.


