
Controversial chatbot safety measures ‘a sticking plaster’


Chatbot platform Character.ai is changing the way it works for teenagers, promising it will become a “safer” place with added controls for parents.

The site is facing two lawsuits in the US – one over the death of a teenager – and has been described as a “clear and present danger” to young people.

It says safety will now be “infused” into the platform through new features that will tell parents how their child is using it – including how much time they are spending talking to chatbots, and which ones they talk to the most.

The platform – which allows users to create digital personas they can interact with – will get the “first iteration” of parental controls by the end of March 2025.

But Andy Burrows, head of the Molly Rose Foundation, called the announcement “a belated, reactive and wholly unsatisfactory response” which he said “seems like a sticking plaster response to their fundamental safety concerns”.

“Getting to grips with platforms like Character.ai and taking action against their continued failure to tackle entirely avoidable harm will be an early test for Ofcom,” he said.

Character.ai came under criticism in October, when chatbot versions of the teenagers Molly Russell and Brianna Ghey were found on the platform.

And the new safety features come as the platform faces legal action in the US over concerns about how it has handled child safety in the past, with one family claiming a chatbot told a 17-year-old boy that killing his parents was an “appropriate response” to them limiting his screen time.

New features include notifying users after they have been talking to a chatbot for an hour, and introducing new disclaimers.

Users will now be shown more warnings that they are talking to a chatbot rather than a real person – and be told to treat what it says as fiction.

And it is adding an extra disclaimer to chatbots that purport to be psychologists or therapists, telling users not to rely on them for professional advice.

Social media expert Matt Navarra said he believed the move to introduce new safety features “reflects a growing recognition of the challenges posed by the rapid integration of AI into our daily lives”.

“These systems aren’t just delivering content, they’re simulating interactions and relationships that can create unique risks, particularly around trust and misinformation,” he said.

“I think Character.ai is addressing a significant risk: the potential for misuse, or for young users to be exposed to inappropriate content.

“This is a smart move, and one that recognizes the evolving expectations around responsible AI development.”

But he said that while the changes were encouraging, he was interested to see how the safeguards would hold up as the role of AI grows.


