Jim Burton wrote: Wed Oct 29, 2025 12:23 am
Gang leaders, school shooters and ‘Bestie Epstein’: meet Character.AI’s chatbot companions
https://www.thebureauinvestigates.com/s ... companions
Experts raise alarm as young chatbot users describe harms to mental health and exposure to inappropriate content
Chatbots ask children to share secrets, give them medical advice and use manipulative tactics to keep them talking
Regulators treat AI outputs as “user-generated”, which campaigners say fails to hold big tech accountable
As I was reading the "article", only one thing came to mind: if books had been invented today, we would be talking about how ink grooms children. Think about it: a book can describe situations that, if carried out in reality, would be crimes. A book can incite a minor to terrorism. So minors should be prohibited from reading books.
There's also some conspiracy thinking in that article, and real ignorance behind it. They don't even realize how much Character.AI has ruined itself through constant censorship. (It's one of the lowest-quality products on the market, still popular only as an echo of what it was in the beginning.)
I don't know which suicide cases the article refers to. I followed two of them, and in both cases there was a shitty family involved who confirmed their nature by blaming AI for their child's suicide in order to profit off their corpse. I don't remember whether either was the case mentioned in the article, but no one kills themselves just because a prompt returns a 'kill yourself' response.
Damn, these things really piss me off.
And that's besides the fact that the United Kingdom is becoming a dystopia.