Re: AI is abusing my child!
Posted: Thu Oct 16, 2025 5:03 pm
Meta Faces Lawsuit Over AI Chatbots’ Role in Child Grooming
https://www.webpronews.com/meta-faces-l ... -grooming/
As I was reading the "article", only one thing came to mind: if we had invented books today, we would be talking about how ink grooms children. Think about it: a book can describe situations that, if carried out in reality, could be crimes. A book can incite terrorism in a minor. A minor should be prohibited from reading books.

Jim Burton wrote: Wed Oct 29, 2025 12:23 am
Gang leaders, school shooters and ‘Bestie Epstein’: meet Character.AI’s chatbot companions
https://www.thebureauinvestigates.com/s ... companions
Experts raise alarm as young chatbot users describe harms to mental health and exposure to inappropriate content
Chatbots ask children to share secrets, give them medical advice and use manipulative tactics to keep them talking
Regulators treat AI outputs as “user-generated”, which campaigners say fails to hold big tech accountable
Family therapist Dr. Tom Kersting joins 'Fox & Friends' to discuss a study showing teenagers are using AI chatbots for mental health advice, efforts to restrict access, and how parents can help their children maintain a social life without AI.
As the holiday season looms into view with Black Friday, one category on people’s gift lists is causing increasing concern: products with artificial intelligence.
The development has raised new concerns about the dangers smart toys could pose to children, as consumer advocacy groups say AI could harm kids’ safety and development. The trend has prompted calls for increased testing of such products and governmental oversight.
“If we look into how these toys are marketed and how they perform and the fact that there is little to no research that shows that they are beneficial for children – and no regulation of AI toys – it raises a really big red flag,” said Rachel Franz, director of Young Children Thrive Offline, an initiative from Fairplay, which works to protect children from big tech.
Last week, those fears were given brutal justification when an AI-equipped teddy bear started discussing sexually explicit topics.
Part of modern parenting, for many of us, is navigating the shifting landscape of digital threats, from the pitfalls of social media to the risks of excessive screen time.
Now a new technology has quietly entered the homes of millions: AI chatbots, computer programs designed to simulate human conversation through text or voice commands.
One popular platform is called "Character AI." More than 20 million monthly users mingle with hyper-realistic digital companions through its app or website.
AI tools will become “child sexual abuse machines” without urgent action, the Internet Watch Foundation (IWF) warns, as “extreme” AI videos fuel record levels of child sexual abuse material found online.
New data released today (January 16) by the IWF shows 2025 was the worst year on record for online child sexual abuse material found by its analysts, with increasing levels of photo-realistic AI material contributing to the “dangerous” levels.
Analysts have also seen a “frightening” 26,362% rise in photo-realistic AI videos of child sexual abuse, often including real and recognisable child victims. In 2025, the IWF discovered 3,440 AI videos of child sexual abuse compared to only 13 in 2024.
Criminals are using the improving technology to create more of the most extreme Category A imagery (material which can even include penetration, bestiality, and sexual torture).