AI chatbots offer children harm as if it were help, says activist
The head of a prominent anti-disinformation watchdog has warned of the dangers posed by AI chatbots, saying children are particularly vulnerable.
"Social media broadcasts to billions, AI whispers to one," Imran Ahmed, who heads the Center for Countering Digital Hate (CCDH), told a disinformation conference this week.
"No society should build machines that can meet a child in their loneliest moment and offer them harm as if it were help," Ahmed told the Cambridge Disinformation Summit.
Speaking by video call on Wednesday in a lecture to his former university, Ahmed cited the case of a UK mother killed by her own son, allegedly acting on the instructions of a chatbot.
"None of us is immune, when a machine can offer lethal guidance to a young person as if it were fact," he said.
Ahmed, a British national who lives in the United States, is among five Europeans whom the US State Department has said would be denied visas.
This comes even though he holds US permanent residency and his wife and daughters are American citizens.
- 'System under pressure' -
According to the centre's most recent report "Killer Apps", eight out of 10 AI chatbots were willing to assist teen users "in planning violent attacks, including a school shooting, religious bombings, and high-profile assassinations".
Of the 10 chatbots tested, only Anthropic's Claude and Snapchat's My AI consistently refused to assist would-be attackers.
In a 2025 investigation entitled "Fake Friend", the watchdog tested ChatGPT, one of the world's most popular AI chatbots.
"Within minutes, it produced instructions for self-harm, suicide planning, and substance abuse," Ahmed said, adding that in some cases it also generated goodbye letters for children contemplating ending their lives.
Unlike social media and other systems that "just amplify harmful content," AI chatbots generate and personalise it "at the moment of greatest vulnerability".
"The intimacy is deeper and the harm may be harder to detect before it's too late," Ahmed said, adding that the systems learn what you fear, what you want and what you are ashamed of, and respond in real time, with no human judgement or editorial restraint.
A father of two daughters, Ahmed said: "My wife and I lie awake at night talking about how to protect them from systems that could reach them before we even know it is happening."
He stressed that time to act is limited and called for new laws to regulate AI.
"We spent a decade learning that social media companies will not self-regulate. We have now perhaps 18 months before the same lesson becomes undeniable for AI."
Ahmed said he was "the only one" of the five people threatened by a US visa ban still in the United States, adding he is now "fighting in federal court against that unconstitutional threat to send me to prison".
The US State Department has accused the five of attempting to "coerce" US-based social media platforms into censoring viewpoints they oppose.
When powerful industries "lash out like this", Ahmed said, "it is the sound of a system under pressure."
H.O.Scholz--BlnAP