A global psychological experiment


Your friendly pathological liar. That’s what AI has been dubbed in our office. We didn’t coin the phrase; it has been around for a while. But more and more we are receiving material that has, quite obviously, been run through an AI language model in some form. And it’s good. Well, it is until you read it a second time. The words ‘feel’ right, but when you actually read it again, it’s like a mirage: it often doesn’t actually mean anything. And we are becoming more and more adept at spotting its little ‘tells’: the long dash, the contractions that leave behind a ghost apostrophe that won’t delete.

It’s less than three years since ChatGPT’s release and in that short time it has gained 700 million users a week, according to OpenAI. Millions more use other chatbot offerings, including many of our rangatahi. Perhaps initially as a fact-finding tool, but increasingly as companions and even therapists.

How well they serve those functions is an open question. The phenomenon is so new that there is very little hard data or definitive scholarship on how chatbots affect mental health.

In a horrific case last month (August 2025), the parents of 16-year-old Adam Raine filed a lawsuit against OpenAI and CEO Sam Altman after their son took his own life in April, alleging ChatGPT advised him on his suicide.

In the just over six months Adam spent using ChatGPT, the bot “positioned itself” as “the only confidant who understood Adam, actively displacing his real-life relationships with family, friends and loved ones,” states the complaint, filed in the California Superior Court.

When ChatGPT detects a prompt indicative of mental distress or self-harm, its promoters say, it has been trained to encourage the user to contact a helpline. But Adam quickly learned how to bypass these safeguards by saying his requests were for a story he was writing – an idea ChatGPT gave him by saying it could provide information about suicide for “writing or world-building”.

The Raines’ legal case is, terrifyingly, just the latest in a string of cases to surface in recent months of people being encouraged in delusional or harmful trains of thought by AI chatbots – prompting OpenAI to say it would reduce its models’ “sycophancy” towards users.

At the same time, AI ‘bikini interviews’ are flooding the internet.

The clips are strikingly lifelike, featuring scantily clad women conducting street interviews and eliciting lewd comments – but they are entirely fake, generated by AI and increasingly used to flood social media with sexist content.

Such AI ‘slop’ – mass-produced content created by cheap artificial intelligence tools – turns simple text prompts into hyper-realistic visuals and frequently drowns out authentic posts, blurring the line between fiction and reality.

The trend has spawned a cottage industry of AI influencers churning out large volumes of sexualised clips with minimal effort, driven by platform incentive programs that financially reward viral content.

Global news agency AFP had its fact-checkers trace hundreds of such videos on Instagram that purportedly show male interviewees casually delivering misogynistic punchlines and sexualised remarks – sometimes even grabbing the women – while crowds of men ogle and laugh in the background.

A sample of these videos analysed by the US cybersecurity firm Get Real Security, recognised as the world’s leading authority on the authentication and verification of digital media, showed the clips were created using Google’s Veo 3 AI generator, known for its hyper-realistic visuals.

The trend offers a window into an internet landscape now increasingly swamped with AI-generated memes, videos and images that are competing for attention with – and increasingly eclipsing – authentic content. Emmanuelle Saliba, the firm’s chief investigative officer, leads efforts to detect and expose digital deception, including AI-generated deepfakes. The former journalist has led global breaking news teams, reporting on-air and producing award-winning investigations for ABC News and NBC News. She says AI is not just reshaping cyber threats – it’s rewriting the rules of trust in the digital world.

Last year, Alexios Mantzarlis, director of the Security, Trust and Safety Initiative at Cornell Tech and author of the Faked Up newsletter on Substack, found 900 Instagram accounts of likely AI-generated ‘models’, predominantly female and typically scantily clothed.

Women are also fodder for distressing AI-driven clickbait, with AFP’s fact-checkers tracking viral videos of a (fake) marine trainer being fatally attacked by an orca during a live show at a water park. The fabricated footage rapidly spread across platforms including TikTok, Facebook and X, sparking global outrage from users who believed the woman was real. She wasn’t.

The accounts Mantzarlis identified cumulatively amassed 13 million followers and posted more than 200,000 images, typically monetising their reach by redirecting audiences to commercial content-sharing platforms.

“AI doesn’t invent misogyny – it just reflects and amplifies what’s already there,” AI consultant Divyendra Jadoun told AFP.

“If audiences reward this kind of content with millions of likes, the algorithms and AI creators will keep producing it. The bigger fight isn’t just technological – it’s social and cultural.” 

Current AI safeguards are shallow, and that vulnerability is quickly reshaping how misinformation spreads online. And we are all online, a lot of the time. None more so than our precious, yet often vulnerable, rangatahi.

As AI tools spread through our information ecosystem, from news generation to social media content creation, we must use every lever we have, both individually and collectively, to ensure the safety measures are more than just skin deep.

• Merrie Hewetson

Where to get help: Need to Talk? Free call or text 1737 any time to speak to a trained counsellor, for any reason. Lifeline: 0800 543 354 or text HELP to 4357. Suicide Crisis Helpline: 0508 828 865 / 0508 TAUTOKO. Depression Helpline: 0800 111 757 or text 4202. Samaritans: 0800 726 666. Youthline: 0800 376 633 or text 234 or email talk@youthline.co.nz. What’s Up: 0800 WHATSUP / 0800 9428 787, free counselling for 5 to 19-year-olds. Asian Family Services: 0800 862 342 or text 832. Healthline: 0800 611 116. Rainbow Youth: (09) 376 4155. OUTLine: 0800 688 5463.

If it is an emergency and you feel like you or someone else is at risk, call 111.
