Is it possible for an object to be racist? In 2017, a senior Nigerian-born Facebook employee found himself in a bathroom full of apparently useless soap dispensers. It turned out they did work – but only for his white friends. The dispensers relied on an electronic sensor that didn’t recognise black skin, because it had never been tested on a wide enough range of skin tones.
The tech designer shared a video of his experience on Twitter, commenting: “If you have ever had a problem grasping the importance of diversity in tech and its impact on society, watch this video.”
As any wheelchair user will tell you, physical objects can have effects far beyond their job description. In the computer age, artificial intelligence (AI) technologies seem magical – but they can be biased and socially damaging in unexpected ways.
Imagine the frustration felt by residents of the English town of Scunthorpe on finding themselves blocked from opening accounts with the web portal AOL by the giant company’s profanity filter, which objected to a string of letters buried in the town’s name. We feel for them, although it’s still kinda funny – for non-Scunthorpe-ites, anyway.
And then there are the AI-human interactions so horrendous that they have darkened the entire story of AI: the death of 49-year-old Elaine Herzberg, struck and killed in 2018 by a self-driving test car as she wheeled her bicycle across a road in Arizona, springs to mind.
The tragedy encapsulates the ‘idiot genius’ problem with AI. The car detected Elaine with an array of radar and laser-based lidar sensors, which fed an indescribably complex network of algorithms. And yet the car didn’t even swerve. Nobody had prepared the car’s ‘brain’ for the shape of a woman pushing a bicycle with shopping bags hanging from its handlebars – perhaps because many of the car’s algorithms had been written by other algorithms.
“In some ways we’ve lost agency,” writes Ellen Ullman, an American programmer. “When programs pass into code and code passes into algorithms and then algorithms start to create new algorithms, it gets farther and farther from human agency. Software is released into a code universe which no one can fully understand.”
Machine learning, a fundamental part of AI research, is the study of algorithms that learn through experience – systems that have been called the ‘cogs and gears’ of the modern age. They keep planes in the air, diagnose disease, recommend Netflix shows and, in the US, tell judges whom to send to jail. This is the technology behind search engines like Google Search and ‘generative’ tools like ChatGPT, GPT-4 and AI art generators.
This week saw the ‘production release’ of ChatGPT, a free AI chatbot built on ‘large language model’ (LLM) software – a test version was released late last year – trained by analysing billions of online documents and the inputs of those who use it.
ChatGPT can write a 2,000-word university-level essay in 20 minutes. Students, harried and time-poor, have been early adopters, using it to write essays on subjects they don’t understand – although some have pointed out that the software ‘stimulates creativity’, and others say it helps non-native English speakers to write up research, or to create simple-language summaries of science papers.
ChatGPT has passed the US medical licensing exam, as well as business, coding and law exams.
A researcher into academic integrity, Imperial College London scientist Thomas Lancaster, told The Guardian: “It’s an incredibly tricky problem because this has almost appeared out of nowhere.”
While ChatGPT has been hailed for its ability to organise information and churn out newsletters in the chatty, neutral or academic tone of your choice, it is not reliably accurate.
“The ways large language models get things wrong can be fascinating,” writes tech commentator Ethan Zuckerman. “These systems are optimised for plausibility, not accuracy. If you ask for academic writing, it will output text complete with footnotes… [but] many of the works in these footnotes have been ‘imagined’ by the system, a phenomenon called ‘hallucination’.”
“This facility with form and content,” he says, “makes ChatGPT an extremely efficient bullshitter.”
Gordon Crovitz, a former publisher of The Wall Street Journal, warned recently that ChatGPT is “the most powerful tool for spreading misinformation that has ever been on the internet”.
This week RNZ’s Mediawatch talked to Crovitz, co-founder of NewsGuard, a US-based company working to rate New Zealand’s news sites for reliability. NewsGuard is using human journalists to train generative AI to “minimise the potential for misinformation-spreading on an epic scale”, RNZ presenter Colin Peacock told listeners.
“We thought this was a journalistic problem, not really a technological problem, so we brought to it a journalistic solution, which is to identify nine basic apolitical criteria of journalistic practice,” Crovitz told him. For example: “Does a site disclose its ownership? Does it have a corrections policy?”
Other signs of reliability, he says, are contact details and bylines, like the ones you see under every story in, ahem, Gulf News.
Although software like ChatGPT is transformative, it values speed over accuracy. To save ourselves from drowning in a tide of AI-generated BS, we need to know where the truth lives and, as Zuckerman suggests, “the ways that [this technology] breaks and fails”. • Jenny Nicholls