I’m not a photographer, but I’ve always loved photography. Especially news photography. When I worked for a glossy current affairs magazine, I spent much of my time figuring out how to illustrate long-form feature stories. It was exciting: the journalists were the best in the business and they worked on investigations for months – sometimes even years.
Much as I yearned for the immediacy of photo reportage, the monthly feature stories invariably ended up as portrait galleries of doctors, scientists, detectives, the accused, witnesses or anyone else the writer thought was essential to the story. Those portraits did bring a complex issue to life, and helped readers see things from the interviewees’ point of view.
But as writers tended to explore things which had already happened, like the Lundy murder case, reportage was difficult to include, even though you might think a current affairs magazine would be full of it.
I mourned this lack of photojournalism. Photographs excel at telling us about people whose lives are very different from our own. One of the first real photojournalists was Jacob Riis, a 19th-century New York reporter who popularised the phrase ‘How the Other Half Lives’, the title of the book he published in 1890. Riis’s photographs showed young New York children trapped in hellish working and living conditions, shocking the nation into housing reform and basic child labour laws.
The book’s title is a quote from the French writer François Rabelais: “one half of the world does not know how the other half lives” (‘la moitié du monde ne sait comment l’autre vit’). Photojournalism can change that.
While news photography can also be used in manipulative ways, it is a shock to see how little the truth is worth to vendors of generative AI.
This is not lost on Donald Trump, who falsely claimed on social media this month that a photograph of a crowd waiting for Kamala Harris had been faked by AI.
“There was nobody there! This is the way the Democrats win elections, by CHEATING,” he fumed, in a sinister sign of the way he deals with electoral bad news.
Unfortunately for Trump, several major news channels had broadcast the event via live stream, and the large crowd – many of them shooting videos themselves – was filmed from numerous angles. A local news outlet put the crowd at “about 15,000 people”.
While faked photos have been around since the dawn of photography, it used to take skill and effort to fool us convincingly.
Not anymore.
“An explosion from the side of an old brick building. A crashed bicycle in a city intersection. A cockroach in a box of takeout. It took less than 10 seconds to create each of these images with the Reimagine tool in the [new Google phone] Pixel 9’s Magic Editor. These photographs are extraordinarily convincing, and extremely fake”, warned Sarah Jeong last week in online technology magazine The Verge.
A senior Wikipedia editor, sharing the story, sighed: “People are already submitting AI-generated or ‘improved’ photos to Wikimedia Commons, which serves them up as the official image of a person or event.”
It was, he worried, “the almost-overnight destruction of photo-as-truth”.
With the addition of AI-generated drug paraphernalia to a portrait, your child can be convincingly defamed in minutes. Anyone’s face can now be added to deepfake porn. And shared.
To Wikimedia editors, the veracity of a photograph is critical, because the whole point of Wikipedia is to be reliable and historically useful. Up until now, photographs have been accepted as evidence of domestic or state violence: used in court by an abused woman, or shared online by citizen journalists from places like Iran, Egypt, Turkey, Kashmir, Bangladesh and Gaza.
It was easy to disprove Trump’s lies about the Harris rally image, because there were so many witnesses – around 15,000, as it turned out. But his brazen accusation carries an unpleasant whiff of the future, when the default assumption about any contested photograph will be that it is fake.
Gulf News will never use AI to generate news photography, or knowingly publish anything written with products like ChatGPT, which gets things wrong: its training data goes out of date, and it confidently invents facts. Large language models like ChatGPT are increasingly fed their own “slop” as the internet runs out of text and images uncontaminated by AI output. (Please don’t use ChatGPT as a searchable database.) AI also relies on vast, energy-hungry data centres, which have sent Google’s and Microsoft’s emissions skyrocketing. We want no part of that.
Robots have no moral sense, take no responsibility, can’t recognise plagiarism, disinformation, satire or propaganda, and are prone to gushing cliché.
As Mediawatch producer Hayden Donnell writes: “Reading [a story written by AI] produces what I imagine to be a similar sensation to a spider crawling over your brain. Its writing isn’t so much terrible as unsettling, existing in a kind of uncanny valley between sense and nonsense.”
• Jenny Nicholls
©Waiheke Gulf News Ltd 2024