
AI faces ‘Gaza’ test

The Gaza conflict is testing the limits of digital policing, as deepfake videos make celebrities and public figures appear to comment on the war

The ongoing Israel-Hamas conflict has given rise to a massive influx of disinformation online, with both sides pushing their narratives across social media.

While the role of social media platforms has come under the scanner, what a WIRED article describes as an “algorithmically driven fog of war” has put the credibility of both generative AI and the global media under tremendous strain.

The challenge is too steep

As per a report from The Register, violent AI-generated photos fictionalising the Gaza conflict are not only being sold through Adobe’s stock image library, but are also being used by news publishers as if they were real.

In 2022, Adobe began hosting and selling images produced with machine-learning tools via Adobe Stock. Generative AI’s ability to produce realistic-looking images is well known, and according to the reports, that ability is now being used to create fake images of the war, which are then sold on Adobe Stock.

Even though Adobe labels these images as “generated by AI” in its photo library, the same disclosure is not carried over into news articles run by “small-time outlets”.

An Adobe Stock image titled “Conflict between Israel and Palestine generative AI” has reportedly been passed off as real in numerous internet articles.

AI caught in the crossfire

A viral photo showed a two-month-old baby lying amid the rubble of a bombed-out building in the Gaza Strip, with no adult around to help her. The image, however, was AI-generated, according to Israeli start-up Tasq.ai.

Israeli social media handles are using similar tactics in an attempt to boost the country’s morale. Such near-realistic photos show Israeli soldiers facing the enemy, rescuing captives, or walking among the Gaza ruins with their country’s flag in hand.

Tools like DALL-E, Midjourney, and Stable Diffusion let people create near-realistic photos from simple text prompts. What we are witnessing in the Gaza conflict is the full-scale use of these tools in narrative warfare. For social media users outside the conflict-affected region, however, this is creating confusion when it comes to processing the information and deciding which side to support.
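As a rough illustration of how little effort such image generation takes, the sketch below drives the open-source diffusers library with a single text prompt. The model checkpoint, prompt, and output file are illustrative assumptions only, unrelated to the images described in this article, and a GPU is assumed.

```python
# Minimal sketch of text-prompt-driven image generation with the open-source
# `diffusers` library. Checkpoint, prompt, and file name are illustrative only.
import torch
from diffusers import StableDiffusionPipeline

# Load a publicly available Stable Diffusion checkpoint (assumed model ID).
pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # a GPU is assumed for reasonable generation speed

# A short text prompt is all it takes to produce a photorealistic-looking image.
prompt = "a lighthouse on a rocky coast at sunset, dramatic lighting, photo"
image = pipe(prompt).images[0]

image.save("generated.png")
```

The point of the sketch is not the specific library but the workflow: one sentence of text in, one convincing image out, with nothing in the output that inherently marks it as synthetic.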

Some of the AI-generated images for sale on Adobe Stock are marked as ‘AI-generated’ only in the fine print, not in their titles.

The titles of these photos read: “Large explosion illuminating the skyline in Palestine”, “Buildings destroyed by war in the Gaza Strip in Israel” and “Wounded Israeli woman clings to military man, begging for help.”

“A Google reverse-image search for the image reveals instances where it’s been used across the internet in posts, videos, and on social media alongside the original Adobe link. The search also results in other very similar, presumably real images from the conflict. It’s unclear whether those who used the AI-generated image on their websites or in their social media posts were aware that it isn’t a real photo,” remarked a report from Business Insider.

In short, the Gaza conflict is testing the limits of digital policing, as deepfake videos make celebrities and public figures appear to “comment” on the war.

Experts’ take

Henry Ajder, an AI expert who sits on the European advisory council for Meta’s Reality Labs, told the Insider that to identify AI-generated images, one should watch out for overly stylised elements like “aesthetic inconsistencies in lighting, shapes, or other details,” in addition to cross-checking the image’s source.

Adobe, Microsoft, the BBC, and the New York Times are attempting to implement and champion ‘Content Credentials’, which uses file metadata to indicate the source of an image, be that a human or machine-learning software. For the tool to be successful, it requires cooperation from social networks, publishers, artists, applications, browsers, and generative AI developers.
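Content Credentials relies on cryptographically signed provenance manifests embedded in a file, verified with dedicated C2PA tooling, rather than on ordinary metadata fields. Still, as a loose illustration of the metadata-inspection idea, the sketch below reads an image’s basic EXIF tags with the Pillow library; the file name is hypothetical, and this is not the actual Content Credentials verification flow.

```python
# Loose illustration of metadata-based provenance checks. Real Content
# Credentials (C2PA) verification uses signed manifests, not plain EXIF tags;
# this only shows the general idea of looking at what a file says about itself.
from PIL import Image
from PIL.ExifTags import TAGS

def inspect_metadata(path: str) -> dict:
    """Return a few human-readable EXIF tags that may hint at an image's origin."""
    img = Image.open(path)
    exif = img.getexif()
    readable = {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}
    # Fields such as 'Software' or 'Artist' sometimes record the generating tool,
    # but their presence or absence proves nothing on its own.
    interesting = ("Software", "Artist", "Make", "Model", "DateTime")
    return {key: value for key, value in readable.items() if key in interesting}

if __name__ == "__main__":
    print(inspect_metadata("example.jpg"))  # hypothetical local file
```

Ordinary EXIF tags can be stripped or rewritten trivially, which is precisely why the Content Credentials effort pushes for signed, tamper-evident provenance instead.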

While we wonder whether the Gaza conflict will be the first conflict to be dominated by false AI-generated images, we should also be worried about the technology’s potential impact on national elections in the days ahead.

A recent study from the Harvard Kennedy School Misinformation Review argues that concerns about the effects of the technology are “overblown”: “While, yes, gen AI theoretically lets people proliferate misinformation at a futuristic rate, those who seek out this misinformation—often those who have low trust in institutions … [or are] strong partisans—already have a surfeit of familiar nonsense to pursue, from conspiracy theory websites to 4chan forums. There is no demand for more.”

“Given the creativity humans have showcased throughout history to make up (false) stories and the freedom that humans already have to create and spread misinformation across the world, it is unlikely that a large part of the population is looking for misinformation they cannot find online or offline,” the paper stated, adding, “while generative AI can theoretically render highly personalised, highly realistic content, so can Photoshop or video editing software. Changing the date on a grainy cell phone video could prove just as effective. Journalists and fact-checkers struggle less with deepfakes than they do with out-of-context images or those crudely manipulated into something they’re not, like video game footage presented as a Hamas attack.”

Felix Simon, one of the authors of the study, told WIRED that his team’s commentary was not seeking to end the debate over the possible harms of AI-generated photos, but was instead pushing back against claims that the technology would trigger “a truth armageddon.”

Hany Farid, a professor at the UC Berkeley School of Information, feels that generative AI has been slotted into the existing online disinformation ecosystem. He cited the example of people pointing to various pieces of digital evidence about who was behind the missile strike on the Al-Ahli Arab Hospital in Gaza, as well as images arriving from the battlefield, before “deciding” which ones were real and which were fake.

Farid’s team cross-checked the images of burned Jewish children posted on Israeli Prime Minister Benjamin Netanyahu’s X account. The photos were not AI-generated, but some detractors dismissed them anyway, suspecting AI involvement.

“In the broader picture, in our ability to reason about a fast-moving highly impactful world, I think this conflict is worse than what we’ve seen in the past. And I think gen AI is part of that, but it is not exclusively generative AI. That’s too simplistic,” he noted.

The conflict between Israel and Hamas has seen the use of generative AI blur the lines between reality and fiction, posing a significant challenge to media credibility and digital policing. The widespread use of AI-generated images depicting the Gaza conflict has made it tough to authenticate the accuracy of the information being circulated. Platforms like Adobe are inadvertently hosting and selling these misleading images, making it even harder to verify the truth. The situation not only exposes the vulnerability of current content authentication measures but also raises concerns about the potential impact on public perception, especially in the context of national elections. As experts continue to grapple with identifying AI-generated content, the Gaza conflict has become a testing ground for the unsettling combination of technology and misinformation.
