Israel-Iran conflict sees massive surge in fake, AI-generated content


My first reaction to seeing some of those clips of Iran’s missiles falling on Israel was disbelief. It wasn’t the fact that the Iron Dome could be breached that surprised me — drop enough projectiles on it, and some will reach their targets. I thought I was looking at AI-generated content shared on social media to mislead and manipulate users.

Only after checking online for more information and seeing multiple videos did I start trusting the social media posts claiming that Iran was bombing Israel.

That said, if you saw the image of a downed F-35 in the desert, you probably realized fairly quickly that it was AI-generated. But even I stopped to consider whether the image might be real. What if Iran could shoot down the most sophisticated US fighter jet?

I say this as someone who has been following AI development closely for well over two years. I’m well aware of what generative AI software like ChatGPT and Gemini can do, and how easily amateur internet users can create fake images and lifelike AI videos to fool the masses.

Now imagine people who have never used AI tools seeing some of these images and clips for the first time. They might come away with a very different view of the Israel-Iran conflict. As it turns out, some of these AI-generated creations are going viral online, amassing tens of millions of views.

According to the BBC, its BBC Verify team found dozens of social media posts created to amplify Iran’s response. Three AI videos alone amassed over 100 million views across multiple platforms.

It’s not just pro-Iran generative AI content that surfaced on social media with incredible speed. Some pro-Israeli accounts posted old clips of protests in Iran, sharing them as evidence of growing dissent following Israel’s strikes.

Disinformation is a part of war. I’m not surprised that each side would try to shape the narrative in their favor. But generative AI software makes it incredibly easy to generate content that can go viral online. All you need is the right program, a text prompt, and money to create images and videos that would make anyone question reality.

Not all of this viral fake content is the product of state-sponsored operations, though such organizations are likely behind a large share of the AI fakes circulating online.

The BBC notes that some people do it for the financial gain that comes with going viral. A pro-Iranian account called Daily Iran Military has doubled its follower count on X to 1.4 million in under a week. The account shared fake AI content, but it is not associated with Tehran.

Other parties may also be looking to exploit the Israel-Iran conflict for their own gain. Lisa Kaplan, CEO of the analyst group Alethea, identified accounts linked to Russian intelligence operations that promoted AI fakes featuring the American F-35 fighter jets Israel used in its strikes on Iran.

One AI-generated photo of a downed F-35 has been making the rounds on social media, and a clip purporting to show an Israeli F-35 being shot down received over 21 million views on TikTok before it was removed. That clip actually came from a video game simulator.

The object of such campaigns is to make a weapon like the F-35 appear less lethal and less sophisticated than it is, so that others start doubting the supremacy of certain American weapons, which works to Russia’s advantage.

The Israel-Iran conflict will probably be the first war fought online via AI-powered misinformation campaigns at this scale. It’s very likely that millions of people will believe what they see and share the viral content before checking its accuracy.

AI platforms that make it incredibly easy to create such content will be partly responsible. Services like ChatGPT, Gemini, and others let users create fake photos with incredible ease. Most of them do not have visible watermarks. And on social media, virtually no one checks the metadata that might include permanent but invisible watermarks.
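For readers who do want to look under the hood of a downloaded image, here is a minimal, illustrative Python sketch (not an authoritative detector) that scans a file for a few provenance-related markers, such as C2PA/Content Credentials manifests, XMP packets, or the IPTC "trainedAlgorithmicMedia" label some AI tools embed. The specific marker strings are assumptions about common labeling schemes, and social platforms often strip metadata on upload, so finding nothing proves nothing.

```python
# Minimal sketch: scan an image file's raw bytes for common AI-provenance markers.
# A clean result does NOT mean the image is authentic; platforms frequently strip
# metadata, and fakers can omit it entirely. Stdlib only.

import sys

# Byte patterns that commonly indicate embedded provenance or AI-generation labels.
MARKERS = {
    b"c2pa": "C2PA / Content Credentials manifest",
    b"trainedAlgorithmicMedia": "IPTC digital source type used for AI-generated media",
    b"<x:xmpmeta": "XMP metadata packet (may name the creating tool)",
}


def scan_for_provenance(path: str) -> list[str]:
    """Return notes for any provenance markers found in the file's bytes."""
    with open(path, "rb") as f:
        data = f.read()
    return [note for pattern, note in MARKERS.items() if pattern in data]


if __name__ == "__main__":
    hits = scan_for_provenance(sys.argv[1])
    if hits:
        print("Possible provenance/AI markers found:")
        for note in hits:
            print(" -", note)
    else:
        print("No embedded provenance markers found (metadata may have been stripped).")
```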

Then there are social platforms like X, TikTok, Instagram, and others, which make it very easy for anyone to share such content without real oversight. Not every fake is taken down immediately; TikTok removed the F-35 video only after it had received over 20 million views, by which point the harm was already done.

The Israel-Iran conflict will also be a testbed for other potential misinformation campaigns. Bad actors might use the war to come up with strategies to weaponize fake AI content for other campaigns aimed at misleading the masses in the months and years to come.

Make sure to check the BBC report for more details about the fake AI content related to the conflict, as well as examples showing the kind of misleading images you might see online.
