Adobe Caught Selling AI-Generated Images of Israel-Palestine Violence

The misinformation is coming from inside the tech industry.


As first spotted by Australian news outlet Crikey, software giant Adobe has been caught selling AI-generated images of the Israel-Hamas war. It's a shocking and morally reprehensible instance of a company directly profiting from the spread of misinformation online.

A quick search for "conflict between Israel and Palestine" on the company's Adobe Stock website (a service that gives subscribers access to a library of generic stock images and, increasingly, AI-generated ones) turns up photorealistic images of explosions in dense urban environments that closely resemble the real-life carnage currently unfolding in Gaza.

Another image shows a "mother and child in destroyed city in Palestine Israel war conflict," a devastating scene that was entirely AI-generated. In fact, it's one of a series of 33 images that all share a similar composition.

And yet another shows "destroyed and burnt buildings in the Israel city."


These images all appear to have been submitted by Adobe Stock users and were seemingly not generated by Adobe itself.


However, while they're technically tagged as "generated with AI," a requirement for all user-submitted works, some of these images are already making the rounds on other parts of the web, as Crikey found, where they could easily mislead unsuspecting members of the public.

A simple reverse image search on Google confirms this: one photorealistic AI image of a huge explosion has already been used by a number of small publications.

After all, without closely examining these images for telltale signs of having been generated by an AI, like misaligned windows or mismatched lighting and shadows, they could easily pass for the real thing.


AI image generators like OpenAI's DALL-E, Stable Diffusion, and Midjourney have made massive technological leaps over the last 12 months. Long gone are the days of obvious glitches or horrifying animal monstrosities.

Consequently, AI-generated images are getting huge amounts of visibility online. Earlier this year, Futurism found that the top image result on Google for the name of famed realist artist Edward Hopper was an AI fake.


Instead of treading into the world of generative AI carefully, Adobe has chosen to embrace the tech with enthusiasm.


It made a big splash last month by taking its generative AI model, Firefly, out of beta and making it an integral, easily accessible feature of its widely used Photoshop software. The company even added a new annual bonus scheme for Adobe Stock contributors, actively incentivizing them to let their work be used to train its AI model.

But that kind of fervor isn't always to everybody's benefit. By marketing AI-generated images this way, Adobe is also actively undermining the work of photojournalists. In many ways, it's yet another instance of AI threatening to take a big bite out of the livelihoods of the very people who took the original images these image-generating algorithms were trained on in the first place.

It's a particularly concerning and ethically dubious example, considering the danger war photographers put themselves in to document the harsh realities of human conflict.

Worse yet, these kinds of images don't just risk spreading misinformation; they also actively undermine the trust we place in the news we read every day.

"Once the line between truth and fake is eroded, everything will become fake," Wael Abd-Almageed, a professor at the University of Southern California’s school of engineering, told the Washington Post last year. "We will not be able to believe anything."


More on AI images: Google's Top Result for "Johannes Vermeer" Is an AI Knockoff



