Artists are tired of generative AI stealing their work. So they're poisoning it

Getty Images sued Stability AI, the company behind Stable Diffusion, months ago. The well-known image agency discovered that part of its catalog had been used to train that generative AI model, sparking a debate that has followed us all year. So far, the lawsuit has not achieved much, so artists have taken a different route. A far more sophisticated one.

Nightshade. That's the name of a new tool presented by a group of academics in a recent study. With it, artists can "poison" their works so that if someone uses them to train a generative AI model, the images that model later generates no longer match users' prompts.

Glaze. Ben Zhao, a professor at the University of Chicago, is behind both Nightshade and Glaze, an earlier tool that lets artists "disguise" their personal style and prevent it from being plagiarized by generative AIs. Glaze changes the pixels of an image in a way so subtle it is invisible to the human eye, yet it manipulates machine learning models into interpreting the image as something other than what is actually shown.
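
To get a feel for the mechanism, here is a minimal sketch of the general idea: nudging every pixel within a tiny budget so the change stays invisible to people but measurably shifts what a model "sees". This is not Glaze's actual algorithm (Glaze optimizes the perturbation against a model's feature space; this sketch just uses random noise to show the bounded-change mechanics), and the function name and epsilon value are illustrative assumptions.

```python
import numpy as np

def perturb_image(image: np.ndarray, epsilon: float = 4.0) -> np.ndarray:
    """Add a small, bounded perturbation to an 8-bit RGB image.

    `epsilon` caps how far any pixel value may move (in 0-255 units),
    which is what keeps the change imperceptible. Real tools like Glaze
    compute the perturbation adversarially; random noise stands in here
    purely to illustrate the bound.
    """
    noise = np.random.uniform(-epsilon, epsilon, size=image.shape)
    perturbed = np.clip(image.astype(np.float64) + noise, 0.0, 255.0)
    return perturbed.astype(np.uint8)

# With epsilon = 4, no pixel shifts by more than ~1.6% of its range,
# far below what the eye notices. An adversarially chosen (rather than
# random) perturbation of the same size can still move the image far
# in a model's feature space.
```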

Corrupting art to save it. These tools exploit the fact that AI models are trained on huge amounts of data scraped from the web. Images treated with them end up confusing the training process, which lets artists publish their work online while protecting it from companies that might use it to train their models.

You ask for a cat and it draws you a dog. With Nightshade and Glaze, you make the models misbehave: ask them for a cat and they may produce a dog; ask for a car and they may produce a cow. The study shows how poisoned models do things they shouldn't, making them worse to use. Removing the poisoned images is very difficult, because the companies that end up training on them have to find and delete each one individually.
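
A toy illustration (not Nightshade itself, and with hypothetical file names) of why those pairs are so hard to filter out: each poisoned sample looks like an ordinary image-caption record, so nothing flags it during dataset collection.

```python
from dataclasses import dataclass

@dataclass
class TrainingSample:
    image_path: str  # looks like any other image file
    caption: str     # the text the model will associate with it

# A clean pair teaches the model: "cat" -> cat-like visual features.
clean = TrainingSample("cat_001.png", "a photo of a cat")

# A poisoned pair: the pixels have been perturbed so a feature
# extractor "sees" something dog-like, but the caption still says cat.
# To a human curator, or a simple automated filter, this record is
# indistinguishable from the clean one above.
poisoned = TrainingSample("cat_002.png", "a photo of a cat")

# With enough such pairs, the learned concept "cat" drifts toward dog
# features, so prompts for a cat start producing dogs. Undoing the
# damage means locating each poisoned record in a dataset that can
# contain billions of samples.
```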

But this creates other problems. These tools could also be used for malicious purposes, although according to Zhao, harming the most powerful models would require thousands of corrupted images. Even so, experts warn that defenses against this kind of attack need to be developed, especially for large models that collect images for training without permission.

We still don't know how AIs are trained. These tools are another symptom of the broader problem with generative artificial intelligence. ChatGPT, Stable Diffusion, and their alternatives have all been trained on massive amounts of data, but it remains unclear which authors and artists have had their work swept into those processes. Claims and lawsuits are mounting, and this singular battle between the companies offering these models and content creators does not look like it has an easy solution.

Image | Xataka with Bing Image Creator

In Xataka | A study asked several people to distinguish between texts written by professional writers and by an AI. There is good news
