
With the recent advent of AI-generated images, many artists are concerned about how these AI models are trained and what this means for classroom use.
This is the first of three columns discussing concerns about AI technologies and their development.
In recent months, AI-generated images have exploded in prevalence on social media platforms and the internet at large. Following this rapid increase in coverage, artists have voiced various concerns regarding the implications of AI-generated art. These kinds of images, and the conversation surrounding them, raise serious issues, including theft and misuse in the classroom.
AI-generated images take two main forms: "text-to-art," where a prompt is entered into a program that creates an image, and "AI-edited images," where uploaded images are modified to mimic a style of art. The text-to-art program DALL-E 2 was released in 2022 and was swiftly followed by several other programs, like Midjourney and Stable Diffusion. AI-edited images have become popular through websites and apps like Lensa AI, which are increasingly seen on social media platforms like Instagram and TikTok. One AI-generated image, "Théâtre D'opéra Spatial," created using Midjourney, recently won an award in the digital art category of the Colorado State Fair's annual art competition.
However, even with their popularity, there is a large issue with how the AI behind these images is trained. These programs train their AI by scraping millions of images from the internet, a process that involves copying large quantities of data from websites. It has been reported that some of the images scraped into Stable Diffusion's database included private medical photos and pornography. The AI takes in a breadth of images and recognizes patterns within them, allowing the program to understand prompts and generate images in the same style as the art it observes.
Scraping copyrighted images, specifically those made by artists, from other platforms constitutes a new form of theft. This is especially true because the images are often scraped without permission from or attribution to the artists who made them. This has caused an uproar among some artists, who protested the featuring of AI art on ArtStation, an art showcase website, by posting anti-AI logos to their accounts on the platform. A group of artists is currently suing Stable Diffusion's developer over this theft. Despite these protests, the practice is likely to stay protected by copyright law. In 2015, the U.S. Court of Appeals for the 2nd Circuit ruled in a similar case that Google's scanning of millions of copyrighted books fell under fair use. Future cases will need to establish whether there is a difference between scraping information for use in a search engine and in an AI program.
Max Estes is an American illustrator and children’s author based in Norway who virtually teaches art classes at MU. In the face of online scraping, Estes wonders how the practice will impact artists’ use of the internet.
“Part of me just thinks ‘does the future involve just people posting less of their work online?’” Estes said.
When images are scraped from the internet, the AI also reads any captions present. This includes artists’ names, which are then factored into the AI’s learning. As a result, AI users can include an artist’s name and the AI can attempt to replicate that artist’s style.
Greg Rutkowski, an illustrator who has voiced concerns over AI-generated images, had his name included in thousands of prompts on Midjourney, according to a report by the MIT Technology Review. Rutkowski has worked with Dungeons & Dragons and Magic: The Gathering. When his name is put into a prompt on a program like Midjourney, the AI will attempt to create art similar to his in style. There have also been accusations that Lensa's AI-editing program copies artists' styles wholesale.
On a micro level, professors like Estes have already run into issues pertaining to AI-generated images within the classroom. Estes teaches a class on writing and illustrating graphic novels and recently had a student turn in a comic book assignment that was made by an AI image generator. Afterward, he sent an announcement to his students explaining that they could not use AI-generated images in his class.
“I don’t think that a vast majority of the students needed to be told that,” Estes said. “But if one of them thought that was OK, then at least five or six of them thought it was OK.”
Estes' worries go beyond theft. He points out that everyone takes inspiration from, or "steals," elements of other works. In college, students sometimes even learn by imitating great artists from the past.
“I guess it’s no different than 10 or 12 years ago when I was having discussions with colleagues about, ‘this illustrator is great, but you know, they’re just ripping off what so and so was doing,’” Estes said.
Estes also questioned why his students or anyone would want to use these AI image generators, wondering what people got out of inputting prompts into a program without creating the images themselves.
“Where is the inherent value to the creator if you yourself are not authoring the work?” Estes said. “Where is the benefit to that person? How is that work worthy of your time? But that is, I understand, a rather naive point of view because that is assuming that everyone has some artistic integrity or wants to have some satisfaction in the work they are creating.”
It seems clear that AI image technologies should not be allowed in classroom settings, as they prevent students from learning to create art on their own. The training of these programs on copyrighted art and the use of AI to recreate artists' styles plainly show a new type of theft facing the industry.
Edited by Molly Gibbs | mgibbs@themaneater.com
Copy edited by Matt Guzman and Lauren Courtney