Database of 16,000 Artists Used to Train Midjourney AI, Including 6-Year-Old Child, Garners Criticism
https://www.artnews.com/art-news/news/midjourney-ai-artists-database-1234691955/
Tue, 02 Jan 2024 22:54:18 +0000

For many, a new year includes resolutions to do better and build better habits. For Midjourney, the start of 2024 meant having to deal with a circulating list of artists whose work the company used to train its generative artificial intelligence program.

During the New Year’s weekend, artists linked to a Google Sheet on the social media platforms X (formerly known as Twitter) and Bluesky, alleging that it showed how Midjourney developed a database of time periods, styles, genres, movements, mediums, techniques, and thousands of artists to train its AI text-to-image generator. Jon Lam, a senior storyboard artist at Riot Games, also posted several screenshots of Midjourney software developers discussing the creation of a database of artists to train its AI image generator to emulate.

https://x.com/JonLamArt/status/1741545927435784424?s=20

The 24-page list of artists’ names used by Midjourney as the training foundation for its AI image generator (Exhibit J) includes modern and contemporary blue-chip names, as well as commercially successful illustrators for companies like Hasbro and Nintendo. Notable artists include Cy Twombly, Andy Warhol, Anish Kapoor, Yayoi Kusama, Gerhard Richter, Frida Kahlo, Ellsworth Kelly, Damien Hirst, Amedeo Modigliani, Pablo Picasso, Paul Signac, Norman Rockwell, Paul Cézanne, Banksy, Walt Disney, and Vincent van Gogh.

Midjourney’s dataset also includes artists who contributed art to the popular trading card game Magic the Gathering, including Hyan Tran, a six-year-old child and one-time art contributor who participated in a fundraiser for the Seattle Children’s Hospital in 2021.

Phil Foglio encouraged other artists to search the list to see if their names were included and to seek legal representation if they did not already have a lawyer.

Access to the Google file was soon restricted, but a version has been uploaded to the Internet Archive.

The list of 16,000 artists was included in an amended class-action complaint against Stability AI, Midjourney, and DeviantArt, part of 455 pages of supplementary evidence filed on November 29 of last year.

The amendment was filed after a judge in California federal court dismissed several claims brought forth by a group of artists against Midjourney and DeviantArt on October 30.

The class-action copyright lawsuit was first filed almost a year ago in the United States District Court of the Northern District of California.

Last September, the US Copyright Review Board decided that an image generated using Midjourney’s software could not be copyrighted because of how it was produced. Jason M. Allen’s image had garnered the $750 top prize in the digital category for art at the Colorado State Fair in 2022. The win went viral online, but prompted intense worry and anxiety among artists about the future of their careers.

Concern about artworks being scraped without permission and used to train AI image generators also prompted researchers from the University of Chicago to create a digital tool for artists to help “poison” massive image sets and destabilize text-to-image outputs.

At publication time, Midjourney did not respond to requests for comment from ARTnews.

New Data ‘Poisoning’ Tool Enables Artists To Fight Back Against Image Generating AI
https://www.artnews.com/art-news/news/new-data-poisoning-tool-enables-artists-to-fight-back-against-image-generating-ai-companies-1234684663/
Wed, 25 Oct 2023 21:20:53 +0000

Artists now have a new digital tool they can use in the event their work is scraped without permission by an AI training set.

The tool, called Nightshade, lets artists add invisible pixel-level changes to their art before uploading it online. These data samples “poison” the massive image sets used to train AI image generators such as DALL-E, Midjourney, and Stable Diffusion, destabilizing their outputs in chaotic and unexpected ways and disabling “its ability to generate useful images,” reports MIT Technology Review.

For example, poisoned data samples can manipulate AI image-generating models into incorrectly learning that images of fantasy art are examples of pointillism, or that images of Cubism are Japanese-style anime. The poisoned data is very difficult to remove: tech companies would have to painstakingly find and delete each corrupted sample.
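The core mechanism can be sketched in a few lines. This is a deliberately simplified illustration of the idea of a pixel-level poisoning perturbation, not Nightshade’s actual algorithm: Nightshade computes optimized perturbations (and pairs them with mismatched concepts), whereas the sketch below just adds a small, nearly invisible random change to an image’s pixels.

```python
import numpy as np

def poison_image(image: np.ndarray, epsilon: float = 2.0, seed: int = 0) -> np.ndarray:
    """Add a small pseudo-random perturbation to an 8-bit RGB image.

    Illustrative only: a real poisoning tool optimizes the perturbation
    so the model relearns the image under the wrong concept, rather
    than using random noise.
    """
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-epsilon, epsilon, size=image.shape)
    return np.clip(image.astype(np.float64) + noise, 0, 255).astype(np.uint8)

# A poisoned sample would pair these perturbed pixels with a mismatched
# caption, e.g. a fantasy-art image captioned as "pointillism".
original = np.full((4, 4, 3), 128, dtype=np.uint8)
poisoned = poison_image(original)
```

With `epsilon = 2.0`, no pixel value moves by more than 2 out of 255, which is why such changes are invisible to a viewer yet, at scale and with optimized rather than random noise, can corrupt what a model learns.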

“We assert that Nightshade can provide a powerful tool for content owners to protect their intellectual property against model trainers that disregard or ignore copyright notices, do-not-scrape/crawl directives, and opt-out lists,” the researchers from the University of Chicago wrote in their report, led by professor Ben Zhao. “Movie studios, book publishers, game producers and individual artists can use systems like Nightshade to provide a strong disincentive against unauthorized data training.”

Nightshade could tip the balance of power back from AI companies towards artists and become a powerful deterrent against disrespecting artists’ copyright and intellectual property, Zhao told MIT Technology Review, which first reported on the research.

According to the research report, the researchers tested Nightshade on Stable Diffusion’s latest models and on an AI model they trained themselves from scratch. After they fed Stable Diffusion just 50 poisoned images of cars and then prompted it to create images of the vehicles, the usability of the output dropped to 20 percent. After 300 poisoned image samples, an attacker using Nightshade could manipulate Stable Diffusion into generating images of cars that look like cows.

Prior to Nightshade, Zhao’s research team also received significant attention for Glaze, a tool that disrupts AI image generators’ ability to scrape images and mimic a specific artist’s personal style. Glaze works much like Nightshade, subtly changing the pixels of an image in ways that manipulate machine-learning models.

Outside of tools like Nightshade and Glaze, artists have gone to court several times over their concerns about generative AI image models, which have become enormously popular and generated significant revenue.

In January, artists sued Stability AI, Midjourney, and DeviantArt in a class-action lawsuit, arguing their copyrighted material and personal information were scraped, without consent or compensation, into the massive and popular LAION dataset. The lawsuit estimated that the collection of 5.6 billion images, scraped primarily from public websites, included 3.3 million from DeviantArt. In February, Getty Images sued Stability AI over photos used to train its Stable Diffusion image generator. In July, a class-action lawsuit was filed against Google over its AI products.

Tools like Nightshade and Glaze have given artists like Autumn Beverly the confidence to post work online again, after previously discovering it had been scraped without her consent into the LAION dataset.

“I’m just really grateful that we have a tool that can help return the power back to the artists for their own work,” Beverly told MIT Technology Review.

Judge Appears Likely to Dismiss AI Class Action Lawsuit by Artists
https://www.artnews.com/art-news/news/ai-class-action-lawsuit-dismissal-hearing-stabilityai-midjourney-deviantart-1234675071/
Fri, 21 Jul 2023 16:35:43 +0000

On Wednesday, Judge William Orrick of the US District Court for the Northern District of California heard oral arguments on the defendants’ motion to dismiss in Andersen v. Stability AI, a closely watched class-action complaint filed by multiple artists against the companies behind AI text-to-image generator tools: Stability AI, Midjourney, and DeviantArt.

During the hearing, the judge appeared to side with AI companies, thus making it likely that he would dismiss the case.

“I don’t think the claim regarding output images is plausible at the moment, because there’s no substantial similarity [between the images by the artists and images created by the AI image generators],” Orrick said during the hearing, which was publicly accessible over Zoom.

The issue is that copyright claims are usually brought against defendants who have made copies of pre-existing work or work that uses a large portion of pre-existing works, otherwise called derivative works. In other words, a one-to-one comparison typically needs to be made between two works to establish a copyright violation.

But, as explained in the most recent Art in America, the artists in the lawsuit are claiming a more complex kind of theft. They argue that AI companies’ decision to include their works in the dataset used to train their image generator models is a violation of their copyrights. Because their work was used to train the models, the artists argue, the models are constantly producing derivative works that violate their copyrights.

The defendants’ lawyers pointed out various issues with the artists’ arguments. To begin with, of the three named plaintiffs—Sarah Andersen, Karla Ortiz, and Kelly McKernan—only Andersen has registered some of her works with the U.S. Copyright Office. That Ortiz and McKernan don’t hold registered copyrights is a major obstacle to bringing valid copyright infringement claims. Meanwhile, it didn’t seem that Andersen was in a much better position, despite having sixteen of her works registered.

“Plaintiffs’ direct copyright infringement claim based on output images fails for the independent reason that Plaintiffs do not allege a single act of direct infringement, let alone any output that is substantially similar to Plaintiffs’ artwork,” Stability AI’s counsel wrote in their motion to dismiss. “Meanwhile, Plaintiffs’ allegations with respect to Andersen are limited to only 16 registered collections but even then, Plaintiffs do not identify which “Works” from Andersen’s collections Defendants allegedly infringed.”

Orrick was also skeptical of how much of an impact these three artists’ works could have had on the models, insofar as they are likely to produce derivatives, given that these models were trained on billions of images. While the judge has not yet filed his official decision, if he dismisses, the artists will have the opportunity to refile and address the weak aspects of the suit.

Orrick’s reaction to the suit appears to confirm legal and technology analysts’ assessment that current copyright law is not equipped to address the potential injustices engendered by AI.

An ongoing study by technologists working under the name Parrot Zone has tested image-generator models and found that the systems are capable of recognizing and reproducing the styles of thousands of artists. Across 4,000 studies, they found that these models can reproduce the styles of 3,000 artists, both living and dead, all without recreating any specific works. The issue is that, even as these models appear to credibly copy existing artists’ styles, “style” is not protected under existing copyright law, leaving a kind of loophole that AI image generators can exploit to their benefit.

[To learn more about this lawsuit, read “Artists Are Suing Artificial Intelligence Companies and the Lawsuit Could Upend Legal Precedents Around Art”]

DeviantArt’s Decision to Label AI Images Creates a Vicious Debate Among Artists and Users
https://www.artnews.com/art-news/news/deviantart-artficial-intelligence-ai-images-midjourney-stabilityai-art-1234674400/
Tue, 18 Jul 2023 15:25:31 +0000

San Francisco–based concept artist RJ Palmer got his big break on DeviantArt. After joining the image-sharing platform in 2005, the 33-year-old began posting realistic drawings of Pokémon there in the style of the Japanese video game series Monster Hunter. His work soon made the rounds online and, in 2016, the production designer for the movie Detective Pikachu reached out. Palmer has freelanced ever since for the entertainment industry, primarily for video games.

“DeviantArt for me was a pretty big deal,” Palmer told ARTnews. “I became one of their success stories.” But these days, Palmer describes the site as an “unusable mess” because of “AI crap”—work produced by AI-powered text-to-image generators like Midjourney, Stable Diffusion, and OpenAI’s DALL-E.

With more than 75 million users, DeviantArt is one of the latest—and largest—online spaces to grapple with AI-generated images. Last month, the site announced that it would require users to disclose whether works they submitted were created using AI tools; the announcement followed one by Google in May of a similar plan to label “AI-generated images,” just weeks before the European Union urged other Big Tech platforms to follow suit.

Both the EU’s and Google’s arguments for labeling AI-generated images have centered on misinformation. When an image of Pope Francis in a white puffer jacket went viral earlier this year, for example, many people didn’t immediately know it was faked. The threat of misinformation around news events or elections appears obvious. But another debate around AI labeling touches on the core of how we define art, who gets to make it, and who can profit from it. That conversation has enraged creators on both sides.

“[DeviantArt] can be like, ‘Oh, there’s suddenly all these people using our service, they’re uploading tons of images.’ It’s good for—at least they think it’s good for—the site’s health, even though it’s driving … actual longtime users and … regular artists away from the service,” Palmer said.

As a successful digital artist, Palmer has become a spokesman of sorts for artists on DeviantArt who object to AI-generated images on the platform.

The first issue, for Palmer and other digital artists, is how such generators were developed—by “stealing” other artists’ work, as he put it. Most programs were trained on the LAION dataset, a collection of 5.6 billion images scraped primarily from public websites. A class action lawsuit filed by artists in January against DeviantArt, Midjourney, and StabilityAI—the company behind Stable Diffusion—estimated that 3.3 million images in LAION were ripped from DeviantArt. (DeviantArt has said in public statements that it was never asked, nor did it give, permission for this.)

Artists like Palmer were already upset when those text-to-image AI generators launched early last year, but the conflict escalated in November when DeviantArt released its own version, DreamUp, which automatically included users’ creations in its dataset. Opting out required users to delete each individual image, a prohibitive burden considering that many, like Palmer, have thousands of works on the platform.

In this photo illustration, a woman’s silhouette holds a smartphone, with the DeviantArt logo in the background.

Less than 12 hours after DreamUp’s launch, DeviantArt announced that it was reversing the policy and would no longer keep users’ artworks in the dataset by default. But the reversal was largely moot: DreamUp was built on Stable Diffusion, and therefore on the LAION dataset, which already includes countless images by DeviantArt users.

Palmer’s criticism of DeviantArt is as much about the platform’s tone-deaf execution of AI as about AI itself. The day DreamUp launched, Palmer conducted a Twitter Spaces conversation with several DeviantArt executives. One question on which Palmer pressed the company: If DeviantArt was intent on creating an AI image generator, why not use an “ethically sourced” dataset?

CMO Liat Karpel Gurwicz told Palmer that users would upload AI images even if the platform banned them. By introducing its own generator, DeviantArt retained some control. “We cannot go and undo what these datasets and models have already done … ” Gurwicz said. “We could build our own model, that’s true … But doing that would take us probably a couple of years in reality.”

Despite DeviantArt’s insistence that it was protecting artists, DreamUp fueled a massive user backlash. Users spoke out to the media and launched online protests; message boards were rife with complaints, and some users said they would leave the site entirely.

Beyond the ethics of AI training datasets, Palmer’s issue with AI-generated images—and why he supports labeling—comes down to time and creativity. Users say DeviantArt’s homepage and search are now flooded with low-quality, AI-generated images that likely took seconds or minutes to create, many of which aren’t labeled, despite the site’s new requirement. By Palmer’s measure, AI has turned a vibrant artistic community into an image dump.

Palmer has also noticed other users imitating his work using AI (and not well, he said). If the training improves, he’s worried AI could replace him or other artists entirely. Unfortunately, artists can’t copyright a style, only specific artworks. And according to the US Copyright Office, AI creators can’t even do that.

This past March, the office released an official position that only “human-authored” works are eligible for copyright. Many artists applauded the decision, as it seemingly eliminated corporations’ ability to profit from AI-generated images and, therefore, offered some hope for the protection of artists’ livelihoods. AI labeling, then, would help establish what images can and cannot be legally protected.

But for Jason M. Allen, the 40-year-old founder of tabletop games studio Incarnate Games, arguments over copyright miss the point. Artists and AI similarly create artworks influenced and derived from an amalgamation of images, experiences, and art.

“So really, every experience that you have, every book that you read, every piece of art that you look at, is going through your neural network. And then you’re using that experience and your recollection of these ideas and combination of concepts to then express yourself using your choice of medium and technique,” Allen said of the artistic process. “And I can’t? Because it’s artificial intelligence?”

This past September, Allen won first place at the Colorado State Fair annual art competition with his AI-generated image Théâtre d’Opéra Spatial. By Allen’s estimation, he spent more than 80 hours experimenting with different prompts on Midjourney to generate the image. He also founded Art Incarnate, where he sells prints and other upcoming AI creations.

A selection of images on DeviantArt’s labeled AI-images page.

The US Copyright Office’s decision, Allen argues, ignores the creativity in using AI tools, and he’s since appealed in an effort to copyright his award-winning piece. For Allen, forced AI-labeling produces a similar bias against AI creators and creates multiple “tiers” of artists.

“I feel like it’s impossible to remove the human element from the work,” said Allen, who doesn’t consider himself an artist. “There’s always a user, there’s always a person, there’s always a creative force.”

The idea that AI generators are just another tool for artists has parallels to 19th-century debates about photography, which was seen, at the time, as a mechanistic reproducer of fact rather than a conduit to creativity. In an 1884 US Supreme Court case, a lithograph company that reproduced a photograph of Oscar Wilde argued the original could not be copyrighted because photographs lacked originality, being the result of a simple button push. In the decision, Justice Samuel Miller deemed it an “original work of art,” noting the creative decisions that went into the portrait’s production. Similar debates and court cases were waged in France, the United Kingdom, and elsewhere at the time.

Ahmed Elgammal, a professor of computer science at Rutgers University and the director of the Rutgers Art and Artificial Intelligence Lab, sees photography and AI similarly, as tools.

“I think it might be fair to think of labeling [images] as AI the same as labeling an image a photograph or labeling an image as digitally created,” Elgammal told ARTnews, adding that fake images circulating on social media are “really problematic.”

Even if platforms agree that all AI-generated works should be labeled, the challenge remains how to do so. User reporting has obvious problems. Google’s AI labeling tool, rolled out in May, asks text-to-image generators to label works at the point of production. The company said Midjourney and others would join in the coming months. Meanwhile, using an algorithm or automated detection system to determine whether something was created with AI could introduce more problems than it solves.

“A technological solution to a technological problem, that’s gonna lead to more technological problems,” Jennifer Gradecki, assistant professor of art and design at Northeastern University, told ARTnews.

Derek Curry, also an art and design professor at Northeastern, told ARTnews that algorithmic detection would likely end up with false positives and false negatives. That could have a major impact on artists, depending on how platforms and governments choose to approach AI copyright in the future.

The real problem with labeling AI, Gradecki and Curry believe, is that the lines are blurry. Almost all smartphone cameras and many digital cameras already use AI to enhance images with image stabilization or color optimization. Image-editing software also offers AI enhancement. How much AI processing is acceptable before an image is deemed AI-generated?

“Even if you require large companies that are under some sort of regulation to label AI-generated content, within that even there’s a question of what constitutes AI-generated content,” Curry said.

While it’s clear that AI image generators are not going anywhere, Elgammal, the computer science professor, thinks the threat to artists will blow over.

“Soon people will realize that they are losing a lot by using these tools, their identity is lost, control is lost,” Elgammal said. “And at the end, art created by these kinds of tools will look the same. For me, anything produced by Midjourney looks the same.”

Artists Are Suing Artificial Intelligence Companies and the Lawsuit Could Upend Legal Precedents Around Art
https://www.artnews.com/art-in-america/features/midjourney-ai-art-image-generators-lawsuit-1234665579/
Fri, 05 May 2023 14:37:34 +0000

Mike Winkelmann is used to being stolen from. Before he became Beeple, the world’s third most-expensive living artist with the $69.3 million sale of Everydays: The First 5000 Days in 2021, he was a run-of-the-mill digital artist, picking up freelance gigs from musicians and video game studios while building a social media following by posting his artwork incessantly.

Whereas fame and fortune in the art world come from restricting access to an elite few, making it as a digital creator is about giving away as much of yourself as possible. For free, all the time.

“My attitude’s always been, as soon as I post something on the internet, that’s out there,” Winkelmann said. “The internet is an organism. It just eats things and poops them out in new ways, and trying to police that is futile. People take my stuff and upload it and profit from it. They get all the engagements and clicks and whatnot. But whatever.”

Winkelmann leveraged his two million followers and became the face of NFTs. In the process, he became a blue-chip art star, with an eponymous art museum in South Carolina and pieces reportedly selling for close to $10 million to major museums elsewhere. That’s without an MFA, a gallery, or prior exhibitions.

“You can have [a contemporary] artist who is extremely well-selling and making a shitload of money, and the vast majority of people have never heard of this person,” he said. “Their artwork has no effect on the broader visual language of the time. And yet, because they’ve convinced the right few people, they can be successful. I think in the future, more people will come up like I did—by convincing a million normal people.”

In 2021 he might have been right, but more recently that path to art world fame is being threatened by a potent force: artificial intelligence. Last year, Midjourney and Stability AI turned the world of digital creators on its head when they released AI image generators to the public. Both now boast more than 10 million users. For digital artists, the technology represents lost jobs and stolen labor. The major image generators were trained by scraping billions of images from the internet, including countless works by digital artists who never gave their consent.

In the eyes of those artists, tech companies have unleashed a machine that scrambles human—and legal—definitions of forgery to such an extent that copyright may never be the same. And that has big implications for artists of all kinds.

Two side by side images of an animated woman.
Left: night scene with Kara, 2021, Sam Yang; Right: Samdoesarts v2: Model 8/8, Prompt: pretty blue-haired woman in a field of a cacti at night beneath vivid stars (wide angle), highly detailed.

In December, Canadian illustrator and content creator Sam Yang received a snide email from a stranger asking him to judge a sort of AI battle royale in which he could decide which custom artificial intelligence image generator best mimicked his own style. In the months since Stability AI released the Stable Diffusion generator, AI enthusiasts had rejiggered the tool to produce images in the style of specific artists; all they needed was a sample of a hundred or so images. Yang, who has more than three million followers across YouTube, Instagram, and Twitter, was an obvious target.

Netizens took hundreds of his drawings posted online to train the AI to pump out images in his style: girls with Disney-wide eyes, strawberry mouths, and sharp anime-esque chins. “I couldn’t believe it,” Yang said. “I kept thinking, This is really happening … and it’s happening to me.”

Yang trawled Reddit forums in an effort to understand how anyone could think it was OK to do this, and kept finding the same assertion: there was no need to contact artists for permission. AI companies had already scraped the digital archives of thousands of artists to train the image generators, the Redditors reasoned. Why couldn’t they?

Like many digital artists, Yang has been wrestling with this question for months. He doesn’t earn a living selling works in rarefied galleries, auction houses, and fairs, but instead by attracting followers and subscribers to his drawing tutorials. He doesn’t sell to collectors, unless you count the netizens who buy his T-shirts, posters, and other merchandise. It’s a precarious environment that has gotten increasingly treacherous.

“AI art seemed like something far down the line,” he said, “and then it wasn’t.”

Two side by side images of an animated woman.
Left: JH’s Samdoesarts: Model 5/8, Prompt: pretty blue-haired woman in a field of a cacti at night beneath vivid stars (wide angle), highly detailed. Right: Kara sees u, Kara unimpressed, 2021, Sam Yang

Yang never went to a lawyer, as the prospect of fighting an anonymous band of Redditors in court was overwhelming. But other digital artists aren’t standing down so easily. In January, several filed a class action lawsuit targeted at Stability AI, Midjourney, and the image-sharing platform DeviantArt.

Brooklyn-based illustrator Deb JJ Lee is one of those artists. By January, Lee was sick and tired of being overworked and undervalued. A month earlier, Lee had gone viral after posting a lowball offer from Epic Games to do illustration work for the company’s smash hit Fortnite, arguably the most popular video game in the world. Epic, which generated over $6 billion last year, offered $3,000 for an illustration and ownership of the copyright. For Lee, it was an all-too-familiar example of the indignities of working as a digital artist. Insult was added to injury when an AI enthusiast—who likely found out about Lee from the viral post—released a custom model based on Lee’s work.

“I’ve worked on developing my skills my whole life and they just took it and made it to zeros and ones,” Lee said. “Illustration rates haven’t kept up with inflation since the literal 1930s.”

Illustration rates have stagnated and, in some cases, shrunk since the ’80s, according to Tim O’Brien, a former president of the Society of Illustrators. The real money comes from selling usage rights, he said, especially to big clients in advertising. Lee continued, “I know freelancers who are at the top of their game that are broke, I’m talking [illustrators who do] New Yorker covers. And now this?”

Lee reached out to their community of artists and, together, they learned that the image generators, custom or not, were trained on the LAION dataset, a collection of 5.6 billion images scraped, without permission, from the internet. Almost every digital artist has images in LAION, given that DeviantArt and ArtStation were lifted wholesale, along with Getty Images and Pinterest.

The artists who filed suit claim that the use of these images is a brazen violation of intellectual property rights; Matthew Butterick, who specializes in AI and copyright, leads their legal team. (Getty Images is pursuing a similar lawsuit, having found 12 million of their images in LAION.) The outcome of the case could answer a legal question at the center of the internet: in a digital world built on sharing, are tech companies entitled to everything we post online?

The class action lawsuit is tricky. While it might seem obvious to claim copyright infringement, given that billions of copyrighted images were used to create the technology underlying image generators, the artists’ lawyers are attempting to apply existing legal standards made to protect and restrict human creators, not a borderline-science-fiction computing tool. To that end, the complaint describes a number of abuses: First, the AI training process, called diffusion, is suspect because it requires images to be copied and re-created as the model is tested. This alone, the lawyers argue, constitutes an unlicensed use of protected works.

From this understanding, the lawyers argue that image generators essentially call back to the dataset and mash together millions of bits of millions of images to create whatever image is requested, sometimes with the explicit instruction to recall the style of a particular artist. Butterick and his colleagues argue that the resulting product then is a derivative work, that is, a work not “significantly transformed” from its source material, a key standard in “fair use,” the legal doctrine underpinning much copyright law.

As of mid-April, when Art in America went to press, the courts had made no judgment in the case. But Butterick’s argument irks technologists who take issue with the suit’s description of image generators as complicated copy-paste tools.

“There seems to be this fundamental misunderstanding of what machine learning is,” Ryan Murdock, a developer who has been working on the technology since 2017, including for Adobe, said. “It’s true that you want to be able to recover information from the images and the dataset, but the whole point of machine learning is not to memorize or compress images but to learn higher-level general information about what an image is.”

Diffusion, the technology undergirding image generators, works by adding random noise, or static, to an image in the dataset, Murdock explained. The model then attempts to fill in the missing parts of the image using hints from a text caption that describes the work, and those captions sometimes refer to an artist’s name. The model’s efforts are then scored based on how accurately the model was able to fill in the blanks, leading it to contain some information associating style and artist. AI enthusiasts working under the name Parrot Zone have completed more than 4,000 studies testing how many artist names the model recognizes. The count is close to 3,000, from art historical figures like Wassily Kandinsky to popular digital artists like Greg Rutkowski.
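Murdock's description of the training loop can be sketched in a few lines of Python. Everything here is a stand-in: the flat random array plays the role of a training image, `dummy_model` replaces the neural network, and text captions are omitted entirely. It is a minimal illustration of the noise-and-score idea, not Midjourney's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "image": a flat 8x8 array of pixel values in [0, 1].
image = rng.random((8, 8))

def add_noise(x, alpha):
    """Forward diffusion step: blend the image with Gaussian noise.
    alpha in (0, 1]; smaller alpha means more noise ("static")."""
    noise = rng.standard_normal(x.shape)
    noisy = np.sqrt(alpha) * x + np.sqrt(1 - alpha) * noise
    return noisy, noise

# During training, the model sees the noisy image (plus a caption
# describing it, not modeled here) and is scored on how well it
# predicts the noise that was added -- "filling in the blanks".
noisy, true_noise = add_noise(image, alpha=0.5)

def dummy_model(noisy_img):
    # Stand-in for a neural network: guesses zero noise everywhere.
    return np.zeros_like(noisy_img)

predicted = dummy_model(noisy)
loss = np.mean((predicted - true_noise) ** 2)  # the training score
print(loss)
```

Over billions of such scored guesses, the network learns statistical associations between captions and imagery, which is how an artist's name in a caption ends up steering the style of a generated picture.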

The class action suit aims to protect human artists by asserting that, because an artist’s name is invoked in the text prompt, an AI work can be considered “derivative” even if the work produced is the result of pulling content from billions of images. In effect, the artists and their lawyers are trying to establish copyright over style, something that has never before been legally protected.

A side-by-side comparison of works by Lynthia Edwards (left) and Deborah Roberts (right) that was included as an exhibit in Roberts's complaint, filed in August 2022.

The most analogous recent copyright case involves fine artists debating just that question. Last fall, well-known collage artist Deborah Roberts sued artist Lynthia Edwards and her gallerist, Richard Beavers, accusing Edwards of imitating her work and thus confusing potential collectors and harming her market. Attorney Luke Nikas, who represents Edwards, recently filed a motion to dismiss the case, arguing that Roberts’s claim veered into style as opposed to the forgery of specific elements of her work.

“You have to give the court a metric to judge against,” Nikas said. “That means identifying specific creative choices, which are protected, and measuring that against the supposedly derivative work.”

Ironically, Nikas’s argument is likely to be the one used by Stability AI and Midjourney against the digital artists. Additionally, the very nature of the artists’ work as content creators makes assessing damages a tough job. As Nikas described, a big part of arguing copyright cases entails convincing a judge that the derivative artwork has meaningfully impacted the plaintiff’s market, such as the targeting of a specific collecting class.

In the end, it could be the history of human-made art that empowers an advanced computing tool: copyright does not protect artistic style, so that new generations of artists can learn from those who came before, or remix works to make something new. In 2013 a federal appeals court famously ruled that Richard Prince did not violate copyright in incorporating a French photographer's images into most of his “Canal Zone” paintings, to say nothing of the long history of appropriation art practiced by Andy Warhol, Barbara Kruger, and others. If humans can’t get in trouble for that, why should AI?

Three of 400 “Punks by Hanuka” created by a cyberpunk brand that provides a community around collaborations, alpha, and whitelists on AI projects.

In mid-March, the United States Copyright Office released a statement of policy on AI-generated works, ruling that components of a work made using AI were not eligible for copyright. This came as a relief to artists who feared that their most valuable asset—their usage rights—might be undermined by AI. But the decision also hinders the court’s ability to determine how artists are being hurt financially by AI image generators. Quantifying damages online is tricky.

Late last year, illustrator and graphic novelist Tomer Hanuka discovered that someone had created a custom model based on his work, and was selling an NFT collection titled “Punks by Hanuka” on the NFT marketplace OpenSea. But Hanuka had no idea whom to contact; such scenarios usually involve anonymous users who disappear as soon as trouble strikes.

“I can’t speak to what they did exactly because I don’t know how to reach them and I don’t know who they are,” Hanuka said. “They don’t have any contact or any leads on their page.” The hurt, he said, goes deeper than run-of-the-mill online theft. “You develop this language that can work with many different projects because you bring something from yourself into the equation, a piece of your soul that somehow finds an angle, an atmosphere. And then this [AI-generated art] comes along. It’s passable, it sells. It doesn’t just replace you but it also muddies what you’re trying to do, which is to make art, find beauty. It’s really the opposite of that.”

For those who benefited from that brief magical window when a creator could move more easily from internet to art world fame, new tools offer a certain convenience. With his new jet-setting life, visiting art fairs and museums around the world, Winkelmann has found a way to continue posting an online illustration a day, keeping his early fans happy by letting AI make the menial, time-consuming imagery in the background.

This is exactly what big tech promised AI would do: ease the creative burden that, relatively speaking, a creator might see as not all that creative. Besides, he points out, thieving companies are nothing new. “The idea of, like, Oh my god, a tech company has found a way to scrape data from us and profit from it––what are we talking about? That’s literally been the last 20 years,” he said. His advice to up-and-coming digital artists is to do what he did: use the system as much as possible, and lean in.

That’s all well and good for Winkelmann: He no longer lives in the precarious world of working digital artists. Beeple belongs to the art market now.
