Database of 16,000 Artists Used to Train Midjourney AI, Including 6-Year-Old Child, Garners Criticism https://www.artnews.com/art-news/news/midjourney-ai-artists-database-1234691955/ Tue, 02 Jan 2024 22:54:18 +0000 For many, a new year includes resolutions to do better and build better habits. For Midjourney, the start of 2024 meant having to deal with a circulating list of artists whose work the company used to train its generative artificial intelligence program.

During the New Year’s weekend, artists linked to a Google Sheet on the social media platforms X (formerly known as Twitter) and Bluesky, alleging that it showed how Midjourney developed a database of time periods, styles, genres, movements, mediums, techniques, and thousands of artists to train its AI text-to-image generator. Jon Lam, a senior storyboard artist at Riot Games, also posted several screenshots of Midjourney software developers discussing the creation of a database of artists to train its AI image generator to emulate.

https://x.com/JonLamArt/status/1741545927435784424?s=20

The 24-page list of artists’ names used by Midjourney as the training foundation for its AI image generator (Exhibit J) includes modern and contemporary blue-chip names, as well as commercially successful illustrators for companies like Hasbro and Nintendo. Notable artists include Cy Twombly, Andy Warhol, Anish Kapoor, Yayoi Kusama, Gerhard Richter, Frida Kahlo, Ellsworth Kelly, Damien Hirst, Amedeo Modigliani, Pablo Picasso, Paul Signac, Norman Rockwell, Paul Cézanne, Banksy, Walt Disney, and Vincent van Gogh.

Midjourney’s dataset also includes artists who contributed art to the popular trading card game Magic the Gathering, including Hyan Tran, a six-year-old child and one-time art contributor who participated in a fundraiser for the Seattle Children’s Hospital in 2021.

Cartoonist Phil Foglio encouraged other artists to search the list to see if their names were included and to seek legal representation if they did not already have a lawyer.

Access to the Google file was soon restricted, but a version has been uploaded to the Internet Archive.

The list of 16,000 artists was included in an amended class-action complaint targeting Stability AI, Midjourney, and DeviantArt, filed on November 29 last year along with 455 pages of supplementary evidence.

The amendment was filed after a judge in California federal court dismissed several claims brought forth by a group of artists against Midjourney and DeviantArt on October 30.

The class-action copyright lawsuit was first filed almost a year ago in the United States District Court for the Northern District of California.

Last September, the US Copyright Review Board decided that an image generated using Midjourney’s software could not be copyrighted because of how it was produced. Jason M. Allen’s image had garnered the $750 top prize in the digital art category at the Colorado State Fair in 2022. The win went viral online, but prompted intense worry and anxiety among artists about the future of their careers.

Concern about artworks being scraped without permission and used to train AI image generators also prompted researchers from the University of Chicago to create a digital tool for artists to help “poison” massive image sets and destabilize text-to-image outputs.

At publication time, Midjourney did not respond to requests for comment from ARTnews.

US Judge Rules AI-Generated Art Not Protected by Copyright Law https://www.artnews.com/art-news/news/us-judge-rules-ai-generated-art-is-not-protected-by-copyright-law-1234677410/ Mon, 21 Aug 2023 19:12:28 +0000 https://www.artnews.com/?p=1234677410 A federal judge in Washington, D.C., ruled Friday that artwork generated by artificial intelligence is not eligible for copyright protection because it lacks “human involvement,” reaffirming a March decision of the United States Copyright Office.

The ruling is the first in the US to establish boundaries on legal protections for AI-generated art, whose immense popularity has opened a nebulous legal frontier dictated—for better or worse—by assessments of aesthetics and originality.

Judge Beryl A. Howell of the US District Court for the District of Columbia agreed with the US Copyright Office’s decision to deny copyright protection to an artwork created by computer scientist Stephen Thaler using “Creativity Machine,” an AI system of his own design. Howell wrote in her opinion that “courts have uniformly declined to recognize copyright in works created absent any human involvement.”

Thaler, the founder of Imagination Engines, an artificial neural network technology company, sued the office in June 2022 after its denial of his copyright application for A Recent Entrance to Paradise, a two-dimensional image of train tracks stretching beneath a verdant stone arch. Thaler said the work “was autonomously created by a computer algorithm running on a machine,” according to court documents.

The copyright office found this description at odds with the basic tenets of copyright law, which suggest that the work must be the product of a human mind. “Thaler must either provide evidence that the Work is the product of human authorship or convince the Office to depart from a century of copyright jurisprudence. He has done neither,” wrote the review board in its initial rejection.

“Undoubtedly, we are approaching new frontiers in copyright as artists put AI in their toolbox to be used in the generation of new visual and other artistic works,” the judge said, adding that the accessibility of generative AI will “prompt challenging questions” about what degree of human involvement is needed to qualify such artwork for copyright protections.  

Howell concluded, however, that this case “is not nearly so complex” because Thaler stated in his copyright application that he was not directly involved in the generation of the work.

The rise of AI-generative platforms such as OpenAI’s ChatGPT and DALL-E, and Midjourney, has exacerbated legal headaches around appropriation art—a tradition in which one artist ostentatiously repurposes another’s creation. As Richard Prince and the estate of Andy Warhol can attest, the legal battles prompted by this work often find unsatisfying conclusions, with judges assuming the role of art critic. Where it was once artist versus artist, courts must now contend with the diffusion of millions of digital artworks by generative platforms.

Thaler’s attorney, Ryan Abbott, of Brown Neri Smith & Khan LLP, told Bloomberg that he will appeal Howell’s judgment. “We respectfully disagree with the court’s interpretation of the Copyright Act,” Abbott said.

In his motion, Thaler argued that this matter transcended quibbles between individual artists. Providing copyright protections to such artworks, he said, would inspire creativity, ultimately placing it in line with the intentions of copyright law.

“Denying copyright to AI-created works would thus go against the well-worn principle that ‘[c]opyright protection extends to all original works of authorship fixed in any tangible medium of expression,’” Thaler said.

US Copyright Office: AI Generated Works Are Not Eligible for Copyright https://www.artnews.com/art-news/news/ai-generator-art-text-us-copyright-policy-1234661683/ Tue, 21 Mar 2023 15:48:01 +0000 https://www.artnews.com/?p=1234661683 The US Copyright Office released a statement of policy last week concerning the copyrighting of works made with artificial intelligence.

“These technologies, often described as ‘generative AI,’ raise questions about whether the material they produce is protected by copyright, whether works consisting of both human-authored and AI-generated material may be registered, and what information should be provided to the Office by applicants seeking to register them,” the statement of policy read. “These are no longer hypothetical questions.”

Since companies like OpenAI and StabilityAI began releasing AI-enabled text and image generators in late 2022 and early this year, requests to copyright works with AI have increased dramatically. At first, the Copyright Office was not quite prepared to parse whether or not these works were eligible for copyright, leading to a flurry of mixed messages.

Last year, author Kris Kashtanova claimed to be the first person to have been granted copyright for an AI-created work when her request to register her comic book Zarya of the Dawn, which was produced using AI-generated images, was approved. The Copyright Office then put its decision under review and requested additional information when it was discovered that the images had been made using popular AI generator Midjourney.

Then, after reviewing its decision late last month, the Copyright Office cancelled its original certification and issued a new one. The elements that Kashtanova created—that is, the writing and other original elements—would be protected. The images would not, as only human-made creations are eligible for copyright.

This last point, that copyright only protects creations made by humans, will be the guiding principle for future judgments about the registration of works. When evaluating a work submitted for registration, copyright officials will be tasked with judging whether the original choices executed in a work were produced by a human mind or produced mechanically. Some cases are simpler than others. For example, entering a text prompt into an image generator does not qualify as an act of authorship, as the Office likens the prompt to “instructions for a commissioned artist.” While that case appears clear, others are likely to require more thought.

“A human may select or arrange AI-generated material in a sufficiently creative way that ‘the resulting work as a whole constitutes an original work of authorship.’ Or an artist may modify material originally generated by AI technology to such a degree that the modifications meet the standard for copyright protection,” the statement of policy read.

The Office continued to say that, in these cases, copyright will only protect aspects of the work that were judged to have been made by the authoring human, resulting in partial protections of entire works, as in Kashtanova’s case.

According to the statement of policy, applicants who submit their works for registration from now on must declare whether AI was used in any part of the work, and those who have submitted applications lacking this declaration must amend them.

Google to Roll Out App for AI-Generated Artwork, Complicating Copyright Worries https://www.artnews.com/art-news/news/google-ai-generated-art-tool-app-1234645341/ Thu, 03 Nov 2022 16:24:29 +0000 https://www.artnews.com/?p=1234645341 A new Google feature will let consumers use artificial intelligence to bring their fantastical creations to (digital) life by just typing a few words. 

The app, which Bloomberg reported Thursday is currently under development, will have two functions: users can construct cities with its “City Dreamer” function, or customize a family-friendly cartoon monster with its “Wobble” feature.

The tools will be available through Google’s AI Test Kitchen app, Douglas Eck, a lead scientist at Google, said at the company’s AI@ event in New York on Wednesday. The release date for the new app has not yet been announced.

The features will use AI imaging technologies to generate hyper-specific images from even short text descriptions.

Google’s AI tool will be far from the first available to the public. This year, OpenAI’s DALL-E, Meta AI’s Make-A-Scene, Stability AI’s Stable Diffusion, and Midjourney have all either launched to the public or are in some state of semi-public beta testing.

Social media has become flooded with the chaotic amalgamations produced by these platforms and users’ suggestions—imagine Big Bird robbing a bank—but critics have raised legitimate concerns over how generative AI can spread misinformation or infringe on artists’ copyright.

In September, Getty Images banned AI-generated art, including images created by DALL-E and Make-A-Scene, from its platform. According to Getty, the decision was based on concerns about how image generators scrape publicly available content from across the internet when producing new imagery.

The sampled imagery is often copyrighted, coming from news outlets, from stock photo websites like Getty, or from original artworks, without credit or compensation to the content creators. It’s still unclear whether that usage exceeds the boundaries of fair use as established by U.S. copyright law. Commonly in such court cases, the judge’s decision hinges on whether the new work is sufficiently “transformative.”

“Generative AI models are powerful, there’s no doubt about that,” Eck said at a press conference during the event. “But we also have to acknowledge the real risks that this technology can pose if we don’t take great care, which is why we’ve been slow to release them.”

Getty Images Bans AI-Generated Images Due To Copyright Worries https://www.artnews.com/art-news/news/getty-images-bans-ai-generated-images-due-to-copyright-1234640201/ Thu, 22 Sep 2022 15:41:10 +0000 https://www.artnews.com/?p=1234640201 Getty Images announced on Wednesday that it is banning AI-generated art, including images produced by OpenAI’s DALL-E and Meta AI’s Make-A-Scene, from its platform. The decision, according to Getty, stems from concerns that copyright laws are currently unsettled with regard to imagery created by those tools.

“There are real concerns with respect to the copyright of outputs from these models and unaddressed rights issues with respect to the imagery, the image metadata and those individuals contained within the imagery,” Getty Images CEO Craig Peters told the Verge. “We are being proactive to the benefit of our customers.”

The concern over copyright is not unfounded. AI image generators scrape publicly available pictures from across the web to train their algorithms and to sample them when producing new imagery. Those images are often copyrighted ones that come from news sites or even stock photo sites like Getty. As Gizmodo noted, tech blogger Andy Baio analyzed the image set used by Stable Diffusion, an AI tool similar to DALL-E produced by Stability AI, and found that 35,000 of the 12 million images were scraped from stock photo sites.

Whether that usage violates U.S. copyright law is an open question for the courts. Typically, to use copyrighted material, a creator has to demonstrate that the copying was done for a “transformative” purpose, which generally falls into commentary, criticism, or parody of the material in question, as noted by the Stanford Libraries’ primer on the fair use doctrine. The question of whether images produced by DALL-E and other AI tools serve a “transformative” purpose is, at best, murky due to the automated nature of their production.

Many in the arts and the AI space have noted that it will likely take new legislation to settle the question.

“On the business side, we need some clarity around copyright before using AI-generated work instead of work by a human artist,” Jason Juan, an art director and artist with clients including Disney and Warner Bros., told Forbes last week. “The problem is, the current copyright law is outdated and is not keeping up with the technology.”

Similarly, Daniela Braga, who is on the White House Task Force for AI Policy, said a “legislative solution” is necessary.

“If these models have been trained on the styles of living artists without licensing that work, there are copyright implications,” Braga told Forbes.

In the meantime, Getty has said it is using the Coalition for Content Provenance and Authenticity, an industry-created development project, to filter out AI-generated content. It’s unclear whether such a tool will be effective.

GANs and NFTs https://www.artnews.com/list/art-in-america/features/gans-and-nfts-1234594335/ Fri, 28 May 2021 18:33:20 +0000 https://www.artnews.com/?post_type=pmc_list&p=1234594335 When Christie’s hosted its first Art + Tech Summit in 2018, the topic was the blockchain. The second edition, in June 2019, focused on artificial intelligence. Blockchain and AI are two big, buzzy topics, and they have intersected in unexpected ways, especially during this year’s crypto art boom. Artists whose work uses generative adversarial networks (GANs)—algorithms that pit computers against each other to produce original machine-made output approximating the human-made training data—have turned to crypto platforms not only to sell their work, but also to explore ways of critically and creatively engaging the blockchain.

People who make creative work with AI tend to be self-taught, as artists or engineers or both. They’re drawn to new technologies and ideas taking shape at the margins of culture. There’s a provocative friction between the figure of the tinkering outsider and the reputations of AI and blockchains, in the popular imagination, as rapidly growing forms of technological infrastructure with massive resources invested in them, behemoths that are transforming the shape of everyday life by digitizing more and more of it. Artists who sell their work as NFTs have been criticized for contributing to an ecologically destructive, toxically libertarian culture; artists who make work with AI have drawn fire for normalizing the technologies that enable corporate surveillance and predictive policing. The artists who take up these tools despite the problems associated with them aren’t utopians. However, they see firsthand the reality that new technologies are not monoliths but evolving systems, rife with flaws and potentials.

How Does a Human Critique Art Made by AI? https://www.artnews.com/art-in-america/features/creative-ai-art-criticism-1202686003/ Wed, 06 May 2020 14:37:35 +0000 https://www.artnews.com/?p=1202686003 In 1962 something unusual happened at Bell Labs in Murray Hill, New Jersey. After a long day tending the room-size IBM 7090, an engineer named A. Michael Noll rushed down the hall with a printout in hand. The machine’s plotter had spit out an arrangement of lines. The abstract design on the sheet could have passed for a work in the print section at a museum of modern art. Noll had decided to explore the machine’s creative potential by having it make randomized patterns. Noll’s breakthrough would eventually be called “computer art” by both fellow programmers and cultural historians. His memo announcing the creation to the Bell Labs staff, however, was more measured: “Rather than risk an unintentional debate at this time on whether the computer-produced designs are truly art or not,” he wrote, “the results of the machine’s endeavors will simply be called ‘Patterns.’”¹

The idea that machines can mimic certain aspects of human reasoning—thus becoming “artificially intelligent”—stretches back to the birth of information theory in the 1940s. But for decades after the first experiments in digital “thinking,” little to no progress was made. Prognostications about a cybernetic future stalled and funding dried up, in what researchers referred to as an “AI winter.” But recently, optimism has returned to the field. In the past several years, machine learning algorithms—powered by a deluge of images from social media—have advanced enough that artificial neural networks produce images that some say exhibit creativity in their own right.

AI was already capable of automating rote, mechanical tasks. But its newfound ability to generate images has inspired a resurgence in the debate about machine creativity. It also prompts the question of what art critics can say about work made by computers. Criticism plays a key function in the humanist tradition as a mediator between institutions and the public. Not only do critics tie cultural production to its historical and socioeconomic context, but they also highlight art’s role in cultivating the democratic values of empathy and egalitarianism. Three recent books offer possible paths for criticism that addresses creative AI. But all of them have serious shortcomings.

A. Michael Noll: Gaussian-Quadratic, 1963, gelatin silver print, 30 inches square.

Noll’s “patterns” are the earliest examples cited by Arthur I. Miller in his book The Artist in the Machine: The World of AI-Powered Creativity (MIT Press, 2019). A historian of science, Miller grounds his writing on AI in his longtime interest in creativity and genius. “The world of intellect is not a level playing field,” he writes. “No matter how diligently we paint, practice music, ponder science, or write literature, we will never be Picasso, Bach, Einstein, or Shakespeare.” Miller begins from the premise that the brain is, like the computer, an information processor. Computers and artists alike ingest data, apply rules, and adapt them to create something novel. The genius brain makes connections better and faster than the average one, like a computer with more processing power. If creativity is, as Miller defines it, the production of new knowledge from already existing knowledge, then the computer may someday have the capacity to match and exceed the human brain.

Miller’s book pays special attention to Alexander Mordvintsev’s DeepDream, a 2015 breakthrough for machine creativity. While working for Google as a computer vision engineer, Mordvintsev interrupted a deep neural network trained on images of dogs and cats. His intervention rendered a momentary visual trace of the network’s attempt to spot patterns in data. The result was an image of a technicolored chimera, half-dog and half-cat, that spliced together parts of the reference image (a kitten and a beagle perched on adjacent tree stumps) with thousands of other images of similar subjects. It was dubbed “Nightmare Beast” after it went viral on the internet.

Miller discusses a host of other engineer-artists who have helped advance the growing phenomenon of art made with neural networks. In Portraits of Imaginary People (2017), Michael Tyka drew upon thousands of portrait photos from Flickr to create images of people who never existed. Leon Gatys’s style-transfer work employs a neural network to make found images resemble paintings by Rembrandt, Cézanne, and Van Gogh. Miller doesn’t fully explain how these images exhibit creativity. The computer makes decisions about colors and brushstrokes, but its transformations are more of a novelty act than the creation of original work.
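At a technical level, Gatys’s style transfer rests on a simple statistic: the “style” of an image is summarized by the Gram matrix of a convolutional network’s feature maps, and the generated image is optimized until its Gram matrices match those of the reference painting. The sketch below illustrates only that core computation; the array shapes are arbitrary and random arrays stand in for real CNN activations, so this is a toy illustration of the idea rather than the published method.

```python
import numpy as np

rng = np.random.default_rng(1)

def gram(features):
    """Gram matrix of a (channels, height*width) feature map.

    In Gatys-style transfer, correlations between channel activations
    summarize an image's "style" independent of where things appear.
    """
    c, hw = features.shape
    return features @ features.T / hw

# Random arrays stand in for CNN activations of the style image and
# the image being optimized (shapes chosen arbitrarily for the demo).
style_feats = rng.normal(size=(64, 32 * 32))
target_feats = rng.normal(size=(64, 32 * 32))

# Style loss: squared difference between Gram matrices. In the real
# method this is summed over several network layers and minimized by
# gradient descent on the generated image's pixels.
style_loss = np.sum((gram(target_feats) - gram(style_feats)) ** 2) / 64**2
```

Because the Gram matrix discards spatial arrangement, matching it reproduces a painting’s texture and palette without copying its composition, which is why the Neckar River houses keep their layout while taking on Munch’s or van Gogh’s surface.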

Alexander Mordvintsev’s DeepDream “Nightmare Beast,” 2015.

Almost all of Miller’s triumphant examples employ unsupervised learning, in which a neural network generates images by itself. A more specific subset of this process used by artists is known as a generative adversarial network (GAN), which involves two dueling neural networks: the generator, which transforms random noise into entirely new images, and the discriminator, which is shown both real images and the generator’s output. The generator sends each image it makes to the discriminator, which judges whether the image is real or generated; each network trains on the other’s mistakes until the generator’s images become difficult to tell apart from the real ones.
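The adversarial loop can be sketched end to end on toy data. The snippet below is a minimal illustration, not any artist’s actual pipeline: one-dimensional Gaussian samples stand in for “real images,” and both networks are reduced to single linear layers so the generator-versus-discriminator dynamic is visible in a few lines.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "real" data: 1-D samples from N(4, 0.5) stand in for real images.
def real_batch(n):
    return rng.normal(4.0, 0.5, size=(n, 1))

# Generator: a single linear layer mapping noise z to a sample.
G_w, G_b = rng.normal(size=(1, 1)), np.zeros(1)
# Discriminator: a linear layer + sigmoid giving P(input is real).
D_w, D_b = rng.normal(size=(1, 1)), np.zeros(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def generate(z):
    return z @ G_w + G_b

def discriminate(x):
    return sigmoid(x @ D_w + D_b)

lr, n = 0.05, 64
for step in range(2000):
    # Discriminator step: push scores for real data toward 1, fakes toward 0.
    z = rng.normal(size=(n, 1))
    fake, real = generate(z), real_batch(n)
    g_real = discriminate(real) - 1.0   # dBCE/dlogit, real labeled 1
    g_fake = discriminate(fake)         # dBCE/dlogit, fake labeled 0
    D_w -= lr * (real.T @ g_real + fake.T @ g_fake) / n
    D_b -= lr * (g_real.mean() + g_fake.mean())

    # Generator step: adjust G so the discriminator scores fakes as real.
    z = rng.normal(size=(n, 1))
    g = (discriminate(generate(z)) - 1.0) @ D_w.T  # backprop through D
    G_w -= lr * (z.T @ g) / n
    G_b -= lr * g.mean()
```

Real GAN art replaces the linear layers with deep convolutional networks and the 1-D samples with image tensors, but the alternation of the two updates is the same.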

GAN art engages only in combinatorial creativity, meaning it synthesizes existing image data to make something slightly new, based on its imperfect ability to correlate an image to assigned metadata. Mordvintsev fed a neural network images of cats and dogs, then interrupted it midstream to watch it try to build a cat or a dog itself from scraps. The GAN is cognitively inferior to a newborn child. It needs to be fed source material specifically and carefully. One of AI-art criticism’s first tasks will be to show the public that supposedly “smart” technologies are, in practice, quite dumb. Just because GANs employ unsupervised learning—meaning there is no predefined output—does not make them creative.

 

Artist Casey Reas details the many steps involved in the production of GAN art in Making Pictures with Generative Adversarial Networks (Anteism Books, 2020). One of the developers of the generative art software Processing, Reas provides a text that is part “how to,” part reflection on the aesthetic capacities of the method. It is the best account so far of how a critic might engage with the end product of something on the order of DeepDream.

To start, Reas is adamant that with each step in the highly complex process of working with GANs, the artist makes critical interventions. First, one must select and upload a large set of images. Reas compares this process to a photographer’s choosing subjects, staging, and equipment. Second, one makes adjustments to interrupt or influence the neural network’s output. These choices are often speculative, and the end result is always a matter of testing and learning. Reas describes it as coaxing images from the “latent space.” No one is sure exactly how the networks’ layers learn, and thus, no one is quite sure why the networks modify images the way they do.
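The “latent space” Reas describes has a concrete form: a trained generator maps vectors of Gaussian noise to images, and artists coax pictures out by moving between latent vectors. A common exploration technique is spherical interpolation between two latents; the sketch below shows only that arithmetic, with the 512-dimensional latent size chosen arbitrarily and the generator itself assumed to exist elsewhere.

```python
import numpy as np

def slerp(z0, z1, t):
    """Spherical interpolation between latent vectors z0 and z1.

    High-dimensional Gaussian latents concentrate near a sphere, so
    interpolating along the arc (rather than a straight line) keeps
    intermediate vectors at a norm the generator has seen in training.
    """
    z0n = z0 / np.linalg.norm(z0)
    z1n = z1 / np.linalg.norm(z1)
    omega = np.arccos(np.clip(np.dot(z0n, z1n), -1.0, 1.0))
    if omega < 1e-8:  # nearly identical directions: fall back to lerp
        return (1 - t) * z0 + t * z1
    return (np.sin((1 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

rng = np.random.default_rng(0)
z_start, z_end = rng.normal(size=512), rng.normal(size=512)

# Nine evenly spaced latents tracing an arc between the two endpoints;
# feeding each to a trained generator would yield a smooth image morph.
frames = [slerp(z_start, z_end, t) for t in np.linspace(0, 1, 9)]
```

Walking such paths and keeping the frames that happen to look compelling is one concrete version of the speculative test-and-learn process Reas describes.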

An image of houses along the Neckar River in Tübingen, Germany by Leon Gatys in the style of Edvard Munch, created using a neural network.

Reas gives a sober assessment of the role of the artist vis-à-vis these nascent tools. He is neither utopian about superhuman powers nor dismissive of AI-based creative output, as limited and contrived as it may be. Instead, he lays out the points of entry for criticism. Anywhere the artist directly intervenes, the critic may follow. A critic writing about DeepDream could, for example, point to the narrowness of Mordvintsev’s selection of images of cats and dogs, and how the end product suffered as a result. Or the critic could comb through the various versions produced by the neural network and judge the relative merits of each composition. Both of these approaches treat the artist-engineer as a curator, as someone whose primary work is the selection of images. This is because the true moment of machine creativity is illegible to humans. If it weren’t, then there would be nothing artificial about its intelligence.

An image of houses along the Neckar River in Tübingen, Germany by Leon Gatys in the style of Vincent van Gogh, created using a neural network.

Proponents of AI art are quick to defend the new genre with comparisons to photography. “Generating an image with a GAN can be thought of as the start of another process, in the same way that capturing an image with a camera is often only one step in the larger system necessary for making a picture,” writes Reas. When artists use a camera, it is clear what they intend to create. Accidents happen, to be sure. But no previous creative process has ceded more control to mathematics than working with AI.

An image of houses along the Neckar River in Tübingen, Germany by Leon Gatys in the style of J. M. W. Turner, created using a neural network.

Media theorist Lev Manovich goes beyond artistic use of machine learning, taking a broader view of AI’s impact on culture. In AI Aesthetics (Strelka, 2018), he focuses on the AI built into apps like Instagram—the commercialized creative AI that he says could influence “the imaginations of billions.” Manovich usefully explores the negative implications of the “gradual automation (semi or full) of aesthetic decisions.” This leads him to touch eventually on a possible future in which AI becomes a kind of cultural theorist. Manovich cites a common case: a computer that is fed multiple examples of artworks can use machine learning to detect basic attributes that belong to certain styles and genres. After this, the system can accurately identify the style or genre of a new image fed into the network. While Manovich suggests that this is, on the surface, one of the functions of a cultural critic or historian, he also points out its limitations. The GAN that identifies a style or genre is just as much a black box as the GAN that produces new art. While the “cultural analytics” that Manovich predicts may empower machine learning to do the descriptive work of historiography, telling us the what, where, who, and when on a scale we have never before experienced, it is unable to access the critic’s ability to tell us about the how and why. As these semi-automated aesthetic categorizations become more common, there’s a risk of impoverishing critical inquiry by erasing human reference points.

 

The current discourse on art and AI offers two ways to situate the artist’s agency. Miller represents the computationalist school of thought, where all the faculties of human genius will, in due time, be usurped by the powers of machine learning. Reas, however, places the human artist at the center, explaining how AI is actually a set of discrete tools used by the artist. Most people working with these new tools fall somewhere in between, as evidenced by Manovich’s hopes for a branch of cultural analytics that combines the data scientist’s precision with the humanist’s theoretical analysis. But there is little doubt that with the introduction of these machine learning abilities—whether they be the superintelligent computers of Miller’s fantasy or the extensions of the artist’s tool kit described by Reas—the critical observer will need a new vocabulary. The math behind all these tools challenges the critic’s understanding and application of context, meaning, intent, and influence. The more advanced the GANs become, the less there is to say.

Mike Tyka: A fleeting memory, 2017, archival print.

But there could be a more expansive and generative future for critique. There is a growing movement in Silicon Valley to integrate neural network capabilities into all parts of life, from loan application assessment to predictive policing. The designs for this future are being implemented by massive technology platforms that already act like states: mapping the world, working on defense projects, impacting the outcome of elections. Against this background, the computationalist notion that the human brain is a suboptimal computer has enormous political and ethical weight. The critical discourse around art and AI takes on urgent political significance. Critics cannot simply ignore AI creativity as they might have ignored momentary fads in the past. Art institutions are now eagerly partnering with corporations that conduct research in machine learning, seeing them as new sources of funding and audience engagement. Artists today can employ the same tools used to drive autonomous vehicles, surveil and track immigrants, or calculate the probability that a person will commit a crime. In this rapidly changing climate, criticism of art made with AI has a responsibility to engage in a structural analysis that identifies the position of GANs in the system of power relations known as platform capitalism.

Mike Tyka: khaledbakri7, 2017, archival print.

Seeing, Naming, Knowing (Brooklyn Rail, 2019), by critic and curator Nora N. Khan, is the strongest example of criticism that addresses this new algorithmic regime. Khan takes law enforcement’s deployment of racially biased automated surveillance tools as a starting point. She then discusses the work of Trevor Paglen, Ian Cheng, and Sondra Perry, identifying automated image production as a profound transition from passive to active camera vision. Her analysis reveals a drive toward a machine-enabled omniscience. Large-scale machine learning marches inexorably toward optimization, always seeking more data in an attempt to smooth out mistakes in computational judgment. Worst of all, these highly orchestrated efforts present themselves as neutral, objective, and impervious to critique. “Seeing is always an ethical act,” Khan argues. “We have a deep responsibility for understanding how our interpretation of information before us, physical or digital, produces the world.”

AI creativity, however tenuous, is an early indication that art has now become inextricable from the same platforms that run the information economy and all its newly potent political apparatuses. Criticism of AI art must acknowledge this reality, and explore how it shapes the art of our time.

1 A. Michael Noll, “Patterns by 7090,” Bell Labs Inc. memo, Aug. 28, 1962, noll.uscannenberg.org.

 

This article appears under the title “Critical Winter” in the April 2020 issue, pp. 26–29.

How Does AI Change the Way We Perceive Art?
https://www.artnews.com/art-in-america/features/ian-cheng-simulation-ai-art-1202675838/
Tue, 21 Jan 2020

Images Made by Machines, for Machines
Last fall Christie’s sold a computer-generated painting titled Portrait of Edmond de Belamy, from la Famille de Belamy (2018) for $432,500. (Early estimates had peaked around $10,000.) A blurred portrait of a chubby man in a frock coat, the work, per Christie’s, was created “by an artificial intelligence, an algorithm,” the algebraic formula for which, “with its many parenthesis,” was written out in the painting’s lower right-hand corner like a signature.1 As media outlets sought out experts to parse the freak sale, an AI-in-art community coalesced. Spokespeople for this ad hoc group of artists and technologists quickly denounced the painting and its algorithm, credited to the Parisian collective Obvious.

The collective had made extensive use of code by nineteen-year-old wunderkind Robbie Barrat, who rebuked Obvious as, well, obvious. “No one in the AI and art sphere really considers them to be artists,” Barrat told Artnet. “They’re more like marketers.”2 Similarly, fellow AI art luminary Mario Klingemann said, “It’s horrible art from an aesthetic standpoint. You have to put some work into it to call it art.”3

Many popular accounts of AI hinge on the possibility, whether longed for or feared, of a self-directed artificial general intelligence able to perform any human intellectual act. Part of the backlash to the Edmond de Belamy sale was in response to the suggestion that an “artificial intelligence managed to create art,” as a since-disavowed press release stated.4 Like most of what currently passes as AI, Obvious’s program is less self-actualized consciousness than relentlessly honed unitasker. Drawing from fifteen thousand portraits painted over the last five centuries, the AI analyzed the data set’s patterns until it produced a number of criteria defining said portraits. The AI then set two algorithms against each other: a generator that produced images based on the criteria, and a discriminator that decided whether those newly generated images met the standard.
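The generator-versus-discriminator setup described above is the core of a generative adversarial network (GAN). A minimal sketch in Python gives the flavor of the adversarial loop; all dimensions, data, and hyperparameters here are invented for illustration and bear no relation to Obvious’s actual system, which trained on portrait images rather than toy vectors.

```python
# Toy sketch of a GAN training loop: a generator learns to produce
# "portraits" (here, 8-dim vectors) that a discriminator can no longer
# tell apart from the real training data. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: vectors standing in for the corpus of training portraits.
real = rng.normal(loc=4.0, scale=0.5, size=(64, 8))

G_W = rng.normal(scale=0.1, size=(8, 8))  # generator: noise -> candidate image
D_w = rng.normal(scale=0.1, size=8)       # discriminator: image -> realness score
lr = 0.05

for step in range(300):
    noise = rng.normal(size=(64, 8))
    fake = noise @ G_W                    # generator proposes images

    # Discriminator step: push p(real data) toward 1 and p(fakes) toward 0.
    p_real = sigmoid(real @ D_w)
    p_fake = sigmoid(fake @ D_w)
    D_w += lr * (real.T @ (1 - p_real) - fake.T @ p_fake) / 64

    # Generator step: adjust weights so its fakes earn "real" verdicts.
    p_fake = sigmoid(fake @ D_w)
    G_W += lr * noise.T @ ((1 - p_fake)[:, None] * D_w[None, :]) / 64

# How often does the trained generator now fool the discriminator?
fooled = sigmoid((rng.normal(size=(64, 8)) @ G_W) @ D_w).mean()
print(f"mean p(fake judged real): {fooled:.2f}")
```

The two updates pull in opposite directions, which is the “set against each other” dynamic the article describes: training ends not when either side wins but when the generator’s output meets the discriminator’s learned criteria.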

Obvious: Portrait of Edmond de Belamy, from la Famille de Belamy, 2018, ink on canvas, 27½ inches square.

It seems predestined that someone would use available technology to render uncanny, historic-looking portraiture, signifying little else but the fact that a computer made something as arbitrarily old-looking and conservatively humanist as Edmond de Belamy. But the machinic consumption and production of images can be much more sophisticated and harrowing: ICE mines information from driver’s licenses with facial recognition technology; deepfakes are used for pornography and political trickery. Moreover, AI can create images that are far stranger than a depiction of a hunched, portly white man seen through Photoshop’s oil paint filter.

For his series “Adversarially Evolved Hallucination” (2017) Trevor Paglen trained a generator-discriminator AI to produce visual representations of allegories and concepts, ranging from symbols from Freud’s Interpretation of Dreams to monsters like vampires and zombies that have been historical emblems of capitalism. As is the case with much AI art, the “Hallucinations” were produced through extensive human labor, with Paglen gathering tens of thousands of images for the AI to assimilate. Paglen’s use of these technologies is less about the prospect of facing our mechanized doppelgangers in art school than about coming to terms with the power and volume of rather specific applications. Traffic cameras snapping license plates, algorithms trawling the more than 50 billion photos posted on Instagram, scanners discreetly registering faces at Walmarts and sports stadiums—optic data is extracted wholesale anywhere and everywhere both IRL and digitally. AI vision invades our public and private lives at an incredible pace and magnitude. “The overwhelming majority of images are now made by machines for other machines, with humans rarely in the loop,” Paglen writes.5

The question of whether a machine can make a work of art is therefore a little quaint. We humans seem to have consigned ourselves to minor modes of visual production in the face of both AI’s current application and the prognosis of its future uses. (More dangerous than nuclear weapons, says Elon Musk.) Artworks engaging with AI beg to be eclipsed by these questions, as well as the increasing antiquatedness of the works’ content in the face of whatever comes next.


Simulation as Ritual

A few months after the Edmond de Belamy sale, I had an indifferent experience when viewing an animation produced by an AI, in the sense that I had little to no response to or engagement with it. On the screen was a creature composed of a series of spiny segments strung together like some unending crab leg. It glanced around with many visages. Its gray auxiliary faces lined a body topped by a larger crimson head, all simultaneously feline and reptilian. BOB (2018), the moniker an acronym for “Bag of Beliefs,” is described somewhat cheekily as an AI life-form by its creator, New York–based artist Ian Cheng. Though not associated with the AI crowd that responded to the Christie’s sale, Cheng has become well-known over the past eight years for his work in screen-based simulations, usually coded using the video game engine Unity. In 2017, the “Emissary” trilogy of simulations, exploring the history of cognitive evolution, constituted Cheng’s first US solo museum presentation, at MoMA PS1 in New York.

But my reaction to Cheng’s work came at Gladstone Gallery in New York, as I watched BOB jet across a mostly empty digital space displayed on eighteen monitors gridded together into a giant screen mounted on a white wall. At the top of the barren landscape floated a constellation of dots. Every few minutes, BOB soared toward one of these dots, connecting with it as if touching a star in the sky. A gong then sounded and a black portal opened, dropping offerings. BOB floated back to the floor to sniff at the heavenly gifts, auxiliary heads proffering spiny fruit and mushrooms to the creature’s central mouth, which made a hmmph noise as it ate and the crab leg grew (and defecated, telescoping pipes of gray emerging from its trunk).

Ian Cheng: BOB, 2018.

Cheng described the dots as shrines, and each one visible on screen was tagged with the name of an individual (IG Max, Young Costanza, etc.) who had downloaded and operated the BOB Shrine app. Through the app, I could select consumables to feed BOB, as well as gift charms like black orbs and “luck stones” (I understood the effects of neither). I could also send bombs that blew BOB up. Reduced in segment number, BOB briefly appeared corpselike before it resumed zooming around. (I could give the offerings labels like “cursed” and “lucky”; BOB would judge the labels’ accuracy according to an inscrutable algorithm and thereby award me reputation points.)

Disallowed death, BOB periodically cycles through micro personalities described by the show’s press release as a “congress of motivating ‘demons.’” The term “personalities” sounds more complex than the unitasking singularity of the demons’ urges: eater demons are hungry for offerings, flight demons flee threats like bombs. In the exhibition’s exegesis of this bag of beliefs, the demons fight for control over BOB. The winner is the one that produces minimal surprise, the unexpected creating “emotional upheaval” that signals BOB to update its beliefs in order to avoid further disruption. The goal, it would then seem, is stability, an entropic settling into sameness of behavior, regardless of whatever explosive material or lucky stones rain from above.
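The press release’s logic, as paraphrased above, amounts to a simple control loop: each demon predicts the next stimulus, the least-surprised demon keeps control, and a large enough surprise forces a belief update. A hypothetical sketch, with every name, number, and threshold invented for illustration and no relation to Cheng’s actual code:

```python
# Hypothetical sketch of the "congress of demons": the demon whose
# beliefs produce minimal surprise takes over BOB; a big surprise
# ("emotional upheaval") forces a belief update. Not Cheng's code.
import random

random.seed(7)

class Demon:
    def __init__(self, name, expected):
        self.name = name
        self.expected = expected      # belief: predicted stimulus value

    def surprise(self, stimulus):
        return abs(stimulus - self.expected)

    def update(self, stimulus, rate=0.5):
        # Nudge the belief toward what actually happened,
        # so the same event is less surprising next time.
        self.expected += rate * (stimulus - self.expected)

demons = [Demon("Eater", 1.0), Demon("Flight", 9.0), Demon("Idle", 5.0)]

history = []
for _ in range(20):
    stimulus = random.uniform(0, 10)   # offering (low) or bomb (high)
    winner = min(demons, key=lambda d: d.surprise(stimulus))
    if winner.surprise(stimulus) > 2.0:  # upheaval threshold (invented)
        winner.update(stimulus)          # update beliefs to avoid disruption
    history.append(winner.name)

print(history[:5])
```

Run long enough, the demons’ expectations drift toward the statistics of what rains from above, which is exactly the entropic settling into sameness the exhibition text implies.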

I visited BOB three times, for an hour or so each session, over the course of the two months it lived at Gladstone. I never saw BOB jump to my star. While I sat, my shrine was never listed in the right-hand ticker, where bot-vernacular messages from BOB appeared: “I chose Chunky Rat’s Shrine”; “My Alert Demon took over me, but now my Idle Demon is coming.” I was hard pressed to determine any meaningful differences between the demons, which mostly seemed to vary the speed with which BOB snuffled at offerings and darted toward the ceiling.

Emissary Forks at Perfection (2015–16), the second work in Cheng’s earlier trilogy, was on view concurrently at the Museum of Modern Art in New York as part of a group show. I stood in front of the massive screen for an hour, watching myriad Shiba Inu dogs be led by unmanned golden leashes around a swampy crater lake at dusk as a skeleton man loosely clothed in translucent flesh wandered around. Had I not read Cheng’s account of these various agents, I would not have deduced that the skeleton was an unnamed undead celebrity whose relationship to the Shiba controlled the water level of the lake, depending on the health of their bond. This was managed by an AI that “spoke” to the dog through the leash—an interaction I would never have intuited.

Ian Cheng: BOB production drawing, 2018–19, ink on paper, 8½ by 11 inches.

Emissary Forks at Perfection was totally opaque to me, but I could understand, in general, what BOB was doing, if not exactly what was going on in the bag of beliefs. If easier to parse, the interactive elements of this relatively simple single agent weren’t particularly engaging. All of that demonic congress of code and mobile phone–distributed instruction made for a life-form alone with inanimate objects, doing what animals in captivity often do. I felt as though I were watching an aquarium with one inhabitant.

AI automates and accelerates the production of images, moving and still, and of prose, poetry, and pop songs in a media environment already saturated with human-made cultural objects. The low ratio of signal to noise produces a great deal of indifference. These viewing experiences, in which we are present and unengaged, are paradigmatic of how we currently engage with most things. But watching Cheng’s segmented creature bolt to “Furthering Height’s Shrine” to unleash a hail of starfish offerings was, once I understood what was happening, an odd, AI-inflected combination of novel and uninteresting.

In various interviews, publications, and press materials Cheng frames his simulations as “video games that play themselves.”6 These simulations are characterized by an odd series of states: virtual, singular, and infinitely ongoing, at least until the power cord is yanked. One could have always watched more, and one could have always watched differently. Cheng himself has said that his simulations continually surprise him, and I imagine he has watched them more than anyone else has. Looking for something digestible, i.e., finite, critics are left either to engage with what they’ve seen of the works previously or cling to first premises and principles, to a simulation’s numbers and functions and the narrative and argot the coder is trying to push on them. Trying to understand such a work can be like reviewing an instruction manual. An AI art piece will always be, at the very minimum, about how it’s AI.

That so much of what constitutes the simulation remains unseen makes it strangely akin to works that cannot be fully taken in from one viewpoint at one time, like certain examples of Land art. The difference is a kind of quantifiably boundless excess, which encourages long viewing times, trying to catch a glimpse of what might happen, even if it’s slight variations in aquarium behavior. Scholar Sianne Ngai argues that calling something “interesting” is a plea to keep paying attention because it’s perpetually different, always diverging from what it was before.7 A simulation is the apotheosis of this. BOB isn’t uninteresting, as I first surmised; it is merely interesting.

 

Ian Cheng: Emissary Forks at Perfection, 2015–16, software simulation.

Worlders and Lurkers

A simulation could be more than that. In Emissaries Guide to Worlding, a publication that accompanied Cheng’s 2018 exhibition at the Serpentine Galleries, London, where BOB debuted, Cheng writes about his work as Worlding (always with a capital W), an infinite game played for the sake of playing, as opposed to the finite game played to win. As Cheng describes it, Worlding is a three-act process. The player (a “Worlder”) first composes a present of characters, relationships, and ecological conditions, then narrates a prehistory, and finally simulates a future in which an “infinite-enough game engine for the World” exists “to perpetuate itself without its supervising author.”8 Though the singular noun of Worlder suggests a lone artist giving rise to a fictional expanse, Cheng’s emphasis on “infinite-enough” perpetuation hints at Worlding’s extra-fictional effects. Cheng cites as twentieth-century examples of Worlders various titans of technology and mass media whose stories and ideas have become the bedrock intellectual property for corporate empires that far exceed any individual authorial scope. “The fiction,” Cheng writes, “becomes the movie, becomes the video game, becomes the toys, spinoffs, theme park, becomes the working mega-economy of a franchise.”9 Walt Disney, George Lucas, Steve Jobs: whether artists or marketers or names that mostly serve as metonyms for multimedia conglomerates worth many billions, they’re Worlders all.

A simulation is a neat trick for an individual artist to attempt in order to match the scale of such empires, and Cheng has generated enough complexity out of his AIs and mythologies to produce something ongoing. Perpetuation isn’t engrossment, however, even though I imagine either the technology of the simulations or the narrative glosses Cheng gives them will improve. As of now, however, the two combine poorly: the mythologies provide an inadequate hermeneutic for those scrutinizing the random action of the machine.

I watched BOB’s random action but I wasn’t engrossed in its world, as I am by many of the franchise products that constitute the myriad worlds available to those with an internet connection, a laptop, a phone, a console. I acquiesced to unitasking, which is to say I was a pair of eyes zoned out watching a video game play a video game, much in the same way I zone out watching other people play video games, a phenomenon that began on friends’ couches and spread en masse through livestreaming platforms like Twitch that combine real-time play with continuous dialogue between streamer and audience through a chatroom overlay. Twitch has grown rapidly since its 2011 founding. Each month during 2018, some 3.4 million unique broadcasters streamed themselves, for a combined total of 560 billion minutes watched. Bought by Amazon in 2014 (and thus becoming part of Worlder Jeff Bezos’s infinite game), the website is, according to Alexa.com, the fortieth most visited website at the time of this writing. You can stream other kinds of simulators there (I have been watching a Mennonite farmer play Farming Simulator 19), and MoMA PS1 ran the “Emissary” simulations on the platform during Cheng’s 2017 show, though the institution has left no documentation on the account. Streamers, one should note, often review video games while playing them. An adequately novel, if incomplete, form of reviewing a Cheng work might be to do a gonzo Twitch stream of it in which we all congress around BOB.

Ian Cheng: BOB production drawing, 2018–19, ink on paper, 8½ by 11 inches.

I don’t play video games, but I do keep up with them via blogs and YouTube and Twitch. This phenomenon of engaging not with a cultural form itself but rather with its attendant offshoots seems to be increasing. Who has time to do the thing, anymore? Just look at the documentation. And while I won’t speak categorically, I imagine there is at least a sizable portion of the Twitch-watching public that thinks as I do when I occasionally peruse the website. A vast technological infrastructure undergirds the playing of games not to win even when one can, but simply to play, and this infrastructure produces its own cosmology and history and vernacular and rituals of communication. In the face of this World, I watch but don’t contribute or engage or interact.

The opposite of the Worlder in these media environments Cheng calls Worlds might be the Lurker, a passive and unengaged recipient of content. I just lurk mutely, then think later about what I’ve done, often regretting the time spent giving something my attention but being inattentive. I used to be more of a guilty couch potato, someone who relaxes through media but feels worthless when reflecting on their downtime, but I’ve realized how haptic these things are, the way my fingers type in URLs, swipe to apps on my phone. (While writing this I lost ten minutes watching Tfue, the most-followed Twitch streamer, play Fortnite, the most-streamed Twitch game of 2019.) This tendency toward engaged unproductivity, habituated into my body by however many screens I’m surrounded by, can feel like a precursor to a coming world.

Technology continues its advance up the gentle slope of graduated artificial intelligences. This will assuredly exert downward pressure on wages and available work, regardless of whether every hand lifting a wrench or a paintbrush is replaced by the automaton’s claw. Current studies of the nonworking and underemployed suggest that both retiree and prime-age male populations spend most of their time devoted to leisure, “the lion’s share,” according to Atlantic reporter Derek Thompson in his essay “A World Without Work,” “spent watching television, browsing the Internet, and sleeping.”10 Thompson was writing in 2015, and his report already sounds dated, as differentiating watching television from browsing the internet is becoming increasingly difficult. Where is the playing and watching of video games?

Against the couch-potatofication of the working world, there are optimistic predictions that we’ll all turn to meaningful communities of play or art-making as technological unemployment ratchets up, every person becoming a Twitch streamer or a Worlder. Maybe, but here’s a hedge of my own against disappointment: as economies mutate and Worlds metastasize (with or without marketing departments), endlessly outputting wikis for prestige television spin-offs and movies of video games or vice versa, much of the material may be mediocre—or merely novel at best. That combination of new but meh is the sort of spectacle I and others already zone out to, lose time to, because we can muster no aesthetic judgment but only a passive reception of serialized difference. And all this cultural output, or at least that portion of it that’s available to our human eyes and ears, can become art through institutional validation. But there’s no guarantee that you or I will still be able to earn a living from the art we create or the work we do. We may become a world of Lurkers. But perhaps people will send me money on Twitch for my criticism. I’ll stream myself looking, but doing little else, and talking—until I become like a function, an AI parsing contemporary work and finding itself indifferent, an automaton of myself that I can sit back and lurk.

Ian Cheng: Emissary Forks at Perfection, 2015–16.


1 See “Is Artificial Intelligence Set to Become Art’s Next Medium?,” Christie’s, Dec. 12, 2018, christies.com.
2 Robbie Barrat, quoted in Tim Schneider and Naomi Rea, “Has Artificial Intelligence Given Us the Next Great Art Movement? Experts Say Slow Down, the ‘Field Is in Its Infancy,’” Artnet News, Sept. 25, 2018, news.artnet.com.
3 Mario Klingemann, quoted in Meagan Flynn, “A 19-Year-Old Developed the Code for the AI Portrait That Sold for $432,000 at Christie’s,” Washington Post, Oct. 26, 2018, washingtonpost.com.
4 Quoted in James Vincent, “How Three French Students Used Borrowed Code to Put the First AI Portrait in Christie’s,” The Verge, Oct. 23, 2018, theverge.com.
5 Trevor Paglen, “Invisible Images (Your Pictures Are Looking at You),” New Inquiry, Dec. 8, 2016, thenewinquiry.com.
6 See, for example, Ian Cheng quoted in Andrea K. Scott, “Watch the Absorbing and Tedious Simulations of Ian Cheng,” New Yorker, May 16, 2017, newyorker.com.
7 See Sianne Ngai, Our Aesthetic Categories: Zany, Cute, Interesting, Cambridge, Mass., Harvard University Press, 2012.
8 Ian Cheng, Emissaries Guide to Worlding, London, Serpentine Galleries and Fondazione Sandretto Re Rebaudengo, 2018, p. 7.
9 Ibid., p. 9.
10 Derek Thompson, “A World Without Work,” The Atlantic, July/August 2015, theatlantic.com.

 

This article appears under the title “Lurking in an AI World” in the January 2020 issue, pp. 42–47.
