008: AI Excess
On the excess and horrors of generative AI images.
The following text has been published in Swedish in BON magazine, and I have permission from the editors to share the English version with you all.
It begins with the illustration of a cat: According to the author, this is the “image of a cat sitting comfortably on a cushion, with its distinctively tabby markings and bright green eyes, set in a cozy living room.” The cat’s face is too cartoonish and its gaze too anthropomorphic to pass as a photograph of a real cat. Its fur is, in some spots, so fluffy and diffuse that it seems unaffected by gravity or friction. But around the mane and inside the ears, each hair swoops away from the body as if freshly drawn or painted. These telltale inconsistencies of focus and sharpness are indicative of an image generated by an AI tool like Midjourney or ChatGPT. They betray a production process that intends to have it both ways — near and far, high-def and lo-fi, good and easy, fast and cheap.
This cozy ChatGPT cat was posted about a year ago to r/ChatGPT, where early users of OpenAI’s generative AI chatbot can share their prompts and the images they yield. To follow up on the cozy cat, the creator’s second prompt asks for a more “successful and sophisticated” cat “sitting in its office.” The result is a cat in Western business attire, sitting on (not at) a desk stacked with files, in front of a wall of diplomas and certificates. Then comes the “ultra-successful and distinguished” cat in a “luxurious executive office.” Four prompts later, our creator asks for a cat that rules over “a boundless transcendent dimension, embodying a level of success that defies imagination.”
As the cat grows more successful, it becomes more powerful. The office slowly transforms into a multi-planetary command center, then a castle, then a center seat in what looks like the highest court in the universe. “A cat, but it gets progressively more successful in life” is more than a Reddit post: it summarizes AI image generation’s entire range of potential, the best it can do and the worst. The worst thing generative AI can do is already happening: plagiarizing artists’ work, replacing artists in the labor force, sucking up obscene amounts of water, energy, and fossil fuels to the point of accelerating climate-change-related deaths. In short, it helps ruling classes enact another phase of primitive accumulation, hoarding power and resources to excess in order to secure and expand their rule. The best thing generative AI can do is make fun of itself by folding itself open like an endless brochure of stupid ideas, all in the name of populating the internet with an excess of images so cheap and abundant they’re commonly referred to as “slop.”
The pursuit of excess that propels AI accelerationism can only beget more excess. On the level of the image, AI’s excess works exponentially: the prompts buckle and the chatbot splits open to generate increasingly monstrous and grotesque results. The aesthetics of AI enact a Gothic drama of hauntings and emergence; there is always too much to contain or conceal. This is where we find the sloppiest kitty: a work by a Midjourney artist who goes by @kentshooking on Instagram. The prompt: “When you share a neuralink with your cat at the day spa.” In this video, you can often make out the face of a cat and that of a woman. But these faces are constantly changing, as if they were being decomposed by greedy pixel-shaped maggots. Shifting from soft-focus to deep-fried — an image that wants to be every kind of image — the woman and the cat are sometimes conjoined by a fleshy pink tube that itself transforms into a gloved hand, then a free-floating organ, before once again fusing to the woman and her cat as they continue writhing in the convulsive slop.
Notes: AI Excess builds on earlier work I did for Dirt, in 2023, about disgust and generative AI. The essay diptych was called AI Abjection + AI Bleed, and the two pieces mostly covered the human-versus-nonhuman debate that surrounded generative AI discourse at the time. I’ve always resented this dichotomy, 1) because “the human” as a category is mostly bullshit, and 2) because everything AI does is the result of human activity and decisions, and arguing otherwise only serves to mystify people and parties we should be holding to account. I sometimes think that I am still writing these essays through subsequent works on AI and disgust.
from AI Abjection
…Julia Kristeva calls it abjection in her book-length essay, Powers of Horror. “These body fluids, this defilement, this shit” she writes, and these pixels and bits, I’ll add, “are what life withstands.” When we are disgusted we are challenged on an existential level. Abjection is an acid that corrodes the meanings of our worlds. “There,” Kristeva writes, “I am at the border of my condition as a living being.” So it’s no surprise that feelings of disgust and repulsion also find their way into our reception of emerging technologies. Like the NYPD’s headless robodogs or the self-driving cars with homicidal tendencies. Along with, or perhaps because of, the existential threat some feel when faced with any advancing technology—like generative machine learning—disgust leaks through and tinges our perception…
…For Kristeva, symptoms of disgust point to the disturbance of a symbolic order. We want things to always mean what they’ve always meant—for “us” to mean “safe,” “impermeable,” and “pure.” We rely on fantasies like these to draw clear distinctions between the human and the animal, the living body and the decaying corpse, the human and the machine to maintain our sanity. So our skin tingles at the thought of its dissolving boundaries and our brains fizz at the prospect of their uselessness…
from AI Bleed
…But our deep concerns don’t just have to do with metabolizing AI, they also concern the “bleed” of software into culture, areas where the human hand and mind have always been the dominant mode. This desire to “staunch” the bleed has less to do with the inherent qualities of AI programs and more to do with our fragile ability to draw firm boundaries between AI and human cultures. […] It’s all about the boundaries, the hyphens in the us-vs-them dichotomy, the skin we need to recognize ourselves as distinct from unknown and uncontrollable others.
When and how AI culture is a threat to human culture depends on where you draw the line between human and machine. After all, if AI culture is the product of human-made machines, running human-authored software trained on human-made cultural artifacts, then when and how exactly does it become nonhuman? When we no longer recognize it as our own? When we think of it as something else? I think a lot of the bleeding we so fear has always been happening…
More recently, I returned to my writing on Bryan Johnson for ArtReview, where I reviewed his Netflix documentary and swung at the tech bro longevity complex. (And talked about this some more on NPR’s “It’s Been A Minute.”) Also for ArtReview, I wrote about what AGI development is doing to creative labor and our access to books and art. Forthcoming: a long essay about AI image-making and Gothic horror for a new magazine launching soon. Subscribe to stay tuned. 🖤


