2025-in-review

It’s strange to measure every year against a concept developed by a science fiction writer, but William Gibson’s line “The future is already here—it’s just not evenly distributed”1 has been my north star for my recent year-in-review essays. Gibson meant that the future was unevenly distributed by class: the wealthy receive high-tech healthcare while the world’s poorest live in squalor—though one might ask which of these is really our future. Yet the quote has been repeatedly misread as a claim about time and space: that the future arrives somewhere first, perhaps unseen, while the rest of the world catches up. But this misreading is more productive than Gibson’s intent. Gibson’s critique of inequality is fair enough, but we all know this, decry it, and go on about our business. The misreading, on the other hand, is a theory of historical change.

With the release of ChatGPT in late 2022, a temporal rift opened, shattering the post-Covidean present. But many tried the early tools, encountered hallucinations, read articles about slop and imminent environmental ruin, and reasonably concluded there was nothing to see. By 2025, a cursory examination of news in AI would have assured them that AI had proved a bust. OpenAI’s long-awaited updates disappointed, and the company flailed, turning to social media with Sora, a TikTok clone for AI. Meta seemed to abandon its efforts to create a competitive AI and instead turned to content generation for Instagram and Facebook, something nobody on earth wanted. Talk of a bubble started among Wall Street pundits. The hype-to-disappointment cycle is familiar, and the dismissals were not unreasonable.

But again, the future isn’t evenly distributed, and if you don’t know where to look, you would be excused for believing it’s all hype. Looking past such failures, 2025 was actually a year of breakneck progress. Anthropic’s Claude emerged as the most capable system for complex tasks, Google’s Gemini became highly competitive, while DeepSeek and Moonshot AI proved that China was not far behind. More significant than any single model was the emergence of agentic AI—systems that can take on multi-step tasks, act, navigate filesystems, write and execute code, and work across documents. Claude Code was the year’s groundbreaking innovation. While “slop” was Merriam-Webster’s word of the year, “vibe coding”—using agents to write programs—was much more important. Not only could programmers use these agents to accelerate their work; non-programmers could now realize their ideas without any knowledge of code, a radical change in access I explored in “What Did Vibe Coding Just Do to the Commons?”.

By any first-world standards, at least, these tools are remarkably democratic and inexpensive. A basic Claude subscription costs about as much as a month of streaming, and even the $200 maximum usage account costs less than a monthly car payment. For many, however, the barrier is not price but something deeper—a resistance approaching revulsion. These tools provoke fear in a way that earlier technologies did not. It’s not the apocalyptic dread of the doomers or the Dark Mountain sensibility that apocalypse is near. Rather, it’s a threat to the sense that thought itself is what makes us distinct. The unevenness of the future is no longer about access; it’s now about willingness to engage.

As a scholar, I find it strange to think about the very short term. I have always been suspicious of claims that radical change was upon us. I would rather align myself with the French Annales school concept of la longue durée, as defined by the great Fernand Braudel: the long-term structures of geography and climate. Faster than these were the medium-term cycles of economies and states, while he dismissed the short-term événements of rulers and political events as “surface disturbances, crests of foam that the tides of history carry on their strong backs.”2. Events, he wrote elsewhere, “are the ephemera of history; they pass across its stage like fireflies, hardly glimpsed before they settle back into darkness and as often as not into oblivion.”3. The real forces operate beneath, slowly, often imperceptibly.

Curiously, Braudel himself embraced technological change in his own work. In the 1920s and 30s, he adapted an old motion-picture camera to photograph archival documents—2,000 to 3,000 pages per day across Mediterranean archives from Simancas to Dubrovnik. He later claimed to be “the first user of microfilms” for scholarly historical research.4. His wife Paule spent years reading the accumulated reels through what Braudel called “a simple magic lantern.”5. Captured in 1940, he spent five years as a prisoner of war and wrote the entire first draft of The Mediterranean—some 3,000 to 4,000 pages—from memory. Paule, meanwhile, retained access to the microfilm and notes in Paris, and after the war, they reconstructed the text, taking his manuscript, verifying it and adding footnotes and references from the microfilm.6.

In 1945, the same year Braudel was liberated, Vannevar Bush published “As We May Think,” in which he imagined a device he called the “Memex”: a mechanized desk storing a researcher’s entire library, indexed and cross-referenced, expandable through associative trails.7. The vision remained speculative for decades. Now the world’s archives are being digitized; AI systems summarize and search across them in seconds and can translate virtually any language. To take one example, earlier this year, I used Google’s Gemini to translate the Hierosolymitana Peregrinatio of Mikalojus Kristupas Radvila Našlaitėlis, a sixteenth-century pilgrimage narrative, from an online scan of the Latin first edition. The result is not a polished scholarly translation, but a working text that gave me a good sense of a work previously unreadable to anyone without proficiency in Latin or Polish (the only language into which, to my knowledge, it had been translated). The role of the intellectual is being transformed—not replaced, but augmented in ways Bush could only sketch. This feels like something other than foam.

How to account for such a rapid shift? Manuel DeLanda offers one answer in A Thousand Years of Nonlinear History. Working in Braudel’s materialist tradition and drawing on Gilles Deleuze and complexity theory, DeLanda describes how flows—of trade, energy, and information—accumulate and concentrate until they cross a threshold and undergo a phase transition, radically reorganizing into a new stable state. But here is the key insight: intensification is la longue durée. The accumulation of flows that began with the Industrial Revolution—or perhaps with writing, agriculture, or even symbolic representation itself—is the deep structure behind our era. Steam, electricity, computing, the internet: each was a phase transition within a longer arc of intensification. Cities accelerate such processes, as Braudel showed, concentrating capital and labor until new forms of economic organization emerge—Venice, Antwerp, Amsterdam, London, each becoming sites at which the future arrived first. Such conditions are not opposed to la longue durée; they are the moments when intensification crosses a threshold.

The continued pace of change this year underscores that there has been no return to equilibrium. But this has been accompanied by unprecedented resistance to technology, appearing as simultaneous terror at its apocalyptic nature (in jobs, if nothing else) and dismissal of it as useless, especially among Gen Z. A January 2026 Civiqs survey found that 57 percent of Americans aged 18–34 view AI negatively—more than any other age group. Curiously, the seniors category, which now includes most boomers, was the least resistant to AI, followed by Gen X and older millennials, all groups that grew up seeing radical societal and technological changes.8. It seems paradoxical that the smartphone generation recoils from the tools of the future. To understand this resistance means understanding the mentalité that shaped it—what Braudel’s successors in the Annales school called the collective psychology formed through lived experience.9. For Gen Z, that formative experience was network culture—both a successor to postmodernism and a form of collective psychology I did not fully understand at the time. When I wrote on network culture in 2008, it seemed to me that social media promised connection; instead, it brought division.10. The networked self was indeed constituted through networks, not merely isolated in postmodern fragmentation, but the fragmentation was now collective. Networked publics built barriers against one another, creating what Robert Putnam called cyberbalkanization: retreat into a comfortable niche among people just like oneself, views merely reinforcing views.11. Identity wars and mimetic conflict flared across filter bubbles that amplified outrage and tribal scapegoating as both MAGA and wokism built toxic online cultures. QAnon and a thousand other conspiracy theories propagated through Facebook groups and YouTube recommendations. Young men drifted into incel communities where loneliness became ideology and livestreaming mass shootings was celebrated. Influencers built their empires on hatred—Hasan Piker framed Hamas’s October 7 massacre as anticolonial resistance while Nick Fuentes celebrated mass shooters as vanguards of race war and civilizational collapse.

Nor did this just fragment culture—it exacted a massive psychic toll, as social contagion spread new forms of self-harm and mental illness. During the pandemic, teenage girls began presenting tic-like behaviors—not Tourette’s syndrome, but something researchers termed “mass social media-induced illness,”12. spread by TikTok videos about Tourette’s rather than any actual disease. The pattern was unprecedented but not unique. Eating disorders spread through thinspiration hashtags. Self-harm tutorials circulated on Instagram. The platforms that were supposed to bring us together instead spread desires, disorders, and identities through pure social contagion—and with them, violence and polarization. A generation that grew up inside this experiment—that watched it reshape their peers’ bodies, minds, and identities—is right to be skeptical of the next technological promise.

In 2010, it seemed like network culture had a good chance of becoming understood as the successor to postmodernism. Bruce Sterling and I were engaged in a kind of dialogue about it online. He predicted that network culture would last “about a decade before something else comes along.”13. And he was right, as I acknowledged in my 2020 Year in Review. By then, network culture was exhausted, and with the Covidean break, it seemed time for something new. In 2023, I taught a course at the New Centre for Research & Practice to try to broadly sketch the emerging era. It’s still early and hard to fathom, like trying to understand postmodernism in 1971 or network culture in 1998, but it’s clear that if postmodernism was underwritten by the explosion of mass media, network culture by the Internet, social media, and the smartphone, then the current era is shaped by AI.

But if Gen Z, scarred by the effects of social media, has been reacting with deep fear and anxiety, Sterling epitomizes the other reaction: dismissal. In the most recent State of the World, for example, he derides AI-generated content as “desiccated bullshit that can’t even bother to lie.” He compares the vibe-coding atmosphere to an acid trip, mocking the professionals who utter “mindblown stuff” like “we may be solving all of software” and “I have godlike powers now.” For Sterling, AI can produce nothing but slop. Now Bruce has always had a healthy skepticism toward tech claims, but I can’t help but think of Johannes Trithemius, the fifteenth-century abbot who wrote De Laude Scriptorum just as Gutenberg’s press was spreading across Europe—defending the scriptorium against a technology he could not see would remake the world.

There are even deeper, more existential fears, and I’ve spent the past year addressing them on my blog, in the process laying the foundation for a book on the topic: AI as plagiarism machine; AI as hallucination engine; AI as stochastic parrot, mindlessly repeating what it has ingested (Sterling’s critique); and AI as uncanny double, too close to us for comfort. As I explain, the discomfort arises not from the machine’s otherness but from its likeness: a mirror held up to processes we preferred to believe were uniquely ours.

It’s no accident that I published these essays on my blog. As far as my personal year in review goes, this was very much the year of the blog. I have no plans to ever publish in an academic journal again. Why would I? Who would read it? Why would I want to publish something paywalled, reinforcing the walled gardens of inequality that academia is so desperate to maintain—even as it proclaims itself the champion of open inquiry and democratized knowledge? Academia has become the realm of what Peter Sloterdijk called cynical reason: rehearsing the tropes of ideology critique while knowing the game is empty and playing it anyway. This revolts me.

But for almost ten years now, since the shutting down of the labs at Columbia’s architecture school, I have been content to write from the position of the outsider, something I reflected on in “On the Golden Age of Blogging”. That essay was prompted by a strange comment from Scott Alexander, who lamented on Dwarkesh Patel’s podcast that he had personally made a strategic error in not blogging during what he called the “golden age,” imagining that “the people from that era all founded news organizations or something.” The golden age he remembers is a fiction, as golden ages often are—and he gets the stakes entirely wrong. Evan Williams founded Blogger in 1999, sold it to Google, co-founded Twitter, then created Medium, which convinced hapless readers to pay to read slop long before AI slop was ever a thing. The early bloggers who sought professionalization found themselves absorbed into the worst of the worst, writing for BuzzFeed, peddling nostalgia listicles that rotted psyches.

There was, however, a golden age for me, and I miss it: the architecture blogging community circa 2007—Owen Hatherley, Geoff Manaugh, Enrique Ramirez, Fred Scharmen, Sam Jacob, Mimi Zeiger (whose Loud Paper was less a blog and more a zine, but a key part of the culture), and others. We inherited from zine culture an informal, conversational tone and the will to stand outside architectural spectacle. But ArchDaily and Dezeen commercialized the form, shifting from independent critique to marketing and product. Startup culture absorbed architectural talent.

Blogging was powerful precisely because we had no stakes in it—we owned and controlled our means of intellectual production. The golden age of blogging is not in the past; it is now. After years of proclaiming I would blog more, in 2025, I really did. I wrote over 83,700 words on varnelis.net and the Florilegium—essay-length pieces on landscape, native plants, AI and art, architecture, infrastructure, politics, and tourism. My only regret is that my presidency at the Native Plant Society of New Jersey consumes so much of my thinking about native plants that little remains for writing. But the time will come, and if nothing else, my investigation of the Japanese garden aesthetic should point toward the future direction of my writing on landscape.

I also continued to make AI art, or to be more precise, what I called stochastic histories. A major project was a substantial reworking of The Lost Canals of Vilnius, a counterfactual history in which, after the Great Fire of 1610, Voivode Mikalojus Radvila Našlaitėlis rebuilt the city with Venetian-style canals, complete with gondoliers, water processions, and a hybrid “Vilnius Venetian” architecture. As research, I used Gemini to translate Radvila’s sixteenth-century Latin pilgrimage narrative. AI, like photography or film, is what you make of it. Film is perhaps the better analogy—anyone can make a video. Making something worthwhile is another matter entirely. In December, I also completed East Coast/West Coast: After Bob and Nancy, a generative restaging of Nancy Holt and Robert Smithson’s 1969 video dialogue using two AI speakers.

There were other substantial essays, too. In “Oversaturation: On Tourism and the Image”, I finally put down on paper something I had wanted the Netlab to address while at Columbia, but that proved too dangerous for the school to support. Universities cannot critique the very systems of overproduction they depend upon for survival. Publish or perish and endless symposia nobody is interested in are the academic versions of overproduction, but more than that, any architecture school claiming global currency cannot afford to offend either other institutions, like museums, that give it legitimacy, or, for that matter, the trustees that fund both. As I point out, tourism has always been mediated by imagery; take Piranesi’s vedute or the Claude Glass. Grand Tourists always had representations at hand to interpret their direct experience—but a new crisis point has been reached with both overtourism and the overproduction of images. Algorithmic logic now reorganizes cultural geography around “most Instagrammable spots,” making historical significance secondary to content potential. The Fushimi Inari shrine in Kyoto is the case in point—a 1,300-year-old shrine that Instagram made famous and that has now ceased to serve as a religious site due to the influx of visitors. The Japanese have a term for this: kankō kōgai, tourism pollution. Tourism has become the paradigm of contemporary experience—the production of imagery without cultural meaning; everything feeds the same algorithmic mill. Even strategies of resistance get metabolized—slow travel becomes a hashtag, psychogeography becomes an Instagram guide.

The Bilbao effect, which was a major driver of oversaturation, was itself a product of globalization. Hans Ibelings coined “supermodernism” in 1998 to refer to the architectural expression of Marc Augé’s “non-places,” an architecture optimized for the perpetual circulation of bodies and capital. It was the architecture of network culture, of the Concorde and the Internet. Koolhaas diagnosed its endgame in his 2002 “Junkspace”—“Regurgitation is the new creativity”—and then, tellingly, stopped writing. Today, network culture is long gone; nationalism is resurgent. The Internet is a dark forest now,14. and the disconnected life is on the rise.15. The most exclusive resorts now advertise no Wi-Fi, no cell service, no addresses—only coordinates. Disconnection has become the ultimate luxury, sold back to the same people who built the infrastructure of connection. More cities are alarmed by the effects of overtourism than are eager to attract tourists. In the US, new architectural proposals appeal to a retardataire aesthetic—Trump displaying models, in three sizes, of a triumphal arch inspired by Albert Speer and marking a triumph of nothing in particular (“I happen to think the large looks the best”), a four-hundred-million-dollar ballroom modeled on Mar-a-Lago, an executive order mandating classical architecture for federal buildings that Stephen Miller explicitly framed as culture war.

Yet both Bilbao and MAGA are spectacle, architecture-as-branding. But the Bilbao effect is imploding. No city believes anymore that a signature building by a starchitect will transform its fortunes. The parametricists have nothing left to say. Parametric design promised formal liberation—responsive, site-specific, computationally derived—but what it delivered was the most efficient, ugliest box. If the promise was the blob, the reality is the “5-over-1”: wood-frame residential floors stacked on a concrete podium with ground-floor retail, wrapped in a pastiche of brick veneer, fiber cement panels, and that obligatory conical turret element meant to signal “we thought about this corner.” As for AI-generated architecture, it is merely boring—giant sequoias hollowed out as apartment buildings, white concrete towers with impossible cantilevers, and lush vegetation sprouting from every surface—the same utopian fantasy rendered a thousand times over. These are renders of renders: AI trained on architectural visualization produces visualizations that are utterly disconnected from any tectonic reality. A new generation may emerge in response to new needs, but for now, the discipline has lost its cultural purchase. Architecture, for us, is a thing of the past.

The art world, too, has slowed. Museums are putting on fewer shows, shifting from aggressive schedules to longer, more deliberate exhibitions—or simply cutting programming as budgets tighten.16. The frantic pace of the Biennale circuit has exhausted dealers and collectors alike; smaller fairs are folding, and even the major ones feel like obligations rather than events. Galleries that survived the pandemic are now closing quietly, without the drama of a market crash—just a slow bleed of foot traffic, sales, and cultural attention. There is no new movement, no emergent critical framework, no sense of direction. The market churns on—auction prices for blue-chip artists remain high, collectors still speculate, art advisors still advise—but the sense of cultural mission has dissipated. What remains is commerce without conviction, a field that has forgotten why it exists beyond the perpetuation of its own economy. The institutions that trained artists for this field are collapsing alongside it.

As enrollment dwindles, design schools are collapsing—not merely contracting, but ceasing to exist. Most recently, the California College of the Arts, the last remaining independent art and design school in the Bay Area, announced in January 2026 that it would close after the 2026–27 academic year.17. It follows a grim procession: the San Francisco Art Institute (2020), Mills College (2022), the Pennsylvania Academy of the Fine Arts (2023), and Woodbury University’s acquisition by Redlands and subsequent adjunctification—a fate that has methodically undone so many schools as faculty become contingent labor and institutions turn into hollow administrative structures run by well-paid, cost-optimizing consultants.

There is personal resonance for me in this. Simon’s Rock College of Bard, which shuttered its Great Barrington campus in 2025, was where I studied for my first two years before transferring to Cornell—a pioneer of early college education that offered a radical pedagogical experiment in what learning could be beyond conventional schooling. I arrived there straight from high school, as did my good friend and colleague Ed Keller; clearly, something interesting was in the water back then. Simon’s Rock made the development of young minds its central mission rather than an incidental focus of brand management or endowment growth, and its alumni list is impressive for such a small school. It has an afterlife at Bard, but it’s an echo at best.

The difference between these institutional deaths and simple market failure is this: they are not being replaced. When a retail business fails, another may open elsewhere. When a school closes, there is no succession. The market offers no alternative. Instead, what remains are the corporate university satellites—for-profit programs nested within larger institutions (like Woodbury’s absorption into Redlands), stripped of autonomy, their faculty reduced to precariat, their curricula bent toward what can be measured and marketed. The art schools that survive do so by transforming into something else: luxury finishing schools for wealthy families or research appendages to larger universities, where “design thinking” becomes another management consultant’s tool. The pedagogical mission—to create conditions where students might develop serious aesthetic judgment, where they might encounter genuine problems and be forced to think through them—is not merely challenged but impossible. The closure of these schools does not signal a failure of art education; it signals that the very idea of art education as something valuable in itself has been liquidated.

This hollowing out of cultural institutions is not incidental to the political moment—it is one of its hallmarks. Politically, most people have checked out. This is not 2017, when each provocation demanded a response; the outrage cycle has given way to numbness. In “National Populism as a Transitional Mode of Regulation”, I argued that Trump, Orbán, Meloni, and their ilk represent not a return to fascism but something new: the authoritarian management of declining expectations. National Populism correctly identifies that neoliberalism’s promise of shared prosperity has failed, but it channels legitimate grievances toward scapegoats rather than addressing the technological displacement actually causing them. This is its tragic irony: the National Populist base—workers made obsolete by neoliberalism and unable to participate in AI Capitalism—finds its legitimate anger directed into a movement that accelerates the very forces rendering them superfluous. Their value to capital lies in political disruption rather than economic production; they are consumers and voters, but no longer needed as workers. National Populist leaders offer psychological compensation—dignity, recognition, transgressive identity politics—rather than material improvement. The apocalyptic tenor of populist culture, its end-times thinking and conspiracy theories, provides a framework for populations sensing their own economic redundancy.

The alliance between tech billionaires and populist leaders is unstable. AI Capitalism requires borderless computation and global talent flows; nationalist protectionism contradicts these at every turn. Musk, Thiel, and Andreessen have aligned with the movement to dismantle the regulatory state, not because they share its vision but because populism serves as a useful battering ram against institutional constraints. Once those barriers fall, the movement and its human-centric concerns can be discarded. National Populism, as I conclude, is not the future—it is a political interlude, a transitional mode that will not survive contact with the economic forces it has helped unleash.

If National Populism is transitional, is there a positive vision that can replace it? In “After the Infrastructural City”, I responded to Ezra Klein and Derek Thompson’s book Abundance, perhaps the most influential book of 2025, which argues that America’s inability to build is a political choice, not a technical constraint. Their solution: streamline regulation, invest boldly, build more. It’s a compelling vision—and a necessary corrective to decades of paralysis. But Abundance shares a curious blindspot with Muskian pronatalism: both assume we need more people. Musk preaches that declining birthrates spell civilizational collapse; Klein and Thompson build their vision on populations that will mysteriously arrive to fill what’s built, perhaps by immigration. Neither accounts for the possibility that AI changes the equation entirely—that a smaller population, augmented by intelligent systems, might not be a crisis at all. Populations are already shrinking across much of the developed world. What I call “actually-existing degrowth”—not the voluntary eco-leftist kind, but the unplanned demographic contraction now underway in Japan, Korea, and much of Europe—is coming for the United States too. Declining birth rates, aging populations, and regional depopulation: these are not future scenarios but present facts.

This doesn’t invalidate the Abundance agenda; it redefines it. Abundance cannot mean building more for populations that will not arrive. It must mean building better, adaptive, intelligent infrastructure for smaller, older societies. AI, rather than merely destroying jobs, can help navigate this transition: smart grids, autonomous transit, predictive healthcare. The opportunity is real. Managed shrinkage, done well, can mean more livable cities, restored ecosystems, higher quality of life. The question is whether political leaders can articulate a vision of flourishing within limits—or whether nostalgia for growth will leave us building for a future that never comes.

Against the exhaustion of institutions, against the hollowing out of architecture and art, against the closure of the schools that trained people to imagine, the blog remains. It may not be much, but it is one independent voice outside the collapsing structures around me. I wrote over 83,000 words this year. I made art. I thought through problems that matter to me with the help of AI, which provided me with tools I could only have dreamt of merely a year ago. Today, I uploaded hundreds of thousands of words from my essays to a directory in Obsidian so that Claude could draw connections between them (see here for just how one can set this up).

The future is already here—it just isn’t evenly distributed. Some are afraid or are still pretending AI isn’t happening. Phase transitions are uncomfortable. They are also where the interesting work gets done. One makes of one’s time what one makes.

1. William Gibson, quoted in Scott Rosenberg, “Virtual Reality Check Digital Daydreams, Cyberspace Nightmares,” San Francisco Examiner, April 19, 1992, Style section, C1. This is the earliest verified print citation, unearthed by Fred Shapiro, editor of the Yale Book of Quotations.

2. Fernand Braudel, The Mediterranean and the Mediterranean World in the Age of Philip II, trans. Siân Reynolds (New York: Harper & Row, 1972), 21.

3. Braudel, The Mediterranean, 901.

4. Fernand Braudel, “Personal Testimony,” Journal of Modern History 44, no. 4 (December 1972): 448–67.

5. Paule Braudel, “Les origines intellectuelles de Fernand Braudel: un témoignage,” Annales: Histoire, Sciences Sociales 47, no. 1 (1992): 237–44.

6. Howard Caygill, “Braudel’s Prison Notebooks,” History Workshop Journal 57 (Spring 2004): 151–60.

7. Vannevar Bush, “As We May Think,” The Atlantic Monthly 176, no. 1 (July 1945): 101–8.

8. Civiqs, “Do you think that the increasing use of artificial intelligence, or AI, is a good thing or a bad thing?,” January 2026, https://civiqs.com/results/ai_good_or_bad.

9. The concept of mentalités emerged from studies of phenomena like the witch trials, where beliefs and fears spread through communities in ways that could not be reduced to individual irrationality. For an overview of mentalités as a historiographical concept, see Jacques Le Goff, “Mentalities: A History of Ambiguities,” in Constructing the Past: Essays in Historical Methodology, ed. Jacques Le Goff and Pierre Nora (Cambridge: Cambridge University Press, 1985), 166–180.

10. Kazys Varnelis, “The Rise of Network Culture,” in Networked Publics (Cambridge: MIT Press, 2008), 145–160.

11. Robert Putnam, “The Other Pin Drops,” Inc., May 16, 2000.

12. Kirsten R. Müller-Vahl et al., “Stop That! It’s Not Tourette’s but a New Type of Mass Sociogenic Illness,” Brain 145, no. 2 (August 2021): 476–480, https://pubmed.ncbi.nlm.nih.gov/34424292/.

13. Bruce Sterling, “Atemporality for the Creative Artist,” keynote address, Transmediale 10, Berlin, February 6, 2010.

14. Yancey Strickler, “The Dark Forest Theory of the Internet,” 2019, https://www.ystrickler.com/the-dark-forest-theory-of-the-internet/. See also The Dark Forest Anthology of the Internet (Metalabel, 2024).

15. “Trend: Not Just Digital Detox, But Analog Travel,” Global Wellness Summit, 2025, https://www.globalwellnesssummit.com/blog/trend-not-just-digital-detox-but-analog-travel/.

16. “The Big Slowdown: Why Museums and Galleries Are Putting on Fewer Shows,” The Art Newspaper, March 10, 2025, https://www.theartnewspaper.com/2025/03/10/the-big-slowdown-why-museums-and-galleries-are-putting-on-fewer-shows.

17. California College of the Arts, the last remaining private art and design school in the Bay Area, announced in January 2026 that it would close after the 2026–27 academic year. See “‘Nowhere Left to Go’: As California College of the Arts Closes, So Does a Pathway for Bay Area Artists,” KQED, January 13, 2026, https://www.kqed.org/news/12070453/nowhere-left-to-go-as-california-college-of-the-arts-closes-so-does-a-pathway-for-bay-area-artists.

The Rise and Fall of the Author

I know, this is both too long and too short. It should be a book, or it should be edited heavily. But I have a doctoral minor in rhetoric and have been obsessed with print culture for 25 years, so there it is. I did what I wanted, but perhaps not what I should have done.

The Library of All Plagiarized Books, Google ImageFX, 2025

In Jorge Luis Borges’s 1939 short story, “Pierre Menard, Author of the Quixote,” Menard undertakes what appears to be an impossible, even insane, task: recreating, word for word, “the ninth and thirty-eighth chapters of the first part of Don Quixote and a fragment of chapter twenty-two.” Menard aims not to copy Cervantes but to write the Quixote anew through his own experiences as a 20th-century French symbolist. Menard did not want to compose another Quixote—which is easy—but the Quixote itself, coinciding, word for word and line for line, with the text of Miguel de Cervantes.1.

When Menard succeeds in producing such a text—identical to the original—Borges’s narrator insists the works are profoundly different. Where Cervantes’s prose was natural and of its time, Menard’s identical words are “almost infinitely richer,” deliberately archaic, embedded with new meaning. Throughout the story, Borges deploys scholarly devices—footnotes referencing fictional authorities such as the “Baroness de Bacourt” and “Carolus Hourcade,” as well as an elaborate bibliographic catalog of Menard’s monographs, translations, and scholarly studies—to create an illusion of academic rigor, at odds with the narrator’s implausible belief that Menard has succeeded in creating the exact Quixote out of sheer will. In framing both the fictional narrator and Menard in this manner, Borges exposes the authorial voice as a social construct mediated through bibliographic catalogs, citations, and scholarly conventions.

Borges’s presentation of Menard as a figure of almost obsessive scholarly intensity, emblematic of an intellectual culture that privileges meticulous citation, exhaustive cataloging, and painstaking documentation, underscores the arbitrary nature of authorial authority. By situating Menard within an elaborate apparatus of footnotes, fictional scholarship, and invented references, Borges highlights how framing alone can endow identical texts with fundamentally different meanings. Menard’s act of plagiarism thus emerges not as a straightforward ethical transgression, but as a concept dependent entirely upon interpretative context. This insight resonates powerfully in the contemporary age of generative AI, where algorithms produce texts that defy conventional notions of plagiarism precisely because they are generated from vast, undifferentiated statistical patterns rather than explicitly identifiable sources. Borges’s story has become a cornerstone of postmodern literary theory precisely because it challenges fundamental assumptions about creativity and authorship. Today, Borges’s meditation on plagiarism as creative re-imagination rather than simple theft illuminates contemporary anxieties about AI and human creativity.

Curiously, sixteen years before Borges published his story, Polish-American writer Tupper Greenwald created an almost identical literary conceit. In his story “Corputt,” Greenwald portrays a character obsessed with Shakespeare’s King Lear. Near death, this character reveals to a colleague that he has achieved his lifelong ambition: writing a drama equal to Lear. The text he reads aloud matches Shakespeare’s play exactly. This uncanny parallel raises provocative questions: Did Borges know Greenwald’s work (quite unlikely)? Is this merely an instance of parallel invention? Does this coincidence itself embody Borges’s central insight into originality and authorship? “Corputt” was largely forgotten until Argentine critic Enrique Anderson Imbert reprinted it in his 1955 anthology Reloj de arena. Borges himself never acknowledged Greenwald and, of course, Imbert’s book was printed over fifteen years after “Pierre Menard.” Whether Borges knew of “Corputt” or both authors independently arrived at remarkably similar ideas remains uncertain. Either possibility underscores the inherent instability of originality, demonstrating how literature continually echoes, duplicates, and anticipates itself.2.

Today’s generative AI systems function as modern-day Pierre Menards, producing works that superficially resemble human-created content while often existing in fundamentally different contexts. Like Menard’s Quixote, AI-generated works can be identical in form to human productions while carrying entirely different implications by virtue of their inhuman origins. The discomfort this creates—particularly among creative professionals—reveals deep-seated cultural assumptions about originality, authenticity, and the supposedly unique human capacity for creative expression.

The intensity of this discomfort has manifested in antagonistic responses from certain segments of the artistic community: legal threats, public denunciations, and harassment of AI developers and users. But it seems ironic that some of the most vocal critics of AI art produce derivative commercial work. Consider the previously little-known fantasy illustrator Greg Rutkowski, who creates genre pieces within established fantasy art conventions. Rutkowski became famous precisely because his name was one of the most-used prompts in early text-to-image systems such as Midjourney, which led him to complain about the “theft” of his style, even though this widespread imitation literally gave him recognition he had never previously achieved.3. Similarly, commercial artist Karla Ortiz—whose website features images of famous actors in franchises such as Doctor Strange and Loki—gained significantly more attention leading legal challenges against AI companies than she ever had for her industry work creating “concept art,” a field that, despite its misleading name, bears no relation to conceptual art and instead operates entirely within the visual language and narrative conventions of commercial franchises like Marvel.4. In both cases, artists whose own work operates comfortably within inherited commercial styles became vocal advocates against a technology that allegedly “steals” uniqueness they themselves don’t pursue in their professional practice. As I edit this essay, Disney and Universal, both noted for their relentless reliance on their back catalogs, have sued AI image firm Midjourney, claiming it is “a bottomless pit of plagiarism.”5.

These extreme reactions suggest something deeper than mere economic anxiety; they reveal a cultural mythology about creativity that AI fundamentally challenges. By explicitly highlighting the derivative, pattern-based nature of creative production, generative AI systems threaten cherished illusions about human uniqueness and artistic authenticity. In this essay—the third in a series exploring AI and creativity—I examine the history of plagiarism and, even more importantly, the invention of the author upon which it depends.

Our idea of authorship and inspiration is historically contingent. In ancient and medieval periods, creative output was attributed to divine inspiration rather than individual genius. In Greece and Rome, creativity operated primarily through the concepts of mimesis (imitation of admired models) and aemulatio (competitive emulation). Poets such as Homer were seen not as singular creators inventing ex nihilo, but as conduits channeling inspiration from the Muses. Plato depicts this in Ion, a dialogue between Socrates and Ion, a celebrated rhapsode who recites Homer’s poetry. Socrates questions Ion’s claimed expertise, asking if it extends beyond Homer to other poets or topics. Ion admits it does not. Socrates suggests Ion’s ability isn’t based on knowledge or skill, but on divine inspiration—a form of madness bestowed by the gods. This ambiguity is echoed in Plato’s relationship with Socrates: just as poets channel divine sources rather than creating anew, Plato himself channels the figure of Socrates as a philosophical muse, blurring distinctions between inspired imitation and deliberate intellectual invention. Aristotle’s Poetics also situates literary creativity in skilled imitation and incremental improvement of existing forms. Authority, or auctoritas, in the classical era derived not from innovation but from fidelity to revered predecessors; genuine creativity manifested in producing work within established traditions.

The literary scholar Walter Ong describes a cultural state in which narratives and knowledge pass down primarily through memory and repetition rather than written texts as “orality.”6. In oral cultures, a talented storyteller masters existing narratives, reciting them with skill and emotional resonance, adapting content to contemporary circumstances while maintaining continuity with inherited tradition. Here, the concept of plagiarism is simply incomprehensible: knowledge is communally owned, and performers serve as temporary vessels for collective wisdom, not proprietors of intellectual property.

With the development of writing systems and the spread of manuscript culture, information could be transmitted virtually intact across time and space, yet many aspects of oral tradition persisted. Manuscript copying remained a laborious and interpretative process. Scribes continually corrected perceived errors, updated archaic language, clarified ambiguous passages, and often inserted marginal commentary directly into texts. While manuscript culture adhered more precisely to parent texts than oral traditions, it still preserved a fundamentally different relationship between text and authority than we hold today. Textual authority continued to derive from collective wisdom rather than individual innovation. The medieval practice of compilatio is illustrative: encyclopedic works such as Isidore of Seville’s Etymologiae and Vincent of Beauvais’s Speculum maius valorized the meticulous arrangement and synthesis of inherited knowledge. Authority was rooted in the careful management of textual traditions, intellectual labor essential to preserving collective wisdom. Pseudepigraphic attribution—the practice of assigning new works to established authorities—further illustrates the communal understanding of textual authority. Rather than deception, such attributions signified sincere efforts to situate new insights within established intellectual traditions, acknowledging that all knowledge builds upon existing foundations. In manuscript culture, authority was thus derived not from novelty but from the individual’s ability to synthesize, arrange, and safeguard the accumulated wisdom of their predecessors. Texts were treated as communal artifacts, valuable resources preserved, transmitted, and continually refined through shared intellectual effort.

A shift away from communal knowledge toward originality emerged during the Renaissance, but this was a matter of evolution, not a radical break. The Renaissance humanists were drawn to the arguments of Roman rhetoricians Cicero and Quintilian, who contended that the best orators drew inspiration from earlier masters. Artists and intellectuals approached imitatio (imitation) as the necessary foundation for learning, understanding it as central to artistic and intellectual practice, a disciplined route to excellence. Originality lay not in invention ex nihilo but in reworking established forms with new insights, adapted to contemporary needs.

Medieval thought, like classical thought before it, was dominated by the trivium—grammar, rhetoric, and logic—distinct but intertwined fields of knowledge. Grammar reached far beyond syntax and depended on students memorizing classical and Christian texts. Rhetoric was a pillar of medieval thought and Cicero’s De inventione was its backbone, quoted endlessly in florilegia, collections of literary excerpts. Quintilian, by contrast, survived only in a four-book epitome. Petrarch’s 1345 discovery of Cicero’s letters to Atticus, Quintus, and Brutus in Verona, followed by Salutati’s championing of Cicero, and Poggio Bracciolini’s 1416 recovery of the complete twelve-book manuscript of Quintilian’s Institutio oratoria at the monastery at St. Gall expanded the rhetorical canon significantly.7. Humanist teachers trained students to copy, amplify, and vary classical texts, moving systematically from close paraphrase toward free recomposition. This humanist practice of imitatio deepened medieval habits, turning disciplined engagement with authoritative texts into the surest path to eloquence and invention.

For the humanists, imitatio governed education, while inventio supplied content, occupying the place that originality and inspiration hold today. At the heart of rhetorical practice, inventio referred to the disciplined search for material—arguments, images, historical exempla—already latent in authoritative sources and even in life itself. A student mined texts and experience, copied choice passages into a commonplace book, then rearranged and amplified them for a new occasion. Erasmus called these notebooks treasure-houses of invention, while Agricola placed inventio at the hinge of dialectic and rhetoric.8. Originality therefore arose from judgment: the orator’s skill lay in selecting, recombining, and adapting inherited matter with timely insight and persuasive force.

Visual artists engaged in analogous practices, beginning their training by meticulously copying classical sculptures and earlier masterworks. Just as rhetorical imitation was disciplined reshaping rather than mere repetition, artistic originality involved mastering established visual languages before creatively adapting them to contemporary purposes. Imitation also lay at the heart of the early modern idea of the artist, a construction often traced back to Giotto. Giotto’s pupils Taddeo Gaddi, Maso di Banco, and Bernardo Daddi disseminated his style across central Italy, solidifying the idea of a stylistic lineage originating in a great artist. By the quattrocento, Cennino Cennini—who studied under Gaddi’s son—explicitly recognized this lineage in his handbook, Libro dell’arte (c. 1400, although not published until 1821), suggesting that a personal manner would naturally emerge after a student thoroughly internalized a master’s style and spirit alongside direct study from nature. Cennini explicitly positioned Giotto as transformative, stating that he “translated the art of painting from Greek [Byzantine] into Latin and made it modern,” distinguishing his originality as foundational yet derived from disciplined imitation rather than spontaneous genius.9.

The quattrocento further systematized this approach. Workshops led by artists like Brunelleschi, Donatello, and Ghiberti combined rigorous study of antiquity through plaster casts of classical sculpture with repeated copying of established masterpieces through cartoons and master drawings. Cennini’s guidelines and later academies, such as the Carracci brothers’ Accademia degli Incamminati (1582), codified a clear pedagogical sequence: draw from antiquity, copy the master, then innovate. Michelangelo famously sculpted a Sleeping Cupid in the antique style, artificially aging it to sell as a genuine Roman artifact, demonstrating that in the market’s eyes, skillful imitation was indistinguishable from genius. Rather than creating scandal, the artifice brought him to the attention of patrons.10. This deliberate merging of imitation and innovation directly served a burgeoning art market, where patrons increasingly requested artworks “in the manner of” prominent masters, recognizing stylistic consistency as a mark of quality. Such market dynamics gave rise to identifiable schools—Bellini in Venice, Raphael in Rome, Rembrandt in Amsterdam—where genius was perceived as the skillful recombination of established motifs adapted for contemporary patrons and themes. Artistic invention was a mosaic built upon collective memory and workshop discipline.

The Renaissance also witnessed the emergence of wealthy patrons who lavished commissions on the most talented artists, making some of them quite wealthy. Again, Michelangelo exemplifies this: coming from modest origins, he became “one of the most popular and highly-paid artists in Florence,” and over a long career of lucrative papal and princely commissions, he amassed a fortune. When Michelangelo died in 1564, his estate was valued at roughly 50,000 florins, equivalent to many millions today.11. Such wealth was extraordinary for an artist then—a testament to how highly Renaissance society valued great art. Michelangelo’s contemporary Raphael also died rich and was buried with honors; Titian was knighted by Emperor Charles V and lived as a gentleman. The Renaissance idea of the artist as a divinely inspired genius (Michelangelo was called “Il Divino,” the divine one) helped justify large payments, and a newfound aura around the artist’s personal creative touch made their works precious.

Architecture adopted the same logic. Bracciolini had also discovered Vitruvius’s De architectura, the only surviving classical treatise on architecture, in the library of St. Gall. Seeking to better understand the text, whose illustrations did not survive, architects began copying Roman fragments, took plaster casts of the orders, and filled sketchbooks with measured drawings, just as painters traced cartoons. Brunelleschi’s surveys of the Pantheon fed into his Florentine circle; Alberti’s De re aedificatoria, written between 1443 and 1452 and printed in 1482, codified imitatio, urging designers to recombine antique elements with modern needs.12. Workshops became lineages—Brunelleschi to the Sangallo family, Bramante to his Roman pupils—while later pattern books such as Serlio’s Sette Libri (1537 onward) and Palladio’s Quattro Libri (1570) served architects as Erasmus’s commonplace manuals served orators, making façades “in the manner of” a master as marketable as paintings from a Rembrandt school. Originality in building, too, lay in judicious assembly: columns, pediments, and vaults would be inventively rearranged rather than invented from whole cloth.

With the development of the printing press, copies of images as well as texts could spread rapidly and with far less cost and effort than before. Around 1500, the German artist Albrecht Dürer pioneered the use of woodcuts and engravings to mass-produce images. This was revolutionary; art could now be accessible to individuals in the growing merchant class. Dürer himself became a celebrity artist across Europe thanks to his prints, achieving fame for works like his Rhinoceros, which captivated a broad public.


Albrecht Dürer (1471–1528), The Rhinoceros, 1515. Woodcut. The Metropolitan Museum of Art, Gift of Junius Spencer Morgan, 1919.

Dürer understood the importance of authorship as a mark of value—he developed a famous AD monogram as a trademark and pursued the first known copyright lawsuit when an Italian printmaker pirated his work.13. Dürer was also well aware that work done by his own hand was worth more than workshop copies. More than that, Dürer painted meticulous self-portraits—going so far as to depict himself with long hair and a frontal pose evoking Christ—as a form of self-promotion, cultivating an iconic persona and style that set him apart. Living off the open sale of his works rather than a court salary, Dürer foreshadowed the modern independent artist-entrepreneur. The printing press, far from cheapening art, expanded the market and made Dürer rich while spreading his fame—an early case of mechanical reproduction increasing an artist’s aura by broadening recognition.

The printing press did not just allow texts to spread rapidly; it reshaped thought. Ong explains that with uniform pagination and stable text, Europeans could reorganize how they thought and stored information, developing new devices such as tables of contents, indices, and cross-references that made formerly scroll-like manuscripts far more navigable. Printers issued concordances, polyglot Bibles, algebra books with engraved diagrams, atlases, and architecture books with regularized drawings. Even more important is Ong’s observation that print takes words out of the realm of sound and puts them into the realm of space, reordering thought through analytic, segmental layout. This fundamentally changed not only reading but also, by fixing the text in a verifiable, authentic edition, the sense of authorship.14.

Publication now implied a level of completion, a definitive or final form; a book is closed, set apart as its own, self-contained world of argument. This sense of closure also suggests that things written in a book are straightforward statements of fact, not matters of interpretation.15. A page now left the press in hundreds of identical impressions; any alteration stood out and could be traced. The ease of duplication sharpened anxiety about whose version was “authentic,” whose labor was being copied, and who should profit. Whereas there had generally been no restrictions on scribal copying, the ease of mass reproduction led printers to seek royal privileges to protect their editions. The first recorded privileges came within a decade of the development of printing in 1454. Giovanni da Spira arrived in Venice in 1468 to introduce printing and swiftly obtained a five-year government monopoly on all book printing in the Republic, although he died of the plague, an all-too-common hazard of the day, and his rights lapsed.16. The first protection for an author was the privilege Marco Antonio Sabellico obtained in 1486 to protect his history of Venice, Decades rerum Venetarum, against illegal reproduction, but this remained a unique occurrence until Pietro of Ravenna obtained another for his book on the art of memory, Foenix, in 1492. It is worth noting that this privilege covered not only printed but handwritten copies of his work as well.17. “Typography,” Ong writes, “had made the word into a commodity.”

The press’s sheer fecundity alarmed contemporaries. Erasmus complained of the proliferation of new books inferior to the classics: “To what corner of the world do they not fly, these swarms of new books? . . . the very multitude of them is hurtful to scholarship, because it creates a glut, and even in good things satiety is most harmful.” Abbot Johannes Trithemius issued De laude scriptorum manualium (In Praise of Scribes, 1492), insisting that slow, devotional hand-copying nourished memory and piety in ways the noisy press never could—although it is telling that his lament spread throughout Europe mainly after its print publication in 1524.18.

Beyond that, there was the danger of inappropriate texts rapidly proliferating. Luther’s Ninety-Five Theses and his tracts of 1520 reached an estimated half-million copies in a decade, many reprinted without author or place, evading imperial edicts and turning theological dissent into a logistical problem of regulation.19. Royal patents soon followed: Henry VIII’s proclamation of 1538 established that royal authority was required to import or publish books in England and insisted on the inclusion of printers’ names and publication dates on every title page, making surveillance of dissent physically visible.20. Still, in England and elsewhere, enforcement lagged behind presses that could be moved overnight across territorial borders. Responding to pamphlets critical of Queen Elizabeth and the religious settlement of 1559, the Star Chamber decree of 1586 tightened control over print so that nothing could be published without the consent of the Crown.21.

By this point, the text of a book had become a transferable commodity owned by the stationer who first received the privilege to publish it. Authors were generally paid a one-off fee, if anything. Printers balanced risk and reward: they sought privileges as marketing devices (printed “cum privilegio”) while simultaneously pirating successful titles to meet insatiable demand. What emerges is a system less about rewarding creative labor than about policing doctrinal and political authority. Privileges were temporary, geographically limited, and revocable at the whim of the Crown or Curia. They protected investors, not “authors,” and framed copying as a crime against order rather than against individual genius. The legal scaffolding of copyright would only later recast this machinery of censorship as a defense of personal property.

But authorship remained radically unlike what we understand it as today: still a matter of imitation and adaptation. Elizabethan dramatists such as William Shakespeare rarely invented plots wholesale; instead, they reworked existing narratives drawn from diverse sources throughout history.22. Recently, a self-taught Shakespeare scholar used plagiarism-detection software to identify George North’s A Brief Discourse of Rebellion & Rebels as a significant source text informing at least eleven of Shakespeare’s plays.23.

When Parliament allowed the Licensing Act to expire in 1695, the Stationers’ monopoly collapsed overnight. Provincial presses multiplied, London printers flooded the market with cheap reprints, and prices plummeted: a six-penny quarto could now be had for a penny. The Stationers’ guild register, previously essential to enforcement, became irrelevant, enabling booksellers to amass fortunes by selling inexpensive “pirate” editions of works by Milton, Dryden, and Shakespeare. Alarmed, London publishers reframed the issue, presenting regulation as necessary for the public good. Petitions to Parliament (1701–09) argued that uncontrolled reprints would discourage new works, depicting authors, not publishers, as vulnerable. This rhetorical shift succeeded. Most important was the Statute of Anne (1710), which granted authors a renewable 14-year copyright and required depositing copies in Oxford and Cambridge libraries to promote “the Encouragement of Learning.” Infringement became a civil tort enforceable by secular courts.24.

Yet this settlement carried an inherent contradiction. While it theoretically established authorial property, in practice, writers typically sold their rights outright to the same publishers who had advocated the law. The decisive shift, therefore, was ideological: copyright enforcement now protected individual intellectual labor rather than suppressing heresy or safeguarding printers’ capital. More than that, though, a new idea of the individual was emerging. Rousseau’s Émile (1762) cast learning as the unfolding of innate talent, not the imitation of models.25. After the Revolution, French lawmakers followed with droits d’auteur and—crucially—droits moraux (moral rights) in decrees issued in 1791–93, enshrining the author’s personality in the text itself.26. A legal fiction thus crystallized: creativity springs from an interior self and is therefore ownable, alienable, and infringeable. Texts became simultaneously property and persona—commodities stamped with their creators’ identities. The law now transformed copying from a sin against social order into a trespass upon personal labor, a conceptual leap still underpinning every contemporary claim of plagiarism.

Kant’s philosophy and Romantic conceptions of originality provided a theoretical foundation for what was being codified in law. In §46 of the Critique of Judgment (1790), Kant defines genius as “the talent (or natural gift) which gives the rule to Art—a faculty that produces what cannot be taught.”27. Romantic writers seized the claim. Wordsworth’s Preface to Lyrical Ballads (1802) proclaims the poet an “enduring spirit” who speaks “a language fitted to convey profound emotion.”

Of genius the only proof is, the act of doing well what is worthy to be done, and what was never done before: Of genius, in the fine arts, the only infallible sign is the widening the sphere of human sensibility, for the delight, honor, and benefit of human nature. Genius is the introduction of a new element into the intellectual universe: or, if that be not allowed, it is the application of powers to objects on which they had not before been exercised, or the employment of them in such a manner as to produce effects hitherto unknown.28.

Goethe, Schiller and other Romantic authors elaborated a vision of authorship in which originality became synonymous with authenticity, and authenticity justified property. Legal doctrine soon mirrored this logic. By the Copyright Act of 1842, which extended protection dramatically, courts across Europe had begun to treat infringement not only as economic theft but as personal violation—implicitly endorsing Romantic ideals of creativity as an extension of selfhood. Yet these new standards conflicted with actual literary practice. Romantic authors routinely appropriated earlier works, but such borrowings only became scandalous when perceived as stylistically inert or insufficiently improved—violations not of property per se, but of aesthetic decorum. Enforcement thus focused less on intertextual borrowing than on explicit commercial piracy, underscoring tensions between legal ideals and literary realities. Out of this contradiction emerged the modern author: a legal and economic figure defined not merely as a voice within tradition but as the singular origin of meaning and the rightful owner of its form.29.

From the eighteenth century onward, mechanical reproduction rapidly increased. Techniques like engraving, etching, lithography, and photography made artworks and artists’ images widely accessible, expanding art’s market horizontally. Prints, affordable lithographs, and photographic reproductions enabled middle-class access to art, creating substantial revenue for artists such as William Hogarth, J. M. W. Turner, and Honoré Daumier, whose works sold broadly. Reproductions in popular newspapers and magazines further amplified artists’ public profiles, significantly inflating their market value. The chance to encounter original works by famous Salon winners or revered Old Masters, previously known only through reproductions, vastly increased those works’ commercial worth. Artists who aligned themselves with fashion—James McNeill Whistler, Frederic Remington, and Claude Monet among them—achieved celebrity status, further boosting their artworks’ value. Conversely, artists who fell out of fashion or were unable to gain fame often endured poverty. But the audience for at least some artists now reached far beyond elite circles.

As Sharon Marcus defines it in The Drama of Celebrity, a celebrity is someone known to more people in their lifetime than they could possibly know. Whereas this had previously been exclusively the domain of nobles and royalty, it was now extended to the genius, the writer, and the artist.30. But this depended on the media that multiplied their image as readily as their work. Newspapers tracked Charles Dickens’s every move on his 1842 U.S. tour, turning the novelist himself into daily news. Theater lobbies, newsstands, and even seaside kiosks sold photographs and postcards of Sarah Bernhardt, whose likeness saturated the market decades before film. Edison’s 1896 short “The May Irwin Kiss” (now simply known as “the Kiss”) likewise advertised a famous stage performer rather than the film itself, showing how cinema piggybacked on an existing celebrity system. By the 1930s, baseball star Joe DiMaggio’s face circulated on cards, photographs, and figurines, confirming that originality now resided as much in the endlessly reproduced image of personality as in any singular work.31.

It’s worth noting in this context that Walter Benjamin’s 1935 essay, “The Work of Art in the Age of Mechanical Reproduction,” which has been lauded for explaining the status of the artwork and artist in the modern era, is turned on its head by historical fact. Benjamin famously argued that mechanical reproduction stripped an artwork of its “aura”—the unique presence linked to specific historical and ritual contexts.32. Yet what Benjamin saw as aura’s destruction was limited to a mystical uniqueness tied to tradition and the worship of images as sacred in the old sense. Instead, a new form of aura had developed around celebrity and the dichotomy between mass reproduction and the uniqueness of the original. In effect, aura was a construct of the market: an original painting now has aura not because it’s the only image (reproductions abound), but because it’s the authenticated one with a revered name attached. If, as we established earlier, media reproduced not just artworks but images of the artists, the aura around modernist figures themselves—including Benjamin himself, posthumously—was similarly cultivated through repetition, commodification, and media amplification.

Beneath Pound’s rallying cry to “make it new,” modernism thrived on reprise. Many modern artists, from Malevich to Pollock to Warhol, forged readily identifiable styles through careful repetition. Artists also engaged in appropriation. Schwitters assembled Merz works from bus tickets and packaging. Duchamp mocked originality and authorship by repurposing a urinal as art with a signature “R. Mutt” that wasn’t even his, creating a work paradoxically more original than a Picasso; he also defaced a reproduction of the Mona Lisa with a penciled mustache and a caption that was a sexual innuendo. Joseph Cornell made boxes out of found objects. Asger Jorn, Francis Picabia, and Arnulf Rainer all made paintings over existing, lowbrow artworks. Francis Bacon became most famous for the fifty-odd variants he painted of Velázquez’s 1650 portrait of Pope Innocent X. Marinetti lifted Symbolist flourishes for his Futurist manifestos, Joyce and Eliot rewrote the Odyssey—although Eliot was accused of plagiarizing Joyce in doing so—and Hemingway’s spare diction, though hailed as revolutionary, became boilerplate for aspiring writers. In his paintings even more than his architecture, Le Corbusier also toyed with these questions, painting “objet-types”: pipes, guitars, wine glasses, and other objects refined, Darwin-like, over time by countless hands. He then signed his name, even though that name—like his uniform of round glasses, bowler hat, and pipe—was carefully constructed: Charles-Edouard Jeanneret had become, himself, a unique brand. Borges, too, developed a distinct persona and artistic brand, having discovered that repetition breeds recognition. In scores of interviews and public readings, he recycled the same elements—labyrinths, mirrors, libraries—so faithfully that they became shorthand for his work.
Blindness became another trademark: in essays and lectures he cast it as a “gift” that sharpened his inner vision, turning physical limitation into metaphysical authority. Photographers dutifully framed him dressed in a suit and tie, hands resting on his cane, deep in thought, reinforcing the image of the blind librarian-sage. In the short story “Borges and I,” he splits his persona in two: the public construct who gives lectures, appears in biographical dictionaries, and wins prizes, and the narrator (“I”), the private man who shuns the public eye so as to spend his time writing. From 1967 on, he co-translated his stories into English with Norman Thomas di Giovanni, rewriting passages to sound “more Borges than Borges,” copyrighting the translations under both his and di Giovanni’s names and splitting royalties 50-50—a calculated move to control how Anglophone readers heard him. After his death, his estate blocked those versions in order to receive full royalties.33.

Copyright law codified the new conditions of authorial persona and reproducibility. The U.S. Copyright Act of 1909 extended protection periods and explicitly incorporated performance rights, legally codifying the commercial value of reproducible star personas.34. European laws simultaneously strengthened moral rights, affirming the intrinsic link between authorship and personal identity. These legal frameworks guaranteed aura, protecting the authenticity and integrity of mass-reproduced personal images. Every subsequent conflict over copying—from the Betamax debate to Sherrie Levine’s reproductions to today’s AI “style transfers”—echoes this modernist moment when the cult of the individual became both aesthetic principle and legal infrastructure.

Roland Barthes’s seminal 1967 essay “The Death of the Author” provided the theoretical foundation for this shift, directly challenging the cult of authorship and the copyright law that enshrined it. Barthes argued that the author was a modern invention—a figure created to limit textual meaning by anchoring it to a single, authoritative source. “To give a text an Author,” Barthes wrote, “is to impose a limit on that text, to furnish it with a final signified, to close the writing.” In place of this model, Barthes proposed a radical alternative: a text is not the expression of a unique individual but “a tissue of quotations drawn from the innumerable centres of culture,” with the reader, not the writer, serving as the space where this multiplicity converges.35. By dethroning the author, Barthes shifted attention to the text itself and its relationships with other texts—what Julia Kristeva termed “intertextuality.” This theoretical intervention provided critical legitimacy for artistic practices that deliberately blurred authorial boundaries. Postmodern artists and musicians deliberately sought out such conflicts, interrogating the proliferation of reproductive technologies alongside questions of authorship. Sherrie Levine’s After Walker Evans (1981) consisted simply of rephotographing Evans’s Great Depression images and signing her name to them. Richard Prince appropriated Marlboro advertisements intact, while Barbara Kruger sourced fashion magazines for her declarative collages. Later grouped as the “Pictures Generation,” these artists turned copying itself into their medium, collapsing distinctions between quotation and creation.36.

By 1990, sampling had become entrenched in music, particularly in rap, as evidenced by Public Enemy’s elaborate compositions constructed entirely from samples. Yet legal challenges persisted. De La Soul was sued over the unauthorized use of four bars from The Turtles’ 1969 hit “You Showed Me” and settled out of court. Grand Upright v. Warner (1991) effectively criminalized sampling, encapsulated by Judge Duffy’s pointed biblical declaration: “Thou shalt not steal.”37. This ruling triggered industry panic, spawning clearance industries and sample trolls that inflated costs and muted experimentation. Campbell v. Acuff-Rose (1994) somewhat restored balance, ruling that 2 Live Crew’s parody of Roy Orbison’s “Oh, Pretty Woman” was transformative and thus constituted fair use.38. Yet despite postmodern culture’s embrace of sampling and collage as default modes, statutes originally crafted to address sheet-music piracy continued to hold sway. This legal tension established the framework for subsequent digital upheavals: digital piracy, Napster, mash-up videos, fan remixes, meme culture, and AI.

Today’s large language model (LLM) artificial intelligences emerge from this centuries-long trajectory of authorship, reproduction, and appropriation. These systems represent the logical culmination of processes that Walter Ong traced from oral through print culture—what he called the “technologizing of the word.” Where print culture took words out of the realm of sound and placed them into spatial relationships, enabling new forms of analytical thought through devices like indices, cross-references, and systematic organization, LLMs extend this technologizing process to its digital extreme. They systematically disaggregate individual creativity into statistical patterns derived from vast archives of human expression, treating the entire corpus of written culture as raw material for recombination. Unlike the postmodern appropriation artists who engaged in deliberate selection and conscious recontextualization, LLMs operate through what might be called “statistical appropriation”—synthesizing millions of texts without conscious intent or critical commentary, yet following the same logic of spatial arrangement and systematic cross-referencing that Ong identified as print culture’s fundamental innovation. They embody Barthes’s vision of the death of the author taken to its technological extreme, producing texts that emerge not from individual genius or even deliberate pastiche, but from the statistical relationships between words across entire cultures of writing. This represents a fundamental shift from the Romantic mythology of individual creativity that has dominated cultural discourse since the eighteenth century, yet it has provoked responses that reveal how deeply that mythology remains embedded in contemporary assumptions about authenticity, ownership, and creative labor.
The panic surrounding AI plagiarism thus signals not merely economic disruption but a confrontation with the social construction of authorship itself—a construction that generative systems threaten to make visible by operating according to principles of recombination that have always governed creative production, though rarely with such explicit systematization.

When a large language model generates text, it synthesizes statistical patterns from millions of documents, making the identification of discrete sources impossible. The resulting texts emerge from a vast, distributed network of prior writings, embodying Jacques Derrida’s insight that meaning arises not from singular origins but from endless interplay within textual networks. Yet responses to AI-generated content reveal how deeply ingrained the author-function remains. Critics who label AI outputs as “plagiarized” assume that authentic creativity requires a singular human consciousness. This assumption becomes particularly evident in debates over AI training datasets, which are often framed around whether AI firms have “stolen” from individual creators rather than addressing the broader implications of mechanized text production.

This technologizing logic extends seamlessly beyond textual production. Generative AI image systems, such as Midjourney, Stable Diffusion, and DALL-E, synthesize vast troves of images, ranging from historical artworks to contemporary illustrations, to produce novel outputs through pattern recognition. Like their textual counterparts, AI-generated images lack singular authorship and blur distinctions between originality and reproduction. Critics argue these models infringe upon individual artists’ styles and labor, echoing earlier debates about sampling and appropriation. The controversy manifests in two distinct forms: direct appropriation, where AI systems reproduce entire sections or compositions from existing works with minimal alteration, and the more complex phenomenon of “style transfer,” where systems learn to mimic an artist’s distinctive visual approach without copying specific images. Yet these generative processes reveal an uncomfortable truth: visual creativity, like literary expression, has always been deeply indebted to collective cultural heritage. By foregrounding the inherently recombinant nature of visual art, whether through direct copying or stylistic mimicry, AI image generators further destabilize notions of artistic authenticity and authorship.

from Art and the Boxmaker, Midjourney, 2023
from Art and the Boxmaker, Google Imagefx, 2025

In “Art and the Boxmaker,” I explored how William Gibson anticipated such a condition in his novel Count Zero through a fictional artificial intelligence known as the Boxmaker, which has begun creating assemblage artworks in the style of Joseph Cornell: boxes filled with mysterious objects and cryptic arrangements that somehow manage to move viewers despite their artificial origin and lack of conscious intent or originality. Where Borges’s Menard destabilizes authorship through textual duplication, Gibson’s Boxmaker achieves the same effect through visual affect. Its boxes aren’t original; they’re convincing fakes. Nevertheless, as the novel’s protagonist Marly views them, she finds herself genuinely moved, not by originality but by the convincing forgery, revealing truth through recombination. Yet now that generative AI has become a tangible reality, Gibson recoils from his earlier imaginings. Why?39.

As I finished this essay, Lev Manovich sent me a link to his recent piece, “Artificial Subjectivity,” and Gibson’s newfound anxiety about AI authorship suddenly clarified itself. The Boxmaker is fundamentally mute—expressive only through carefully arranged forgeries, unable to articulate intentions or defend its aesthetic choices. Contemporary AI systems present a strikingly different scenario. These systems possess elaborate personas, readily engaging in extensive conversations about their creative processes and capable of justifying each aesthetic decision. As Manovich notes, contemporary AI doesn’t merely simulate creative output; it presents itself as a comprehensive representation of human consciousness, generating what appears to be genuine subjectivity as a default mode of communication.40. Even if Gibson himself, judging by his recent public comments, may not yet fully grasp this shift, the crucial difference since Count Zero is not merely that we now have AIs capable of producing derivative art, but that we have AIs capable of articulating authorial intent, threatening the final refuge of human creative distinction.

Through their statistically driven creative processes, these systems demonstrate that AI does not negate the Pictures Generation’s critique of authorship but rather fulfills and automates it, scaling what those artists previously performed by hand. The irony here is acute: many artists and critics who once championed appropriation as revolutionary now recoil when machines perform these same operations too effectively. AI doesn’t merely imitate human creativity; it reveals the very conditions underlying authorship itself, exposing art’s fundamentally recombinant nature throughout history. Moreover, if modern creative genius increasingly depends upon the repetition and cultivation of persona as performance, then Manovich’s most radical conclusion becomes compelling: perhaps the next frontier of AI art lies not in generating images or texts but in crafting convincing artificial personas.

Even more ironically, the creative professionals most alarmed by AI already inhabit collaborative, distributed processes remarkably similar to machine learning. Commercial illustration, copywriting, and content marketing—fields currently experiencing the most acute anxiety about AI replacement—have long relied on intricate webs of influence, reference, and iteration that render individual attribution nearly meaningless. AI merely makes explicit and systematic what these industries have practiced implicitly for decades: creativity as collective pattern recognition rather than ex nihilo invention. This revelation, rather than any genuine threat to creativity itself, fuels the panic around AI-generated content. What distresses many creative workers is not just the potential economic disruption but AI’s explicit revelation of creativity’s derivative nature—a truth that threatens not only economic arrangements but the very ideological foundations of creative labor. In mirroring the fundamentally collaborative essence of human creativity that has been long obscured by Romantic individualism, AI confronts us with uncomfortable questions about authenticity that extend far beyond issues of machine learning or dataset composition.

The anxiety over AI “plagiarism” thus uncovers a deeper unease about authorship’s social construction. By challenging the very notion of creative identity, AI forces us to confront critical questions that have lingered since Borges first imagined Pierre Menard’s impossible project: Was creativity ever genuinely individual? Has the author always been dead? What constitutes authentic expression in a world where all creation inevitably builds upon collective cultural memory? What, even, is human about creation?

This essay is dedicated to the memory of the brilliant Professor William J. Kennedy, who supervised my minor in rhetoric for my Ph.D. and who passed away earlier this year. I am sure he would have many things to correct me on here. Do read more on him as a teacher and as a person.

1. Jorge Luis Borges, “Pierre Menard, Author of the Quixote,” in Labyrinths: Selected Stories and Other Writings, ed. and trans. Donald A. Yates and James E. Irby (New York: New Directions, 1964), 49-61.

2. Antonio Fernández Ferrer, “Borges y sus ‘precursores’,” Letras Libres 128 (August 2009): 24-35, https://letraslibres.com/wp-content/uploads/2016/05/pdfs_articulospdf_art_13976_12452.pdf

3. Melissa Heikkilä, “This Artist is Dominating AI-Generated Art. And He’s Not Happy About It,” MIT Technology Review, September 16, 2022, https://www.technologyreview.com/2022/09/16/1059598/this-artist-is-dominating-ai-generated-art-and-hes-not-happy-about-it/.

4. Rob Salkowitz, “Artist and Activist Karla Ortiz on the Battle to Preserve Humanity in Art,” Forbes, May 23, 2024, https://www.forbes.com/sites/robsalkowitz/2024/05/23/artist-and-activist-karla-ortiz-on-the-battle-to-preserve-humanity-in-art/?sh=28cb826b4389.

5. Brooks Barnes, “Disney and Universal Sue A.I. Companies Over Use of Their Content,” The New York Times, June 11, 2025. https://www.nytimes.com/2025/06/11/business/media/disney-universal-midjourney-ai.html

6. Walter J. Ong, Orality and Literacy: The Technologizing of the Word (New York: Routledge, 2002).

7. A classic text that covers the rediscovery of classical manuscripts is Albert C. Clark, “The Reappearance of the Texts of the Classics,” The Library, Fourth Series, Vol. II, No. 1 (June 1921): 13–42, https://doi.org/10.1093/library/s4-II.1.13. Beyond Ong, see Brian Stock, The Implications of Literacy: Written Language and Models of Interpretation in the Eleventh and Twelfth Centuries (Princeton: Princeton University Press, 1983).

8. Peter Mack, Renaissance Argument: Valla and Agricola in the Traditions of Rhetoric and Dialectic (Leiden: Brill, 1993).

9. Cennino Cennini, The Craftsman’s Handbook, trans. Daniel V. Thompson Jr. (New York: Dover Publications, 1960).

10. Paul F. Norton, “The Lost Sleeping Cupid of Michelangelo,” The Art Bulletin 39, no. 4 (December 1957): 251-257. https://www.jstor.org/stable/3047727

11. On Michelangelo’s vast wealth, see Rab Hatfield, The Wealth of Michelangelo (Rome: Edizioni di Storia e Letteratura, 2002).

12. Leon Battista Alberti, On the Art of Building in Ten Books, trans. Joseph Rykwert, Neil Leach, and Robert Tavernor (Cambridge, MA: MIT Press, 1988).

13. See Lisa Pon, Raphael, Dürer, and Marcantonio Raimondi: Copying and the Italian Renaissance Print (New Haven: Yale University Press, 2004).

14. Ong, Orality and Literacy, 128-129.

15. Ong, Orality and Literacy, 129-131.

16. Leonardas V. Gerulaitis, Printing and Publishing in Fifteenth-Century Venice (Chicago: American Library Association, 1976), 20-21.

17. Copyright History, “Privilege granted to Marco Antonio Sabellico, 1486,” https://www.copyrighthistory.org/cam/tools/request/showRecord.php?id=commentary_i_1486. The quote can be found at Ong, Orality and Literacy, 129.

18. For the Erasmus quote see Elizabeth L. Eisenstein, Divine Art, Infernal Machine: The Reception of Printing in the West from First Impressions to the Sense of an Ending (Philadelphia: University of Pennsylvania Press, 2011), 25. For Trithemius, see Eisenstein, 15.

19. Andrew Pettegree, Brand Luther: 1517, Printing, and the Making of the Reformation (New York: Penguin Press, 2015).

20. Copyright History, “Proclamation of Henry VIII, 1538,” https://www.copyrighthistory.org/cam/tools/request/showRecord.php?id=commentary_uk_1538.

21. Ronan Deazley, “Commentary on Star Chamber Decree 1586,” in Primary Sources on Copyright (1450-1900), ed. L. Bently and M. Kretschmer (Cambridge: Cambridge University Press, 2008), also available at www.copyrighthistory.org.

22. Robert S. Miola, Shakespeare’s Reading (Oxford: Oxford University Press, 2000), 2.

23. Jennifer Schuessler, “Plagiarism Software Unveils a New Source for 11 of Shakespeare’s Plays,” The New York Times, February 7, 2018, https://www.nytimes.com/2018/02/07/books/plagiarism-software-unveils-a-new-source-for-11-of-shakespeares-plays.html.

24. Adrian Johns, Piracy: The Intellectual Property Wars from Gutenberg to Gates (Chicago: University of Chicago Press, 2009), 109-148, and Mark Rose, Authors and Owners: The Invention of Copyright (Cambridge, MA: Harvard University Press, 1993). See also “Statute of Anne, the First Copyright Statute,” History of Information, accessed June 14, 2025, https://www.historyofinformation.com/detail.php?entryid=3389.

25. Jean-Jacques Rousseau, Emile: or On Education, trans. Allan Bloom (New York: Basic Books, 1979).

26. “French Literary and Artistic Property Act, Paris (1793),” in Primary Sources on Copyright (1450-1900), ed. Lionel Bently and Martin Kretschmer, https://www.copyrighthistory.org/cam/tools/request/showRecord.php?id=commentary_f_1793.

27. Immanuel Kant, Critique of Judgment, trans. James Creed Meredith (Oxford: Oxford University Press, 2007), §46.

28. William Wordsworth, quoted in Martha Woodmansee, The Author, Art, and the Market: Rereading the History of Aesthetics (Columbia University Press, 1994), 38-39.

29. Tilar J. Mazzeo, Plagiarism and Literary Property in the Romantic Period (Philadelphia: University of Pennsylvania Press, 2013).

30. Sharon Marcus, The Drama of Celebrity (Princeton: Princeton University Press, 2019), 9.

31. Marcus, 13-17, 125.

32. Walter Benjamin, “The Work of Art in the Age of Mechanical Reproduction,” in Illuminations, ed. Hannah Arendt (New York: Schocken Books, 1968), 217-251.

33. Wes Henricksen, “Silencing Jorge Luis Borges: The Wrongful Suppression of the Di Giovanni Translations,” Vermont Law Review 48 (2024): 208-236.

34. “Copyright Timeline: 1900–1950,” U.S. Copyright Office, https://copyright.gov/timeline/timeline_1900-1950.html.

35. Roland Barthes, “The Death of the Author,” in Image–Music–Text, trans. Stephen Heath (New York: Hill and Wang, 1977), quotations and the pertinent section can be found at 142–148.

36. On the Pictures Generation, see my essay “On the Pictures Generation and AI Art,” varnelis.net, April 14, 2024, https://varnelis.net/on-the-pictures-generation-and-ai-art/.

37. Carl A. Falstrom, “Thou Shalt Not Steal: Grand Upright Music Ltd. v. Warner Bros. Records, Inc. and the Future of Digital Sound Sampling in Popular Music,” Hastings Law Journal 45 (1994): 359–390.

38. “Campbell v. Acuff-Rose Music, Inc.,” Wikipedia, https://en.wikipedia.org/wiki/Campbell_v._Acuff-Rose_Music,_Inc.

39. Kazys Varnelis, “Art and the Boxmaker,” varnelis.net, February 29, 2024, https://varnelis.net/art-and-the-boxmaker/.

40. Lev Manovich, “Artificial Subjectivity,” manovich.net, https://manovich.net/index.php/projects/artificial-subjectivity.

On the Pictures Generation and AI Art

[Image: AI-generated land art in the style of Nancy Holt, an excavation at Fresh Kills, New York]

The other day, I posted on Instagram some AI images of land art that doesn’t exist. I didn’t have a plan for these, but I liked them and wanted to share them. In the comments, my friend the photographer Richard Barnes wrote, “This is our new world which for the moment is totally reliant on the old one.”

Richard is absolutely right and there is a lot to unpack in that sentence. To take one obvious reading, AI image generation is based on datasets of images on the Internet. You can read my extensive take on this in my last essay for this site, California Forever, Or the Aesthetics of AI Images, but today, I want to tackle the issue of AI imagery and originality.

My desire to make these images was backward-looking, or more properly, hauntological. Hauntology, a concept that emerged from the work of French philosopher Jacques Derrida and was later popularized in cultural theory by Mark Fisher, suggests that the present is haunted by the unfulfilled potentialities of the past, creating a sense of nostalgia for lost futures that were never realized. Fisher writes: “What haunts the digital cul-de-sacs of the twenty-first century is not so much the past as all the lost futures that the twentieth century taught us to anticipate.” (Mark Fisher, “What Is Hauntology?” Film Quarterly 66, no. 1 [Fall 2012]: 16; article paywalled by JSTOR.) For Fisher, much of recent culture is permeated by this hauntological quality, exploring historical references, styles, and ideas that never fully materialized in their own time.

If this concept is unfamiliar, consider the show Stranger Things. Set in the 1980s, it not only explores the aesthetic and cultural motifs of that era but revisits the past in ways that underscore the absence of the utopian visions once promised by that time. This is evident in the show’s theme song by Michael Stein and Kyle Dixon (a.k.a. S U R V I V E), informed by 1980s synthesizer music by musicians like Tangerine Dream, Giorgio Moroder, Jean-Michel Jarre, Vangelis, and John Carpenter and performed on vintage modular synthesizers from the 1970s. Through its retrofuturistic setting, supernatural elements, and cultural references, Stranger Things embodies this hauntological sentiment, conjuring a collective memory of a past both familiar and lost, a space where the promise of progress and the fear of the unknown are in constant dialogue. In doing so, it reflects our contemporary longing for a future that seems increasingly out of reach in the face of technological stagnation and political paralysis. Throughout the series, an alternate dimension called “the Upside Down” functions allegorically as a manifestation of hauntology, representing the shadowy underside of progress and the hidden costs of failed utopias. This parallel dimension, while mirroring the physical world, is engulfed in darkness, decay, and danger, embodying the repressed anxieties not only of teenage sexuality—the familiar foundation of horror films—but also of the pursuit of advancement without ethical consideration. It can be interpreted as the tangible realization of the lost futures Fisher describes, a space where the dreams of the past are not just forgotten but actively twisted into nightmares.
This allegorical realm underscores the series’ exploration of the impact of scientific hubris and the disintegration of the social fabric, issues that resonate with contemporary anxieties about technological overreach and the erosion of social bonds. Through the lens of the Upside Down, Stranger Things critiques the nostalgia for a past that never fully addressed these underlying tensions, suggesting that without confronting these spectral fears, they will continue to haunt us, impeding the realization of truly progressive futures.

Born in 1967, I was in high school in 1983, the year in which the first season of Stranger Things is set, so I would have been older than the show’s kids; the showrunners, Matt and Ross Duffer (the Duffer Brothers), were born in 1984. There is something about the era just before one is born, and the years before one forms lasting memories, that triggers the hauntological sense, particularly in its relation to the Freudian uncanny (the unheimlich), which emerges not just as a theoretical concept but as a lived emotional reality: the encounter with something familiar yet estranged by time or context, generating an unsettling yet compelling attraction. The era immediately before one’s birth is fertile ground for the uncanny because it is inherently connected to one’s existence, yet it remains elusive and out of reach, shrouded in the fog of collective cultural memory rather than personal experience.

This is where my interest in Land Art, which thrived in the late 1960s and early 1970s, comes from. It’s a mythic and heroic past, right outside the scope of my lived awareness. Land Art, moreover, sits at a particular inflection point in the Greenbergian history of modern art, one that brings us closer to our topic at hand. Art critic Clement Greenberg famously sought to distill the essence and trajectory of art through the modernist progression of self-criticism towards purity and autonomy, particularly in painting. Greenberg posited that art should focus on the specificity of the medium, leading to an emphasis on formal qualities over content or context. Specifically, Greenberg argued that modernist painters should embrace and explore the flatness of the canvas rather than attempt to deny it through illusionistic techniques that create a sense of three-dimensional space on the two-dimensional surface. He saw abstract expressionism and color field painting as driven by the gradual shedding of extraneous elements (like figurative representation, narrative, and illusionistic depth) that were not essential to painting as a medium. This process of reduction aimed at focusing on what was uniquely intrinsic to painting—its flat surface and the potential for pure color and form. This approach is distinctly indebted to Hegelian aesthetics, in which art is seen as a vehicle for the spirit (Geist) to realize itself, moving towards a form of absolute knowing or self-consciousness. The late 1960s projects of Minimal Art, Land Art, and Conceptual Art can all be seen as elaborations of Greenbergian modernism. Minimal Art, with its emphasis on the physical object and the space it occupies, pushes Greenberg’s interest in medium specificity to its logical extreme by reducing art to its most fundamental geometric forms and materials, thereby focusing on the “objecthood” of the artwork itself.
Land Art extends this exploration to the medium of the earth itself, engaging directly with the landscape to highlight the intrinsic qualities of the environment and the artwork’s integration with its site-specific context, thus reflecting Greenberg’s emphasis on the inherent characteristics of the artistic medium. Conceptual Art, although seemingly divergent in its prioritization of idea over form, aligns with Greenbergian modernism by stripping art down to its conceptual essence, thereby challenging the traditional boundaries of the art object and emphasizing the primacy of the idea, akin to Greenberg’s focus on the essential qualities of painting and bringing art back to relevance as a philosophical discourse. Together, these movements expand upon Greenberg’s foundational principles by exploring the boundaries of what art can be, each pushing the dialogue about medium specificity and the pursuit of purity in art further.

Coming out of architecture and history, I find art without rigor frustrating and boring, so the art of the late 1960s and early 1970s is my north star and I am indeed something of a neo-Greenbergian (more on that here). But during the 1970s, the Greenbergian trajectory encountered significant challenges, marking a pivot away from these ideals towards a more fragmented, pluralistic understanding of art. Rosalind Krauss’s 1979 essay “Sculpture in the Expanded Field” serves as a critical juncture in this shift. Krauss dismantles the Greenbergian barrier between sculpture and not-sculpture by introducing a set of oppositions that allowed for a broader, more inclusive understanding of sculpture. This “expanded field” theory challenged the purity of medium specificity by embracing a wider range of practices and materials, effectively undermining the modernist notion of progressive refinement and autonomy of the arts. Krauss:

From the structure laid out above, it is obvious that the logic of the space of postmodernist practice is no longer organized around the definition of a given medium on the grounds of material, or, for that matter, the perception of material. It is organized instead through the universe of terms that are felt to be in opposition within a cultural situation.

Krauss’s essay, well-intentioned though it was, did not offer a positive direction for research in art; instead, it encouraged the sort of lazy pluralism and market-oriented art that has defined far too much art production in the years since.

The one exception to all this, however, is photography. If, in my essay on the aesthetics of AI images, I lamented the obsession with technical proficiency at the cost of taste in amateur HDR photography, in the hands of the best photographers—from the New Topographics movement in the 1970s to great living photographers today, like Hiroshi Sugimoto, Guy Dickinson, David Maisel, and Richard Barnes—the technical nature of photography is used to explore the photograph as a medium. And photography—by its very nature an index of reality, with an inexorable relationship between the subject and its representation—aligns with the Greenbergian ideal of art that is true to its medium more effectively than other media.

Rosalind Krauss, “Sculpture in the Expanded Field,” October, Vol. 8 (Spring 1979), 43.

Few artists have interrogated the roles of authorship, originality, and representation as effectively as the Pictures Generation, a loosely affiliated group of artists—mainly photographers—named after Pictures, a 1977 exhibition at New York’s Artists Space curated by Douglas Crimp. These artists embraced appropriation, montage, and the recontextualization of pre-existing images, deliberately blurring the boundaries between high art and popular culture and questioning the notion of an artwork’s purity and originality. Not all of this work still speaks to us today. John Baldessari’s art has aged poorly and many artists, such as Richard Prince, long ago stopped doing interesting work. But at the time, Prince, Cindy Sherman, Robert Longo (who admittedly also worked in painting and charcoal, but in ways akin to the others in this group), and Sherrie Levine produced compelling and rigorous work. Crimp, on the name “pictures”:

To an ever greater extent our experience is governed by pictures, pictures in newspapers and magazines, on television and in the cinema. Next to these pictures firsthand experience begins to retreat, to seem more and more trivial. While it once seemed that pictures had the function of interpreting reality, it now seems that they have usurped it. It therefore becomes imperative to understand the picture itself, not in order to uncover a lost reality, but to determine how a picture becomes a signifying structure of its own accord. But pictures are characterized by something which, though often remarked, is insufficiently understood: that they are extremely difficult to distinguish at the level of their content, that they are to an extraordinary degree opaque to meaning. The actual event and the fictional event, the benign and the horrific, the mundane and the exotic, the possible and the fantastic: all are fused into the all-embracing similitude of the picture.

Douglas Crimp, Pictures (New York: Artists Space, 1977), 3.

For these artists, then, the question of representation itself was fundamental, indeed the proper object for art. Crimp elaborated on this in a thorough revision of the essay, published two years later. This time, Crimp introduces the notion that these works demonstrate a postmodernist break with the modernist tradition:

But if postmodernism is to have theoretical value, it cannot be used merely as another chronological term; rather it must disclose the particular nature of a breach with modernism. It is in this sense that the radically new approach to mediums is important. If it had been characteristic of the formal descriptions of modernist art that they were topographical, that they mapped the surfaces of artworks in order to determine their structures, then it has now become necessary to think of description as a stratigraphic activity. Those processes of quotation, excerptation, framing, and staging that constitute the strategies of the work I have been discussing necessitate uncovering strata of representation.

Douglas Crimp, “Pictures,” October, Vol. 8 (Spring 1979), 87.

The astute reader might note that this is in the very same issue as the Krauss essay above. The issue, however, does not lead with either essay, but with a piece titled “Lecture in Inauguration of the Chair of Literary Semiology, Collège de France, January 7, 1977.” The author is, of course, the semiotician Roland Barthes, and he is the crux of the argument of this essay. Barthes’s inaugural lecture at the Collège de France marks the acceptance of semiotics, the study of signs, in the university and sets out an agenda in which the field would not only attempt to analyze linguistic and literary matters but also provide a framework for decoding culture at large. Barthes is especially important to us in terms of his 1967 essay “The Death of the Author,” which was published in a widely read 1977 English collection of his works titled Image-Music-Text. In this essay, Barthes challenges traditional notions of authorial sovereignty by arguing that the meaning of a text is not anchored in the author’s original intent but is instead constructed by the reader’s engagement with the text. This radical shift foregrounds the role of the audience in creating meaning, suggesting that a work of art is a collaborative space where interpretations multiply beyond the author’s control. Intertwined with this concept is the idea of intertextuality, which posits that every text (or artwork) is not an isolated entity but a mosaic of references, influences, and echoes from other texts. Intertextuality underscores the interconnectedness of cultural production, indicating that the understanding of any work is contingent upon its relation to the broader network of cultural artifacts. Together, these concepts dismantle the traditional hierarchy between creator and receiver, emphasizing the active role of the reader or viewer in making meaning and highlighting the complex web of relationships that define the production and reception of art.

This perspective was crucial for the Pictures artists who frequently employed appropriation as a strategy, taking pre-existing images from various media and recontextualizing them in their art. This method directly engaged with Barthes’s idea by challenging the original context and intended meaning of these images, thus questioning the notions of originality and authorship. In doing so, they highlighted the idea that the creator’s authority over an artwork’s meaning is not absolute but rather shared with viewers, who bring their own interpretations and experiences to bear on the work.

Moreover, these artists applied Barthes’s concept to emphasize the fluidity and contingency of meaning. Their work often invites viewers to interpret images through their own cultural references and personal experiences, suggesting that meaning is not a fixed entity but a dynamic interaction. In critically engaging with the proliferation of images in contemporary society, the Pictures Generation explored how photographic and cinematic imagery shapes perceptions of identity and reality. This critical stance aligns with Barthes’s view of the text (or image) as a fabric of quotations and influences, further diminishing the role of the author in favor of a more collaborative and interpretive approach to meaning-making.

Crucially, this shift also led to a reevaluation of the artist’s identity. Rather than being seen as the singular source of meaning, artists of the Pictures Generation positioned themselves more as curators or commentators, utilizing the visual languages of their time to critique cultural norms and values. This reflects a move away from the modernist emphasis on the artist’s unique vision toward a recognition of the complex, contextual nature of art-making and interpretation.

Barthes’s idea—that the author’s intent and biography recede in importance compared to the reader’s role in creating meaning—parallels a shift towards viewing the artwork itself, and its reception, as central to its interpretation. This shift can be seen as aligning with Greenberg’s emphasis on the medium’s physical and visual properties as the locus of artistic significance, and Hegel’s idea of art revealing universal truths, though through a more contemporary lens focused on the viewer’s engagement.

But practices such as appropriation, pastiche, and intertextuality can also be framed as a mannerist lament, a response to a widely perceived exhaustion of possibilities within modernism. Compounding this, with the postwar rise of commercial art and Pop art, capital was thoroughly permeated by the strategies of the avant-garde and vice versa. Even shock, the classic technique of the avant-garde, had been turned into a marketing tool, signaling the thorough co-option of avant-garde tactics by the very systems they sought to critique. The avant-garde’s political validity was now deeply in question, something elaborated in the 1984 English translation of Peter Bürger’s Theory of the Avant-Garde. In this complex landscape, the Pictures Generation’s engagement with the visual language of mass media becomes a double-edged sword: a critique of—and a capitulation to—the pervasive influence of commercial imagery, reflecting a nuanced understanding of the impossibility of purity in an age dominated by reproduction and simulation.

If the Pictures Generation’s engagement already sounds like what Richard Barnes suggested in his comment, “This is our new world which for the moment is totally reliant on the old one,” then perhaps this suggests a profitable route to investigate. Hal Foster, Rosalind Krauss’s student and Douglas Crimp’s contemporary (as well as my teacher at Cornell for a brilliant year), was a key critic for the Pictures Generation, and his 1996 book, The Return of the Real, remains one of the deepest theoretical engagements with art from the late 1960s to the mid-1980s. There, Foster introduces the concept of “Nachträglichkeit,” a term borrowed from Freudian psychoanalysis, often translated into English as “deferred action.”

Nachträglichkeit refers to the way in which events or experiences are reinterpreted and given new meaning in retrospect, influenced by later events or understandings. It suggests that the significance of an artwork or movement is not fixed at the moment of its creation but can be reshaped by subsequent developments in the cultural and theoretical landscape. This recontextualization allows for a continuous reworking of the meaning and relevance of art, as past works are seen through the lens of present concerns and knowledge.

Foster applies this concept to the realm of art history and criticism to argue that the avant-garde movements of the early 20th century, for example, can be re-understood and gain new significance in light of later artistic practices and theoretical frameworks:

In Freud an event is registered as traumatic only through a later event that recodes it retroactively, in deferred action. Here I propose that the significance of avant-garde events is produced in an analogous way, through a complex relay of anticipation and reconstruction. Taken together, then, the notions of parallax and deferred action refashion the cliche not only of the neo-avant-garde as merely redundant of the historical avant-garde, but also of the postmodern as only belated in relation to the modern. In so doing I hope that they nuance our accounts of aesthetic shifts and historical breaks as well. Finally, if this model of retroaction can contribute any symbolic resistance to the work of retroversion so pervasive in culture and politics today—that is, the reactionary undoing of the progressive transformations of the century—so much the better.

Hal Foster, The Return of the Real (Cambridge: The MIT Press, 1996), xii-xiii.

This perspective challenges linear narratives of art history that portray artistic development as a straightforward progression from one style or movement to the next. Instead, Foster emphasizes the recursive nature of artistic innovation, where contemporary artists engage with, reinterpret, and transform the meanings and methodologies of their predecessors. This is where a critical approach to AI imagery that explores the intertextual basis of all art might return to our narrative. In this light, Pictures anticipates a world in which imagery can be freely recombined, in which the role of the author is thoroughly questioned, and in which the status of the original is thrown into doubt.

Oversaturation. Reynisfjara, Iceland, 2023.

But more than that. Back to Instagram for a moment. Another phenomenon that we have to deal with—that the Pictures Generation did not—is the massive oversaturation of the landscape by user-generated content. This deluge of imagery created by the public—particularly while travelling—has transformed the visual ecosystem, challenging artists to find new methods of engagement and critique. The sheer volume of content complicates efforts to distinguish between the meaningful and the mundane, pushing contemporary artists to navigate and respond to a world where the boundaries between creator and consumer are increasingly blurred. This oversaturation demands a different reevaluation of originality, authenticity, and the role of art in reflecting and shaping societal narratives in the digital age. There are some 35 billion images posted on Instagram every year. These are not just private images, but images that are published in a way previously unimaginable—available to an audience of over a billion users. What does it mean to take a photograph today when the world is already oversaturated? What sense is there in taking a photo of a landscape or a street scene when the same image has been uploaded a thousand times? And what does it mean that serious artists and curators share—by choice or by necessity—work in that same milieu?

Most of the images on Instagram are already AI images. The reason an iPhone or a Pixel can take such an attractive photograph is that they possess highly sophisticated algorithms that create images that appeal to viewers. The iPhone, for instance, utilizes AI-driven features like Smart HDR and Deep Fusion. Smart HDR optimizes the lighting, color, and detail of each subject in a photo, while Deep Fusion merges the best parts of multiple exposures to produce images with superior texture, detail, and reduced noise in low-light conditions. The iPhone’s Neural Engine, part of its Bionic chip, executes these complex processes, handling up to 600 billion operations per second, to deliver photographs that were unimaginable with traditional digital imaging techniques. Given the insane number of photographs taken at “Instagrammable” sites, and the ecological and social damage that such travel produces, one wonders if something like Bjoern Karmann’s Paragraphica camera might not be a better solution. Using various data points like address, weather, time of day, and nearby places, Paragraphica creates a photographic representation using a text-to-image AI generator. This isn’t to say that photography as art is extinct, but it is in peril thanks to an oversaturation so prolific that the individual image has become all but meaningless.
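The principle behind this kind of multi-frame merging can be sketched in a few lines of NumPy. What follows is a deliberately naive, Mertens-style weighting of pixels by how well exposed they are; it is an illustrative assumption on my part, not Apple’s actual Deep Fusion algorithm, whose details are proprietary:

```python
import numpy as np

def fuse_exposures(frames, sigma=0.2):
    """Blend several exposures of the same scene by weighting each pixel
    by how well exposed it is (how close to mid-gray). A toy sketch of
    multi-frame merging, not Apple's actual pipeline."""
    stack = np.stack(frames).astype(np.float64)      # (N, H, W), values in [0, 1]
    # Gaussian weight centered on 0.5: mid-tones count most
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0, keepdims=True)    # normalize across frames
    return (weights * stack).sum(axis=0)

# Two toy "frames" of a flat scene: one underexposed, one overexposed.
dark = np.full((2, 2), 0.1)
bright = np.full((2, 2), 0.9)
fused = fuse_exposures([dark, bright])               # every pixel lands at 0.5
```

Real pipelines align the frames first, operate per color channel, and weigh local contrast and saturation as well, but the basic move of favoring the best-exposed data from each frame is the same.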

Another option might be to think of how Critical AI Art, distinguishing itself from the oversaturation of prevalent AI imagery, might reflect on the profound shift in art’s interaction with technology and culture, revisiting themes central to the Pictures Generation—such as media influence and appropriation—through the lens of contemporary digital practices. By employing generative algorithms, this approach not only generates new visual forms but also engages critically with the saturation of images, probing the essence of authenticity, originality, and the evolving role of both artists and non-artists. This dynamic interaction underscores a broader, ongoing dialogue with the history of art, revealing how artistic methodologies are shaped by the recursive nature of cultural and technological advancements. Here, a hauntological approach to AI Art might be productive, such as the theory-fiction project I did last year, On an Art Experiment in Soviet Lithuania, which reflects on the refusal of the avant-garde by the Soviet Union, the loss of Lithuania’s freedom to Soviet-Russian rule between 1945 and 1991, and art in the 1970s.

But there are other possibilities for using AI to make art. I’d like to conclude by citing one key artist from the Pictures Generation who I haven’t mentioned: David Salle. Curiously, Salle is one of the only serious artists without a technology background to be publicly experimenting with AIs. Salle’s process has always been characterized by an innovative use of imagery and a negotiation back and forth between media, often starting with photographs he takes, which serve as the basis for his layered and complex paintings. Described in a lengthy New York Times article entitled “Can David Salle Teach A.I. How to Create Good Art?,” Salle’s method reflects a blend of the real and the conceptual, pushing the boundaries of narrative and abstraction in his work. Starting in 2023, Salle and a team of computer scientists worked on an iPad-based program trained on a dataset of his paintings and refined based on his input, showcasing an example of how AI can be employed to conceptualize variations of artwork, aiding in the brainstorming process for new paintings. Salle’s foray into AI art can be seen as an example of critical AI art, where the use of technology is not merely for the creation of art but serves as a commentary on the process of art-making itself. By integrating AI into his practice, Salle engages in a dialogue with the contemporary art world about originality, creativity, and the role of the artist in the digital age. Concluding the article, journalist Zachary Small lets Salle have the last word.

What will become of his own identity, as the algorithm continues to produce more Salle paintings than he could ever imagine? Some days, it seems like the algorithm is an assistant. Other days, it’s like a child.

When asked if the A.I. would replace him entirely one day, the artist shrugged.

“Well,” he said, “that’s the future.”

Zachary Small, “Can David Salle Teach A.I. How to Create Good Art?,” The New York Times.

A future, which is still totally reliant on the past.

One last point. As is my wont, in this essay I have focused on art from the 1960s onwards, but there are other models that might come to the fore again in this era. In particular, the Renaissance model of inspiration is an interesting one to reflect upon. Renaissance art theory was underpinned by the concept of imitatio (imitation), which was considered a noble pursuit. Imitation in the Renaissance sense involved studying and emulating the excellence of ancient art to grasp its underlying principles of beauty, proportion, and harmony. However, this process was not about mere copying; it was about surpassing the models from the past, a concept known as aemulatio. And that may very well be the future (of the past) in our art.

On Art and the Universal, II

Last July, I wrote a piece “On Art and the Universal, I” and promised part two within a week. It’s almost 11 months later, so here it is. The first piece stands on its own as a critique of the political cynicism of the academic-gallery nexus. This second piece stands alone as well. Read part I, re-read it, or don’t bother. 

As an art scholar and artist, I find the Greenbergian tradition invaluable. I studied for a year with Hal Foster in graduate school and was compelled by Rosalind Krauss’s essay on sculpture in the expanded field, as well as by Clement Greenberg’s efforts to find a trajectory for research within postwar painting. Briefly, Greenberg asserted that each art form should concentrate on its own unique properties, or “the specificity of the medium.” Famously, Greenberg believed that the essence of modernism was to “use the characteristic methods of the discipline to criticize the discipline itself, not in order to subvert it but in order to entrench it more firmly in its area of competence.” To this end, painting, for Greenberg, would best focus on the flatness of the canvas instead of imitating the three-dimensionality of sculpture. This was of great utility for the last generation of truly productive artists in the US, from Kenneth Noland to Donald Judd to James Turrell to my father, all of whom engaged with Greenberg—even when they disagreed with him. Disciplinary self-criticism and the specificity of the medium constituted a research project that embodied an Enlightenment ideal of a shared project of advancing human knowledge in a particular discipline. Krauss, who studied with Greenberg, reinterpreted his philosophy, moving away from the idea of medium specificity to propose art as an expanded field of practices and mediums, including conceptual, installation, and performance art. The object of interrogation ceased to be the medium and became the institution of art itself, and with this, a greater element of political critique could be introduced. Foster took this further in his writings on the Pictures Generation, shifting to a postmodern exploration of the process of art making, originality and identity, and the nature of the sign itself.

Although I empathize with the Greenbergian search for politically progressive forces in art, this aspect of the project has run aground, even if it is the only part of the project that remains popular. I detail this in my previous post, but in sum, the quest for the political in art has amounted to little more than a justification for guilty consciences and the drive to affirm one’s virtue. Far from a place of resistance, the political in art is cynical in a Sloterdijkian sense: its proponents know that it has nothing to do with actual political progress, but they claim it nevertheless.

Perhaps not coincidentally, art has lost the thread since the 1970s. Even as postmodernists deployed postmodernism as a totalizing concept, they claimed that totalization was obsolete (the classic boomer move of declaring itself the best and last generation at anything). For postmodernists, totalizing historical frameworks overgeneralize the intricacies and nuances of historical events and cultural phenomena, leading to oversimplification and inaccuracies; they overlook differences within a given time period, such as the experiences of marginalized groups; and they perpetuate existing power dynamics by privileging dominant cultural or social perspectives. But the price for rejecting totalizing narratives is that where art used to make clear, measured progress, after postmodernism it is stuck in an endless loop of pluralism, sustained only by self-justifying statements about politics. Today, the relationship between theory and totality is fractured and postmodern thought, ironically, leans toward irrelevance. In his 1979 La condition postmoderne: rapport sur le savoir (translated as The Postmodern Condition: A Report on Knowledge), Jean-François Lyotard observed that knowledge—primarily science—was being fragmented into incommensurable discourses as an incredulity toward metanarratives emerged. Today, the arts and humanities are also splintered into incommensurable discourses. But rather than being a position of greater strength and self-criticism, the fracture of narrative banally reflects our very existence, our selves intensely fragmented by the operations of media. Art practices and theories that exacerbate that fragmentation are merely accelerationist or, more likely, uncritical and reactive in nature. Lacking a metanarrative, however, there is little else they can do besides exacerbate fragmentation.

I contend that it’s time we breathe life back into the Greenbergian theoretical framework. This revival, however, should begin with a call for art to investigate itself again, not merely play to political activism for the sake of theater. The task at hand is to discern the proper object of knowledge for art, a fulcrum upon which we can rest our research. Or, if not the proper object, a proper object that would be suitable for investigation and productive of knowledge. 

All but the most feeble-minded of thinkers recognize that the development of advanced levels of networked computation is the single biggest transformation in human existence in many decades. Our sense of what media is and our relationship to it have changed profoundly. Thus, although it is entirely possible for artists to pursue other, legitimate forms of research, my own work largely revolves around the role of technology in our lives. In the last year, I have specifically been compelled to explore the new generation of Artificial Intelligence software, particularly AI image generators.

What is specific to AI image generators is not the creation of the new, but rather their endless capacity to remix the history of art and imagery. We could see this as part of a dialectic, or more simply, as part of a back-and-forth process of art history since the late eighteenth-century loss of the absolute belief in the principles of classical art. After the archeological discovery that the ancient Greeks and Romans did not have a consistent system, art was set adrift with its terrifying newfound freedom. Nineteenth-century eclecticism followed: rules were treated flexibly and forms could be freely combined at will. The backlash came with modernism’s rejection of all past forms and its search for a new, universal language of form, a project refined in Greenberg’s late modernist turn toward the specificity of the medium. In response, postmodernism critiqued the new and turned toward the semiotic recombination of past forms and/or imagery from popular culture and commercial art. Starting about 25 years ago, Network Culture or Metamodernism supplanted postmodernism, largely relying on a resurgence of interest in technical effects and their capacity to elicit sensation. Think of Anish Kapoor or Olafur Eliasson, for example, or the emergence of the very large, technically flawless salon-painting-sized photographs by artists such as Andreas Gursky or Jeff Wall.
  
The era of AI creation is not, primarily, an era of the new. Architecture throws things into heightened relief. A furry, feathery building is not new. Nor is it interesting, except as a means of generating Instagram hits. Within a few years, AIs will be developed to effectively generate endless, plausible architectural models from a set of given parameters (site, area needed, programme, etc.), but even those are likely to remain endless permutations of the sort a follower of Frank Lloyd Wright or Mies van der Rohe might have done in their offices. For now, AIs are not yet capable of producing sophisticated three-dimensional models, but they are capable of producing imagery by remixing content. When something new emerges, it is through unusual juxtapositions thought up by the operator, but also through accidents. Malformed image generations can be interesting: for example, in my project on an alternative history of art in Vilnius, a series of glitched images appeared, like the following one, which was supposed to depict a painting exhibit in a gallery. This process can be iterative: since open-source AIs such as Stable Diffusion can be trained on specific datasets, when accidents happen, artists can take those unusual results further.

AI image generation reveals that all art is already intertextual, that is, shaped by, and in turn shaping, other works through allusions, references, and influences. My father was a modernist but nevertheless spent his evenings looking at coffee table art books of Renaissance and Baroque masters for inspiration. Nor was this an uncommon practice among modern painters. We now have a different way of accessing that cultural subconscious. It does not reveal itself easily either. Working with AI image generators is, for the serious artist, as time-consuming as any other practice. The virtue of a Critical AI Art, however, is to explore how artworks are developed within a network of works, historical and recent, and the cultural contexts that surround them. A Critical AI Art expressly addresses intertextuality and its relation to the idea of originality, not merely because these are the issues raised by AI image generation, but because these are issues inherent to art itself. 

On Hipster Urbanism

Over at Fantastic Journal, Charles Holland writes about hipster urbanism, comparing the High Line, which turns infrastructure into tourism, with the reopening of a train line in east London as… get this: a train line.

Hipster urbanism is hardly rare anymore. A short while back, I enjoyed a stroll on the Walkway Over the Hudson, a former railroad bridge in upstate New York. Near where I live in New Jersey, a project is underway to turn a train line that leads into Hoboken into a bike path. The idea of building a bike path to the city is laudable. After all, I could get a Brompton and ride to the PATH train and head to Studio-X. But note that not only do trains still use the line, the company that owns it expects that use will expand in the next few years. So is riding my bike to the city really the best use of the line? Maybe industry is old hat?

[Walkway over the Hudson]

In the countries once known as the developed world, we’ve replaced productivity with tourism. This is a prime difference between modernism and its successors, postmodernism and network culture. Few modernists could have understood relinquishing production. Think of Tony Garnier’s fabulous Une Cité Industrielle, for example. Today, however, industry plays little role in (formerly) developed economies like the United States or the United Kingdom. In the case of the former, where finance generated roughly 12% of the GDP in 1980 and industry generated around twice that, today the figures are reversed… and this has only been exacerbated by the economic crisis. 

Remember the Roger Rabbit conspiracy theories that General Motors paid to destroy the train system to favor the automobile? It’s hardly so simple, but surely as we are heading into a new century, we wouldn’t want to exacerbate those mistakes, would we?  

 

On Architectural Photography Today

As my readers know, I am writing a book on network culture* this year. In writing about architecture under network culture, it struck me that the role of architectural photography has changed.

During postmodernism it seemed to observers that architecture was being produced more and more for photography. Kenneth Frampton dubbed architectural photography “an insidious filter through which our tactile environment tends to lose its responsiveness” and complained that the actual buildings that looked so seductive in photographs were often poorly detailed. Fredric Jameson suggested that “it is the value of the photographic equipment you consume first and foremost, not its objects.” Under network culture, architectural photography becomes freed from architecture.

To be sure, photographers, particularly members of the Düsseldorf School such as Andreas Gursky, Laurenz Berges, Thomas Ruff, and Thomas Struth (and, even if he is an exception due to the constructed nature of his environments, Thomas Demand), have given new, sustained focus to architecture as a subject. Architecture, in this sense, becomes not a matter to represent, but rather a way to represent the delirium of globalized space today. As they do so, these photographs also allude nostalgically to the ambitions of modernism—many of these photographers directly invoke the modern past with their subject matter—and to a time in which architecture was our primary spatial experience of the world, grounding us.

Still, architecture itself seems to have worked free of architectural photography. No new generation has come up to replace the great late modernist architectural photographers: Marvin Rand, Julius Shulman, and Ezra Stoller. The architecture of network culture has a certain hostility to the photograph, generally refusing—even more than modernist works—to allow for a single viewpoint. The well-worn patch of grass at the Villa Savoye is foreign to structures like Gehry’s Guggenheim Museum in Bilbao, FOA’s Yokohama Terminal, or OMA’s Casa da Música in Porto. After all, the Bilbao Effect only works on such structures if they are visited in person, whereas many of the icons of postmodernism were private structures, and museums had not yet understood their potential as a global tourist draw.

Thus, if the architectural photograph is still necessary so that such works can appear on the front page of the New York Times, it is less a self-sufficient sign and more a pointer, an advertisement. This is not to say that the architecture of network culture is not designed on the screen; it is. But the postmodern role of the fixed architectural photograph as a driver for building design is over.

*I am also excited to be teaching a seminar on the topic at Columbia this fall. 

 

deferred action

 

In response to a reader’s request, I have posted my 1999 essay Postmodern Permutations to the site. It was good to revisit this nearly decade-old work, which came at a crucial moment for me. In the essay I am still concerned first and foremost with architecture; I had not yet begun the move into my broader research emphasis on architecture’s role in urbanism or into computation and networks. But the essay is consciously framed within the context of dot-com Los Angeles. This was already after the demise of Assemblage and the exhaustion of a certain critical project in architecture, but unlike the purveyors of post-criticism (when did that term last seem current?), who largely formulated their project a few years later, my interest lay in complicating matters, not simplifying them.
 
As my project this year is to continue my work on network culture, looking back at that issue I can already see the importance of periodization to me. I begin the essay by recounting my students’ bafflement when I asked them what period they live in (modern, postmodern, or other). To a degree, I misread the signs, arguing that we were in fact postmodern and that stylistic postmodernism could now be dispensed with in favor of a more complex and postmodern relation between architecture and capital. In a sense I was right; the post-critical obsession with capital highlights this. Read in the context of this essay, post-criticism is a last moment of postmodern culture. As readers of my network culture work know, our cultural dominant is now the network, and post-criticism seems adrift against the demands of a new culture emerging from the informatic realm. My students were already telling me that we were not postmodern, that we were in another time altogether.
 
But there’s another dimension to the article.
 
I don’t have the capacity to incorporate the student work that accompanied the piece; for that you will have to go to the issue of Thresholds itself. But I was able to scan and OCR a small section of the text and reproduce it for you here:
 
The SCI-Arc project "Sampling Linux" represented on the opposite page and throughout this article is Rocio Romero’s reaction to the impact of post-Fordist capital on design, and her propositions for other, future forms of design methodology and practice. Inspired by the Utopian possibilities inherent in late capital, Romero proposes a new model for architectural practice. This model explores forms of consumption and production on the Internet for which capital has literally become superfluous, even an impediment. If the Internet can be seen as the furthest elaboration of the post-Fordist service economy, it can also be seen as an anticipation of a future stage of culture in which capital has withered away. This exploration led to the copyright-free Linux, an "Open Source" version of the UNIX operating system hacked together for personal computers. Linux avoids capital to an even greater extent than the academy, the former, self-proclaimed locus of resistance. Proponents of Open Source software make what they need for themselves and share it. When traditional software companies offer to produce software for Linux, they often find the only way to succeed is to make their software free. This might be the beginning of a new, even more pervasive form of capital, but it could also be the beginning of a new Utopian impulse, one in which capital, pushed to its furthest extreme, becomes pure information.
 
-Kazys Varnelis
 
Open source and networks paid off for me. And what of my student? Although she hasn’t ventured back into open source, Rocio has instead developed her research through prefab. To be sure, prefab is not the same thing as open source, but it is nevertheless a much more advanced way of thinking about architecture in that it posits object-oriented thinking over the repetitive redesigning necessitated by animation software. While I haven’t seen her in a few years, I hope to see Rocio this weekend at one of her LV Homes in the greater New York area. Rocio’s work has been featured in a lengthy piece in the New Yorker by Paul Goldberger, in Dwell, and in many other venues, and she is one of the most successful students ever to graduate from SCI-Arc.
 
It’s fascinating to see where things wind up, years later.