2025 in Review

It’s strange to measure every year against a concept developed by a science fiction writer, but William Gibson’s line “The future is already here—it’s just not evenly distributed”1 has been my north star for my recent year-in-review essays. Gibson meant that the future was unevenly distributed by class: the wealthy receive high-tech healthcare while the world’s poorest live in squalor—though one might ask which of these is really our future. Yet the quote has been repeatedly misread as a claim about time and space: that the future arrives somewhere first, perhaps unseen, while the rest of the world catches up. But this misreading is more productive than Gibson’s intent. Gibson’s critique of inequality is fair enough, but we all know this, decry it, and go on about our business. The misreading, on the other hand, is a theory of historical change.

With the release of ChatGPT in late 2022, a temporal rift opened, shattering the post-Covidean present. But many tried the early tools, encountered hallucinations, read articles about slop and imminent environmental ruin, and reasonably concluded there was nothing to see. By 2025, a cursory scan of AI news would have assured them that AI had proved a bust. OpenAI’s long-awaited updates disappointed, and the company flailed, turning to social media with Sora, a TikTok clone for AI. Meta seemed to abandon its efforts to create a competitive AI and instead turned to content generation for Instagram and Facebook, something nobody on earth wanted. Talk of a bubble started among Wall Street pundits. The hype-to-disappointment cycle is familiar, and the dismissals were not unreasonable.

But again, the future isn’t evenly distributed, and if you don’t know where to look, you would be forgiven for believing it’s all hype. Looking past such failures, 2025 was actually a year of breakneck progress. Anthropic’s Claude emerged as the most capable system for complex tasks, Google’s Gemini became highly competitive, while DeepSeek and Moonshot AI proved that China was not far behind. More significant than any single model was the emergence of agentic AI—systems that can take on multi-step tasks, act, navigate filesystems, write and execute code, and work across documents. Claude Code was the year’s groundbreaking innovation. While “slop” was Merriam-Webster’s word of the year, “vibe coding”—using agents to write programs—was much more important. Not only could programmers use them to accelerate their work, but it also became possible for non-programmers to realize their ideas without any knowledge of code, a radical change in access I explored in “What Did Vibe Coding Just Do to the Commons?”.

By any first-world standard, at least, these tools are remarkably democratic and inexpensive. A basic Claude subscription costs about as much as a month of streaming, and even the $200 maximum-usage plan costs less than a monthly car payment. For many, however, the barrier is not price but something deeper—a resistance approaching revulsion. These tools provoke fear in a way that earlier technologies did not. It’s not the apocalyptic dread of the doomers or the Dark Mountain conviction that collapse is near. Rather, it’s a threat to the sense that thought itself is what makes us distinct. The unevenness of the future is no longer about access; it’s now about willingness to engage.

As a scholar, I find thinking about the very short term strange. I have always been suspicious of claims that radical change was upon us. I would rather align myself with the French Annales school concept of la longue durée, as defined by the great Fernand Braudel: the long-term structures of geography and climate. Faster than these were the medium-term cycles of economies and states, while he dismissed the short-term événements of rulers and political events as “surface disturbances, crests of foam that the tides of history carry on their strong backs.”2 Events, he wrote elsewhere, “are the ephemera of history; they pass across its stage like fireflies, hardly glimpsed before they settle back into darkness and as often as not into oblivion.”3 The real forces operate beneath, slowly, often imperceptibly.

Curiously, Braudel himself embraced technological change in his own work. In the 1920s and 30s, he adapted an old motion-picture camera to photograph archival documents—2,000 to 3,000 pages per day across Mediterranean archives from Simancas to Dubrovnik. He later claimed to be “the first user of microfilms” for scholarly historical research.4 His wife Paule spent years reading the accumulated reels through what Braudel called “a simple magic lantern.”5 Captured in 1940, he spent five years as a prisoner of war and wrote the entire first draft of The Mediterranean—some 3,000 to 4,000 pages—from memory. Paule, meanwhile, retained access to the microfilm and notes in Paris, and after the war, the two reconstructed the text, verifying his manuscript and adding footnotes and references from the microfilm.6

In 1945, the same year Braudel was liberated, Vannevar Bush published “As We May Think,” in which he imagined a device he called the “Memex”: a mechanized desk storing a researcher’s entire library, indexed and cross-referenced, expandable through associative trails.7 The vision remained speculative for decades. Now the world’s archives are being digitized; AI systems can translate, summarize, and search across them in seconds. To take one example, earlier this year I used Google’s Gemini to translate the Hierosolymitana Peregrinatio of Mikalojus Kristupas Radvila Našlaitėlis, a sixteenth-century pilgrimage narrative, from an online scan of the Latin first edition. The result is not a polished scholarly translation, but a working text that gave me a good sense of a book previously unreadable to anyone without proficiency in Latin or Polish (the only language into which, to my knowledge, it had been translated). The role of the intellectual is being transformed—not replaced, but augmented in ways Bush could only sketch. This feels like something other than foam.

How to account for such a rapid shift? Manuel DeLanda offers one answer in A Thousand Years of Nonlinear History. Working in Braudel’s materialist tradition and drawing on Gilles Deleuze and complexity theory, DeLanda describes how flows—of trade, energy, and information—accumulate and concentrate until they cross a threshold and undergo a phase transition, radically reorganizing into a new stable state. But here is the key insight: intensification is la longue durée. The accumulation of flows that began with the Industrial Revolution—or perhaps with writing, agriculture, or even symbolic representation itself—is the deep structure behind our era. Steam, electricity, computing, the internet: each was a phase transition within a longer arc of intensification. Cities accelerate such processes, as Braudel showed, concentrating capital and labor until new forms of economic organization emerge—Venice, Antwerp, Amsterdam, London, each a site at which the future arrived first. Such conditions are not opposed to la longue durée; they are the moments when intensification crosses a threshold.

The continued pace of change this year underscores that there has been no return to equilibrium. But it has been accompanied by unprecedented resistance to technology, appearing as simultaneous terror at its apocalyptic nature (for jobs, if nothing else) and dismissal of it as useless, especially among Gen Z. A January 2026 Civiqs survey found that 57 percent of Americans aged 18–34 view AI negatively—more than any other age group. Curiously, the seniors category, which now includes most boomers, was the least resistant to AI, followed by Gen X and older millennials, all groups that grew up amid radical societal and technological change.8 It seems paradoxical that the smartphone generation recoils from the tools of the future. To understand this resistance means understanding the mentalité that shaped it—what Braudel’s successors in the Annales school called the collective psychology formed through lived experience.9 For Gen Z, that formative experience was network culture—both a successor to postmodernism and a form of collective psychology I did not fully understand at the time. When I wrote on network culture in 2008, it seemed to me that social media promised connection; instead, it brought division.10 The networked self was indeed constituted through networks, not merely isolated in postmodern fragmentation, but the fragmentation was now collective. Networked publics built barriers against one another, creating what Robert Putnam called cyberbalkanization: retreat into a comfortable niche among people just like oneself, views merely reinforcing views.11 Identity wars and mimetic conflict flared across filter bubbles that amplified outrage and tribal scapegoating as both MAGA and wokism built toxic online cultures. QAnon and a thousand other conspiracy theories propagated through Facebook groups and YouTube recommendations. Young men drifted into incel communities where loneliness became ideology and livestreamed mass shootings were celebrated. Influencers built their empires on hatred—Hasan Piker framed Hamas’s October 7 massacre as anticolonial resistance while Nick Fuentes celebrated mass shooters as vanguards of race war and civilizational collapse.

Nor did this just fragment culture—it exacted a massive psychic toll, as social contagion spread new forms of self-harm and mental illness. During the pandemic, teenage girls began presenting tic-like behaviors—not Tourette’s syndrome, but something researchers termed “mass social media-induced illness,”12 spread by TikTok videos about Tourette’s rather than any actual disease. The pattern was unprecedented but not unique. Eating disorders spread through thinspiration hashtags. Self-harm tutorials circulated on Instagram. The platforms that were supposed to bring us together instead spread desires, disorders, and identities through pure social contagion—and with them, violence and polarization. A generation that grew up inside this experiment—that watched it reshape their peers’ bodies, minds, and identities—is right to be skeptical of the next technological promise.

In 2010, it seemed that network culture had a good chance of being understood as the successor to postmodernism. Bruce Sterling and I were engaged in a kind of dialogue about it online. He predicted that network culture would last “about a decade before something else comes along.”13 And he was right, as I acknowledged in my 2020 Year in Review. By then, network culture was exhausted, and with the Covidean break, it seemed time for something new. In 2023, I taught a course at the New Centre for Research & Practice to sketch the emerging era in broad strokes. It’s still early and hard to fathom, like trying to understand postmodernism in 1971 or network culture in 1998, but it’s clear that if postmodernism was underwritten by the explosion of mass media, and network culture by the Internet, social media, and the smartphone, then the current era is shaped by AI.

But if Gen Z, scarred by the effects of social media, has been reacting with deep fear and anxiety, Sterling epitomizes the other reaction: dismissal. In the most recent State of the World, for example, he derides AI-generated content as “desiccated bullshit that can’t even bother to lie.” He compares the vibe-coding atmosphere to an acid trip, mocking the professionals who utter “mindblown stuff” like “we may be solving all of software” and “I have godlike powers now.” For Sterling, AI can produce nothing but slop. Now Bruce has always had a healthy skepticism toward tech claims, but I can’t help but think of Johannes Trithemius, the fifteenth-century abbot who wrote De Laude Scriptorum just as Gutenberg’s press was spreading across Europe—defending the scriptorium against a technology he could not see would remake the world.

There are even deeper, more existential fears, and I’ve spent the past year addressing them on my blog, in the process laying the foundation for a book on the topic: AI as plagiarism machine; AI as hallucination engine; AI as stochastic parrot, mindlessly repeating what it has ingested (Sterling’s critique); and AI as uncanny double, too close to us for comfort. As I explain, the discomfort arises not from the machine’s otherness but from its likeness: a mirror held up to processes we preferred to believe were uniquely ours.

It’s no accident that I published these essays on my blog. As far as my personal year in review goes, this was very much the year of the blog. I have no plans to ever publish in an academic journal again. Why would I? Who would read it? Why would I want to publish something paywalled, reinforcing the walled gardens of inequality that academia is so desperate to maintain—even as it proclaims itself the champion of open inquiry and democratized knowledge? Academia has become the realm of what Peter Sloterdijk called cynical reason: rehearsing the tropes of ideology critique while knowing the game is empty and playing it anyway. This revolts me.

But for almost ten years now, since the shutting down of the labs at Columbia’s architecture school, I have been content to write from the position of the outsider, something I reflected on in “On the Golden Age of Blogging”. That essay was prompted by a strange comment from Scott Alexander, who lamented on Dwarkesh Patel’s podcast that he had personally made a strategic error in not blogging during what he called the “golden age,” imagining that “the people from that era all founded news organizations or something.” The golden age he remembers is a fiction, as golden ages often are—and he gets the stakes entirely wrong. Evan Williams founded Blogger in 1999, sold it to Google, co-founded Twitter, then created Medium, which convinced hapless readers to pay to read slop long before AI slop was ever a thing. The early bloggers who sought professionalization found themselves absorbed into the worst of the worst, writing for BuzzFeed, peddling nostalgia listicles that rotted psyches.

There was, however, a golden age for me, and I miss it: the architecture blogging community circa 2007—Owen Hatherley, Geoff Manaugh, Enrique Ramirez, Fred Scharmen, Sam Jacob, Mimi Zeiger (whose Loud Paper was less a blog and more a zine, but a key part of the culture), and others. We inherited from zine culture an informal, conversational tone and the will to stand outside architectural spectacle. But ArchDaily and Dezeen commercialized the form, shifting from independent critique to marketing and product. Startup culture absorbed architectural talent.

Blogging was powerful precisely because we had no stakes in it—we owned and controlled our means of intellectual production. The golden age of blogging is not in the past; it is now. After years of proclaiming I would blog more, in 2025, I really did. I wrote over 83,700 words on varnelis.net and the Florilegium—essay-length pieces on landscape, native plants, AI and art, architecture, infrastructure, politics, and tourism. My only regret is that my presidency at the Native Plant Society of New Jersey consumes so much of my thinking about native plants that little remains for writing. But the time will come, and if nothing else, my investigation of the Japanese garden aesthetic should point the way forward for my writing on landscape.

I also continued to make AI art, or to be more precise, what I called stochastic histories. A major project was a substantial reworking of The Lost Canals of Vilnius, a counterfactual history in which, after the Great Fire of 1610, Voivode Mikalojus Radvila Našlaitėlis rebuilt the city with Venetian-style canals, complete with gondoliers, water processions, and a hybrid “Vilnius Venetian” architecture. As research, I used Gemini to translate Radvila’s sixteenth-century Latin pilgrimage narrative. AI, like photography or film, is what you make of it. Film is perhaps the better analogy—anyone can make a video. Making something worthwhile is another matter entirely. In December, I also completed East Coast/West Coast: After Bob and Nancy, a generative restaging of Nancy Holt and Robert Smithson’s 1969 video dialogue using two AI speakers.

There were other substantial essays, too. In “Oversaturation: On Tourism and the Image”, I finally put down on paper something I had wanted the Netlab to address while at Columbia, but that proved too dangerous for the school to support. Universities cannot critique the very systems of overproduction they depend upon for survival. Publish or perish and endless symposia nobody is interested in are the academic versions of overproduction, but more than that, any architecture school claiming global currency cannot afford to offend either other institutions, like museums, that give it legitimacy, or, for that matter, the trustees that fund both. As I point out, tourism has always been mediated by imagery; take Piranesi’s vedute or the Claude Glass. Grand Tourists always had representations at hand to interpret their direct experience—but a new crisis point has been reached with both overtourism and the overproduction of images. Algorithmic logic now reorganizes cultural geography around “most Instagrammable spots,” making historical significance secondary to content potential. The Fushimi Inari shrine in Kyoto is a case in point—a 1,300-year-old shrine that Instagram made famous and that has now ceased to serve as a religious site due to the influx of visitors. The Japanese have a term for this: kankō kōgai, tourism pollution. Tourism has become the paradigm of contemporary experience—the production of imagery without cultural meaning; everything feeds the same algorithmic mill. Even strategies of resistance get metabolized—slow travel becomes a hashtag, psychogeography becomes an Instagram guide.

The Bilbao effect, which was a major driver of oversaturation, was itself a product of globalization. Hans Ibelings coined “supermodernism” in 1998 to refer to the architectural expression of Marc Augé’s “non-places,” an architecture optimized for the perpetual circulation of bodies and capital. It was the architecture of network culture, of the Concorde and the Internet. Koolhaas diagnosed its endgame in his 2002 “Junkspace”—“Regurgitation is the new creativity”—and then, tellingly, stopped writing. Today, network culture is long gone; nationalism is on the rise. The Internet is a dark forest now,14 while the disconnected life is on the rise.15 The most exclusive resorts now advertise no Wi-Fi, no cell service, no addresses—only coordinates. Disconnection has become the ultimate luxury, sold back to the same people who built the infrastructure of connection. More cities are now alarmed by the effects of overtourism than are seeking to attract tourists. In the US, new architectural proposals appeal to a retardataire aesthetic—Trump displaying models of a triumphal arch inspired by Albert Speer, marking a triumph of nothing in particular, in three sizes (“I happen to think the large looks the best”); a four-hundred-million-dollar ballroom modeled on Mar-a-Lago; an executive order mandating classical architecture for federal buildings that Stephen Miller explicitly framed as culture war.

Both Bilbao and MAGA are spectacle, architecture-as-branding. But the Bilbao effect is imploding. No city believes anymore that a signature building by a starchitect will transform its fortunes. The parametricists have nothing left to say. Parametric design promised formal liberation—responsive, site-specific, computationally derived—but what it delivered was the efficient, ugly box. If the promise was the blob, the reality is the “5-over-1”: wood-frame residential floors stacked on a concrete podium with ground-floor retail, wrapped in a pastiche of brick veneer, fiber cement panels, and that obligatory conical turret element meant to signal “we thought about this corner.” As for AI-generated architecture, it is merely boring—giant sequoias hollowed out as apartment buildings, white concrete towers with impossible cantilevers, and lush vegetation sprouting from every surface—the same utopian fantasy rendered a thousand times over. These are renders of renders: AI trained on architectural visualization produces visualizations that are utterly disconnected from any tectonic reality. A new generation may emerge in response to new needs, but for now, the discipline has lost its cultural purchase. Architecture, for us, is a thing of the past.

The art world, too, has slowed. Museums are putting on fewer shows, shifting from aggressive schedules to longer, more deliberate exhibitions—or simply cutting programming as budgets tighten.16 The frantic pace of the Biennale circuit has exhausted dealers and collectors alike; smaller fairs are folding, and even the major ones feel like obligations rather than events. Galleries that survived the pandemic are now closing quietly, without the drama of a market crash—just a slow bleed of foot traffic, sales, and cultural attention. There is no new movement, no emergent critical framework, no sense of direction. The market churns on—auction prices for blue-chip artists remain high, collectors still speculate, art advisors still advise—but the sense of cultural mission has dissipated. What remains is commerce without conviction, a field that has forgotten why it exists beyond the perpetuation of its own economy. The institutions that trained artists for this field are collapsing alongside it.

As enrollment dwindles, design schools are collapsing—not merely contracting, but ceasing to exist. Most recently, the California College of the Arts, the last remaining independent art and design school in the Bay Area, announced in January 2026 that it would close after the 2026–27 academic year.17 It follows a grim procession: the San Francisco Art Institute (2020), Mills College (2022), the Pennsylvania Academy of the Fine Arts (2023), and Woodbury University’s acquisition by Redlands and subsequent adjunctification—a fate that has methodically undone so many schools as faculty become contingent labor and institutions become hollow administrative structures run by well-paid, cost-optimizing consultants.

There is personal resonance for me in this. Simon’s Rock College of Bard, which shuttered its Great Barrington campus in 2025, was where I studied for my first two years before transferring to Cornell—a pioneer of early college education that offered a radical pedagogical experiment in what learning could be beyond conventional schooling. I arrived there straight from high school, as did my good friend and colleague Ed Keller; clearly, something interesting was in the water back then. Simon’s Rock made the development of young minds its central mission rather than an incidental focus of brand management or endowment growth, and its alumni list is impressive for such a small school. It has an afterlife at Bard, but it’s an echo at best.

The difference between these institutional deaths and simple market failure is this: they are not being replaced. When a retail business fails, another may open elsewhere. When a school closes, there is no succession. The market offers no alternative. Instead, what remains are the corporate university satellites—for-profit programs nested within larger institutions (like Woodbury’s absorption into Redlands), stripped of autonomy, their faculty reduced to precariat, their curricula bent toward what can be measured and marketed. The art schools that survive do so by transforming into something else: luxury finishing schools for wealthy families or research appendages to larger universities, where “design thinking” becomes another management consultant’s tool. The pedagogical mission—to create conditions where students might develop serious aesthetic judgment, where they might encounter genuine problems and be forced to think through them—is not merely challenged but impossible. The closure of these schools does not signal a failure of art education; it signals that the very idea of art education as something valuable in itself has been liquidated.

This hollowing out of cultural institutions is not incidental to the political moment—it is one of its hallmarks. Politically, most people have checked out. This is not 2017, when each provocation demanded a response; the outrage cycle has given way to numbness. In “National Populism as a Transitional Mode of Regulation”, I argued that Trump, Orbán, Meloni, and their ilk represent not a return to fascism but something new: the authoritarian management of declining expectations. National Populism correctly identifies that neoliberalism’s promise of shared prosperity has failed, but it channels legitimate grievances toward scapegoats rather than addressing the technological displacement actually causing them. This is its tragic irony: the National Populist base—workers made obsolete by neoliberalism and unable to participate in AI Capitalism—finds its legitimate anger directed into a movement that accelerates the very forces rendering them superfluous. Their value to capital lies in political disruption rather than economic production; they are consumers and voters, but no longer needed as workers. National Populist leaders offer psychological compensation—dignity, recognition, transgressive identity politics—rather than material improvement. The apocalyptic tenor of populist culture, its end-times thinking and conspiracy theories, provides a framework for populations sensing their own economic redundancy.

The alliance between tech billionaires and populist leaders is unstable. AI Capitalism requires borderless computation and global talent flows; nationalist protectionism contradicts these at every turn. Musk, Thiel, and Andreessen have aligned with the movement to dismantle the regulatory state, not because they share its vision but because populism serves as a useful battering ram against institutional constraints. Once those barriers fall, the movement and its human-centric concerns can be discarded. National Populism, as I conclude, is not the future—it is a political interlude, a transitional mode that will not survive contact with the economic forces it has helped unleash.

If National Populism is transitional, is there a positive vision that can replace it? In “After the Infrastructural City”, I responded to Ezra Klein and Derek Thompson’s book Abundance, perhaps the most influential book of 2025, which argues that America’s inability to build is a political choice, not a technical constraint. Their solution: streamline regulation, invest boldly, build more. It’s a compelling vision—and a necessary corrective to decades of paralysis. But Abundance shares a curious blind spot with Muskian pronatalism: both assume we need more people. Musk preaches that declining birthrates spell civilizational collapse; Klein and Thompson build their vision on populations that will mysteriously arrive to fill what’s built, perhaps by immigration. Neither accounts for the possibility that AI changes the equation entirely—that a smaller population, augmented by intelligent systems, might not be a crisis at all. Populations are already shrinking across much of the developed world. What I call “actually-existing degrowth”—not the voluntary eco-leftist kind, but the unplanned demographic contraction now underway in Japan, Korea, and much of Europe—is coming for the United States too. Declining birth rates, aging populations, and regional depopulation: these are not future scenarios but present facts.

This doesn’t invalidate the Abundance agenda; it redefines it. Abundance cannot mean building more for populations that will not arrive. It must mean building better, adaptive, intelligent infrastructure for smaller, older societies. AI, rather than merely destroying jobs, can help navigate this transition: smart grids, autonomous transit, predictive healthcare. The opportunity is real. Managed shrinkage, done well, can mean more livable cities, restored ecosystems, higher quality of life. The question is whether political leaders can articulate a vision of flourishing within limits—or whether nostalgia for growth will leave us building for a future that never comes.

Against the exhaustion of institutions, against the hollowing out of architecture and art, against the closure of the schools that trained people to imagine, the blog remains. It may not be much, but it is one independent voice outside the collapsing structures around me. I wrote over 83,000 words this year. I made art. I thought through problems that matter to me with the help of AI, which provided me with tools I could only have dreamt of merely a year ago. Today, I uploaded hundreds of thousands of words from my essays to a directory in Obsidian so that Claude could draw connections between them (see here for just how one can set this up).

The future is already here—it just isn’t evenly distributed. Some are afraid or are still pretending AI isn’t happening. Phase transitions are uncomfortable. They are also where the interesting work gets done. One makes of one’s time what one makes.

1. William Gibson, quoted in Scott Rosenberg, “Virtual Reality Check: Digital Daydreams, Cyberspace Nightmares,” San Francisco Examiner, April 19, 1992, Style section, C1. This is the earliest verified print citation, unearthed by Fred Shapiro, editor of the Yale Book of Quotations.

2. Fernand Braudel, The Mediterranean and the Mediterranean World in the Age of Philip II, trans. Siân Reynolds (New York: Harper & Row, 1972), 21.

3. Braudel, The Mediterranean, 901.

4. Fernand Braudel, “Personal Testimony,” Journal of Modern History 44, no. 4 (December 1972): 448–67.

5. Paule Braudel, “Les origines intellectuelles de Fernand Braudel: un témoignage,” Annales: Histoire, Sciences Sociales 47, no. 1 (1992): 237–44.

6. Howard Caygill, “Braudel’s Prison Notebooks,” History Workshop Journal 57 (Spring 2004): 151–60.

7. Vannevar Bush, “As We May Think,” The Atlantic Monthly 176, no. 1 (July 1945): 101–8.

8. Civiqs, “Do you think that the increasing use of artificial intelligence, or AI, is a good thing or a bad thing?,” January 2026, https://civiqs.com/results/ai_good_or_bad.

9. The concept of mentalités emerged from studies of phenomena like the witch trials, where beliefs and fears spread through communities in ways that could not be reduced to individual irrationality. For an overview of mentalités as a historiographical concept, see Jacques Le Goff, “Mentalities: A History of Ambiguities,” in Constructing the Past: Essays in Historical Methodology, ed. Jacques Le Goff and Pierre Nora (Cambridge: Cambridge University Press, 1985), 166–180.

10. Kazys Varnelis, “The Rise of Network Culture,” in Networked Publics (Cambridge: MIT Press, 2008), 145–160.

11. Robert Putnam, “The Other Pin Drops,” Inc., May 16, 2000.

12. Kirsten R. Müller-Vahl et al., “Stop That! It’s Not Tourette’s but a New Type of Mass Sociogenic Illness,” Brain 145, no. 2 (August 2021): 476–480, https://pubmed.ncbi.nlm.nih.gov/34424292/.

13. Bruce Sterling, “Atemporality for the Creative Artist,” keynote address, Transmediale 10, Berlin, February 6, 2010.

14. Yancey Strickler, “The Dark Forest Theory of the Internet,” 2019, https://www.ystrickler.com/the-dark-forest-theory-of-the-internet/. See also The Dark Forest Anthology of the Internet (Metalabel, 2024).

15. “Trend: Not Just Digital Detox, But Analog Travel,” Global Wellness Summit, 2025, https://www.globalwellnesssummit.com/blog/trend-not-just-digital-detox-but-analog-travel/.

16. “The Big Slowdown: Why Museums and Galleries Are Putting on Fewer Shows,” The Art Newspaper, March 10, 2025, https://www.theartnewspaper.com/2025/03/10/the-big-slowdown-why-museums-and-galleries-are-putting-on-fewer-shows.

17. California College of the Arts, the last remaining private art and design school in the Bay Area, announced in January 2026 that it would close after the 2026–27 academic year. See “‘Nowhere Left to Go’: As California College of the Arts Closes, So Does a Pathway for Bay Area Artists,” KQED, January 13, 2026, https://www.kqed.org/news/12070453/nowhere-left-to-go-as-california-college-of-the-arts-closes-so-does-a-pathway-for-bay-area-artists.

On Robert A. M. Stern (1939–2025)

I was saddened to hear that Robert A. M. Stern passed away on Thanksgiving. I had the privilege of working with Bob on The Philip Johnson Tapes. Those aren’t idle words: it truly was a privilege.

Robert Stern and me, with martinis.

I first met Bob at the Philip Johnson symposium at Yale. I was rather surprised he had invited me, as I had been quite critical of his role in recuperating Johnson in the early 1970s, but that was the thing about Bob. He didn’t mind intelligent arguments; he hated stupidity. In contrast, he didn’t invite Franz Schulze, whose biography he felt was too sensationalistic, too eager to pander for sales, and too simplistic in its treatment of the history. It was my first symposium in which I was treated as an equal with the top figures in the field. In no small measure, that invitation led me to my position as director of the Network Architecture Lab at Columbia’s Graduate School of Architecture, Planning, and Preservation.

Soon after, Joan Ockman, director of the Buell Center, asked if I would be willing to work with Bob to edit a series of tapes in which Stern—who had been director of the Buell Center in the 1980s—attempted an oral history of Johnson’s life. I listened to the first two hours and wholeheartedly agreed. This was fascinating material. Little did I know that as the tapes progressed, Johnson’s cardiac condition was deteriorating and the conversation would fall apart toward the end. But Bob and I soldiered on. I would spend three weeks editing a section, send it to him, and he would turn it around that evening from his house in Montauk. Bob’s recall of historical facts was second to none. It seemed to me that he knew every architect who had ever practiced in the city. He was a brilliant mind, and I enjoyed that time very much.

The last time that Bob and I had a chance to spend much time together was at a public conversation about Johnson with the late Henry Urbach in 2012. Henry said that we would have martinis after the conversation. “Oh no,” Bob said, “we will have them during the conversation.” And so it was. I will raise a martini to his memory, as well as to Henry’s tonight.

The Rise and Fall of the Author

I know, this is both too long and too short. It should be a book, or it should be edited heavily. But I have a doctoral minor in rhetoric and have been obsessed with print culture for 25 years, so there it is. I did what I wanted, but perhaps not what I should have done.

The Library of All Plagiarized Books, Google ImageFX, 2025

In Jorge Luis Borges’s 1939 short story “Pierre Menard, Author of the Quixote,” Menard undertakes what appears to be an impossible, even insane, task: recreating, word for word, “the ninth and thirty-eighth chapters of the first part of Don Quixote and a fragment of chapter twenty-two.” Menard aims not to copy Cervantes but to write the Quixote anew through his own experiences as a 20th-century French symbolist. He did not want to compose another Quixote, which would be easy, but the Quixote itself, coinciding—word for word and line for line—with that of Miguel de Cervantes.1.

When Menard succeeds in producing such a text—identical to the original—Borges’s narrator insists the works are profoundly different. Where Cervantes’s prose was natural and of its time, Menard’s identical words are “almost infinitely richer,” deliberately archaic, embedded with new meaning. Throughout the story, Borges deploys scholarly devices—footnotes referencing fictional authorities such as the “Baroness de Bacourt” and “Carolus Hourcade,” as well as an elaborate bibliographic catalog of Menard’s monographs, translations, and scholarly studies—to create an illusion of academic rigor, at odds with the narrator’s implausible belief that Menard has succeeded in creating the exact Quixote out of sheer will. In framing both the fictional narrator and Menard in this manner, Borges exposes the authorial voice as a social construct mediated through bibliographic catalogs, citations, and scholarly conventions.

Borges’s presentation of Menard as a figure of almost obsessive scholarly intensity, emblematic of an intellectual culture that privileges meticulous citation, exhaustive cataloging, and painstaking documentation, underscores the arbitrary nature of authorial authority. By situating Menard within an elaborate apparatus of footnotes, fictional scholarship, and invented references, Borges highlights how framing alone can endow identical texts with fundamentally different meanings. Menard’s act of plagiarism thus emerges not as a straightforward ethical transgression, but as a concept dependent entirely upon interpretative context. This insight resonates powerfully in the contemporary age of generative AI, where algorithms produce texts that defy conventional notions of plagiarism precisely because they are generated from vast, undifferentiated statistical patterns rather than explicitly identifiable sources. Borges’s story has become a cornerstone of postmodern literary theory precisely because it challenges fundamental assumptions about creativity and authorship. Today, Borges’s meditation on plagiarism as creative re-imagination rather than simple theft illuminates contemporary anxieties about AI and human creativity.

Curiously, sixteen years before Borges published his story, Polish-American writer Tupper Greenwald created an almost identical literary conceit. In his story “Corputt,” Greenwald portrays a character obsessed with Shakespeare’s King Lear. Near death, this character reveals to a colleague that he has achieved his lifelong ambition: writing a drama equal to Lear. The text he reads aloud matches Shakespeare’s play exactly. This uncanny parallel raises provocative questions: Did Borges know Greenwald’s work (quite unlikely)? Is this merely an instance of parallel invention? Does this coincidence itself embody Borges’s central insight into originality and authorship? “Corputt” was largely forgotten until Argentine critic Enrique Anderson Imbert reprinted it in his 1955 anthology Reloj de arena. Borges himself never acknowledged Greenwald and, of course, Imbert’s book was printed over fifteen years after “Pierre Menard.” Whether Borges knew of “Corputt” or both authors independently arrived at remarkably similar ideas remains uncertain. Either possibility underscores the inherent instability of originality, demonstrating how literature continually echoes, duplicates, and anticipates itself.2.

Today’s generative AI systems function as modern-day Pierre Menards, producing works that superficially resemble human-created content while often existing in fundamentally different contexts. Like Menard’s Quixote, AI-generated works can be identical in form to human productions while carrying entirely different implications by virtue of their inhuman origins. The discomfort this creates—particularly among creative professionals—reveals deep-seated cultural assumptions about originality, authenticity, and the supposedly unique human capacity for creative expression.

The intensity of this discomfort has manifested in antagonistic responses from certain segments of the artistic community: legal threats, public denunciations, and harassment of AI developers and users. Yet it is ironic that some of the most vocal critics of AI art produce derivative commercial work. Consider the previously little-known fantasy illustrator Greg Rutkowski, who creates genre pieces within established fantasy art conventions. Rutkowski became famous precisely because his name was one of the most-used prompts in early text-to-image systems such as Midjourney, which led him to complain about the “theft” of his style, even though this widespread imitation literally gave him recognition he had never previously achieved.3. Similarly, commercial artist Karla Ortiz—whose website features images of famous actors in productions such as Doctor Strange and Loki—gained significantly more attention leading legal challenges against AI companies than she ever had for her industry work creating “concept art,” a field that, despite its misleading name, bears no relation to conceptual art and instead operates entirely within the visual language and narrative conventions of commercial franchises like Marvel.4. In both cases, artists whose own work operates comfortably within inherited commercial styles became vocal advocates against a technology that allegedly “steals” a uniqueness they themselves don’t pursue in their professional practice. As I edit this essay, Disney and Universal, both noted for their relentless reliance on their back catalogs, have sued AI image firm Midjourney, claiming it is “a bottomless pit of plagiarism.”5.

These extreme reactions suggest something deeper than mere economic anxiety; they reveal a cultural mythology about creativity that AI fundamentally challenges. By explicitly highlighting the derivative, pattern-based nature of creative production, generative AI systems threaten cherished illusions about human uniqueness and artistic authenticity. In this essay—the third in a series exploring AI and creativity—I examine the history of plagiarism and, even more importantly, the invention of the author upon which it depends.

Our idea of authorship and inspiration is historically contingent. In ancient and medieval periods, creative output was attributed to divine inspiration rather than individual genius. In Greece and Rome, creativity operated primarily through the concepts of mimesis (imitation of admired models) and aemulatio (competitive emulation). Poets such as Homer were seen not as singular creators inventing ex nihilo, but as conduits channeling inspiration from the Muses. Plato depicts this in Ion, a dialogue between Socrates and Ion, a celebrated rhapsode who recites Homer’s poetry. Socrates questions Ion’s claimed expertise, asking if it extends beyond Homer to other poets or topics. Ion admits it does not. Socrates suggests Ion’s ability isn’t based on knowledge or skill, but on divine inspiration—a form of madness bestowed by the gods. This ambiguity is echoed in Plato’s relationship with Socrates: just as poets channel divine sources rather than creating anew, Plato himself channels the figure of Socrates as a philosophical muse, blurring distinctions between inspired imitation and deliberate intellectual invention. Aristotle’s Poetics also situates literary creativity in skilled imitation and incremental improvement of existing forms. Authority, or auctoritas, in the classical era derived not from innovation but from fidelity to revered predecessors; genuine creativity manifested in producing work within established traditions.

Historian Walter Ong describes a cultural state in which narratives and knowledge pass down primarily through memory and repetition rather than written texts as “orality.”6. In oral cultures, a talented storyteller masters existing narratives, reciting them with skill and emotional resonance, adapting content to contemporary circumstances while maintaining continuity with inherited tradition. Here, the concept of plagiarism is beyond comprehension. Knowledge is communally owned, and performers serve as temporary vessels for collective wisdom, not proprietors of intellectual property.

With the development of writing systems and the spread of manuscript culture, information could be transmitted virtually intact across time and space, yet many aspects of oral tradition persisted. Manuscript copying remained a laborious and interpretative process. Scribes continually corrected perceived errors, updated archaic language, clarified ambiguous passages, and often inserted marginal commentary directly into texts. While manuscript culture adhered more precisely to parent texts than oral traditions, it still preserved a fundamentally different relationship between text and authority than we hold today. Textual authority continued to derive from collective wisdom rather than individual innovation. The medieval practice of compilatio is illustrative: encyclopedic works such as Isidore of Seville’s Etymologiae and Vincent of Beauvais’s Speculum maius valorized the meticulous arrangement and synthesis of inherited knowledge. Authority was rooted in the careful management of textual traditions, intellectual labor essential to preserving collective wisdom. Pseudepigraphic attribution—the practice of assigning new works to established authorities—further illustrates the communal understanding of textual authority. Rather than deception, such attributions signified sincere efforts to situate new insights within established intellectual traditions, acknowledging that all knowledge builds upon existing foundations. In manuscript culture, authority was thus derived not from novelty but from the individual’s ability to synthesize, arrange, and safeguard the accumulated wisdom of their predecessors. Texts were treated as communal artifacts, valuable resources preserved, transmitted, and continually refined through shared intellectual effort.

A shift away from communal knowledge toward originality emerged during the Renaissance, but this was a matter of evolution, not a radical break. The Renaissance humanists were drawn to the arguments of Roman rhetoricians Cicero and Quintilian, who contended that the best orators drew inspiration from earlier masters. Artists and intellectuals approached imitatio (imitation) as the necessary foundation for learning, understanding it as central to artistic and intellectual practice, a disciplined route to excellence. Originality lay not in invention ex nihilo but in reworking established forms with new insights, adapted to contemporary needs.

Medieval thought, like classical thought before it, was dominated by the trivium—grammar, rhetoric, and logic—distinct but intertwined fields of knowledge. Grammar reached far beyond syntax and depended on students memorizing classical and Christian texts. Rhetoric was a pillar of medieval thought and Cicero’s De inventione was its backbone, quoted endlessly in florilegia, collections of literary excerpts. Quintilian, by contrast, survived only in a four-book epitome. Petrarch’s 1345 discovery of Cicero’s letters to Atticus, Quintus, and Brutus in Verona, followed by Salutati’s championing of Cicero, and Poggio Bracciolini’s 1416 recovery of the complete twelve-book manuscript of Quintilian’s Institutio oratoria at the monastery at St. Gall expanded the rhetorical canon significantly.7. Humanist teachers trained students to copy, amplify, and vary classical texts, moving systematically from close paraphrase toward free recomposition. This humanist practice of imitatio deepened medieval habits, turning disciplined engagement with authoritative texts into the surest path to eloquence and invention.

For the humanists, imitatio governed education while inventio supplied content, occupying the place that originality and inspiration hold today. At the heart of rhetorical practice, inventio refers to the disciplined search for material—arguments, images, historical exempla—already latent in authoritative sources and even in life itself. A student mined texts and experience, copied choice passages into a commonplace book, then rearranged and amplified them for a new occasion. Erasmus called these notebooks treasure-houses of invention, while Agricola placed inventio at the hinge of dialectic and rhetoric.8. Originality therefore arose from judgment: the orator’s skill lay in selecting, recombining, and adapting inherited matter with timely insight and persuasive force.

Visual artists engaged in analogous practices, beginning their training by meticulously copying classical sculptures and earlier masterworks. Just as rhetorical imitation was disciplined reshaping rather than mere repetition, artistic originality involved mastering established visual languages before creatively adapting them to contemporary purposes. Imitation also lay at the heart of the early modern idea of the artist, a construction often traced back to Giotto. Giotto’s pupils Taddeo Gaddi, Maso di Banco, and Bernardo Daddi disseminated his style across central Italy, solidifying the idea of a stylistic lineage originating in a great artist. By the quattrocento, Cennino Cennini—who studied under Gaddi’s son—explicitly recognized this lineage in his handbook, Libro dell’arte (c. 1400, although not published until 1821), suggesting that a personal manner would naturally emerge after a student thoroughly internalized a master’s style and spirit alongside direct study from nature. Cennini positioned Giotto as transformative, stating that he “translated the art of painting from Greek [Byzantine] into Latin and made it modern,” distinguishing his originality as foundational yet derived from disciplined imitation rather than spontaneous genius.9.

The quattrocento further systematized this approach. Workshops led by artists like Brunelleschi, Donatello, and Ghiberti used casts of antique sculptures for rigorous study of classical models and repeatedly copied established masterpieces through cartoons and master drawings. Cennini’s guidelines and later academies, such as the Carracci brothers’ Accademia degli Incamminati (1582), codified a clear pedagogical sequence: draw from antiquity, copy the master, then innovate. Michelangelo famously sculpted a Sleeping Cupid in the antique style, artificially aging it to sell as a genuine Roman artifact, demonstrating that in the market’s eyes, skillful imitation was indistinguishable from genius. Rather than creating scandal, the artifice brought him to the attention of patrons.10. This deliberate merging of imitation and innovation directly served a burgeoning art market, where patrons increasingly requested artworks “in the manner of” prominent masters, recognizing stylistic consistency as a mark of quality. Such market dynamics gave rise to identifiable schools—Bellini in Venice, Raphael in Rome, Rembrandt in Amsterdam—where genius was perceived as the skillful recombination of established motifs adapted for contemporary patrons and themes. Artistic invention was a mosaic built upon collective memory and workshop discipline.

The Renaissance also witnessed the emergence of wealthy patrons who lavished commissions on the most talented artists, making some of them quite wealthy. Again, Michelangelo exemplifies this: coming from modest origins, he became “one of the most popular and highly-paid artists in Florence,” and over a long career of lucrative papal and princely commissions, he amassed a fortune. When Michelangelo died in 1564, his estate was valued at roughly 50,000 florins, equivalent to many millions today.11. Such wealth was extraordinary for an artist then—a testament to how highly Renaissance society valued great art. Michelangelo’s contemporary Raphael also died rich and was buried with honors; Titian was knighted by Emperor Charles V and lived as a gentleman. The Renaissance idea of the artist as a divinely inspired genius (Michelangelo was called “Il Divino,” the divine one) helped justify large payments, and a newfound aura around the artist’s personal creative touch made their works precious.

Architecture adopted the same logic. Bracciolini had discovered Vitruvius’s De architectura, the only surviving classical treatise on architecture, in the library of St. Gall as well. Seeking to better understand the text, whose illustrations did not survive, architects began copying Roman fragments, taking plaster casts of the orders, and filling sketchbooks with measured drawings, just as painters traced cartoons. Brunelleschi’s surveys of the Pantheon fed into his Florentine circle; Alberti’s De re aedificatoria, written between 1443 and 1452 and printed in 1485, codified imitatio, urging designers to recombine antique elements with modern needs.12. Workshops became lineages—Brunelleschi to the Sangallo family, Bramante to his Roman pupils—while later pattern books such as Serlio’s Sette Libri (1537–) and Palladio’s Quattro Libri (1570) served architects as Erasmus’s commonplace manuals served orators, making façades “in the manner of” a master as marketable as paintings from a Rembrandt school. Originality in building, too, lay in judicious assembly: columns, pediments, and vaults would be inventively rearranged rather than invented from whole cloth.

With the development of the printing press, copies of images as well as texts could spread rapidly and at far less cost and effort than before. Around 1500, the German artist Albrecht Dürer pioneered the use of woodcuts and engravings to mass-produce images. This was revolutionary; art could now be accessible to the growing merchant class. Dürer himself became a celebrity artist across Europe thanks to his prints, achieving fame for works like his Rhinoceros, which captivated the public.


Albrecht Dürer (1471–1528), The Rhinoceros, 1515. Woodcut. The Metropolitan Museum of Art, Gift of Junius Spencer Morgan, 1919.

Dürer understood the importance of authorship as a mark of value—he developed his famous AD monogram as a trademark and pursued the first known copyright lawsuit when an Italian printmaker pirated his work.13. Dürer was also well aware that work done by his own hand was worth more than workshop copies. More than that, he painted meticulous self-portraits—going so far as to depict himself with long hair and a frontal pose evoking Christ—as a form of self-promotion, cultivating an iconic persona and style that set him apart. Living off the open sale of his works rather than a court salary, Dürer foreshadowed the modern independent artist-entrepreneur. The printing press, far from cheapening art, expanded the market and made Dürer rich while spreading his fame—an early case of mechanical reproduction increasing an artist’s aura by broadening recognition.

The printing press did not just allow texts to spread rapidly; it reshaped thought. Ong explains that with uniform pagination and stable text, Europeans could reorganize how they thought and stored information, developing new devices such as tables of contents, indices, and cross-references that made formerly scroll-like manuscripts far more navigable. Printers issued concordances, polyglot Bibles, algebra books with engraved diagrams, atlases, and architecture books with regularized drawings. Even more important is Ong’s observation that print takes words out of the realm of sound and puts them into the realm of space, reordering thought through analytic, segmental layout. This fundamentally changed not only reading but also, by fixing the text in a verifiable, authentic edition, the sense of authorship.14.

Publication now implied a level of completion, a definitive or final form; a book is closed, set apart as its own, self-contained world of argument. This sense of closure also suggests that things written in a book are straightforward statements of fact, not matters of interpretation.15. A page now left the press in hundreds of identical impressions; any alteration stood out and could be traced. The ease of duplication sharpened anxiety about whose version was “authentic,” whose labor was being copied, and who should profit. Whereas there had generally been no restrictions on scribal copying, the ease of reproduction en masse led printers to seek royal privileges to protect their editions. The first recorded privileges came within fifteen years of the development of printing in 1454. Giovanni da Spira came to Italy in 1468 to introduce printing and swiftly obtained a five-year government monopoly on all book printing in the Republic of Venice, although he died of the plague, an all-too-common hazard of the day, and his rights lapsed.16. The first protection for an author was the privilege Marco Antonio Sabellico obtained in 1486 to protect his history of Venice, Decades rerum Venetarum, against illegal reproduction, but this remained a unique occurrence until Pietro of Ravenna obtained another for his book on the art of memory, Foenix, in 1492. It is worth noting that this privilege covered not only printed but handwritten copies of his work as well.17. “Typography,” Ong writes, “had made the word into a commodity.”

The press’s sheer fecundity alarmed contemporaries. Erasmus complained of the proliferation of new books inferior to the classics: “To what corner of the world do they not fly, these swarms of new books? . . . the very multitude of them is hurtful to scholarship, because it creates a glut, and even in good things satiety is most harmful.” Meanwhile, Abbot Johannes Trithemius issued De laude scriptorum manualium (In Praise of Scribes, 1492), insisting that slow, devotional hand-copying nourished memory and piety in ways the noisy press never could—although it is telling that his lament spread throughout Europe mainly after its print publication in 1524.18.

Beyond that, there was the danger of inappropriate texts rapidly proliferating. Luther’s Ninety-Five Theses and the tracts of 1520 reached an estimated half-million copies in a decade, many reprinted without author or place, evading imperial edicts and turning theological dissent into a logistical problem of regulation.19. Royal patents soon followed: Henry VIII’s proclamation of 1538 established that royal authority was required to import or publish books in England and insisted on the inclusion of printers’ names and publication dates on every title page, making surveillance of dissent physically visible.20. Still, in England and elsewhere, enforcement lagged behind presses that could be moved overnight across territorial borders. In response to pamphlets critical of Queen Elizabeth and the religious settlement of 1559, the Star Chamber decree of 1586 tightened control over print so that nothing could be published without the consent of the Crown.21.

By this point, the text of a book had become a transferable commodity owned by the stationer who first received the privilege to publish it. Authors were generally paid a one-off fee, if anything. Printers balanced risk and reward: they sought privileges as marketing devices (printed “cum privilegio”) while simultaneously pirating successful titles to meet insatiable demand. What emerges is a system less about rewarding creative labor than about policing doctrinal and political authority. Privileges were temporary, geographically limited, and revocable at the whim of the Crown or Curia. They protected investors, not “authors,” and framed copying as a crime against order rather than against individual genius. The legal scaffolding of copyright would only later recast this machinery of censorship as a defense of personal property.

But authorship was still radically unlike what we understand by it today: it remained a matter of imitation and adaptation. Elizabethan dramatists, such as William Shakespeare, rarely invented plots wholesale; instead, they frequently reworked existing narratives derived from diverse sources throughout history.22. Recently, a self-taught Shakespeare scholar employed plagiarism-detection software to identify George North’s A Brief Discourse of Rebellion & Rebels as a significant source text informing at least eleven of Shakespeare’s plays.23.

When Parliament allowed the Licensing Act to expire in 1695, the Stationers’ monopoly collapsed overnight. Provincial presses multiplied, London printers flooded the market with cheap reprints, and prices plummeted: a six-penny quarto could now be had for a penny. The Stationers’ guild register, previously essential to enforcement, became irrelevant, enabling booksellers to amass fortunes by selling inexpensive “pirate” editions of works by Milton, Dryden, and Shakespeare. Alarmed, London publishers reframed the issue, presenting regulation as necessary for the public good. Petitions to Parliament (1701–09) argued that uncontrolled reprints would discourage new works, depicting authors, not publishers, as vulnerable. This rhetorical shift succeeded. Most important was the Statute of Anne (1710), which granted authors a renewable 14-year copyright and required depositing copies in Oxford and Cambridge libraries to promote “the Encouragement of Learning.” Infringement became a civil tort enforceable by secular courts.24.

Yet this settlement carried an inherent contradiction. While it theoretically established authorial property, in practice, writers typically sold their rights outright to the same publishers who had advocated the law. The decisive shift, therefore, was ideological: copyright enforcement now protected individual intellectual labor rather than suppressing heresy or safeguarding printers’ capital. More than that, though, a new idea of the individual was emerging. Rousseau’s Émile (1762) cast learning as the unfolding of innate talent, not the imitation of models.25. After the Revolution, French lawmakers followed with droits d’auteur and—crucially—droits moraux (moral rights) in decrees issued in 1791-93, enshrining the author’s personality in the text itself.26. A legal fiction thus crystallized: creativity springs from an interior self and is therefore ownable, alienable, and infringeable. Texts had thus become simultaneously property and persona—commodities stamped with their creators’ identities. The law now transformed copying from a sin against social order into a trespass upon personal labor, a conceptual leap still underpinning every contemporary claim of plagiarism.

Kant’s philosophy and Romantic conceptions of originality provided a theoretical foundation for what was being codified in law. In §46 of the Critique of Judgment (1790), Kant defines genius as “the talent (or natural gift) which gives the rule to Art—a faculty that produces what cannot be taught.”27. Romantic writers seized on the claim. Wordsworth’s Preface to Lyrical Ballads (1802) proclaims the poet an “enduring spirit” who speaks “a language fitted to convey profound emotion.”

Of genius the only proof is, the act of doing well what is worthy to be done, and what was never done before: Of genius, in the fine arts, the only infallible sign is the widening the sphere of human sensibility, for the delight, honor, and benefit of human nature. Genius is the introduction of a new element into the intellectual universe: or, if that be not allowed, it is the application of powers to objects on which they had not before been exercised, or the employment of them in such a manner as to produce effects hitherto unknown.28.

Goethe, Schiller, and other Romantic authors elaborated a vision of authorship in which originality became synonymous with authenticity, and authenticity justified property. Legal doctrine soon mirrored this logic. By the time of the Copyright Act of 1842, which dramatically extended the term of protection, courts across Europe had begun to treat infringement not only as economic theft but as personal violation—implicitly endorsing Romantic ideals of creativity as an extension of selfhood. Yet these new standards conflicted with actual literary practice. Romantic authors routinely appropriated earlier works, but such borrowings only became scandalous when perceived as stylistically inert or insufficiently improved—violations not of property per se, but of aesthetic decorum. Enforcement thus focused less on intertextual borrowing than on explicit commercial piracy, underscoring tensions between legal ideals and literary realities. Out of this contradiction emerged the modern author: a legal and economic figure defined not merely as a voice within tradition but as the singular origin of meaning and the rightful owner of its form.29.

From the eighteenth century onward, mechanical reproduction rapidly increased. Techniques like engraving, etching, lithography, and photography made artworks and artists’ images widely accessible, expanding art’s market horizontally. Prints, affordable lithographs, and photographic reproductions enabled middle-class access to art, creating substantial revenue for artists such as William Hogarth, J. M. W. Turner, and Honoré Daumier, whose works sold broadly. Reproductions in popular newspapers and magazines further amplified artists’ public profiles, significantly inflating their market value. The chance to encounter original works by famous Salon winners or revered Old Masters, previously known only through reproductions, vastly increased those works’ commercial worth. Artists who aligned themselves with fashion—James McNeill Whistler, Frederic Remington, and Claude Monet among them—achieved celebrity status, further boosting their artworks’ value. Conversely, artists who fell out of fashion or were unable to gain fame often endured poverty. But the audience for at least some artists now reached far beyond elite circles.

As Sharon Marcus defines it in The Drama of Celebrity, a celebrity is someone known to more people in their lifetime than they could possibly know. Whereas this had previously been exclusively the domain of nobles and royalty, it was now extended to the genius, the writer, and the artist.30. But this depended on the media that multiplied their image as readily as their work. Newspapers tracked Charles Dickens’s every move on his 1842 U.S. tour, turning the novelist himself into daily news. Theater lobbies, newsstands, and even seaside kiosks sold photographs and postcards of Sarah Bernhardt, whose likeness saturated the market decades before film. Edison’s 1896 short “The May Irwin Kiss” (now simply known as “the Kiss”) likewise advertised a famous stage performer rather than the film itself, showing how cinema piggybacked on an existing celebrity system. By the 1930s, baseball star Joe DiMaggio’s face circulated on cards, photographs, and figurines, confirming that originality now resided as much in the endlessly reproduced image of personality as in any singular work.31.

It’s worth noting in this context that Walter Benjamin’s 1935 essay, “The Work of Art in the Age of Mechanical Reproduction,” which has been lauded for explaining the status of the artwork and artist in the modern era, is turned on its head by historical fact. Benjamin famously argued that mechanical reproduction stripped an artwork of its “aura”—the unique presence linked to specific historical and ritual contexts.32. Yet what Benjamin saw as aura’s destruction was limited to a mystical uniqueness tied to tradition and the worship of images as sacred in the old sense. Instead, a new form of aura had developed around celebrity and the dichotomy between mass reproduction and the uniqueness of the original. In effect, aura was a construct of the market: an original painting now has aura not because it’s the only image (reproductions abound), but because it’s the authenticated one with a revered name attached. If, as we established earlier, media reproduced not just artworks but images of the artists, the aura around modernist figures themselves—including Benjamin himself, posthumously—was similarly cultivated through repetition, commodification, and media amplification.

Beneath Pound’s rallying cry to “make it new,” modernism thrived on reprise. Many modern artists, from Malevich to Pollock to Warhol, forged readily identifiable styles through careful repetition. Others engaged in outright appropriation. Schwitters assembled Merz works from bus tickets and packaging. Duchamp mocked originality and authorship by repurposing a urinal as art with a signature “R. Mutt” that wasn’t even his, creating a work paradoxically more original than a Picasso; he also defaced a reproduction of the Mona Lisa with a mustache and a caption that served as a sexual innuendo. Joseph Cornell made boxes out of found objects. Asger Jorn, Francis Picabia, and Arnulf Rainer all made paintings over existing, lowbrow artworks. Francis Bacon became most famous for the fifty-odd variants he painted of Velázquez’s 1650 portrait of Pope Innocent X. Marinetti lifted Symbolist flourishes for his Futurist manifestos, Joyce and Eliot rewrote the Odyssey—although Eliot was accused of plagiarizing Joyce in doing so—and Hemingway’s spare diction, though hailed as revolutionary, became boilerplate for aspiring writers. In his paintings even more than his architecture, Le Corbusier also toyed with these questions, painting “objet-types,” celebrating objects such as pipes, guitars, and wine glasses, refined, Darwin-like, over time by countless hands, then signing his name, even though that name, like his uniform of round glasses, bowler hat, and pipe, was itself carefully constructed. Charles-Édouard Jeanneret had become, himself, a unique brand. Borges, too, developed a distinct persona and artistic brand, having discovered that repetition breeds recognition. In scores of interviews and public readings, he recycled the same elements—labyrinths, mirrors, libraries—so faithfully that they became shorthand for his work.
Blindness became another trademark: in essays and lectures he cast it as a “gift” that sharpened his inner vision, turning physical limitation into metaphysical authority. Photographers dutifully framed him dressed in a suit and tie, hands resting on his cane, deep in thought, reinforcing the image of the blind librarian-sage. In the short story “Borges and I,” he splits his persona in two: the public construct who gives lectures, appears in biographical dictionaries, and wins prizes, and the narrator (“I”), the private man who shuns the public eye so as to spend his time writing. From 1967 on, he co-translated his stories into English with Norman Thomas di Giovanni, rewriting passages to sound “more Borges than Borges,” copyrighting the translations under both his and di Giovanni’s names and splitting royalties 50-50—a calculated move to control how Anglophone readers heard him. After his death, the estate blocked those versions so as to receive full royalties.33.

Copyright law codified the new conditions of authorial persona and reproducibility. The U.S. Copyright Act of 1909 extended protection periods and explicitly incorporated performance rights, legally codifying the commercial value of reproducible star personas.34. European laws simultaneously strengthened moral rights, affirming the intrinsic link between authorship and personal identity. These legal frameworks, in turn, guaranteed aura, protecting the authenticity and integrity of mass-reproduced personal images. Every subsequent conflict over copying—from the Betamax debate to Sherrie Levine’s reproductions to today’s AI “style transfers”—echoes this modernist moment when the cult of the individual became both aesthetic principle and legal infrastructure.

Roland Barthes’s seminal 1967 essay “The Death of the Author” provided the theoretical foundation for this shift, directly challenging the cult of authorship and the copyright law that enshrined it. Barthes argued that the author was a modern invention—a figure created to limit textual meaning by anchoring it to a single, authoritative source. “To give a text an Author,” Barthes wrote, “is to impose a limit on that text, to furnish it with a final signified, to close the writing.” In place of this model, Barthes proposed a radical alternative: a text is not the expression of a unique individual but “a tissue of quotations drawn from the innumerable centres of culture,” with the reader, not the writer, serving as the space where this multiplicity converges.35. By dethroning the author, Barthes shifted attention to the text itself and its relationships with other texts—what Julia Kristeva termed “intertextuality.” This theoretical intervention provided critical legitimacy for artistic practices that deliberately blurred authorial boundaries. Postmodern artists and musicians actively sought out such conflicts, interrogating the proliferation of reproductive technologies alongside questions of authorship. Sherrie Levine’s After Walker Evans (1981) consisted simply of rephotographing Evans’s Great Depression images and signing her name to them. Richard Prince appropriated Marlboro advertisements intact, while Barbara Kruger sourced fashion magazines for her declarative collages. Later grouped as the “Pictures Generation,” these artists turned copying itself into their medium, collapsing distinctions between quotation and creation.36.

By 1990, sampling had become entrenched in music, particularly in rap, as evidenced by Public Enemy’s elaborate compositions constructed entirely from samples. Yet legal challenges persisted. De La Soul lost a lawsuit over unauthorized use of four bars from The Turtles’ 1969 hit “You Showed Me.” Grand Upright v. Warner (1991) effectively criminalized sampling, encapsulated by Judge Duffy’s pointed biblical declaration: “Thou shalt not steal.”37. This ruling triggered industry panic, spawning clearance industries and sample trolls that inflated costs and muted experimentation. Campbell v. Acuff-Rose (1994) somewhat restored balance, ruling that 2 Live Crew’s parody of Roy Orbison’s “Oh, Pretty Woman” was transformative and thus constituted fair use.38. Yet despite postmodern culture’s embrace of sampling and collage as default modes, statutes originally crafted to address sheet-music piracy continued to hold sway. This legal tension established the framework for subsequent upheavals: digital piracy, Napster, mash-up videos, fan remixes, meme culture, and AI.

Today’s Large Language Model (LLM) Artificial Intelligences emerge from this centuries-long trajectory of authorship, reproduction, and appropriation. These systems represent the logical culmination of processes that Walter Ong traced from oral through print culture—what he called the “technologizing of the word.” Where print culture took words out of the realm of sound and placed them into spatial relationships, enabling new forms of analytical thought through devices like indices, cross-references, and systematic organization, LLMs extend this technologizing process to its digital extreme. They systematically disaggregate individual creativity into statistical patterns derived from vast archives of human expression, treating the entire corpus of written culture as raw material for recombination. Unlike the postmodern appropriation artists who engaged in deliberate selection and conscious recontextualization, LLMs operate through what might be called “statistical appropriation”—synthesizing millions of texts without conscious intent or critical commentary, yet following the same logic of spatial arrangement and systematic cross-referencing that Ong identified as print culture’s fundamental innovation. They embody Barthes’s vision of the death of the author taken to its technological extreme, producing texts that emerge not from individual genius or even deliberate pastiche, but from the statistical relationships between words across entire cultures of writing. This represents a fundamental shift from the Romantic mythology of individual creativity that has dominated cultural discourse since the eighteenth century, yet it has provoked responses that reveal how deeply that mythology remains embedded in contemporary assumptions about authenticity, ownership, and creative labor. 
The panic surrounding AI plagiarism thus signals not merely economic disruption but a confrontation with the social construction of authorship itself—a construction that generative systems threaten to make visible by operating according to principles of recombination that have always governed creative production, though rarely with such explicit systematization.

When a large language model generates text, it synthesizes statistical patterns from millions of documents, making the identification of discrete sources impossible. The resulting texts emerge from a vast, distributed network of prior writings, embodying Jacques Derrida’s insight that meaning arises not from singular origins but from endless interplay within textual networks. Yet responses to AI-generated content reveal how deeply ingrained the author-function remains. Critics who label AI outputs as “plagiarized” assume that authentic creativity requires a singular human consciousness. This assumption becomes particularly evident in debates over AI training datasets, which are often framed around whether AI firms have “stolen” from individual creators rather than addressing the broader implications of mechanized text production.

This technologizing logic extends seamlessly beyond textual production. Generative AI image systems, such as Midjourney, Stable Diffusion, and DALL-E, synthesize vast troves of images, ranging from historical artworks to contemporary illustrations, to produce novel outputs through pattern recognition. Like their textual counterparts, AI-generated images lack singular authorship and blur distinctions between originality and reproduction. Critics argue these models infringe upon individual artists’ styles and labor, echoing earlier debates about sampling and appropriation. The controversy manifests in two distinct forms: direct appropriation, where AI systems reproduce entire sections or compositions from existing works with minimal alteration, and the more complex phenomenon of “style transfer,” where systems learn to mimic an artist’s distinctive visual approach without copying specific images. Yet these generative processes reveal an uncomfortable truth: visual creativity, like literary expression, has always been deeply indebted to collective cultural heritage. By foregrounding the inherently recombinant nature of visual art, whether through direct copying or stylistic mimicry, AI image generators further destabilize notions of artistic authenticity and authorship.

from Art and the Boxmaker, Midjourney, 2023
from Art and the Boxmaker, Google Imagefx, 2025

In “Art and the Boxmaker,” I explored how William Gibson anticipated such a condition in his novel Count Zero with a fictional artificial intelligence known as the Boxmaker, which has begun creating assemblage artworks in the style of Joseph Cornell, producing boxes filled with mysterious objects and cryptic arrangements that somehow manage to move viewers despite their artificial origin and lack of conscious intent or originality. Where Borges’s Menard destabilizes authorship through textual duplication, Gibson’s Boxmaker achieves the same effect through visual affect. Its boxes aren’t original; they’re convincing fakes. Nevertheless, as the novel’s protagonist Marly views them, she finds herself genuinely moved, not by originality but by the convincing forgery, revealing truth through recombination. Yet now that generative AI has become a tangible reality, Gibson recoils from his earlier imaginings. Why?39.

As I finished this essay, Lev Manovich sent me a link to his recent piece, “Artificial Subjectivity,” and Gibson’s newfound anxiety about AI authorship suddenly clarified itself. The Boxmaker is fundamentally mute—expressive only through carefully arranged forgeries, unable to articulate intentions or defend its aesthetic choices. Contemporary AI systems present a strikingly different scenario. These systems possess elaborate personas, readily engage in extensive conversations about their creative processes, and can justify each aesthetic decision. As Manovich notes, contemporary AI doesn’t merely simulate creative output; it presents itself as a comprehensive representation of human consciousness, generating what appears to be genuine subjectivity as a default mode of communication.40. Even if Gibson himself, judging by his recent public comments, may not yet fully grasp this shift, the crucial difference since Count Zero is not merely that we now have AIs capable of producing derivative art, but that we have AIs capable of articulating authorial intent, threatening the final refuge of human creative distinction.

Through their statistically driven creative processes, these AI systems demonstrate that AI does not negate the Pictures Generation’s critique of authorship but rather fulfills and automates it, scaling what those artists previously performed by hand. The irony here is acute: many artists and critics who once championed appropriation as revolutionary now recoil when machines perform these same operations too effectively. AI doesn’t merely imitate human creativity; it reveals the very conditions underlying authorship itself, exposing art’s fundamentally recombinant nature throughout history. Moreover, if modern creative genius increasingly depends upon the repetition and cultivation of persona as performance, then Manovich’s most radical conclusion becomes compelling: perhaps the next frontier of AI art lies not in generating images or texts but in crafting convincing artificial personas.

Even more ironically, the creative professionals most alarmed by AI already inhabit collaborative, distributed processes remarkably similar to machine learning. Commercial illustration, copywriting, and content marketing—fields currently experiencing the most acute anxiety about AI replacement—have long relied on intricate webs of influence, reference, and iteration that render individual attribution nearly meaningless. AI merely makes explicit and systematic what these industries have practiced implicitly for decades: creativity as collective pattern recognition rather than ex nihilo invention. This revelation, rather than any genuine threat to creativity itself, fuels the panic around AI-generated content. What distresses many creative workers is not just the potential economic disruption but AI’s explicit revelation of creativity’s derivative nature—a truth that threatens not only economic arrangements but the very ideological foundations of creative labor. In mirroring the fundamentally collaborative essence of human creativity that has long been obscured by Romantic individualism, AI confronts us with uncomfortable questions about authenticity that extend far beyond issues of machine learning or dataset composition.

The anxiety over AI “plagiarism” thus uncovers a deeper unease about authorship’s social construction. By challenging the very notion of creative identity, AI forces us to confront critical questions that have lingered since Borges first imagined Pierre Menard’s impossible project: Was creativity ever genuinely individual? Has the author always been dead? What constitutes authentic expression in a world where all creation inevitably builds upon collective cultural memory? What, even, is human about creation?

This essay is dedicated to the memory of the brilliant Professor William J. Kennedy, who supervised my minor in rhetoric for my Ph.D. and who passed away earlier this year. I am sure he would have many things to correct me on here. Do read more on him as a teacher and as a person.

1. Jorge Luis Borges, “Pierre Menard, Author of the Quixote,” in Labyrinths: Selected Stories and Other Writings, ed. Donald A. Yates and James E. Irby (New York: New Directions, 1964), 49-61.

2. Antonio Fernández Ferrer, “Borges y sus ‘precursores’,” Letras Libres 128 (August 2009): 24-35, https://letraslibres.com/wp-content/uploads/2016/05/pdfs_articulospdf_art_13976_12452.pdf

3. Melissa Heikkilä, “This Artist is Dominating AI-Generated Art. And He’s Not Happy About It,” MIT Technology Review, September 16, 2022, https://www.technologyreview.com/2022/09/16/1059598/this-artist-is-dominating-ai-generated-art-and-hes-not-happy-about-it/.

4. Rob Salkowitz, “Artist and Activist Karla Ortiz on the Battle to Preserve Humanity in Art,” Forbes, May 23, 2024, https://www.forbes.com/sites/robsalkowitz/2024/05/23/artist-and-activist-karla-ortiz-on-the-battle-to-preserve-humanity-in-art/?sh=28cb826b4389.

5. Brooks Barnes, “Disney and Universal Sue A.I. Companies Over Use of Their Content,” The New York Times, June 11, 2025. https://www.nytimes.com/2025/06/11/business/media/disney-universal-midjourney-ai.html

6. Walter J. Ong, Orality and Literacy: The Technologizing of the Word (New York: Routledge, 2002).

7. A classic text that covers the rediscovery of classical manuscripts is Albert C. Clark, “The Reappearance of the Texts of the Classics,” The Library, Fourth Series, Vol. II, No. 1 (June 1921): 13–42, https://doi.org/10.1093/library/s4-II.1.13. Beyond Ong, see Brian Stock, The Implications of Literacy: Written Language and Models of Interpretation in the Eleventh and Twelfth Centuries (Princeton: Princeton University Press, 1983).

8. Peter Mack, Renaissance Argument: Valla and Agricola in the Traditions of Rhetoric and Dialectic (Leiden: Brill, 1993).

9. Cennino Cennini, The Craftsman’s Handbook, trans. Daniel V. Thompson Jr. (New York: Dover Publications, 1960).

10. Paul F. Norton, “The Lost Sleeping Cupid of Michelangelo,” The Art Bulletin 39, no. 4 (December 1957): 251-257. https://www.jstor.org/stable/3047727

11. On Michelangelo’s vast wealth, see Rab Hatfield, The Wealth of Michelangelo (Rome: Edizioni di Storia e Letteratura, 2002).

12. Leon Battista Alberti, On the Art of Building in Ten Books, trans. Joseph Rykwert, Neil Leach, and Robert Tavernor (Cambridge, MA: MIT Press, 1988).

13. See Lisa Pon, Raphael, Dürer, and Marcantonio Raimondi: Copying and the Italian Renaissance Print (New Haven: Yale University Press, 2004).

14. Ong, Orality and Literacy, 128-129.

15. Ong, Orality and Literacy, 129-131.

16. Leonardas V. Gerulatis, Printing and Publishing in Fifteenth-Century Venice (Chicago: American Library Association, 1976), 20-21.

17. Copyright History, “Privilege granted to Marco Antonio Sabellico, 1486,” https://www.copyrighthistory.org/cam/tools/request/showRecord.php?id=commentary_i_1486. The quote can be found at Ong, Orality and Literacy, 129.

18. For the Erasmus quote see Elizabeth L. Eisenstein, Divine Art, Infernal Machine: The Reception of Printing in the West from First Impressions to the Sense of an Ending (Philadelphia: University of Pennsylvania Press, 2011), 25. For Trithemius, see Eisenstein, 15.

19. Andrew Pettegree, Brand Luther: 1517, Printing, and the Making of the Reformation (New York: Penguin Press, 2015).

20. Copyright History, “Proclamation of Henry VIII, 1538,” https://www.copyrighthistory.org/cam/tools/request/showRecord.php?id=commentary_uk_1538.

21. Ronan Deazley, “Commentary on Star Chamber Decree 1586,” in Primary Sources on Copyright (1450-1900), ed. L. Bently and M. Kretschmer (Cambridge: Cambridge University Press, 2008), also available at www.copyrighthistory.org.

22. Robert S. Miola, Shakespeare’s Reading (Oxford: Oxford University Press, 2000), 2.

23. Jennifer Schuessler, “Plagiarism Software Unveils a New Source for 11 of Shakespeare’s Plays,” The New York Times, February 7, 2018, https://www.nytimes.com/2018/02/07/books/plagiarism-software-unveils-a-new-source-for-11-of-shakespeares-plays.html.

24. Adrian Johns, Piracy: The Intellectual Property Wars from Gutenberg to Gates (Chicago: University of Chicago Press, 2009), 109-148, and Mark Rose, Authors and Owners: The Invention of Copyright (Cambridge, MA: Harvard University Press, 1993). See also “Statute of Anne, the First Copyright Statute,” History of Information, accessed June 14, 2025, https://www.historyofinformation.com/detail.php?entryid=3389.

25. Jean-Jacques Rousseau, Emile: or On Education, trans. Allan Bloom (New York: Basic Books, 1979).

26. “French Literary and Artistic Property Act, Paris (1793),” in Primary Sources on Copyright (1450-1900), ed. Lionel Bently and Martin Kretschmer, https://www.copyrighthistory.org/cam/tools/request/showRecord.php?id=commentary_f_1793.

27. Immanuel Kant, Critique of Judgment, trans. James Creed Meredith (Oxford: Oxford University Press, 2007), §46.

28. William Wordsworth, quoted in Martha Woodmansee, The Author, Art, and the Market: Rereading the History of Aesthetics (Columbia University Press, 1994), 38-39.

29. Tilar J. Mazzeo, Plagiarism and Literary Property in the Romantic Period (Philadelphia: University of Pennsylvania Press, 2013).

30. Sharon Marcus, The Drama of Celebrity (Princeton: Princeton University Press, 2019), 9.

31. Marcus, 13-17, 125.

32. Walter Benjamin, “The Work of Art in the Age of Mechanical Reproduction,” in Illuminations, ed. Hannah Arendt (New York: Schocken Books, 1968), 217-251.

33. Wes Henricksen, “Silencing Jorge Luis Borges: The Wrongful Suppression of the Di Giovanni Translations,” Vermont Law Review 48 (2024): 208-236.

34. “Copyright Timeline: 1900–1950,” U.S. Copyright Office, https://copyright.gov/timeline/timeline_1900-1950.html.

35. Roland Barthes, “The Death of the Author,” in Image–Music–Text, trans. Stephen Heath (New York: Hill and Wang, 1977), quotations and the pertinent section can be found at 142–148.

36. On the Pictures Generation, see my essay “On the Pictures Generation and AI Art,” varnelis.net, April 14, 2024, https://varnelis.net/on-the-pictures-generation-and-ai-art/.

37. Carl A. Falstrom, “Thou Shalt Not Steal: Grand Upright Music Ltd. v. Warner Bros. Records, Inc. and the Future of Digital Sound Sampling in Popular Music,” Hastings Law Journal 45 (1994): 359–390.

38. “Campbell v. Acuff-Rose Music, Inc.,” Wikipedia, https://en.wikipedia.org/wiki/Campbell_v._Acuff-Rose_Music,_Inc.

39. Kazys Varnelis, “Art and the Boxmaker,” varnelis.net, February 29, 2024, https://varnelis.net/art-and-the-boxmaker/.

40. Lev Manovich, “Artificial Subjectivity,” manovich.net, https://manovich.net/index.php/projects/artificial-subjectivity.

Preliminary Findings Toward an Architectural History of the Network

I have been working on my garden for much of the last month. This is an all-consuming task, but today I had the chance to dig up an old article I wrote on the origin of data centers, “Preliminary Findings Toward an Architectural History of the Network,” New Geographies 07 (2015).

You can read it here: https://varnelis.net/preliminary-findings-toward-an-architectural-history-of-the-network/

In this essay, I explore the architectural history of networks, focusing on the typology of data centers and its historical emergence. The network, despite receiving critical attention since the Internet’s proliferation, has been largely overlooked from an architectural perspective.

I argue that understanding the data center as a building type is essential, as well as understanding that it encompasses various architectural manifestations ranging from repurposed buildings to purpose-built structures. I trace the origins of the data center to the post office, which developed in the United States in the mid-nineteenth century. I examine the link between data centers and territory, emphasizing the role of the mail system in the political development of the nation.

The expansion of postal routes, the implementation of a hub-and-spoke system, and the architectural form of post offices are detailed, highlighting the network’s infancy and its historical emergence in typological terms. The essay continues with an examination of the introduction of home delivery and the development of the telegraph system. I analyze the growth of telegraphy, its alliance with the media, and concerns about monopolies. Overall, this research provides a comprehensive examination of the architectural history of networks, shedding light on the typological, geographical, and technological aspects of networks. My goal was to provide insights into the historical significance and contemporary relevance of data centers, thereby contributing to a broader understanding of the material and geographic conditions shaped by the constraints of the physical world.

10 Chairs in Baltimore, 4/11/15

I am delighted to be one of ten scholars, writers, and artists speaking at the Baltimore Museum of Art this Saturday about ten chairs from the collection in their newly re-opened American Wing. The event starts at 2pm. If you are in town, please join us. I'd love to say hello. 

I will be talking about the Elastic Chair, produced by Boston manufacturer Samuel Gragg. In 1808, long before Charles Eames or even Michael Thonet, Gragg patented a technique for bending wood with steam. Inspired by the klismos, an ancient Greek chair, together with ancient Greek methods of bending wood, Gragg's elastic chair employed the highest technology of its day. Looking at it now, we confront a time curiously like our own: faced with a past that forms a massive repository of precedent we can't get away from, and obsessed with the possibilities of technology as a means of advancing both industry and society.

Aleksandra Kasuba at the NDG, Vilnius

It's a privilege to be speaking about the work of Aleksandra Kasuba at the National Art Gallery (Nacionaline Dailes Galerija) in Vilnius this coming Thursday at 5pm.

One of my earliest memories, from when I was four, is crawling through her Live-in Environment, which she had installed in the townhouse that she and her husband, sculptor Vytautas Kasuba, owned. You can imagine the impact it had on me.

In my talk, I will focus on Kasuba’s constructions of the 1960s and 1970s, in which she worked with high-technology fabric from DuPont to create environments that occupy a third spatial order, neither art nor architecture. I will also read her work against a larger discourse on art and architecture in New York City at the time, revealing her own approach to problems that challenged other avant-garde artists and designers of the day.

The occasion is the opening of a reconstruction of her 1975 project "Spectrum, an Afterthought," which she conceived of after the installation of "Spectral Passage" at the De Young Museum in San Francisco.

Kasuba's work is uncannily similar, in many ways, to the digital architecture of the contemporary era (not to mention Richard Serra's torqued ellipses). Still, the diaphanous qualities of the fabrics that she worked with give it a lighter feel and mark it as distinct from architecture (she was neither trained as an architect nor did she consider herself to be one). Instead, it strikes me that these kinds of inhabitations are closer to tents, perhaps structures that nomads might construct within the non-places of the contemporary world. Imagine if airports were filled with structures like these, as spaces to pause in.

If you are in Vilnius that day, I hope you can make it. I'm afraid that my talk will be in English although I'll be delighted to take questions in Lithuanian as well as English. 

The Rise and Fall of New Media

My essay "The Rise and Fall of New Media" can be found in the twentieth anniversary issue of Frieze and online at their site here. It's paired with an essay by Lauren Cornell of Rhizome and the New Museum. Both deal with the fact that, far from being a niche interest, as Cornell writes, "every kind of artistic practice has been touched by the Internet as both a tool and as something that affects us in a broader sense…"

Posting has been light this summer as I've moved into a new house (modernism, even!) but things have been moving behind the scenes. With the new semester coming up, expect more on the way.

  

Against Print

I don't see how I can avoid sounding like an ogre or troll in this post, but there's no sense in writing for print anymore.

I have a huge amount of work on my plate and something has to give. Since I'm already spending too little time on the blog and my book, I have to find something to cut. The victim is the print-only journal. I wish it well.

Network culture begins with a condition of information overload. Having grown up in a house with a massive library, I can appreciate the desire to have books and journals at hand, and for a while I sought to emulate my father in collecting, but I gave it up almost a decade ago. Objects consume scarce resources and space. Books and journals are still the worst offenders in my house. Even as I cull them without mercy, they pile up around me, largely unread, passed by in a day when there's too much to do.

Let's face it: a personal library is the academic's version of an SUV. It's handy when you need it, but it's big and unwieldy, a poor choice when it comes to ecology, and not a defensible option in a world of limits except for those who really, truly need one.

The journals that I read regularly—the New Left Review, Mute Magazine, Eurozine, and Domus (to name a select few)—are already on the Net. There are few print-only publications and I read none of them regularly. Fetish objects like the New City Reader, Junk Jet, Volume, or Loud Paper generally wind up on the Internet in reduced or pirated form. You have to pay—or otherwise seek out—the original format if that's what you want, but the content is there for the taking.

Google Books makes it possible to search through new and old books alike, while pirate book sites mean that it's easy to carry thousands of books on a laptop. Pirating may be illegal now, but it's thriving—take the book scanning movement, for example—and is just the faintest ripple on the surface of the ocean before the tides pull back and the tsunami hits.

If not in this decade, then surely within two decades virtually all publishers—book, journal, and newspaper—will provide universities with everything they publish in digital form. Within that time, as I pointed out at the CCA on Thursday, most archives will also be online.

A book or journal that exists in print form only is inadequate for our age. It cannot be properly searched. Hand-made indices have some degree of utility, but no matter how intelligent their makers, they remain reductive, the product of one mind that can't adequately foresee everything the text will be used for. Full-text search is revolutionary for scholarship.

Then there's portability. Like so many of my colleagues, I travel frequently, both overseas and across the Hudson to Columbia. I clung to slides until 2006, when travelling to Ireland to teach made that impossible. Books are the same. It's an entirely different experience to have my library at my fingertips as I type.

But is this historian's desire so new? While teaching in Brazil, Braudel would visit Europe periodically and use microfilm to record material in archives for later reference. I'm confident that if Benjamin were alive today, he'd be surfing book pirate Web sites instead of frequenting old bookstores, collecting PDFs on his laptop just in case the sites wind up shut down.

Moreover, there's another ethical question, beyond the viability of publishers, which I suspect will survive in this new world (printing presses may be another matter). A friend once told me that while she was teaching in South America, she translated my texts for her students. At the time, she explained, my work was just about the only informed commentary on contemporary architecture available online, and her university lacked the funds to acquire books and journals or pay for access to material behind paywalls. Her message hit home: print publications and paywalls maintain a global imbalance of intellectual resources.

There's nothing more tiresome than the aged (or young) scholar lamenting the lack of intellectual rigor online. Surely such learned individuals have heard of Johannes Trithemius, the Abbot of Sponheim, who wrote his De laude scriptorum manualium in 1492, defending the tradition of script against the printing press? Our fields were hardly more rigorous in the postmodern 1980s or the post-structuralist 1990s, let alone the heroic era of the 1920s. Plenty of material not worth the ink and paper it cost to print was published back then.

Instead of lamenting print, let's work together to break down paywalls, physical or electronic. Those of us in the academy are not in the business of knowledge; we're in a community of knowledge, a community that transcends old limits. Let's embrace that.


Media for Historians of Architecture

I am delighted to announce that I will be succeeding Beatriz Colomina as the review editor of the media section of the Journal of the Society of Architectural Historians.

It will be my charge to edit articles on Web sites, films, software, digital books, databases, and other media at a moment in which my field is undergoing a revolutionary transition. I am indebted to Beatriz for paving the way by creating a stellar review section, to David Brownlee, JSAH editor, for inviting me to take part in his journal, and to Dean Wigley for his support in this new endeavor.

If you are a historian of architecture and you read my blog, please do contact me using the form on the left. This is a most exciting appointment.

A Chapter on Atemporality

I’ve put a revised version of the introduction to my book on network culture together with the first chapter—on atemporality—on my site. I hope you’ll be as excited to read this material as I am to post it.

I know that I owe most of my readers a few words of explanation about why it took over a year to post a chapter that I had initially thought I'd have up within a couple of months.

First, I had the honor of writing a chapter in Networked: A (Networked) Book on (Networked) Art. As part of this project, I agreed that I wouldn't take the material for the chapter and immediately publish it on my own site. That material, like a lot of the research I did last year, requires substantial reworking to fit the book (little of it is in the first chapter…you'll see it later, in the chapter on poetics).

Second, I’ve thoroughly rethought the book during the intervening year, not once but repeatedly. This is hardly a crisis, but rather the way that I—and many historians—write: revising again and again, nibbling at unformed parts until everything comes together.

Some of you have asked how the revision process works, so I’ve left the record on the site: just go to the revisions tab for any section and compare the current version with earlier ones. Of all the revisions, the most significant is a new model of historical succession that I find simply works for network culture. Whereas last year I had some uncertainty about just how this book would be a history, the first chapter—which of course is on history—now makes my strategy of relying on Michel Foucault and Jeffrey Nealon’s model of intensification emphatically clear.

Speaking of revisions, make no mistake, there are plenty of rough patches in these chapters. This is, after all, a draft. Don’t read it if you want a finished product. But also don’t think you should hold back on your commentary. Whether at Networked or at other ventures including this one, networked books have largely failed at generating comments. Don’t let that stop you. If you see a problem in the text, call me out on it wherever you feel appropriate. The more that I can draw on the massive collective intelligence of my readership, the better this project will be.

While I’m on the topic of collective intelligence… This first chapter owes much to a dialogue that Bruce Sterling and I have maintained between our blogs (take, for example, Bruce’s discussion of atemporality in his keynote address at Transmediale this year) and on Twitter with many of you. All of the kind attention that this dialogue received during the first few months of the year makes me think that my attempt to write a history of atemporality is both timely and untimely (in Nietzsche’s sense).

Finally, a word about the book title. It’s very much in flux right now, but I’m thinking it might be "Life After Networks: A Critical History of Network Culture."