2025-in-review

It’s strange to measure every year against a concept developed by a science fiction writer, but William Gibson’s line “The future is already here—it’s just not evenly distributed”1 has been my north star for my recent year-in-review essays. Gibson meant that the future was unevenly distributed by class: the wealthy receive high-tech healthcare while the world’s poorest live in squalor—though one might ask which of these is really our future. Yet the quote has been repeatedly misread as a claim about time and space: that the future arrives somewhere first, perhaps unseen, while the rest of the world catches up. But this misreading is more productive than Gibson’s intent. His critique of inequality is fair enough, but we all know this, decry it, and go on about our business. The misreading, on the other hand, is a theory of historical change.

With the release of ChatGPT in late 2022, a temporal rift opened, shattering the post-Covidean present. But many tried the early tools, encountered hallucinations, read articles about slop and imminent environmental ruin, and reasonably concluded there was nothing to see. By 2025, a cursory examination of AI news would have assured them that AI had proved a bust. OpenAI’s long-awaited updates disappointed, and the company flailed, turning to social media with Sora, a TikTok clone for AI-generated video. Meta seemed to abandon its efforts to create a competitive AI and instead turned to content generation for Instagram and Facebook, something nobody on earth wanted. Talk of a bubble started among Wall Street pundits. The hype-to-disappointment cycle is familiar, and the dismissals were not unreasonable.

But again, the future isn’t evenly distributed, and if you don’t know where to look, you would be excused for believing it’s all hype. Looking past such failures, 2025 was actually a year of breakneck progress. Anthropic’s Claude emerged as the most capable system for complex tasks, Google’s Gemini became highly competitive, while DeepSeek and Moonshot AI proved that China was not far behind. More significant than any single model was the emergence of agentic AI—systems that can take on multi-step tasks, act, navigate filesystems, write and execute code, and work across documents. Claude Code was the year’s groundbreaking innovation. While “slop” was Merriam-Webster’s word of the year, “vibe coding”—using agents to write programs—was much more important. Not only could programmers use agents to accelerate their work; it also became possible for non-programmers to realize their ideas without any knowledge of code, a radical change in access I explored in “What Did Vibe Coding Just Do to the Commons?”.

By any first-world standards, at least, these tools are remarkably democratic and inexpensive. A basic Claude subscription costs about as much as a month of streaming, and even the $200 maximum usage account costs less than a monthly car payment. For many, however, the barrier is not price but something deeper—a resistance approaching revulsion. These tools provoke fear in a way that earlier technologies did not. It’s not the apocalyptic dread of the doomers or the Dark Mountain sensibility that apocalypse is near. Rather, it’s a threat to the sense that thought itself is what makes us distinct. The unevenness of the future is no longer about access; it’s now about willingness to engage.

As a scholar, I find thinking about the very short term strange. I have always been suspicious of claims that radical change was upon us. I would rather align myself with the French Annales school concept of la longue durée, as defined by the great Fernand Braudel: the long-term structures of geography and climate. Above these ran the medium-term cycles of economies and states, while he dismissed the short-term événements of rulers and political events as “surface disturbances, crests of foam that the tides of history carry on their strong backs.”2 Events, he wrote elsewhere, “are the ephemera of history; they pass across its stage like fireflies, hardly glimpsed before they settle back into darkness and as often as not into oblivion.”3 The real forces operate beneath, slowly, often imperceptibly.

Curiously, Braudel himself embraced technological change in his own work. In the 1920s and 30s, he adapted an old motion-picture camera to photograph archival documents—2,000 to 3,000 pages per day across Mediterranean archives from Simancas to Dubrovnik. He later claimed to be “the first user of microfilms” for scholarly historical research.4 His wife Paule spent years reading the accumulated reels through what Braudel called “a simple magic lantern.”5 Captured in 1940, he spent five years as a prisoner of war and wrote the entire first draft of The Mediterranean—some 3,000 to 4,000 pages—from memory. Paule, meanwhile, retained access to the microfilm and notes in Paris, and after the war, they reconstructed the text, taking his manuscript, verifying it, and adding footnotes and references from the microfilm.6

In 1945, the same year Braudel was liberated, Vannevar Bush published “As We May Think,” in which he imagined a device he called the “Memex”: a mechanized desk storing a researcher’s entire library, indexed and cross-referenced, expandable through associative trails.7 The vision remained speculative for decades. Now the world’s archives are being digitized; AI systems can summarize, search across, and translate them in seconds, in virtually any language. To take one example, earlier this year, I used Google’s Gemini to translate the Hierosolymitana Peregrinatio of Mikalojus Kristupas Radvila Našlaitėlis, a sixteenth-century pilgrimage narrative, from an online scan of the Latin first edition. The result is not a polished scholarly translation, but a working text that gave me a good sense of a work previously unreadable to anyone without proficiency in Latin or Polish (the only language into which, to my knowledge, it had been translated). The role of the intellectual is being transformed—not replaced, but augmented in ways Bush could only sketch. This feels like something other than foam.

How to account for such a rapid shift? Manuel DeLanda offers one answer in A Thousand Years of Nonlinear History. Working in Braudel’s materialist tradition and drawing on Gilles Deleuze and complexity theory, DeLanda describes how flows—of trade, energy, and information—accumulate and concentrate until they cross a threshold and undergo a phase transition, radically reorganizing into a new stable state. But here is the key insight: intensification is la longue durée. The accumulation of flows that began with the Industrial Revolution—or perhaps with writing, agriculture, or even symbolic representation itself—is the deep structure behind our era. Steam, electricity, computing, the internet: each was a phase transition within a longer arc of intensification. Cities accelerate such processes, as Braudel showed, concentrating capital and labor until new forms of economic organization emerge—Venice, Antwerp, Amsterdam, London, each becoming sites at which the future arrived first. Such conditions are not opposed to la longue durée; they are the moments when intensification crosses a threshold.

The continued pace of change this year underscores that there has been no return to equilibrium. But this has been accompanied by unprecedented resistance to technology, appearing as simultaneous terror at its apocalyptic potential (for jobs, if nothing else) and dismissal of it as useless, especially among Gen Z. A January 2026 Civiqs survey found that 57 percent of Americans aged 18–34 view AI negatively—more than any other age group. Curiously, the seniors category, which now includes most boomers, was the least resistant to AI, followed by Gen X and older millennials, all groups that grew up seeing radical societal and technological changes.8 It seems paradoxical that the smartphone generation recoils from the tools of the future.

To understand this resistance means understanding the mentalité that shaped it—what Braudel’s successors in the Annales school called the collective psychology formed through lived experience.9 For Gen Z, that formative experience was network culture—both a successor to postmodernism and a form of collective psychology I did not fully understand at the time. When I wrote on network culture in 2008, it seemed to me that social media promised connection; instead, it brought division.10 The networked self was indeed constituted through networks, not merely isolated in postmodern fragmentation, but the fragmentation was now collective. Networked publics built barriers against one another, creating what Robert Putnam called cyberbalkanization: retreat into a comfortable niche among people just like oneself, views merely reinforcing views.11 Identity wars and mimetic conflict flared across filter bubbles that amplified outrage and tribal scapegoating as both MAGA and wokism built toxic online cultures. QAnon and a thousand other conspiracy theories propagated through Facebook groups and YouTube recommendations. Young men drifted into incel communities where loneliness became ideology and livestreaming mass shootings was celebrated. Influencers built their empires on hatred—Hasan Piker framed Hamas’s October 7 massacre as anticolonial resistance while Nick Fuentes celebrated mass shooters as vanguards of race war and civilizational collapse.

Nor did this just fragment culture—it exacted a massive psychic toll, as social contagion spread new forms of self-harm and mental illness. During the pandemic, teenage girls began presenting tic-like behaviors—not Tourette’s syndrome, but something researchers termed “mass social media-induced illness,”12 spread by TikTok videos about Tourette’s rather than by any actual disease. The vector was unprecedented, but the pattern was not unique. Eating disorders spread through thinspiration hashtags. Self-harm tutorials circulated on Instagram. The platforms that were supposed to bring us together instead spread desires, disorders, and identities through pure social contagion—and with them, violence and polarization. A generation that grew up inside this experiment—that watched it reshape their peers’ bodies, minds, and identities—is right to be skeptical of the next technological promise.

In 2010, it seemed network culture had a good chance of being recognized as the successor to postmodernism. Bruce Sterling and I were engaged in a kind of dialogue about it online. He predicted that network culture would last “about a decade before something else comes along.”13 And he was right, as I acknowledged in my 2020 Year in Review. By then, network culture was exhausted, and with the Covidean break, it seemed time for something new. In 2023, I taught a course at the New Centre for Research & Practice to try to broadly sketch the emerging era. It’s still early and hard to fathom, like trying to understand postmodernism in 1971 or network culture in 1998, but it’s clear that if postmodernism was underwritten by the explosion of mass media, network culture by the Internet, social media, and the smartphone, then the current era is shaped by AI.

But if Gen Z, scarred by the effects of social media, has been reacting with deep fear and anxiety, Sterling epitomizes the other reaction: dismissal. In the most recent State of the World, for example, he derides AI-generated content as “desiccated bullshit that can’t even bother to lie.” He compares the vibe-coding atmosphere to an acid trip, mocking the professionals who utter “mindblown stuff” like “we may be solving all of software” and “I have godlike powers now.” For Sterling, AI can produce nothing but slop. Now, Bruce has always had a healthy skepticism toward tech claims, but I can’t help but think of Johannes Trithemius, the fifteenth-century abbot who wrote De Laude Scriptorum just as Gutenberg’s press was spreading across Europe—defending the scriptorium against a technology he could not see would remake the world.

There are even deeper, more existential fears, and I’ve spent the past year addressing them on my blog, in the process laying the foundation for a book on the topic: AI as plagiarism machine; AI as hallucination engine; AI as stochastic parrot, mindlessly repeating what it has ingested (Sterling’s critique); and AI as uncanny double, too close to us for comfort. As I explain, the discomfort arises not from the machine’s otherness but from its likeness: a mirror held up to processes we preferred to believe were uniquely ours.

It’s no accident that I published these essays on my blog. As far as my personal year in review goes, this was very much the year of the blog. I have no plans to ever publish in an academic journal again. Why would I? Who would read it? Why would I want to publish something paywalled, reinforcing the walled gardens of inequality that academia is so desperate to maintain—even as it proclaims itself the champion of open inquiry and democratized knowledge? Academia has become the realm of what Peter Sloterdijk called cynical reason: rehearsing the tropes of ideology critique while knowing the game is empty and playing it anyway. This revolts me.

But for almost ten years now, since the shutting down of the labs at Columbia’s architecture school, I have been content to write from the position of the outsider, something I reflected on in “On the Golden Age of Blogging”. That essay was prompted by a strange comment from Scott Alexander, who lamented on Dwarkesh Patel’s podcast that he had personally made a strategic error in not blogging during what he called the “golden age,” imagining that “the people from that era all founded news organizations or something.” The golden age he remembers is a fiction, as golden ages often are—and he gets the stakes entirely wrong. Evan Williams founded Blogger in 1999, sold it to Google, co-founded Twitter, then created Medium, which convinced hapless readers to pay to read slop long before AI slop was ever a thing. The early bloggers who sought professionalization found themselves absorbed into the worst of the worst, writing for BuzzFeed, peddling nostalgia listicles that rotted psyches.

There was, however, a golden age for me, and I miss it: the architecture blogging community circa 2007—Owen Hatherley, Geoff Manaugh, Enrique Ramirez, Fred Scharmen, Sam Jacob, Mimi Zeiger (whose Loud Paper was less a blog and more a zine, but a key part of the culture), and others. We inherited from zine culture an informal, conversational tone and the will to stand outside architectural spectacle. But ArchDaily and Dezeen commercialized the form, shifting from independent critique to marketing and product. Startup culture absorbed architectural talent.

Blogging was powerful precisely because we had no stakes in it—we owned and controlled our means of intellectual production. The golden age of blogging is not in the past; it is now. After years of proclaiming I would blog more, in 2025, I really did. I wrote over 83,700 words on varnelis.net and the Florilegium—essay-length pieces on landscape, native plants, AI and art, architecture, infrastructure, politics, and tourism. My only regret is that my presidency at the Native Plant Society of New Jersey consumes so much of my thinking about native plants that little remains for writing. But the time will come, and if nothing else, my investigation of the Japanese garden aesthetic should point the direction for my future writing on landscape.

I also continued to make AI art, or to be more precise, what I called stochastic histories. A major project was a substantial reworking of The Lost Canals of Vilnius, a counterfactual history in which, after the Great Fire of 1610, Voivode Mikalojus Radvila Našlaitėlis rebuilt the city with Venetian-style canals, complete with gondoliers, water processions, and a hybrid “Vilnius Venetian” architecture. As research, I used Gemini to translate Radvila’s sixteenth-century Latin pilgrimage narrative. AI, like photography or film, is what you make of it. Film is perhaps the better analogy—anyone can make a video. Making something worthwhile is another matter entirely. In December, I also completed East Coast/West Coast: After Bob and Nancy, a generative restaging of Nancy Holt and Robert Smithson’s 1969 video dialogue using two AI speakers.

There were other substantial essays, too. In “Oversaturation: On Tourism and the Image”, I finally put down on paper something I had wanted the Netlab to address while at Columbia, but that proved too dangerous for the school to support. Universities cannot critique the very systems of overproduction they depend upon for survival. Publish or perish and endless symposia nobody is interested in are the academic versions of overproduction, but more than that, any architecture school claiming global currency cannot afford to offend either other institutions, like museums, that give it legitimacy, or, for that matter, the trustees that fund both. As I point out, tourism has always been mediated by imagery; take Piranesi’s vedute or the Claude Glass. Grand Tourists always had representations at hand to interpret their direct experience—but a new crisis point has been reached with both overtourism and the overproduction of images. Algorithmic logic now reorganizes cultural geography around “most Instagrammable spots,” making historical significance secondary to content potential. The Fushimi Inari shrine in Kyoto is the case in point—a 1,300-year-old shrine that Instagram made famous and that has now ceased to serve as a religious site due to the influx of visitors. The Japanese have a term for this: kankō kōgai, tourism pollution. Tourism has become the paradigm of contemporary experience—the production of imagery without cultural meaning; everything feeds the same algorithmic mill. Even strategies of resistance get metabolized—slow travel becomes a hashtag, psychogeography becomes an Instagram guide.

The Bilbao effect, which was a major driver of oversaturation, was itself a product of globalization. Hans Ibelings coined “supermodernism” in 1998 to refer to the architectural expression of Marc Augé’s “non-places,” an architecture optimized for the perpetual circulation of bodies and capital. It was the architecture of network culture, of the Concorde and the Internet. Koolhaas diagnosed its endgame in his 2002 “Junkspace”—“Regurgitation is the new creativity”—and then, tellingly, stopped writing. Today, network culture is long gone; nationalism is on the rise. The Internet is a dark forest now,14 while the disconnected life gains prestige.15 The most exclusive resorts now advertise no Wi-Fi, no cell service, no addresses—only coordinates. Disconnection has become the ultimate luxury, sold back to the same people who built the infrastructure of connection. More cities are alarmed by the effects of overtourism than desire to attract tourists. In the US, new architectural proposals appeal to a retardataire aesthetic—Trump displaying a triumphal arch inspired by Albert Speer, commemorating nothing in particular, in models in three sizes (“I happen to think the large looks the best”), a four-hundred-million-dollar ballroom modeled on Mar-a-Lago, an executive order mandating classical architecture for federal buildings that Stephen Miller explicitly framed as culture war.

Yet both Bilbao and MAGA are spectacle, architecture-as-branding. But the Bilbao effect is imploding. No city believes anymore that a signature building by a starchitect will transform its fortunes. The parametricists have nothing left to say. Parametric design promised formal liberation—responsive, site-specific, computationally derived—but what it delivered was the most efficient, ugliest box. If the promise was the blob, the reality is the “5-over-1”: wood-frame residential floors stacked on a concrete podium with ground-floor retail, wrapped in a pastiche of brick veneer, fiber cement panels, and that obligatory conical turret element meant to signal “we thought about this corner.” As for AI-generated architecture, it is merely boring—giant sequoias hollowed out as apartment buildings, white concrete towers with impossible cantilevers, and lush vegetation sprouting from every surface—the same utopian fantasy rendered a thousand times over. These are renders of renders: AI trained on architectural visualization produces visualizations that are utterly disconnected from any tectonic reality. A new generation may emerge in response to new needs, but for now, the discipline has lost its cultural purchase. Architecture, for us, is a thing of the past.

The art world, too, has slowed. Museums are putting on fewer shows, shifting from aggressive schedules to longer, more deliberate exhibitions—or simply cutting programming as budgets tighten.16 The frantic pace of the Biennale circuit has exhausted dealers and collectors alike; smaller fairs are folding, and even the major ones feel like obligations rather than events. Galleries that survived the pandemic are now closing quietly, without the drama of a market crash—just a slow bleed of foot traffic, sales, and cultural attention. There is no new movement, no emergent critical framework, no sense of direction. The market churns on—auction prices for blue-chip artists remain high, collectors still speculate, art advisors still advise—but the sense of cultural mission has dissipated. What remains is commerce without conviction, a field that has forgotten why it exists beyond the perpetuation of its own economy. The institutions that trained artists for this field are collapsing alongside it.

As enrollment dwindles, design schools are collapsing—not merely contracting, but ceasing to exist. Most recently, the California College of the Arts, the last remaining independent art and design school in the Bay Area, announced in January 2026 that it would close after the 2026–27 academic year.17 It follows a grim procession: the San Francisco Art Institute (2020), Mills College (2022), the Pennsylvania Academy of the Fine Arts (2023), and Woodbury University’s acquisition by Redlands and subsequent adjunctification—a fate that has methodically undone so many schools as faculty become contingent labor and institutions turn into hollow administrative structures run by well-paid, cost-optimizing consultants.

There is personal resonance for me in this. Simon’s Rock College of Bard, which shuttered its Great Barrington campus in 2025, was where I studied for my first two years before transferring to Cornell—a pioneer of early college education that offered a radical pedagogical experiment in what learning could be beyond conventional schooling. I arrived there straight from high school, as did my good friend and colleague Ed Keller; clearly, something interesting was in the water back then. Simon’s Rock made the development of young minds its central mission rather than an incidental focus of brand management or endowment growth, and its alumni list is impressive for such a small school. It has an afterlife at Bard, but it’s an echo at best.

The difference between these institutional deaths and simple market failure is this: they are not being replaced. When a retail business fails, another may open elsewhere. When a school closes, there is no succession. The market offers no alternative. Instead, what remains are the corporate university satellites—for-profit programs nested within larger institutions (like Woodbury’s absorption into Redlands), stripped of autonomy, their faculty reduced to precariat, their curricula bent toward what can be measured and marketed. The art schools that survive do so by transforming into something else: luxury finishing schools for wealthy families or research appendages to larger universities, where “design thinking” becomes another management consultant’s tool. The pedagogical mission—to create conditions where students might develop serious aesthetic judgment, where they might encounter genuine problems and be forced to think through them—is not merely challenged but impossible. The closure of these schools does not signal a failure of art education; it signals that the very idea of art education as something valuable in itself has been liquidated.

This hollowing out of cultural institutions is not incidental to the political moment—it is one of its hallmarks. Politically, most people have checked out. This is not 2017, when each provocation demanded a response; the outrage cycle has given way to numbness. In “National Populism as a Transitional Mode of Regulation”, I argued that Trump, Orbán, Meloni, and their ilk represent not a return to fascism but something new: the authoritarian management of declining expectations. National Populism correctly identifies that neoliberalism’s promise of shared prosperity has failed, but it channels legitimate grievances toward scapegoats rather than addressing the technological displacement actually causing them. This is its tragic irony: the National Populist base—workers made obsolete by neoliberalism and unable to participate in AI Capitalism—finds its legitimate anger directed into a movement that accelerates the very forces rendering them superfluous. Their value to capital lies in political disruption rather than economic production; they are consumers and voters, but no longer needed as workers. National Populist leaders offer psychological compensation—dignity, recognition, transgressive identity politics—rather than material improvement. The apocalyptic tenor of populist culture, its end-times thinking and conspiracy theories, provides a framework for populations sensing their own economic redundancy.

The alliance between tech billionaires and populist leaders is unstable. AI Capitalism requires borderless computation and global talent flows; nationalist protectionism contradicts these at every turn. Musk, Thiel, and Andreessen have aligned with the movement to dismantle the regulatory state, not because they share its vision but because populism serves as a useful battering ram against institutional constraints. Once those barriers fall, the movement and its human-centric concerns can be discarded. National Populism, as I conclude, is not the future—it is a political interlude, a transitional mode that will not survive contact with the economic forces it has helped unleash.

If National Populism is transitional, is there a positive vision that can replace it? In “After the Infrastructural City”, I responded to Ezra Klein and Derek Thompson’s book Abundance, perhaps the most influential book of 2025, which argues that America’s inability to build is a political choice, not a technical constraint. Their solution: streamline regulation, invest boldly, build more. It’s a compelling vision—and a necessary corrective to decades of paralysis. But Abundance shares a curious blindspot with Muskian pronatalism: both assume we need more people. Musk preaches that declining birthrates spell civilizational collapse; Klein and Thompson build their vision on populations that will mysteriously arrive to fill what’s built, perhaps by immigration. Neither accounts for the possibility that AI changes the equation entirely—that a smaller population, augmented by intelligent systems, might not be a crisis at all. Populations are already shrinking across much of the developed world. What I call “actually-existing degrowth”—not the voluntary eco-leftist kind, but the unplanned demographic contraction now underway in Japan, Korea, and much of Europe—is coming for the United States too. Declining birth rates, aging populations, and regional depopulation: these are not future scenarios but present facts.

This doesn’t invalidate the Abundance agenda; it redefines it. Abundance cannot mean building more for populations that will not arrive. It must mean building better, adaptive, intelligent infrastructure for smaller, older societies. AI, rather than merely destroying jobs, can help navigate this transition: smart grids, autonomous transit, predictive healthcare. The opportunity is real. Managed shrinkage, done well, can mean more livable cities, restored ecosystems, higher quality of life. The question is whether political leaders can articulate a vision of flourishing within limits—or whether nostalgia for growth will leave us building for a future that never comes.

Against the exhaustion of institutions, against the hollowing out of architecture and art, against the closure of the schools that trained people to imagine, the blog remains. It may not be much, but it is one independent voice outside the collapsing structures around me. I wrote over 83,000 words this year. I made art. I thought through problems that matter to me with the help of AI, which provided me with tools I could only have dreamt of merely a year ago. Today, I uploaded hundreds of thousands of words from my essays to a directory in Obsidian so that Claude could draw connections between them (see here for just how one can set this up).
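For readers curious about the mechanics, the gist of the setup is simply getting all the essays into one folder an agent can read. The following is a minimal, hypothetical sketch in Python—the paths and the `collect_essays` helper are my illustration, not a prescribed workflow—that gathers Markdown essays into a single Obsidian vault directory, after which an agent like Claude pointed at the vault can read across them.

```python
# Minimal sketch: gather Markdown essays scattered across subdirectories
# into one Obsidian vault folder so an AI agent can read across them.
# All paths and names here are hypothetical; adapt them to your own setup.
from pathlib import Path
import shutil

def collect_essays(source: Path, vault: Path) -> list[Path]:
    """Copy every .md file found under `source` into `vault`.

    Returns the list of copied files so you can confirm what the
    agent will see.
    """
    vault.mkdir(parents=True, exist_ok=True)
    copied = []
    for md in sorted(source.rglob("*.md")):
        dest = vault / md.name  # flatten: vault holds one folder of notes
        shutil.copy2(md, dest)  # copy2 preserves modification times
        copied.append(dest)
    return copied
```

Obsidian itself needs no special import step—a vault is just a folder of Markdown files—so once the essays are in place, the remaining work is pointing the agent at that directory.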

The future is already here—it just isn’t evenly distributed. Some are afraid or are still pretending AI isn’t happening. Phase transitions are uncomfortable. They are also where the interesting work gets done. One makes of one’s time what one makes.

1. William Gibson, quoted in Scott Rosenberg, “Virtual Reality Check: Digital Daydreams, Cyberspace Nightmares,” San Francisco Examiner, April 19, 1992, Style section, C1. This is the earliest verified print citation, unearthed by Fred Shapiro, editor of the Yale Book of Quotations.

2. Fernand Braudel, The Mediterranean and the Mediterranean World in the Age of Philip II, trans. Siân Reynolds (New York: Harper & Row, 1972), 21.

3. Braudel, The Mediterranean, 901.

4. Fernand Braudel, “Personal Testimony,” Journal of Modern History 44, no. 4 (December 1972): 448–67.

5. Paule Braudel, “Les origines intellectuelles de Fernand Braudel: un témoignage,” Annales: Histoire, Sciences Sociales 47, no. 1 (1992): 237–44.

6. Howard Caygill, “Braudel’s Prison Notebooks,” History Workshop Journal 57 (Spring 2004): 151–60.

7. Vannevar Bush, “As We May Think,” The Atlantic Monthly 176, no. 1 (July 1945): 101–8.

8. Civiqs, “Do you think that the increasing use of artificial intelligence, or AI, is a good thing or a bad thing?,” January 2026, https://civiqs.com/results/ai_good_or_bad.

9. The concept of mentalités emerged from studies of phenomena like the witch trials, where beliefs and fears spread through communities in ways that could not be reduced to individual irrationality. For an overview of mentalités as a historiographical concept, see Jacques Le Goff, “Mentalities: A History of Ambiguities,” in Constructing the Past: Essays in Historical Methodology, ed. Jacques Le Goff and Pierre Nora (Cambridge: Cambridge University Press, 1985), 166–180.

10. Kazys Varnelis, “The Rise of Network Culture,” in Networked Publics (Cambridge: MIT Press, 2008), 145–160.

11. Robert Putnam, “The Other Pin Drops,” Inc., May 16, 2000.

12. Kirsten R. Müller-Vahl et al., “Stop That! It’s Not Tourette’s but a New Type of Mass Sociogenic Illness,” Brain 145, no. 2 (August 2021): 476–480, https://pubmed.ncbi.nlm.nih.gov/34424292/.

13. Bruce Sterling, “Atemporality for the Creative Artist,” keynote address, Transmediale 10, Berlin, February 6, 2010.

14. Yancey Strickler, “The Dark Forest Theory of the Internet,” 2019, https://www.ystrickler.com/the-dark-forest-theory-of-the-internet/. See also The Dark Forest Anthology of the Internet (Metalabel, 2024).

15. “Trend: Not Just Digital Detox, But Analog Travel,” Global Wellness Summit, 2025, https://www.globalwellnesssummit.com/blog/trend-not-just-digital-detox-but-analog-travel/.

16. “The Big Slowdown: Why Museums and Galleries Are Putting on Fewer Shows,” The Art Newspaper, March 10, 2025, https://www.theartnewspaper.com/2025/03/10/the-big-slowdown-why-museums-and-galleries-are-putting-on-fewer-shows.

17. California College of the Arts, the last remaining private art and design school in the Bay Area, announced in January 2026 that it would close after the 2026–27 academic year. See “‘Nowhere Left to Go’: As California College of the Arts Closes, So Does a Pathway for Bay Area Artists,” KQED, January 13, 2026, https://www.kqed.org/news/12070453/nowhere-left-to-go-as-california-college-of-the-arts-closes-so-does-a-pathway-for-bay-area-artists.

Infrastructural City, New Jersey Style

Although the final nail hasn’t been hammered into the coffin, New Jersey Governor Christopher Christie has unilaterally canceled ARC (Access to the Region’s Core), a new tunnel that would connect New York City to New Jersey.

Now, ARC itself is a damaged project. Instead of ending in Penn Station, or having any hope of exiting into a future Moynihan Station (the plan to reconstruct the Beaux-Arts post office across the street into a twenty-first-century version of the glorious old Penn Station that greeted travelers prior to the 1960s), ARC was, due to politics and the complexities of existing infrastructure, to terminate off-site and deep underground, making arrival at Moynihan Station impossible and complicating connections to other rail lines.

The Infrastructural City’s lesson is that if you give constituents and politicians enough power and build a complex enough civilization, one in which notions of civil society are replaced by ideas of property rights, you are going to bring future growth to a crashing halt. So Los Angeles strangles on itself.

The creative destruction of the New York City of consensus and big projects by a succession of mayors since Ed Koch certainly helped its recovery. Finance has done very well, and the city has become a playground for the wealthy even as manufacturing and the middle class have been eviscerated. But for now, the city is still unsustainable without the large numbers of commuters who work in the towers throughout Manhattan. This is a dirty secret that Manhattanites, including all too many architects and urbanists, don’t want to admit. I haven’t found a comprehensive source of statistics this morning, so my figures are a little cobbled together. Still, at least 900,000 commuters enter Manhattan every day via New Jersey Transit, the Long Island Rail Road, the Port Authority rail lines, and the buses that go in and out of the Port Authority terminal. In contrast, only some 628,000 Manhattan residents work on the island (what do the rest of the 1.2 million people do?), and some 880,000 workers commute in from the other boroughs. Again, don’t rely on these figures too much, but they seem roughly on target in suggesting that the majority of commuting into the city comes from the suburbs.

But infrastructure between the city and the suburbs is at a breaking point. Amtrak has been starved of funds for decades, and its tracks and tunnels are in a horrific state of disrepair. Since New Jersey Transit shares Amtrak’s lines in and out of the city, it faces congestion caused by constant technical glitches on the aged, overstressed tracks. And since Amtrak owns the lines, it gets priority when only one of the two tunnels in and out of the city is running.

Now, Christie’s constituency is residents who don’t commute to New York. On paper, his motivation is the opportunity to use ARC funding for highway repairs. Still, he’s a Republican, and when Republicans are involved it’s hard not to entertain conspiracy theories. In particular, it’s plausible that part of the economic mess the country is in is due to the "starve the beast" policies of a generation of conservatives. Through profligate tax cuts, starving the beast was meant to create fiscal conditions that would force massive cuts in government services. The impossible situation we face today is arguably the result. No matter how utterly incompetent the Obama administration has been, there is little question that its hands have been tied by the massive deficit and debt incurred by the Bush administration. Applying this sort of reasoning to Christie’s move, it’s plausible to imagine it as an anti-city project, aimed at making commuting in and out of the city so much more difficult that workers and, more importantly, corporations are forced either to move into the city (unlikely, given current demographic flows) or further out into exurban areas. These areas have historically been more conservative (this has a bit to do with the lack of shared infrastructure, roads aside, and the insulation that exurbanites feel from the poor). In other words, canceling ARC is a foresighted move that will likely make it impossible for Christie to get re-elected (given the money and votes concentrated in the commuting suburbs) but will make possible a shift further rightward in state politics over the next several decades and, in turn, help undermine Manhattan’s future.

Read the Infrastructural City

I’m delighted to announce that the good people at m.ammoth.us have organized an online reading group to read The Infrastructural City. Find out more at their site.

Like Networked Publics, the Infrastructural City has become a long-term project that goes beyond the bounds of Los Angeles. I’m currently immersed in the Network Culture book, but I have plans for a follow-up article to my introduction to the Infrastructural City later this year, and maybe even a book sometime later.

Strange Harvest on Infrastructural City

Over at Strange Harvest, Sam Jacob has a review of the Infrastructural City. The review is great: perceptive as always, Sam gets what we set out to do with the book. Thanks, Sam! In other news, it looks like Amazon will carry the paperback edition for a shade over $20 when it becomes available, making it much more affordable than before, though for some reason the book is still not in stock. Sadly, the infrastructure of books seems to be subject to the same negative conditions we observed in Los Angeles.

The Infrastructural City in Paperback

I am delighted to announce that ACTAR’s reprint of the Infrastructural City is now in the U.S. and available in paperback for $10 less than the hardcover edition. Order yours at your favorite bookstore or at Amazon.

The first printing sold out in just four months; it’s great to have it back in stock.

In other news, I was sick for most of August, hence the dearth of posts, but I am feeling much better and am excited about the coming fall semester, returning to writing, and to the blog.

Lights out in London

As the summer wears on, it seems like we’ve put all the craziness of earlier this year behind us. Critics are no longer proposing OMA-designed windmills for Marina del Rey. Good thing. It’s time to look carefully at the lessons of the Infrastructural City and think about its conclusions since, well, they aren’t pretty.

Make no mistake, there is no happy ending in the Infrastructural City, no easy recipe for fixing our infrastructural ills. This has puzzled a generation of critics, who’ve seen the book as Marxist, overly cynical,* or confusing. The problem for them is that they grew up in the last decade, an era in which there was always a technological innovation around the corner. But that innovation is about to run aground in a vicious tangle of actor-networks.

To be clear, this isn’t a golden opportunity for designers. It’s a crisis we haven’t seen since the 1980s, and it’s not just in Los Angeles. The same forces of NIMBYist political stalemate and neoliberal deregulation that are undoing the Southwest can be found worldwide. How about daily sub-Saharan-Africa-style power shortages in the UK within a decade or two? The Economist has more here.

Meanwhile, the New York Times marks the sixth anniversary of the 2003 New York City blackout with a photo essay. Maybe we’ll have a chance to see more of this in our new bad future.

*Which doesn’t make sense to me. I hold Peter Sloterdijk’s definition of cynicism: knowing that what you are doing is wrong but doing it anyway. Thus, most architecture and most architecture criticism is cynical. Most green projects are cynical. Whole Foods is cynical. How is raising the alarm cynical?

infrastructure, the lives of things, and stimulus

Obviously, technological optimism is common in network culture. It’s only natural: we experience technological improvements every day. A decade ago I spent $1,500 on my first digital camera. Yesterday I gave my six-year-old daughter a digital camera for her birthday; it was smaller and handily outperformed that original camera at less than one-fifteenth of the cost. Last year the iPhone 3G came out, and now I’ve stopped plotting out the route to an unknown destination before I get on my way. During the last year I finally got rid of my last desktop machine in favor of a laptop, which I set to automatically back up my hard drive over the wireless network whenever I am at home. Of course I’m a bit of a geek by inclination and profession, but if you’re reading this blog, I’m sure you’re familiar with this rapid pace of change firsthand.

So it’s normal to extend our technological optimism beyond the home, to the city for example. But there’s another aspect of network culture that balances out technological optimism: non-human systems have drives of their own. A relatively new branch of sociology, actor-network theory (ANT), tries to make sense of this. Here’s a quote from Ole Hanseth and Eric Monteiro’s book Understanding Information Infrastructure that sums up the main point:

The term "actor network", the A and N in ANT, is not very illuminating. It is hardly obvious what the term implies. The idea, however, is fairly simple. When going about doing your business — driving your car or writing a document using a word-processor — there are a lot of things that influence how you do it. For instance, when driving a car, you are influenced by traffic regulations, prior driving experience and the car’s manoeuvring abilities, the use of a word-processor is influenced by earlier experience using it, the functionality of the word-processor and so forth. All of these factors are related or connected to how you act. You do not go about doing your business in a total vacuum but rather under the influence of a wide range of surrounding factors. The act you are carrying out and all of these influencing factors should be considered together. This is exactly what the term actor network accomplishes. An actor network, then, is the act linked together with all of its influencing factors (which again are linked), producing a network.

We all know how frustrating technology can be when by design or by accident it prevents us from doing what we wanted to. You lose your iTunes library on your drive and you can’t copy it back off your iPod or re-download it from the store, a faulty fuel sensor puts your car in limp-home mode, your remote control can’t talk to your DVD player and so on. 

By design, The Infrastructural City is intended for a general audience; it’s not unacademic, but I also didn’t want to weigh it down with too much theory, and since none of my authors were sociologists, I didn’t ask anyone to address ANT. But one of the book’s chief lessons, even the main lesson, is that infrastructures themselves are actors. The Los Angeles River is not natural anymore; it’s something else entirely. We are traffic, and because we aren’t going to change our behavior, adding more lanes to freeways isn’t going to work.

Understanding human and non-human systems together puts The Infrastructural City in a lineage starting with Anton Wagner’s 1935 Los Angeles: Werden, Leben und Gestalt der Zweimillionenstadt in Südkalifornien and extending through Banham’s 1971 Los Angeles: The Architecture of Four Ecologies. Thirty-six years elapsed between the first two books, and another thirty-seven passed before our book came out. For both Wagner and Banham, cities were ecologies. Wagner, sponsored by the Nazi government, saw this quite literally: the Anglo-Saxon settlers in Southern California were shaped by the landscape. If Wagner’s sponsorship and eugenic thesis are repulsive, his idea of understanding setting and settlers together was groundbreaking. Building on Wagner, Banham saw the city as composed of discrete landscapes (ecologies) populated by specific clusters of individuals who gave rise to specific kinds of buildings.

Inexorably, the man-made has become more important. But acts of human volition—building a work of quality architecture, say, or even spearheading an infrastructural initiative—are fading in favor of complex systems, actors that we have shaped but that have evolved "lives" of their own.     

These resulting "actors" have wills that can get in our way at the least opportune time. As a general rule, the more complex the system, the stronger its will. I’ll give away a further clue that I hid in our book: where possible, I tried to show the traces of other infrastructural ecologies in the photographs with which I illustrated the essays. Can you find the frankenpine in the opening spread of the essay on the L.A. River? As these "ecologies" (or, as David Fletcher calls them in his essay on the river, "freakologies") interact and network together, they become much harder to control.

Another thesis of the book is that many of these systems are invisible; an actor doesn’t have to be visible, or even physically formed, to have a will of its own. Social structures can also be actors. This is most evident today in the glaring absence of infrastructure from the economic stimulus plan.

There are a lot of false hopes out there about the plan, and I’ve been doing what I can to get the truth out, especially since the LA Times review of our book got the story about the plan so sadly, painfully wrong. For the real story, take a look at this piece from the Boston Globe: Only 5 percent of $819b plan would go toward infrastructure.

A graphic displays the stark reality.

I quote the Globe: 

The chairman of the transportation panel’s subcommittee on highways and transit, Peter DeFazio of Oregon, became so angry about the reduction in transportation spending that he recently accused Obama’s top economic adviser, Lawrence Summers, of arguing against such funds because he "hates infrastructure."

The Globe piece observes that the Obama administration hints at future funding for infrastructure, but thus far it has given fans of infrastructure precious little reason to believe in it.

Instead of agreeing with Peter DeFazio and pinning the blame on one nefarious individual, I’d like to suggest an actor-network-theory reason for the failure.

Political systems have a life of their own. Obama’s administration has to fulfill immediate goals like passing the bill and making it seem like the average American is getting relief. Complex infrastructure projects take decades to build, unless you are in China, and after last week we know very well what cutting corners will do. For political reasons, Obama doesn’t have decades to wait, so even though he gives the impression of being a strong-willed, inspirational individual who wants to up-end the political machine, he is going for the quick fix.

In other words, we’ve created political ecologies that are going to stand in the way of moves to fund infrastructure.

What to do, then? This is the subject for future posts, but I’ll suggest two things. We need to face up to the underlying political structures that prevent infrastructural spending, no matter that it is impossible to condense these into a sound bite, and we need to use advanced technologies to invent new kinds of infrastructures, augmenting existing conditions. Ubiquitous computing is already here, Mike Kuniavsky suggests. How can we use it to overcome the rising problems of life in the city?

Back to Infrastructure

Christopher Hawthorne has a largely favorable review of The Infrastructural City in the Los Angeles Times today. I was delighted by the attention although disappointed by how he got tripped up in some naïve assumptions. Unfortunately even though ACTAR had sent him my contact info, Hawthorne rushed his article to press, missing his opportunity to think through the book’s main points.   

I laughed out loud at Hawthorne’s opening lines, in which he suggests that the book would have had "a tough time steering clear of the remainder bin" if it weren’t for the stimulus package, or that I didn’t expect infrastructure to be trotted out as part of the stimulus plan and was taken by surprise.

It’s true that infrastructure was once the least sexy of topics, a term barely used in English as late as the 1960s, but as Ian Baldwin, my former student at Penn, observed, it spread widely after the publication of America in Ruins, co-authored by economist Pat Choate and Susan Walters. The authors of that report suggested that some $2.5 trillion would be needed just to keep the country’s infrastructure functioning at a constant level into the mid-1990s. Infrastructure, as a concept referring to a bundle of physical service networks, became visible in its collapse. That money was, of course, never allocated.  

Over the next two decades, infrastructure continued to rise in the public eye, in large part because, as our book points out, it is in a state of constant failure. This is something that virtually all of us experience. Angelenos, navigating over-crowded streets and freeways at a snail’s pace, understand it viscerally. We’ve also come to understand that there are going to be no great new projects: only architects and reporters seem to believe otherwise. The immense expense of construction, empty government coffers, and NIMBYism will take care of that. But the New-Bad-Future-right-now of infrastructure defines our cities today, much as the lifestyles of Banham’s ecologies defined the urbanism of his day. As humans and objects interact ever more directly (look at the actor-network theory of Bruno Latour, for example), the lives of these systems become more and more important.     

Architects have long been interested in infrastructure. Starting in the Renaissance, copies of Vitruvius’s Ten Books on Architecture typically carried Frontinus’s essay on aqueducts as an appendix. Later, Piranesi likely drew inspiration from the Cloaca Maxima, the Roman sewer, for his Carceri. Infrastructure is a lost fantasy object, taunting us with the suggestion that those aspects of the city that escaped from architecture could once more be under our purview. In Los Angeles, the avant-garde scene came together at the West Coast Gateway competition of 1989. Though the West Coast Gateway was never built, Gary Paige’s reconstruction of an abandoned railway depot into the downtown SCI_Arc building a decade later is the most inspired large project in the city in decades. In designing the building, Paige drew on the theories of Stan Allen, now dean at Princeton and an advocate of thinking about architecture and urbanism in infrastructural terms. Infrastructure is hardly a topic for the remainder bins, at least not for architects.

As for the stimulus plan: there was certainly some rhetoric last fall suggesting that a new era of WPA projects was upon us, but as I’ve pointed out, this is hardly the case if one actually looks at what’s in the plan, as I did here. Hawthorne isn’t a political reporter, so he missed this, but it’s crucial, and there’s no excuse for not doing your homework.

The Obama administration is not spending a significant amount of money on infrastructure. My definition of infrastructure is broad, certainly broader than Hawthorne’s, but this plan does not initiate much new funding for infrastructure, not unless you count the construction of hospitals (note to unemployed architects: there will be a bit of work building hospitals) or the digitization of health care records. The plan is an amalgam of tax cuts bundled with triage for various government programs that were underfunded during the Bush administration. Those are the facts. Whether Obama changed his mind or infrastructure was simply an easy-to-understand term he deployed as bait, this is by no means the return of the WPA.

Perversely, this may not be a bad thing. Take a look at Eric Janszen’s article "The Next Bubble" at Harper’s. Back in February of last year, months before the crisis had revealed its full dimensions to the unwary, long before the rhetoric about the stimulus plan, when Hawthorne was still hunting remainder bins looking for books on infrastructure, Janszen cautioned that infrastructure might be the cause of the next bubble.  

But the money spent on last fall’s bailout, together with the funds allocated for the stimulus plan, pretty much ensures that the government is going to have its fiscal hands tied for some time to come. In other words, the stimulus plan will not only fail to fund further large infrastructural initiatives, it will prevent them from being built in the first place. An infrastructural bubble isn’t coming. 

Now, I do anticipate that in the coming years industry will have some success getting funds allocated to subsidize the construction of supposedly sustainable energy sources. Most of these are sustainable neither environmentally nor financially, and once the economy shifts again, the results may look something like this abandoned solar energy plant, built in the early 1980s, then dismantled and left to rot like a field of dying date palms when the financial models failed.

[Image: abandoned solar power plant, via CLUI’s Land Use Database]

The real infrastructure to watch will be the network. It’ll continue its growth, the invisible layer of soft infrastructure exerting more and more influence over our lives even as it becomes more distributed and more privatized. As Rick Miller and Ted Kane point out in their chapter on mobile phones, it is by no means positive that the public interest has been placed in the hands of private interests. This is a challenge that the country needs to confront and I see little political will to do so. Hawthorne’s failure to mention it suggests that it’s still beyond the scope of a story in a Sunday paper. That’s a shame.   

The end of Hawthorne’s piece is flat-footed. Instead of confronting the consequences of our conclusions, he appeals to the fantasy of infrastructure as the next architectural object, trotting out the idea that architects need to be involved in the design of the new infrastructural America. 

In the unlikely event that a burst of hard infrastructure spending somehow takes place, I hardly think it will save architecture. Why would a cash-strapped government pay for design now when it has never done so before? You could say that the great turning point in infrastructure took place at the George Washington Bridge, when the engineers decided it looked fine as it was and chose not to give it a stone cladding. But we’re not even brave enough for that today. The palm tree on our cover says it all: a cell phone tower is not designed by Frank Gehry; it is designed to look like a palm tree. We’re probably the better for it: the host of architect-designed subway stations in Los Angeles was largely an embarrassment. If by some miracle architects get on board with infrastructure, NIMBYism will make sure that new infrastructure looks even more contextual. Imagine cell phone trees disguised as mission bells rising throughout Los Angeles, hardly what any of us want.

I was happy to see Hawthorne finish his article with a pot-shot at the architect-as-icon. It’s nice that the critique is trickling down. But Hawthorne doesn’t go far enough with his recommendations for the profession. Please save us from OMA-designed offshore wind farms. Architecture needs to re-invent itself in the face of the challenges of contemporary life. As Hawthorne suggests, architects need to take a page from engineers and embrace anonymity. More than that, they need to apply their tremendous imaginations and skill to reprogramming the world of network culture into something new and fantastic. Go read bldgblog, look back at what Andrea Branzi and Bernard Tschumi wrote decades ago: these guys had it right. Now more than ever, commonplace thinking about architecture’s role will be fatal.

In the coming week, I’ll follow up with a post containing my piece from Volume Magazine’s "bootleg" issue of Urban China in which I draw together the links between the book and the economic stimulus plan and suggest some more directions.

Goodbye Icons; Hello Infrastructure

Blair Kamin contributes to a growing chorus of voices announcing the end of the architectural icon, noting that infrastructure is the new focus under the Obama administration. But Kamin is critical: although he advocates infrastructural funding, he observes how little is actually being spent on it. Still, he suggests, the very fact that this debate is happening today is positive. Again, Kamin is right. I’m still working on a white paper on the lessons that The Infrastructural City has for us today, but for now I’m more convinced than ever that we need to very carefully re-envision infrastructure, not just build more of the same or, worst of all, turn it into a new architectural fetish.

in praise of trees

[Image: cell phone tree at Hunter Mountain]

I went skiing at Hunter Mountain in upstate New York for two days this week. It was a long-needed break for my wife and me. We had a great ski instructor, Peter Dunham, and after just a couple of hours of instruction we were skiing the advanced slopes with confidence. And, just to prove that the Infrastructural City is relevant anywhere, the top of the mountain was marked by a cell phone tree.

Warren Techentin’s essay on our new relationship with trees changed my view of cell phone trees. I’ve stopped thinking of them as cop-outs or disguises. After all, they rarely hide. Inadvertently, perhaps, the cell phone tower has turned from a disguise into something else: whereas the antennas of old symbolized the specialized nature of telecommunications in our lives, cell phone trees celebrate the augmented nature of our reality.