2025 in Review

It’s strange to measure every year against a concept developed by a science fiction writer, but William Gibson’s line “The future is already here—it’s just not evenly distributed”1 has been my north star for my recent year-in-review essays. Gibson meant that the future was unevenly distributed by class: the wealthy receive high-tech healthcare while the world’s poorest live in squalor—though one might ask which of these is really our future. Yet the quote has been repeatedly misread as a claim about time and space: that the future arrives somewhere first, perhaps unseen, while the rest of the world catches up. But this misreading is more productive than Gibson’s intent. Gibson’s critique of inequality is fair enough, but we all know this, decry it, and go on about our business. The misreading, on the other hand, is a theory of historical change.

With the release of ChatGPT in late 2022, a temporal rift opened, shattering the post-Covidean present. But many tried the early tools, encountered hallucinations, read articles about slop and imminent environmental ruin, and reasonably concluded there was nothing to see. By 2025, a cursory examination of news in AI would have assured them that AI had proved a bust. OpenAI’s long-awaited updates disappointed, and the company flailed, turning to social media with Sora, a TikTok clone for AI. Meta seemed to abandon its efforts to create a competitive AI and instead turned to content generation for Instagram and Facebook, something nobody on earth wanted. Talk of a bubble started among Wall Street pundits. The hype-to-disappointment cycle is familiar, and the dismissals were not unreasonable.

But again, the future isn’t evenly distributed, and if you don’t know where to look, you would be excused for believing it’s all hype. Looking past such failures, 2025 was actually a year of breakneck progress. Anthropic’s Claude emerged as the most capable system for complex tasks, Google’s Gemini became highly competitive, while DeepSeek and Moonshot AI proved that China was not far behind. More significant than any single model was the emergence of agentic AI—systems that can take on multi-step tasks, act, navigate filesystems, write and execute code, and work across documents. Claude Code was the year’s groundbreaking innovation. While “slop” was Merriam-Webster’s word of the year, “vibe coding”—using agents to write programs—was much more important. Not only could programmers use agents to accelerate their work; it also became possible for non-programmers to realize their ideas without any knowledge of code, a radical change in access I explored in “What Did Vibe Coding Just Do to the Commons?”.

By any first-world standards, at least, these tools are remarkably democratic and inexpensive. A basic Claude subscription costs about as much as a month of streaming, and even the $200 maximum usage account costs less than a monthly car payment. For many, however, the barrier is not price but something deeper—a resistance approaching revulsion. These tools provoke fear in a way that earlier technologies did not. It’s not the apocalyptic dread of the doomers or the Dark Mountain sensibility that apocalypse is near. Rather, it’s a threat to the sense that thought itself is what makes us distinct. The unevenness of the future is no longer about access; it’s now about willingness to engage.

As a scholar, I find thinking about the very short term strange. I have always been suspicious of claims that radical change was upon us. I would rather align myself with the French Annales school concept of la longue durée, as defined by the great Fernand Braudel: the long-term structures of geography and climate. Faster than that were the medium-term cycles of economies and states, while he dismissed the short-term événements of rulers and political events as “surface disturbances, crests of foam that the tides of history carry on their strong backs.”2 Events, he wrote elsewhere, “are the ephemera of history; they pass across its stage like fireflies, hardly glimpsed before they settle back into darkness and as often as not into oblivion.”3 The real forces operate beneath, slowly, often imperceptibly.

Curiously, Braudel himself embraced technological change in his own work. In the 1920s and 30s, he adapted an old motion-picture camera to photograph archival documents—2,000 to 3,000 pages per day across Mediterranean archives from Simancas to Dubrovnik. He later claimed to be “the first user of microfilms” for scholarly historical research.4 His wife Paule spent years reading the accumulated reels through what Braudel called “a simple magic lantern.”5 Captured in 1940, he spent five years as a prisoner of war and wrote the entire first draft of The Mediterranean—some 3,000 to 4,000 pages—from memory. Paule, meanwhile, retained access to the microfilm and notes in Paris, and after the war, they reconstructed the text, taking his manuscript, verifying it, and adding footnotes and references from the microfilm.6

In 1945, the same year Braudel was liberated, Vannevar Bush published “As We May Think,” in which he imagined a device he called the “Memex”: a mechanized desk storing a researcher’s entire library, indexed and cross-referenced, expandable through associative trails.7 The vision remained speculative for decades. Now the world’s archives are being digitized; AI systems translate, summarize, and search across them in seconds. To take one example, earlier this year, I used Google’s Gemini to translate the Hierosolymitana Peregrinatio of Mikalojus Kristupas Radvila Našlaitėlis, a sixteenth-century pilgrimage narrative, from an online scan of the Latin first edition. The result is not a polished scholarly translation, but a working text that gave me a good sense of a work previously unreadable to anyone without proficiency in Latin or Polish (the only language into which, to my knowledge, it had been translated). The role of the intellectual is being transformed—not replaced, but augmented in ways Bush could only sketch. This feels like something other than foam.

How to account for such a rapid shift? Manuel DeLanda offers one answer in A Thousand Years of Nonlinear History. Working in Braudel’s materialist tradition and drawing on Gilles Deleuze and complexity theory, DeLanda describes how flows—of trade, energy, and information—accumulate and concentrate until they cross a threshold and undergo a phase transition, radically reorganizing into a new stable state. But here is the key insight: intensification is la longue durée. The accumulation of flows that began with the Industrial Revolution—or perhaps with writing, agriculture, or even symbolic representation itself—is the deep structure behind our era. Steam, electricity, computing, the internet: each was a phase transition within a longer arc of intensification. Cities accelerate such processes, as Braudel showed, concentrating capital and labor until new forms of economic organization emerge—Venice, Antwerp, Amsterdam, London, each becoming sites at which the future arrived first. Such conditions are not opposed to la longue durée; they are the moments when intensification crosses a threshold.

The continued pace of change this year underscores that there has been no return to equilibrium. But this has been accompanied by unprecedented resistance to technology, appearing as simultaneous terror at its apocalyptic nature (in jobs, if nothing else) and dismissal as useless, especially among Gen Z. A January 2026 Civiqs survey found that 57 percent of Americans aged 18–34 view AI negatively—more than any other age group. Curiously, the seniors category, which now includes most boomers, was the least resistant to AI, followed by Gen X and older millennials, all groups that grew up seeing radical societal and technological changes.8 It seems paradoxical that the smartphone generation recoils from the tools of the future. To understand this resistance means understanding the mentalité that shaped it—what Braudel’s successors in the Annales school called the collective psychology formed through lived experience.9 For Gen Z, that formative experience was network culture—both a successor to postmodernism and a form of collective psychology I did not fully understand at the time. When I wrote on network culture in 2008, social media seemed to promise connection; instead, it brought division.10 The networked self was indeed constituted through networks, not merely isolated in postmodern fragmentation, but the fragmentation was now collective. Networked publics built barriers against one another, creating what Robert Putnam called cyberbalkanization: retreat into a comfortable niche among people just like oneself, views merely reinforcing views.11 Identity wars and mimetic conflict flared across filter bubbles that amplified outrage and tribal scapegoating as both MAGA and wokism built toxic online cultures. QAnon and a thousand other conspiracy theories propagated through Facebook groups and YouTube recommendations. Young men drifted into incel communities where loneliness became ideology and livestreaming mass shootings was celebrated.
Influencers built their empires on hatred—Hasan Piker framed Hamas’s October 7 massacre as anticolonial resistance while Nick Fuentes celebrated mass shooters as vanguards of race war and civilizational collapse.

Nor did this just fragment culture—it exacted a massive psychic toll, as social contagion spread new forms of self-harm and mental illness. During the pandemic, teenage girls began presenting tic-like behaviors—not Tourette’s syndrome, but something researchers termed “mass social media-induced illness,”12 spread by TikTok videos about Tourette’s rather than any actual disease. The pattern was unprecedented but not unique. Eating disorders spread through thinspiration hashtags. Self-harm tutorials circulated on Instagram. The platforms that were supposed to bring us together instead spread desires, disorders, and identities through pure social contagion—and with them, violence and polarization. A generation that grew up inside this experiment—that watched it reshape their peers’ bodies, minds, and identities—is right to be skeptical of the next technological promise.

In 2010, it seemed network culture might come to be understood as the successor to postmodernism. Bruce Sterling and I were engaged in a kind of dialogue about it online. He predicted that network culture would last “about a decade before something else comes along.”13 And he was right, as I acknowledged in my 2020 Year in Review. By then, network culture was exhausted, and with the Covidean break, it seemed time for something new. In 2023, I taught a course at the New Centre for Research & Practice to try to broadly sketch the emerging era. It’s still early and hard to fathom, like trying to understand postmodernism in 1971 or network culture in 1998, but it’s clear that if postmodernism was underwritten by the explosion of mass media, and network culture by the Internet, social media, and the smartphone, then the current era is shaped by AI.

But if Gen Z, scarred by the effects of social media, has been reacting with deep fear and anxiety, Sterling epitomizes the other reaction: dismissal. In the most recent State of the World, for example, he derides AI-generated content as “desiccated bullshit that can’t even bother to lie.” He compares the vibe-coding atmosphere to an acid trip, mocking the professionals who utter “mindblown stuff” like “we may be solving all of software” and “I have godlike powers now.” For Sterling, AI can produce nothing but slop. Now Bruce has always had a healthy skepticism toward tech claims, but I can’t help but think of Johannes Trithemius, the fifteenth-century abbot who wrote De Laude Scriptorum just as Gutenberg’s press was spreading across Europe—defending the scriptorium against a technology he could not see would remake the world.

There are even deeper, more existential fears, and I’ve spent the past year addressing them on my blog, in the process laying the foundation for a book on the topic: AI as plagiarism machine; AI as hallucination engine; AI as stochastic parrot, mindlessly repeating what it has ingested (Sterling’s critique); and AI as uncanny double, too close to us for comfort. As I explain, the discomfort arises not from the machine’s otherness but from its likeness: a mirror held up to processes we preferred to believe were uniquely ours.

It’s no accident that I published these essays on my blog. As far as my personal year in review goes, this was very much the year of the blog. I have no plans to ever publish in an academic journal again. Why would I? Who would read it? Why would I want to publish something paywalled, reinforcing the walled gardens of inequality that academia is so desperate to maintain—even as it proclaims itself the champion of open inquiry and democratized knowledge? Academia has become the realm of what Peter Sloterdijk called cynical reason: rehearsing the tropes of ideology critique while knowing the game is empty and playing it anyway. This revolts me.

But for almost ten years now, since the shutting down of the labs at Columbia’s architecture school, I have been content to write from the position of the outsider, something I reflected on in “On the Golden Age of Blogging”. That essay was prompted by a strange comment from Scott Alexander, who lamented on Dwarkesh Patel’s podcast that he had personally made a strategic error in not blogging during what he called the “golden age,” imagining that “the people from that era all founded news organizations or something.” The golden age he remembers is a fiction, as golden ages often are—and he gets the stakes entirely wrong. Evan Williams founded Blogger in 1999, sold it to Google, co-founded Twitter, then created Medium, which convinced hapless readers to pay to read slop long before AI slop was ever a thing. The early bloggers who sought professionalization found themselves absorbed into the worst of the worst, writing for BuzzFeed, peddling nostalgia listicles that rotted psyches.

There was, however, a golden age for me, and I miss it: the architecture blogging community circa 2007—Owen Hatherley, Geoff Manaugh, Enrique Ramirez, Fred Scharmen, Sam Jacob, Mimi Zeiger (whose Loud Paper was less a blog and more a zine, but a key part of the culture), and others. We inherited from zine culture an informal, conversational tone and the will to stand outside architectural spectacle. But ArchDaily and Dezeen commercialized the form, shifting from independent critique to marketing and product. Startup culture absorbed architectural talent.

Blogging was powerful precisely because we had no stakes in it—we owned and controlled our means of intellectual production. The golden age of blogging is not in the past; it is now. After years of proclaiming I would blog more, in 2025, I really did. I wrote over 83,700 words on varnelis.net and the Florilegium—essay-length pieces on landscape, native plants, AI and art, architecture, infrastructure, politics, and tourism. My only regret is that my presidency at the Native Plant Society of New Jersey consumes so much of my thinking about native plants that little remains for writing. But the time will come, and if nothing else, my investigation of the Japanese garden aesthetic should point toward the future direction of my writing on landscape.

I also continued to make AI art, or to be more precise, what I called stochastic histories. A major project was a substantial reworking of The Lost Canals of Vilnius, a counterfactual history in which, after the Great Fire of 1610, Voivode Mikalojus Radvila Našlaitėlis rebuilt the city with Venetian-style canals, complete with gondoliers, water processions, and a hybrid “Vilnius Venetian” architecture. As research, I used Gemini to translate Radvila’s sixteenth-century Latin pilgrimage narrative. AI, like photography or film, is what you make of it. Film is perhaps the better analogy—anyone can make a video. Making something worthwhile is another matter entirely. In December, I also completed East Coast/West Coast: After Bob and Nancy, a generative restaging of Nancy Holt and Robert Smithson’s 1969 video dialogue using two AI speakers.

There were other substantial essays, too. In “Oversaturation: On Tourism and the Image”, I finally put down on paper something I had wanted the Netlab to address while at Columbia, but that proved too dangerous for the school to support. Universities cannot critique the very systems of overproduction they depend upon for survival. Publish or perish and endless symposia nobody is interested in are the academic versions of overproduction, but more than that, any architecture school claiming global currency cannot afford to offend either other institutions, like museums, that give it legitimacy, or, for that matter, the trustees that fund both. As I point out, tourism has always been mediated by imagery; take Piranesi’s vedute or the Claude Glass. Grand Tourists always had representations at hand to interpret their direct experience—but a new crisis point has been reached with both overtourism and the overproduction of images. Algorithmic logic now reorganizes cultural geography around “most Instagrammable spots,” making historical significance secondary to content potential. The Fushimi Inari shrine in Kyoto is the case in point—a 1,300-year-old shrine that Instagram made famous and that has now ceased to serve as a religious site due to the influx of visitors. The Japanese have a term for this: kankō kōgai, tourism pollution. Tourism has become the paradigm of contemporary experience—the production of imagery without cultural meaning; everything feeds the same algorithmic mill. Even strategies of resistance get metabolized—slow travel becomes a hashtag, psychogeography becomes an Instagram guide.

The Bilbao effect, which was a major driver of oversaturation, was itself a product of globalization. Hans Ibelings coined “supermodernism” in 1998 to refer to the architectural expression of Marc Augé’s “non-places,” an architecture optimized for the perpetual circulation of bodies and capital. It was the architecture of network culture, of the Concorde and the Internet. Koolhaas diagnosed its endgame in his 2002 “Junkspace”—“Regurgitation is the new creativity”—and then, tellingly, stopped writing. Today, network culture is long gone; nationalism is on the rise. The Internet is a dark forest now,14 while the disconnected life is on the rise.15 The most exclusive resorts now advertise no Wi-Fi, no cell service, no addresses—only coordinates. Disconnection has become the ultimate luxury, sold back to the same people who built the infrastructure of connection. More cities are now alarmed by the effects of overtourism than are eager to attract tourists. In the US, new architectural proposals appeal to a retardataire aesthetic—Trump displaying models, in three sizes, of a triumphal arch inspired by Albert Speer that marks a triumph of nothing in particular (“I happen to think the large looks the best”), a four-hundred-million-dollar ballroom modeled on Mar-a-Lago, an executive order mandating classical architecture for federal buildings that Stephen Miller explicitly framed as culture war.

Yet both Bilbao and MAGA are spectacle, architecture-as-branding. But the Bilbao effect is imploding. No city believes anymore that a signature building by a starchitect will transform its fortunes. The parametricists have nothing left to say. Parametric design promised formal liberation—responsive, site-specific, computationally derived—but what it delivered was the most efficient, ugliest box. If the promise was the blob, the reality is the “5-over-1”: wood-frame residential floors stacked on a concrete podium with ground-floor retail, wrapped in a pastiche of brick veneer, fiber cement panels, and that obligatory conical turret element meant to signal “we thought about this corner.” As for AI-generated architecture, it is merely boring—giant sequoias hollowed out as apartment buildings, white concrete towers with impossible cantilevers, and lush vegetation sprouting from every surface—the same utopian fantasy rendered a thousand times over. These are renders of renders: AI trained on architectural visualization produces visualizations that are utterly disconnected from any tectonic reality. A new generation may emerge in response to new needs, but for now, the discipline has lost its cultural purchase. Architecture, for us, is a thing of the past.

The art world, too, has slowed. Museums are putting on fewer shows, shifting from aggressive schedules to longer, more deliberate exhibitions—or simply cutting programming as budgets tighten.16 The frantic pace of the Biennale circuit has exhausted dealers and collectors alike; smaller fairs are folding, and even the major ones feel like obligations rather than events. Galleries that survived the pandemic are now closing quietly, without the drama of a market crash—just a slow bleed of foot traffic, sales, and cultural attention. There is no new movement, no emergent critical framework, no sense of direction. The market churns on—auction prices for blue-chip artists remain high, collectors still speculate, art advisors still advise—but the sense of cultural mission has dissipated. What remains is commerce without conviction, a field that has forgotten why it exists beyond the perpetuation of its own economy. The institutions that trained artists for this field are collapsing alongside it.

As enrollment dwindles, design schools are collapsing—not merely contracting, but ceasing to exist. Most recently, the California College of the Arts, the last remaining independent art and design school in the Bay Area, announced in January 2026 that it would close after the 2026–27 academic year.17 It follows a grim procession: the San Francisco Art Institute (2020), Mills College (2022), the Pennsylvania Academy of the Fine Arts (2023), and Woodbury University’s acquisition by Redlands and subsequent adjunctification—a fate that has methodically undone so many schools as faculty become contingent labor and institutions turn into hollow administrative structures run by well-paid, cost-optimizing consultants.

There is personal resonance for me in this. Simon’s Rock College of Bard, which shuttered its Great Barrington campus in 2025, was where I studied for my first two years before transferring to Cornell—a pioneer of early college education that offered a radical pedagogical experiment in what learning could be beyond conventional schooling. I arrived there straight from high school, as did my good friend and colleague Ed Keller; clearly, something interesting was in the water back then. Simon’s Rock made the development of young minds its central mission rather than an incidental focus of brand management or endowment growth, and its alumni list is impressive for such a small school. It has an afterlife at Bard, but it’s an echo at best.

The difference between these institutional deaths and simple market failure is this: they are not being replaced. When a retail business fails, another may open elsewhere. When a school closes, there is no succession. The market offers no alternative. Instead, what remains are the corporate university satellites—for-profit programs nested within larger institutions (like Woodbury’s absorption into Redlands), stripped of autonomy, their faculty reduced to precariat, their curricula bent toward what can be measured and marketed. The art schools that survive do so by transforming into something else: luxury finishing schools for wealthy families or research appendages to larger universities, where “design thinking” becomes another management consultant’s tool. The pedagogical mission—to create conditions where students might develop serious aesthetic judgment, where they might encounter genuine problems and be forced to think through them—is not merely challenged but impossible. The closure of these schools does not signal a failure of art education; it signals that the very idea of art education as something valuable in itself has been liquidated.

This hollowing out of cultural institutions is not incidental to the political moment—it is one of its hallmarks. Politically, most people have checked out. This is not 2017, when each provocation demanded a response; the outrage cycle has given way to numbness. In “National Populism as a Transitional Mode of Regulation”, I argued that Trump, Orbán, Meloni, and their ilk represent not a return to fascism but something new: the authoritarian management of declining expectations. National Populism correctly identifies that neoliberalism’s promise of shared prosperity has failed, but it channels legitimate grievances toward scapegoats rather than addressing the technological displacement actually causing them. This is its tragic irony: the National Populist base—workers made obsolete by neoliberalism and unable to participate in AI Capitalism—finds its legitimate anger directed into a movement that accelerates the very forces rendering them superfluous. Their value to capital lies in political disruption rather than economic production; they are consumers and voters, but no longer needed as workers. National Populist leaders offer psychological compensation—dignity, recognition, transgressive identity politics—rather than material improvement. The apocalyptic tenor of populist culture, its end-times thinking and conspiracy theories, provides a framework for populations sensing their own economic redundancy.

The alliance between tech billionaires and populist leaders is unstable. AI Capitalism requires borderless computation and global talent flows; nationalist protectionism contradicts these at every turn. Musk, Thiel, and Andreessen have aligned with the movement to dismantle the regulatory state, not because they share its vision but because populism serves as a useful battering ram against institutional constraints. Once those barriers fall, the movement and its human-centric concerns can be discarded. National Populism, as I conclude, is not the future—it is a political interlude, a transitional mode that will not survive contact with the economic forces it has helped unleash.

If National Populism is transitional, is there a positive vision that can replace it? In “After the Infrastructural City”, I responded to Ezra Klein and Derek Thompson’s book Abundance, perhaps the most influential book of 2025, which argues that America’s inability to build is a political choice, not a technical constraint. Their solution: streamline regulation, invest boldly, build more. It’s a compelling vision—and a necessary corrective to decades of paralysis. But Abundance shares a curious blindspot with Muskian pronatalism: both assume we need more people. Musk preaches that declining birthrates spell civilizational collapse; Klein and Thompson build their vision on populations that will mysteriously arrive to fill what’s built, perhaps by immigration. Neither accounts for the possibility that AI changes the equation entirely—that a smaller population, augmented by intelligent systems, might not be a crisis at all. Populations are already shrinking across much of the developed world. What I call “actually-existing degrowth”—not the voluntary eco-leftist kind, but the unplanned demographic contraction now underway in Japan, Korea, and much of Europe—is coming for the United States too. Declining birth rates, aging populations, and regional depopulation: these are not future scenarios but present facts.

This doesn’t invalidate the Abundance agenda; it redefines it. Abundance cannot mean building more for populations that will not arrive. It must mean building better, adaptive, intelligent infrastructure for smaller, older societies. AI, rather than merely destroying jobs, can help navigate this transition: smart grids, autonomous transit, predictive healthcare. The opportunity is real. Managed shrinkage, done well, can mean more livable cities, restored ecosystems, higher quality of life. The question is whether political leaders can articulate a vision of flourishing within limits—or whether nostalgia for growth will leave us building for a future that never comes.

Against the exhaustion of institutions, against the hollowing out of architecture and art, against the closure of the schools that trained people to imagine, the blog remains. It may not be much, but it is one independent voice outside the collapsing structures around me. I wrote over 83,000 words this year. I made art. I thought through problems that matter to me with the help of AI, which provided me with tools I could only have dreamt of merely a year ago. Today, I uploaded hundreds of thousands of words from my essays to a directory in Obsidian so that Claude could draw connections between them (see here for just how one can set this up).
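For readers curious about the mechanics, the gathering step is simple. A minimal sketch, assuming a hypothetical `essays/` folder of exported markdown and an `ObsidianVault/Essays` target directory (names are mine, not a prescribed setup); once the files sit in the vault, an AI assistant pointed at that directory can search and cross-reference them:

```python
import shutil
from pathlib import Path

# Hypothetical paths: adjust to your own essay archive and Obsidian vault.
SOURCE = Path("essays")                # folder of exported markdown essays
VAULT = Path("ObsidianVault/Essays")   # target folder inside an Obsidian vault

def collect_essays(source: Path, vault: Path) -> int:
    """Copy every markdown file under `source` into the vault folder.

    Returns the number of files copied. copy2 preserves timestamps,
    which Obsidian displays as the note's modification date.
    """
    vault.mkdir(parents=True, exist_ok=True)
    count = 0
    for md in sorted(source.rglob("*.md")):
        shutil.copy2(md, vault / md.name)
        count += 1
    return count

if __name__ == "__main__":
    print(f"Copied {collect_essays(SOURCE, VAULT)} essays into the vault.")
```

Note that this flattens the folder structure into one directory; for a large archive with duplicate filenames, one would want to preserve subfolders instead.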

The future is already here—it just isn’t evenly distributed. Some are afraid or are still pretending AI isn’t happening. Phase transitions are uncomfortable. They are also where the interesting work gets done. One makes of one’s time what one makes.

1. William Gibson, quoted in Scott Rosenberg, “Virtual Reality Check: Digital Daydreams, Cyberspace Nightmares,” San Francisco Examiner, April 19, 1992, Style section, C1. This is the earliest verified print citation, unearthed by Fred Shapiro, editor of the Yale Book of Quotations.

2. Fernand Braudel, The Mediterranean and the Mediterranean World in the Age of Philip II, trans. Siân Reynolds (New York: Harper & Row, 1972), 21.

3. Braudel, The Mediterranean, 901.

4. Fernand Braudel, “Personal Testimony,” Journal of Modern History 44, no. 4 (December 1972): 448–67.

5. Paule Braudel, “Les origines intellectuelles de Fernand Braudel: un témoignage,” Annales: Histoire, Sciences Sociales 47, no. 1 (1992): 237–44.

6. Howard Caygill, “Braudel’s Prison Notebooks,” History Workshop Journal 57 (Spring 2004): 151–60.

7. Vannevar Bush, “As We May Think,” The Atlantic Monthly 176, no. 1 (July 1945): 101–8.

8. Civiqs, “Do you think that the increasing use of artificial intelligence, or AI, is a good thing or a bad thing?,” January 2026, https://civiqs.com/results/ai_good_or_bad.

9. The concept of mentalités emerged from studies of phenomena like the witch trials, where beliefs and fears spread through communities in ways that could not be reduced to individual irrationality. For an overview of mentalités as a historiographical concept, see Jacques Le Goff, “Mentalities: A History of Ambiguities,” in Constructing the Past: Essays in Historical Methodology, ed. Jacques Le Goff and Pierre Nora (Cambridge: Cambridge University Press, 1985), 166–180.

10. Kazys Varnelis, “The Rise of Network Culture,” in Networked Publics (Cambridge: MIT Press, 2008), 145–160.

11. Robert Putnam, “The Other Pin Drops,” Inc., May 16, 2000.

12. Kirsten R. Müller-Vahl et al., “Stop That! It’s Not Tourette’s but a New Type of Mass Sociogenic Illness,” Brain 145, no. 2 (August 2021): 476–480, https://pubmed.ncbi.nlm.nih.gov/34424292/.

13. Bruce Sterling, “Atemporality for the Creative Artist,” keynote address, Transmediale 10, Berlin, February 6, 2010.

14. Yancey Strickler, “The Dark Forest Theory of the Internet,” 2019, https://www.ystrickler.com/the-dark-forest-theory-of-the-internet/. See also The Dark Forest Anthology of the Internet (Metalabel, 2024).

15. “Trend: Not Just Digital Detox, But Analog Travel,” Global Wellness Summit, 2025, https://www.globalwellnesssummit.com/blog/trend-not-just-digital-detox-but-analog-travel/.

16. “The Big Slowdown: Why Museums and Galleries Are Putting on Fewer Shows,” The Art Newspaper, March 10, 2025, https://www.theartnewspaper.com/2025/03/10/the-big-slowdown-why-museums-and-galleries-are-putting-on-fewer-shows.

17. California College of the Arts, the last remaining private art and design school in the Bay Area, announced in January 2026 that it would close after the 2026–27 academic year. See “‘Nowhere Left to Go’: As California College of the Arts Closes, So Does a Pathway for Bay Area Artists,” KQED, January 13, 2026, https://www.kqed.org/news/12070453/nowhere-left-to-go-as-california-college-of-the-arts-closes-so-does-a-pathway-for-bay-area-artists.

A New Career in a New Town

Yesterday I moved Varnelis.net to Kinsta, widely seen as the best WordPress host around. I also updated the site theme to GeneratePress, which I first used at the Native Plant Society of New Jersey, where I am head of advocacy and where, to help out, I brought the website to WordPress. The site had struggled after I ran into serious security issues earlier in the year. Not only was that a crummy experience for you, but the backend I write posts with glitched constantly, making it frustrating for me to enter new content. The new site is a delight for me and, I hope, interesting for you as well. The previous theme was ultimately based on Indexhibit, which was admirably minimalist in a way a Lithuanian artist could love but never worked for me as a content management system.

I have lost count of how many times I have said that I will be posting more on this site, so I won’t make promises I can’t keep, but at least the site won’t be an excuse anymore. So what about blogs? Aren’t they dead? Archinect’s blog, aggregat:456, archidose, ballardian, javierist, m.ammoth.us, markasaurus, sit down man you’re a bloody tragedy, strange harvest, and subtopia are all gone; lgnlgn is the record of an aborted restart ten years back. Even bldgblog barely posts more than I do now. But I refuse to go. Loos titled his first collection of essays “Spoken into the Void.” Being untimely may be the strongest position of all.

Now, there’s not that much to say about architecture anymore, but that’s ok. Times change. Architecture is at its lowest point in my lifetime. There is no excitement. When was the last new building that interested you, I ask my friends. Nobody knows. Maybe the Casa da Música, one said. That’s like saying, in 1981, that the Ford Foundation building was the last great building. There is not one great building on this list of top ten buildings in the 2010s, not even one good building. The scandal isn’t that there is a scandal; the scandal is that nobody cares and nobody talks about it. Conceptual architecture is dead in the water. Architecture fiction was the last burst of a shooting star deep in the atmosphere before it disappeared. In fairness, I don’t know if either AUDC or the Netlab will do anything again, although I continue my own work in earnest (more on that work another day).

But there is plenty to talk about. We can talk about late network culture and the sorry state it has brought us to, the failure of networked publics. We can talk about art, and we can talk about the environment and the importance of native plants in the landscape. We can even talk about architecture, since art forms that seem to be things of the past have an uncanny way of coming back to life. I have a lot to say about these things and, with the end of (native) planting season upon me this week, I may be doing just that. But I won’t be doing it on social media. Sure, you may see these posts on Facebook or Twitter, but I’m not really there much anymore. After logging off Facebook for a year, I found I didn’t want to use it anymore. Facebook doesn’t create a feeling of belonging; it creates anxiety and depression. No wonder young people don’t want to use it. Facebook’s troubles are deepening, and its ridiculous foray into virtual reality will, we all hope, cause its utter demise. Twitter stayed relevant for longer, but I am noticing far fewer posts from my friends there these days. Growth at both of these platforms has ceased, even reversed. So Elon Musk is buying Twitter. That’s the equivalent of buying a new gasoline car today: a dying platform, terrible for the environment. Twitter is dying. If Elon brings back the seditious, short-fingered vulgarian now suffering through mid-stage dementia, it will just bring Twitter to an end and wipe out his ludicrous $44 billion investment. Young people increasingly hate these platforms, regardless of what money-chasing analysts want you to believe. Yes, there are podcasts. I love them, but I worry about the effect of constant voices in my head, perhaps because I read Julian Jaynes many decades ago. There are Medium and Substack, but the endless demand for money is tiresome. You may read this on Substack. Great. But you don’t have to. Read it here instead.

The social media era is over. Long live the blog. My posts may be few and far between, they may be late, they may be bad, and you may not read them, but they are still something I own. I can say what I want, unbeholden to anyone else, and I’m not going anywhere anytime soon.

On Drupal, or Wither Web 2.0?

With the end of the year approaching, I might as well begin my reflections with yet another rote lament about why I don't post enough anymore. Blogging is dead for many, and has been dead now for about as long as it thrived. Somehow, I resolve, I'll turn back to blogging one day, but other things come first: my kids, my project at MoMA, various projects at the Netlab, teaching, articles I have neglected too long, writing my book, working on the restoration of my house, and so on.

But every now and then I turn back to the Web, if not to blogging then to working on the infrastructure beneath my stable of Web sites. This morning I took the Networked Publics site and converted it from a live Drupal installation to a static site. Networked Publics ceased to be live years ago: it was the record of a year-long workshop that ran from fall 2005 to fall 2006, and the book that came out of the workshop was published in 2008. Aside from mine, the last log entry at Networked Publics comes from my late colleague and friend Anne Friedberg, some six years and twenty-four weeks ago. I find it sad that the group we formed doesn't stay together virtually, but such, I suppose, is the nature of scholarly collaborations involving individuals from radically disparate fields. Still, as a historian, I think the record of a year spent by a team of scholars investigating a topic is worth paying a few dollars to keep registered, so I spent a couple of hours ensuring the site wouldn't be tied to an aging Drupal 6 infrastructure.
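The post doesn't say how the conversion was done. A common approach is to crawl the live site with a mirroring tool and then rewrite any dynamic URLs left in the snapshot so it works without PHP. Here is a minimal sketch of that second step; the Drupal-style `?q=node/NN` URL pattern and file layout are assumptions for illustration, not details taken from the actual site:

```python
import os
import re
import tempfile

def staticize_links(root):
    """Rewrite Drupal-style '?q=node/NN' links in mirrored HTML files
    to flat 'node-NN.html' paths so the snapshot needs no server-side code."""
    pattern = re.compile(r'href="/\?q=node/(\d+)"')
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith(".html"):
                continue
            path = os.path.join(dirpath, name)
            with open(path, encoding="utf-8") as f:
                html = f.read()
            fixed = pattern.sub(r'href="node-\1.html"', html)
            if fixed != html:
                with open(path, "w", encoding="utf-8") as f:
                    f.write(fixed)

# Demo on a throwaway "mirror" directory with one hypothetical page
root = tempfile.mkdtemp()
with open(os.path.join(root, "index.html"), "w", encoding="utf-8") as f:
    f.write('<a href="/?q=node/42">A post</a>')
staticize_links(root)
with open(os.path.join(root, "index.html"), encoding="utf-8") as f:
    print(f.read())  # <a href="node-42.html">A post</a>
```

The same idea extends to image paths, RSS links, and whatever else the crawler leaves pointing back at the live install.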

Looking back at the low-fi Web 2.0 site and the low-fi videos on it, it already seems like ancient history. But this was the state of the art not fifteen or twenty years ago but a mere eight years ago. The trends that the Networked Publics group identified, the rise of DIY media in particular, are no longer the province of nerds and geeks but part of our everyday lives. It's stunning to think back and remember showing the group the first video iPod, which I had purchased soon after its release that year. Such, I suppose, is the process of aging in the technological future. One gauges oneself as much by the personal milestones one experiences as by the tech one leaves behind.

For me, development on Drupal has become something to leave behind as well. Last year I concluded my work on Docomomo-us.org, which I had transitioned from outdated custom CGI code to Drupal back in 2006, by having Jochen Hartmann take over as web developer, and earlier this year I replaced the Drupal sites for both AUDC and the Netlab with sites driven by Indexhibit. This steady whittling down of my Drupal sites means that this remains the only one I have left (not counting the seriously neglected Lair of the Chrome Peacock).

But this isn't a mere status update regarding the infrastructure of these sites. Changes in infrastructure, as my readers should know, are never innocent; they embody ideological and social changes. When I first came to Drupal back in 2005, I was encouraged by the ease of extending the system and by its Open Source development. For a time I was active in the community at Drupal. Not being much of a coder anymore, I asked questions, gave suggestions, and helped out with some problems people had on the forums, but it became clear to me that most people on Drupal's community site fell into three categories: those just starting out; those trying to help out as they could, usually fleeing when they felt overwhelmed, typically after submitting a new module or theme; and those who were either dedicated hobbyists or worked with Drupal for a living. Not being part of the latter two, I wound up retreating.

As a designer, I had the foolish idea that my site should look the way I wanted it to look, so I spent a ridiculous amount of time tweaking these sites, building themes for them and outfitting them with extensions called "modules." Unfortunately, in an effort to optimize the code base, Drupal's developers adopted the mantra that "the drop is always moving," which simply means that Drupal will actively break themes and modules at each major point release. The result was that I needed a month of downtime to upgrade my sites from Drupal 5 to Drupal 6. For a scholar, doing this is preposterously difficult. For a scholar with kids, it is virtually impossible.

Drupal 7 came out a while back, but lacking any compelling features, I chose not to upgrade. After all, a month of downtime just to get back to where I was is hardly attractive. Drupal 8 promises adaptive themes that will appropriately react to the mobile platforms increasingly driving Web traffic, so I am likely to move to it, but even though new feature development was frozen a year ago, it seems far from prime time. I spent more than half an hour today looking for a release date for the first beta and couldn't find anything but long-outdated information. If this site is to be believed, there are more critical bugs in Drupal 8 today than a year ago.

Therein lies the trouble with Drupal and modern coding: immense complexity (see my comments on complexity at Triple Canopy). Projects of this size become impossible to manage, impossible to code, and impossible for users to work with. My front page is aging, an artifact of an era in which laptops commonly had screens with a resolution of 1024 × 768, not 1920 × 1200 (as my current one does), but to redo it when it will only break again soon seems ludicrous. Perhaps I'll use another system like WordPress to run this site, or maybe I'll pickle it and fork off to another platform. Any of this is possible, but I'll hardly recommend Drupal to anyone again, or do anything but build the most minimal theme I can for it.

Beyond a stern caution about the complexity that Open Source projects can generate, and that can choke them as it has choked Drupal, there is a larger point: for all of the technological maturation we've seen in the years since Networked Publics, the one thing we've drifted away from is Web presence. If the static Web marked the 1990s, Web 2.0's dynamic sites dominated the time in which we wrote Networked Publics. Bringing varnelis.net back to life with Drupal in 2005, I envisioned it as part of an interlinked ecology of sites, both local (AUDC, DoCoMoMo-US, the Netlab, etc.) and global, connecting to other sites through RSS feeds and commenting systems. This hasn't happened, at this site or any other. Web 2.0's strongest links have withered: social bookmarking through the repeated mishandling of Delicious at the hands of Yahoo! and AVOS and the meltdown at ma.gnolia, and RSS after Google Reader shut down this summer. As Open Source withers when it becomes over-complex, struggling corporations like Yahoo! and Google undo matters in their binge-and-purge cycles, buying up whatever they can in hopes of monetizing the Web and then wiping out communities when they turn out to be too hard to profit from.

Instead of the open Web, then, we have apps and the privatized, Balkanized world they promise. It's hard not to be gloomy about this, hard to find a happy face to put on it. Perhaps gloom is my wont, but sometimes there isn't a happy face. The problems of cooperation, collaboration, and democratic decision-making remain the thorniest of problems for networked publics.

Blogitecture at MIT HTC Forum Video

The video of the Blogitecture presentations that Javier Arbona and I gave at the MIT HTC Forum, together with the discussion we had with Mark Jarzombek and the MIT audience, is now up on Vimeo.

Our talks worked quite well together, I think, as we both addressed the political and disciplinary implications of blogs in architecture. 


a note on blogs

A video of the lectures that Javier Arbona and I gave at MIT on blogs and the discussion we had with Mark Jarzombek will be up soon, but until then I thought I’d put up a few notes that I ran out of time for in my talk.

I think that we need to look at blogs not as something that will transform architecture or architecture criticism per se, but rather as phenomena of network culture. What follows is a brief set of observations about the importance of blogs to architecture, and to network culture.

Blogs are not temporal. The chronological nature of posts is a ruse. That’s not how we read blogs. Chronology doesn’t accrete in the blog. Our sense of time is being redefined.

Blogs are symptomatic of a redefinition of the individual. What matters to bloggers are the links into their blogs. A blogger only exists as a function of the links into their site. An unknown blog is a scream in the forest. Instead of an authorial voice, the blogger is an aggregator, a switching machine that remixes content. The blog is a transition away from the old notion of individuality. In many ways, this is a return to pre-modern ideas of the self.

Blogs blend the public and the private and have no space for high and low. We’re in a new flattened field of nobrow. As Alan Liu writes, "No more beauty, sublimity, tragedy, grace, or evil: only cool or not cool." Instead of distinction we have linkbait. Say something outrageous and you get more readers. Topless architecture!

Blogs embrace the niche. They appeal to idiosyncratic audiences. A blogger finds it better to have 100 fanatical followers than 10,000 lukewarm fans. If today there are bloggers who are better known than their professors, will there come a time when bloggers are hired by universities (am I the first in architecture?)?

The wealth of blogs is a great question mark. During this economic crisis, we are seeing a massive decapitalization of knowledge work in favor of free labor. Not only does Open Source software drive most of the Web today, but news bloggers are effectively replacing newspapers. If the best architecture criticism is now on blogs, how does this culture of free actually function? Is there any room for anyone who doesn’t have a trust fund or access to lots of credit cards to contribute to culture?

MIT HTC Forum 2009

See me at the MIT HTC Forum next month.

Javier Arbona, Mark Jarzombek, and Kazys Varnelis
Blogitecture: Architecture on the Internet
The state and influence of architectural criticism in an age of digital networks

Tuesday, April 7
6:30 pm
Room 3-133

 "Has a blog actually had a significant impact on a building in the process of being designed or built? What was the outcome? …But even if this were the case, I’m not sure that blogs have actually changed much of the way theory is written or performed." 
-Javier Arbona, Javierest (https://javier.est.pr/)

"Blogs have, thus far been both anti-theory and anti-history. I think they’ve played a role in that regard." 
-Kazys Varnelis (https://varnelis.net)

Mark Jarzombek will moderate a discussion between bloggers Javier Arbona and Kazys Varnelis on the state and influence of architectural criticism in an age of digital networks, from their respective positions as producers of criticism and scholars of architecture. 

 
Javier Arbona is a PhD candidate in geography at UC Berkeley and a former chief editor at Archinect.com. He blogs at https://javier.est.pr/.

Kazys Varnelis, PhD, is Director of the Network Architecture Lab at Columbia University’s Graduate School of Architecture, Planning, and Preservation. He blogs at  https://varnelis.net.

Mark Jarzombek, Professor of the History and Theory of Architecture and Associate Dean of the School of Architecture and Planning at MIT, will moderate the discussion.

________________________________

The lecture will be at 6:30pm in 3-133 at  MIT, 77 Mass Ave. Cambridge, MA 02139, see https://whereis.mit.edu 

htc forum 2009 poster 

On Owls, Starchitects, Papers & Growth Machines

When philosophy paints its gray in gray, then has a shape of life grown old. By philosophy’s gray in gray it cannot be rejuvenated but only understood. The owl of Minerva spreads its wings only with the falling of the dusk.

In perhaps his most eloquent moment, Hegel was referring to the way that philosophy came to an understanding of topics precisely at the moment that they were no longer relevant.

An example of this would be the explosion of visual studies in the 1990s, just at the moment when two centuries of the visual as a cultural dominant were being eclipsed by the rise of the non-visual, by the code and protocols of network culture. Nobody talks much about visual studies anymore.

But it isn’t just philosophy and theory that operate this way. It’s a phenomenon we see in culture over and over. Milton Friedman (and Time Magazine) declared “We are all Keynesians now” just as the long postwar boom expired.

Or look at how stores like Barnes and Noble appeared, carrying huge amounts of books and magazines just as print began its terminal decline. Or the appearance of the SUV right before peak oil (I have friends who bought those things and used them for everyday driving…crazy!).

So what about starchitects? There has certainly never been an explosion of interest in starchitects like the one we see today. But when the economy recovers (and I think that will be a long, long time from now unless the government comes up with another unhealthy quick fix), I’m not so sure we’ll have starchitects anymore.

The reason is simple: newspapers made starchitects. It’s common knowledge that recent construction by major cultural institutions was driven by the desire to make the front page of the New York Times. This could only be guaranteed if the architect was one of Gehry, Herzog & de Meuron, Koolhaas, Hadid, Nouvel, or Foster (some of these names may change a little; a second tier includes Piano, Morphosis, Sejima, Ito, and I’m sure a couple of others I’ve forgotten). I have friends who work with such institutions, and they were commonly told that the project had to be on the front page.

This is not surprising. Newspapers are key institutions for the growth machine (see more here). They seek to drive growth, making it seem natural and promoting it, generally regardless of the cost. They are where the growth machine sees itself and celebrates itself.

But now, eviscerated by bad financial models and online publications, newspapers are dying. Certainly blogs have encouraged starchitecture a bit, but in many cases, such as at Archinect, they did so in part because they are in the business of linking to newspaper content. In many cases bloggers are more critical of starchitecture than newspaper critics are. Blogs are bottom-up; newspapers are top-down. Thus blogs are snarky, newspapers proper. Blogs also have comments, so when a blogger gets something wrong, a reader can call it out.

As you may read on Twitter, the media is dying. As big papers shut down or go online-only in the coming years, will starchitects disappear as well? I can’t imagine that the heads of major cultural institutions will insist on architects who ensure their buildings are mentioned on Archinect.

If they do, what will take their place, a Warholian YouTube-style culture of young architects being famous for 15 minutes? Or will architects begin to specialize toward niche audiences, much as blogs do?

A Modest Proposal for Social Networks or, How This Could be the Next Facebook

I’m still trying to catch up with my big blog post (maybe a white paper?) on the research we did on Networked Publics and the Infrastructural City, so bear with me. In the meantime, how about some pie-in-the-sky ideas about Web 3.0 (so sorry)?  

A couple of weeks ago, Traction Software’s Jordan Frank wrote an intelligent post titled "Wither Web 2.0 Social Networking? My 2 Cents." Jordan begins with a series of gloomy links on the failure of social networking technology to monetize. The appeal of these sites is obvious to those of you on Twitter or Facebook: we use them all the time. Some 150 million people subscribe to Facebook, and half of them use it every day. But it costs a lot of money to run Facebook’s servers (the photo below shows some of the more than 10,000 servers Facebook uses); back in 2007, Fishtrain calculated that server costs alone came to around $1.05 a user, and of course there are employees, office space, and so on.

In other words, that’s crazy money, and for social networks to stay afloat, they are going to have to make some real cash fast. Facebook could well be racing the New York Times to see which one shuts its doors first.

facebook's server room

Advertising is the hitch here. Social networks, search engines, and of course newspapers and magazines have long relied on advertising to fund their businesses. But as advertisers become able to see results more directly than ever before, they are finding that ads, especially the relatively unobtrusive sort that appear on social networks (and that users still hate), aren’t really generating the kind of results they want.

Remember "it’s all about eyeballs?" I remember doe-eyed business school graduates telling me that a decade ago and look how far that went…

User fees are certainly possible but extremely unlikely, in my opinion, to succeed.

Instead, here’s a thought experiment. With millions of blogs and content-management-driven Web sites out there (like this one, but also online user communities), what if social networks left the corporate-owned ghetto? What if a set of tools were developed, OpenID being only the first, to let all the goodies of social networking sites (meeting friends, posting profiles, tracking online actions, sending dumb gifts, unfriending people, posting kid photos, poking) spread across the Web? How different would this be from the demise of America Online, CompuServe, and the various online services of the 1980s and early 1990s? What if all this social networking stuff just went into the cloud, not a cloud owned by Amazon or Google but a cloud owned by everyone? A few new tools and Drupal 9.0 could certainly do this, I think.

Surely some important technological breakthroughs would have to be made to make this a reality, but really, why not? 

prss release

For those of you who don’t subscribe to blogs via RSS and even for those who do, Prss Release aggregates the contents of a number of architecture blogs into an elegant, downloadable weekly PDF. More confirmation of my suggestion that 2008 will be the year that blogs stop looking like blogs.

As blogs mature, I expect we will be seeing more experiments like this.
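The aggregation step behind a service like this is conceptually simple: fetch each blog's RSS feed, merge the items, and sort them newest-first before typesetting the weekly digest. A minimal sketch of the merge, assuming RSS 2.0 feeds; the toy feeds and URLs below are invented for illustration, not taken from Prss Release:

```python
import xml.etree.ElementTree as ET
from email.utils import parsedate_to_datetime

def aggregate(feeds):
    """Merge the <item>s of several RSS 2.0 documents, newest first.
    `feeds` is a list of XML strings (in practice, fetched over HTTP)."""
    items = []
    for xml_text in feeds:
        channel = ET.fromstring(xml_text).find("channel")
        for item in channel.findall("item"):
            items.append({
                "title": item.findtext("title"),
                "link": item.findtext("link"),
                "date": parsedate_to_datetime(item.findtext("pubDate")),
            })
    return sorted(items, key=lambda i: i["date"], reverse=True)

# Two toy feeds standing in for real architecture blogs (hypothetical)
feed_a = """<rss version="2.0"><channel><title>Blog A</title>
<item><title>Post one</title><link>https://a.example/1</link>
<pubDate>Mon, 07 Jan 2008 10:00:00 GMT</pubDate></item>
</channel></rss>"""
feed_b = """<rss version="2.0"><channel><title>Blog B</title>
<item><title>Post two</title><link>https://b.example/2</link>
<pubDate>Tue, 08 Jan 2008 10:00:00 GMT</pubDate></item>
</channel></rss>"""

for entry in aggregate([feed_a, feed_b]):
    print(entry["date"].date(), entry["title"])
# 2008-01-08 Post two
# 2008-01-07 Post one
```

From the sorted list, rendering a weekly PDF is a layout problem rather than a data problem, which is presumably where the design effort in such a service goes.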