2025 in review

It’s strange to measure every year against a concept developed by a science fiction writer, but William Gibson’s line “The future is already here—it’s just not evenly distributed”[1] has been my north star for my recent year-in-review essays. Gibson meant that the future was unevenly distributed by class: the wealthy receive high-tech healthcare while the world’s poorest live in squalor—though one might ask which of these is really our future. Yet the quote has been repeatedly misread as a claim about time and space: that the future arrives somewhere first, perhaps unseen, while the rest of the world catches up. But this misreading is more productive than Gibson’s intent. His critique of inequality is fair enough, but we all know this, decry it, and go on about our business. The misreading, on the other hand, is a theory of historical change.

With the release of ChatGPT in late 2022, a temporal rift opened, shattering the post-Covidean present. But many tried the early tools, encountered hallucinations, read articles about slop and imminent environmental ruin, and reasonably concluded there was nothing to see. By 2025, a cursory scan of AI news would have assured them that the technology had proved a bust. OpenAI’s long-awaited updates disappointed, and the company flailed, turning to social media with Sora, a TikTok clone for AI-generated video. Meta seemed to abandon its effort to build a competitive AI and turned instead to content generation for Instagram and Facebook, something nobody on earth wanted. Talk of a bubble started among Wall Street pundits. The hype-to-disappointment cycle is familiar, and the dismissals were not unreasonable.

But again, the future isn’t evenly distributed, and if you don’t know where to look, you would be excused for believing it’s all hype. Look past such failures, though, and 2025 was actually a year of breakneck progress. Anthropic’s Claude emerged as the most capable system for complex tasks, Google’s Gemini became highly competitive, and DeepSeek and Moonshot AI proved that China was not far behind. More significant than any single model was the emergence of agentic AI—systems that can take on multi-step tasks, act, navigate filesystems, write and execute code, and work across documents. Claude Code was the year’s groundbreaking innovation. While “slop” was Merriam-Webster’s word of the year, “vibe coding”—using agents to write programs—was much more important. Not only could programmers use agents to accelerate their work; non-programmers could now realize their ideas without any knowledge of code, a radical change in access I explored in “What Did Vibe Coding Just Do to the Commons?”.

By first-world standards, at least, these tools are remarkably democratic and inexpensive. A basic Claude subscription costs about as much as a month of streaming, and even the $200 maximum-usage account costs less than a monthly car payment. For many, however, the barrier is not price but something deeper—a resistance approaching revulsion. These tools provoke fear in a way that earlier technologies did not. It’s not the apocalyptic dread of the doomers or the Dark Mountain sensibility that collapse is near. Rather, it’s a threat to the sense that thought itself is what makes us distinct. The unevenness of the future is no longer about access; it’s now about willingness to engage.

As a scholar, I find thinking about the very short term strange. I have always been suspicious of claims that radical change was upon us. I would rather align myself with the French Annales school concept of la longue durée, as defined by the great Fernand Braudel: the long-term structures of geography and climate. Above these ran the medium-term cycles of economies and states, while he dismissed the short-term événements of rulers and political events as “surface disturbances, crests of foam that the tides of history carry on their strong backs.”[2] Events, he wrote elsewhere, “are the ephemera of history; they pass across its stage like fireflies, hardly glimpsed before they settle back into darkness and as often as not into oblivion.”[3] The real forces operate beneath, slowly, often imperceptibly.

Curiously, Braudel himself embraced technological change in his own work. In the 1920s and 30s, he adapted an old motion-picture camera to photograph archival documents—2,000 to 3,000 pages per day across Mediterranean archives from Simancas to Dubrovnik. He later claimed to be “the first user of microfilms” for scholarly historical research.[4] His wife Paule spent years reading the accumulated reels through what Braudel called “a simple magic lantern.”[5] Captured in 1940, he spent five years as a prisoner of war and wrote the entire first draft of The Mediterranean—some 3,000 to 4,000 pages—from memory. Paule, meanwhile, retained access to the microfilm and notes in Paris, and after the war they reconstructed the text, verifying his manuscript against the microfilm and adding footnotes and references.[6]

In 1945, the same year Braudel was liberated, Vannevar Bush published “As We May Think,” in which he imagined a device he called the “Memex”: a mechanized desk storing a researcher’s entire library, indexed and cross-referenced, expandable through associative trails.[7] The vision remained speculative for decades. Now the world’s archives are being digitized; AI systems summarize and search across them in seconds and can translate from virtually any language. To take one example, earlier this year I used Google’s Gemini to translate the Hierosolymitana Peregrinatio of Mikalojus Kristupas Radvila Našlaitėlis, a sixteenth-century pilgrimage narrative, from an online scan of the Latin first edition. The result is not a polished scholarly translation, but a working text that gave me a good sense of a book previously unreadable to anyone without proficiency in Latin or Polish (the only language into which, to my knowledge, it had been translated). The role of the intellectual is being transformed—not replaced, but augmented in ways Bush could only sketch. This feels like something other than foam.
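For readers curious about the mechanics, the whole workflow reduces to a single API call per page. Here is a minimal sketch in Python, not my actual pipeline: it assumes the google-generativeai and Pillow packages and an API key, the file name is a hypothetical page scan, and the model name stands in for whatever Gemini version is current.

```python
# Minimal sketch: translate one scanned page of early-modern Latin with Gemini.
# Assumptions: google-generativeai and Pillow installed, an API key from
# Google AI Studio, and a hypothetical scan file "page_017.png".
import google.generativeai as genai
import PIL.Image

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-pro")  # placeholder model name

page = PIL.Image.open("page_017.png")  # hypothetical scan of one page
prompt = (
    "Transcribe the Latin text on this scanned page of an early-modern "
    "printed book, expanding abbreviations, then translate it into English. "
    "Flag any readings or renderings you are unsure of."
)
response = model.generate_content([prompt, page])
print(response.text)
```

Loop something like that over a directory of page scans and you have a rough working translation in an afternoon; the flagged uncertainties are where a human reader with some Latin still earns their keep.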

How to account for such a rapid shift? Manuel DeLanda offers one answer in A Thousand Years of Nonlinear History. Working in Braudel’s materialist tradition and drawing on Gilles Deleuze and complexity theory, DeLanda describes how flows—of trade, energy, and information—accumulate and concentrate until they cross a threshold and undergo a phase transition, radically reorganizing into a new stable state. But here is the key insight: intensification is la longue durée. The accumulation of flows that began with the Industrial Revolution—or perhaps with writing, agriculture, or even symbolic representation itself—is the deep structure behind our era. Steam, electricity, computing, the internet: each was a phase transition within a longer arc of intensification. Cities accelerate such processes, as Braudel showed, concentrating capital and labor until new forms of economic organization emerge—Venice, Antwerp, Amsterdam, London, each a site at which the future arrived first. Such conditions are not opposed to la longue durée; they are the moments when intensification crosses a threshold.

The continued pace of change this year underscores that there has been no return to equilibrium. But it has been accompanied by unprecedented resistance to the technology, which appears simultaneously as terror at its apocalyptic potential (for jobs, if nothing else) and as dismissal of it as useless, especially among Gen Z. A January 2026 Civiqs survey found that 57 percent of Americans aged 18–34 view AI negatively—more than any other age group. Curiously, the seniors category, which now includes most boomers, was the least resistant to AI, followed by Gen X and older millennials, all groups that grew up seeing radical societal and technological changes.[8] It seems paradoxical that the smartphone generation recoils from the tools of the future. To understand this resistance means understanding the mentalité that shaped it—what Braudel’s successors in the Annales school called the collective psychology formed through lived experience.[9] For Gen Z, that formative experience was network culture—both a successor to postmodernism and a form of collective psychology I did not fully understand at the time. When I wrote on network culture in 2008, social media seemed to promise connection; instead, it brought division.[10] The networked self was indeed constituted through networks, not merely isolated in postmodern fragmentation, but the fragmentation was now collective. Networked publics built barriers against one another, creating what Robert Putnam called cyberbalkanization: retreat into a comfortable niche among people just like oneself, views merely reinforcing views.[11] Identity wars and mimetic conflict flared across filter bubbles that amplified outrage and tribal scapegoating as both MAGA and wokism built toxic online cultures. QAnon and a thousand other conspiracy theories propagated through Facebook groups and YouTube recommendations. Young men drifted into incel communities where loneliness became ideology and livestreaming mass shootings was celebrated. Influencers built their empires on hatred—Hasan Piker framed Hamas’s October 7 massacre as anticolonial resistance while Nick Fuentes celebrated mass shooters as vanguards of race war and civilizational collapse.

Nor did this just fragment culture—it exacted a massive psychic toll, as social contagion spread new forms of self-harm and mental illness. During the pandemic, teenage girls began presenting tic-like behaviors—not Tourette’s syndrome, but something researchers termed “mass social media-induced illness,”[12] spread by TikTok videos about Tourette’s rather than by any actual disease. The vector was unprecedented, but the pattern was not unique. Eating disorders spread through thinspiration hashtags. Self-harm tutorials circulated on Instagram. The platforms that were supposed to bring us together instead spread desires, disorders, and identities through pure social contagion—and with them, violence and polarization. A generation that grew up inside this experiment—that watched it reshape their peers’ bodies, minds, and identities—is right to be skeptical of the next technological promise.

In 2010, it seemed that network culture had a good chance of being understood as the successor to postmodernism. Bruce Sterling and I were engaged in a kind of dialogue about it online. He predicted that network culture would last “about a decade before something else comes along.”[13] And he was right, as I acknowledged in my 2020 Year in Review. By then, network culture was exhausted, and with the Covidean break, it seemed time for something new. In 2023, I taught a course at the New Centre for Research & Practice to try to broadly sketch the emerging era. It’s still early and hard to fathom, like trying to understand postmodernism in 1971 or network culture in 1998, but it’s clear that if postmodernism was underwritten by the explosion of mass media, and network culture by the Internet, social media, and the smartphone, then the current era is shaped by AI.

But if Gen Z, scarred by the effects of social media, has been reacting with deep fear and anxiety, Sterling epitomizes the other reaction: dismissal. In the most recent State of the World, for example, he derides AI-generated content as “desiccated bullshit that can’t even bother to lie.” He compares the vibe-coding atmosphere to an acid trip, mocking the professionals who utter “mindblown stuff” like “we may be solving all of software” and “I have godlike powers now.” For Sterling, AI can produce nothing but slop. Now Bruce has always had a healthy skepticism toward tech claims, but I can’t help but think of Johannes Trithemius, the fifteenth-century abbot who wrote De Laude Scriptorum just as Gutenberg’s press was spreading across Europe—defending the scriptorium against a technology he could not see would remake the world.

There are even deeper, more existential fears, and I’ve spent the past year addressing them on my blog, in the process laying the foundation for a book on the topic: AI as plagiarism machine; AI as hallucination engine; AI as stochastic parrot, mindlessly repeating what it has ingested (Sterling’s critique); and AI as uncanny double, too close to us for comfort. As I explain, the discomfort arises not from the machine’s otherness but from its likeness: a mirror held up to processes we preferred to believe were uniquely ours.

It’s no accident that I published these essays on my blog. As far as my personal year in review goes, this was very much the year of the blog. I have no plans to ever publish in an academic journal again. Why would I? Who would read it? Why would I want to publish something paywalled, reinforcing the walled gardens of inequality that academia is so desperate to maintain—even as it proclaims itself the champion of open inquiry and democratized knowledge? Academia has become the realm of what Peter Sloterdijk called cynical reason: rehearsing the tropes of ideology critique while knowing the game is empty and playing it anyway. This revolts me.

But for almost ten years now, since the labs at Columbia’s architecture school were shut down, I have been content to write from the position of the outsider, something I reflected on in “On the Golden Age of Blogging”. That essay was prompted by a strange comment from Scott Alexander, who lamented on Dwarkesh Patel’s podcast that he had personally made a strategic error in not blogging during what he called the “golden age,” imagining that “the people from that era all founded news organizations or something.” The golden age he remembers is a fiction, as golden ages often are—and he gets the stakes entirely wrong. Evan Williams founded Blogger in 1999, sold it to Google, co-founded Twitter, then created Medium, which convinced hapless readers to pay to read slop long before AI slop was ever a thing. The early bloggers who sought professionalization found themselves absorbed into the worst of the worst, writing for BuzzFeed, peddling nostalgia listicles that rotted psyches.

There was, however, a golden age for me, and I miss it: the architecture blogging community circa 2007—Owen Hatherley, Geoff Manaugh, Enrique Ramirez, Fred Scharmen, Sam Jacob, Mimi Zeiger (whose Loud Paper was less a blog and more a zine, but a key part of the culture), and others. We inherited from zine culture an informal, conversational tone and the will to stand outside architectural spectacle. But ArchDaily and Dezeen commercialized the form, shifting from independent critique to marketing and product. Startup culture absorbed architectural talent.

Blogging was powerful precisely because we had no stakes in it—we owned and controlled our means of intellectual production. The golden age of blogging is not in the past; it is now. After years of proclaiming I would blog more, in 2025, I really did. I wrote over 83,700 words on varnelis.net and the Florilegium—essay-length pieces on landscape, native plants, AI and art, architecture, infrastructure, politics, and tourism. My only regret is that my presidency at the Native Plant Society of New Jersey consumes so much of my thinking about native plants that little remains for writing. But the time will come, and if nothing else, my investigation of the Japanese garden aesthetic should point the way forward for my writing on landscape.

I also continued to make AI art or, to be more precise, what I call stochastic histories. A major project was a substantial reworking of The Lost Canals of Vilnius, a counterfactual history in which, after the Great Fire of 1610, Voivode Mikalojus Radvila Našlaitėlis rebuilt the city with Venetian-style canals, complete with gondoliers, water processions, and a hybrid “Vilnius Venetian” architecture. As research, I used Gemini to translate Radvila’s sixteenth-century Latin pilgrimage narrative. AI, like photography or film, is what you make of it. Film is perhaps the better analogy—anyone can make a video. Making something worthwhile is another matter entirely. In December, I also completed East Coast/West Coast: After Bob and Nancy, a generative restaging of Nancy Holt and Robert Smithson’s 1969 video dialogue using two AI speakers.

There were other substantial essays, too. In “Oversaturation: On Tourism and the Image”, I finally put down on paper something I had wanted the Netlab to address while at Columbia, but that proved too dangerous for the school to support. Universities cannot critique the very systems of overproduction they depend upon for survival. Publish-or-perish and endless symposia nobody is interested in are the academic versions of overproduction, but more than that, any architecture school claiming global currency cannot afford to offend either the other institutions, like museums, that give it legitimacy or, for that matter, the trustees that fund both. As I point out, tourism has always been mediated by imagery; take Piranesi’s vedute or the Claude Glass. Grand Tourists always had representations at hand to interpret their direct experience—but a new crisis point has been reached with both overtourism and the overproduction of images. Algorithmic logic now reorganizes cultural geography around “most Instagrammable spots,” making historical significance secondary to content potential. The Fushimi Inari shrine in Kyoto is a case in point—a 1,300-year-old shrine that Instagram made famous and that has now ceased to serve as a religious site due to the influx of visitors. The Japanese have a term for this: kankō kōgai, tourism pollution. Tourism has become the paradigm of contemporary experience—the production of imagery without cultural meaning; everything feeds the same algorithmic mill. Even strategies of resistance get metabolized—slow travel becomes a hashtag, psychogeography becomes an Instagram guide.

The Bilbao effect, which was a major driver of oversaturation, was itself a product of globalization. Hans Ibelings coined “supermodernism” in 1998 to refer to the architectural expression of Marc Augé’s “non-places,” an architecture optimized for the perpetual circulation of bodies and capital. It was the architecture of network culture, of the Concorde and the Internet. Koolhaas diagnosed its endgame in his 2002 “Junkspace”—“Regurgitation is the new creativity”—and then, tellingly, stopped writing. Today, network culture is long gone; nationalism is on the rise. The Internet is a dark forest now,[14] while the disconnected life gains adherents.[15] The most exclusive resorts now advertise no Wi-Fi, no cell service, no addresses—only coordinates. Disconnection has become the ultimate luxury, sold back to the same people who built the infrastructure of connection. More cities are now alarmed by the effects of overtourism than are eager to attract tourists. In the US, new architectural proposals appeal to a retardataire aesthetic—Trump displaying a triumphal arch inspired by Albert Speer, marking a triumph of nothing in particular, in models in three sizes (“I happen to think the large looks the best”), a four-hundred-million-dollar ballroom modeled on Mar-a-Lago, an executive order mandating classical architecture for federal buildings that Stephen Miller explicitly framed as culture war.

Both Bilbao and MAGA are spectacle, architecture-as-branding. But the Bilbao effect is imploding. No city believes anymore that a signature building by a starchitect will transform its fortunes. The parametricists have nothing left to say. Parametric design promised formal liberation—responsive, site-specific, computationally derived—but what it delivered was the most efficient, ugliest box. If the promise was the blob, the reality is the “5-over-1”: wood-frame residential floors stacked on a concrete podium with ground-floor retail, wrapped in a pastiche of brick veneer, fiber cement panels, and that obligatory conical turret element meant to signal “we thought about this corner.” As for AI-generated architecture, it is merely boring—giant sequoias hollowed out as apartment buildings, white concrete towers with impossible cantilevers, and lush vegetation sprouting from every surface—the same utopian fantasy rendered a thousand times over. These are renders of renders: AI trained on architectural visualization produces visualizations that are utterly disconnected from any tectonic reality. A new generation may emerge in response to new needs, but for now, the discipline has lost its cultural purchase. Architecture, for us, is a thing of the past.

The art world, too, has slowed. Museums are putting on fewer shows, shifting from aggressive schedules to longer, more deliberate exhibitions—or simply cutting programming as budgets tighten.[16] The frantic pace of the Biennale circuit has exhausted dealers and collectors alike; smaller fairs are folding, and even the major ones feel like obligations rather than events. Galleries that survived the pandemic are now closing quietly, without the drama of a market crash—just a slow bleed of foot traffic, sales, and cultural attention. There is no new movement, no emergent critical framework, no sense of direction. The market churns on—auction prices for blue-chip artists remain high, collectors still speculate, art advisors still advise—but the sense of cultural mission has dissipated. What remains is commerce without conviction, a field that has forgotten why it exists beyond the perpetuation of its own economy. The institutions that trained artists for this field are collapsing alongside it.

As enrollment dwindles, design schools are collapsing—not merely contracting, but ceasing to exist. Most recently, the California College of the Arts, the last remaining independent art and design school in the Bay Area, announced in January 2026 that it would close after the 2026–27 academic year.[17] It follows a grim procession: the San Francisco Art Institute (2020), Mills College (2022), the Pennsylvania Academy of the Fine Arts (2023), and Woodbury University’s acquisition by Redlands and subsequent adjunctification—a fate that has methodically undone so many schools as faculty become contingent labor and institutions turn into hollow administrative structures run by well-paid, cost-optimizing consultants.

There is personal resonance for me in this. Simon’s Rock College of Bard, a pioneer of early college education that offered a radical pedagogical experiment in what learning could be beyond conventional schooling, shuttered its Great Barrington campus in 2025; it was where I studied for my first two years before transferring to Cornell. I arrived there straight from high school, as did my good friend and colleague Ed Keller; clearly, something interesting was in the water back then. Simon’s Rock made the development of young minds its central mission rather than an incidental focus of brand management or endowment growth, and its alumni list is impressive for such a small school. It has an afterlife at Bard, but it’s an echo at best.

The difference between these institutional deaths and simple market failure is this: they are not being replaced. When a retail business fails, another may open elsewhere. When a school closes, there is no succession. The market offers no alternative. Instead, what remains are the corporate university satellites—for-profit programs nested within larger institutions (like Woodbury’s absorption into Redlands), stripped of autonomy, their faculty reduced to a precariat, their curricula bent toward what can be measured and marketed. The art schools that survive do so by transforming into something else: luxury finishing schools for wealthy families or research appendages to larger universities, where “design thinking” becomes another management consultant’s tool. The pedagogical mission—to create conditions where students might develop serious aesthetic judgment, where they might encounter genuine problems and be forced to think through them—is not merely challenged but impossible. The closure of these schools does not signal a failure of art education; it signals that the very idea of art education as something valuable in itself has been liquidated.

This hollowing out of cultural institutions is not incidental to the political moment—it is one of its hallmarks. Politically, most people have checked out. This is not 2017, when each provocation demanded a response; the outrage cycle has given way to numbness. In “National Populism as a Transitional Mode of Regulation”, I argued that Trump, Orbán, Meloni, and their ilk represent not a return to fascism but something new: the authoritarian management of declining expectations. National Populism correctly identifies that neoliberalism’s promise of shared prosperity has failed, but it channels legitimate grievances toward scapegoats rather than addressing the technological displacement actually causing them. This is its tragic irony: the National Populist base—workers made obsolete by neoliberalism and unable to participate in AI Capitalism—finds its legitimate anger directed into a movement that accelerates the very forces rendering them superfluous. Their value to capital lies in political disruption rather than economic production; they are consumers and voters, but no longer needed as workers. National Populist leaders offer psychological compensation—dignity, recognition, transgressive identity politics—rather than material improvement. The apocalyptic tenor of populist culture, its end-times thinking and conspiracy theories, provides a framework for populations sensing their own economic redundancy.

The alliance between tech billionaires and populist leaders is unstable. AI Capitalism requires borderless computation and global talent flows; nationalist protectionism contradicts these at every turn. Musk, Thiel, and Andreessen have aligned with the movement to dismantle the regulatory state, not because they share its vision but because populism serves as a useful battering ram against institutional constraints. Once those barriers fall, the movement and its human-centric concerns can be discarded. National Populism, as I conclude, is not the future—it is a political interlude, a transitional mode that will not survive contact with the economic forces it has helped unleash.

If National Populism is transitional, is there a positive vision that can replace it? In “After the Infrastructural City”, I responded to Ezra Klein and Derek Thompson’s book Abundance, perhaps the most influential book of 2025, which argues that America’s inability to build is a political choice, not a technical constraint. Their solution: streamline regulation, invest boldly, build more. It’s a compelling vision—and a necessary corrective to decades of paralysis. But Abundance shares a curious blindspot with Muskian pronatalism: both assume we need more people. Musk preaches that declining birthrates spell civilizational collapse; Klein and Thompson build their vision on populations that will mysteriously arrive to fill what’s built, perhaps by immigration. Neither accounts for the possibility that AI changes the equation entirely—that a smaller population, augmented by intelligent systems, might not be a crisis at all. Populations are already shrinking across much of the developed world. What I call “actually-existing degrowth”—not the voluntary eco-leftist kind, but the unplanned demographic contraction now underway in Japan, Korea, and much of Europe—is coming for the United States too. Declining birth rates, aging populations, and regional depopulation: these are not future scenarios but present facts.

This doesn’t invalidate the Abundance agenda; it redefines it. Abundance cannot mean building more for populations that will not arrive. It must mean building better, adaptive, intelligent infrastructure for smaller, older societies. AI, rather than merely destroying jobs, can help navigate this transition: smart grids, autonomous transit, predictive healthcare. The opportunity is real. Managed shrinkage, done well, can mean more livable cities, restored ecosystems, higher quality of life. The question is whether political leaders can articulate a vision of flourishing within limits—or whether nostalgia for growth will leave us building for a future that never comes.

Against the exhaustion of institutions, against the hollowing out of architecture and art, against the closure of the schools that trained people to imagine, the blog remains. It may not be much, but it is one independent voice outside the collapsing structures around me. I wrote over 83,000 words this year. I made art. I thought through problems that matter to me with the help of AI, which provided me with tools I could only have dreamt of merely a year ago. Today, I uploaded hundreds of thousands of words from my essays to a directory in Obsidian so that Claude could draw connections between them (see here for just how one can set this up).
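Since I mention the setup, here is the gist as a minimal sketch, one way among several: it assumes the anthropic Python SDK, an API key in the environment, a hypothetical vault path, and a placeholder model name. With a corpus of hundreds of thousands of words, you would chunk the notes first or simply point an agent like Claude Code at the directory and skip the code entirely.

```python
# Minimal sketch: ask Claude to find connections across an Obsidian vault.
# Assumptions: anthropic SDK installed (pip install anthropic), an
# ANTHROPIC_API_KEY set in the environment, and a hypothetical vault path.
from pathlib import Path
import anthropic

VAULT = Path.home() / "Obsidian" / "Essays"  # hypothetical vault location

# Obsidian stores notes as plain Markdown, so the corpus is just files on disk.
corpus = "\n\n".join(
    f"## {p.name}\n{p.read_text(encoding='utf-8')}"
    for p in sorted(VAULT.glob("*.md"))
)

client = anthropic.Anthropic()
message = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder model name
    max_tokens=2000,
    messages=[{
        "role": "user",
        "content": corpus
        + "\n\nWhat themes recur across these essays, and which essays "
        "connect in ways the author may not have noticed?",
    }],
)
print(message.content[0].text)
```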

The future is already here—it just isn’t evenly distributed. Some are afraid or are still pretending AI isn’t happening. Phase transitions are uncomfortable. They are also where the interesting work gets done. One makes of one’s time what one makes.

1. William Gibson, quoted in Scott Rosenberg, “Virtual Reality Check: Digital Daydreams, Cyberspace Nightmares,” San Francisco Examiner, April 19, 1992, Style section, C1. This is the earliest verified print citation, unearthed by Fred Shapiro, editor of the Yale Book of Quotations.

2. Fernand Braudel, The Mediterranean and the Mediterranean World in the Age of Philip II, trans. Siân Reynolds (New York: Harper & Row, 1972), 21.

3. Braudel, The Mediterranean, 901.

4. Fernand Braudel, “Personal Testimony,” Journal of Modern History 44, no. 4 (December 1972): 448–67.

5. Paule Braudel, “Les origines intellectuelles de Fernand Braudel: un témoignage,” Annales: Histoire, Sciences Sociales 47, no. 1 (1992): 237–44.

6. Howard Caygill, “Braudel’s Prison Notebooks,” History Workshop Journal 57 (Spring 2004): 151–60.

7. Vannevar Bush, “As We May Think,” The Atlantic Monthly 176, no. 1 (July 1945): 101–8.

8. Civiqs, “Do you think that the increasing use of artificial intelligence, or AI, is a good thing or a bad thing?,” January 2026, https://civiqs.com/results/ai_good_or_bad.

9. The concept of mentalités emerged from studies of phenomena like the witch trials, where beliefs and fears spread through communities in ways that could not be reduced to individual irrationality. For an overview of mentalités as a historiographical concept, see Jacques Le Goff, “Mentalities: A History of Ambiguities,” in Constructing the Past: Essays in Historical Methodology, ed. Jacques Le Goff and Pierre Nora (Cambridge: Cambridge University Press, 1985), 166–180.

10. Kazys Varnelis, “The Rise of Network Culture,” in Networked Publics (Cambridge: MIT Press, 2008), 145–160.

11. Robert Putnam, “The Other Pin Drops,” Inc., May 16, 2000.

12. Kirsten R. Müller-Vahl et al., “Stop That! It’s Not Tourette’s but a New Type of Mass Sociogenic Illness,” Brain 145, no. 2 (2022): 476–480, https://pubmed.ncbi.nlm.nih.gov/34424292/.

13. Bruce Sterling, “Atemporality for the Creative Artist,” keynote address, Transmediale 10, Berlin, February 6, 2010.

14. Yancey Strickler, “The Dark Forest Theory of the Internet,” 2019, https://www.ystrickler.com/the-dark-forest-theory-of-the-internet/. See also The Dark Forest Anthology of the Internet (Metalabel, 2024).

15. “Trend: Not Just Digital Detox, But Analog Travel,” Global Wellness Summit, 2025, https://www.globalwellnesssummit.com/blog/trend-not-just-digital-detox-but-analog-travel/.

16. “The Big Slowdown: Why Museums and Galleries Are Putting on Fewer Shows,” The Art Newspaper, March 10, 2025, https://www.theartnewspaper.com/2025/03/10/the-big-slowdown-why-museums-and-galleries-are-putting-on-fewer-shows.

17. See “‘Nowhere Left to Go’: As California College of the Arts Closes, So Does a Pathway for Bay Area Artists,” KQED, January 13, 2026, https://www.kqed.org/news/12070453/nowhere-left-to-go-as-california-college-of-the-arts-closes-so-does-a-pathway-for-bay-area-artists.

2024 in review

I have been writing “Years in Review” for some time. I often wonder if it’s worth it. I don’t get as much feedback as on my other posts and they take time away from other work. Still, these are useful for me to look at over the years, so for this year at least, I decided to write another.

It’s deep into February now, indeed nearly March, but years, like centuries, have periods of overlap and drift, in which various loose ends are tied up even as other themes emerge that define the next year.

First, my own year. Many of my readers know that I am passionate about the importance of native plants. In 2024, I found myself elected President of the Native Plant Society of New Jersey. Back in 2016 or so, when I left architecture and academia behind, it felt as if things were unwinding in those realms. In retrospect, I couldn’t have been more right. Architecture, which was revitalized with the modernist revival of the 1990s, now seems exhausted again—caught between spectacle, greenwashing, and the banality of developer-led projects. Academia has fared no better, suffocated by bureaucratization, infighting, and a slavish devotion to pseudo-leftist political commentary that left little room for real inquiry. My friends in academia have either quit or don’t enjoy teaching anymore. Meanwhile, landscape, long dismissed as secondary to architecture, has become a key site of innovation. But rather than taking place in the university, innovative research is happening among individuals outside academia working with native plants. In academia, landscape still suffers from architecture envy and advocates reshaping the land violently using earth-moving machines before burying it under concrete. One can graduate from virtually any landscape architecture program in this country without any real understanding of botany or plants. It’s as if architects had learned nothing from the reckoning the field faced in the 1960s and 1970s, when its social failures and the consequences of object-fixation at the expense of context were laid bare. So instead, we reach out to individuals. A talk I gave in November about designing with woodland plants had over 400 in-person attendees and has generated over 2,600 views in the three months it has been on YouTube. That’s already better than any talk I ever gave on the history of architecture or network culture. I’ll take that as a start.

A vernal pool at the Great Swamp Outdoor Education Center, Chatham, New Jersey.

I continued to write entries in the Florilegium, many of them essay length. Walls in the Landscape examined the cultural and ecological role of dry-stacked stone walls, reflecting on how they shape and structure the land while allowing nature to inhabit them. Vernal Pools at the Great Swamp explored seasonal wetlands in northern New Jersey, their importance for amphibians, and the growing threats posed by habitat destruction and climate change. A Trip to Lithuania and the Baltics documented my travels in the Baltics and engagement with Lithuanian native plant scientists and activists, examining the bizarre global trade in invasive species and the parallels between Eastern European and Northeastern American forest ecologies. We Went for a Walk on Turkey Mountain reflected on a hike through the New Jersey Highlands, using it as a way to think through geology, land use history, and native plant communities while drawing connections to Robert Smithson and conceptual art. A friend asked why I am writing these lengthy essays on landscape. Perhaps I am planning a book? Indeed, that’s the plan. That said, I also realize that essays can be a lot for people to take in all at once. Although I usually fail with these resolutions, I do intend to add more short pieces this year.

Beyond landscape, I continued my research with AI and AI image generation. It dismays me to see otherwise intelligent people so swiftly denigrate AI as a plagiarism machine or as completely unreliable. AI, as I’ve stated before, is the biggest technological revolution of our lifetime. In my 2023 Year in Review, I suggested that “If potent but wildly hallucinating AIs marked 2022, the rise of GPT-4 as a useful and dependable everyday assistant marked 2023.” This continued in 2024. Although there have been no great new developments in AI—no Singularity, no Skynet, no AGI—and we are still using GPT-4 (GPT-4.5 is reportedly dropping this week), steady advances have continued. Setbacks made the news as well. In its usual fashion, Apple utterly mishandled the rollout of the unfortunately named “Apple Intelligence.” Inappropriate summaries, lack of power, and a Siri that is every bit as dumb as it was when it was released in 2011 led to widespread disappointment. And yet, AI advanced steadily throughout 2024, becoming more deeply integrated into software development and research. Legal battles over training data and copyright raged on, but practical applications marched forward. AI-assisted coding through tools like Cursor and GitHub Copilot became more commonly used, and AI-powered search engines like Perplexity AI reshaped how we retrieve information. Through ChatGPT, I have an assistant that can do a more-than-serviceable job of translation into Lithuanian at a moment’s notice and can write various forms of code (I wrote a WordPress plugin for my site with AI this year and used it to teach me how to program an Arduino). With “deep research,” ChatGPT can search the Web, cite sources to confirm its accuracy, and produce a coherent research paper—not a full literature search, and lacking original insights, but still an impressive overview. I’ve used Google NotebookLM to generate podcasts about books that I don’t have time to read and even to understand manuals (see this Mylar Melodies video for how and why one might do this). I used Perplexity AI to plan a trip to France in October, and it did an excellent job, down to recommending hotels and restaurants. I find it hard to imagine I could have found a travel agent who would have responded to my idiosyncratic requests so well. Specialist apps that use machine learning are everywhere now. Through iNaturalist, I use AI to identify plants in my garden and in the wild, and with the Cornell Bird app, I can identify the hooting outside my house as a Great Horned Owl. Machine learning led researchers to decode entire passages from scrolls burned in the eruption of Mt. Vesuvius. AI is ubiquitous now, at least for some of us. As William Gibson said, “The future is already here—it’s just not evenly distributed.”

Like all technologies, it can be misused, but it is also transformative. From Leonardo da Vinci’s embrace of new painting technologies and geometric projection and Albrecht Dürer’s revolutionary use of the printing press to Eadweard Muybridge’s pioneering motion studies, László Moholy-Nagy’s creation of a painting by dictating its appearance over a telephone, Nam June Paik’s work with video, and John Cage’s explorations in electronic sound, artists have continually explored new technologies. The use of these technologies can be naïve, simplistic, or harmful, but it also advances knowledge. Our own time is no different. As a critic, I wrote a bit about this during the last year. My own interest has been in the visual unconscious and the questions it raises about authenticity and reproduction. I started with California Forever, or The Aesthetics of AI Images, in which I critiqued the AI-generated promotional imagery for the new city in Solano County for its failure to imagine the future and for the uncanny similarity of not just the Solano images but much of AI image generation to the paintings of Thomas Kinkade. I followed this with On the Pictures Generation and AI Art, where I explored how AI-generated images raise questions about the visual unconscious, the mechanics of cultural memory and hauntology, and how the boundaries between the authentic and the synthetic have shifted, contrasting AI art with the Pictures Generation of the 1970s and 1980s. Later in the year, I turned toward art production itself, updating The Witching Cats of New Jersey in terms of both imagery and text, expanding the historical accounts while further analyzing folkloric and occult traditions to explore the intersection of myth and representation. I further examined fakery in the occult, particularly the parallels between spirit photography and AI-generated images—both technologies that blur the line between documentation and invention, creating spectral presences that challenge our perception of authenticity. I ended this year’s work with AI imagery with my essay Speculative Architectures: The Radical Legacy, in which I drew connections to the radical architecture movements of the 1960s. I find contemporary AI-driven architectural practice boring: it merely accelerates existing tendencies toward formal excess. Instead, I was interested in how AI and automation intersect with architectural discourse in deeper ways, particularly through the lens of the radical architecture movements of the 1960s and how groups like Archizoom and Superstudio used speculative design to exaggerate and expose the contradictions of late modernism—collaborating with AI to produce both images and texts. To push these ideas further, I co-created 7 Fables of Accelerationism with two AIs (ChatGPT 4o and Claude 3.5 Sonnet), producing a collection of speculative fiction pieces exploring AI, automation, and the dissolution of human agency in a world shaped by machine intelligence. These fables reflect both the utopian and dystopian possibilities embedded in technological acceleration, tracing the shifting relationship between architecture, labor, and meaning in a post-work society.

The Terminal Highway. From 7 Fables of Accelerationism

The final essay, Oversaturation: On Tourism and the Image, came out last month but was really a product of 2024. Here, I examined the effects of overtourism and cultural overproduction, drawing connections between the Bilbao Effect, Instagram-driven travel, the ease of photography today, and the exhaustion of once-iconic destinations. At the heart of the essay is the concept of oversaturation—the point at which an excess of images, experiences, and cultural output dulls their impact, leaving audiences numb and places drained of significance. In an age when images are endlessly replicated and consumed, the relentless circulation of visuals flattens experience, reducing places to mere backdrops devoid of context or meaning. This commodification of place, fueled by social media’s demand for shareable moments, has led to a kind of cultural burnout. Tourism, in its current form, seems to have reached a point of diminishing returns. How long can it be sustained before the spectacle collapses under its own weight?

Oversaturation is the defining mood of 2024. With major cultural institutions competing to churn out new exhibitions and blockbuster shows, the traditional rhythms that once governed artistic production feel sped up as if on amphetamines. Every season brings another round of high-profile openings and all-too-many biennials, fueling a frantic chase for novelty. The obsession with simplistic politics in the art world has burned out, but without any substitute. Institutions have been left rudderless. For too long, writers and curators have defined movements that only last as long as a single show: sound art, tactical urbanism, post-Internet art, zombie formalism, NFTs, the covidean, parametricism, “the new aesthetic,” the Dimes Square/indie sleaze revival, and so on. Nobody cares anymore, except maybe some art school graduates out for bad wine and parties. In architecture, movements have been less prone to such rapid obsolescence, but the energy similarly has been lost. Where “starchitecture” once captured headlines, such celebrations of elite wealth are now ubiquitous in cities, and there is no difference between starchitecture and junkspace. Thomas Heatherwick’s Vessel is the punctum at the end of starchitecture: a structure whose highest purpose seems to be to overwhelm visitors with despair until they fling themselves over its side. Nor is there room for an alternative: once subversive, blogs, zines, and architecture fiction have faded, abandoned by a generation more concerned with profit.

The same goes for the news. After years of constant crises and hyperbole, the public has reached a point of fatigue and skepticism. The endless drumbeat of dire warnings from all corners no longer commands attention. Where once Americans took to the streets at a moment’s notice, now people who identified with the Resistance of 2016 seem worn out. Instead of galvanizing new mass protests, the news cycle spawns shrugs and eye-rolls. It’s not outright hostility, just exhaustion. Our sensorium simply can’t take constant screaming anymore. In The Week, Justin Klawans calls 2024 “a year of reckoning for the fourth estate.” Indeed it was. While the Right is taking advantage of this in the US at the moment, I am confident that it will experience a similar overload. The endless churn of the news cycle during the Trumpenjahre is going to take its toll. Klawans ended with the following sentence: “‘Legacy media is dead. Hollywood is done. Truth-telling is in. No more complaining about the media,’ right-wing activist James O’Keefe said on X. ‘You are the media.’” But social media is equally ill.

Engadget editor Cheyenne Macdonald writes, “It’s never been more exhausting to be online than in 2024. While it’s been clear for some time that monetization has shifted social media into a different beast, this year in particular felt like a tipping point. Faced with the endless streams of content that’s formulated to trap viewers’ gazes, shoppable ads at every turn, AI and the unrelenting opinions of strangers, it struck me recently that despite my habitual use of these apps, I’m not actually having fun on any of them anymore.” Too many ads and badly written algorithms have crushed content. Desperate to wring engagement from already tired users, social media firms compounded this with frantic moves that often backfired. Many people left Twitter when Elon Musk purchased it; many more left in the subsequent months. Meta’s repeated attempts to replace Instagram’s photo sharing with video reels, along with algorithm tweaks there and on Facebook, led to further user drift and confusion. Frustration mounted as links were demoted, smaller creators saw their reach throttled, and online communities splintered, all contributing to a general sense of retreat from the clamor. I notice that friends leave for BlueSky, which leans left-wing and contains as much extreme and violent language as Twitter does now, if not more (calls for the death of someone disliked by leftist radicals are common), and then they fall silent. TikTok was briefly banned in the US, then restored, but there is bipartisan support against it and the platform’s future is in doubt. Group chats are also dying, a decline captured in Tony Tulathimutte’s story “Pics” from his 2024 collection. By now, anyone who has been on Discord for a while sees a total mess, with far too many servers and no coherence. The overall narrative is one of people stepping back rather than diving in. With everyone shouting to be heard, most are simply tuning out. Yet a handful of dedicated readers still seek out independently produced content wherever it can be found, perhaps the last outposts of genuine engagement in a sea of hype and oversaturation.

Fifteen years ago, I suggested that postmodernism was dead and network culture was upon us. Now, it seems that a new era is being born, its outlines as yet unknown. AI is going to be as much a part of this as the Internet was for network culture and the televisual, the photocopier, and the personal computer were for postmodernism. But if postmodernism was, in Fredric Jameson’s famous line, “the culture of late capitalism,” it strikes me that something different is underway in contemporary culture. For the first time, population growth has ceased in the developed world. From China to the US to the EU, population growth is declining faster than experts had predicted even a decade ago. Since 2017’s Year In Review, I have observed the parallels between our world and “the Jackpot,” the slow-motion collapse first introduced by William Gibson in his novel The Peripheral (2014). Instead of the comet strikes or nuclear annihilation imagined by Hollywood, Gibson’s Jackpot is a series of cascading crises: climate change, resource depletion, biodiversity collapse, and social upheaval, exacerbated by the very technologies that sustain modern life. For Gibson, the Jackpot signifies an ongoing collapse punctuated by moments of technological innovation—innovations that serve the privileged few while leaving the vast majority to suffer and scramble for survival. Gibson’s vision is compelling and grim. He portrays a world where survival is a lottery of wealth and sheer luck, with the richest securing their future through technology and the poorest left behind in failure zones. In his fiction, the Jackpot is defined by stark inequality, unrelenting violence, and scientific advances that, while transformative, fail to offset the broader disintegration of society and ecology. Yet, the real Jackpot diverges in key ways.

I see the Jackpot less as a singular dystopia and more as a chronic condition, simultaneously an enduring state of polycrisis and a slow easing of pressure on scarce resources as population growth declines. Lower birth rates and aging populations are rapidly accelerating worldwide just as artificial intelligence and automation promise to upend labor markets. The Right—from Putin and Xi Jinping to Musk—has raised alarms about declining birth rates, yet even by adding cash payouts for births (a move popular with liberals as well), it has been unable to change matters. But with AI, the global economy seems poised to pivot away from population growth as its primary driver. At the same time, population decline is necessary—we already exceed the carrying capacity of the Earth—and with less resource consumption and less pollution, this Jackpot may yet create a significantly better world. This seems like an essential point about our contemporary culture: we are seeing the beginning of an age of population contraction.

And if AI is poised as a potential solution to the end of population growth and the inauguration of an age of limits—assuming much goes well—it also deepens inequalities, concentrating power and productivity in fewer hands. The uneven distribution of this future is already stratifying societies. For those with access to cutting-edge tools and the drive to use them, 2024 was a year of acceleration—a leap in productivity. For others, it was a year of stagnation or retreat, defined by fear of change more than by an inability to participate in this transformation. The Jackpot is not just about access; it is also about the growing divide between those who can adapt and those who cannot.

Understanding the Jackpot means grappling with this unevenness. It is not the apocalypse, but it is a reckoning. It demands that we rethink what progress looks like when (population) growth is no longer the default. Breakthroughs, breakdowns—or more likely both—we are all already living in the Jackpot. Whether it is a slow-motion end or a new beginning depends on how we, individually and collectively, choose to play our hand.

2022 in review

I missed the year in review for 2021 entirely. The end of 2021 was stressful: it wasn’t a terrible year, the way the Trumpenjahre were, but it was bad. I ran out of steam and never pulled the post together. Not so this year. I’ve posted once a month on average, the most posts since 2016. Most of these were quite long opinion pieces, and some, like the Critical AI Art projects, took weeks of work to produce. Moreover, posting really began in earnest later in the year after I switched to my new server at Kinsta and my new theme powered by GeneratePress (see here). Not only is the site faster for you, dear reader, but it is also much faster for me to work on. For the first time in years, the future of this blog looks bright.

This post comprises four parts: The End of the Covidian; Geopolitical Transitions; Network Culture, RIP; and the Age of Desiring Machines.

I. The End of the Covidian

2022 has been a good year and, although I know some of my readers will disagree, at least half of it felt like we left the pandemic behind. Goodbye to the Covidian era. As I write this, we are off skiing in northern Vermont, and while some things still aren’t open and there are longer lines due to staffing shortages, it feels like COVID is over. Everyone in the family except me has had COVID during the last year, and nobody got as seriously ill as our youngest did from the flu last month. Now, I’m far from an extremist on this: I have all my vaccines, all the boosters, and have taken reasonable precautions throughout the pandemic. But looking at the statistics, it’s clear that vaccines and herd immunity are here. Pretty much every article I see reposted to raise alarm about the new wave of COVID coming “any day now” declares that we need to watch out for a “troubling new variant,” but there is no troubling new variant; it’s just clickbait. Once there is a troubling new variant, then I’ll worry. In the meantime, this is the new normal. You’ll either be wearing a mask forever—which may be good if you are seriously immunocompromised—or not. Epidemiologists are pretty much always in a constant state of panic about diseases; it’s their training to do so. I’d probably be in a constant state of panic if I knew what they know. Instead, I’ll choose to live my life, which is what most people have done now.

It was always naïve or disingenuous of Dr. Fauci and others to claim that vaccines would utterly eliminate COVID the way Ebola was eliminated; COVID was already too widespread and contagious. But if COVID is, as claimed, a novel coronavirus, the odds were that once the massive and tragic initial impact passed and we achieved a degree of immunity, it would never disappear but would become something we could live with in an endemic state, like existing coronaviruses—the new COVID normality. What about long COVID? Sure, it’s real, although many of the studies on long COVID seem quite poor, and instead of fretting about it, maybe we should pay attention to the long-term consequences of all viral infections. I have been struggling with IBS, which began after a bad cold forty years ago, and Epstein-Barr, which causes mononucleosis, appears to cause multiple sclerosis. That’s pretty bad right there, and while I have immense sympathy for anyone affected by any long viral disease, isolation and constant masking have very real consequences on human life, particularly on child and adolescent development. We’re done with it and, unless and until something horrible appears, we’ll be living life in most ways as we did before March 2020. The COVID-induced supply chain crisis is largely over. New challenges are emerging, but the Covidian era is (likely) history.

II. Geopolitical Transitions

The biggest news of 2022 was, of course, the invasion of Ukraine. There has been huge suffering for Ukraine in the single largest violation of territorial sovereignty by a foreign power in Europe since World War II. But the Russian Bear stumbled and got badly bloodied. For centuries, Russia has been an awful neighbor, a bully, not a country that plays by the rule of law. Built on kleptocracy and theft at home, its state model for foreign relations is to invade, rape, kill, and exploit ethnic minorities and their sovereign lands to make up for the shortcomings of the kleptocratic Russian economy. As a result, leaders in the Baltics, Poland, and even Ukraine realized that post-Cold War Russia was a threat and looked westward, where they might seek protection. Putin might have had a chance to counter this had he struck a decade ago, but for some inexplicable reason, his first excursion into Ukraine was halfhearted, and he didn’t complete the task when he had a frightened lapdog as US President.

Corruption, incompetence, and an utter lack of strategic thinking undid the initial Russian thrust and, with help from the US and NATO, Ukraine is not only holding its own, it’s beating back Russian aggression. Russia is resorting to its usual tactics of massive bombardment of civilian positions from a safe distance, but in Ukraine it has encountered a country not only fighting back on its own territory but also lobbing missiles at Russian bases deep in their territory, even as unknown saboteurs destroy Russian infrastructure. It’s still unclear what the outcome will be or when: the result may simply be a question of who runs out of ammunition first, but Putin’s colossal miscalculation means there is a remarkably high chance it will end in the collapse of his criminal regime. I have zero optimism that the result will be a new, more democratic, peace-loving regime in Russia. On the contrary, the collapse of the remaining empire will lead to a series of internal disputes and civil wars and a decline into the general ungovernability of the sort that has taken over much of the Middle East. Doubtless, China and other smaller powers will also make incursions into Russian territory, whittling away administrative regions for their own purposes. In Moscow and St. Petersburg, the next two decades may feel much more like cyberpunk dystopian versions of their 1990s selves: barely governable cities where the mafia and oligarchs take even more control while ordinary individuals resort to unprecedented measures to survive.

I’ve been to Europe a few times since the invasion and to Lithuania twice. Germany made a tremendous miscalculation under Merkel, allying itself with Russia and drinking deeply of its energy, but that energy route is gone forever, and so is Germany’s role as leader of NATO and the EU. France and the UK have also been weakened by their complacency. These are economic empires in decline and—especially if Ukraine wins—a new center for Europe is going to emerge in the East, stretching from the Baltics down through Poland—which will be the dominant force in this Europe—and into Ukraine. Turkey is already proving a powerhouse, but it is less likely to be a threat than an ally of this new democratic East bloc. This is where the energy in Europe is now: nations rejuvenated by existential threats frequently roar back as mighty powers, just as Germany and Japan did after WWII. One last word about Russia: it re-introduced nuclear threats into East-West relations, but it did so poorly, repeatedly drawing lines that have since been crossed. There has been no real escalation in readiness on the Russian side. While escalation certainly remains possible, it’s the silent bear you need to worry about, not the grunting one.

Although China hasn’t suffered the same humiliation that Russia has, it seems to be past its peak as well. The Zero COVID policy was an economic and social disaster that led to mass unrest, and its end was utterly mismanaged. With Russia’s failure in Ukraine, Xi is forced to question his prospects for invading Taiwan, while the West’s turn away from China has become even more urgent as China’s troubles with COVID cement the idea of the country as an unreliable trading partner. Worse still, China has finally turned the corner to the other side of its demographic bubble: its population began contracting in 2022. It will be many generations before it is on the upswing again.

I don’t feel like I know enough about the global south, so I’ll skip that. But all this indicates that the 2020s are going to be very different from the 2010s. The Eastern European nations and Turkey will become increasingly important as Russia, Western Europe, and China are spent. It’s still unclear to me which countries outside of Europe will replace the BRICs, but no doubt there will be some surprises afoot in this coming decade. Even if everyone throws up their arms at this, the US—disregarding all its troubles—is likely to come out of the decade in a position of strength simply because of resources, population, a lack of real threats on its borders, and the existing geopolitical order. Much of this was foretold in geopolitical forecaster George Friedman’s 2009 book The Next 100 Years. Crucially, he repeatedly points out that no matter how violent disagreements between parties within the US seem, the underlying policy doesn’t shift as much as it might appear to: notwithstanding Putin’s useful idiot in the White House, the US not only didn’t leave NATO, it left the alliance stronger by forcing smaller countries to increase their defense spending; likewise, when Democrats took power in 2021, the US’s newly aggressive policy toward China didn’t really change. If you are interested in geopolitics, it’s worth a look.

III. Network Culture, RIP

Even as life is recovering and momentum is returning, there has been a renewed economic crisis throughout much of the world. Some of it is thanks to larger macroeconomic factors, such as the war in Ukraine, but much of it has to do with mistakes in economic policy—the goosing of the market for far too long with loose monetary policy, quantitative easing, the misguided 2017 tax cuts, and too much pandemic relief. But the real cause is the end of a technological and economic cycle that began 20 or 30 years ago (depending on how we measure it) and had its heyday in the 2010s, with the vaunted FAANG stocks (Facebook, Amazon, Apple, Netflix, Google) growing about ten times faster than the rest of the market and driving equity markets to new highs. Over 2022, Facebook/Meta is down roughly 65%, Amazon and Netflix are down over 51%, Google/Alphabet is down 40%, Apple is down 29%, and FAANG-adjacent stock Tesla is down 68%. Bitcoin itself, which isn’t a stock but rather a Ponzi scheme, is down 64%, the S&P Cryptocurrency Large Cap index is down a massive 69%, and we all know how things ended for Sam Bankman-Fried and FTX. Compare this to the Vanguard Consumer Staples Index Fund, which never ran up as high, but is down a mere 3.69%.

This terrible tech performance, particularly in cryptocurrency, is indicative of a speculative bubble deflating, but it also points to a generational shift in technology. I am not an absolute believer in Kondratieff waves—the long economic waves based on technological development that writers from Carlota Perez to Fredric Jameson have embraced—since they seem too deterministic to me, but there is some macroeconomic sense to them. New technologies drive speculative investment, which produces returns that seek more investments of a similar kind. After a while, overinvestment leads to bloat, the bubble bursts, and the economic system declines precipitously. The sharing economy, Web 2.0, and that branding abomination, “Web3,” are finished. And with the end of this system, so ends its cultural logic, network culture.

I first wrote about network culture in the mid-2000s; my first piece on the topic came out in our book Networked Publics. You can read the original version here and a revised version here. This piece has been translated into numerous languages: Lithuanian, Hungarian, Spanish, Chinese, and others (I’ve lost track at this point). I started a book on the topic immediately thereafter but wasn’t able to finish it due to factors beyond my control and a debacle at the publisher. You can read various spin-offs in “Forced Exposure: Networks and the Poetics of Reality,” in Jo-Anne Green, Networked: A Networked Book about Networked Art on turbulence.org; “History After the End: Network Culture and Atemporality,” Cornell Journal of Architecture 8, spring 2011; “Simultaneous Environments,” in Mark Shepard’s Sentient City: Ubiquitous Computing, Architecture, and the Future of Urban Space; and “Architecture of Financialization,” Perspecta 47, 2014 (in the coming days, I will post all these pieces to my site).

The basic idea of network culture came out of my frustration that academics were still using Fredric Jameson’s “Postmodernism, or, the Cultural Logic of Late Capitalism” over twenty years after it had been published, long after that epoch was finished. Jameson was never able to see past this, but a number of us did. Other terms were thrown around, like metamodernism and post-postmodernism, but “network culture” made sense to me, indicating that there was a new cultural logic based on relationships and connections primarily mediated by the Internet. As I wrote then, “Increasingly, the immaterial production of information and its distribution through the network is the dominant organizational principle for the global economy.” As Manuel Castells concluded in The Rise of the Network Society, networks now supplant hierarchies, and the production of information and its transmission across networks is the key organizing factor in the world economy today. On a territorial and even geopolitical scale, Saskia Sassen pointed out in The Global City that megalopolises dominated, linked together by high-speed telecommunications networks, producing the financial and media operations that made the network economy thrive.

Network society was a globalizing society, what Michael Hardt and Antonio Negri called “Empire,” and network culture was a global culture: subcultures and local undergrounds began to decline. In an economy dominated by sharing, cultural mixing, and rapid wealth generation, the idea of the artist as a cultural elite was largely replaced by an interest in participation and remix. And yet, art could also be tremendously valuable as venture capital relentlessly sought new outlets. NFTs were the logical outcome of all this: removing artistic merit in favor of pure speculation—especially by people who knew little about either art or investment—created an utterly bogus $11 billion market, of which over $800 million is stupid-looking apes that appear to be waiting to audition for a Gorillaz video game.

NFTs and the Bored Ape Yacht Club were, however, the last gasp of network culture, a decadent final spurt that only proved the system was spent. The signs of cultural change are all around us. Network culture is dying. Social media is not coming back, not in its traditional form. Just 7% of teenagers say they use Facebook constantly (https://www.pewresearch.org/internet/2022/08/10/teens-social-media-and-technology-2022/). Nobody on Earth, besides Mark Zuckerberg, wants to wear a VR headset to have a meeting in a virtual office full of amputated refugees from a rip-off of a Pixar movie. Twitter is in freefall: the world’s richest man has proven to be the world’s biggest idiot by spending a staggering $44 billion on a site that was already in trouble and cementing its demise by acting like one. These sites are not coming back. The one site that does absorb the attention of youth today, TikTok, is much more like YouTube—a platform for consumption—than a traditional social media site, and it is under constant threat from Western regulators; I would be surprised if there aren’t massive restrictions or even an outright ban put in place by governments by the end of 2023. The beginning of the end really came in 2016. If, on the one hand, Trump rose to power through network culture—heavily employing social media and viral memes to mobilize followers—he also embodied the discontent with globalization that had always been there, but that reached a new fever pitch as the system spent itself.

Back in 2010, Bruce Sterling (in “Atemporality for the Creative Artist”) and I (in “The Decade Ahead”) predicted that this epoch we called network culture would last at least another ten years. I wrote: “Toward the end of the decade, there will be signs of the end of network culture. It’ll have had a good run of 30 years: the length of one generation. It’s at that stage that everything solid will melt into air again, but just how, I have no idea.” COVID was the break, and, after those ten years became “the decade of shit,” nobody is going to miss network culture. In retrospect, the 2000s were the decade of excitement, of forging new connections across age-old boundaries while finding old friends, of a world that had promise and was still imbued with the utopic promises of the early Internet and open source culture. The 2010s showed us just how toxic network culture could get, as both the Right (and the Left!) sought to squash dissent and get their minions in line. Hitler and Stalin would be proud of their descendants. A medium designed for utopic levels of human connectivity hurtled us toward civilizational collapse. That a disease spread by globalization and exacerbated by lies on social media (e.g., the anti-vax movement) ended all this is not surprising. We are lucky it wasn’t something worse.

IV. The Age of Desiring Machines

What comes after? My writing on network culture came a good way into that cultural epoch, and writing about a new one this early is guaranteed to fail, but there were interesting, if still premature, signs in 2022. First, there is the rise of Mastodon and decentralized communication. I have outlined my thoughts on this topic earlier, but suffice it to say, something new is in the works, a form of social media that hews closer to the original intent of the Internet. It may be that Mastodon always remains a small player on the net, but smallness is its strength. We need to bring back undergrounds and subcultures, not giant corporate meeting places that spread toxicity.

2022 was also marked by the rise of “Artificial Intelligences” capable of producing text and images. I have explored these extensively and continue to do so. Ignore the horrific kitsch you see produced by these things, or better yet, don’t: the world of DeviantArt and ArtStation was already bad, a byproduct of network culture permeated by simplistic online fan culture, and NFTs were always stupid. Now anybody can make things like that, and this stuff is valueless. Good!

But calling ChatGPT, Midjourney, or DALL-E “intelligent” is wrong. These platforms have no ability to comprehend what they are doing. But might they be desiring machines? In the Deleuzean sense, a “desiring machine” is formed out of connections: every machine (or entity) is connected to another machine and in turn to another. This desire is not just about wanting something, but about the process of becoming and creating through connections, and that is exactly what these platforms do. Responses to our prompts are based on the machine’s prediction of what a correct response would be. In other words, these systems are characterized by their desire to fulfill our desires. This is all very far from artificial general intelligence—although a baby crying for food is also far from a scientist, or even a toddler, in its ability to reason—but it is something new. There are a lot of unknowns here: we may already be at the end of the rapidly rising part of the S curve for these systems, or we may only be at the beginning. Either way, there is a reckoning in store for cultural producers and mid-level professionals producing banal work, one that will cause massive disruption.
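To make the prediction point concrete, here is a toy sketch in Python—my own illustration, not the code of any real model—of how autoregressive generation works: each step merely emits the statistically likeliest continuation of what came before, fulfilling an expectation rather than comprehending anything. A real LLM replaces the bigram counts below with a neural network trained on billions of documents, but the loop is the same.

```python
# A toy autoregressive text generator (illustration only, not a real model).
# Like an LLM, it never "understands"; it only predicts the next token.
from collections import Counter, defaultdict

corpus = "the machine desires to answer and the machine predicts the next word".split()

# Count which word tends to follow which: a crude stand-in for the learned
# probability distribution over next tokens in a large language model.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(prompt_word: str, length: int = 8) -> str:
    """Generate text one token at a time, always choosing the likeliest
    continuation of the last token: prediction, not comprehension."""
    out = [prompt_word]
    for _ in range(length):
        options = following.get(out[-1])
        if not options:  # dead end: no observed continuation
            break
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

print(generate("the"))  # "the machine desires to answer and the machine desires"
```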

There has been a lot of useless noise about the ability of these platforms to create fakes, and I’ve played with that in my art, but where did we go wrong as educators? What happened to the idea that we should think critically? Wasn’t art history, as codified by Wölfflin, literally a matter of finding out how to authenticate something? Isn’t that what we learned in high school? Who are these people who have forgotten that “critical thinking” doesn’t mean blindly accepting whatever you see, but rather taking a critical distance from a text or an image?

Disruption is the key for the next few years, during which the outlines of a new cultural logic will begin to become apparent. The future is likely to exacerbate the dictum of cyberpunk writer William Gibson: “The future is already here, it’s just unevenly distributed.” Don’t expect a utopic condition from new technologies and don’t expect the rise of socialism. Everyone is out to grab what they can. I am older and less optimistic than I was ten years ago, less prone to see the spectre of capital behind everything but also less prone to think anything can change that much. As the second season of The White Lotus just emphasized, the upcoming generation is as confused, toxic, and prone to gaslighting and self-deceit as the previous ones. Colossal numbers of kids are being medicated, and while some small percentage need it, the amount of medication psychiatrists dispense needlessly is staggering. Russia’s barbaric invasion of Ukraine, the American Right’s refusal to condemn an insurrection that imperiled democracy, and the thoroughgoing denial of climate change (this essay was written during downtime from a ski vacation that is ending in a massive rainstorm) prove that too many humans are still bastards.

In the meantime, I’ll keep working on the design of my 1/2-acre (1/5-hectare) native plant garden, my art, and my writing—most especially on this blog, where the only thing that can impede my publishing is me. I’d love to get new commissions, but if not, there’ll be more to come on this site. Let’s hope 2023 is a better year. I think it will be, but I don’t expect miracles. Let’s hope the new cultural logic is at least as interesting as, but less toxic than, network culture. That would be quite an accomplishment right there.

2020 in Review

Grayness on Mount Desert Island, 2020.

According to conventional chronological schemas, 2020—not 2019—is the last year of the 2010s.* This is convenient since, as I pointed out in last year’s premature review of the decade, the 2010s were “the decade of shit,” and 2020 was a stinking pile of shit. The worst decade since World War II ended with the worst year since 1945.

My “year in review” posts are usually almost as late as my taxes, and when I finished last year’s post on February 12, we were all well aware that COVID was out there. Now, there is no question that I missed the severity of the pandemic back then, but I was on the money about its psychic effects. For all of the horror of COVID, it isn’t horrible enough. COVID is banal. Instead of making us bleed out through all of our orifices as Ebola does, COVID is “a bad case of the flu” that leaves people dead or with debilitating cardiovascular and neurological ailments. But how different is my diagnosis, really, from what happened?

Now sure, this year [2020] we’ve already had firestorms the size of Austria ravaging Australia, a rain of rockets in Baghdad, a Ukrainian jetliner shot out of the sky, a deadly pandemic in China caused by people eating creatures that they really shouldn’t, and the failure of the Senate to uphold the rule of law, but the banality of it all is crushing. While the Dark Mountain set drinks wine around a campfire, gets henna tattoos, and sings along to songs about the end of nature, for the rest of us, it’s just an exhausting, daily slog through the unrelentingly alarming headlines.

COVID brought us yet more crushing banality. The Idiot Tyrant is gone, but we are trying his impeachment yet again. Everything changes, but nothing changes. We were all the Dark Mountain set this year, sitting around our campfires, singing songs about the End. It was another atemporal slog, one day bleeding into another, every day a Sunday in a country where everything is closed on Sundays and there is nothing to do, every day stranger and more disconnected than the last, something captured in comedienne Julie Nolke’s series of videos entitled “Explaining the Pandemic to My Past Self.”

Amidst the disconnection, the Jackpot—William Gibson’s term for a slow-motion apocalypse—cranked up a couple of notches. Just surviving the year was an accomplishment. The balance of life has been thoroughly disrupted, and that disruption isn’t going away any time soon. It’s not just COVID: we now feel certain that there will be more pandemics, more massive wildfires, and more superstorms in our future. The Earth isn’t dying (sorry, climate doomers), but there will be huge losses of species worldwide, human population decline is well underway in advanced societies (the US is finally on the bandwagon here), and massive deaths will take place across the planet until the population comes back to a sustainable level decades from now.

But the premise of the Jackpot is that it isn’t a final apocalypse: there will be another side. In his Twitter feed (@GreatDismal), Gibson focuses on the horrific and unjust nature of the Jackpot, but there will be winners, selected on the basis of wealth and sheer dumb luck. What might this say about the US election and the fact that 46% of Americans voted for a cretin? Now, there is nothing particularly new about melding Tourette’s and dementia into a public speaking style; there are plenty of lunatics sitting on their porches screaming obscenities at their lawn ornaments. Everybody knows that Uncle Scam’s persona as a billionaire—or rather the King of Debt (his own term!)—is an act. The man with the golden toilet is not a successful businessman. He is weak, a loser who can’t stay married or stay out of bankruptcy court. Four years of misrule ended in abject failure: defeat in both the electoral and popular votes, bans from social media and, with his businesses failing, a forced exit from office in shame to face an unprecedented second impeachment, an array of civil litigation, and criminal indictments for fraud, tax evasion, incitement to riot, and rape. But this—not a misguided notion of him as a success—is the real point of his appeal. The short-fingered vulgarian is a life-long loser, a reverse Midas whose every touch turns gold to lead. In the face of the Gibsonian Jackpot, his appeal was that of a stupid version of Homer Simpson: grabbing whatever scraps he could and, when that failed, LARPing as President, destabilizing society, and just blowing everything up.

LARPing was big in 2020, which saw the attempted kidnapping of Michigan Governor Gretchen Whitmer by wingnut idiots, various insane protests by COVID deniers, the attempted coup of the Capitol Insurrection, and the riots that developed alongside the Black Lives Matter protests. BLM was the standout among these, not only a good, just cause, but also because the majority of the protests themselves were peaceful—such as the one in our town of Montclair, New Jersey. None of that was LARPing, but the riots that accompanied it were. For the most part, these involved fewer people with genuine grievances than Proud Boys, Boogaloos, anarchists, and grifters who came in to loot and burn down whatever they could. Although there were kooky moments on the Left like the Capitol Hill Autonomous Zone, Antifa, for however much it exists, didn’t do much, certainly proving to be far less trouble than white supremacist-infiltrated police forces in paramilitary gear. Still, the widely-vaunted second Civil War never came about, and the arteriosclerotic LARPers on the Right limped off the field in defeat after they got a spanking at the January putsch.

A number of observers at both the Capitol Insurrection and CHAZ—including some of the idiots who took part in them—noted that these events felt much like a game, specifically an Alternate Reality Game (ARG). In a typical ARG, players look for clues both online—think of the QAnon drops, the Trumpentweets, or the disinformation dished out by the skells at 4chan, 8chan, and so on—as well as out in the world. Jon Lebkowsky, in a post at the Well’s State of the World, and Clive Thompson, over at Wired, compare QAnon to an ARG. Indeed, gaming is taking the place of religion (whichever grifter figures out how to meld this with Jesus and his pet dinosaurs will get very rich indeed), with the false promise that playing the game and winning will deliver one to the other side of the Jackpot. Somewhere, I read that when asked what he would do differently if he had made Blade Runner a decade later, Ridley Scott replied that he would be able to skip the elaborate sets and just point the camera down the streets of 1990s Los Angeles. The same could be said for the Hunger Games today.

But not everything was LARPing. If Cheeto Jesus is an icon for LARPing losers, Biden was elected on the premise of staving off the Jackpot by returning adults to the White House. This is not a bad thing; we might as well try. Still, from the perspective of Jackpot culture, the most interesting political development of the year was the candidacy of Andrew Yang, whose cheery advocacy of Universal Basic Income (aka the Freedom Dividend) masked the dark, Jackpot-like nature of his predictions. Let’s quote Yang’s campaign site on this: “In the next 12 years, 1 out of 3 American workers are at risk of losing their jobs to new technologies—and unlike with previous waves of automation, this time new jobs will not appear quickly enough in large enough numbers to make up for it.” No matter how friendly Yang’s delivery, there is a grim realism to his politics, an acceptance that things will never be better for a massive sector of the population. Certainly some individuals will find ways to use their $1,000-a-month Freedom Dividend as a subsidy to do something new and amazing, but 95% will not. Rather, they will form a new and permanent underclass as they fade into extinction. Again, the point of Yang’s candidacy isn’t the cheerleading for math and STEM; it’s the frank acknowledgement that the Jackpot is already here.

On the other hand, toward the end of the year, Tyler Cowen suggested that we might be nearing the end of the Great Stagnation (he is, of course, the author of an influential pamphlet on the topic); you can find a good summary of the thinking, pro and con, by Cowen’s student Eli Dourado here. In this view, advances such as the mRNA vaccines, the spread of electric, somewhat self-driving vehicles, the pandemic-induced rise of remote work, and the huge drop in the cost of spaceflight are changing things radically and could lead to a real rise in Total Factor Productivity from the low level it has been stuck at since 2005. Is this a sign of the end of the Jackpot? Unlikely. That won’t come until a series of far more massive technological breakthroughs, probably (but not necessarily) involving health (the end of cancer, heart disease, and dementia), the reversal of climate change, working nanotechnology, and artificial general intelligence. But still, there are signs that early inflection points are at hand.

Personally, we experienced one of these inflection points when we replaced our aging (and aged) BMWs with Teslas. I wound up getting a used Tesla Model S last January and then immediately turned around and ordered a brand new Model Y that we received in June. No more trips to the gas station, and while “Full Self-Driving” is both expensive and nowhere near fully self-driving, it is a big change. Longer road trips—which under the pandemic have been to nurseries on either side of the Pennsylvania border to buy native plants—have become much easier, even if I still have to keep my hands on the wheel and fiddle with it constantly to prevent self-driving from disengaging. But harping on too much about the incomplete nature of self-driving is poor sport: in the last year, Tesla added stop-light recognition to self-driving, and a new update in beta promises to make city streets fully navigable. Less than a decade ago, self-driving was only a theoretical project. Now I use it for 90% of my highway driving. That’s a sizable revolution right there. Also, the all-electric and connected nature of these cars makes getting takeout and sitting in climate-controlled comfort in my vehicle when on the road a delight. Electric vehicles were a big success this year in our neighborhood, which is a bellwether for the adoption of future technology (when I saw iPhones replace BlackBerries on the bus and train into the city, I bought a bit of Apple stock and made a small fortune), and Teslas have replaced BMWs as the most common vehicle in driveways.

Back to the pandemic, which accelerated a sizable shift in habitation patterns. Throughout the summer, there was a lot of nonsense from neoliberal journalists and urban boosters about how cities were going to come back booming, but with more bike lanes, wider sidewalks, less traffic, and more awesome tactical urbanist projects to appeal to millennials. Lately, however, those voices have fallen silent, and with good reason. In this suburb, the commuter train platforms are still bare in the mornings, and the bus into the city, once packed to standing-room-only levels every evening, hasn’t run in five months. A friend who works in commercial real estate says that occupancy in New York City offices is at 15% of pre-pandemic levels. Business air travel is still off a cliff. Remote work isn’t ideal for everyone and every job, but neither was going into the office. For sure, the dystopian open offices, co-working spaces, and offices as “fun” zones are finished. People are renovating their houses, or upsizing, to better live in a post-pandemic world of remote work. Another friend, who works for a large ad agency, told me that they did not renew their lease for office space and do not plan ever to go back to in-person work, at least for the vast majority of the staff. When employees gain over two hours a day from not commuting and corporations save vast fortunes on rent, remote work seems a lot more appealing. Retail sales here and in the surrounding towns have gone through the roof, just as they have in many suburbs.

But it isn’t just suburbia that has prospered at the expense of the city; exurbia has returned too. Way back in 1955, Auguste Comte Spectorsky identified a growing American cultural class that he dubbed “the exurbanites,” made up of “symbol manipulators” such as advertisers, musicians, artists, and other members of what we today call the creative class. Spectorsky observed that many of these individuals eventually tended to drift back to the city. This time may be different. After two decades in the city, the creative class is turning to places outside it with attractive older houses and midcentury modern properties, walkable neighborhoods (virtually all of Montclair, for example, has sidewalks), good schools (which generally mean high property taxes but are an indicator of a smarter, engaged populace), amenities like parks and places to hike, decent bandwidth, as well as independent restaurants, shops, and cultural attractions. There will always be variations in taste: some people really do want to eat at the Cheesecake Factory and live in a Toll Brothers McMansion, but these will appeal to relatively few of the people fleeing cities at this point. Thus, the Hudson Valley—full of older, more interesting architecture, great natural resources, and quirky towns—is booming. I predict some reversion toward the mean after the pandemic ends, when some of the people who fled to the country realize they aren’t suited to a place without SoulCycle, but this will be only a partial and temporary reversion.

I predict that even after the pandemic ends, there will be a greater interest in self-sufficiency among young people who move to suburbia and exurbia. Manicured lawns will be less important than vegetable gardens. Homesteading, permaculture, and a drive back to the land not seen since the 1960s are under way. It would be a very good thing if the next generation were more in touch with their land and less prone to hiring “landscapers” who treat properties as sites for industrial interventions: chemical fertilizer for lawns and a phalanx of gas-powered lawn mowers and leaf blowers to remove any stray biological matter.

As far as cities go, the pandemic is triggering a necessary contraction. The massive annihilation of real estate value it has caused should go a long way toward undoing the foolish notion that urban real estate is always a great investment. It’s not: just ask anyone who bought a house in Detroit in 1965. Real estate in first- and second-tier global cities has become wildly expensive, disconnected from the underlying fundamentals. When individuals are paying rents that absorb over 30% of their salaries to investor-owners who are not covering their mortgages with those rents, something is very wrong. This broken system has been able to function due to the perceived hedonic value of restaurants, bars, and cultural events, but these things too have been failing in recent years. Long prior to the pandemic, the cost of rent decimated independently-owned restaurants and retailers, with the latter also hurt by online shopping. The golden age of dining out (if it really was a golden age… I would say that better food could be had in other, less copycat eras) was already declared over in 2019. “High-rent blight,” in which entire streets’ worth of storefronts sat empty due to ludicrous rents, has been common for some time now. Tourists made up more and more of the street crowds while loss-leader flagship stores for chains like Nike and Victoria’s Secret replaced local businesses. With the hedonic argument for staying in the city rapidly disappearing, it was only a matter of time before individuals began departing and, in New York, population had begun to drop by 2018 (see more on all of this in Kevin Baker’s piece for the Atlantic, “Affluence Killed New York, Not the Pandemic”). Perversely, this is a good thing, as it will likely lead to a bust in commercial real estate prices and a decline in unoccupied or AirBNB’d apartments, thus making global cities like New York places with potential again. Moreover, many second-tier cities such as St. Louis, Kansas City, and Cleveland are experiencing new growth as individuals able to work remotely look for places that are less expensive—and thus have more potential—than New York or San Francisco.

These shifts are huge and for the better. As I tried to tell my colleagues at the university, there is no housing crisis, at least not in the US and Europe; there is only the appearance of one because of the uneven distribution of housing: a glut in some areas, a shortfall in others. The pandemic has likely undone this a bit. Of course, places that are too politically Red, too full of chains, too full of copycat McMansions are unlikely to come back anytime soon, if ever. The Jackpot continues.

Still, one reason I foresee a perversely rosy future for the urban (and suburban and exurban) environment is the Biden administration’s interest in infrastructure. Back in 2008, I shocked design critics when I stated that there would be no progress in infrastructure for the foreseeable future. “But, Obama,” they complained. “But, Obama,” I clapped back, “just appointed Larry Summers as his chief economic advisor, and Summers will bail out the banks, not fund infrastructure.” I expect the opposite from Biden, who has adopted a “nothing left to lose” position as a purportedly one-term President, is a devotee of train travel, and is eager to make great progress on climate change. Appointing Pete Buttigieg, one of his two smartest opponents in the primary (the other being Andrew Yang, of course), as Secretary of Transportation is a key move. This will be Buttigieg’s opportunity to prove himself on the national stage, and he will fight hard to do so, just as Biden expects. Expect more electrification across the board and, I suspect, more advances with self-driving vehicles. Although certain measures—such as, in the New York City area alone, the Gateway Tunnel between New Jersey and New York (now delayed over a decade thanks to Chris Christie and Donald Trump’s vindictiveness against commuter communities that would not vote for them) and the reconstruction of the Port Authority Bus Terminal—will help cities, I again predict more emphasis on decentralization and activity outside the city.

All this may have salutary cultural implications. The global city is played out. Little of interest happens in New York, San Francisco, London, Paris, or Barcelona. These cities are too expensive for the sort of experimentation that made them great cultural centers, and the diffusive nature of the Internet, capitalism, and overtourism have made them all the same. Residents of cities that have been victims of overtourism have seen this as an opportunity to reset, while the physical isolation of cities is going to increase reliance on local institutions. With some luck, all this leads to a new underground, with greater difference creating greater diversity and potential. Of fashion, Bruce Sterling writes, “Fashion will re-appear, and some new style will dominate the 2020s, but the longer it takes to emerge from its morgue-like shadow, the more radically different it will look.” The same could be true of all culture. Globalization was an incredibly powerful force, but it has played itself out. I don’t agree with the protectionist instincts of the Trumpenproles, but today culture’s hope is to thrive on the basis of the difference between places and cultures, not on greater sameness. Architecture has been very slow to react to all of this, in part because many intelligent young people have drifted into other fields, like startups, but I am optimistic that we might soon get past the ubiquitous white-painted brick walls and wood common tables (the architecture of the least effort possible, to match fashion and food driven by the least effort possible), the tired old Bilbao effect, and quirky developer pseudo-modernism.

So much optimism on my part! Even I am shocked that I am so positive. But why not? The end to this exhausted first phase of network culture is overdue. Time for a new decade, at last.

*The reason for this is that there is no Year Zero: 31 December 1 BC is followed by 1 January 1 AD.

Year in Review 2018

I let six years go by without a Year in Review post before restarting the tradition last year. I won’t let that happen this time, although, with the frenetic pace of news, it seems like we have all aged six years in 2018.

Things are in a profound state of in-between. On the one hand, the Trumpian kleptocracy is accelerating. With Kelly and Mattis leaving in December, the “adult day care center” has closed, leaving only a pre-school version of Lord of the Flies in the White House. And yet, the end seems to draw near for this vexed time. Voters gave a resounding rebuke to Republicans in Congress, one that may ultimately be generational in nature and that gives Democrats subpoena power. Expect action soon. What’s in those tax returns? How much crony capital have Jared and Donald received over the years? By this time next year, we should know. Moreover, the Mueller investigation is accelerating, drawing closer and closer to the great kleptocrat’s inner circles even as we are left guessing at what sort of revelations we will learn in the months to come.

But that said, massive global instability is the price we pay for Trump. Authoritarian forces are on the rise throughout the world. It would be easy enough to say that these forces have been there all along, but it’s more accurate to say that the actions of individual players still matter. Trump was a colossal misfire, an eruption of senile admirers of fascism who think that a country of coal miners, machine guns in every classroom, and Christian sharia law will bring Jesus back, no doubt riding on a dinosaur. But with the markets on a roller coaster ride that ultimately ended down in almost all sectors worldwide, we have to wonder how long business will find the radical Right palatable. Constant turmoil and increased tariffs are making CEOs wonder how useful Trump really is. It’s time to take gramps out of the White House and put him in a nursing home.

Beyond the rise of authoritarian power, 2018 was the year in which the rapid pace of climate change became obvious to anyone with a pulse. I am not a big fan of Alexandria Ocasio-Cortez (democratic socialism is a ticket to another right-wing victory), but her Green New Deal just makes sense. The US has spent trillions upon trillions subsidizing oil in various ways (from outright subsidies to the construction of roads, which are, of course, paved in oil) and fighting wars in the Middle East to safeguard fossil matter, so why shouldn’t we treat energy independence as a matter of national security? There are 50,000 coal miners in the United States, fewer than the 89,000 employees of Sears who will lose their jobs this year and far fewer than the 1.6 million university faculty in the US. If the Democrats want to win in 2020, running on a platform of stopping the rise in temperatures worldwide and the ballooning national debt while restoring basic rights and freedoms taken away during the Trumpian regime would be a good place to start (given that the GOP has forgotten about the deficit now).

As for architecture, what is there left to say about it anymore? Starchitecture has faded; nobody gets excited about cool forms anymore. How can we be surprised? No starchitect is making interesting buildings; in fact, the whole movement has been something of a bust. Worse, architecture is no longer the profession that shapes space; digital technology is. Failing to recognize this dooms the profession to irrelevance, like heraldry in the days of mustard gas.

But architecture isn’t the only institution without purpose. Silicon Valley, it seems, has finally met a time in which nobody cares about what it makes or promises. People are not only tired of big tech, they are tired of startups that promise the world when their only business plan is to be acquired as soon as possible. In fact, for all its promises, startup culture was a bust, and it is far smaller than it was two decades ago. Apple made its best products ever (I am typing this on one of the amazing third-generation iPad Pros that I bought), and was punished for it by a massive drop in its stock price.

If any tech became widely accepted by the mainstream in 2018, it was the Internet of Things and the smart home. Amazon’s Alexa, Nest’s and Ring’s video doorbells, and Lutron’s Caseta system were among the winners in this transformation of our interior lives. There is nothing terribly radical about the smart home and, frankly, a lot of the panic about surveillance by this hardware is silly (as if smartphones don’t already do this). But embedded technology is everywhere now.

Still, it’s odd how art (and architecture) has missed this change. For want of anything else, we are still in the era of post-Internet art, an idea which, unfortunately, I am somewhat to blame for. If there was some merit to thinking about how network culture permeated art in 2011, talking about “post-Internet art” now is about as useful as talking about Abstract Expressionism as “post-automobile” art. Art, like architecture, has lost any purpose or drive forward. Technology and art have drifted apart again, and only a few of us hack away at the intersection of the two. Still, art and architecture are always falling into ruin and being reborn. Perhaps this time will be no different and the work we are doing will lead to a rebirth?

The academy is sick as well. Years of poor management practices and bloated administrations have gutted the arts and humanities as faculty have been forced to take on heavy teaching loads and real research has been eliminated (in case you wondered, I left Columbia when the new Dean did away with the entire research arm of the school to appease the finance office). Two decades ago, I decried “staff-ism” in schools, but now that is all that’s left.

I left teaching completely this year, resigning from my position at the University of Limerick, Ireland, after thirteen years and bringing nearly thirty years of teaching to an end. In large part, it was the basic inability of universities to function that drove me away. What good is it for me to waste my time jumping through hoops to get paid when there are people in finance offices whose job is literally to ensure that faculty don’t get paid (I’ve been told this point blank)? And teaching itself isn’t much fun anymore. Students, for their part, are more interested in looking at their Instagram feeds than in listening to what I have to say. It’s the opposite of the 1960s, when students proclaimed the irrelevance of their teachers. Now, faculty proclaim the irrelevance of their students. Bah. It’s not worth it. It was a mistake to keep going over the last couple of years. I may come back to education one day—I have many great memories of my students and many of them remain my friends to this day—but now is a time when the university is very much irrelevant. Independence is what we need, not sick institutions.

Speaking of sick institutions, there is welcome news this year regarding Facebook: we saw the first signs of that hated enterprise starting to implode. Zuckerberg’s pathetic attempt to get a date by building a Web site has wound up doing tremendous damage to the Internet by reducing all content to a general level of idiocy. Older forms of Internet communication such as blogs, email lists, and Internet forums are dying, and since nobody reads books or magazines anymore, we communicate less than we did thirty years ago. Instead, we don’t even get FarmVille; we get social diarrhea. Nobody likes Facebook. Independent voices are needed on the net again. It’s not up to someone else to provide them, it’s up to us.

I rebuilt my Web site last week in hopes of returning to being an independent voice in the field. I finished the last year in review with a similar resolution; maybe this year I’m cranky enough that it’ll actually happen.

2017 in Review (and more)

I begin writing this post to usher in 2018 with a large purring cat on my lap, glad that we are home from skiing in Vermont and doing her best to prevent me from typing. Until 2012, it was a yearly tradition for me to take stock of the state of things, personally, professionally, and in the world at large. The last five years have been something of a whirlwind, and even though I have tried, I haven’t returned to this tradition.

Looking back at the last entry, from January 2013, I began with the words:

With the second inauguration of Barack Obama as President of the United States, we also breathe a guarded sigh of relief. The eight years of Republican rule at the start of the millennium were enough to discredit that party for the rest of the millennium, but it also came with a certain weariness. This time around Obama did not run on a platform of hope. And how could he have? He squandered that platform within a month of assuming office the first time around, appointing a boys’ club of advisors that made the early comparisons to Kennedy’s Camelot seem all too prescient. The first Obama administration, backing finance over building infrastructure and helping the poor, turned to the expediency of drone strikes over the messiness of peaceful resolutions, dismissed both single-payer and government options for national healthcare, and stayed quiet about climate change.

Well, those seemed like pretty good times, all things considered. This inauguration was met with shock and dismay by anyone with any wit whatsoever, as we start with a President mired in dementia and crippled by narcissistic personality disorder, surrounded by glad-handers, hangers-on, and family members more concerned with coaxing him into doing their bidding than with governing. The bane of the academy, neoliberalism, is gone for now, replaced by outright kleptocracy.

The core of this dysfunction, ultimately, is the infrastructural stalemate of government. By this, I am referring to the condition that I outlined in the introduction to The Infrastructural City, in which competing infrastructures and groups of stakeholders face off against each other, rabidly defending their turf even at the risk of the collapse of the system as a whole. Systems that cannot survive this condition (for example, AT&T and the Bell System) die or are radically restructured (in that case, through the development of competing infrastructures of telematics), even if that process can take decades. By 2012, we could see infrastructural stalemate permeate Congress. The Republican majority not only refused to work with the Democrats, it was internally hamstrung by the uncompromising demands of the Tea Party movement. The ensuing deadlock over budget priorities and the threat of repeated government shutdowns over rising debt led Congress to institute budget sequestration, a series of automatic spending cuts that, once set in motion, operated without the need for direct intervention. Sequestration was the sort of solution that managers of infrastructure often resort to: a jury-rigged system that everyone hates, but that everyone hates less than what competing stakeholders might propose.

In this light, the Republican victory of 2016 could be seen as infrastructural. Recall that in 2008 Obama had promised he would be known as “the Infrastructure President,” only to quickly swerve away from these promises in favor of propping up big banks and the financial system. Two terms later, not only had the more familiar infrastructures of roads, bridges, rail, and air continued marching toward collapse, the same sicknesses that affect them had infected government. But, you may ask, what about the blatant racism, calls to violence, and other horrors of the 2016 election? What do those have to do with infrastructure? Simple: fascism. The strong man comes in and promises that all that needs to be done is to grab the lazy sods by the ears and bash their heads until they listen. Recall that Mussolini bragged he got the trains to run on time, while Hitler’s great pride was the world’s first integrated highway system, the autobahn. Similarly, in Turkey, Erdogan is undertaking a series of “crazy projects” such as a tunnel under the Bosporus, a new canal connecting the Black Sea with the Sea of Marmara to minimize traffic in the Istanbul Strait, a massive new airport in Istanbul, and so on. It’s this identical impetus that animates our current President as he claims he will impose order on the government one way or another.

Apart from his own failings, most notably the collusion with the Russian government during his election campaign, a crime already sinking his presidency, as well as his utter inability to focus or process information, the President faces challenges from the checks and balances that the founding fathers built into the US system: infrastructural stalemate as a Constitutional strategy. Unlikely to be able to turn it into the outright authoritarian rule that he desperately craves, the President faces a political dead end. In the meantime, he and those closest to him have turned to the best option they have: kleptocracy. In the name of infrastructure, the administration does what it can to remove regulations on pet industries (fossil fuel, mining, big manufacturing, in short, all the old industries that find it difficult to be viable today), and while we hear constant promises about a big infrastructure act coming, I am convinced that it will largely be yet more concessions to specific supporters and, whenever possible, his own failing business ventures.

The one saving grace is that so far, I don’t see an asset bubble of the sort that crippled the markets in 2008. Certainly, some stocks, such as Facebook, are worth far more than they should be, but if assets on the whole are high, liquidity is as well. As markets crashed overnight following Brexit and the 2016 election, banks and other investors bought up the temporarily devalued assets to make a healthy profit. From the perspective of the markets, this is a good thing, as one danger of the Federal Reserve’s timidity about raising interest rates is that it leaves precious little margin to restart the economy during a correction. Still, there is some likelihood that by increasing corporate profits, the Tax Reform plan (read: the redistribution-of-wealth-to-benefit-the-oligarchs plan) will keep the economy going longer without a correction and overheat it, thus leading to Stagflation Mk. 2, but for now, I am thinking we will be in a similar economic condition a year from now. The markets may be up or down a bit, but likely no great change. Unless, of course, some very bad decision is made about North Korea or Iran or …

Through it all, the importance of network culture becomes clearer and clearer. If YouTube made ISIS, Twitter (with a little help from Russians running bots and hacking into DNC servers) elected this President. The last two years are entirely unimaginable without social media and e-mail. But alas, any dreams of an online Jeffersonian democracy are long gone. How we will dig ourselves out of this hole is beyond me.

Worse yet, the US retreat from the Paris Accords, the attacks on the Affordable Care Act, and the spread of oligarchy suggest that we are now firmly in the early phases of what William Gibson terms “the Jackpot.” Dark Accelerationism is here, with Steve Bannon (Steve Bannon! I mean, think about it!) still guiding nihilistic forces in the White House from behind the scenes. Still, humans do have a funny capacity to make do, and maybe the orange skies of Blade Runner 2049 will remain confined to that alternate timeline.

I finished my last year in review with the words:

So we end with a paradox. 2012 taught us that the stagnant state of network culture isn’t stasis. Instead, it is accompanied by massive, unpredictable change. It’s up to us to figure out how to harness that unpredictability for good and how to make extreme change and extreme proposals work to better society. I hope that in retrospect I will have something more positive to say about 2013.

Let’s hope for more in 2018 then.

Blogging is an enterprise without hope these days, a dead medium, but in that, it remains something of a form of resistance, hosted on my own server, outside the censoring eye of any institution. I won’t make any promises about whether I will blog more or not in the next year, but it remains part of my practice. As most of my readers know, in the five years since the end of 2012, I left GSAPP after a new Dean shut down Mark Wigley’s labs project and I recognized that, at least for the time being, universities were too big and dinosaur-like to host the kind of work I am pursuing. That isn’t to say that I won’t continue with some teaching here and there. Maybe I will even go back to teaching full time, but for now it seems like it’s time to side with the nocturnal mammals scampering around furtively as the dinosaurs lumber through their last days (last days take a while, and meteorites don’t always come, so don’t hold your breath for that one).

Back in 1980, Robert Fripp posted a manifesto on the back of his album “Let the Power Fall.” It has been reposted on the Discipline Global Mobile site, although I had to laugh when my first attempt to find it again led to this post, made just as I was transitioning to Columbia and launching the Netlab. In the two years since I left full-time teaching, I have found much more time and space to pursue the sort of projects that drive me, not only critical writing but also exhibitions. While there may be a few more projects involving my father’s work, like this year’s exhibit, I am hardly his keeper and can only set that research in motion so that others can take it on and offer different insights. More important is my work with the Netlab, AUDC, and on my own. Different versions of these institutions will rise and recede; some may vanish and new ones may take their place. Throughout it all, however, I set my intention for the next year and the next five years to be a time for deepening my own research, radically interrogating the boundaries of space so as to help us come to an understanding of what this thing called network culture really is.
