It’s strange to measure every year against a concept developed by a science fiction writer, but William Gibson’s line “The future is already here—it’s just not evenly distributed”1 has been my north star for my recent year-in-review essays. Gibson meant that the future was unevenly distributed by class: the wealthy receive high-tech healthcare while the world’s poorest live in squalor—though one might ask which of these is really our future. Yet the quote has been repeatedly misread as a claim about time and space: that the future arrives somewhere first, perhaps unseen, while the rest of the world catches up. But this misreading is more productive than Gibson’s intent. Gibson’s critique of inequality is fair enough, but we all know this, decry it, and go on about our business. The misreading, on the other hand, is a theory of historical change.
With the release of ChatGPT in late 2022, a temporal rift opened, shattering the post-Covidean present. But many tried the early tools, encountered hallucinations, read articles about slop and imminent environmental ruin, and reasonably concluded there was nothing to see. By 2025, a cursory scan of AI news would have assured them that the technology had proved a bust. OpenAI’s long-awaited updates disappointed, and the company flailed, turning to social media with Sora, a TikTok-like feed of AI-generated video. Meta seemed to abandon its effort to build a competitive AI and turned instead to generating content for Instagram and Facebook, something nobody on earth wanted. Talk of a bubble started among Wall Street pundits. The hype-to-disappointment cycle is familiar, and the dismissals were not unreasonable.
But again, the future isn’t evenly distributed, and if you don’t know where to look, you could be excused for believing it’s all hype. Looking past such failures, 2025 was actually a year of breakneck progress. Anthropic’s Claude emerged as the most capable system for complex tasks, Google’s Gemini became highly competitive, and DeepSeek and Moonshot AI proved that China was not far behind. More significant than any single model was the emergence of agentic AI—systems that can take on multi-step tasks: navigating filesystems, writing and executing code, and working across documents. Claude Code was the year’s groundbreaking innovation. While “slop” was Merriam-Webster’s word of the year, “vibe coding”—using agents to write programs—was far more important. Not only could programmers use agents to accelerate their work; it also became possible for non-programmers to realize their ideas without any knowledge of code, a radical change in access I explored in “What Did Vibe Coding Just Do to the Commons?”.
By first-world standards, at least, these tools are remarkably democratic and inexpensive. A basic Claude subscription costs about as much as a month of streaming, and even the $200 maximum-usage tier costs less than a monthly car payment. For many, however, the barrier is not price but something deeper—a resistance approaching revulsion. These tools provoke fear in a way that earlier technologies did not. It’s not the apocalyptic dread of the doomers or the Dark Mountain sensibility that collapse is near. Rather, it’s a threat to the sense that thought itself is what makes us distinct. The unevenness of the future is no longer about access; it’s now about willingness to engage.
As a scholar, I find thinking about the very short term strange. I have always been suspicious of claims that radical change was upon us. I would rather align myself with the French Annales school concept of la longue durée, which the great Fernand Braudel defined as the long-term structures of geography and climate. Above these ran the medium-term cycles of economies and states, while he dismissed the short-term événements of rulers and politics as “surface disturbances, crests of foam that the tides of history carry on their strong backs.”2 Events, he wrote elsewhere, “are the ephemera of history; they pass across its stage like fireflies, hardly glimpsed before they settle back into darkness and as often as not into oblivion.”3 The real forces operate beneath, slowly, often imperceptibly.
Curiously, Braudel himself embraced technological change in his own work. In the 1920s and 30s, he adapted an old motion-picture camera to photograph archival documents—2,000 to 3,000 pages per day across Mediterranean archives from Simancas to Dubrovnik. He later claimed to be “the first user of microfilms” for scholarly historical research.4 His wife Paule spent years reading the accumulated reels through what Braudel called “a simple magic lantern.”5 Captured in 1940, he spent five years as a prisoner of war and wrote the entire first draft of The Mediterranean—some 3,000 to 4,000 pages—from memory. Paule, meanwhile, retained access to the microfilm and notes in Paris, and after the war the two reconstructed the text, verifying his manuscript against the microfilm and adding footnotes and references.6
In 1945, the same year Braudel was liberated, Vannevar Bush published “As We May Think,” in which he imagined a device he called the “Memex”: a mechanized desk storing a researcher’s entire library, indexed and cross-referenced, expandable through associative trails.7 The vision remained speculative for decades. Now the world’s archives are being digitized, and AI systems summarize and search across them in seconds, translating from virtually any language. To take one example, earlier this year I used Google’s Gemini to translate the Hierosolymitana Peregrinatio of Mikalojus Kristupas Radvila Našlaitėlis, a sixteenth-century pilgrimage narrative, from an online scan of the Latin first edition. The result is not a polished scholarly translation, but a working text that gave me a good sense of a narrative previously unreadable to anyone without proficiency in Latin or Polish (the only language into which, to my knowledge, it had been translated). The role of the intellectual is being transformed—not replaced, but augmented in ways Bush could only sketch. This feels like something other than foam.
How to account for such a rapid shift? Manuel DeLanda offers one answer in A Thousand Years of Nonlinear History. Working in Braudel’s materialist tradition and drawing on Gilles Deleuze and complexity theory, DeLanda describes how flows—of trade, energy, and information—accumulate and concentrate until they cross a threshold and undergo a phase transition, radically reorganizing into a new stable state. But here is the key insight: intensification is la longue durée. The accumulation of flows that began with the Industrial Revolution—or perhaps with writing, agriculture, or even symbolic representation itself—is the deep structure behind our era. Steam, electricity, computing, the internet: each was a phase transition within a longer arc of intensification. Cities accelerate such processes, as Braudel showed, concentrating capital and labor until new forms of economic organization emerge—Venice, Antwerp, Amsterdam, London, each becoming a site at which the future arrived first. Such transitions are not opposed to la longue durée; they are the moments when intensification crosses a threshold.
The continued pace of change this year underscores that there has been no return to equilibrium. But it has been accompanied by unprecedented resistance to technology, appearing as simultaneous terror at its apocalyptic potential (for jobs, if nothing else) and dismissal of it as useless, especially among Gen Z. A January 2026 Civiqs survey found that 57 percent of Americans aged 18–34 view AI negatively—more than any other age group. Curiously, the seniors category, which now includes most boomers, was the least resistant to AI, followed by Gen X and older millennials, all groups that grew up amid radical societal and technological change.8 It seems paradoxical that the smartphone generation recoils from the tools of the future. Understanding this resistance means understanding the mentalité that shaped it—what Braudel’s successors in the Annales school called the collective psychology formed through lived experience.9 For Gen Z, that formative experience was network culture—both a successor to postmodernism and a form of collective psychology I did not fully understand at the time. When I wrote on network culture in 2008, social media seemed to promise connection; instead, it brought division.10 The networked self was indeed constituted through networks, not merely isolated in postmodern fragmentation, but the fragmentation was now collective. Networked publics built barriers against one another, creating what Robert Putnam called cyberbalkanization: retreat into a comfortable niche among people just like oneself, views merely reinforcing views.11 Identity wars and mimetic conflict flared across filter bubbles that amplified outrage and tribal scapegoating as both MAGA and wokism built toxic online cultures. QAnon and a thousand other conspiracy theories propagated through Facebook groups and YouTube recommendations. Young men drifted into incel communities where loneliness hardened into ideology and livestreamed mass shootings were celebrated. Influencers built their empires on hatred—Hasan Piker framed Hamas’s October 7 massacre as anticolonial resistance while Nick Fuentes celebrated mass shooters as vanguards of race war and civilizational collapse.
Nor did this merely fragment culture—it exacted a massive psychic toll, as social contagion spread new forms of self-harm and mental illness. During the pandemic, teenage girls began presenting with tic-like behaviors—not Tourette’s syndrome, but something researchers termed “mass social media-induced illness,”12 spread by TikTok videos about Tourette’s rather than any actual disease. The scale was unprecedented, but the pattern was not unique. Eating disorders spread through thinspiration hashtags. Self-harm tutorials circulated on Instagram. The platforms that were supposed to bring us together instead spread desires, disorders, and identities through pure social contagion—and with them, violence and polarization. A generation that grew up inside this experiment—that watched it reshape their peers’ bodies, minds, and identities—is right to be skeptical of the next technological promise.
In 2010, network culture seemed to have a good chance of being understood as the successor to postmodernism. Bruce Sterling and I were engaged in a kind of dialogue about it online. He predicted that network culture would last “about a decade before something else comes along.”13 And he was right, as I acknowledged in my 2020 Year in Review. By then, network culture was exhausted, and with the Covidean break, it seemed time for something new. In 2023, I taught a course at the New Centre for Research & Practice to sketch the emerging era in broad strokes. It’s still early and hard to fathom, like trying to understand postmodernism in 1971 or network culture in 1998, but it’s clear that if postmodernism was underwritten by the explosion of mass media, and network culture by the Internet, social media, and the smartphone, then the current era is shaped by AI.
But if Gen Z, scarred by the effects of social media, has been reacting with deep fear and anxiety, Sterling epitomizes the other reaction: dismissal. In the most recent State of the World, for example, he derides AI-generated content as “desiccated bullshit that can’t even bother to lie.” He compares the vibe-coding atmosphere to an acid trip, mocking the professionals who utter “mindblown stuff” like “we may be solving all of software” and “I have godlike powers now.” For Sterling, AI can produce nothing but slop. Now Bruce has always had a healthy skepticism toward tech claims, but I can’t help thinking of Johannes Trithemius, the fifteenth-century abbot who wrote De Laude Scriptorum just as Gutenberg’s press was spreading across Europe—defending the scriptorium against a technology he could not see would remake the world.
There are even deeper, more existential fears, and I’ve spent the past year addressing them on my blog, in the process laying the foundation for a book on the topic: AI as plagiarism machine; AI as hallucination engine; AI as stochastic parrot, mindlessly repeating what it has ingested (Sterling’s critique); and AI as uncanny double, too close to us for comfort. As I explain, the discomfort arises not from the machine’s otherness but from its likeness: a mirror held up to processes we preferred to believe were uniquely ours.
It’s no accident that I published these essays on my blog. As far as my personal year in review goes, this was very much the year of the blog. I have no plans to ever publish in an academic journal again. Why would I? Who would read it? Why would I want to publish something paywalled, reinforcing the walled gardens of inequality that academia is so desperate to maintain—even as it proclaims itself the champion of open inquiry and democratized knowledge? Academia has become the realm of what Peter Sloterdijk called cynical reason: rehearsing the tropes of ideology critique while knowing the game is empty and playing it anyway. This revolts me.
But for almost ten years now, since the labs at Columbia’s architecture school were shut down, I have been content to write from the position of the outsider, something I reflected on in “On the Golden Age of Blogging”. That essay was prompted by a strange comment from Scott Alexander, who lamented on Dwarkesh Patel’s podcast that he had personally made a strategic error in not blogging during what he called the “golden age,” imagining that “the people from that era all founded news organizations or something.” The golden age he remembers is a fiction, as golden ages often are—and he gets the stakes entirely wrong. Evan Williams founded Blogger in 1999, sold it to Google, co-founded Twitter, then created Medium, which convinced hapless readers to pay to read slop long before AI slop was ever a thing. The early bloggers who sought professionalization found themselves absorbed into the worst of the worst, writing for BuzzFeed, peddling nostalgia listicles that rotted psyches.
There was, however, a golden age for me, and I miss it: the architecture blogging community circa 2007—Owen Hatherley, Geoff Manaugh, Enrique Ramirez, Fred Scharmen, Sam Jacob, Mimi Zeiger (whose Loud Paper was less a blog and more a zine, but a key part of the culture), and others. We inherited from zine culture an informal, conversational tone and the will to stand outside architectural spectacle. But ArchDaily and Dezeen commercialized the form, shifting from independent critique to marketing and product. Startup culture absorbed architectural talent.
Blogging was powerful precisely because we had no stakes in it—we owned and controlled our means of intellectual production. The golden age of blogging is not in the past; it is now. After years of proclaiming I would blog more, in 2025 I really did. I wrote over 83,700 words on varnelis.net and the Florilegium—essay-length pieces on landscape, native plants, AI and art, architecture, infrastructure, politics, and tourism. My only regret is that my presidency at the Native Plant Society of New Jersey consumes so much of my thinking about native plants that little remains for writing. But the time will come, and if nothing else, my investigation of the Japanese garden aesthetic should point the direction for my future writing on landscape.
I also continued to make AI art or, to be more precise, what I have called stochastic histories. A major project was a substantial reworking of The Lost Canals of Vilnius, a counterfactual history in which, after the Great Fire of 1610, Voivode Mikalojus Radvila Našlaitėlis rebuilt the city with Venetian-style canals, complete with gondoliers, water processions, and a hybrid “Vilnius Venetian” architecture. As research, I used Gemini to translate Radvila’s sixteenth-century Latin pilgrimage narrative. AI, like photography or film, is what you make of it. Film is perhaps the better analogy—anyone can make a video; making something worthwhile is another matter entirely. In December, I also completed East Coast/West Coast: After Bob and Nancy, a generative restaging of Nancy Holt and Robert Smithson’s 1969 video dialogue using two AI speakers.
There were other substantial essays, too. In “Oversaturation: On Tourism and the Image”, I finally put down on paper something I had wanted the Netlab to address while at Columbia, but that proved too dangerous for the school to support. Universities cannot critique the very systems of overproduction they depend upon for survival. Publish-or-perish and endless symposia nobody is interested in are the academic versions of overproduction, but more than that, any architecture school claiming global currency cannot afford to offend either the other institutions, like museums, that give it legitimacy, or, for that matter, the trustees that fund both. As I point out, tourism has always been mediated by imagery; take Piranesi’s vedute or the Claude Glass. Grand Tourists always had representations at hand to interpret their direct experience—but a new crisis point has been reached with both overtourism and the overproduction of images. Algorithmic logic now reorganizes cultural geography around “most Instagrammable spots,” making historical significance secondary to content potential. The Fushimi Inari shrine in Kyoto is a case in point—a 1,300-year-old shrine that Instagram made famous and that has now ceased to function as a religious site due to the influx of visitors. The Japanese have a term for this: kankō kōgai, tourism pollution. Tourism has become the paradigm of contemporary experience—the production of imagery without cultural meaning; everything feeds the same algorithmic mill. Even strategies of resistance get metabolized—slow travel becomes a hashtag, psychogeography becomes an Instagram guide.
The Bilbao effect, which was a major driver of oversaturation, was itself a product of globalization. Hans Ibelings coined “supermodernism” in 1998 to refer to the architectural expression of Marc Augé’s “non-places,” an architecture optimized for the perpetual circulation of bodies and capital. It was the architecture of network culture, of the Concorde and the Internet. Koolhaas diagnosed its endgame in his 2002 “Junkspace”—“Regurgitation is the new creativity”—and then, tellingly, stopped writing. Today, network culture is long gone; nationalism is on the rise. The Internet is a dark forest now,14 and the disconnected life is ascendant.15 The most exclusive resorts now advertise no Wi-Fi, no cell service, no addresses—only coordinates. Disconnection has become the ultimate luxury, sold back to the same people who built the infrastructure of connection. More cities are now alarmed by the effects of overtourism than are eager to attract tourists. In the US, new architectural proposals appeal to a retardataire aesthetic—Trump displaying models, in three sizes, of a triumphal arch inspired by Albert Speer and commemorating no triumph in particular (“I happen to think the large looks the best”), a four-hundred-million-dollar ballroom modeled on Mar-a-Lago, an executive order mandating classical architecture for federal buildings that Stephen Miller explicitly framed as culture war.
Yet both Bilbao and MAGA are spectacle, architecture-as-branding. And the Bilbao effect is imploding. No city believes anymore that a signature building by a starchitect will transform its fortunes. The parametricists have nothing left to say. Parametric design promised formal liberation—responsive, site-specific, computationally derived—but what it delivered was the most efficient, ugliest box. If the promise was the blob, the reality is the “5-over-1”: wood-frame residential floors stacked on a concrete podium with ground-floor retail, wrapped in a pastiche of brick veneer, fiber cement panels, and that obligatory conical turret element meant to signal “we thought about this corner.” As for AI-generated architecture, it is merely boring—giant sequoias hollowed out as apartment buildings, white concrete towers with impossible cantilevers, and lush vegetation sprouting from every surface—the same utopian fantasy rendered a thousand times over. These are renders of renders: AI trained on architectural visualization produces visualizations utterly disconnected from any tectonic reality. A new generation may emerge in response to new needs, but for now, the discipline has lost its cultural purchase. Architecture, for us, is a thing of the past.
The art world, too, has slowed. Museums are putting on fewer shows, shifting from aggressive schedules to longer, more deliberate exhibitions—or simply cutting programming as budgets tighten.16 The frantic pace of the Biennale circuit has exhausted dealers and collectors alike; smaller fairs are folding, and even the major ones feel like obligations rather than events. Galleries that survived the pandemic are now closing quietly, without the drama of a market crash—just a slow bleed of foot traffic, sales, and cultural attention. There is no new movement, no emergent critical framework, no sense of direction. The market churns on—auction prices for blue-chip artists remain high, collectors still speculate, art advisors still advise—but the sense of cultural mission has dissipated. What remains is commerce without conviction, a field that has forgotten why it exists beyond the perpetuation of its own economy. The institutions that trained artists for this field are collapsing alongside it.
As enrollment dwindles, design schools are collapsing—not merely contracting, but ceasing to exist. Most recently, the California College of the Arts, the last remaining independent art and design school in the Bay Area, announced in January 2026 that it would close after the 2026–27 academic year.17 It follows a grim procession: the San Francisco Art Institute (2020), Mills College (2022), the Pennsylvania Academy of the Fine Arts (2023), and Woodbury University’s acquisition by Redlands and subsequent adjunctification—a fate that has methodically undone so many schools as faculty become contingent labor and institutions turn into hollow administrative structures run by well-paid, cost-optimizing consultants.
There is personal resonance for me in this. Simon’s Rock College of Bard, which shuttered its Great Barrington campus in 2025, was where I studied for my first two years before transferring to Cornell—a pioneer of early college education that offered a radical pedagogical experiment in what learning could be beyond conventional schooling. I arrived there straight from high school, as did my good friend and colleague Ed Keller; clearly, something interesting was in the water back then. Simon’s Rock made the development of young minds its central mission rather than an incidental focus of brand management or endowment growth, and its alumni list is impressive for such a small school. It has an afterlife at Bard, but it’s an echo at best.
The difference between these institutional deaths and simple market failure is this: they are not being replaced. When a retail business fails, another may open elsewhere. When a school closes, there is no succession. The market offers no alternative. Instead, what remains are the corporate university satellites—for-profit programs nested within larger institutions (like Woodbury’s absorption into Redlands), stripped of autonomy, their faculty reduced to a precariat, their curricula bent toward what can be measured and marketed. The art schools that survive do so by transforming into something else: luxury finishing schools for wealthy families or research appendages to larger universities, where “design thinking” becomes another management consultant’s tool. The pedagogical mission—to create conditions where students might develop serious aesthetic judgment, where they might encounter genuine problems and be forced to think through them—is not merely challenged but impossible. The closure of these schools does not signal a failure of art education; it signals that the very idea of art education as something valuable in itself has been liquidated.
This hollowing out of cultural institutions is not incidental to the political moment—it is one of its hallmarks. Politically, most people have checked out. This is not 2017, when each provocation demanded a response; the outrage cycle has given way to numbness. In “National Populism as a Transitional Mode of Regulation”, I argued that Trump, Orbán, Meloni, and their ilk represent not a return to fascism but something new: the authoritarian management of declining expectations. National Populism correctly identifies that neoliberalism’s promise of shared prosperity has failed, but it channels legitimate grievances toward scapegoats rather than addressing the technological displacement actually causing them. This is its tragic irony: the National Populist base—workers made obsolete by neoliberalism and unable to participate in AI Capitalism—finds its legitimate anger directed into a movement that accelerates the very forces rendering them superfluous. Their value to capital lies in political disruption rather than economic production; they are consumers and voters, but no longer needed as workers. National Populist leaders offer psychological compensation—dignity, recognition, transgressive identity politics—rather than material improvement. The apocalyptic tenor of populist culture, its end-times thinking and conspiracy theories, provides a framework for populations sensing their own economic redundancy.
The alliance between tech billionaires and populist leaders is unstable. AI Capitalism requires borderless computation and global talent flows; nationalist protectionism contradicts these at every turn. Musk, Thiel, and Andreessen have aligned with the movement to dismantle the regulatory state, not because they share its vision but because populism serves as a useful battering ram against institutional constraints. Once those barriers fall, the movement and its human-centric concerns can be discarded. National Populism, as I conclude, is not the future—it is a political interlude, a transitional mode that will not survive contact with the economic forces it has helped unleash.
If National Populism is transitional, is there a positive vision that can replace it? In “After the Infrastructural City”, I responded to Ezra Klein and Derek Thompson’s book Abundance, perhaps the most influential book of 2025, which argues that America’s inability to build is a political choice, not a technical constraint. Their solution: streamline regulation, invest boldly, build more. It’s a compelling vision—and a necessary corrective to decades of paralysis. But Abundance shares a curious blindspot with Muskian pronatalism: both assume we need more people. Musk preaches that declining birthrates spell civilizational collapse; Klein and Thompson build their vision on populations that will mysteriously arrive to fill what’s built, perhaps by immigration. Neither accounts for the possibility that AI changes the equation entirely—that a smaller population, augmented by intelligent systems, might not be a crisis at all. Populations are already shrinking across much of the developed world. What I call “actually-existing degrowth”—not the voluntary eco-leftist kind, but the unplanned demographic contraction now underway in Japan, Korea, and much of Europe—is coming for the United States too. Declining birth rates, aging populations, and regional depopulation: these are not future scenarios but present facts.
This doesn’t invalidate the Abundance agenda; it redefines it. Abundance cannot mean building more for populations that will not arrive. It must mean building better, adaptive, intelligent infrastructure for smaller, older societies. AI, rather than merely destroying jobs, can help navigate this transition: smart grids, autonomous transit, predictive healthcare. The opportunity is real. Managed shrinkage, done well, can mean more livable cities, restored ecosystems, higher quality of life. The question is whether political leaders can articulate a vision of flourishing within limits—or whether nostalgia for growth will leave us building for a future that never comes.
Against the exhaustion of institutions, against the hollowing out of architecture and art, against the closure of the schools that trained people to imagine, the blog remains. It may not be much, but it is one independent voice outside the collapsing structures around me. I wrote over 83,000 words this year. I made art. I thought through problems that matter to me with the help of AI, which provided me with tools I could only have dreamt of a year ago. Today, I uploaded hundreds of thousands of words from my essays to a directory in Obsidian so that Claude could draw connections between them (see here for how one can set this up).
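For the curious, a minimal sketch of one way to wire this up, assuming Claude Desktop and the Model Context Protocol filesystem server (the server name and vault path below are placeholders, not my actual setup): add an entry to claude_desktop_config.json pointing the server at the Obsidian vault, and Claude can read the essays directly.

```json
{
  "mcpServers": {
    "obsidian-vault": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/PATH/TO/YOUR/OBSIDIAN/VAULT"
      ]
    }
  }
}
```

Claude Code users can skip the configuration entirely and simply launch the agent from inside the vault directory, then ask it to trace connections across the files.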
The future is already here—it just isn’t evenly distributed. Some are afraid or are still pretending AI isn’t happening. Phase transitions are uncomfortable. They are also where the interesting work gets done. One makes of one’s time what one makes.
1. William Gibson, quoted in Scott Rosenberg, “Virtual Reality Check: Digital Daydreams, Cyberspace Nightmares,” San Francisco Examiner, April 19, 1992, Style section, C1. This is the earliest verified print citation, unearthed by Fred Shapiro, editor of the Yale Book of Quotations.
2. Fernand Braudel, The Mediterranean and the Mediterranean World in the Age of Philip II, trans. Siân Reynolds (New York: Harper & Row, 1972), 21.
3. Braudel, The Mediterranean, 901.
4. Fernand Braudel, “Personal Testimony,” Journal of Modern History 44, no. 4 (December 1972): 448–67.
5. Paule Braudel, “Les origines intellectuelles de Fernand Braudel: un témoignage,” Annales: Histoire, Sciences Sociales 47, no. 1 (1992): 237–44.
6. Howard Caygill, “Braudel’s Prison Notebooks,” History Workshop Journal 57 (Spring 2004): 151–60.
7. Vannevar Bush, “As We May Think,” The Atlantic Monthly 176, no. 1 (July 1945): 101–8.
8. Civiqs, “Do you think that the increasing use of artificial intelligence, or AI, is a good thing or a bad thing?,” January 2026, https://civiqs.com/results/ai_good_or_bad.
9. The concept of mentalités emerged from studies of phenomena like the witch trials, where beliefs and fears spread through communities in ways that could not be reduced to individual irrationality. For an overview of mentalités as a historiographical concept, see Jacques Le Goff, “Mentalities: A History of Ambiguities,” in Constructing the Past: Essays in Historical Methodology, ed. Jacques Le Goff and Pierre Nora (Cambridge: Cambridge University Press, 1985), 166–180.
10. Kazys Varnelis, “The Rise of Network Culture,” in Networked Publics (Cambridge: MIT Press, 2008), 145–160.
11. Robert Putnam, “The Other Pin Drops,” Inc., May 16, 2000.
12. Kirsten R. Müller-Vahl et al., “Stop That! It’s Not Tourette’s but a New Type of Mass Sociogenic Illness,” Brain 145, no. 2 (2022): 476–480, https://pubmed.ncbi.nlm.nih.gov/34424292/.
13. Bruce Sterling, “Atemporality for the Creative Artist,” keynote address, Transmediale 10, Berlin, February 6, 2010.
14. Yancey Strickler, “The Dark Forest Theory of the Internet,” 2019, https://www.ystrickler.com/the-dark-forest-theory-of-the-internet/. See also The Dark Forest Anthology of the Internet (Metalabel, 2024).
15. “Trend: Not Just Digital Detox, But Analog Travel,” Global Wellness Summit, 2025, https://www.globalwellnesssummit.com/blog/trend-not-just-digital-detox-but-analog-travel/.
16. “The Big Slowdown: Why Museums and Galleries Are Putting on Fewer Shows,” The Art Newspaper, March 10, 2025, https://www.theartnewspaper.com/2025/03/10/the-big-slowdown-why-museums-and-galleries-are-putting-on-fewer-shows.
17. “‘Nowhere Left to Go’: As California College of the Arts Closes, So Does a Pathway for Bay Area Artists,” KQED, January 13, 2026, https://www.kqed.org/news/12070453/nowhere-left-to-go-as-california-college-of-the-arts-closes-so-does-a-pathway-for-bay-area-artists.
