2020 in Review

Grayness in Maine
Grayness on Mount Desert Island, 2020.

According to conventional chronological schemas, 2020—not 2019—is the last year of the 2010s.* This is convenient since, as I pointed out in last year’s premature review of the last decade, the 2010s were “the decade of shit” and 2020 is a stinking pile of shit. The worst decade since World War II ended with the worst year since 1945.

My “year in review” posts are usually almost as late as my taxes, and when I finished last year’s post on February 12, we were all well aware that COVID was out there. Now, no question that I missed the severity of the pandemic back then, but I was on the money about its psychic effects. For all of the horror of COVID, it isn’t horrible enough. COVID is banal. Instead of making us bleed out through all of our orifices as Ebola does, COVID is “a bad case of the flu” that leaves people dead or with debilitating cardiovascular and neurological ailments. But how different is my diagnosis, really, from what happened?

Now sure, this year [2020] we’ve already had firestorms the size of Austria ravaging Australia, a rain of rockets in Baghdad, Ukrainian jetliners getting shot out of the sky, a deadly pandemic in China caused by people eating creatures that they really shouldn’t, and the failure of the Senate to uphold the rule of law, but the banality of it all is crushing. While the Dark Mountain set drinks wine around a campfire, gets henna tattoos, and sings along to songs about the end of nature, for the rest of us, it’s just an exhausting, daily slog through the unrelentingly alarming headlines.

COVID brought us yet more crushing banality. The Idiot Tyrant is gone, but we are trying his impeachment yet again. Everything changes, but nothing changes. We were all the Dark Mountain set this year, sitting around our campfires, singing songs about the End. It was another atemporal slog, one day bleeding into another, every day a Sunday in a country where everything is closed on Sundays and there is nothing to do, every day stranger and more disconnected than the last, something captured in comedienne Julie Nolke’s series of videos entitled “Explaining the Pandemic to My Past Self.”

Amidst the disconnection, the Jackpot—William Gibson’s term for a slow-motion apocalypse—cranked up a couple of notches. Just surviving the year was an accomplishment. The balance of life has been thoroughly disrupted and that disruption isn’t going away any time soon. It’s not just COVID: we now feel certain that there will be more pandemics, more massive wildfires, and more superstorms in our future. The Earth isn’t dying (sorry, climate doomers), but there will be huge losses of species worldwide, human population decline is well underway in advanced societies (the US is finally on the bandwagon here), and massive deaths will take place across the planet until the population comes back to a sustainable level decades from now.

But the premise of the Jackpot is that it isn’t a final apocalypse: there will be another side. In his Twitter feed (@GreatDismal), even Gibson focuses on the horrific and unjust nature of the Jackpot, but there will be winners, selected on the basis of wealth and sheer dumb luck. What might this say about the US election and the fact that 46% of Americans voted for a cretin? Now, there is nothing particularly new about melding Tourette’s and dementia into a public speaking style; there are plenty of lunatics sitting on their porches screaming obscenities at their lawn ornaments. Everybody knows that Uncle Scam’s persona as a billionaire—or rather the King of Debt (his own term!)—is an act. The man with the golden toilet is not a successful businessman. He is weak, a loser who can’t stay married or stay out of bankruptcy court. Four years of misrule ended in abject failure: defeat in both electoral and popular votes, being banned from social media and, with his businesses failing, being forced out of office in shame to face an unprecedented second impeachment, an array of civil litigation as well as criminal indictments for fraud, tax evasion, incitement to riot, and rape. But this—not a misguided notion of him as a success—is the real point of his appeal. The short-fingered vulgarian is a life-long loser, a reverse Midas whose every touch turns gold to lead. In the face of the Gibsonian Jackpot, his appeal was as a stupid version of Homer Simpson, grabbing whatever scraps he could and, when that failed, LARPing as President, destabilizing society, and just blowing everything up.

LARPing was big in 2020, which saw the attempted kidnapping of Michigan Governor Gretchen Whitmer by wingnut idiots, various insane protests by COVID deniers, the attempted coup of the Capitol Insurrection, and the riots that developed after the Black Lives Matter protests. BLM was the standout among these, not only because it was a good, just cause, but also because the majority of the protests themselves were peaceful—such as the one in our town of Montclair, New Jersey. None of that was LARPing, but the riots that accompanied it were. For the most part, this was less people with genuine grievances and more Proud Boys, Boogaloos, anarchists, and grifters who came in to loot and burn whatever they could. Although there were kooky moments on the Left like the Capitol Hill Autonomous Zone, Antifa, for however much it exists, didn’t do much, certainly proving to be far less trouble than white supremacist-infiltrated police forces in paramilitary gear. Still, the widely-vaunted second Civil War never came about and the arteriosclerotic LARPers on the Right limped off the field in defeat after they got a spanking at the January putsch.

A number of observers at both the Capitol Insurrection and CHAZ—including some of the idiots who took part in them—noted that these events felt much like a game, specifically an Alternate Reality Game (ARG). In a typical ARG, players look for clues both online—think of the QAnon drops, the Trumpentweets, or the disinformation dished out by the skells at 4chan, 8chan, and so on—as well as out in the world. Jon Lebkowsky, in a post at the Well’s State of the World, and Clive Thompson, over at Wired, compare QAnon to an ARG. Indeed, gaming is taking the place of religion (whichever grifter figures out how to meld this with Jesus and his pet dinosaurs will get very rich indeed), with the false promise that playing the game and winning will deliver one to the other side of the Jackpot. Somewhere, I read that when asked what he would do differently if he had made Blade Runner a decade later, Ridley Scott replied that he would be able to skip the elaborate sets and just point the camera down the streets of 1990s Los Angeles. The same could be said for the Hunger Games today.

But not everything was LARPing. If Cheeto Jesus is an icon for LARPing losers, Biden was elected on the premise of staving off the Jackpot by returning adults to the White House. This is not a bad thing; we might as well try. Still, from the perspective of Jackpot culture, the most interesting political development of the year was the candidacy of Andrew Yang, whose cheery advocacy of Universal Basic Income (aka the Freedom Dividend) masked the dark, Jackpot-like nature of his predictions. Let’s quote Yang’s campaign site on this: “In the next 12 years, 1 out of 3 American workers are at risk of losing their jobs to new technologies—and unlike with previous waves of automation, this time new jobs will not appear quickly enough in large enough numbers to make up for it.” No matter how friendly Yang’s delivery, there is a grim realism to his politics, an acceptance that things will never be better for a massive sector of the population. Certainly some individuals will find ways to use their $1,000 a month Freedom Dividend as a subsidy to do something new and amazing, but 95% will not. Rather, they will form a new and permanent underclass as they fade into extinction. Again, the point of Yang’s candidacy isn’t the cheerleading for math and STEM, it’s the frank acknowledgement that the Jackpot is already here.

On the other hand, toward the end of the year, Tyler Cowen suggested that we might be nearing the end of the Great Stagnation (he is, of course, the author of an influential pamphlet on the topic) and you can find a good summary of the thinking, pro and con, by Cowen’s student Eli Dourado here. In this view, advances such as the mRNA vaccine, the spread of electric, somewhat self-driving vehicles, the pandemic-induced rise of remote work, and a huge drop in the cost of spaceflight are changing things radically and could lead to a real rise in Total Factor Productivity from the low level it has been stuck at since 2005. Is this a sign of the end of the Jackpot? Unlikely. That won’t come until a series of more massive technological breakthroughs, probably (but not necessarily) involving advances in health (the end of cancer, heart disease, and dementia), the reversal of climate change, working nanotechnology, and artificial general intelligence. But still, there are signs that early inflection points are at hand.

Personally, we experienced one of these inflection points when we replaced our aging (and aged) BMWs with Teslas. I wound up getting a used Tesla Model S last January and then immediately turned around and ordered a brand new Model Y that we received in June. No more trips to the gas station, and while “Full Self-Driving” is both expensive and nowhere near fully self-driving, it is a big change. Longer road trips—which under the pandemic have been to nurseries on either side of the Pennsylvania border to buy native plants—have become much easier, even if I still have to keep my hands on the wheel and fiddle with it constantly to prevent self-driving from disengaging. But harping on too much about the incomplete nature of self-driving is poor sport: in the last year, Tesla added stop light recognition to self-driving and a new update in beta promises to make city streets fully navigable. Less than a decade ago, self-driving was only a theoretical project. Now I use it for 90% of my highway driving. That’s a sizable revolution right there. Also, the all-electric and connected nature of these cars makes getting takeout and sitting in climate-controlled comfort in my vehicle when on the road a delight. Electric vehicles were a big success this year in our neighborhood, which is a bellwether for the adoption of future technology (when I saw iPhones replace Blackberries on the bus and train into the city, I bought a bit of Apple stock and made a small fortune), and Teslas have replaced BMWs as the most common vehicle in driveways.

Back to the pandemic, which accelerated a sizable shift in habitation patterns. Throughout the summer, there was a lot of nonsense from neoliberal journalists and urban boosters about how cities are going to come back booming, but with more bike lanes, wider sidewalks, less traffic, and more awesome tactical urbanist projects to appeal to millennials. Lately, however, those voices have fallen silent and with good reason. In this suburb the commuter train platforms are still bare in the mornings and the bus into the city, once packed to standing room only levels every evening, hasn’t run in five months. A friend who works in commercial real estate says that occupancy in New York City offices is at 15% of pre-pandemic levels. Business air travel is still off a cliff. Remote work isn’t ideal for everyone and every job, but neither was going into the office. For sure, the dystopian open offices, co-working spaces, and offices as “fun” zones are done and finished. People are renovating their houses, or upsizing, to better live in a post-pandemic world of remote work. Another friend who works for a large ad agency told me that they did not renew their lease for office space and do not plan to ever go back to in person work, at least for the vast majority of the staff. When employees gain over two hours a day from not commuting and corporations save vast fortunes on rent, remote work seems a lot more appealing. Retail sales here and in the surrounding towns have gone through the roof, just as they have in many suburbs.

But it isn’t just suburbia that has prospered at the expense of the city; exurbia has returned too. Way back in 1955, Auguste Comte Spectorsky identified a growing American cultural class that he dubbed “the exurbanites,” made up of “symbol manipulators” such as advertisers, musicians, artists, and other members of what we today call the creative class. Spectorsky observed that many of these individuals eventually tended to drift back to the city. This time may be different. After two decades in the city, the creative class is turning to places outside the city with attractive older houses and midcentury modern properties, walkable neighborhoods (virtually all of Montclair, for example, has sidewalks), good schools (which generally mean high property taxes but are an indicator of a smarter, engaged populace), amenities like parks and places to hike, decent bandwidth, as well as independent restaurants, shops, and cultural attractions. There will always be variations in taste: some people really do want to eat at Cheesecake Factory and live in a Toll Brothers McMansion, but these will appeal to relatively few of the people fleeing cities at this point. Thus, the Hudson Valley—full of older, more interesting architecture, great natural resources and quirky towns—is booming. I predict some reversion toward the mean after the pandemic ends and some of the people who fled to the country realize they aren’t suited to a place without SoulCycle, but this will be only a partial and temporary reversion.

I predict that even after the pandemic ends, there will be a greater interest in self-sufficiency among young people who move to suburbia and exurbia. Manicured lawns will be less important than vegetable gardens. Homesteading, permaculture, and a drive back to the land not seen since the 1960s are under way. It would be a very good thing if the next generation were more in touch with their land and less prone to hiring “landscapers” who treat properties as sites subject to industrial interventions such as chemical fertilizer for lawns and a phalanx of gas-powered lawn mowers and leaf blowers to remove any stray biological matter.

As far as cities go, the pandemic is triggering a necessary contraction. The massive annihilation of real estate value it has caused should go a long way to undo the foolish notion that urban real estate is always a great investment. It’s not: just ask anyone who bought a house in Detroit in 1965. Real estate in first and second tier global cities has become wildly expensive, disconnected from the underlying fundamentals. When individuals are paying rents that absorb over 30% of their salaries to investor-owners who are not covering their mortgages with those rents, something is very wrong. This broken system has been able to function due to the perceived hedonic value of restaurants, bars, and cultural events, but these things too have been failing over recent years. Long prior to the pandemic, the cost of rent decimated independently-owned restaurants and retailers, with the latter also hurt by on-line shopping. The golden age of dining out (if it really was the golden age… I would say that better food could have been had in other, less copycat eras) was already declared over in 2019. “High-rent blight,” in which entire streets’ worth of storefronts sit empty due to ludicrous rents, has been common for some time now. Tourists made up more and more of the street crowds while loss-leader flagship stores for chains like Nike and Victoria’s Secret replaced local businesses. With the hedonic argument for staying in the city rapidly disappearing, it was only a matter of time before individuals began departing and, in New York, population had begun to drop by 2018 (see more on all of this in Kevin Baker’s piece for the Atlantic, “Affluence Killed New York, Not the Pandemic”). Perversely, this is a good thing as it will likely lead to a bust in commercial real estate prices and a decline in unoccupied or AirBNB’d apartments, thus making global cities like New York places that have potential again. Moreover, many second tier cities such as St. Louis, Kansas City, and Cleveland are experiencing new growth as individuals able to work remotely are looking for places that are less expensive—and thus have more potential—than New York or San Francisco.

These shifts are huge and for the better. As I tried to tell my colleagues at the university, there is no housing crisis, at least not in the US and Europe; there is only an appearance of one because of the uneven distribution of housing: a glut in some areas, a shortfall in others. The pandemic has likely undone this a bit. Of course, places that are too politically Red, too full of chains, too full of copycat McMansions are unlikely to come back anytime soon, if ever. The Jackpot continues.

Still, one reason I’m foreseeing a perversely rosy future for the urban (and suburban and exurban) environment is the Biden administration’s interest in infrastructure. Back in 2008, I shocked design critics when I stated that there would be no progress in infrastructure for the foreseeable future. “But, Obama,” they complained. “But, Obama,” I clapped back, “just appointed Larry Summers as his chief economic advisor and Summers will bail out the banks, not fund infrastructure.” I expect the opposite from Biden, who has adopted a “nothing left to lose” position as a purportedly one-term President, is a devotee of train travel, and is eager to make great progress on climate change. Appointing Pete Buttigieg, one of his two smartest opponents in the primary (the other being Andrew Yang, of course), as Secretary of Transportation is a key move. This will be Buttigieg’s opportunity to prove himself on the national stage and he will fight hard to do that, just as Biden expects. Expect more electrification across the board and, I suspect, more advances with self-driving vehicles. Although certain measures—such as, in the New York City area alone, the Gateway Tunnel between New Jersey and New York (now delayed over a decade thanks to Chris Christie’s and Donald Trump’s vindictiveness against commuter communities that would not vote for them) and the reconstruction of the Port Authority Bus Terminal—will help cities, again, I predict more emphasis on decentralization and activity outside the city.

All this may have salutary cultural implications. The global city is played out. Little of interest happens in New York, San Francisco, London, Paris, or Barcelona. These cities are too expensive for the sort of experimentation that made them great cultural centers, and the diffusive nature of the Internet, capitalism, and overtourism have made them all the same. Residents of cities that have been victims of overtourism have seen this as an opportunity to reset, while the physical isolation of cities is going to increase reliance on local institutions. With some luck, all this leads to a new underground, with greater difference creating greater diversity and potential. Of fashion, Bruce Sterling writes, “Fashion will re-appear, and some new style will dominate the 2020s, but the longer it takes to emerge from its morgue-like shadow, the more radically different it will look.” The same could be true of all culture. Globalization was an incredibly powerful force but has played itself out. I don’t agree with the protectionist instincts of the Trumpenproles, but today culture’s hope is to thrive on the basis of the difference between places and cultures, not on greater sameness. Architecture has been very slow to react to all of this, in part because many intelligent young people have drifted into other fields, like startups, but I am optimistic that we might soon get past the ubiquitous white-painted brick walls and wood common tables (the architecture of the least effort possible, to match fashion and food driven by the least effort possible), the tired old Bilbao effect, and quirky development pseudo-modernism.

So much optimism on my part! Even I am shocked that I am so positive. But why not? The end to this exhausted first phase of network culture is overdue. Time for a new decade, at last.

*The reason for this is that there is no Year Zero: 31 December 1 BC is followed by 1 January AD 1.
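
Spelled out under the convention the footnote describes (decades counted from AD 1, with no year zero), the arithmetic runs as follows:

```latex
% With no year zero, the first decade is AD 1-10, so the n-th decade
% spans the years 10(n-1)+1 through 10n.
\[
\text{decade}_n = \{\, 10(n-1)+1,\ \dots,\ 10n \,\}
\qquad\Rightarrow\qquad
\text{decade}_{202} = \{\, 2011,\ \dots,\ 2020 \,\}
\]
```

Hence 2020, not 2019, closes out the decade.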

On Natural Philosophies

Recently, a friend mentioned that she believed “natural philosophies” would become more popular in this decade. I responded by pointing out that AUDC’s Blue Monday is subtitled Stories of Absurd Realities and Natural Philosophies. Here is an excerpt from the introduction that gets to the point of what a natural philosophy is and why we employed it.

We are certainly attracted by Deleuze and Guattari’s organizational logics and Hardt and Negri’s tales of Empire as well as Baudrillard’s theories of totalized consumption of the sign and the system of objects, Žižek’s fictionalization of the real, and Jameson’s dissection of late capitalism. These are fascinating explanatory frameworks, but, scandalously, they don’t add up. To weave a coherent whole out of them would be sheer madness. So how to proceed?

During our research into the evolution of art, science, and philosophy we found that these fields were once much more intimately related than they had been in the last century. Prior to the Enlightenment and the development of the scientific method, science was dominated by natural philosophy, a method of studying nature and the physical universe through observation rather than through experimentation. Virtually all contemporary forms of science developed out of natural philosophy, but unlike more modern scientists, natural philosophers like Galileo felt no need to test their ideas in a practical way. On the contrary, they observed the world to derive philosophical conclusions. Taken together, these didn’t necessarily add up. But the lack of metanarrative for natural philosophy is not an obstacle for us, instead it is a strength.

Natural philosophy flourished from the 12th century to the 17th and with it so did cabinets of curiosities, sometimes entire rooms, sometimes quite literally elaborate cabinets, filled with strange and wondrous things. These first museums collected seemingly disparate objects of fascination in a specific architectural setting, assigning to each item a place in a larger network of meaning created by the room as a whole. In the cabinet each object would be a macrocosm of the larger world, illustrating the wonder of its divine artifice. Together however, their affinities would become apparent and a syncretic vision of the unity of all things would emerge, as the words Athanasius Kircher inscribed on the ceiling of his museum suggested: “Whosoever perceives the chain that binds the world below to the world above will know the mysteries of nature and achieve miracles.” For the natural philosopher, the cabinet of curiosities possessed a reflexive quality: it was both an exhibition and a source of wonder, a system the natural philosopher built to instruct others but also to coax himself into further thought.[i]

This book, like our web site or an AUDC installation, is a cabinet of curiosities, consisting of a series of conditions that AUDC observes in order to speculate on them in the manner of natural philosophy, extrapolating not theories that could then be applied to architecture but rather philosophies to explain the world. The result is neither the relativist pluralism nor a single monist philosophy, but rather a set of multiple philosophies that almost add up, but being situationally derived, don’t quite.

[i] Patrick Mauries, Cabinets of Curiosities (New York: Thames and Hudson, 2002), 25-26. Some further sources on cabinets of curiosities are Lorraine Daston and Katherine Park, Wonders and the Order of Nature (Cambridge: MIT Press, 1998) and Giorgio Agamben, “The Cabinet of Wonder” in The Man Without Content (Stanford: Stanford University Press, 1999), 28-39.

On Art and Social Distance

What of art in this plague year? With our isolation compounded by unprecedented political polarization, is there any hope of exploring the commonality of shared experience through art? As many of us head into lockdown again, we are all too aware that the implementation of “social distancing” has produced a transformation in our society unprecedented within our lifetimes; how might art reflect on such processes?

Consider the term “social distancing.” This peculiar phrase is currently being used to refer to physical separation between individuals in order to reduce exposure to viruses, but prior to this year it had a long history in sociology, referring to social separation among individuals and groups. Like many sociological concepts, social distancing originates with the foundational writings of sociologist Georg Simmel. In his 1903 “The Metropolis and Mental Life,” Simmel describes the growing distance between individuals as a consequence of modernization and the spatial condition of the city:

This mental attitude of metropolitans toward one another we may designate, from a formal point of view, as reserve. If so many inner reactions were responses to the continuous external contacts with innumerable people as are those in the small town, where one knows almost everybody one meets and where one has a positive relation to almost everyone, one would be completely atomized internally and come to an unimaginable psychic state.

Social distance, then, develops when physical distance is not possible and serves as a means of protecting the psyche. Nor is it experienced alone. On the contrary, social distance is frequently experienced communally: modern individuals flock with others of their kind and, in doing so, experience distance from those outside the group, those that they perceive as unlike themselves. For Simmel, the stranger is both near and far from us: close enough to provoke anxiety yet isolated and therefore unknown. Hence, this sociological explanation would lead us to understand racism as a product of social distance. Racism, it follows, is not so much a prejudiced encounter with individuals at a distant remove, but rather a violent misperception of individuals in close proximity. The increasing polarization of American politics between the “woke folx” and the “deplorables” is further evidence of this, with both groups existing as a defense against a dangerous Other while providing a mechanism for reinforcing their members’ shattered identities. With the dominance of contemporary life by social media, the wounded self becomes a reposting machine for social media outrages, a means for the self to affirm its identity and existence through repetition, much as in a liturgical response. But being based entirely on the external threat of an Othered group, the result is a malfunction of social distance as psychic defense. Not only does political discourse grind to a halt, the self fails to develop as an individual. Increasing reliance on social media for human connection leads us toward a feedback loop of dopaminergic rewards for positive social stimuli and, like a chimpanzee in a lab cage punching buttons for cocaine-laced biscuits, we are led closer toward the “unimaginable psychic state” of the completely atomized dividual.

This sociological concept of social distance initially appears quite different from pandemic quarantine. And yet, it’s no coincidence that political and racial tensions have risen to a fifty-year high in the United States: the social distancing of quarantine and the social distancing of the modern city share similarities. The chance encounters of the street, the park, and the pub have largely ended. Endless Zoom meetings with one’s co-workers or classmates reinforce positions in class structures. Reading the news about groups that egregiously flout restrictions angers us and sets us against them. Our isolation leads us to become more addicted to social media and, as social media algorithms highlight our posts to those people who “liked” our previous posts, we condition ourselves to the rush of “likes,” of feedback from those people who already give us feedback—creating a false sense of belonging and an experience of growing distance from those unlike ourselves. So, then, back to the original question I posed: is there any hope of exploring the commonality of shared experience through art?

Early telematic art offers a direction for us. Forty years ago, Kit Galloway and Sherrie Rabinowitz, working as the Mobile Image group, created a series of projects entitled “Aesthetic Research in Telecommunications,” both anticipating and surpassing the videoconferencing that has come to be identified with life under quarantine. In Satellite Arts: A Space With No Boundaries (1977), they connected dancers located three thousand miles apart via live satellite links. Although at times the dancers interacted on split screens, as we might see friends interacting on Zoom or Facetime, Galloway and Rabinowitz also composited them together on the screen visible to the audience, thereby creating a third, virtual space. In Hole in Space (1980), they created wormholes to bring together people from opposite coasts. Unlike our screen-based videoconferencing, Hole in Space employed life-size screens displayed in public spaces (Los Angeles’s Century City Shopping Center and New York’s Lincoln Center respectively). Deliberately unadvertised, Hole in Space was, as they put it, “a social situation with no rules,” a new condition that, as a documentary of the work shows, was profoundly moving to the people who participated. In a third project, the Electronic Cafe of 1984, they connected five restaurants—in Korean, Hispanic, African-American, and beach communities—with video equipment and drawing tablets so that individuals in one venue (including artists-in-residence) could exchange ideas and images with unknown individuals in another. Throughout, Galloway and Rabinowitz approached electronic technology as a means of overcoming both physical distance and the social distance between individuals.

It’s possible to imagine what a new telematic art might look like today, outside of the confines of conventional videoconferencing. For example, already in 2004, AUDC (myself and Robert Sumrell) proposed creating guerrilla versions of Hole in Space, cartable in a backpack, that could be deployed at random throughout a network of cities.

At the time we wrote: 

Windows on the World is a formulary for a new urbanism that alleviates boredom with the city and encourages communication in public, rather than private settings. It facilitates open, spontaneous, and democratic exchanges between groups while requiring no special skills to operate. Participants share both their differences and similarities through direct interaction, replacing the myth of global hegemony and projected stereotypes with personal experience.

Windows on the World remaps our cities to create a telematic dérive. Guy Debord’s map “the Naked City” is now a map of the world. Each portal becomes what the Situationists called a plaque tournante, a center, a place of exchange, a site where ambiance dominates and planners’ power to control our lives is disrupted. People can wander from portal to portal within their city, experiencing and constructing different situations. Windows on the World changes the experience people have not only of the world, but of their city.

Like the Situationist dérive, Windows on the World operates outside of commerce and planning. There is no advertisement. The project is at its strongest when it is by chance. Some portals are temporary, even hidden. Others are improbable, or not easy to access. In a back alley in Prague, shaded by a Northern exposure, is a portal to a zoo in São Paulo. From a dangerous street in the Bronx, a door opens onto the Champs-Elysees. Another portal, in Zurich, looks out onto a busy railroad yard in Rotterdam.

Sixteen years have passed and the widespread adoption of social media during that time has led us to confuse the public and the private. We invite our friends and even strangers into our living rooms and our bedrooms. Formality has fallen away as suit and dress have given way to sweatpants and leggings. But there is no comfort to these forms of sociality or dress. Instead, we understand ourselves to always be on display, to have to perform at all times to receive our dopamine rush both for social interaction and for work, which is ever more subject to bureaucratic projects like “agile frameworks” and “total quality management” offering micro-rewards and micro-punishments for meaningless tasks completed. With this new search for (over) stimulus, there is every chance that a project like Windows on the World will fail as individuals provoke each other like chimps in separate cages at a zoo (or as humans do on Chatroulette).  But thinking through how social connections and an appeal to a universal human connection might be made today—however obliquely—is an imperative task for art and technology.

Work in this vein certainly does not always need to provide an affirmative alternative space. Returning to my own work: installed at the Contemporary Art Centre in Vilnius, Lithuania in 2016, the Perkūnas project aims to make us sonically aware of the social distancing produced by network culture. Visually, this structure is both abstract—a simple architectural form—and a harbinger from another world, hence the name Perkūnas [the Lithuanian name for (the God of) Thunder]. Built of the large ventilation ducting commonly found (and heard) in art museums and other large public structures, Perkūnas’s role isn’t to condition the air in the environment; rather, it is to produce sound in accordance with the number of Wi-Fi-enabled devices in the room.

A microprocessor monitors the space for active Wi-Fi-enabled electronic gadgets, increasing or decreasing the amount of air flowing through the duct and hence the amount of noise in the room based on the number of gadgets. If there are no gadgets present or if everyone’s Wi-Fi is disabled, the duct makes little or no sound. With a couple of gadgets, it makes a louder sound. The more gadgets, the more sound. Our ability to communicate verbally is directly affected by the number of gadgets. If we leave our gadgets behind or turn them off, Perkūnas stays quiet. But people don’t read wall text; they look at their phones, distracted by whatever algorithm is guiding them along their day, and Perkūnas howls at them, their distraction drowning out their ability to communicate verbally.
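
The control logic is simple enough to sketch. What follows is only an illustrative approximation of the behavior described above, not the installation’s actual firmware, and the device-counting and fan-control functions (count_wifi_devices, set_fan_duty_cycle) are hypothetical stand-ins:

```python
import time

def count_wifi_devices() -> int:
    # Hypothetical stand-in: in the installation, a Wi-Fi radio in monitor
    # mode counts nearby active devices. Replace with a real scan.
    raise NotImplementedError("replace with a real Wi-Fi scan")

def fan_level(device_count: int, max_devices: int = 20) -> float:
    """Map the number of detected gadgets to a fan duty cycle in [0.0, 1.0].
    No gadgets -> silence; more gadgets -> more airflow -> more noise."""
    if device_count <= 0:
        return 0.0
    return min(device_count, max_devices) / max_devices

def set_fan_duty_cycle(duty: float) -> None:
    # Hypothetical stand-in for the microcontroller's PWM output to the fan.
    print(f"fan duty cycle set to {duty:.2f}")

def control_loop(poll_seconds: float = 5.0) -> None:
    """Poll the room and adjust the duct's airflow, and thus its howl."""
    while True:
        n = count_wifi_devices()
        set_fan_duty_cycle(fan_level(n))
        time.sleep(poll_seconds)
```

The mapping is deliberately crude: silence with no devices, then a ramp that saturates at a modest device count, so even a handful of phones is enough to drown out conversation.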

Perkūnas doesn’t bring people together, but it suggests one direction for an art that can speak frankly to us about our social media-obsessed world without resorting to parody or false political promises about participation. Finding new ways for art to negotiate the space between social media and social distancing is a task at hand.

Art Strike is Over

After the election of the Tyrant, a strange thing happened to me. I found my drive to make art and to write about culture evaporate. This didn’t happen all at once and, for a time, I kept up a certain level of production, but I noticed my disdain for these activities rising. Somehow, producing for a failed culture seemed wrong. Instead, as I’ve chronicled here, I turned inward, taking on a variety of projects at our house, but also turning to radical gardening with native plants as a process of healing a small piece of land much damaged by human disturbance (notably the construction on this property forty years ago) together with the violence of contemporary landscaping. It’s been incredibly valuable to practice radical gardening, but it’s also a matter of temporal displacement: the work I have done won’t be visible for years to come; it’s an investment in a better future, optimism that this nightmare would end.

As for art, I continued research as a kind of secret activity, but I’ve definitely felt the need to retreat. At some point, I decided that I’d consciously go on an unannounced art strike while the Tyrant was in office. I couldn’t find a way to return to work that has been, at its basis, a sustained inquiry into how we behave toward each other in the public realm. I’ve talked to a few friends about this and apparently this isn’t uncommon at all.

Even if I don’t expect the political polarization crippling the US and the world to stop, with the Tyrant in his twilight, playing golf as performance art and flailing aimlessly about, there’s no question that a certain weight is gone. The world is still damaged, but it’s easier to sleep at night again; there’s room again to breathe. If there’s another lockdown—and there should be—I’ll be here, taking notes and experimenting.

Writing and research are both in the works and you’ll be seeing evidence soon.

Architecture and Rage

This month of rage has been building for a while: the last few weeks, the last three years, are nothing new, neither for the country nor for myself. A tyrant sits on his throne, spreading his bigotry and inciting hatred. Black men and women are dying in the streets, killed not so much by individual crooked cops as by a malignant system central to the disciplinary society that underlies our government.

Nothing new here. That’s not a way of ducking the issue; that is the issue. What follows is a highly personal account that I set out to write to underscore just how deeply fucked up the discipline of architecture is.

Over thirty years ago, on August 6, 1988, I witnessed the peaceful Tompkins Square Protest against neighborhood gentrification turn into a police riot after cops charged the crowd wielding batons. Police claimed that bricks and bottles were thrown, but I was there; they lied. Cops lie; it’s a fact that my father—a conservative and fan of Ronald Reagan—taught me years before that. The cops charged us. We ran. As we inched our way back to the scene, I watched a young black man in dreadlocks being held by two cops on horses as they hit him over and over with their batons. I was fairly far away and I gauged the scene. I yelled at them and gestured obscenely, hoping they would chase me, giving the man some relief. I figured I’d duck into a storefront. Had they come after me, I am sure it would have been bloody, but they didn’t. They continued beating him. Later, the police would be charged with over a hundred counts of police brutality. Only two officers were actually found guilty of any wrongdoing and only one—conveniently for the NYPD, a woman—was dismissed.

I turned 21 that year and I was in New York because I was supposed to go to Columbia for graduate school in architecture, but after three days I’d had enough and I quit, going back to Cornell to do a doctorate in the history of architecture. But these two moments aren’t random bits of biography; the first led to the second. I’d majored in history of architecture for two years at Cornell and as part of that, I took design studio as well. As was the goal of that curriculum, design studio taught me more than anything else during those two years. It seemed to me that there was something fundamentally wrong with the formalist system of education as it was taught at Cornell, where I studied, as well as at so many other schools, including Columbia.

I tried to find out what was behind this system and, since the first project we worked on was the Nine Square Grid, I eventually traced it to a book that was curiously missing from Cornell’s Fine Arts Library and could only be found via interlibrary loan, the 1971 MoMA catalog The Education of an Architect: A Point of View, which chronicled the teaching of John Hejduk at Cooper Union, as well as the book that every student seemed to hide in their desk like a pornographic magazine, the 1975 Five Architects, which presented the work of Peter Eisenman, Michael Graves, Charles Gwathmey, John Hejduk, and Richard Meier. These books were to be read furtively since they suggested that reading, not learning through drawing, might be possible and that the system of education then being employed, which presented itself as timeless, might itself have a history.

I found it difficult, almost impossible, to square these books—filled with formal games and an architecture that defied materiality—with the real political revolution of the 1960s. How could they co-exist in the same milieu? But clearly, it was there: in his preface to Five Architects, Arthur Drexler, Director of Architecture and Design at MoMA, stated, “An alternative to political romance is to be an architect, for those who actually have the necessary talent for architecture.”

In 1988, bound for architecture school, I witnessed the violence in the streets and a month later decided that it was not the time for me to go to architecture school. I wasn’t just interested in understanding the way architecture repressed politics; I needed to understand it. Against all advice, I chose to try to understand the very educational system that I detested.

As I did so, I ran across two curious articles in, of all places, Spy Magazine. Spy, edited by Kurt Andersen, was this wild, satirical magazine that mercilessly targeted the celebrity class, most notably the “short-fingered vulgarian Donald Trump,” who remembered that slight well enough that he tweeted about it in 2015 when Charlie Hebdo‘s offices were attacked. The first was a 1991 article by John Brodie, “Master Philip and the Boys.” Brodie examined how Philip Johnson wielded power and influence in architecture and how he gathered “the Kids,” a boys’ club of formalist architects, around himself (curiously enough, this included three of the five members of the New York Five plus Robert Stern and Frank Gehry). The second was the late Michael Sorkin’s 1988 article, “Where was Philip?” (thanks, Trump, you fucking fuck, for dragging your feet on effective measures against the COVID-19 virus and thereby killing Michael Sorkin, much better a man than you or your awful children will ever be). Using primary sources, Sorkin uncovered how the very same Philip Johnson tried to create a fascist party in the United States, going so far as to accompany the Nazis on the blitzkrieg into Poland, singing the invading army’s praises in (what we now call fake) news articles for extreme Right-wing publications. This led to that again: I realized these three phenomena—a formalist architectural education, an aged architect who gathered power around himself, and the largely suppressed Nazi past of that architect—were not distinct, but were all aspects of the same system.

I spent three years writing a dissertation exploring how contemporary architecture was fundamentally based on a process of spectacularization—of stripping politics from history for the purposes of an illiberal politics of form. I was writing in the aftermath of the scandals about Paul de Man’s collaborationist writing and I was naïve enough to believe that the academy would take my work to heart and examine itself. Matters had already changed by the time I filed my dissertation, when Franz Schulze published his biography of Johnson. I hadn’t known Schulze was writing the biography, or even doing research into Johnson’s fascist period, until right before publication, so his work hardly impacted mine, apart from some last minute revisions. Schulze’s book was a bizarre and offensive combination of pinkwashing—Johnson loved the boys in their black boots and hot uniforms so it was all ok—and old school outing (Johnson was gay! he was gay, imagine the scandal!). Schulze, who demonstrated no interest in critical thought, either didn’t have the capacity to make the connection between the postwar whitewashing of Johnson’s Nazi past and the concomitant removal of both Left and Right politics from modern architecture or, more likely, chose to ignore it. These were the same things: they both served power, privilege, and a Nietzschean drive to form at all costs. If Schulze didn’t draw this connection, I would soon find out that the academy didn’t want to hear it.

I published two articles from my dissertation in the Journal of Architectural Education. Editor Diane Ghirardo greatly supported this work, as did the peer reviews I received. The first article was “We Cannot Not Know History,” in which I examined Johnson’s Nazi past and the attempts to repress it. This immediately got me in hot water with the JAE’s parent organization, the Association of Collegiate Schools of Architecture. Whether the directorate was interested in killing the article because—as they claimed—it opened the organization to libel lawsuits or whether they hoped to kill it because the article hit too close to home is up to you to decide. The second article was “The Education of the Innocent Eye.” My exploration of the formalist system of architectural education was damning as well—after all, I concluded, “An initial exposure to architecture through the absolute system of the visual language of architecture clearly puts materialist questions second. Beam, column, wall, compression, shear, and rotation take precedence, and history, theory, gender, race, and class take a backseat.” This time, quashing the article wasn’t in the cards and I won Best Article of the Year from the JAE.

But I’d already found that not only was architecture unwilling to tackle the questions I’d raised, my dissertation topic was, on the contrary, an impediment to a career in the academy. Granted, it was the recession, but I spent two years unable to find employment until Margaret Crawford gave me a chance to teach history and theory of architecture at SCI-Arc. Even then, my work was received with suspicion by virtually all of the design faculty there, many of whom would have gladly curried favor with Johnson if given the opportunity. In the end, upon becoming Director, Eric Moss asked me in an accusatory tone what I was hoping to gain from writing this sort of thing. No great surprise: it was well known that he was a wannabe Kid. He made it clear I was unwelcome in his regime, and his leadership style (so clearly prefiguring Trump’s presidency in its vulgarity and absolutism) was such that I didn’t want to work for him in any event.

In the meantime, I’d hoped to publish my dissertation in book form. Theory was rapidly falling out of fashion and there weren’t many takers for my work on architecture education. What still shocked me was how reluctant publishers were to publish my material on Johnson. Encouraged by some colleagues (notably Mike Davis), I put together a proposal to publish my work on Johnson together with the texts he wrote back in the 1930s. Unfortunately, publishers had no interest in helping architecture confront itself. One asked, like Moss, what I hoped to gain from all this. (I started posting these articles in 2017 but the silence sapped my spirit… I’ll try again another day and see if there is any greater interest this time.)

I taught as a visiting faculty member at the University of Pennsylvania for a term under the late, brilliant and gentle Detlef Mertins, but work there was only temporary. Clearly it wouldn’t have been possible for me to stay in the history of architecture—my work was too dangerous for a powerful figure in that program, friends confided in me (I suspected having a non-Anglo-Saxon name was also a problem for this fellow… as it has been in so many places throughout my life)—and that was the last time I was a full-time member of a history of architecture faculty in the US. I reinvented myself: if my dissertation had been about social networks, I now looked to theorize the impact of digital networks on cities and forged a career in that field. I managed to land a position in the laboratory research wing of the architecture school at Columbia that lasted well over a decade until it was eliminated by a new Dean seeking to cut costs. Even so, it was clear at Columbia that my presence as a historian was deeply questionable, not to be publicly acknowledged, and I was never asked to be part of that faculty. Architecture still can’t stomach critique; this is clear to me.

I’m out of teaching, likely for good. I knew that this work was too dangerous for architecture, and I had a backup plan for funding that paid off at just about the same time the labs were shut down at Columbia, so now I can work on other projects that interest me more. I’m much happier without the phony leftism of the university, of faculty who pretend to be Marxists but whose real goal is defending their own turf and the system.

I have a lot of (political) gardening to do, art to make, and a translation of one of my texts to look over, but along the way, I intend to go back to some of these projects and post material from them on this site. Let’s see if there’s any interest this time. I’m not confident about it.

Taking a Break

I am taking a break from email and social media for a few days, but more likely for a week. With all of the death around us and the moron in the White House, it’s just too much. If I owe you anything, sorry; I need to protect my mental health more.

The 2010s in Review (The Decade of Shit)

So this post is late, what do you expect from the 2010s? Ten years ago Time Magazine dubbed the 2000s the “worst decade ever,” but in retrospect that was such a carefree time, wasn’t it? Even the end-of-the-decade collapse seemed full of possibility, promising a truly cataclysmic civilizational implosion if nothing else.

In contrast, the 2010s weren’t just another failed decade, they were the decade of shit. All the hype and excitement led us to a universal dissatisfaction. Left, Right, and Center, we’re all pissed off about where we are and not enthusiastic at all about where we’re going. Even the proponents of doom are disappointed. The whole ZeroHedge crowd has been left trying to cover their short positions as the economy lurches onward and the doomers are facing expiration deadlines on their MREs as they wait for TEOFTWAWKI. Now sure, this year we’ve already had firestorms the size of Austria ravaging Australia, a rain of rockets in Baghdad, Ukrainian jetliners getting shot out of the sky, a deadly pandemic in China caused by people eating creatures that they really shouldn’t, and the failure of the Senate to uphold the rule of law, but the banality of it all is crushing. While the Dark Mountain set drinks wine around a campfire, gets henna tattoos, and sings along to songs about the end of nature, for the rest of us, it’s just an exhausting, daily slog through the unrelentingly alarming headlines.

We finish the decade with network culture in its last days. Back in 2010, when I was working in earnest on a book on network culture,* I made the following prediction:

“Toward the end of the decade, there will be signs of the end of network culture. It’ll have had a good run of 30 years: the length of one generation. It’s at that stage that everything solid will melt into air again, but just how, I have no idea.”

Well now we know how. All the giddy delirium about the network and globalization is gone. We’ve got our always-on links to the net in our hands all the time, we’ve got our digital economy, and with it all we have entered a period of stark cultural decline. It’s an empty time, devoid of cultural monuments. Name one consequential building of this past decade, one!

Juoda Paduška, Valdas Ozarinskas

Well, there is this great project by architect Valdas Ozarinskas (who didn’t make it through the decade), a massive dark summation of our failures. But culture’s not doing well. The easy appeal to dopamine receptors provided by Facebook, Twitter, Instagram, Netflix, and YouTube has undone our ability to focus. This is the golden age of cat videos, nothing more. Any sustained thought is gone and bottom up efforts have dissipated.

A decade ago, my blogging comrades and I were plotting how to take over architectural discourse. But this amounted to nothing. Those blogs, and many others, have been silenced, absorbed into garbage sites like Medium, Facebook, Forbes, and BusinessInsider or just left in whatever state they were in, circa 2014, as their creators went on to Twitter, Facebook, Instagram, or just premature (or belated?) fin-de-siècle ennui. Podcasts are the one flourishing outpost of real DIY content on the Internet, perhaps because they distract from the world around us, but in all other respects the DIY ethic is on the wane. The most interesting publication of the 2000s, Make Magazine, went under in 2019, at least temporarily. Kickstarter is the metaphor for the decade: a lot of promises, a lot of crap that we’ve thrown away, a lot of outright lies, and a feeling of dread that somehow we’ll be sucked into its maw and participate again.

But it’s not just banality. As Bruce Sterling points out in his State of the World 2020 at the Well, we were always too optimistic about what bottom-up efforts could do. Network culture gave birth to the vilest of viral propaganda, some of it from state actors, some of it genuinely home grown. In Bruce’s words, “Our efforts had evolved an ecosystem for distribution of weaponized memes.”

Network culture didn’t usher in a new world of Free! Open Access! Networked Culture! Rather, it ushered in the first phase of the Jackpot. That’s a phrase I use often these days, lifted from William Gibson’s 2014 The Peripheral, one of a handful of genuinely insightful cultural artifacts from the last decade (note that William Gibson’s Twitter name is @GreatDismal). The Jackpot refers—not so cheerily—to the end for some 80% of the world’s population, rich and poor, developed and undeveloped (largely the latter in each pair).

Time for a lengthy quote from Gibson:

No comets crashing, nothing you could really call a nuclear war. Just everything else, tangled in the changing climate: droughts, water shortages, crop failures, honeybees gone like they almost were now, collapse of other keystone species, every last alpha predator gone, antibiotics doing even less than they already did, diseases that were never quite the one big pandemic but big enough to be historic events in themselves. And all of it around people: how people were, how many of them there were, how they’d changed things just by being there. …

But science … had been the wild card, the twist. With everything stumbling deeper into a ditch of shit, history itself become a slaughterhouse, science had started popping. Not all at once, no one big heroic thing, but there were cleaner, cheaper energy sources, more effective ways to get carbon out of the air, new drugs that did what antibiotics had done before…. Ways to print food that required much less in the way of actual food to begin with. So everything, however deeply fucked in general, was lit increasingly by the new, by things that made people blink and sit up, but then the rest of it would just go on, deeper into the ditch. A progress accompanied by constant violence, he said, by sufferings unimaginable. …

None of that … had necessarily been as bad for very rich people. The richest had gotten richer, there being fewer to own whatever there was. Constant crisis had provided constant opportunity. … At the deepest point of everything going to shit, population radically reduced, the survivors saw less carbon being dumped into the system, with what was still being produced being eaten by those towers they’d built… And seeing that, for them, the survivors, was like seeing the bullet dodged.

Now, amidst modernization, two World Wars, massive pollution, and unprecedented environmental cataclysms such as Minamata Bay, Chernobyl, or Bhopal, the twentieth century was hardly a cakewalk, and when it comes down to it, the salinization of the Fertile Crescent is likely our real original sin (won’t someone start a green Church around this as the fall from grace?). But the Jackpot officially began on 9 November 2016, a day that reminded many of us of the outrage followed by the blank numbness that we experienced on 9/11. A new emergency was declared by never-Trump neoliberals, anti-Trump leftists, and, outraged by all the outrage, the pro-Trump neoreactionaries and neo-Nazis who had stocked up on guns in case HRC was elected and then had nothing to do. There would be no turning back now.

The horrifying truth underlying the Jackpot is that it isn’t just an accident. We used to think that global inequality was based on structural poverty, but now it’s becoming clear that automation will ensure that vast numbers of people are no longer needed. Couple that with climate change and you have the Jackpot. Entire swathes of the world—Afghanistan, Iraq, Syria, Libya, South Sudan, and Yemen—have already been rendered nearly uninhabitable by drought and continual war waged by the United States, Iran, Russia, and their proxies for strategic purposes. Trump’s America is full of individuals who have no future whatsoever and, with self-driving vehicles on the way, millions of truck and taxi drivers are about to find themselves as in demand as coal miners in West Virginia. Moreover, even as we continue to belch out carbon at an unprecedented rate, it’s only a matter of time before jobs in the fossil fuel and traditional automotive industries disappear as well. New jobs will appear, of course, but there will be far fewer of those, and the idea of teaching coding to coal miners proved not to be that sound.

The newly disenfranchised have little to lose: the cracks are showing. It’s not that the containment can only last so long; it’s already breaking everywhere. We can see the collapse of the Paris Accords not just as a deliberate step further into dark acceleration but as a lizard-like reaction to the Jackpot, part of a new strategy: national government as survivalist retreat.

Sure, the Trump administration is largely composed of knuckle-dragging defectives, but we can discern a strategy if we look carefully enough: rather than making sacrifices to survive the Jackpot, they will do what they can to get the biggest piece of the pie for themselves and their cronies in the colossal redistribution of wealth it represents. And don’t get your hopes up about a socialist revolution in the next round of elections. My academic leftie friends impute way too much to old Frankfurt School notions of ideology: the reason Trump and his cohort of populists have been elected isn’t that the poor have been duped and only need to see the way, it’s that they know there’s no hope for them. There’s no standing in solidarity behind a neo-socialist boomer; they all know that’s not going to work, and if they’re going out, they’re going to drag down the elites with them. Americans elected this grinning, Adderall-abusing droog and will most likely do so again, just like the rest of the populists rising to power worldwide. In Joseph de Maistre’s (and Julie Mason’s) words, “every nation gets the government it deserves.” Who better than the “King of Debt” to show us that the Jackpot is here?

If anything, not only has the Left failed to come up with a convincing counter-argument, it’s gone down the entirely wrong route with identity politics, which has all but taken over not just Left politics but also the academy and museums. Identity politics isn’t anti-Trump, it’s high Trump, embracing the idea that the Jackpot has started and the next step is to redivide the pot in favor of your tribe. In the art world and the academy, it teams up with an exhausted neoliberalism looking for an alibi while it also helps sell culture to a new generation of oligarchs, even as it further exacerbates the rampant tribalism in our society. Steve Bannon understood this well: if the Left fights on the basis of identity politics, his right-wing identity politics wins every time. But for many on the woke Left, this is hardly a problem: a Trumpian government gives their screaming more legitimacy and feeds their fevered dreams of revolution.

Against the rise of identity politics on the Left and Right, the Center is left floundering. Neoliberalism is exhausted, its most appropriate cultural manifestation being overtourism. As the decade started, I repeatedly tried to launch a major research project on the phenomenon in the academy, but the project fell on deaf ears. I wasn’t surprised. Universities are just like travel: there is an appearance of diversity and difference, but it’s a generalized sameness, a gray nothing in which you won’t ever encounter anything new, just another Starbucks serving poor-quality beans over-roasted so that you can’t tell what they are and some screaming about how special the place is.

My sense is that if there’s to be any kind of hope in the next decade to get out of what Bruce Sterling appropriately calls “the New Dark,” it’s going to be to achieve the impossible: throw out identity politics (left and right) and turn back toward a grand project—what academics used to criticize as a “metanarrative”—that most of us can get behind.

As this decade showed, there’s little question that this is climate change and toxins in the environment. Here’s where the Jackpot has its upside: the deaths of billions of humans pale in comparison to the species-cide we are inflicting on species left and right. Have you listened for the dawn chorus of birds lately? A hundred years from now the biggest news of this decade may be that this is when Rachel Carson’s Silent Spring became real, as birds and pollinators died off in massive numbers. Here on the Northeast seaboard, we’ve bid goodbye to the ash tree and are watching for beech leaf disease, white pine needle disease, and sudden oak death, and hoping the spotted lanternfly doesn’t cross the Delaware. It seems like the only place nature really thrives anymore isn’t in national parks, it’s in radiation exclusion zones.

But dead birds and trees don’t matter much to the average person who doesn’t have anywhere to shop besides the Dollar General Store and hasn’t seen fresh vegetables in years. The key is probably going to be luck, bad luck (and what is the Jackpot about if it’s not luck?). Maybe, just maybe, if a series of truly awful major environmental cataclysms hit the key countries involved in carbon production—the US, China, India, Canada—they might be alarmed enough to do something about it. We aren’t talking about a category 5 hurricane hitting New York City. Nobody will care about that; we are talking about flattening a good portion of Florida and the Southeast, plus a good bit of Texas, maybe a Dust Bowl 2.0 coupled with massive flooding in the Midwest, then doing that five or six times over worldwide in the space of a couple of years. That’s a horrible, terrible thing, but if luck isn’t with us and it doesn’t happen, what are our chances of avoiding much worse conditions? And even if it does, will we be too late to turn back the clock?

*Never finished because a publisher botched the project to the point I quit working on it in disgust. But you can read an early essay (circa 2006) on network culture here.

My Dear Berlin Wall

This weekend marks the thirtieth anniversary of the fall of the Berlin Wall and, to commemorate, I thought it would be appropriate to post this chapter from Blue Monday, which Robert Sumrell and I published a decade ago as AUDC.

My Dear Berlin Wall

On June 17, 1979, Eija-Riitta Eklöf, a Swedish woman, married the Berlin Wall at Groß-Ziethener Straße, taking Wall Winther Berliner-Mauer as her name. The final piece of heroic modernist architecture, the Berlin Wall was constructed just as the movement’s Utopian political ambitions had begun to wane. By then, with the eastward spread of modernism during the Khrushchev years, the ideological distinctiveness of modernity had come to an end, making East and West, modern and antimodern, harder to distinguish. Still, the architects of the Berlin Wall hoped it would change society.

For a while, it did just that. After the close of World War II, Berlin was a microcosm of Germany. Both city and country were cut into four occupation zones, each overseen by a commander-in-chief from one of the four Allied powers, the United States, the United Kingdom, France, and the Union of Soviet Socialist Republics. As the geopolitical tide continued to shift in the years after the war, tensions escalated between the Soviets and the Western allies. By the time of the Berlin Blockade of 1948-1949, it became clear that the three allied segments of Berlin would become “West Berlin,” an enclave of the Federal Republic of Germany (or “West Germany”) while the Soviet-controlled sector would become “East Berlin,” associated with the German Democratic Republic (or “East Germany”). Responding to the blockade crisis, Belgium, Canada, Denmark, France, Iceland, Italy, Luxembourg, Netherlands, Norway, Portugal, United Kingdom, and the United States promised mutual military defense through the North Atlantic Treaty Organization (NATO) in 1949. In 1955, West Germany joined NATO and, in response, the Soviet Union formed the Warsaw Pact with Albania, Bulgaria, Czechoslovakia, Hungary, Poland, and Romania. East Germany joined one year later as two events cemented the division of the world. The first was the Suez Crisis in which the United States, fearing thermonuclear war with the Soviet Union, forced France and Britain to withdraw from the Suez Canal, a critical piece of infrastructure they had occupied earlier that year to prevent Egypt from nationalizing it. The second was the Hungarian Revolution, crushed by Soviet tanks as the West watched. With the refusal of the Soviet Union and the United States to enter into direct confrontation, it became clear that the world, Europe, Germany, and Berlin had been divided into spheres of influence controlled by the superpowers.

The superpowers fought the Cold War through their display of achievements in science, propaganda, and accumulation. Berlin became the prime place for the two competing rivals to showcase their material culture. In the East, the Stalinallee, designed by architects Hartmann, Henselmann, Hopp, Leucht, Paulick, and Souradny, was a nearly 2-kilometer-long, 89-meter-wide boulevard lined with eight-story Socialist Classicist buildings. Inside these vast structures, workers enjoyed luxurious apartments, shops, and restaurants. In response, West Berlin held the Interbau exhibit in 1957, assembling the masters of modern architecture including Alvar Aalto, Walter Gropius, Le Corbusier, Oscar Niemeyer, Johannes van den Broek and Jaap Bakema, Egon Eiermann, and Pierre Vago to build a model modernist community of housing blocks in a park in the Hansaviertel quarter.

Nor was the battle of lifestyle between East and West limited to architecture. Soviet communism followed capitalism in focusing on industrial productivity and expansion within a newly global market. While the United States exported its products—and eventually outsourced its production—throughout the world, the Soviets used COMECON, the economic equivalent of the Warsaw Pact, to eliminate trade barriers among communist countries. Each country produced objects to represent its superior quality of life, validating each system not only to its own citizens, but also to each other and to the rest of the world. The importance of material culture in the Cold War is underscored by the 1959 Nixon-Khrushchev “Kitchen Debate,” played out between the two powers in a demonstration kitchen at a model house during the American National Exhibit in Moscow. At the impromptu debate, Soviet Premier Nikita Khrushchev expressed his disgust at the heavily automated kitchen and asked if there was a machine that “puts food into the mouth and pushes it down.” U.S. Vice President Richard Nixon responded, “Would it not be better to compete in the relative merits of washing machines than in the strength of rockets?” A Cold War battle was fought through housewares.(1)

The climax of the battle came in Berlin. In the divided city, half a million people would go back and forth each day between East and West. Westerners would shop in East Berlin where products, subsidized by the East Bloc, were cheap. Easterners would shop in the Western sector where fetish items such as seamless nylons and tropical fruits could be found. Overall, however, the flow was westward. The rate of defection was unstoppable: between 1949 and 1961 some 2.5 million East Germans left for the West, primarily through Berlin. But just as destructive was the mass flight of subsidized objects westward, a migration that pushed the inefficient communist system to collapse. Khrushchev had won the debate: cheap goods from the East were more desirable, but the cost to the system was unacceptable. By this time, production and accumulation in the East Bloc had become goals in and of themselves, devoid of logic and use, no longer tied to the basic needs of the economy. In 1961, Khrushchev launched the Third Economic Program to increase the Soviet Union’s production of industrial goods at all costs. During a meeting that year, COMECON decided that the flow of people and products had to be stopped. On August 13, 1961, the border was sealed with a barrier constructed by East German troops.

The result was a city divided in two without regard for prior form or use. Over time, the Berlin Wall evolved from barbed wire fences (1961-1965) to a concrete wall (1965-1975), until it reached its full maturity in 1975. The final form would be not one but two Walls, each constructed from 45,000 sections of reinforced concrete, each section 3.6 meters high by 1.5 meters wide and weighing 2.75 tons, separated by a no-man’s-land as wide as 91 meters. The Wall was capped by a smooth pipe, making it difficult to scale, and was accompanied by fences, trenches, and barbed wire as well as over 300 watchtowers and thirty bunkers. (2)

With its Utopian aspiration to change society, the Wall was the last product of heroic modernism. It succeeded in changing society, but as with most modernist products, not in the way its builders intended. East Berlin, open to the larger body of the East Bloc, withered, becoming little more than a vacation wonderland for the Politburo élite of the Soviet Union and Warsaw Pact and a playground for spies of both camps. Cut off, West Berlin thrived. By providing a datum line for Berlin, the Wall gave meaning to the lives of its inhabitants. In recognition of this, Joseph Beuys proposed that the Wall should be made taller by 5 cm for “aesthetic purposes.” (3)

As the Wall was being constructed in Berlin, Situationists in Paris and elsewhere were advocating for radical changes in cities as a means of preserving urban life. For them, the aesthetics of modernism and the forces of modernity were destroying urbanity itself. During this period working-class Paris was being emptied out, its inhabitants sent by the government to an artificial modernist city in the suburbs while an equally artificial cultural capital for tourists and industry was created in the cleaned-up center. Led by Guy Debord, the Situationists hoped to recapture the city by creating varied ambiances and environments and strategies providing opportunities for stimulation and chance drift. When Situationist architect Constant Nieuwenhuys deployed the floating transparent layers of his New Babylon to augment the existing city with unique, flexible, and transient spaces, Debord condemned the project, arguing that the existing city was already almost perfect. Only minor modifications were necessary, such as adding light switches to street lights so that they could be turned on and off at will and allowing people to wander in subways after they were shut off at night.

In his 1972 thesis at the Architectural Association, entitled “Exodus, or the Voluntary Prisoners of Architecture,” Rem Koolhaas found a way of reconciling modernism with Situationism through the figure of the Wall. Suggesting that the Wall might be exported to London and made to encircle it, Koolhaas writes, “The inhabitants of this architecture, those strong enough to love it, would become its Voluntary Prisoners, ecstatic in the freedom of their architectural confines.” Inside, life would be “a continuous state of ornamental frenzy and decorative delirium, an overdose of symbols.” Although officially proposing a way of making London more interesting, Koolhaas’s thesis is really a set of observations about the already existing condition of the real Wall. (4)

In choosing to encircle London with the Wall, Koolhaas recognized that it was not only the last great product of modernism, it was the last work of heavy architecture. Already in 1966, in his introduction to 40 Under 40, Robert Stern observed that an increasingly dematerialized “cardboard architecture” was “the order of the day” in the United States while in England, architects such as Archigram were proposing barrier-less technological utopias.(5)

Built of concrete, the Wall was solid, weighty. It hearkened back to the days of the medieval city walls, which were not only defensive but attempted to organize and contain a world progressively more interconnected through communications and trade. Walls acted as concentrators, defining places in which early capitalism and urbanity could be found and intensifying both. So long as the modes of communication remained physical and the methods of making and trading goods were slow, nations retained their authority and autonomy through architectural solidity.

The destruction of the Berlin Wall in 1989 is concurrent with the pervasive and irreversible spread of Empire and the end of heavy architecture. Effectively, the Wall fell by accident. During 1989, mass demonstrations in East Germany led to the resignation of East German leader Erich Honecker. Soon, border restrictions between neighboring nations were lifted and the new government decided to allow East Berliners to apply for visas to visit West Germany. On November 9, 1989, East German Minister of Propaganda Günter Schabowski accidentally unleashed the Wall’s destruction. Shortly before a televised press conference, the Minister was handed a note outlining new travel regulations between East and West Berlin. Having recently returned from vacation, Schabowski did not have a good grasp on the enormity of the demonstrations in East Berlin or on the policy being outlined in the note. He decided to read the note aloud at the end of his speech, including a section stating that free travel would be allowed across the border. Not knowing how to properly answer questions as to when these new regulations would come into effect, he simply responded, “As far as I know effective immediately, right now.” Tens of thousands of people crowded the checkpoints at the Wall and demanded entry, overwhelming border guards. Unwilling to massacre the crowds, the guards yielded, effectively ending the Wall’s power. (6)

The Schabowski mis-speak points to the underlying reason for the Wall’s collapse: the lack of information flow in the East Bloc. The goals of Khrushchev’s Third Economic Program were finally met by the early 1980s, but by that point the United States was no longer interested in production. For America, the manufacturing of objects proved to be more lucrative when outsourced to the developing world. By sending its production overseas, America assured the success of its ideology in the global sphere while concentrating on the production of the virtual. Having spent itself on production of objects, as well as on more bluntly applied foreign aid, the Soviet Union collapsed. Manuel Castells observes that, “in the 1980s the Soviet Union produced substantially more than the US in a number of heavy industrial sectors: it produced 80 percent more steel, 78 percent more cement, 42 percent more oil, 55 percent more fertilizer, twice as much pig iron, and five times as many tractors.” The problem, he concludes, was that material culture no longer mattered. The PC revolution was completely counter to the Soviet Union’s centralization while photocopy machines were in the hands of the KGB. (7)

With the Wall gone, and with it the division between East and West undone, the world is permeated by flows of information. Physical borders no longer hold back the flow of an immaterial capital. Indeed, soon after the Wall fell, the European Union set about in earnest to do away with borders between individual nation states.

But through it all, the Wall has had no greater fan than Wall Winther Berliner-Mauer. In her view, the Wall allowed peace to be maintained between East and West. As soldiers looked over the Wall to the other side, they saw men just like themselves, with families they loved and wanted to protect. In this way, the Wall created a bond between men who would otherwise be faceless enemies. But Berliner-Mauer’s love for the Wall is far from abstract. For her, objects aren’t inert but rather can possess souls and become individuals to fall in love with. Berliner-Mauer sees the Wall as a noble being and says she is erotically attracted to his horizontal lines and sheer presence. While Berliner-Mauer is in a seemingly extreme, sexual relationship with the object of her desire, our animistic belief in objects is widespread across society. Adults and children alike confide in stuffed animals, yell at traffic signs, and stroke their sports cars all the time. Many people choose to love objects over their friends, their spouses, or themselves. Berliner-Mauer understands that the role of objects in our lives has changed. Objects are no longer tools that stand in to do work for us, or surrogates for normal human activities. Objects have come into their own, and as such we have formed new kinds of relationships with them that do not have a precedent in human interaction. The Wall is a loving presence in her life, and she loves it in return.

Faced with the tragedy of the Wall’s destruction, Berliner-Mauer created a technique she calls “Temporal Displacement” to fix her mind permanently in the period during which the Wall existed, even at the cost of creating new memories. (8) For Berliner-Mauer, marrying the Wall is a last means to preserve the clarity of modernism and the effectiveness of architecture as a solid and heavy means of organization. To love the Berlin Wall is to dream of holding onto the simple, clear disciplinary regime of order and punishment that came before we were all complicit within the system. In today’s society of control, as Gilles Deleuze writes, we don’t need enclosures any more. Instead, we live in a world of endless modulations, internalizing control in order to direct ourselves instead of being directed.(9) After the fall of the Wall and the end of enclosures, even evil itself, Jean Baudrillard observes, loses its identity and spreads evenly through culture. An Other space is gone from the world. North Korea and Cuba are mere leftovers, relegated to tourism. There is nowhere to defect to: without the Wall, we must all face our own complicity in the system.(10)

(1) Elaine Tyler May, Homeward Bound: American Families in the Cold War Era (New York: Basic Books, 1999), 10-12.

(2) Thomas Flemming, The Berlin Wall: Division of a City (Berlin: be.bra Verlag, 2000), 75.

(3) Irving Sandler, Art of the Postmodern Era (Boulder, Colorado: Westview Press, 1998), 110.

(4) Rem Koolhaas, S, M, L, XL (New York: Monacelli, 1995), 2-21.

(5) Robert A.M. Stern, ed., 40 Under 40: An Exhibition of Young Talent in Architecture (New York: The Architectural League of New York, 1966), vii.

(6) Angela Stent, Russia and Germany Reborn: Unification, the Soviet Collapse, and the New Europe (Princeton: Princeton University Press, 1999), 94-96.

(7) Manuel Castells, End of Millennium, Second Edition (Oxford, UK: Blackwell, 2000), 26-27. See also the entire chapter on “The Crisis of Industrial Statism and the Collapse of the Soviet Union,” 5-67.

(8) https://www.berlinermauer.se (note from 2019: Eija-Riitta Eklöf-Berliner-Mauer died in 2015 and this site is now some kind of spam site)

(9) Gilles Deleuze, “Postscript on the Societies of Control,” October, Vol. 59 (Winter 1992), 3-7.

(10) Jean Baudrillard, “The Thawing of the East,” The Illusion of the End (Stanford: Stanford University Press, 1994), 28-33.

On Networked Publics and the Facebook

At a recent party, an acquaintance asked where I’d disappeared to. Nowhere, I said, and when I inquired into what prompted their question, they responded that they couldn’t find me on the Facebook. I was a little surprised by this since I am hardly the first person to deactivate or delete an account; in the United States and the European Union, the Facebook’s membership has peaked and has been shrinking for a few years, while engagement has fallen steeply.

Still, the Facebook has effectively replaced most social communication for many people, so I initially had some anxiety about deactivating my account. I thought it would be hard to go without the Facebook since it gathered together people from throughout my life and career and also hosted a few key discussion groups that I enjoyed (mainly pertaining to modular synthesis). Two months later, I don’t feel I am missing out on anything; on the contrary, I am more at ease. Losing the constant noise of the Facebook feed is a joy. If anyone actually wants to get in touch, I am here.

The connections we establish on Facebook are superficial. Birthdays are a case in point: it’s lovely when people remember your birthday, but they don’t really; the Facebook prompts them to. And a couple of happy birthdays doesn’t excuse the toxic impact the site has had on politics, an impact I find much more oppressive than that of Twitter, which is used by a large number of journalists and bloggers. My most active Twitter circle is composed of some of the best writers in architecture and urbanism today as well as a smattering of people involved with digital culture. So, when I mentioned I was quitting Facebook, William Ball sent a link to a scary article by Timothy McLaughlin about how Facebook’s rise in Myanmar fueled genocide while Frank Pasquale lamented that Facebook might wind up like Big Tobacco, accepting the shrinking of its domestic market since it is doing so well overseas.

This gets me thinking. Something has changed in network culture: the 2016 election and the end of the Facebook’s growth in the US and EU are indicators that the heady exuberance about the Net has turned to anxiety and dismay. That the methods of communication used by networked publics are fundamentally flawed in a way that earlier publics were not isn’t just something that scholars talk about anymore; it’s something we all face, daily. Back when we took on the term as an object of study at the Annenberg Center for Communications, we understood “networked publics” to refer to a group of individuals with a particular interest in connecting together digitally. Some researchers—not coincidentally sponsored by technology corporations rather than universities—mistakenly depicted social network sites as networked publics in themselves. This is a category error: with few exceptions (the first incarnation of aSmallWorld, intended as a gathering place for the mega-rich and their hangers-on, comes to mind), social networking sites act not as publics in themselves but rather as hosts or platforms on which, in theory, individuals could participate in a variety of different publics.

But they don’t. Algorithms ensure posts only reach individuals who “like” similar content, inhibiting the discussion and deliberation necessary to create a public and supplanting it with a brown slime of happy (or angry, or sad, depending on the sorts of things one “likes”) posts. Social networking sites profit from various malignant actors who promote content covertly. Controversy is good, creating more engagement, regardless of the human cost.
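Since I keep invoking “the algorithm,” a toy sketch may make the point concrete. The snippet below is purely hypothetical—a few lines of Python of my own devising, not Facebook’s actual ranking code—but it shows how little it takes for like-affinity filtering to produce exactly this effect: only posts resembling what a viewer already “likes” survive the cut.

```python
# Hypothetical sketch of like-affinity filtering, NOT any platform's real code.
from dataclasses import dataclass, field


@dataclass
class Post:
    author: str
    text: str
    topics: set = field(default_factory=set)  # topics the post appeals to


def rank_feed(viewer_likes: set, posts: list, threshold: float = 0.3) -> list:
    """Score posts by overlap with the viewer's past 'likes'; drop the rest."""

    def affinity(post: Post) -> float:
        if not post.topics:
            return 0.0
        return len(viewer_likes & post.topics) / len(post.topics)

    # Only posts similar to what the viewer already likes survive,
    # so unfamiliar or dissenting content rarely reaches them.
    return sorted(
        (p for p in posts if affinity(p) >= threshold),
        key=affinity,
        reverse=True,
    )


# A viewer who likes "outrage" and "cats" never sees the policy post.
feed = rank_feed(
    {"outrage", "cats"},
    [
        Post("a", "angry take", {"outrage"}),
        Post("b", "cat photo", {"cats", "wholesome"}),
        Post("c", "policy analysis", {"policy", "deliberation"}),
    ],
)
```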

If publics can’t form on social networking sites and if, for many people, social networking sites are the primary means of interacting with others, is it possible that publics and public discourse are becoming extinct as a consequence? When I see people saying “it’s time for us to have a conversation” when they really want to shame you into accepting whatever idea they have adopted as their cause that day, I wonder if we have become completely unable to engage in actual public discourse. After all, the modern category of the public is less than four centuries old. Why should we be so bold as to think it will endure?


Drones at the New School

I am delighted to announce that I will be presenting Perkūnas at the New School tomorrow during the Sonic Pharmakon conference put together by Ed Keller for the Center for Transformational Media (I am a fellow with CTM this year). See the schedule here. As a modular synthesist with a sound sensitivity disorder, I am committed to the possibilities of drone and would be fascinated to see Sunday’s events (alas, I am on a plane back from Kauai today, so I will miss the day).