A Decade in Retrospect

Never mind that the decade really ends in a little over a year; it’s time to take stock of it. Today’s post looks back at the decade just past while tomorrow’s will look at the decade to come.

As I observed before, this decade is marked by atemporality. The greatest symptom of this is our inability to name the decade and, although commentators have tried to dub it the naughties, the aughts, and the 00s (is that pronounced the ooze?), the decade remains, as Paul Krugman suggests, a Big Zero, and we are unable to periodize it. This is not just a matter of linguistic discomfort; it’s a reflection of the atemporality of network culture. Jean Baudrillard is proved right. History, it seems, came to an end with the millennium, which was a countdown not only to the end of a millennium but also to the end of meaning itself. Perhaps, the Daily Miltonian suggested, we didn’t have a name for the decade because it was so bad.

Still, I suspect that we historians are to blame. After Karl Popper and Jean-François Lyotard’s condemnation of master narratives, periodizing—or even making broad generalizations about culture—has become deeply suspect for us. Instead, we stick with microhistories on obscure topics while continuing our debates about past periods, damning ourselves into irrelevance. But as I argue in the book that I am currently writing, this has led critical history to a sort of theoretical impasse, reducing it to antiquarianism and removing it from a vital role in understanding contemporary culture. Or rather, history flatlined (as Lewis Lapham predicted), leaving even postmodern pastiche behind for a continuous field in which anything could co-exist with anything else.

Instead of seeing theory consolidate itself, we saw the rise of network theory (a loose amalgam of ideas ranging from the theories of mathematicians like Duncan Watts to those of journalists like Adam Gopnik) and post-criticism. At times, I felt like I was a lone (or nearly lone) voice against the madding crowd in all this, but times are changing rapidly. Architects and others are finally realizing that the post-critical delirium was an empty delusion. The decade’s economic boom, however, had something of the effect of a war on thought. The trend in the humanities is no longer to produce critical theory; it’s to get a grant to produce marketable educational software. More than ever, universities are capitalized. The wars on culture are long gone as the Right turned away from this straw man and the university began serving the culture of network-induced cool that Alan Liu has written about. The alienated self gave way to what Brian Holmes called the flexible personality. If blogs sometimes questioned this, Geert Lovink pointed out that the questioning was more nihilism than anything else.

But back to the turn of the millennium. This wasn’t so much marked by possibility as by delirium. The dot.com boom, the success of the partnership between Thomas Krens and Frank Gehry at the Guggenheim Bilbao, and the emergence of the creative cities movement established the themes for this decade. On March 10, 2000, the tech-heavy NASDAQ index peaked at 5,048, twice its value the year before. In the six days following March 16, the index fell by nine percent, and it did not stop falling until it reached 1,114 in October 2002. If the delirium was revealed, the Bush administration and the Federal Reserve found a tactic to forestall the much-needed correction. Under the pretext of striving to avoid full-scale collapse after 9/11, they set out to create artificially low interest rates, deliberately inflating a new bubble. Whether they fully understood the consequences of their actions or found themselves unable to stop them, the results were predictable: the second new economy in a decade turned out to be the second bubble in a decade. If, for the most part, tech was calmer, architecture had become infected, virtualized and sucked into the network not to build the corporate data arcologies predicted by William Gibson but as the justification for a highly complex set of financial instruments that seemed to be crafted so as to be impossible to understand even by those crafting them. The Dow ended the decade lower than it started, even as the national debt doubled. I highly recommend Kevin Phillips’s book Bad Money: Reckless Finance, Failed Politics, and the Global Crisis of American Capitalism to anyone interested in trying to understand this situation. It’s invaluable.

This situation is unlikely to change soon. The crisis was one created by the over-accumulation of capital and a long-term slowdown in the economies of developed nations. Here, Robert Brenner’s The Economics of Global Turbulence can help my readers map the situation. To say that I’m pessimistic about the next decade is putting it lightly. The powers that be had a critical opportunity to rethink the economy, the environment, and architecture. We have not only failed on all these counts, we have failed egregiously.

It was hardly plausible that the Bush administration would set out to right any of these wrongs, but after the bad years of the Clinton administration, when welfare was dismantled and the Democrats veered to the Right, it seemed unlikely that a Republican presidency could be that much worse. If the Bush administration accomplished anything, they accomplished that, turning into the worst presidency in history. In his review of the decade, Wendell Berry writes, "This was a decade during which a man with the equivalent of a sixth grade education appeared to run the Western World." If 9/11 was horrific, the administration’s response—most notably the disastrous invasions of Afghanistan and Iraq, alliances with shifty regimes such as Pakistan, and the turn to torture and extraordinary rendition—ensured that the US would be an enemy for many for years to come. By 2004, it was embarrassing for many of us to be American. While I actively thought of leaving, my concerns about the Irish real estate market—later revealed as well-founded—kept me from doing so. Sadly, the first year of the Obama administration proved the Democrats were hopeless: he kept in place some of the worst policies and personnel of the Bush administration, received a Nobel Peace Prize for little more than inspiring hope, and surrounded himself with the very same sorts of financiers who caused the economic collapse in the first place. No Republican could have done as much damage to the Democratic party as their own bumbling leader and deluded strategists did. A historic opportunity has been lost.

Time ended by calling it "the worst decade ever."

For its part, architecture blew it handily. Our field has been in crisis since modernism. More than ever before, architects abandoned ideology for the lottery world of starchitecture. The blame for this has to be laid on the collusive system among architects, critics, developers, museum directors, and academics, many of whom were happy as long as they could sit at a table with Frank Gehry or Miuccia Prada. This system failed and failed spectacularly. Little of value was produced in architecture, writing, or history.

Architecture theory also fell victim to post-criticism, its advocates too busy being cool and smooth to offer anything of substance in return. Perhaps the most influential texts for me in this decade were three from the last one: Deleuze’s Postscript on the Society of Control, Koolhaas’s Junkspace, together with Hardt and Negri’s Empire. If I once hoped that some kind of critical history would return, instead I participated in the rise of blog culture. If some of these blogs simply endorsed the world of starchitecture, by the end of the decade young, intelligent voices such as Owen Hatherley, David Gissen, Sam Jacob, Charles Holland, Mimi Zeiger, and Enrique Ramirez, to name only a few, defined a new terrain. My own blog, founded at the start of the decade, has a wide readership, allowing me to engage in the role of public intellectual that I’ve always felt it crucial for academics to pursue.

Indeed, it’s reasonable to say that my blog led me into a new career. Already, a decade ago, I saw the handwriting on the wall for traditional forms of history-theory. Those jobs were and are disappearing, the course hours usurped by the demands of new software, as Stanley Tigerman predicted back in 1992. Instead, as I set out to understand the impact of telecommunications on urbanism, I found that thinkers in architecture were not so much marginal to the discussion as central, if absent. Spending a year at the University of Southern California’s Annenberg Center for Communication led me deeper into technology; not only was Networked Publics the result, but I was also able to lay the groundwork for the sort of research that I am doing at Columbia with my Network Architecture Lab.

The changes in technology were huge. The relatively slow pace of technological developments from the 1950s to the 1980s was left long behind. If television acquired color in the 1960s and cable and the ability to play videotapes in the late 1980s, it was still fundamentally the same thing: a big box with a CRT mounted in it. That’s gone forever now, with analog television a mere memory. Computers ceased being big objects connected via slow telephone links (just sixteen years ago, in 1993, 28.8 kbps modems were the standard) and became light and portable, capable of wireless communications fast enough to make downloading high-definition video an everyday occurrence for many. Film photography all but went extinct during the decade as digital imaging technology changed the way we imaged the world. Images proliferated. There are 4 billion digital images on Flickr alone. The culture industry, which had triumphed so thoroughly in the postmodern era, experienced the tribulations that Detroit felt decades before, as the music, film, and periodical industries were all thrown into crisis by the new culture of free media trade. Through the iPod, the first consumer electronics device released after 9/11, it became possible for us to take with us more music than we would be able to listen to in a year. Media proliferated wildly and illicitly.

For the first time, most people in the world had some form of telecommunication available to them. The cell phone went from a tool of the rich in 1990 to the tool of the middle class in 2000. By 2010, more than 50% of the world’s population owned a cell phone, arguably a more important statistic than the fact that during this decade, for the first time, more people lived in cities than in the country. The cell phone was the first global technological tool. Its impact is only beginning to be felt. In the developed world, not only did most people own cell phones, but cell phones themselves became miniature computers, delivering locative media applications such as turn-by-turn navigation and geotagged photos (taken with their built-in cameras), together with e-mail, web browsing, and so on. Non-places became a thing of the past as it was impossible to conceive of being isolated anymore. Architects largely didn’t have much of a response to this, and parametric design ruled the studios, a game of process that, I suppose, took minds off of what was really happening.

Connections proliferated as well, with social media making it possible for many of us to number our "friends" in the hundreds. Alienation was left behind, at least in its classical terms, as was subjectivity. Hardly individuals anymore, we are today, as Deleuze suggested, dividuals. Consumer culture left behind the old world of mass media for networked publics (and with it, politics left behind the mass, the people, and any lingering notion of the public), and the long tail reshaped consumer culture into a world of niches populated by dividuals. If there was some talk about the idea of the multitude or the commons among followers of Hardt and Negri (but also more broadly in terms of the bottom-up and the open source movement), there was also a great danger in misunderstanding the role that networks play in consolidating power at the top, a role that those of us in architecture saw first-hand with starchitecture’s effects on the discipline. If open source software and competition from the likes of Apple hobbled Microsoft, the rise of Google, iTunes, and Amazon marked a new era of giants, an era that Nicholas Carr covered in The Big Switch (required reading).

The proliferation of our ability to observe everything and note it also made this an era in which the utterly unimportant was relentlessly noted (I said "relentlessly" constantly during this decade, simply because it was a decade of relentlessness). Nothing, it seemed, was the most important thing of all.

In Discipline and Punish, Foucault wrote, "visibility is a trap." In the old regime of discipline, panopticism made it possible to catch and hold the subject. Visibility was a trap in this decade too, as architects and designers focused on appearances even as the real story was in the financialization of the field that undid it so thoroughly in 2008 (this was always the lesson of Bilbao… it was finance, not form, that mattered). Realizing this at the start of the decade, Robert Sumrell and I set out to create a consulting firm along the lines of AMO. Within a month or two, we realized that this was a ludicrous idea and AUDC became the animal that it is today, an inheritor of the conceptual traditions of Archizoom, Robert Smithson, and the Center for Land Use Interpretation. Eight years later, we published Blue Monday, a critique of network culture. I don’t see any reason why it won’t be as valuable in a decade as it is now, if not more so.

I’ve only skimmed the surface of this decade in what is already one of the lengthiest blog posts ever, but over the course of the next year or two I hope to come to an understanding of the era we were just in (and continue to be part of) through the network culture book. Stay tuned.


2009 in Review

It’s time for this blog to look backwards and forwards, first to the last year, then to the past decade, and finally to the decade ahead. 

The single biggest story of 2009 was the continued collapse of the economy. For architects—and a sizable proportion of my readers are architects—this was as bad a year as any.

In the United States, more jobs were lost in the profession than in any other. Nearly 18% of architects received pink slips over the year, according to MSNBC. Overseas, in places like my other "home" countries of Lithuania and Ireland, economies and architects fared even worse. I predicted this situation long ago and found it alarming to watch so many architects drink the Kool-Aid of unfettered growth so readily.

The new economy was not forever and, at the end of it all, many were much worse off than when it began. I’ll have more to say about this tomorrow, when I look back at the decade, but the situation is not going to change much in 2010 or anytime soon. If it does, then be very worried. The correction is painful, but measures being taken now to lessen it are likely to cause more pain in the future. First the Bush, then the Obama administrations pumped huge amounts of money into the economy in an effort to stimulate it; the real estate industry, for example, didn’t crash only because of the tax credit for first-time homebuyers.

Temporarily, this has prevented an outright collapse, but the massive amounts of debt incurred to prop up the finance and real estate sectors will have to be repaid. At best, this will force the US to curtail its foreign military adventures (already, the Right is turning away from nation-building, toward isolationism) and will put a brake on further expansionist bubbles by imposing a permanent tax burden. As for the worst, well, think of the long collapse of the British, Dutch, or Spanish Empires, with the country in permanent economic stagnation.

A corollary to the economy was the new discussion of infrastructure. The Infrastructural City came out at the tail end of 2009 and received a great deal of attention. The hardcover printing went out of print rapidly and the paperback is one of ACTAR’s biggest sellers for 2009. I’m not at all surprised: attention to collapsing infrastructure in this country has been necessary since the 1980s. Much of the attention revolved around Obama’s call for a WPA 2.0 last December, but by the time the stimulus bill was drafted, infrastructure had left the agenda. It was sad to watch Obama surround himself with the usual suspects and defend the very industry (Wall Street) that caused all the trouble in the first place. It is clear now that Obama’s rise to power was not the story of a come-from-behind victory by an underdog with grass-roots support, but rather the carefully staged simulation of that story. Architects and critics pinned their hopes on infrastructure, but were slow to understand that this too was a simulation, even though I warned them. Requiring large investment in physical objects instead of in financial instruments, and a lengthy wait before results are seen, infrastructure is hard to sell to a political machine beholden to speculation and rapid gratification for immediate election gains.

Any battle for infrastructure funds will be a slow march through the AIA, the universities, policy think-tanks, and the parties. Still, it’s better than the residential and real estate markets, with the phenomenal amount of overbuilding that took place there. The big question will be how architects can claim to design infrastructure, something that engineers generally take on.

On a related note—and since I am a space fanboy—the Obama administration also handily bungled its chance at NASA. If it initially seemed like the administration would take bold action resulting in the rapid retirement of the poorly-conceived Ares launch vehicle and the adoption of Direct-X plus or a commercial manned launch system, thus far we’ve heard nothing. Instead, the program lumbers on, even as 2010 promises the end of shuttle flights. It seems that, like much over-leveraged real estate, the space station is due to be underpopulated and to fall rapidly into decay, never used for its original intended purpose. The end of regular manned space flight in this country is only a year away. With the moon and Mars essentially out of the question and the space station likely cannibalized for a Russian station by the end of the next decade, any future US launch vehicle seems to be purposeless. A silver lining is that maybe a decade from now, once manned spaceflight is shut down, we can concentrate on the robotic science missions that have delivered so much to us in the last few years. Still, don’t be surprised if a decade from now this seems like an over-optimistic prediction.

I wound up on a tour of universities this fall, presenting Netlab research on infrastructure at many of them. It’s been gratifying to know that the project is of continued interest. 

Networked Publics may have received less attention, but it was no less important. The debates that we outlined in that book—originally drafted in 2006!—have continued to be of critical importance. It was with great sadness that I learned—just last week—of the death of Anne Friedberg, my co-author of the place chapter, but the work that we did has continued to be of relevance as we continue to move deeper into a world of networked place. In culture, the collapse of media that began with the decline of the music industry (a key part of the chapter on culture) has now extended to the massive implosion of the news and media industries. The list of magazines and newspapers that shut their doors in 2009 is lengthy and will only grow in 2010. The problems that we saw facing politics, namely our inability to find a way to make online deliberation as effective as online mobilization, have only deepened. The liberals and conservatives in this country are more polarized than ever, while the Obama campaign’s use of social media has not been matched by any significant efforts toward using social media to decide policy. With the heady growth of data consumption by iPhone users, the question of network neutrality now affects not only wired lines but also mobile data.

This coming spring, we will be discussing the topics from Networked Publics in public at Studio-X Soho. Watch this space for more about those conversations. 

Some other news worth reflecting on is the failure of augmented reality on the iPhone to have the same broad success as locative media applications. Although it has an initial gee-whiz factor, holding the iPhone up to augment the world is pretty goofy. Unless everyone really does start wearing iGlasses, it’s unlikely to take hold. On the other hand, the biggest story of the year in tech was the rumored Apple tablet. Where the year started with the suggestion that universities and philanthropic organizations would need to keep media alive, the media are now counting on Apple to save them. Time will tell.

Last year, I predicted that networked urbanism would be the rage in 2009. Indeed it was, but as it developed, I began to sound a note of alarm. Two things bothered me. First, much of the talk about networked urbanism seemed to be too earnest about its appeal to a tech-savvy class of digirati. A century ago, Woodrow Wilson, then still President of Princeton, warned against the danger of the automobile: "nothing has spread socialistic feeling in this country more than the use of automobiles." Wilson worried that a rift would emerge between the car-owning rich and the poor, less mobile masses. Henry Ford listened and built the Model T. Networked urbanism is blind to this reality at its own peril. Moreover, we still have no way to capitalize the changes in media. This is a non-trivial matter. The networked future is hardly replacing the jobs being lost.   

The year started off with a site redesign at varnelis.net, and my addition of tumblelogs to the site. The result has been more updates—over two posts a week to the main varnelis.net blog plus more to the tumblelog. Even if these aren’t as regular as I’d like them, it’s a step forward as climbing readership has shown. And of course there is Twitter, where I’ve made hundreds of posts so far this year. 

The majority of my year was consumed by research and writing for the Netlab. The network culture book is well underway and I posted an early version of the introduction together with material on art at Networked, a networked book on networked art. This summer, we made progress toward the network city project at the Netlab. We’ll have more results from that work throughout the spring of 2010. Watch this space. AUDC published articles in New Geographies 2 and Design Ecologies while I published articles in my role as Netlab director in venues from the Architects’ Newspaper to Volume (here and here) to the Architectural Review to the ICA catalog Dispersion. It was a full year and I hardly expect the next year to be any less full.

 


On Death

I’m usually late in sending out holiday greetings and this year is no exception. We had planned to make a physical version of our annual family photo but didn’t manage to do it in time for the holidays, so we wound up sending out virtual versions. At least there was snow. I sent out the photo to perhaps 150 friends and colleagues and received the usual 20 bounces. One bittersweet surprise was finding out that my friend Daniel Beunza has moved to the London School of Economics. I’m sure it’ll be a great place for him—and he’s closer to his home country of Spain—but I’ll miss discussions about finance with this remarkable colleague. Much sadder was receiving an automated e-mail from Anne Friedberg, another friend, with whom I co-wrote the Place chapter of Networked Publics, saying that she was on indefinite medical leave. I had received this same message a while back and was concerned, but I didn’t get in touch. This time, I looked her up in Google News—just in case—and was saddened to learn that she died this October.

I remember Anne and me talking about how I had discovered, via his Web page, that Derek Gross, a college friend, had died in 1996. This was before the age of blogs, but Derek updated his Web page regularly, and when I visited it to see when his band was next playing, I found that he had died, together with a record of his experience. Certainly it’s something I never wished to see again, but just as surely, discovering Anne’s death via the net is not going to be the last time.

Anne was a brilliant scholar, as evidenced by her books Window Shopping and The Virtual Window, as well as a great friend. She was crucial not only for my chapter but also for the Networked Publics group and our book, articulating issues that were fundamental to the project, asking questions, and giving me sage advice throughout. I could not have written the chapter of the book without her. Together we sat in our offices, she in her Lautner house, I in the AUDC studio on Wilshire Boulevard, and wrote the chapter simultaneously on Writely (now Google Docs). In so doing, we experienced the phenomenon of our voices becoming co-mingled, producing a third entity that was neither Anne nor myself. I am heartbroken that there will never be a sequel.


The Spectacle of the Innocent Eye

So many of the recent events and discussions in architecture remind me of material I covered in my dissertation. Some of the writing is juvenilia; some of it is prophetic. Either way, it has ensured that I’ve been persona non grata around Cornell ever since.

Enough people ask me about it that I should upload it and see what the response is. Since the original files are now fifteen years old, forgive me for the inevitable formatting problems and the lack of illustrations (a list is appended to give you an idea of what you missed).

I produced the attached text a few months after the dissertation itself, incorporating further revisions.

The abstract reads as follows.

 

The Spectacle of the Innocent Eye:
Vision, Cynical Reason, and
The Discipline of Architecture in Postwar America
1994

 

 

In this dissertation, I trace the growth of cynical reason and the spectacle in postwar American architecture by examining the emergence of a new attitude toward form and the rise of the group of architectural celebrities that represented it.

From the 1950s onward, a number of architectural educators–most notably Colin Rowe and John Hejduk–derived a theory of architectural design from the visual language developed by graphic art educators Laszlo Moholy-Nagy and Gyorgy Kepes. The architectural educators’ intent was to solidify architecture’s claim to artistic autonomy through a focus on the rigorous use of form. In doing so, they hoped to resist the threat to architecture as a discipline, then having its domain of inquiry attacked by the encroaching social sciences and engineering.

Like Moholy-Nagy and Kepes, the architectural educators aimed to create an innocent eye in the student, restricting vision to instantaneous, prelinguistic perception of two-dimensional formal relationships. The student would become a retinalized subject under the influence of outside forces rather than an agent capable of independent action and hence ethically responsible in their life and architecture. In addition, the new theory of architecture was unable to divest itself of its origin in graphic art and produced a formally complex but atectonic, cardboard (-like) architecture.

Against this background, I investigate the rise of the movement’s representatives–Peter Eisenman, Michael Graves, Richard Meier, and Robert Stern–and their relationship to their patron, Philip Johnson. Together, they promoted each other and cardboard architecture, as well as a history and architecture reduced to image.

But history has a material reality: in the 1930s, Johnson participated in the American fascist movement and left as evidence a body of fascistic and antisemitic texts he wrote for publications in the movement. Since then he and his promoters, among them Stern and Eisenman, have carefully repressed his past by making it into a public secret. Ultimately, the kids do not have innocent eyes: along with Johnson they have promoted a spectacular architectural discourse of cynicism.

 


Complexity and Contradiction in Infrastructure

I gave the following talk on Banham’s Los Angeles, non-plan, and infrastructure in the Ph.D. lecture series at the Columbia Graduate School of Architecture, Planning, and Preservation in November, 2009. 

Most of us are prone to hero worship. This talk sets out to address a major problem in the work of one of the contemporary heroes of architectural and planning historiography, Reyner Banham, and his advocacy of a (mythical) laissez-faire form of planning based on his reading of Los Angeles.

Below the talk I have embedded a video. Although today it’s considered bad practice to read lectures, I’ve started doing it again even if my delivery seems more stale. When you give ten lectures a term outside of school and when nearly every venue insists upon something custom, the practice of keynoting ex tempore from notes becomes a bit of a drag. Eventually you realize that with a little more work—and granted, a little worse delivery—your project could convey more, have more theoretical meaning, and be generative toward other projects. At the level of production that I’ve been trying to stay at lately, the only way to produce content is to follow the advice that Slavoj Zizek gave in the movie about him: everything either needs to be a spin-off or work toward the next major project.

This talk, then, is a spin-off of my work toward the Infrastructural City, but it also sets out to tackle Banham critically (something that I’ve also done here), something I intend to take up soon.

Complexity and Contradiction in Infrastructure

The title of my talk refers to Robert Venturi’s 1966 Complexity and Contradiction in Architecture, generally accepted as an inaugural text in postmodern architecture. For Venturi, the modernists failed because they strove for purity of form. Venturi wrote:

“today the wants of program, structure, mechanical equipment, and expression, even in single buildings in simple contexts, are diverse and conflicting in ways previously unimaginable. The increasing dimension and scale of architecture in urban and regional planning add to the difficulties. I welcome the problems and exploit the uncertainties. By embracing contradiction as well as complexity, I aim for vitality as well as validity.”

In other words, Venturi suggested that rather than trying to sweep messes under the rug, architects should embrace complexity and contradiction by introducing deliberate errors into their works.

Venturi concluded that an appropriate architectural response “must embody the difficult unity of inclusion rather than the easy unity of exclusion. More is not less.”

Note, however, that Venturi’s argument is historically specific:
“today the wants of program, structure, mechanical equipment, and expression, even in single buildings in simple contexts, are diverse and conflicting in ways previously unimaginable. The increasing dimension and scale of architecture in urban and regional planning add to the difficulties.”

This text, then, comes about at a transition point between late modernity and postmodernity and its virtue is that Venturi not only diagnosed a condition, he also suggested an architectural approach. Both of these suggested a schism from the modern, a move into a new condition. Today I want to talk about another phase in the era of complexity, which is why I cite Venturi at the outset. 

Venturi’s next book, the 1972 Learning From Las Vegas, written with Denise Scott Brown and Steven Izenour, would tackle issues of signage, semiotics, automobility, and the commercialization of the American city, flipping the valence on landscapes that had been roundly derided as degraded by architecture critics. But for the purposes of this talk, it’s worth noting that the authors’ original interest was in Los Angeles, and the Yale studio that resulted in Learning from Las Vegas visited that city first. It may be that the smaller and more picturesque of the two cities (unlike Vegas, Los Angeles has no central strip) proved more easily explainable.

For architects and historians of architecture (I file myself in the latter category), Reyner Banham’s 1971 Los Angeles: The Architecture of Four Ecologies took on the city’s urban conditions with a more total approach. Banham set out to dissect the city as a total landscape—both geographically and historically, both physically and psychically—as well as in terms of its infrastructural, social, and architectural systems. In this, Banham’s work has been pathbreaking, and The Infrastructural City uses his book as inspiration and as a point of departure, something that my subtitle Networked Ecologies in Los Angeles alludes to.

But Banham’s foremost innovation was to flip the valence on the historical evaluation of Los Angeles, praising precisely those qualities that others listed as irredeemable failings: its posturban sprawl; its lack of an overall plan; its chaotic, untamed signscape; its comical roadside architecture; its ubiquitous boulevards, parking lots, and freeways. 

Although we could ascribe this to a characteristic British fascination with the degraded, Banham also had a theoretical impetus. By the mid-1960s, he had become fascinated with the possibilities of what he called “non-plan,” a laissez-faire attitude toward urban planning, part of a larger project that he undertook along with Paul Barker, editor of the magazine New Society. In 1967, Barker ran excerpts from Herbert Gans’s The Levittowners “as a corrective to the usual we-know-best snobberies about suburbia.” At roughly the same time, Barker and Peter Hall set out with a “maverick thought… could things be any worse if there was no planning at all?” The result, strongly influenced by Banham’s writings in the magazine, was a special issue published in 1969 and titled “Non-Plan: An Experiment in Freedom.” Barker recalls, “We wanted to startle people by offending against the deepest taboos. This would drive our point home.” To this end Hall, Banham, and architect Cedric Price each took a section of the revered British countryside and blanketed it with a low-density sprawl driven by automobility. According to Barker the reaction was a “mixture of deep outrage and stunned silence.”

For Banham, Los Angeles stood as the greatest manifestation of Non-Plan to date. “Conventional standards of planning do not work in Los Angeles,” he wrote, “it feels more natural (I put it no stronger than that) to leave the effective planning of the area to the mechanisms that have already given the city its present character: the infrastructure to giant agencies like the Division of Highways and the Metropolitan Water District and their like; the intermediate levels of management to the subdivision and zoning ordinances; the detail decisions to local and private initiatives; with ad hoc interventions by city, State, and pressure-groups formed to agitate over matters of clear and present need.”

Now there’s some question as to how well Banham’s Los Angeles worked in the first place: it was in its worst period of air pollution in history, the freeways were wreaking devastation upon the city, and the Watts Riots had just shaken any lingering mirage of Los Angeles as either a progressive metropolis or a paradise for the white middle class. Still, in his evaluation, Banham felt that the city—in his mind epitomized not by the Watts Riots but by the individualistic exuberance of Watts Towers—worked because it had no central plan. Rather, planning was left to the competing forces in the city, public and private.

If Banham set out against modernist urban planning, non-plan gave a theoretical basis for neoliberal planning. Reducing the modernist ethical imperative to a fascination with the bottom-up and an embrace of “messy vitality” (this is Venturi’s term, not Banham’s), non-plan turned planning from a question of morality and rationality into a question of desire, both individual and institutional. The result parallels Manfredo Tafuri’s observation in Architecture and Utopia that the avant-garde’s singular accomplishment is not so much a physical change to the metropolis but rather an adjustment in how it is viewed. We can see this quite literally in Banham’s own role in his book: what remains at the end of the modern project is the experience of the city and the observer’s voyeuristic pleasure in the psychogeographic experience of drifting along its boulevards and freeways.

But things have changed. For one, the 1970s were an era of limits for the city, the state, and the country, with the first large-scale economic recession since the war, the OPEC energy crisis, Vietnam, and finally stagflation. If the late 1960s were a period of great social unrest, by the mid-1970s such unrest had largely been reshaped into concerns with individual rights and self-realization, above all the right to property and to dispose of one’s wealth as one wants. Thus, the system of non-plan that Banham lauded would be institutionalized in California in 1978 with the passing of State Proposition 13, reducing property taxes by 57% and mandating that future tax increases require a two-thirds majority in the state legislature. Two years later, former California governor Ronald Reagan would become President and set out on a draconian program of reducing non-military governmental spending at a national level.

By the time that Reagan took office, with a decade of cutbacks caused by the combination of economic crises and funds being siphoned off for defense, with dwindling urban tax rolls caused by outmigration since the 1930s, and with the more natural phenomenon of age, infrastructure was coming undone nationwide.

Thus in 1981, precisely at the moment of the nation’s Californization (or, and I hesitate to suggest it, Californication?), economists Pat Choate and Susan Walter published a pamphlet for the Council of State Planning Agencies titled America in Ruins: Beyond the Public Works Pork Barrel. The pamphlet soon attracted a large amount of press attention, including a Newsweek cover story of August 2, 1982, entitled “The Decaying of America,” and a US News and World Report story, “To Rebuild America: $2.5 Trillion Job,” of September 27, 1982. Literature searches suggest that it is at this moment that infrastructure begins to gain popularity as a term. Infrastructure enters into the national consciousness during crisis.

But a Californicated America would have no room for public infrastructural spending. Instead, the exemplary infrastructures of the 1980s and 1990s—telecoms after deregulation, mobile phones, the Internet—are privatized. Here, Richard Barbrook and Andy Cameron describe the legitimizing narrative for such ventures as the Californian Ideology, a union of hippie self-realization, neoliberal economics, and above all, privatization, advocated by Silicon Valley pundits like Stewart Brand, editor of the Whole Earth Catalog and an inspiration for Wired magazine. As Barbrook and Cameron suggest, the growth of Silicon Valley and, indeed, of California as a whole was made possible only by the exploitation of the immigrant poor and by defense funding. Los Angeles, after all, became the country’s foremost industrial city in the postwar period largely due to defense contracts at aerospace firms. So, government subsidies for corporations and exploitation of the non-citizen poor: a model for future administrations.
But there’s more to infrastructural crisis than neoliberal economic policy. Once again Banham and Los Angeles provide a reference point. Banham describes the ecologies of Los Angeles as dominated by an individualism that allows architecture to flourish. But such a model of the city is insufficient. In The Reluctant Metropolis: The Politics of Urban Growth in Los Angeles, William Fulton describes Los Angeles as an exemplar of what Harvey Molotch calls “the city as growth machine.” In this model, certain industries—primarily the finance and real estate industries—dominate urban politics with the intention of expanding their businesses. Newspapers too endorse the growth machine as a way of expanding their subscription base and selling real estate ads. Moreover, arts organizations such as the symphony, opera, and art museums are also beholden to the model of the city as growth machine. These interests promote a naturalized view of growth in which we are simply not to question that cities will always get bigger or that they should always get bigger.

By the 1960s, however, homeowner discontent about encroaching sprawl led individuals to band together to form homeowner groups. The first of these was the Federation of Hillside and Canyon Associations, which protested the construction of a four-lane highway in place of scenic Mulholland Drive. Soon, homeowners teamed with environmental organizations such as the Sierra Club to create a regional park in the Santa Monica Mountains to prevent further development in their back yards. By the time that Proposition 13 passed, Angelenos were set against the growth machine and with it, too, the big infrastructure necessary to drive it or even the projects necessary to repair it.   

The result, then, is a long, steady process of infrastructural decay, privatized infrastructure acting as a layer or retrofit onto a decaying public infrastructure.
    
It’s in this context, then, that we must situate both Venturi and Banham as transitional approaches to the material, reducing questions of complexity to matters of form, which of course is not uncommon in architecture. In Venturi’s case, complexity is produced through form; in Banham’s case, formal complexity is produced by the laissez-faire city.

Now I’d like to turn to some contradictions that emerge out of this condition. First, we can sense a threat to the vaunted individual rights of neoliberalism from failing infrastructure. Some of these threats are quite obvious: the inconvenience of traffic and long commutes, but also the potholes that (in Los Angeles) cause an average of $746 of damage annually per automobile, collapsing bridges, and energy crises caused by privatization, such as electricity grids failing and refineries going offline indefinitely (here the city of Los Angeles, which has not privatized its power, wound up ahead of the rest of the state during the crisis that brought down Gray Davis during Enron’s salad days).
  
Neoliberalism thus exacerbates what sociologist Ulrich Beck calls “risk society.” Banham’s autopia isn’t a risk-free world, but rather a condition in which risk and threat are everyday factors, creating a contradiction within capitalism. Beck:

“… everything which threatens life on this Earth also threatens the property and commercial interests of those who live from the commodification of life and its requisites. In this way a genuine and systematically intensifying contradiction arises between the profit and property interests that advance the industrialization process and its frequently threatening consequences, which endanger and expropriate possessions and profits (not to mention the possession and profit of life).” (Beck 1992: 39)

* * *
Now if environmentalism was, in part, a movement created by homeowners’ desires to protect their rights, we would expect that infrastructural collapse (or, for that matter, the state of California schools) would also be of concern to homeowners and corporations, but in California, Proposition 13 and a politics of stalemate make it impossible to act. Even as voters seek mandates to restore services, the state is hamstrung by the legislature’s terror of touching Proposition 13, which is known as the “third rail” of state politics. Last month the Guardian asked “Will California become America’s first failed State?”

I want to stress that in other respects conditions have intensified, moving postmodernism to another phase. Take risk. Environmentalism has been thoroughly capitalized as the green movement, with the Californian ideology now promising to save us from global warming through technological means. Crisis becomes profitable.

On to my last two points. Profit, as Robert Brenner tells us in The Economics of Global Turbulence, has become a problem, in part because of some of the problems that face infrastructure. Massive investment in fixed capital makes it impossible to abandon when more efficient structures elsewhere threaten it. The most familiar aspect of this, of course, is the rise of Chinese industry and the evacuation of American production. But infrastructure is of equal concern. Infrastructure, like other technologies, follows a classic S-curve, in which initially steep returns per dollar invested are followed by diminishing returns as the curve flattens.
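
To make that S-curve claim concrete, here is a minimal sketch of my own (the logistic form and all the numbers are illustrative assumptions, not figures from Brenner): once cumulative spending passes the curve’s inflection point, each additional dollar buys less.

import math

def cumulative_benefit(spend, capacity=100.0, steepness=0.1, midpoint=40.0):
    # An assumed logistic S-curve: slow start, steep middle, flattening top.
    return capacity / (1.0 + math.exp(-steepness * (spend - midpoint)))

previous = cumulative_benefit(0)
for spend in range(10, 101, 10):
    total = cumulative_benefit(spend)
    marginal = (total - previous) / 10.0  # extra benefit per extra dollar in this increment
    print(f"spend={spend:3d}  total benefit={total:6.1f}  marginal return={marginal:5.2f}")
    previous = total

Past the inflection point, the marginal column shrinks with every increment: the diminishing returns that make aging infrastructure so hard to justify to capital chasing higher yields elsewhere.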

The results for the country have been devastating. California, together with Soho and Boston, appeared to enjoy massive growth in high technology, particularly telecommunications and digital technology, during the last three decades. But much of this growth happened not in terms of production but rather in finance, both in the lucrative financial instruments that accompanied public offerings and in terms of the technology that made ever more complex financial operations possible.
Traditional profits, in this context, were considered devalued in comparison with the profits obtainable through finance. Jeffrey Nealon, in Foucault Beyond Foucault, suggests that in this sort of operation, the classic formula that Marx observed in Capital, M-C-M’, is rewritten as M-M’; in other words, capital leads to capital growth without any intervening commodity.

The result, then, is a bit of what we saw this spring when, after President-Elect Obama made a YouTube speech calling for a WPA 2.0 as an economic stimulus, he turned away from infrastructure in the actual stimulus bill. Blame has been laid on Obama’s chief economic advisor Larry Summers.

But how the Democrats (or in California, Schwarzenegger) are going to get out of this mess is entirely unclear. Economic indicators suggest that the country will endure a long-term period of stagnation, different from, but reminiscent of, the 1970s and 1980s. This month, the New York Times reported that unemployment and underemployment now stand at 17.5%, the highest level since the Great Depression. Official unemployment in California now stands at 12%. These are staggering numbers. The state is making cutbacks while raising tuition at the University of California system, leading to mass student protests and the regents macing students. California leads the nation again, it seems.

If the restructuring of the 1980s destroyed manufacturing, this decade’s recession has mowed down the creative class and the financial sectors. In the latest New Left Review, Gopal Balakrishnan suggests that we have entered into a stationary state, a long period of systemic stagnation. As he points out, Adam Smith never expected the wealth of nations to improve perpetually but rather expected it would come to an end in the nineteenth century as resources were exhausted. Capital’s perpetual growth would have been a mystery to him.
 
To conclude then, I want to return to where I started, the theme of complexity. I’ve been thinking about these issues a lot lately, re-reading archaeologist Joseph Tainter’s The Collapse of Complex Societies. Tainter’s thesis differs from Jared Diamond’s (and also precedes it by a decade). Instead of turning to the external forces of ecological catastrophe (as Diamond does) or to foreign invasion (as other commentators do), Tainter sees complexity as the downfall of societies.

As societies mature, Tainter observes, they become more complex, especially in terms of communication. A highly advanced society is highly differentiated and highly linked. That means that just to manage my affairs, I have to wrangle a trillion bureaucratic agents such as university finance personnel, bank managers, insurance auditors, credit card representatives, accountants, real estate agents, Apple store “geniuses,” airline agents, delivery services, outsourced script-reading hardware support personnel, and lawyers in combination with non-human actors like my iPhone, Mac OS 10.6, my car, the train, and so on.

This is the contemporary system at work, and it’s characteristic of the bureaucratized nature of complex societies. On the one hand, in a charitable reading, we produce such bureaucratic entities in hopes of making the world a better place, keeping each other honest and making things work smoothly. But in reality, not only is this bureaucratic dysfunction necessary for the operation of the service economy, these kinds of entities also rub up against each other, exhibiting cascading failure effects that produce untenable conditions.

In Tainter’s reading, complex societies require greater and greater amounts of energy until, at a certain point, the advantages of the structures they create are outweighed by diminishing marginal returns on energy invested. The result is not just catastrophe but collapse, which Tainter defines as a greatly diminished level of complexity.

Just as rigidity was the failure point for Fordism, complexity is the failure point for post-Fordism. In this light, the culture of congestion valorized by Koolhaas is undone by the energy costs of that complexity.

Now I agree with Tainter when he concludes that the only hope to forestall the collapse of a complex society is technological advance. I’d argue that this is what’s driving the field of networked urbanism at the moment. But, I’m not so sure we can do it. This is where my optimism rubs up against my nagging feeling that urban informatics, locative media, smart grids, and all the things that the cool kids at LIFT and SXSW are dreaming up are too little, too late.

Technology itself is already all but unmanageable in everyday life and adding greater layers of complexity can’t be the solution. It’s in this sense that the Infrastructural City was more Mike Davis than Reyner Banham, something few have caught on to yet.

We should have taken our lumps when the dot.com boom collapsed and retrenched for five or six years. Instead we added that much more complexity—take the debt and what is required to maintain it or the impossible war or the climate—and now our options are greatly limited.

So we need to develop a new set of tools to deal with the failures of the neoliberal city and the impossible conditions of complexity today. This is hardly an overnight task, if it can be done at all.

Now Tainter holds one other card, suggesting that most of the people who experience collapse don’t mind it too much. Many of them seem happy enough to just walk away from the failing world around them, much like owners of foreclosed homes do today. Eventually a new civilization springs up and with it, perhaps we can imagine a better future.  

I want to conclude by talking about whether I’m a pessimist or an optimist, since I’m apparently being accused of being a pessimist at all my talks recently (parenthetically, I’ll add, I suppose that’s better than being accused of being an optimist). Back to Los Angeles: anyone visiting Hollywood Boulevard is accosted by attractive young men and women asking if one is an optimist or a pessimist. The next step is being lured into the Scientology Center to take a test. Maybe we’re better off not taking that test, but rather looking at reality, not a future scripted by a science fiction writer.

Second, I’m afraid that academe is a bit infected by Prozac culture these days. Hope would be fine if we had a President who seemed to have an ability to deal with the issues or if the alternative to this one wasn’t so deeply frightening.

End  

Complexity and Contradiction of Infrastructure from Kazys Varnelis on Vimeo.

Alternate Scenarios Wanted

British author Charles Leadbeater critiques the “Digital Britain” plan for making broadband ubiquitous, much like the Obama Administration’s own plan. Leadbeater points out that both are flawed because they focus on infrastructure in a narrow way, failing to address the deep transformations that the Internet is making in network culture and the economy. Read his response here.

This section is particularly important:

Accelerating the spread of broadband will not save these industries but make their predicaments more difficult. Here’s the truth: plans to invest more in digital technologies will only pay off if they bring further disruption to economies that are already in turmoil. We will know when politicians are really serious about the coming digital revolution when they start to admit that it will have to cause significant disruption to established business models if it is to pay off.

This is particularly tricky in the UK. The implosion of financial services, long the flagship of the services economy, means the cultural and media industries, in which Britain has a strong position, will take on an even more important role.

Leadbeater has this right, and what he says can also be applied to the two countries that I work in, the United States and Ireland, but the problem for capital will come in monetizing what he calls “mutual media,” the rising ecology of bottom-up media production.

The problem with this model, also proposed by other authors such as Yochai Benkler and Clay Shirky, is that it does not give an adequate explanation of how to monetize such media or how to distribute wealth in a remotely equitable manner (let’s forget socialism for the moment; I’m talking about market monopolies, in particular the inherent power-law nature of networks and how we can have anything beyond Google). Let’s be clear about this: mutual media are incredibly successful not just because we can produce anything we want and upload it; they are successful because they have us producing content for free for corporations.
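
As a toy illustration of that power-law point (my own sketch, not a model from Benkler or Shirky), even the simplest rich-get-richer process concentrates links, and with them attention and advertising revenue, in a handful of nodes:

import random

random.seed(1)
endpoints = [0, 1]      # every link adds both of its endpoints to this list
degree = {0: 1, 1: 1}

for new_site in range(2, 10000):
    # Preferential attachment: a new site links to an existing one chosen
    # with probability proportional to how many links it already has.
    target = random.choice(endpoints)
    endpoints.extend([new_site, target])
    degree[new_site] = 1
    degree[target] += 1

top = sorted(degree.values(), reverse=True)
top10_share = sum(top[:10]) / sum(top)
print(f"top 10 of {len(degree)} sites hold {top10_share:.1%} of all link endpoints")

Run it a few times and the identities of the winners change, but the skew does not; that, rather than any failure of creativity at the bottom, is why it is hard to see anything beyond Google emerging from the network alone.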

Make no mistake about it, the day that it dawns on the administration at the New York Times that there are bloggers out there who would work for free, for the fashionable cachet of a byline on a Times column, and that these bloggers are better than many of the Times’s own writers is about two weeks before the entire staff of the Arts & Leisure section finds itself looking for work at Starbucks.

The economy is undergoing an unprecedented transition. The owl of Minerva spreads her wings at dusk. Theory once again dreamed its successor era: if in the years between 1988 and 1994 theory seemed to be everything only to vanish, in the years since, culture has seemed to be everything, but on a much vaster scale, forming what appeared to be a new backbone for the economy (even if, as I’ve pointed out, it was finance all along). That’s vanishing now, and with its disappearance, economic crisis is at our doorstep. There is no way out of this on the horizon. The wealth of networks is not in their ability to promote sharing or interaction, but in their ability to strip away jobs and destroy industries without proposing sustainable new ones.

For anyone who thinks I’m being pessimistic, I do hope you’re right and I’m wrong. Really, I do.

Alternate scenarios wanted. My only caveat is that we don’t cook the books or take on more Ponzi schemes like the real estate bubble.

On Mad Men

Fellow resident of my adopted hometown of Montclair, NJ, and New York Times journalist David Carr published a new piece yesterday entitled “The Fall and Rise of Media,” in which he explores the rapid decline of the (traditional) media industry and makes a case for optimism about new media. It’s a good read; take a look.

Carr puts on a brave face as he reminds us that all reigns are temporary. The media jobs being swept away are positions that were obsolete years ago, he suggests, all but invoking Joseph Schumpeter’s “creative destruction” as an upside to the devastation that media outlets face today. As historian Jackson Lears reminds us in his latest book, Rebirth of a Nation, Americans have a longstanding fascination with the idea of rebirth, and our own era is hardly immune to it.

This struck a chord for me this morning, as I had just finished watching the third season of Mad Men last night* and wondered about the show’s future. (Spoiler alert!) With the end of the old firm that the Mad Men worked for, would the new firm they would build be nimble and intelligent, able to embrace the changing terrain of the 1960s, a diabolical player in an alternate-universe version of Thomas Frank’s The Conquest of Cool? Or is it destined to be wiped out by the juggernaut of sociocultural change that comprises the mid and late 1960s, the way Philip Johnson was, at least for a decade? In the atemporal world of network culture, we often forget how commonly we still look backward to find reference points for transformations in the contemporary world. Here I’d locate the popularity of Mad Men today. It offers us a glimpse of a moment of massive societal transformation, as a relatively comfortable world came unglued. Perhaps four decades from now we’ll see a remake of Mad Men set at the New York Times, or at a dot.com corporation. Certainly, it would lack well-designed furniture and well-cut suits, but so it goes.

In his article, Carr points to a new generation of under-30 journalists armed with netbooks, wireless connections, and visions of reshaping their world. Let’s hope so. The dinosaurs were dinosaurs not only because of their attitude and their budgets, but also because of the poverty, or worse yet, the outright fiction, of their reportage (no disrespect to David, but the Times itself often led the way with this: Judith Miller, anyone?). No question, it’s high time to renew media. Already the architectural blogosphere is smarter, sharper, and more critical than newspaper critics have been in decades.

But there’s also much to dread and not just for the dinosaurs. Rarely do things go back to normal after a serious downturn. Economic regimes undergo radical changes during recessions, often even more dramatic than during boom times when excess liquidity keeps the status quo well lubricated.

What we’re seeing now, then, isn’t just the disappearance of some crufty old salts from journalism, but rather the restructuring of the creative class. Media is very much at the forefront of this. Faced by the perfect storm of a collapsing subscription base and the decline of the advertising dollar, media corporations have figured out that the losses of income are permanent and made cuts accordingly.

In contrast, architects are flailing about. This doesn’t mean that job losses in the profession haven’t been massive, but the profession has done little to rethink how it operates. There’s little question that we won’t see another building boom the size of the one we just witnessed in our lifetimes (nor do I wish it: there’s only so much economic destabilization we can take!). The downsizing is going to be permanent. The result will be heady competition between young unemployed veterans with serious job experience after a few years in the job force and a corps of new graduates trained in new skills that even those who graduated five years ago don’t have. If my readers want to see me as a pessimist, that’s fine; chalk up my position to a refusal to buy Prozac, but I’ve lived through enough recessions to know that the last few years were a huge anomaly and there’s a price to be paid for the excesses.

Beyond the collapse of the media sector, the very core of the contemporary upper middle class—jobs in media, advertising, real estate, finance, law, and other services—faces evisceration, and may well follow the lower middle class into extinction over the course of the next decade. Those jobs are gone now, and with them a host of possible commissions for architects. More than that, since the Obama administration’s greatest accomplishment seems to be to have unloaded the word “hope” of any meaning, at this point it seems likely that the shift rightward during the next elections will ensure that cities are deprived of the funding necessary to keep them afloat. Fade back to Mad Men and the early 1960s. It’s at this moment that New York reaches a turning point and Mayor Robert F. Wagner sees his city entering into a multi-decade fiscal crisis from which it barely recovered.

Decades from now, will the monuments of the last decade—sadly much inferior to the monuments of the 1950s (where, after all, is our Seagram or Lever? The Standard? Magnolia Bakery, maybe?)—remind us of the last days of the Creative Class and the hipster city? In 2029, will Sex and the City be as anachronistic in its depiction of the city as a thriving place for young people as Breakfast at Tiffany’s was in 1979?

Or is it possible that somehow the Obama administration will wise up? That he’ll take a cue from Harvard and fire Larry Summers together with the investment bankers that have infected the Cabinet, and insist that America not only has a public option for health insurance but that we’re going to rebuild manufacturing, in some smart, as yet unforeseen way? Heck, maybe the multitude will throw off its shackles and we’ll all live in a Shangri-La of post-Marxist immaterial culture.

One thing’s for sure, though. We’re not going back to 2002. Time will tell who succeeds in navigating through it as individuals, nations, and worlds.

*In general, I don’t have the time to watch shows when they first come out, so I watch them time-shifted, either on my pitifully small Verizon DVR or on my AppleTV, Roku box, or sometimes even via Blu-Ray disc from Netflix. I point this out since I want to hammer home how media consumption habits are changing. It’s particularly interesting watching my children, who have never known a world without on-demand or, for that matter, full-time PBS Kids Sprout.