On Intensification

Over the course of the last year, I’ve read and reread Jeffrey Nealon’s Foucault Beyond Foucault. Works centering on a particular philosopher are almost always formulaic and rarely interesting; this one is a notable exception. Anyone with an interest in theorizing contemporary culture should get it. Nealon re-reads Foucault for the present day with great intelligence. To reduce his argument to a sound bite, Nealon looks at Foucault through the lens of Deleuze’s essay on the societies of control. The central point of the book is Foucault’s (and Deleuze’s) concept of “intensification,” which explains the way power operates in contemporary society.

Nealon:

For Foucault, this charting of emergent modes of power is hardly a story of progress or Enlightenment, but a story of what he calls the increasing ‘intensity’ (intensité) of power: which is to say its increasing ‘lightness’ and concomitant ‘economic’ viability, in the broadest sense of the word ‘economic.’ Power’s intensity most specifically names its increasing efficiency within a system, coupled with increasing saturation. As power becomes more intense, it becomes ‘more economic and more effective’ (“plus economique et plus efficace”; D&P, 207). In this sense, the genealogical shift from torturing the body to training it is hardly the eradication of the punitive gesture; rather it works to extend and refine the efficacy of that gesture by taking the drama of putative power and resistance out of the relatively scarce and costly criminal realms and into new situations or ‘markets’—to everyday life in the factory, the home, the school, the army, the hospital. (32)

Nealon reads our society of control (and with it what I call network culture) as an intensification of both postmodernism and modernism, a far more effective system than the disciplinary society that Foucault analyzed. Nealon’s discussion of contemporary economics is also insightful: he explains that Marx’s old model of M-C-M’ (where M is money, C is a commodity, and M’ is more money generated by the production and sale of the commodity) is now dethroned by M-M’, speculative finance. This is crucial for understanding our contemporary economic condition.   

Get the book and find out more.


Networked Publics 2010

Two phrases occupy my thoughts at the moment:

"All that is solid melts into air": Karl Marx’s adage, suggesting that under capitalism all existing order will be swept away to be remade for the purposes of profit and efficiency, has never been truer than today, when capitalism’s creative destruction has turned viciously on itself, causing a global economic crisis.

"The more things change the more they stay the same," or as written by Jean-Baptiste Alphonse Karr in the original French, "Plus ça change, plus c’est la même chose." Not only is Karr’s statement a way of looking at what Marx said, but it also seems true of what I’ve been doing for the last few years. As I finished Networked Publics and The Infrastructural City, I thought I had put those projects behind me, but now it’s clear that they are not so much books as categories that the Netlab will pursue for the foreseeable future, even as the other categories of network culture and the network city get added.

This spring, the Netlab is launching an ambitious series of panels, Discussions on Networked Publics, at Columbia’s Studio-X Soho. These will be organized around the categories that framed the chapters of the Networked Publics book: culture, place, politics, and infrastructure.

The first panel, "culture," will be held at 6:30 on February 9 and will include as panelists Michael Kubo, Michael Meredith, Will Prince, Enrique Ramirez, David Reinfurt, and Mimi Zeiger. These are among the sharpest minds in the field today and I am excited to have them join me in this discussion. There are more plans afoot for this project and I’ll keep you alerted as they develop.

In the meantime, I’ve spent a few days rebuilding various aspects of the Networked Publics site that broke during the past few years. The front page has been fixed after an update to a Drupal module killed the last version. I’ve also gone in and fixed a number of the links to videos, both the curated gallery of videos for the DIY video conference and also the videos for the three future scenarios that accompany the chapter on infrastructure and bring up consequences of policy decisions regarding network access. Throughout, the material hasn’t so much dated as demonstrated the importance of what we were talking about from 2005 to 2008. Seriously though, this isn’t a plug for me but rather for the other members of the team, who did such a great job identifying the critical issues.

Get the book, come to the discussions, and stay tuned to this blog to see how you can get involved (or if you’re really interested, drop me a line).


The Decade Ahead

It’s time for my promised set of predictions for the coming decade. Predicting the future is a transgression of disciplinary norms for historians, but it’s also quite common among bloggers. So let’s treat this as a blogosphere game, nothing more. It’ll be interesting to see just how wildly wrong I am a decade from now.

In many respects, the next decade is likely to seem like a hangover after the party of the 2000s (yes, I said party). The good times of the boom were little more than a lie perpetrated by finance, utterly ungrounded in economic reality and unsustainable by any measure. Honestly, it’s unclear to me how much players like Alan Greenspan, Ben Bernanke, Hank Paulson, and Larry Summers were duplicitous and how much they were simply duped. Perhaps they thought they would get out in time or drop dead before the bubbly stopped flowing. Or maybe they were just stupid. Either way, we start the decade with national and global economies in ruins. A generation that grew up believing that the world was its oyster is now faced with the same reality that my generation knew growing up: that we would likely be worse off than our parents. I see little to correct this condition and much to be worried about.

Gopal Balakrishnan predicts that the future global economy will be a stationary state, a long-term stagnation akin to the one we experienced in the 1970s and 1980s. China will start slowing. The United States, the EU, the Mideast, and East Asia will together make up a low-growth bloc, a slowly decaying imperium. India, together with parts of Africa and South America, will be on the rise. To be clear: the very worst thing that could happen is that we would see otherwise. If another bubble forms—in carbon trading or infrastructure, for example—watch out. Under network culture, capitalism and finance have parted ways. Hardt and Negri are right: our economy is immaterial now, but that immateriality is not the immateriality of Apple Computer, Google, or Facebook; it’s the immateriality of Goldman Sachs and AIG. Whereas under traditional forms of capitalism the stock market was meant to produce returns on investment, a relationship summed up in Marx’s equation M-C-M’ (where M is money, C is a commodity produced with the money, and M’ is money plus surplus value), the financial market now seems to operate under the scheme of M-M’ (see Jeffrey Nealon’s brilliant Foucault Beyond Foucault). Surplus value is the product of speculation.
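To make the notation concrete, the contrast between the two circuits can be sketched in a few lines of code. This is only a toy illustration of Marx's shorthand; the figures and function names are my own hypothetical choices, not anything from Marx or Nealon, and no actual market is being modeled.

```python
# Toy illustration of Marx's two circuits of capital (all numbers hypothetical).

def m_c_m_prime(money, production_cost, sale_price):
    """M-C-M': money buys commodity production; the sale returns money plus surplus."""
    revenue = sale_price                    # C -> M': the commodity is sold
    surplus = revenue - production_cost     # surplus value realized through production and sale
    return money - production_cost + revenue, surplus

def m_m_prime(money, rate_of_return):
    """M-M': money begets more money directly, through speculation."""
    surplus = money * rate_of_return        # no commodity mediates the circuit
    return money + surplus, surplus

# A firm invests 100 in production and sells the output for 120 (M-C-M'),
# while a fund books a 20% speculative return on the same 100 (M-M').
print(m_c_m_prime(100, 100, 120))   # (120, 20)
print(m_m_prime(100, 0.20))         # (120.0, 20.0)
```

The identical ending balances conceal the structural difference: in M-C-M' the surplus passes through a commodity, while in M-M' it is conjured from money alone, which is the shift to speculative finance described above.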

There’s every chance that I underestimate the lengths to which the financial powers will go to prolong this condition. After all, I would have predicted a lengthy recession following the dot-com bust, and we didn’t have one. Still, the Dow Jones, the NASDAQ, house prices (measured in real dollars), and salaries all fell over the course of the decade, so it’s fair to say that, for the most part, the economy was a shambles.

Climate change will become more widely accepted as corporations realize that it can drive consumption and profits when little else can. If we are unlucky, the green "movement" will become a boom. We will finally realize that peak oil has passed, perhaps around 2006. Climate change will be very real. It will not be as apocalyptic as some have predicted, but major changes will be in the works. We should expect more major natural disasters, including a tragic toll on human life.

Populations will be aging worldwide during the next decade and baby boomers will be pulling more money out of their retirement accounts to cover their expenses. At the same time, younger people will find it harder to get a job as the de facto retirement age rises well into the seventies, even the eighties. A greater divide will open up between three classes. At the top, the super-rich will continue controlling national policies and will have the luxury of living in late Roman splendor. A new "upper middle" class will emerge among those who were lucky enough to accumulate some serious cash during the glory days. Below that will come the masses, impossibly in debt from credit cards, college educations, medical bills and nursing home bills for their parents but unable to find jobs that can do anything to pull them out of the mire. The rifts between all three classes will grow, but it’s the one between the upper middle class (notice there is no lower middle class anymore) and the new proles that will be the greatest. This is where social unrest will come from, but right now it seems more likely to be from the Right than the Left. Still, there’s always hope.

Speaking of hope, if things go right, governments will turn away from get-rich-quick schemes like "creative cities" or speculative finance and instead find ways to build long-term strategies for resurrecting manufacturing. It will be a painful period of restructuring for the creative industries. Old media, the arts, finance, law, advertising, and so on will suffer greatly. Digital media will continue to be a relatively smart choice for a career, even as it becomes more mainstreamed into other professions. For example, it will become as common in schools of architecture to study the design of media environments as it is now to study housing. We will see a rise of cottage industries in developing nations as individuals in their garages realize that they can produce things with the means of production at hand. Think of eBay and Etsy, but on a greater scale. National health insurance in the US will help in this respect, as it will free individuals from the need to work for large corporations. But all will not be roses in the world of desktop manufacture. Toxicity caused by garage operations will be a matter of contention in many communities.

Some cities are simply doomed, but if we’re lucky, some leaders will turn to intelligent ways of dealing with this condition. To me, the idea of building the world’s largest urban farm in Detroit sounds smart. Look for some of these cities—Buffalo maybe?—to follow Berlin’s path and become some of the most interesting places to live in the country. If artists and bohemians find it impossible to live in places like New York, San Francisco, or Los Angeles anymore, they may well turn elsewhere, to the benefit of cities formerly in decline. The move toward smaller cities—remember Athens, Georgia, Austin, Texas, and Seattle?—will explode in this decade as the over-capitalized major cities face crises. But to be clear, this is an inversion of the model of the creative city. These cities will not see real estate values increase greatly. The new classes populating them will not be rich; they will turn instead to a new DIY bohemianism, cultivating gardens, joining communally with neighbors, and building vibrant cultural scenes.

With the death of creative cities, planners will also have to turn toward regions. As jobs continue to empty out, city cores will also see a decline in their fortunes. Eventually, this may resurrect places like New York and San Francisco as interesting places to live in again, but for now, it will cause a crisis. Smart city leaders will form alliances with heads of suburban communities to force greater regional planning than ever before. This will be the decade of the suburbs. We began the last decade with over 50% of the world’s population living in urban areas. I predict that by the end of the next decade over 50% of the world’s population will live in suburban areas. This isn’t just Westchester and Rancho Palos Verdes but rather Garfield, New Jersey and East Los Angeles. Worldwide, it will include the banlieues and the shantytowns. Ending the anti-suburban rhetoric is critical for planners. Instead, we’ll be asking how to make suburbs better while boosting the city core. Suburbs may become the models for cities as the focus turns toward devolving government toward local levels, even as tax revenue will be shared across broad regions.

Urban farming will come to the fore and community-supported agriculture will become widespread. This won’t just be a movement among the hipster rich. It will spread to the immigrant poor who will realize that they can eat better, healthier, and cheaper by working with members of their immigrant community running farms inside and outside the city instead of shopping at the local supermarket. A few smart mayors will realize that cities in decline need community gardens and these will thrive. The rising cost of long-distance transportation due to the continued decline of infrastructure and peak oil will go a long way toward fostering this new localism.

The divisions in politics will grow. By the end of the decade, the polarization within countries will drive toward hyper-localism. Nonpartisan commissions will study the devolution of power to local governments in areas of education, individual rights (abortion will be illegal in many states, guns in many others), the environment, and so on. In many states gay rights will become accepted; in others, homosexuality may become illegal again. Slowly, talk will start on both sides about the US moving toward the model of the EU. Conservatives may drive this initially and the Left will pick it up. In that case, I’m moving to Vermont, no question.

Architects will turn away from starchitecture. Thoughtful books, videos, and Web sites on the field will grow. Parametric modeling will go urban, looking toward GIS. Some of those results will be worth talking about. Responsive architecture will become accepted into the profession as will the idea of architects incorporating interfaces—and interface design—into their work.

In technology, the introduction of the Apple iSlate will make a huge difference in how we view tablets. It will not save media, but it will allow us to interface with it in a new way. eBooks will take hold, as will eBook piracy. Apple itself will suffer as its attempts to make the iSlate a closed platform like the iPhone will lead first to hacks and later to a successful challenge on the basis of unfair restraint of trade. A few years after the introduction of the iSlate, an interface between tablets and keyboards will essentially replace notebook computers. Wine will advance to such a point that the distinction between operating systems will begin to blur. In a move that will initially seem puzzling but will then be brilliant, Microsoft will embrace Wine and encourage its production. By the end of the decade, operating systems will be mere flavors.

The Internet of Things will take hold. An open-source based interface will be the default for televisions, refrigerators, cars and so on. Geolocative, augmented-reality games will become popular. Kevin Slavin will be the Time Web site’s Man of the Year in 2018. As mobile network usage continues to grow, network neutrality will become more of an issue until a challenger (maybe Google, maybe not) comes to the scene with a huge amount of bandwidth at its disposal. Fears about Google will rise and by the end of the decade, antitrust hearings will be well-advanced.

We will see substantive steps toward artificial intelligence during the decade. HAL won’t be talking to us yet, but the advances in computation will make the technology of 2019 seem far, far ahead of where it is now. The laws of physics will take a toll on Moore’s Law, slowing the rate of advance, but programmers will turn back toward more elegant, efficient code to get more out of existing hardware.

Manned spaceflight will end in the United States, but the EU, China, and Russia will continue to run the International Space Station, even after one or two life- and station-threatening crises onboard. Eventually there will be a world space consortium established, even as commercial suborbital flights go up a few dozen times a year and unmanned probes to Pluto, Mars, Venus and Europa deliver fantastic results. Earth-like planets will be found in other solar systems and there will be tantalizing hints of microscopic life elsewhere in the solar system even as the mystery of why we have found nobody else in the universe grows.

Toward the end of the decade, there will be signs of the end of network culture. It’ll have had a good run of 30 years: the length of one generation. It’s at that stage that everything solid will melt into air again, but just how, I have no idea.

As I stated at the outset, this is just a game on the blogosphere, something fun to do after a day of skiing with the family. Do pitch in and offer your own suggestions. I’m eager to hear them.


A Decade in Retrospect

Never mind that the decade really ends in a little over a year, it’s time to take stock of it. Today’s post looks back at the decade just past while tomorrow’s will look at the decade to come.

As I observed before, this decade is marked by atemporality. The greatest symptom of this is our inability to name the decade and, although commentators have tried to dub it the naughties, the aughts, and the 00s (is that pronounced the ooze?), the decade remains, as Paul Krugman suggests, a Big Zero, and we are unable to periodize it. This is not just a matter of linguistic discomfort, it’s a reflection of the atemporality of network culture. Jean Baudrillard is proved right. History, it seems, came to an end with the millennium, which was a countdown not only to the end of a millennium but also to the end of meaning itself. Perhaps, the Daily Miltonian suggested, we didn’t have a name for the decade because it was so bad.

Still, I suspect that we historians are to blame. After Karl Popper and Jean-François Lyotard’s condemnation of master narratives, periodizing—or even making broad generalizations about culture—has become deeply suspect for us. Instead, we stick with microhistories on obscure topics while continuing our debates about past periods, damning ourselves into irrelevance. But as I argue in the book that I am currently writing, this has led critical history to a sort of theoretical impasse, reducing it to antiquarianism and removing it from a vital role in understanding contemporary culture. Or rather, history flatlined (as Lewis Lapham predicted), leaving even postmodern pastiche behind for a continuous field in which anything could co-exist with anything else.

Instead of seeing theory consolidate itself, we saw the rise of network theory (a loose amalgam of ideas from the theories of mathematicians like Duncan Watts to journalists like Adam Gopnik) and post-criticism. At times, I felt like I was a lone (or nearly lone) voice against the madding crowd in all this, but times are changing rapidly. Architects and others are finally realizing that the post-critical delirium was an empty delusion. The decade’s economic boom, however, had something of the effect of a war on thought. The trend in the humanities is no longer to produce critical theory, it’s to get a grant to produce marketable educational software. More than ever, universities are capitalized. The wars on culture are long gone as the Right turned away from this straw man and the university began serving the culture of network-induced cool that Alan Liu has written about. The alienated self gave way to what Brian Holmes called the flexible personality. If blogs sometimes questioned this, Geert Lovink pointed out that the questioning was more nihilism than anything else.

But back to the turn of the millennium. This wasn’t so much marked by possibility as by delirium. The dot-com boom, the success of the partnership between Thomas Krens and Frank Gehry at the Guggenheim Bilbao, and the emergence of the creative cities movement established the themes for this decade. On March 10, 2000, the tech-heavy NASDAQ index peaked at 5,048, roughly twice its value the year before. In the six days following March 16, the index fell by nine percent, and it did not stop falling until it reached 1,114 in October 2002. If the delirium was revealed, the Bush administration and the Federal Reserve found a tactic to forestall the much-needed correction. Under the pretext of striving to avoid full-scale collapse after 9/11, they set out to create artificially low interest rates, deliberately inflating a new bubble. Whether they fully understood the consequences of their actions or found themselves unable to stop them, the results were predictable: the second new economy in a decade turned out to be the second bubble in a decade. If tech was, for the most part, calmer, architecture had become infected, virtualized and sucked into the network, not to build the corporate data arcologies predicted by William Gibson but to serve as the justification for a highly complex set of financial instruments that seemed crafted so as to be impossible to understand even by those crafting them. The Dow ended the decade lower than it started, even as the national debt doubled. I highly recommend Kevin Phillips’s book Bad Money: Reckless Finance, Failed Politics, and the Global Crisis of American Capitalism to anyone interested in trying to understand this situation. It’s invaluable.

This situation is unlikely to change soon. The crisis was created by the over-accumulation of capital and by a long-term slowdown in the economies of developed nations. Here, Robert Brenner’s The Economics of Global Turbulence can help my readers map the situation. To say that I’m pessimistic about the next decade is putting it lightly. The powers that be had a critical opportunity to rethink the economy, the environment, and architecture. We have not only failed on all these counts, we have failed egregiously.

It was hardly plausible that the Bush administration would set out to right any of these wrongs, but after the bad years of the Clinton administration, when welfare was dismantled and the Democrats veered to the Right, it seemed unlikely that a Republican presidency could be that much worse. If the Bush administration accomplished anything, it accomplished that, becoming the worst presidency in history. In his review of the decade, Wendell Berry writes, "This was a decade during which a man with the equivalent of a sixth grade education appeared to run the Western World." If 9/11 was horrific, the administration’s response—most notably the disastrous invasions of Afghanistan and Iraq, alliances with shifty regimes such as Pakistan, and the turn to torture and extraordinary rendition—ensured that the US would be an enemy to many for years to come. By 2004, it was embarrassing for many of us to be American. While I actively thought of leaving, my concerns about the Irish real estate market—later revealed as well-founded—kept me from doing so. Sadly, the first year of the Obama administration, in which he kept in place some of the worst policies and personnel of the Bush years, received a Nobel Peace Prize for little more than inspiring hope, and surrounded himself with the very sort of financiers who caused the economic collapse in the first place, proved the Democrats hopeless. No Republican could have done as much damage to the Democratic party as its own bumbling leader and deluded strategists did. A historic opportunity has been lost.

Time summed up the decade by calling it "the worst decade ever."

For its part, architecture blew it handily. Our field has been in crisis since modernism. More than ever before, architects abandoned ideology for the lottery world of starchitecture. The blame for this has to be laid with the collusive system between architects, critics, developers, museum directors and academics, many of whom were happy as long as they could sit at a table with Frank Gehry or Miuccia Prada. This system failed and failed spectacularly. Little of value was produced in architecture, writing, or history.

Architecture theory also fell victim to post-criticism, its advocates too busy being cool and smooth to offer anything of substance in return. Perhaps the most influential texts for me in this decade were three from the last one: Deleuze’s Postscript on the Societies of Control, Koolhaas’s Junkspace, and Hardt and Negri’s Empire. If I once hoped that some kind of critical history would return, instead I participated in the rise of blog culture. If some of these blogs simply endorsed the world of starchitecture, by the end of the decade young, intelligent voices such as Owen Hatherley, David Gissen, Sam Jacob, Charles Holland, Mimi Zeiger, and Enrique Ramirez, to name only a few, defined a new terrain. My own blog, founded at the start of the decade, has a wide readership, allowing me to engage in the role of public intellectual that I’ve always felt it crucial for academics to pursue.

Indeed, it’s reasonable to say that my blog led me into a new career. Already, a decade ago, I saw the handwriting on the wall for traditional forms of history-theory. Those jobs were and are disappearing, the course hours usurped by the demands of new software, as Stanley Tigerman predicted back in 1992. Instead, as I set out to understand the impact of telecommunications on urbanism, I found that thinkers in architecture were not so much marginal to the discussion as central, if absent. Spending a year at the University of Southern California’s Annenberg Center for Communication led me deeper into technology and not only was Networked Publics the result, I was able to lay the groundwork for the sort of research that I am doing at Columbia with my Network Architecture Lab.

The changes in technology were huge. The relatively slow pace of technological development from the 1950s to the 1980s was left long behind. If television acquired color in the 1960s and cable and the ability to play videotapes in the late 1980s, it was still fundamentally the same thing: a big box with a CRT mounted in it. That’s gone forever now, with analog television a mere memory. Computers ceased being big objects connected via slow telephone links (just sixteen years ago, in 1993, 14.4 kbps modems were the standard) and became light and portable, capable of wireless communications fast enough to make downloading high-definition video an everyday occurrence for many. Film photography all but went extinct during the decade as digital imaging technology changed the way we imaged the world. Images proliferated: there are 4 billion digital images on Flickr alone. The culture industry, which had triumphed so thoroughly in the postmodern era, experienced the tribulations that Detroit had felt decades before, as music, film, and periodicals were all thrown into crisis by the new culture of free media trade. Through the iPod, the first consumer electronics device released after 9/11, it became possible for us to carry more music than we could listen to in a year. Media proliferated wildly and illicitly.

For the first time, most people in the world had some form of telecommunication available to them. The cell phone went from a tool of the rich in 1990 to a tool of the middle class in 2000. By 2010, more than 50% of the world’s population owned a cell phone, arguably a more important statistic than the fact that at the start of this decade, for the first time, more people lived in cities than in the country. The cell phone was the first global technological tool. Its impact is only beginning to be felt. In the developed world, not only did most people own cell phones, the phones themselves became miniature computers, delivering locative media applications such as turn-by-turn navigation and geotagged photos (taken with their built-in cameras), together with e-mail, web browsing, and so on. Non-places became a thing of the past as it became impossible to conceive of being isolated anymore. Architects largely didn’t have much of a response to this, and parametric design ruled the studios, a game of process that, I suppose, took minds off of what was really happening.

Connections proliferated as well, with social media making it possible for many of us to number our "friends" in the hundreds. Alienation was left behind, at least in its classical terms, as was subjectivity. Hardly individuals anymore, we are today, as Deleuze suggested, dividuals. Consumer culture left behind the old world of mass media for networked publics (and with it, politics left behind the mass, the people, and any lingering notion of the public), and the long tail reshaped consumer culture into a world of niches populated by dividuals. If there was some talk about the idea of the multitude or the commons among followers of Hardt and Negri (but also more broadly in terms of the bottom-up and the open source movement), there was also a great danger in misunderstanding the role that networks play in consolidating power at the top, a role that those of us in architecture saw first-hand in starchitecture’s effects on the discipline. If open source software and competition from the likes of Apple hobbled Microsoft, the rise of Google, iTunes, and Amazon marked a new era of giants, an era that Nicholas Carr covered in The Big Switch (required reading).

The proliferation of our ability to observe everything and note it also made this an era in which the utterly unimportant was relentlessly noted (I said relentlessly constantly during this decade, simply because it was a decade of relentlessness). Nothing, it seemed, was the most important thing of all.

In Discipline and Punish, Foucault wrote, "visibility is a trap." In the old regime of discipline, panopticism made it possible to catch and hold the subject. Visibility was a trap in this decade too, as architects and designers focused on appearances even as the real story was in the financialization of the field that undid it so thoroughly in 2008 (this was always the lesson of Bilbao: it was finance, not form, that mattered). Realizing this at the start of the decade, Robert Sumrell and I set out to create a consulting firm along the lines of AMO. Within a month or two, we realized that this was a ludicrous idea and AUDC became the animal that it is today, an inheritor of the conceptual traditions of Archizoom, Robert Smithson, and the Center for Land Use Interpretation. Eight years later, we published Blue Monday, a critique of network culture. I don’t see any reason why it won’t be as valuable—if not more so—in a decade as it is now.

I’ve only skimmed the surface of this decade in what is already one of the lengthiest blog posts ever, but over the course of the next year or two I hope to come to an understanding of the era we were just in (and continue to be part of) through the network culture book. Stay tuned.


2009 in Review

It’s time for this blog to look backwards and forwards, first to the last year, then to the past decade, and finally to the decade ahead. 

The single biggest story of 2009 was the continued collapse of the economy. For architects—and a sizable proportion of my readers are architects—this was as bad a year as any.

In the United States, more jobs were lost in the profession than in any other. Nearly 18% of architects received pink slips over the year, according to MSNBC. Overseas, in my other "home" countries of Lithuania and Ireland, economies and architects fared worse. I predicted this situation long ago and found it alarming to watch so many architects drink the Kool-Aid of unfettered growth so readily.

The new economy was not forever and, at the end of it all, many were much worse off than when it began. I’ll have more to say about this tomorrow, when I look back at the decade, but the situation is not going to change much in 2010 or anytime soon. If it does, be very worried. The correction is painful, but the measures being taken now to lessen it are likely to cause more pain in the future. First the Bush and then the Obama administration pumped huge amounts of money into the economy in an effort to stimulate it; the real estate industry, for example, avoided outright collapse only because of the tax credit for first-time homebuyers.

Temporarily, this has prevented an outright collapse, but the massive amounts of debt incurred to prop up the finance and real estate sectors will have to be repaid. At best, this will force the US to curtail its foreign military adventures (already, the Right is turning away from nation-building toward isolationism) and will put a brake on further expansionist bubbles by imposing a permanent tax burden. As for the worst, well, think of the long collapse of the British, Dutch, or Spanish Empires, with the country in permanent economic stagnation.

A corollary to the economy was the new discussion of infrastructure. The Infrastructural City came out at the tail end of 2008 and received a great deal of attention. The hardcover printing sold out rapidly and the paperback was one of ACTAR’s biggest sellers for 2009. I’m not at all surprised: attention to this country’s collapsing infrastructure has been necessary since the 1980s. Much of the attention revolved around Obama’s call for a WPA 2.0 last December, but by the time the stimulus bill was drafted, infrastructure had left the agenda. It was sad to watch Obama surround himself with the usual suspects and defend the very industry (Wall Street) that caused all the trouble in the first place. It is clear now that Obama’s rise to power was not the story of a come-from-behind victory by an underdog with grass-roots support, but rather the carefully staged simulation of that story. Architects and critics pinned their hopes on infrastructure but were slow to understand that this too was a simulation, even though I warned them. Requiring large investment in physical objects rather than in financial instruments, and a lengthy wait before results are seen, infrastructure is a hard sell to a political machine beholden to speculation and to rapid gratification for immediate electoral gains.

Any battle for infrastructure funds will be a slow march through the AIA, the universities, policy think-tanks, and the parties. Still, it’s better than the residential and real estate markets, given the phenomenal amount of overbuilding that took place there. The big question will be how architects can claim to design infrastructure, generally the province of engineers.

On a related note—and since I am a space fanboy—the Obama administration also thoroughly bungled its chance with NASA. If it initially seemed that the administration would take bold action, rapidly retiring the poorly conceived Ares launch vehicle and adopting Direct-X plus or a commercial manned launch system, thus far we’ve heard nothing. Instead, the program lumbers on, even as 2010 promises the end of shuttle flights. It seems that, like much over-leveraged real estate, the space station is due to be underpopulated and to fall rapidly into decay, never used for its original intended purpose. The end of regular manned space flight in this country is only a year away. With the moon and Mars essentially out of the question and the space station likely cannibalized for a Russian station by the end of the next decade, any future US launch vehicle seems purposeless. A silver lining is that maybe a decade from now, once manned spaceflight is shut down, we can concentrate on the robotic science missions that have delivered so much to us in the last few years. Still, don’t be surprised if a decade from now even this seems like an over-optimistic prediction.

I wound up on a tour of universities this fall, presenting Netlab research on infrastructure at many of them. It’s been gratifying to know that the project is of continued interest. 

Networked Publics may have received less attention, but it was no less important. The debates that we outlined in that book—originally drafted in 2006!—have continued to be of critical importance. It was with great sadness that I learned—just last week—of the death of Anne Friedberg, my co-author on the Place chapter, but the work that we did has continued to be relevant as we move deeper into a world of networked place. In culture, the collapse of media that began with the decline of the music industry, a key part of the chapter on culture, has now extended to the massive implosion of the news industry. The list of magazines and newspapers that shut their doors in 2009 is lengthy and will only grow in 2010. The problems that we saw facing politics, namely our inability to find a way to make online deliberation as effective as online mobilization, have only deepened. Liberals and conservatives in this country are more polarized than ever, while the Obama campaign’s use of social media has not been matched by any significant effort to use social media to decide policy. With the heady growth of data consumption by iPhone users, the question of network neutrality now affects not only wired lines but also mobile data.

This coming spring, we will be discussing the topics from Networked Publics in public at Studio-X Soho. Watch this space for more about those conversations. 

Some other news worth reflecting on is the failure of augmented reality on the iPhone to have the same broad success as locative media applications. Although it has an initial gee-whiz factor, holding the iPhone up to augment the world is pretty goofy. Unless everyone really does start wearing iGlasses, it’s unlikely to take hold. On the other hand, the biggest story of the year in tech was the rumored Apple tablet. Where the year started with the suggestion that universities and philanthropic organizations would need to keep media alive, media companies are now counting on Apple to save them. Time will tell.

Last year, I predicted that networked urbanism would be the rage in 2009. Indeed it was, but as it developed, I began to sound a note of alarm. Two things bothered me. First, much of the talk about networked urbanism seemed aimed, all too earnestly, at a tech-savvy class of digerati. A century ago, Woodrow Wilson, then still president of Princeton, warned of the danger of the automobile: "nothing has spread socialistic feeling in this country more than the use of automobiles." Wilson worried that a rift would emerge between the car-owning rich and the poorer, less mobile masses. Henry Ford listened and built the Model T. Networked urbanism is blind to this reality at its own peril. Moreover, we still have no way to capitalize the changes in media. This is a non-trivial matter. The networked future is hardly replacing the jobs being lost.

The year started off with a site redesign at varnelis.net, and my addition of tumblelogs to the site. The result has been more updates—over two posts a week to the main varnelis.net blog plus more to the tumblelog. Even if these aren’t as regular as I’d like them, it’s a step forward as climbing readership has shown. And of course there is Twitter, where I’ve made hundreds of posts so far this year. 

The majority of my year was consumed by research and writing for the Netlab. The network culture book is well underway, and I posted an early version of the introduction together with material on art at Networked, a networked book on networked art. This summer, we made progress toward the network city project at the Netlab. We’ll have more results from that work throughout the spring of 2010. Watch this space. AUDC published articles in New Geographies 2 and Design Ecologies, while I published articles in my role as Netlab director in venues from the Architects’ Newspaper to Volume (here and here) to the Architectural Review to the ICA catalog Dispersion. It was a full year and I hardly expect the next year to be any less full.

 


On Death

I’m usually late in sending out holiday greetings and this year is no exception. We had planned to make a physical version of our annual family photo but didn’t manage to do it in time for the holidays, so we wound up sending out virtual versions. At least there was snow. I sent the photo to perhaps 150 friends and colleagues and received the usual 20 bounces. One bittersweet surprise was finding out that my friend Daniel Beunza has moved to the London School of Economics. I’m sure it’ll be a great place for him—and he’s closer to his home country of Spain—but I’ll miss discussions about finance with this remarkable colleague. Much sadder was receiving an automated e-mail from Anne Friedberg, another friend, with whom I co-wrote the Place chapter of Networked Publics, saying that she was on indefinite medical leave. I had received this same message a while back and was concerned, but I didn’t get in touch. This time, I looked her up on Google News—just in case—and was saddened to learn that she died this October.

I remember Anne and I talking about how I had discovered, via his Web page, that Derek Gross, a college friend, had died in 1996. This was before the age of blogs, but Derek updated his Web page regularly, and when I visited it to see when his band was next playing, I found he had died, together with a record of his experience. Certainly it’s something I had hoped never to see again, but just as surely, discovering Anne’s death via the net is not going to be the final time.

Anne was a brilliant scholar, as evidenced by her books Window Shopping and The Virtual Window, as well as a great friend. She was crucial not only for my chapter but also for the Networked Publics group and our book, articulating issues that were fundamental to the project and giving me sage advice throughout. I could not have written my chapter of the book without her. Together we sat in our offices, she in her Lautner house, I in the AUDC studio on Wilshire Boulevard, and wrote the chapter simultaneously on Writely (now Google Docs). In so doing, we experienced the phenomenon of our voices becoming co-mingled, producing a third entity that was neither Anne nor myself. I am heartbroken that there will never be a sequel.


The Spectacle of the Innocent Eye

So many of the recent events and discussions in architecture remind me of material I covered in my dissertation. Some of the writing is juvenilia, some of it is prophetic. Either way, it has ensured that I’ve been persona non grata around Cornell ever since.

Enough people ask me about it that I should upload it and see what the response is. Since the original files are now fifteen years old, forgive me the inevitable formatting problems and the lack of illustrations (a list is appended to give you an idea of what you missed).

I produced the attached text a few months after the dissertation itself, incorporating further revisions.

The abstract reads as follows.

 

The Spectacle of the Innocent Eye:
Vision, Cynical Reason, and
The Discipline of Architecture in Postwar America
1994

 

 

In this dissertation, I trace the growth of cynical reason and the spectacle in postwar American architecture by examining the emergence of a new attitude toward form and the rise of the group of architectural celebrities who represented it.

From the 1950s onward, a number of architectural educators–most notably Colin Rowe and John Hejduk–derived a theory of architectural design from the visual language developed by graphic art educators Laszlo Moholy-Nagy and Gyorgy Kepes. The architectural educators’ intent was to solidify architecture’s claim to artistic autonomy through a focus on the rigorous use of form. In doing so, they hoped to resist the threat to architecture as a discipline, then having its domain of inquiry attacked by the encroaching social sciences and engineering.

Like Moholy-Nagy and Kepes, the architectural educators aimed to create an innocent eye in the student, restricting vision to instantaneous, prelinguistic perception of two-dimensional formal relationships. The student would become a retinalized subject shaped by outside forces rather than an agent capable of independent action and hence ethically responsible in life and architecture. In addition, the new theory of architecture was unable to divest itself of its origins in graphic art and produced a formally complex but atectonic, "cardboard" architecture.

Against this background, I investigate the rise of the movement’s representatives–Peter Eisenman, Michael Graves, Richard Meier, and Robert Stern–and their relationship to their patron, Philip Johnson. Together, they promoted each other and cardboard architecture, as well as a history and architecture reduced to image.

But history has a material reality: in the 1930s, Johnson participated in the American fascist movement and left as evidence a body of fascistic and antisemitic texts he wrote for publications in the movement. Since then he and his promoters, among them Stern and Eisenman, have carefully repressed his past by making it into a public secret. Ultimately, the kids do not have innocent eyes: along with Johnson they have promoted a spectacular architectural discourse of cynicism.

 


Complexity and Contradiction in Infrastructure

I gave the following talk on Banham’s Los Angeles, non-plan, and infrastructure in the Ph.D. lecture series at the Columbia Graduate School of Architecture, Planning, and Preservation in November, 2009. 

Most of us are prone to hero worship. This talk sets out to address a major problem in the work of one of the contemporary heroes of architectural and planning historiography, Reyner Banham: his advocacy of a (mythical) laissez-faire form of planning based on his reading of Los Angeles.

Below the talk I have embedded a video. Although today it’s considered bad practice to read lectures, I’ve started doing it again, even if my delivery seems more stale. When you give ten lectures a term outside of school, and nearly every venue insists upon something custom, the practice of keynoting ex tempore from notes becomes a bit of a drag. Eventually you realize that with a little more work—and granted, a little worse delivery—your project could convey more, have more theoretical meaning, and be generative toward other projects. At the level of production I’ve been trying to maintain lately, the only way to produce content is to follow the advice that Slavoj Zizek gave in the movie about him: everything either needs to be a spin-off or work toward the next major project.

This talk, then, is a spin-off of my work toward The Infrastructural City, but it also sets out to tackle Banham critically (something I’ve done here as well), a project I intend to take up again soon.

Complexity and Contradiction in Infrastructure

The title of my talk refers to Robert Venturi’s 1966 Complexity and Contradiction in Architecture, generally accepted as an inaugural text of postmodern architecture. For Venturi, the modernists failed because they strove for purity of form. Venturi wrote:

“today the wants of program, structure, mechanical equipment, and expression, even in single buildings in simple contexts, are diverse and conflicting in ways previously unimaginable. The increasing dimension and scale of architecture in urban and regional planning add to the difficulties. I welcome the problems and exploit the uncertainties. By embracing contradiction as well as complexity, I aim for vitality as well as validity.”

In other words, Venturi suggested that rather than trying to sweep messes under the rug, architects should embrace complexity and contradiction, even introducing deliberate errors into their works.

Venturi concluded that an appropriate architectural response “must embody the difficult unity of inclusion rather than the easy unity of exclusion. More is not less.”

Note, however, that Venturi’s argument is historically specific:
“today the wants of program, structure, mechanical equipment, and expression, even in single buildings in simple contexts, are diverse and conflicting in ways previously unimaginable. The increasing dimension and scale of architecture in urban and regional planning add to the difficulties.”

This text, then, comes about at a transition point between late modernity and postmodernity and its virtue is that Venturi not only diagnosed a condition, he also suggested an architectural approach. Both of these suggested a schism from the modern, a move into a new condition. Today I want to talk about another phase in the era of complexity, which is why I cite Venturi at the outset. 

Venturi’s next book, the 1972 Learning from Las Vegas, written with Denise Scott Brown and Steven Izenour, would tackle issues of signage, semiotics, automobility, and the commercialization of the American city, flipping the valence on landscapes that architecture critics had roundly derided as degraded. But for the purposes of this talk, it’s worth noting that the authors’ original interest was in Los Angeles, and the Yale studio that resulted in Learning from Las Vegas visited that city first. It may be that the smaller and more picturesque of the two cities (unlike Vegas, Los Angeles has no central strip) proved more easily explainable.

For architects and historians of architecture (I file myself in the latter category), Reyner Banham’s 1971 Los Angeles: The Architecture of Four Ecologies took on the city’s urban condition with a more total approach. Banham set out to dissect the city as a total landscape—geographically and historically, physically and psychically—as well as in terms of its infrastructural, social, and architectural systems. In this, Banham’s work was pathbreaking, and The Infrastructural City uses his book as inspiration and as a point of departure, something my subtitle, Networked Ecologies in Los Angeles, alludes to.

But Banham’s foremost innovation was to flip the valence on the historical evaluation of Los Angeles, praising precisely those qualities that others listed as irredeemable failings: its posturban sprawl; its lack of an overall plan; its chaotic, untamed signscape; its comical roadside architecture; its ubiquitous boulevards, parking lots, and freeways. 

Although we could ascribe this to a characteristic British fascination with the degraded, Banham also had a theoretical impetus. By the mid-1960s, he had become fascinated with the possibilities of what he called “non-plan,” a laissez-faire attitude toward urban planning, part of a larger project that he undertook along with Paul Barker, deputy editor of the magazine New Society. In 1967, Barker ran excerpts from Herbert Gans’s The Levittowners “as a corrective to the usual we-know-best snobberies about suburbia.” At roughly the same time, Barker and Peter Hall set out with a “maverick thought… could things be any worse if there was no planning at all?” The result, strongly influenced by Banham’s writings in the magazine, was a special issue published in 1969 and titled “Non-Plan: An Experiment in Freedom.” Barker recalls, “We wanted to startle people by offending against the deepest taboos. This would drive our point home.” To this end Hall, Banham, and architect Cedric Price each took a section of the revered British countryside and blanketed it with low-density sprawl driven by automobility. According to Barker, the reaction was a “mixture of deep outrage and stunned silence.”

For Banham, Los Angeles stood as the greatest manifestation of Non-Plan to date. “Conventional standards of planning do not work in Los Angeles,” he wrote, “it feels more natural (I put it no stronger than that) to leave the effective planning of the area to the mechanisms that have already given the city its present character: the infrastructure to giant agencies like the Division of Highways and the Metropolitan Water District and their like; the intermediate levels of management to the subdivision and zoning ordinances; the detail decisions to local and private initiatives; with ad hoc interventions by city, State, and pressure-groups formed to agitate over matters of clear and present need.”

Now there’s some question as to how well Banham’s Los Angeles worked in the first place: the city was in its worst period of air pollution in history, the freeways were wreaking devastation upon it, and the Watts Riots had just shaken any lingering mirage of Los Angeles as either a progressive metropolis or a paradise for the white middle class. Still, in his evaluation, Banham felt that the city—in his mind epitomized not by the Watts Riots but by the individualistic exuberance of the Watts Towers—worked because it had no central plan. Rather, planning was left to the competing forces in the city, public and private.

If Banham set out against modernist urban planning, non-plan gave a theoretical basis for neoliberal planning. By embracing the bottom-up and its “messy vitality” (the term is Venturi’s, not Banham’s), it reduced modernism’s ethical imperative from a question of morality and rational planning to a question of desire, both individual and institutional. The result parallels Manfredo Tafuri’s observation in Architecture and Utopia that the avant-garde’s singular accomplishment is not so much a physical change to the metropolis as an adjustment in how it is viewed. We can see this quite literally in Banham’s own role in his book: what remains at the end of the modern project is the experience of the city and the observer’s voyeuristic pleasure in the psychogeographic experience of drifting along its boulevards and freeways.

But things have changed. For one, the 1970s were an era of limits for the city, the state, and the country, with the first large-scale economic recession since the war, the OPEC energy crisis, Vietnam, and finally stagflation. If the late 1960s were a period of great social unrest, by the mid-1970s such unrest had largely been reshaped into concerns with individual rights and self-realization, above all the right to property and to dispose of one’s wealth as one wants. Thus, the system of non-plan that Banham lauded would be institutionalized in California in 1978 with the passage of State Proposition 13, reducing property taxes by 57% and mandating that future tax increases require a two-thirds majority in the state legislature. Two years later, former California governor Ronald Reagan would be elected President and set out on a draconian program of reducing non-military government spending at the national level.

By the time Reagan took office, after a decade of cutbacks caused by economic crises and by funds siphoned off for defense, by urban tax rolls dwindling from outmigration since the 1930s, and by the more natural phenomenon of age, infrastructure was coming undone nationwide.

Thus in 1981, precisely at the onset of the nation’s Californization (or, and I hesitate to suggest it, Californication?), economists Pat Choate and Susan Walters published a pamphlet for the Council of State Planning Agencies titled America in Ruins: Beyond the Public Works Pork Barrel. The pamphlet soon attracted a great deal of press attention, including a Newsweek cover story, "The Decaying of America" (August 2, 1982), and a US News and World Report story, "To Rebuild America: $2.5 Trillion Job" (September 27, 1982). Literature searches suggest that it is at this moment that infrastructure begins to gain popularity as a term. Infrastructure enters into the national consciousness during crisis.

But a Californicated America would have no room for public infrastructural spending. Instead, the exemplary infrastructures of the 1980s and 1990s—telecoms after deregulation, mobile phones, the Internet—were privatized. Richard Barbrook and Andy Cameron describe the legitimizing narrative for such ventures as the Californian Ideology, a union of hippie self-realization, neoliberal economics, and, above all, privatization, advocated by Silicon Valley pundits like Stewart Brand, editor of the Whole Earth Catalog and an inspiration for Wired magazine. As Barbrook and Cameron suggest, the growth of Silicon Valley, and indeed of California as a whole, was made possible only by the exploitation of the immigrant poor and by defense funding. Los Angeles, after all, became the country’s foremost industrial city in the postwar period largely due to defense contracts at aerospace firms. So, government subsidies for corporations and exploitation of the non-citizen poor: a model for future administrations.
But there’s more to infrastructural crisis than neoliberal economic policy. Once again Banham and Los Angeles provide a reference point. Banham describes the ecologies of Los Angeles as dominated by an individualism that allows architecture to flourish. But such a model of the city is insufficient. In The Reluctant Metropolis: The Politics of Urban Growth in Los Angeles, William Fulton describes Los Angeles as an exemplar of what Harvey Molotch calls “the city as growth machine.” In this model, certain industries—primarily the finance and real estate industries—dominate urban politics with the intention of expanding their businesses. Newspapers, too, endorse the growth machine as a way of expanding their subscription base and selling real estate ads. Moreover, arts organizations such as the symphony, opera, and art museums are also beholden to the model of the city as growth machine. These interests promote a naturalized view of growth in which we are simply not to question that cities will always get bigger, or that they should.

By the 1960s, however, homeowner discontent about encroaching sprawl led individuals to band together to form homeowner groups. The first of these was the Federation of Hillside and Canyon Associations, which protested the construction of a four-lane highway in place of scenic Mulholland Drive. Soon, homeowners teamed with environmental organizations such as the Sierra Club to create a regional park in the Santa Monica Mountains to prevent further development in their back yards. By the time that Proposition 13 passed, Angelenos were set against the growth machine and with it, too, the big infrastructure necessary to drive it or even the projects necessary to repair it.   

The result, then, is a long, steady process of infrastructural decay, privatized infrastructure acting as a layer or retrofit onto a decaying public infrastructure.
    
It’s in this context, then, that we must situate both Venturi and Banham as transitional approaches, each reducing questions of complexity to matters of form, which is of course not uncommon in architecture. In Venturi’s case, complexity is produced through form; in Banham’s, formal complexity is produced by the laissez-faire city.

Now I’d like to turn to some contradictions that emerge out of this condition. First, failing infrastructure threatens the vaunted individual rights of neoliberalism. Some of these threats are quite obvious: the inconvenience of traffic and long commutes, but also the potholes that (in Los Angeles) cause an average of $746 of damage annually per automobile, collapsing bridges, and energy crises caused by privatization, such as electricity grids failing and refineries going offline indefinitely (here the city of Los Angeles, which has not privatized its power, wound up ahead of the rest of the state during the crisis that brought down Gray Davis during Enron’s salad days).
  
Neoliberalism thus exacerbates what sociologist Ulrich Beck calls “risk society.” Banham’s autopia isn’t a risk-free world but rather a condition in which risk and threat are everyday factors, creating a contradiction within capitalism. Beck:

“… everything which threatens life on this Earth also threatens the property and commercial interests of those who live from the commodification of life and its requisites. In this way a genuine and systematically intensifying contradiction arises between the profit and property interests that advance the industrialization process and its frequently threatening consequences, which endanger and expropriate possessions and profits (not to mention the possession and profit of life).” (Beck 1992: 39)

* * *
Now if environmentalism was, in part, a movement created by homeowners’ desire to protect their rights, we would expect that infrastructural collapse (or, for that matter, the state of California’s schools) would also concern homeowners and corporations, but in California, Proposition 13 and a politics of stalemate make it impossible to act. Even as voters pass mandates to restore services, the state is hamstrung by the legislature’s terror of touching Proposition 13, the “third rail” of state politics. Last month the Guardian asked, “Will California become America’s first failed state?”

I want to stress that in other respects conditions have intensified, moving postmodernism into another phase. Take risk. Environmentalism has been thoroughly capitalized as the green movement, with the Californian Ideology now promising to save us from global warming through technological means. Crisis becomes profitable.

On to my last two points. Profit, as Robert Brenner tells us in The Economics of Global Turbulence, has become a problem, in part because of some of the same problems that face infrastructure. Massive investments in fixed capital are impossible to abandon even when more efficient structures elsewhere threaten them. The most familiar aspect of this, of course, is the rise of Chinese industry and the evacuation of American production. But infrastructure is of equal concern. Infrastructure, like other technologies, follows a classic S-curve: initially steep returns per dollar invested are followed by diminishing returns as the curve flattens.
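The S-curve can be made concrete with a small sketch. To be clear, the logistic function and every number below are my own illustrative assumptions, not figures from Brenner or from any infrastructure dataset; the point is only the shape of the curve:

```python
import math

def cumulative_return(investment, capacity=100.0, steepness=0.1, midpoint=50.0):
    """Logistic S-curve: cumulative benefit as a function of dollars invested."""
    return capacity / (1.0 + math.exp(-steepness * (investment - midpoint)))

def marginal_return(investment, step=1.0):
    """Benefit gained from the next `step` dollars of investment."""
    return cumulative_return(investment + step) - cumulative_return(investment)

# Early on the curve, each dollar buys a lot; late on the curve, very little.
early = marginal_return(40.0)
late = marginal_return(90.0)
print(f"marginal return at $40: {early:.3f}")
print(f"marginal return at $90: {late:.3f}")
assert early > late
```

The structural point the sketch makes is the one above: once a network is largely built out, each additional dollar of investment buys far less new benefit than the dollars that built the first freeways or trunk lines, which is exactly when abandonment becomes tempting and maintenance becomes hard to fund.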

The results for the country have been devastating. California, together with Soho and Boston, appeared to enjoy massive growth in high technology, particularly telecommunications and digital technology, during the last three decades. But much of this growth happened not in production but in finance, both in the lucrative financial instruments that accompanied public offerings and in the technology that made ever more complex financial operations possible.
Traditional profits, in this context, seemed devalued in comparison with the profits obtainable through finance. Jeffrey Nealon, in Foucault Beyond Foucault, suggests that in this sort of operation the classic formula that Marx observed in Capital, M-C-M’, is rewritten as M-M’: in other words, capital leads to capital growth without any intervening commodity.

The result, then, is something like what we saw this spring when President-Elect Obama, having made a YouTube speech calling for a WPA 2.0 as an economic stimulus, turned away from infrastructure in the actual stimulus bill. Blame has been laid on Obama’s chief economic advisor, Larry Summers.

But how the Democrats (or, in California, Schwarzenegger) are going to get out of this mess is entirely unclear. Economic indicators suggest that the country will endure a long-term period of stagnation, different from, but reminiscent of, the 1970s and 1980s. This month, the New York Times reported that unemployment and underemployment now stand at 17.5%, the highest level since the Great Depression. Official unemployment in California stands at 12%. These are staggering numbers. The state is making cutbacks while raising tuition at the University of California system, leading to mass student protests and to the regents macing students. California leads the nation again, it seems.

If the restructuring of the 1980s destroyed manufacturing, this decade’s recession has mowed down the creative class and the financial sectors. In the latest New Left Review, Gopal Balakrishnan suggests that we have entered a stationary state, a long period of systemic stagnation. As he points out, Adam Smith never expected the wealth of nations to improve perpetually; rather, he expected growth to come to an end in the nineteenth century as resources were exhausted. Capital’s perpetual growth would have been a mystery to him.
 
To conclude, then, I want to return to where I started, the theme of complexity. I’ve been thinking about these issues a lot lately, re-reading archaeologist Joseph Tainter’s The Collapse of Complex Societies. Tainter’s thesis differs from Jared Diamond’s (and also precedes it by nearly two decades). Instead of turning to the external forces of ecological catastrophe (as Diamond does) or to foreign invasion (as other commentators do), Tainter sees complexity itself as the downfall of societies.

As societies mature, Tainter observes, they become more complex, especially in terms of communication. A highly advanced society is highly differentiated and highly linked. That means that just to manage my affairs, I have to wrangle a trillion bureaucratic agents such as university finance personnel, bank managers, insurance auditors, credit card representatives, accountants, real estate agents, Apple store "geniuses," airline agents, delivery services, outsourced script-reading hardware support personnel, and lawyers in combination with non-human actors like my iPhone, Mac OS 10.6, my car, the train, and so on.

This is the contemporary system at work, and it’s characteristic of the bureaucratized nature of complex societies. On the one hand, in a charitable reading, we produce such bureaucratic entities in hopes of making the world a better place, keeping each other honest, and making things work smoothly. But in reality, not only is this dysfunction necessary to the operation of the service economy; these entities also rub up against each other, exhibiting cascading failures that produce untenable conditions.

In Tainter’s reading, complex societies require greater and greater amounts of energy until, at a certain point, the advantages of the structures they create are outweighed by diminishing marginal returns on energy invested. The result is not just catastrophe but collapse, which Tainter defines as a greatly diminished level of complexity.

Just as rigidity was the failure point for Fordism, complexity is the failure point for post-Fordism. In this light, the culture of congestion valorized by Koolhaas is undone by the energy costs of that complexity.

Now I agree with Tainter when he concludes that the only hope of forestalling the collapse of a complex society is technological advance. I’d argue that this is what’s driving the field of networked urbanism at the moment. But I’m not so sure we can do it. This is where my optimism rubs up against my nagging feeling that urban informatics, locative media, smart grids, and all the things the cool kids at LIFT and SXSW are dreaming up are too little, too late.

Technology itself is already all but unmanageable in everyday life, and adding further layers of complexity can’t be the solution. It’s in this sense that The Infrastructural City was more Mike Davis than Reyner Banham, something few have caught on to yet.

We should have taken our lumps when the dot-com boom collapsed and retrenched for five or six years. Instead we added that much more complexity: consider the debt and what is required to maintain it, the impossible war, or the climate. Now our options are greatly limited.

So we need to develop a new set of tools to deal with the failures of the neoliberal city and the impossible conditions of complexity today. This is hardly an overnight task, if it can be done at all.

Now Tainter holds one other card, suggesting that most people who experience collapse don’t mind it too much. Many seem happy enough simply to walk away from the failing world around them, much as owners of foreclosed homes do today. Eventually a new civilization springs up, and with it, perhaps, we can imagine a better future.

I want to conclude by talking about whether I’m a pessimist or an optimist, since I’m apparently being accused of being a pessimist at all my talks lately (parenthetically, I’ll add, I suppose that’s better than being accused of being an optimist). Back to Los Angeles: anyone visiting Hollywood Boulevard is accosted by attractive young men and women asking whether one is an optimist or a pessimist. The next step is being lured into the Scientology Center to take a test. Maybe we’re better off not taking that test and instead looking at reality, not a future scripted by a science fiction writer.

Second, I’m afraid that academe is a bit infected by Prozac culture these days. Hope would be fine if we had a President who seemed able to deal with the issues, or if the alternative to this one weren’t so deeply frightening.

End

[Video: Complexity and Contradiction of Infrastructure, Kazys Varnelis, Vimeo]


Alternate Scenarios Wanted

British author Charles Leadbetter critiques the "Digital Britain" plan for making broadband ubiquitous, a plan much like the Obama Administration’s own. Leadbetter points out that both are flawed because they focus narrowly on infrastructure, failing to address the deep transformations that the Internet is working on network culture and the economy. Read his response here.

This section is particularly important:

Accelerating the spread of broadband will not save these industries but make their predicaments more difficult. Here’s the truth: plans to invest more in digital technologies will only pay off if they bring further disruption to economies that are already in turmoil. We will know when politicians are really serious about the coming digital revolution when they start to admit that it will have to cause significant disruption to established business models if it is to pay off.

This is particularly tricky in the UK. The implosion of financial services, long the flagship of the services economy, means the cultural and media industries, in which Britain has a strong position, will take on an even more important role.

Leadbetter has this right, and what he says can also be applied to the two countries I work in, the United States and Ireland. The problem for capital, however, will come in monetizing what he calls "mutual media," the rising ecology of bottom-up media production.

The problem with this model, also proposed by authors such as Yochai Benkler and Clay Shirky, is that it does not adequately explain how to monetize such media or how to distribute wealth in a remotely equitable manner (let’s set socialism aside for the moment; I’m talking about market monopolies, in particular the inherent power-law nature of networks and whether we can have anything beyond Google). Let’s be clear about this: mutual media are incredibly successful not just because we can produce anything we want and upload it; they are successful because they have us producing content for free for corporations.

Make no mistake about it: the day it dawns on the administration at the New York Times that there are bloggers out there who would work for free, for the fashionable cachet of a byline on a Times column, and that these bloggers are better than many of the Times’s own writers, is about two weeks before the entire staff of the Arts & Leisure section finds itself looking for work at Starbucks.

The economy is undergoing an unprecedented transition. The owl of Minerva spreads her wings at dusk. Theory once again dreamed its successor era: if in the years between 1988 and 1994 theory seemed to be everything, only to vanish, in the years since culture has seemed to be everything, but on a much vaster scale, forming what appeared to be a new backbone for the economy (even if, as I’ve pointed out, it was finance all along). That is vanishing now, and with it, economic crisis is at our doorstep. There is no way out on the horizon. The wealth of networks lies not in their ability to promote sharing or interaction, but in their ability to strip away jobs and destroy industries without proposing sustainable new ones.

For anyone who thinks I’m being pessimistic, I do hope you’re right and I’m wrong. Really, I do.

Alternate scenarios wanted. My only caveat is that we don’t cook the books or take on more Ponzi schemes like the real estate bubble.
