On the iPad’s Fatal Flaw

I’ve had my iPad for a short while and am enjoying it immensely. Anecdotally speaking, I’ve noticed that people who don’t immediately understand how they would want one wind up taking them back to the store or, if they didn’t purchase one, sometimes even get hostile (even when they should know better because, say, they teach in the digital media field).

There’s no question anymore that this is a successful implementation of a computing typology that is fundamentally different from either a laptop or a desktop. A tablet computer that is ready to go at a moment’s notice is great for looking up recipes in the kitchen, for reading a newspaper or a book in the subway, and perfect for taking notes in lectures. It’s much less intrusive than a laptop, which can’t be held in one hand when standing and creates a barrier between the individual and others in a seminar or classroom. The multitouch interface works much better on the iPad than it does on the iPhone. Of the two, the iPhone seems like the unit I can more easily live without.

I take immense pleasure in being able to haul around hundreds of books in a device that weighs less than a copy of Fredric Jameson’s Postmodernism and occupies less space. Highlighting isn’t available yet, but it will be soon, and with it, full-text search. At that point, the transformation of academic books into immaterial objects will be just a matter of time. I used to care a great deal about accumulating a library at home, but if I can have one with me in my bag, which is more useful?

Still, don’t get me wrong. If a comparable product emerges from another vendor, I will defect immediately. I’m no great fan of the walled garden of applications that Apple has created, nor am I a fan of their "Father Knows Best" attitude toward the user. But everything so far is still vaporware or much less capable, so I’m stuck with the iPad for now.

As promised in the title of this piece, there IS a fatal flaw to the iPad, only it’s fatal not to Apple but to the media. There has been a lot of noise about how the iPad would give the media one more chance to survive. I was dubious that the iPad would play Jesus to the media to begin with, but now that Apple has banned applications developed by Adobe’s Flash Packager for iPhone, it’s game over. 

Where a periodical previously would have been able to develop an issue in InDesign, distribute it in print and over the net, convert it to Flash for non-Apple devices, and use the Flash Packager for Apple devices, the latter are now inaccessible unless the media developer hand-codes the application. This is much, much harder. At the Netlab, for example, we would have loved to produce periodicals, pamphlets, and books to read on the iPad using a workflow consisting of InDesign, Flash, and the Flash Packager, but now this is impossible. I’m not lamenting this too much. It’s disappointing, but our material will appear on the Web and as PDFs.

I see no great reason to complain. The Netlab doesn’t make money off its publications. But what about commercial periodicals? They’ll have to struggle to monetize content on the iPad and that difficulty—precisely at a time when they’re struggling just to stay afloat—will prove fatal for many. The rapid pace of creative destruction moves on. 


Some notes on the iPad

This morning I went to the local Apple store and picked up my iPad. I have been dying for a way to bring my PDF library with me for some time. My work revolves around reading, and since I commute and travel, it’s difficult not to have texts with me. In fourteen years of teaching, I’ve never had a proper office with a bookshelf, and my current office, at Columbia’s Studio-X, doesn’t have room for more than a few books. A tablet full of PDFs struck me as a good replacement for the books in my library that don’t have high-resolution imagery. Moreover, putting PDFs on a tablet for use during class discussions strikes me as a better way to go paperless in the classroom. I forget photocopies from time to time, and a tablet is more like a sheet of paper than a laptop is. I can put it down and it’s no longer a physical artifact between my students and myself.

My initial impression is that this will be a tremendous success for me. I have hundreds of books in Papers on my Macs and can read them in Skim on either computer or in the Papers application on the iPad. It’s frustrating not having highlighting yet, but this will come either via Papers or a competitor.

To be sure, I have mixed feelings. I don’t like the closed nature of the App Store and I am wary of adding yet another product from the Apple ecosystem to my workflow. Still, just as the Kindle spurred the iPad, the iPad will, I hope, spur competitors. And it would hardly be a mistake to call this the decade in which computing becomes pervasive. Computing is already pervasive.

It was enlightening to stand in line and discuss the different usage scenarios with the other purchasers. One individual is an artist who wants to use it as a sketchpad (I’m afraid that I would not buy Moleskine stock after today…my Moleskine is likely to be a victim), another is a photographer who wants to bring his portfolio with him, a third works with autistic children and uses the iPod Touch with them, a fourth thought he would use it for medical applications. I also see the tablet as an ideal device for the aging baby-boomer set, who will use it to browse the Web, make purchases, read magazines, do the New York Times crossword, see pictures of the grandkids, and watch Netflix.

To some degree, it will spur new, media-rich publications. At the same time, these are difficult to make and even more difficult to capitalize on. This decade will be devastating to media in this regard, and the last decade was already bad. Recently, I was at a lecture where someone suggested that his students weren’t familiar with CDs, that they used iTunes to purchase their music. Actually, he was grossly mistaken. Young people today simply don’t purchase music; they torrent it or download it from Rapidshare. There are already large numbers of texts, academic and commercial, available for free from pirate sites. It’s a losing battle to fight this. I’m sure that some sites will be shut down, but others will open up. Moreover, book piracy is immensely attractive to individuals doing research or teaching in foreign countries. A colleague from the developing world mentioned that she would use my writing in her courses because so little material on contemporary architecture was on the Web. Today she would have a choice of hundreds of books, all available to download freely, if illegally. Foreign governments will tacitly look the other way. Why should the US, the EU, and Japan alone have the knowledge? My children’s generation will find book purchasing as foreign as music purchasing is for the millennials.

Having easily searchable text will transform scholarship. Reading scholarly books cover to cover may become as odd as listening to albums straight through. My colleague at AUDC, Robert Sumrell, suggests that complete availability and the ease of search will undo the need to read books at all. To some extent, I suppose it will. Moreover, the staggering amount of knowledge that will be at your fingertips may well act as a disincentive to create. The music industry hasn’t just collapsed because nobody is purchasing music: there haven’t been any new movements since hip-hop, alternative, and electronica. In part, this may be because technology now makes it possible to produce any sounds you can conceive of, so technology is no longer a driver for music; in part, it may be because local scenes just don’t develop the way they used to now that music spreads across the world in minutes. When everything is known, Baudrillard would suggest, we have no need for anything anymore. This is a polemic of course, but I think it makes some sense too.

There’s a lot of noise now about how media-rich magazines will solve the problem of monetizing content for the press. But who will pay for all that media and all that design? Magazines are already strapped. I am not convinced that this will work, at least not for the majority of publications.

All that said (and I thought I’d mention that all this was written on an iPad with a Bluetooth keyboard), I can’t deny that the iPad makes me feel like it’s 2010, just as watching a DVD for the first time in bed on my laptop in 2001 (and yes, it WAS 2001: A Space Odyssey) made me feel like it was 2001. Welcome to the future, now let’s see how we survive it.

Location: N Fullerton Ave, Montclair, United States


Read the Infrastructural City

I’m delighted to announce that the good people at m.ammoth.us have organized an online reading group to read The Infrastructural City. Find out more at their site.

Like Networked Publics, The Infrastructural City has become a long-term project that goes beyond the bounds of Los Angeles. I’m currently immersed in the Network Culture book, but I have plans for a follow-up to my introduction to The Infrastructural City later this year and maybe even a book some time after that.


On Localism

Many thanks to everyone who came out yesterday and to all of the participants on the panel. Our next panel is on politics and will take place April 13. Stephen Graham will be our special guest, and our focus will be the topic of his next book, Cities Under Siege. Video of today’s panel will be up by the end of the day, or so I hope.

To me, the most interesting point raised by the panel was a distinction between localism and conventional ideas about local place. For many people today, localism is a counterpoint to globalization. "Locally-sourced" produce, local food (particularly slow food), and local crafts undo the sameness that globalization relentlessly imposes everywhere.

Localism is a reaction to the loss of place which, if we follow Marc Augé’s definition from his book Non-Places, is a space with significance, a space in which meaning accrues out of historical activity. Think of a market in a town square to which the same people go daily to sell or buy produce. Over the years relationships build: children grow up, adults grow old, days gone by are remembered. For Augé, non-place, that is, the spaces of transit that we pass through, disconnected from others, is rapidly obliterating place. I’ve argued elsewhere that in the two decades since he wrote his book, Augé’s non-place is itself disappearing: instead we live in an oversaturated world, and non-places become not spaces of disconnection but rather spaces in which we connect with others.

But localism isn’t a return to place. For many of us, the necessities of a highly specialized job market (how many architecture historians studying contemporary telecommunications do you know?) force us to move around too often to develop a lasting connection with a place. Localism is a simulation of the local. We make connections, we become regulars, we have intense but fleeting relationships with others, generally based around consumption (either with the staff at our favorite local restaurant or with the friends we go there with), but for most of us it’s temporary. Soon we’re on our way again. The ties break, or at best, are held together by the Net. Perhaps this accounts for localism’s wistfulness. Place is tragic: a great hope shattered by the Fall. Localism is comic: a temporary reconciliation that everyone knows is momentary, a bit of light laughter that helps us forget the inevitable.


Today We Collect Nothing

The assignment, by now familiar to many of you, is to take two texts—Alison & Peter Smithson’s "But Today We Collect Ads" and Reyner Banham’s "The Great Gizmo"—and juxtapose them, with relevant commentary.

Published in 1956, the essay by the Smithsons precedes Banham’s by nine years. It won’t be my first time looking at it, so let’s start there. In "But Today We Collect Ads," the Smithsons look back at the era of heroic modernism, when European architects like Gropius and Corbusier turned to American industrial construction for inspiration. Consciously or not, the Smithsons identify an epochal economic transition in the works, from Fordist industrialism to post-Fordist media and services. Instead of photos of grain silos, they collect ads. Since they are writing in 1950s Britain, in an economy constrained by postwar recovery and rationing, the ads are from overseas, filled with images of high-tech life in America and disposable throw-away bits of color from Japan.

For once, architecture was ahead of the game. Who else imagined, in 1956, that it would be the media, not big industry that would dominate economies, in direction, if not (immediately) in revenue? This essay anticipates the cultural logic of late capitalism at work and in that, is remarkable.

It also lays out a method: the high dips into the low for renewal, much as Frederick Jackson Turner’s American went to the wilderness to renew himself after too much time in the city. As the Smithsons point out, this is a tried and true modernist method, far from the studied confusion of the two under postmodernism.

Today, however, this hardly seems plausible. Network culture levels out the differences between high and low. The transactions are constant, and if our access to knowledge isn’t perfect, it’s awfully close. There’s nothing to collect anymore, no sources out there that are unknown to architecture. The Internet and globalization put paid to that. From Baudrillard’s perspective, this condition of total knowledge puts an end to history. I suppose he’s right: the coolhunt is over; we live in a world of stylistic tropics, as Brian Eno puts it. Now, I hardly think it’s the end of intentionality, that we all might as well abandon culture for "intentionless" (as if there were no intent in code) generative architecture, but if not that, what then?

One hint is the utter collapse of the ad market during this restructuring period. The Post-Fordist darling is as dead as the automobile industry. Instead, technology has come to the fore, which leads us to Reyner Banham’s "The Great Gizmo."

To be honest, I’m feeling a little exasperated with Banham now. I have to write about him for this piece and I have to put together a lecture for the University of Michigan in which I use Banham as the departure point as well. He’s an unquestioned hero to a generation of critics, which means it’s time to question him, something that I’ve been doing for half a decade now, albeit apparently with little impact. I know that my positions are too negative for many tastes, but I have my issues with the foremost advocate of technology and non-plan in architecture, patient zero in the conversion from morality to techno-fetishism.

Still, what of the Great Gizmo, and how do I reconcile attacking Banham for techno-fetishism with my suggestion that this is precisely what we should be looking for in his essay? Let’s recap Banham’s argument, briefly: he describes the American type as overcoming the challenges of nature with portable technological apparatuses. I’m not sure where Banham slots infrastructure and the technological sublime that David Nye has written about so eloquently into his narrative, but still, you get the picture, so let’s run with the gizmo for now.

Since Banham’s day, the gizmo has continued to evolve, from transistor radios, tape players big and small, laptops, portable media players, and smart phones to tablets. Even automobiles have been reshaped as gizmos, possessing not one but many electronic brains, each ready to monitor, interpret, and control an aspect of the vehicle’s performance. That architecture hasn’t caught up yet is a challenge to architects. No matter how cleverly designed by the latest parametric-modeling software, virtually all buildings remain dumb boxes, incapable of making decisions or accommodating the rapid pace of change. Of course I know that good friends and colleagues are working on this sort of thing (for example, the Living), but it’s not here yet, and that’s what counts. Our lives have been thoroughly transformed by gizmos, but our architecture is much the same. Give us our smart phones and laptops and put us in the waiting room of Sterling Cooper and we’d be fine. Perhaps we might notice it’s a bit warmer than we’d like in winter and colder than we’d like in summer due to the pre-OPEC penchant for over-conditioning the environment. But certainly there’d be little difference, nor would we see much difference in the places the mad men lived in, although we might notice that they had less junk than we do. Compare that to life in the 1920s, when air conditioning did not exist, refrigerators were scarce, and many homes in the countryside had outhouses instead of toilets.

If you did time-travel back to the 1960s and happened to ask an aficionado of architecture (like Banham) about the future of houses, you’d get a vision virtually identical to the Wired dreams of our own day. Perhaps the only real difference would be the embrace of obsolescence and the lack of green, but both visions would include houses that could be remotely controlled, might reconfigure themselves as needed, anticipating our needs and desires, would be able to whisk away waste, and might even clean up after themselves. The future, when it comes to houses, has been deferred.

To blame architects would be a mistake. Something larger is at work here, something coded deep into society. My sense is that the house simply isn’t necessary.

In part this is the fundamental innovation of American architecture. The Puritans could have chosen to build houses out of materials that would last, like brick and stone, but they didn’t, believing that the Second Coming was on its way and that to build for posterity would be tantamount to blasphemy. With the excess of capital this produced, settlers were able to spread swiftly across the continent while industry could expand even more rapidly than in Europe.

That’s one narrative, and it should be placed beside Banham’s. But back to the gizmo. Already for the settler, it had taken over, letting the house become just a dumb box, a background condition. But don’t take my word for it; check out another essay by Banham, in this case "A Home is Not a House," in which he suggests that the ultimate modernist environment might be a conditioned space within a membrane enclosure, an "environment bubble" that seems to exist as much in metaphor as in reality.

[Image: Banham’s environment bubble]

So it’s back to this will kill that, only this time via Banham not Hugo, and the house instead of the cathedral. But if our dreams are similar, and if the Internet realizes Marshall McLuhan’s Global Village, then what about the bubble?

Could it be that the real estate bubble finally unloaded the house? Is it possible that this recession isn’t just a recession but a fundamental restructuring, a restructuring of architecture that will undo the dumb box? By this I don’t mean that the house will simply become smarter, more sentient, more technological, but rather that, in turning architecture into a virtual product in the financial realm, the bubble allowed it to become a virtual product in the physical realm.

We will need at least a decade to absorb the excess housing currently on the market. With the credit crunch, it’ll be much harder for renters to purchase their first homes and for homeowners to sell their underwater homes. Jobs will be scarce and individuals will have to be willing to travel for them. Mobility will rise, but homes will become less the spaces of self-realization that they were for the last decade (and which I predicted they would be back in the 1990s) and more shells to be filled temporarily, with only a few highly intelligent objects in one’s possession. Maybe this is the dream that gizmo-creator Steve Jobs was unconsciously recreating from Banham in his own home in the early 1980s?

Is this an end condition to architecture? Maybe. But when hasn’t architecture been in an end condition? Even modernism’s noble efforts were tilting against an impossible windmill of capital and postmodernism fared even worse. I’m sure some of us will figure out some way to keep the discipline doing the same old thing. But maybe there are other possibilities? 

It strikes me that architects are missing a major opportunity here. All of this is very similar to what the Eameses were up to when they moved away from construction to media. They built the best house of the century but architecture couldn’t hold their attention. It was too slow. Instead, they turned to media. Today’s media are more spatial than film ever could be. Hertzian space—and the interface to it—is the new frontier.

Architects should be sure not to miss out.

#lgnlgn


On the Creative Destruction of Books

It has become a cliché that the iPad, which goes on pre-sale this Friday, will save the book industry. Apple’s proprietary book purchasing and reading application, Steve Jobs tells us, is so easy to use and so sexy that it will make consumers flock to Apple’s e-books.

If only it were that simple. Capital is in a new position now, having become far more efficient at creating weapons to destroy industries than Communism ever was. Creative destruction is now loosed like never before, the contradictions that capital inspires destroying industries without offering any hope that they will be replaced.

In this case, my educated hunch is that Apple’s painfully quaint bookstore will be an also-ran. This doesn’t mean that it, and its competitor at Amazon, won’t make money. After all, the iTunes Store has been a smashing success. On the other hand, I suspect that book piracy will be to this decade what music piracy was to the last. Today, with a little bit of legwork, you can find virtually any music you ever wanted online for free. I predict that in less than a decade this will be true for books as well.



On atemporality

I wanted to lay out some thoughts about atemporality in response to Bruce Sterling’s great presentation on the topic over at Transmediale.* We’ve had a dialogue about this back and forth over the net, in places like Twitter and it’s my turn to respond. 

The topic of atemporality is absorbing my time now. I have the goal of getting the first chapter of my book on network culture up by the end of next month (I know, last year I thought it would be the end of March of that year, but so it goes) and it is the core of an article that I’m working on at present for the Cornell Journal of Architecture. 

Anyway, I was impressed by how Bruce framed his argument for network culture. This isn’t a new master narrative at all, there’s no need to expect the anti-periodization take-down to come, or if it does, it’ll be interesting to see the last living postmodernists. Instead, network culture is a given that we need to make sense of. I was also taken by how Bruce gave it an expiry date: it’s going to last about a decade before something else comes along. 

Then there’s Bruce’s tone, always on the verge of laughter. It’s classic Bruce, but it’s also network culture at work, the realm of 4chan, lolcatz, chatroulette, and infinite snark. And I can imagine that one day Bruce will say "It’s all a big joke. I mean come on, did you think I was serious about this?" And I’d agree. After all, a colleague once asked me if the Internet wasn’t largely garbage, a cultural junkspace devoid of merit. Of course, I said, what do you take me for, a fool? She replied that she was just wondering since, after all, I studied it. I said, well, yes, it’s mainly dreck, but what are you going to do with these eighty trillion virtual pages of dreck, wave your hands and pretend they’ll go away? It’s not going to happen. So yes, snark is how we talk about this cultural ooze, because that’s not only what it deserves, it’s what it wants. To adopt a big word from literary criticism: snark is immanent to network culture.

I was also taken by Bruce’s description of early network culture and late network culture. Again, network culture isn’t a master narrative. It has no telos or end goal. We’re not going to hold up Rem Koolhaas or hypertext or liberalism or the Revolution or the Singularity, Methusalarity or anything else as an end point to history. In that, we part from Hegel definitively. Instead, network culture is transitional. Bruce suggests that it has ten years before something else comes along. He also talks about early network culture, which we’re in now, and late network culture, which we can’t really anticipate yet.   

I think he’s on to something there, but I think we need to make a further division: network culture before and after the crash. The relentless optimism of the pre-crash days is gone, taking starchitecture, Dubai (remember Dubai?), post-criticism, the magazine era, Prada, and hedge fund trading with it. We are in a different phase now, in which portents of collapse are as much part of the discourse as the next big thing. Let’s call it the uneasy middle of network culture.

Things are much less sure and they’re unlikely to get any better anytime soon. It’s going to be a slow ten years, equal to the 70s or maybe somewhere between the 70s and the 30s. Instead of temporary unemployment, we’re looking at a massive restructuring in which old industries depart this mortal coil. Please, if you are out of work, don’t assume the jobs will return when the recession ends. They won’t. They’re gone.

But as Bruce suggested, we have to have some fun with network culture. Over at the Netlab research blogs, we’re starting to put together a dossier of evidence about practices of atemporality in contemporary culture. You’ll be hearing a lot more about atemporality from me over the next month. 

*The talk is below. 

If you prefer, you can now read the transcript online here.


2/9/10 Discussions in Networked Publics

The Network Architecture Lab announces a series of evening panels entitled "Discussions on Networked Publics" at Columbia University’s Graduate School of Architecture, Planning and Preservation’s Studio-X Soho facility to investigate the changing conditions of media, architecture, and urbanism today.
The mass audience and mass media analyzed by the Frankfurt School are long gone. As digital media and network technologies become increasingly integral to everyday life, the public is transforming. Today we inhabit multiple, overlapping, and global networks such as user forums, Facebook, Flickr, blogs, and wikis. In lieu of watching TV, listening to the radio, or playing records, we text each other, upload images to social networking sites, remix videos, write on blogs, and make snarky online comments. The media industry, which just a decade ago seemed well established, is in flux, facing its greatest challenge ever. If we can be certain of anything, it’s that, as Karl Marx wrote, "all that is solid melts into air."

In 2008, we published Networked Publics (MIT Press), a book produced in collaboration with the University of Southern California’s Annenberg Center for Communication examining how the social and cultural shifts centering around new technologies have transformed our relationships to (and definitions of) place, culture, politics, and infrastructure.

“Discussions on Networked Publics” seeks to explore the ramifications of these changes, giving particular attention to architecture and cities. In a set of five panels—culture, place, politics, infrastructure, and network society—we will explore the consequences of networked publics in detail. Our goal will be to come to an understanding of the changes in culture and society and how architects, designers, historians, and critics might work through this milieu.

The first panel is on culture. Our panelists will address the question of how media, architecture, and architectural media are changing in the context of networked publics.

Panel 1. Culture
9 February, 6.30
featuring: Michael Kubo, Michael Meredith, Will Prince, Enrique Ramirez, David Reinfurt, and Mimi Zeiger

Panel 2. Place
25 March, 6.30

Panel 3. Politics
13 April, 6.30
featuring special guest Stephen Graham

Panel 4. Infrastructure
4 May, 6.30

Free and open to the public
RSVP: [email protected]
Events begin at 6:30 unless otherwise noted.
Studio-X New York
180 Varick Street, Suite 1610
1 train to Houston Street
[Studio-X is a downtown studio for experimental design and research run by the Graduate School of Architecture, Planning and Preservation of Columbia University.]
