On Art and the Universal, I

In his Theory of the Avant-Garde, Peter Bürger concluded that the avant-garde’s purpose is for art to sublate (assimilate) into life. In opposition to nineteenth-century aestheticism, which emphasized the autonomy of art from life, Bürger read the historical avant-garde—be it Dada, Surrealism, Productivism, Constructivism, or the Bauhaus—as aiming to break down the barrier between art and life, allowing the fullness of artistic expression to pass into all aspects of life.

For a large group of people in the developed world, this is now an everyday condition. Members of the creative class curate their lives around aesthetic choices; work and life are inseparable. Our lives are filled with intentional choices that express our individuality: we aspire to cook modernist cuisine, clean up with Marie Kondo, and obsess over the right boots and hat to go gardening in. STEM and maker culture are not opposed but inseparable: who doesn’t make their own jewelry or design their own body art these days, often using 3D modeling software and printers? Tens of thousands of people worldwide sit on Philippe-Starck-designed toilets every day. The workplace is a playground. Even after the recent plague, design festivals and biennales are a dime a dozen. Go glamping in Marfa, spend an evening at the local sip ‘n paint, bring your friends to the immersive van Gogh. This curated life is thoroughly documented, to be posted on Instagram for the world to see.

In fairness, Bürger believed that by the 1950s, when avant-garde techniques from Dada and Surrealism had been incorporated into advertising and television (think Ernie Kovacs or Ray and Charles Eames’s films here), the aestheticization of everyday life was complete and the avant-garde had been dealt a fatal blow. For Bürger, this is a false sublation, but I’m twice as old and jaded as I was when I first read the Theory of the Avant-Garde, and I don’t see how Bürger’s historical avant-garde could ever have been anything but a temporary reconciliation with an ultimately tragic end. The avant-garde was always a historically delimited moment. If you were to say that contemporary culture is thoroughly spectacularized, you would be right, but when a book on Constant Nieuwenhuys sells for $1,892 on Amazon, what is the spectacle anymore? Writing about Situationism has earned more than one professor tenure at a top university. Pinot Gallizio’s works, once sold by the yard, now sell for tens of thousands of euros. The practices of Situationism have long since been absorbed by the spectacle. What is Dîner en Blanc® if not a Situationist practice? What is Situationism if not an excellent guerrilla marketing project?

For Bürger, that the Situationists or Fluxus chose to continue on as a neo-avant-garde was merely an after-effect. No doubt there is much truth there. The historical avant-garde is long dead, and with it the promise of art sublating into life.

Much of the art world has long abandoned any pretense of avant-gardism, embracing instead the idea of self-validation and value. Take NFTs, the realm of garish cartoon apes that seem to have escaped from a Hot Topic store to scream “I am rich.” This is no different from the art at the very top of the market, touted as an investment vehicle that sidesteps the vicissitudes of corporate ups and downs, skipping price/earnings ratios and dividends for an unabashed belief in inflation and the greater fool theory, but in reality acting primarily as a signifier of extreme wealth and good taste (and often a front for money laundering).

Other forms of art and architecture use politics as a form of branding, taking a page from Debord’s idea of the Spectacle. Take the hyper-branded architecture of Rem Koolhaas, Bjarke Ingels, Diller Scofidio + Renfro, and their ilk, often presented by academic “critics” as somehow serving to liberate people (from architectural convention, I suppose) or as progressive (which is just baffling).

These two positions—self-validation and branding—come together in art that espouses a political position or identity politics. Now, the central point of the avant-garde had been to communicate political ideas, and, especially after the Black Lives Matter and #MeToo movements, there has been a burst of interest in the art world in such art. Yet nobody has ever gone to an art gallery and come out a communist. Hedge fund manager Daniel Loeb collects art by Jean-Michel Basquiat, Richard Prince, Mike Kelley, and Cindy Sherman—all of them darlings of Leftist art critics—and yet is a major donor to Right-wing causes as well as a supporter of the neo-fascist menace that occupied the White House from 2017 to 2021. He is merely one egregious illustration; ultimately, one’s political position hardly matters. What does it mean to have an El Lissitzky on one wall and a Frida Kahlo on another? It signifies wealth and aesthetic appreciation, not political allegiance. What does it mean to demonstrate solidarity with an identity group? Why is one lauded for affirming one’s sexuality loudly in art, even if Mapplethorpean transgression can no longer deliver the shock of the new? All this merely demonstrates one’s virtue.

Many members of the bourgeoisie, unable to escape the deeply ingrained notions of Protestantism but questioning its superstitions, have replaced the delusion of original sin with the notion of “privilege.” Surrounding oneself with art that trumpets the identity of its maker is a way of assuaging this guilt, even if—as the notorious Whitney “Collective Actions” show demonstrated—political art’s functional purpose isn’t to change the structural conditions it critiques but rather to underscore and cement those very conditions. Nor is this new: notwithstanding the newness of the phrase “virtue signaling,” virtue and art have long been linked, initially through religion, later through connoisseurship. And, of course, for many artists, the idea that art needs to be socially relevant assuages their own guilty consciences for producing useless things for the rich.

And yet, as Peter Sloterdijk explained in his Critique of Cynical Reason, it is the habit of such guilty consciences to turn cynical. The cynic (in the sense that Sloterdijk and I always use the word) is someone with an enlightened false consciousness: having read critical theory in university, the modern cynic knows that what he or she is doing is wrong but does it anyway. Sloterdijk writes that this makes cynics “borderline melancholics, who can keep their symptoms of depression under control and can remain more or less able to work.” For Sloterdijk, once an individual has become cynical, his or her hope has been lost, abandoned for expediency. Take, for example, the Marxist professor (a figure I met all too often in the university) who realizes that, with the Revolution endlessly deferred, the best thing to do is to defend his or her academic position at all costs in order to keep preaching Adorno and Benjamin, even if that defense comes at the cost of cutting down rising faculty, avoiding any political activity outside the university, or refusing to see staff as human beings worthy of consideration. Fascism—both interwar and present-day American and European fascism—is the ultimate result, of course: a politics based on brutal expediency, in which democracy must ultimately give way to a “politics of pure violence.”

There is, however, a choice that avoids the cynical: embracing the most degraded of all ideas in art today, that of “the universal.” It may not be what many of you expect or find acceptable (although in private conversations, many of you have said that this is precisely what is necessary…). That possibility is the subject of Part II, which will come after an interregnum in which I get some work out there.

On the Academy

I recently ran across an article by Michael Hanlon, "The Golden Quarter: Why Has Human Progress Ground to a Halt?" Hanlon's thesis is that even if we all have supercomputers in our pockets, the big advances—landing men on the moon, computers and the birth of the Internet, the Pill, feminism, the gay rights movement, and so on—all happened in the twenty-five years from 1945 to 1971.

This is true enough, I suppose, although we could argue that personal computing, smartphones, self-driving cars (which I believe will be common by 2020), cellular phone access for the entire world, and the (largely illicit) digitization of much of the world's knowledge into freely available libraries are in fact radically new. If Sputnik and Viking were important, the Mars Science Laboratory rover is a massive advance, as is landing Philae on Comet 67P/Churyumov-Gerasimenko and (we hope) flying New Horizons past Pluto. So, too, citing the birth of the women's rights movement may be disingenuous, its seed having come much earlier, in the suffragette movement. The advances in gay rights during the last five years have been massive. The networked publics that have emerged in the last couple of decades are an unprecedented shift in how we relate to each other, and our own decade is likely to be remembered as the one in which knowledge-based artificial intelligence spread into everyday use in the developed world, not a minor point in human history.

But what's interesting to me about this article is how applicable it is to the humanities. When I went to graduate school, it was an incredibly exciting, even revolutionary time, when French theory was making massive headway and every visit to the academic bookstore promised something new and cutting-edge, if sometimes impenetrable, to read. But the humanities have since come to a crashing halt. When theory is talked about anymore, it is in terms of concepts like "biopolitics," "postcolonialism," and "the control society," all formulated long ago. Maybe I'm grumpy, or maybe these fields are simply no longer new to me, but I suspect something is up.

Here I think Hanlon's point really does apply: academia in particular has become risk-averse. The biggest innovation in academia during the last decade hasn't been in theory; it's been the development of a digital humanities that has largely traded scholarly advancement for funding. With universities increasingly corporatized, academics are expected to fundraise, not to take risks or create innovative theories. Stories of brilliant scholars denied tenure for taking risks and of programs shut down for being too edgy are common.

Moreover, theory itself has become quite conservative. To talk about "accelerationism," for example, or even to suggest that we are no longer under a postmodern condition, is widely met with derision by tenured theorists who might otherwise be expected to sympathize with such experimental thought. But no. Take architecture, where a rather pat formula has emerged that everyone seems to follow: find a largely obscure architect or event from the 1950s or 1960s, head to the archive, draw a few conclusions invoking French theory (generally Foucault), and you're done.

What to do, then? Being Samogitian, my natural demeanor is gloomy rather than optimistic. But I'd like to suggest, optimistically, that leaving the academy may be an opportunity, or at least another possibility.

Marx, Freud, and Benjamin, to take only three key intellectuals, operated primarily outside the university, as did Clement Greenberg, Le Corbusier, Donald Judd, and Robert Smithson. This isn't to say that life outside the university would necessarily be easy—for one, the conditions of journalism today have become quite difficult as well, so that route is a problem—but it points to a line of flight that seems to me most worthwhile to explore these days.
