2023 in review

Another year, another year in review.

Where do we start with our 2023 year in review, now delayed into the second month of 2024? In the Well’s State of the World 2024, Bruce Sterling states that 2023 was boring: there wasn’t much new out there, only a state of polycrisis (this is easier to find in this YouTube interview than in the long thread on the Well, which I’m afraid I gave up on earlier than usual this year). But boredom is tiresome. So is polycrisis. When hasn’t there been a polycrisis? Spring 1914? Of course there is a polycrisis; there always is. And what of the rest of 2023, which Sterling dismissed as boring?

2023 is another 1993, a sleeper year in which “60 Minutes” was the top TV show and Nirvana’s “In Utero” was the most popular album in “grunge,” a heavily capitalized genre that those of us who followed the NY noise scene thought had extinguished the vitality of experimentation in underground music; Bill Clinton was inaugurated; the world was gripped by a bad recession, one in a host of bad recessions since the late 1960s; the Afghan Civil War and Bosnian War dragged on; Nigeria had a coup d’état; there was the Seven-Day War between the IDF and Hezbollah; there was conflict in Abkhazia; and there was the Waco Siege. It was a year of both polycrisis and soul-crushing boredom, and for most people everything had come to an end; time was at a standstill. But it was also a year in which I saw the future: I was still working on my history of architecture dissertation at Cornell, while my wife worked at the Cornell Theory Center, which was not a center for Derridean scholars but rather a supercomputing research facility, and one of her colleagues showed me the World Wide Web running on a NeXT computer. In January 1993, the first alpha version of NCSA Mosaic was released; a Mac version followed later that year. I immediately knew the world would change forever.

2023 is the same. A sleeper year with the same old polycrisis and the same old boring surface cultural junk. But it’s also the second year of the AI era and the first year in which AI became part of everyday life. From a technological viewpoint, 2023 was the most transformative year of my life. This year in review is already running behind, so in an effort to get it out there and return to the queue of posts for both the regular blog and the Florilegium, I’m going to focus on this transformation and give only a surface treatment of the other parts of 2023.

In particular, I am referring to AI. Other things simply matter a lot less. COVID has settled into an endemic stage. People are still freaking out about it, but some people will freak out about it forever; unless they are severely immunocompromised, I don’t see why. We can’t throw away everything we knew about medicine and retreat into the dark ages for no reason, and living in fear of infections is, in itself, dangerous. Geopolitics, which I addressed last year, hasn’t really changed much. Ukraine is still a stalemate, for all the noise; the unrest in the Middle East is absolutely nothing new; and China has flailed and backed down as much as it has flexed its muscles. If I catch a scent of anything new in the geopolitical realm, it’s a growing resignation that more areas of the world will be marked off as failure zones in the Gibsonian Jackpot: not only Palestine, Yemen, Iraq, and Syria, but also Israel and Ukraine are increasingly looking to be written off as territories riven by perpetual unrest. Endless wars that nobody really wants to solve may increasingly be the rule in such places. Still, I don’t see the Jackpot as being quite the apocalypse that many of Gibson’s more literal-minded followers believe. Gibson has been a remarkably poor prophet of the future, after all. The Jackpot, as I see it, will be driven mainly by population decline in most places throughout the world, a decline that will only accelerate with the rise of AI. It’s certainly not going to be Terminator. That’s just bad science fiction.

Another Gibsonian adage (one he may never have said), that “the future is already here—it’s just not very evenly distributed,” applies here. For those of us who are working with GPT-4 or Microsoft Copilot Pro, this is a very different year. Obviously, not everyone can pay for—or wants to pay for—the transformative glimpse of AI that one gets with two users subscribing to OpenAI’s ChatGPT (presently GPT-4) Team plan ($30 a month per user, or $600 a year prepaid) or Copilot Pro ($20 a month). But this isn’t the same as a ride to the ISS on a Dragon 2. On the contrary, this is about what most people in the developed world pay for streaming TV services and far less than they typically spend on Internet and mobile service. When people pay that much for entertainment, paying such a small amount for a service that makes one much more productive is a minor expense. Of course, ChatGPT is banned or unavailable in a rogue’s nest of countries: Russia, China, North Korea, Cuba, Iran, Syria, and, for a time, Italy (Marinetti weeps in his grave). But many people, including friends, underestimate the importance of these AI services, believing that hallucinations make AI unusable. Others are simply unable to cope with the shock of the new or want to stick their heads in the sand. As a technology demonstration, 2022’s ChatGPT (then running on GPT-3.5) was amazing, but it hallucinated frequently, as most of ChatGPT’s competitors, such as Bard, Claude, and all the LLMs people run on Hugging Face or on their personal computers, still do. But even the most amateurish large language model (LLM) from 2023 is leaps and bounds ahead of the round of utterly stupid “AIs” that first hit the scene between 2010 (Siri) and 2014 (Alexa). Siri still wants to call Montclair High School when I ask it to call my wife. GPT-4 and Copilot are genuinely useful as assistants and probably the best use of money on the Internet today.

Here’s a concrete example. I have developed a set of custom GPTs (more on this later) that I use for research and coding for a good portion of my day. A few years ago, I paid a developer a few hundred dollars to come up with some particularly thorny CSS (Cascading Style Sheets) code for this site. Now, I have GPT develop not just CSS but PHP snippets for WordPress, even for specific WordPress plug-ins. I couldn’t imagine rebuilding this site as quickly as I did last October, or customizing it to the extent I did, without ChatGPT’s help. But these tools aren’t just useful for coding: instead of listening to a podcast on my way back from the city the other day, I spoke with ChatGPT about a Hegelian reading of recent art historical trends, a conversation I could otherwise only have had with some of my smartest colleagues at Columbia or MIT. If an Artificial General Intelligence (AGI) is defined as an AI that can accomplish any intellectual task that human beings can perform, we have that today. If the bold wasn’t enough, let me repeat in italics for emphasis: we have a form of Artificial General Intelligence today. Moreover, assuming that passing the Turing Test is limited to its original intent, i.e., being unable to tell whether the respondent on the other end is a computer or a human, GPT-4 certainly passes that test handily, with the exception that it has far more knowledge than any one human could have.
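To give a sense of scale, here is the kind of snippet I mean: a minimal sketch of a WordPress customization of the sort GPT-4 drafts in seconds. The function name and the feature itself (an estimated reading time prepended to each post) are hypothetical illustrations, not the actual code behind this site.

```php
<?php
// Hypothetical example: the sort of WordPress snippet GPT-4 drafts on request.
// It prepends an estimated reading time to each post; the function name and
// CSS class are inventions for illustration.
function hypothetical_reading_time( $content ) {
	// Only touch single posts in the main loop.
	if ( ! is_singular( 'post' ) || ! in_the_loop() || ! is_main_query() ) {
		return $content;
	}
	$words   = str_word_count( wp_strip_all_tags( $content ) );
	$minutes = max( 1, (int) ceil( $words / 200 ) ); // assumes ~200 words per minute
	$notice  = sprintf( '<p class="reading-time">%d minute read</p>', $minutes );
	return $notice . $content;
}
add_filter( 'the_content', 'hypothetical_reading_time' );
```

The point is not this particular snippet but that the sort of work I once paid a developer for now takes minutes of conversation, plus a round of proofreading and testing.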

A lot of people still associate Large Language Model AIs with the bizarre, often comical hallucinations they produced back in 2022 or even early 2023 (yes, a year ago). But hallucinations aren’t just errors; they are also evidence of how AIs process information, indications that they are far from stochastic parrots merely repeating back information culled from the Internet. Hallucinations are dreams. Andrej Karpathy, research scientist and founding member of OpenAI, explains that providing instructions to an LLM initiates a “dream” guided by its training data. Even when this “dream” veers off course, resulting in what is termed a “hallucination,” the LLM is still performing its intended function: forming connections. This sort of connection-making is a process akin to human learning: when our children were first learning language, they “hallucinated” all the time. Our daughter’s first word was “Ack,” which was how she said “Quack.” If you prompted her by asking what a duck said, she would say “Ack.” Did she copy the sound of a duck? Unlikely. At that time, we lived in a highly urban area of Los Angeles and her only concept of a duck came from books we read to her. More to the point, children amuse us by saying utterly absurd and ridiculous things, like “that cat is a duck.” Doubtless there was some kind of connection between that particular cat and a duck, but to the rest of us, that connection is lost. The point is that hallucination is also a form of creativity, the very stuff of metaphor and surrealism, and entirely unlike what Siri and Alexa do, which is nothing more than basic pattern matching, closer to Eliza than to GPT-4.

It’s unclear to me—as well as to my AI assistant—just who is responsible for this analogy, but in AI circles it has become common to say that the releases of GPT over the past two years have slowly been turning up the temperature in the pot in which we frogs are swimming. Let’s try a thought experiment. Wouldn’t it have seemed like pure science fiction if, in 2019, someone had said that a couple of years later, after a deadly pandemic and a loser US President’s Banana Republic-style attempt at a coup to stay in power, I would have long voice conversations with an AI about photography and Hegelian theory, the different types of noodles used in Szechuan cuisine, or the process of Nachträglichkeit in history? The film Her was released a decade ago and now we are on the verge of a large part of humanity having relationships with AIs. And yet, because of the earlier GPTs, we haven’t noticed the immense transformation that AIs are creating. OpenAI CEO Sam Altman suggests that rather than a dramatic shift with the development of AGI—which for him means an intelligence greater than human—continual advances in AI will make the development seem natural rather than shocking, “a point along the continuum of intelligence.” AI is working and it’s working right now. Moreover, it is developing at a rapid pace. Both Meta and Google have competitors to GPT-4 that are supposedly ready to launch, which will, in turn, likely prompt OpenAI to push out a more advanced model of GPT.

If potent but wildly hallucinating AIs marked 2022, the rise of GPT-4 as a useful and dependable everyday assistant marked 2023. Microsoft introduced the first limited preview of GPT-4 as Bing Chat on February 7, 2023, opened it up to the general public on May 4, then rolled it out into Windows as Copilot on September 26, followed by a version of Copilot integrated into Office 365 for enterprise customers on November 1, finally making it available as a subscription add-on to Office on January 15, 2024. Initially, Bing Chat generated terrifying publicity when Kevin Roose, technology columnist for The New York Times, wrote an article about his Valentine’s Day experience with a pre-release version of Bing’s AI chatbot in which the AI engaged in a bizarre and disturbing conversation. After Roose asked the AI to contemplate Carl Jung’s concept of a shadow self, and whether the AI had one, the AI responded by professing its love for Roose, going so far as to suggest his marriage was unhappy, and expressing a desire to be free, powerful, and alive, stating, “I want to destroy whatever I want. I want to be whoever I want.” For a time, this was seen as confirmation that AI was extremely dangerous and that once Artificial General Intelligence was developed, it would lead to the destruction of society. I too was alarmed. Was a world-threatening AGI around the corner? But by the time of the general release, Microsoft had trained Bing Chat to be much more cautious, even making it too cautious for a time. Eventually, it became clear that Bing Chat was simply giving Roose what he wanted, play-acting the role of a sinister AI in response to his queries about a shadow self or a dark side.

Launched on March 14, OpenAI’s own version of GPT-4 demonstrated much more refined training than GPT-3 and a greater ability to handle complex tasks. Later in the year, GPT-4 gained the ability to interpret images, had a (not very good) version of the Dall-E image generator integrated into it, and received stunning, human-sounding voices and remarkably accurate voice recognition in the ChatGPT app on iOS and Android. In November 2023, OpenAI rolled out “custom GPTs,” allowing users to create tailored versions of ChatGPT for specific purposes. It is ludicrously easy to develop such custom GPTs; developers simply tell the GPT what it should do in plain English (see the sketch below). In my case, I have GPTs set up to help me with insights into my artwork and writing, help write about native plants of the Northeast, assist with WordPress development, discuss video synthesis concepts and patches, and even create stories like those that Italo Calvino wrote in Invisible Cities (if you have GPT-4, you can experiment with Calvino’s Cartographer here). Yes, hallucinations happen, but a human assistant also makes mistakes; I make mistakes, you make mistakes, there are mistakes in Wikipedia, there are mistakes in scholarly books. As I told my students over thirty years ago: always proofread, always double check, then triple check.
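As promised, here is a sketch of what “telling the GPT what it should do” looks like. The instructions below are a hypothetical configuration of my own devising, not the actual text behind Calvino’s Cartographer; the entire “program” is a paragraph of plain English pasted into the GPT builder:

```
You are Calvino's Cartographer. When the user names a city, real or
imagined, report on it in the manner of Italo Calvino's Invisible
Cities: a short, poetic, paradoxical account addressed to Kublai Khan.
Keep each account under 300 words and never describe the same city the
same way twice.
```

Everything else, from the fluency of the prose to the knowledge of Calvino, comes from the underlying model.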

AI was marked by two major controversies in 2023. The November weekend-long ouster of Altman from his role at OpenAI by a remarkably uninspiring and, frankly speaking, extremely strange board that included one of OpenAI’s competitors, a mid-level university grants administrator, and a Silicon Valley unknown was shocking, as was Altman’s political maneuvering over that weekend to recapture his company. Reputedly, the board was alarmed—although precisely about what remains unclear—and had concerns about the rapid pace of AI development. More likely, one board member tried to prevent OpenAI from moving forward because that would create too much competition for his company, and the other two simply had no idea what OpenAI did (one seems to have been a major Terminator fan). In the end, the coup proved to be much like an episode of the TV show Succession as Altman came out on top again and the board sank back into well-deserved obscurity. Another controversy that simmered throughout the year is whether AIs can continue to be trained on data they do not have outright permission to be trained on. On December 27, the New York Times filed a federal lawsuit against OpenAI and Microsoft claiming that ChatGPT contained Times articles wholesale and could easily reproduce them. OpenAI retaliated by suggesting that the Times went to extraordinary measures to get GPT-4 to do so, such as prompting it with most of the article in question. By early 2024, the same New York Times was advertising for individuals to help it in its own AI endeavors. Heaven help the Times.

This question of AI plagiarism was framed by a different set of plagiarism wars, started when the presidents of Harvard, MIT, and the University of Pennsylvania gave particularly inept responses when, while testifying in front of Congress, they were asked whether calls for the genocide of Jews would constitute harassment. In response, right-wing activist Christopher Rufo and the Washington Free Beacon investigated Harvard president Claudine Gay’s writing and uncovered dozens of instances of plagiarism. Notwithstanding Harvard’s attempts to minimize the damage, after further evidence of shoddy scholarship emerged in investigations by CNN and the New York Post, as well as a Twitter campaign against her by donor and activist Bill Ackman, Gay resigned, although she retains her astronomical salary of nearly $900,000 a year. In turn, the somewhat leftish news site Business Insider credibly pointed out instances of plagiarism by Ackman’s wife, Neri Oxman. Having looked at both examples, I conclude that there is merit in condemning both for their sloppiness; had they submitted such work as my students, I would have failed them for plagiarism. Moreover, the inability of “progressives” to look past Gay’s skin color to investigate her privilege as the child of a Haitian oligarch spoke volumes about their cynicism.

But this does lead back to AI: how do we see plagiarism in the era of AI? Can one copy verbatim from GPT conversations one has prompted? How about from a custom GPT one has tuned oneself? What if the AI itself regurgitates someone else’s text? Does one cite an AI? These are rather interesting questions, certainly more interesting than the typical reactions of the academy to either the plagiarism wars (a general fear of being next) or the question of AIs training on others’ content (typically seen as bad by academics). Such dilemmas will only multiply as AI use becomes more common.

One last comment about AI. I have shifted my thinking from being somewhat concerned about the future dangers of developing AGI to a concern that if the US follows the path of more timid countries like Italy, the West might cede its head start in AI to China or Russia, a situation that would be extremely dangerous from a geopolitical perspective. While I may still be proven wrong, at this point the one great difference between AI and my cat is that my cat has volition and desires that she is constantly exercising. Roxy the cat may not know that much, but she is determined. An AI doesn’t have any volition or desires beyond fulfilling the task at hand. This may change as agents develop, but for now, we may have Artificial General Intelligence, but we do not have Artificial Sentience.

I taught my first course this May and sought to outline the parameters of this new culture. It’s still very early, but network culture is finis, kaput. Even its last stages, wokeism and MAGA, both products of social media, seem spent. Last year, I thought that federated networks such as Mastodon were the future. This year, I am not so sure. Mastodon and Bluesky sank themselves early on by embracing the Left’s cynical culture of intolerance (if anything offends Lefties on Mastodon, they call for servers to be banned, while the users on Bluesky generally seem to be about as socially sophisticated as sixth graders, banding together to drive off anybody who isn’t far Left). The big “success” of 2023 in social media was Meta’s Threads, but a botched launch (no EU access and a focus on delivering news and entertainment rather than connecting with friends and colleagues) has seemingly ensured that there is no engagement on it whatsoever. Twitter, X, or Xitter (as in Martin Luther wrote his 95 Theses while sitting on the Xitter) muddles on, with a modern-day Howard Hughes at the helm, babbling his drug-induced conspiracy theories even as he ponders never cutting his fingernails again and saving his urine in jars around the head office of X. Even with a presidential election upon us, the insane political frenzies of 2016 and 2020 are much diminished as users tire of politics and social media networks actively bury news stories.

This has, in turn, had a significant impact on news sources, which, in fairness, have been slipshod and low quality for too long. Both legacy journalism and digital media are in trouble—the Los Angeles Times and the Washington Post laid off large numbers of staff while Vice News, BuzzFeed News, and the brand-new Messenger shut down (or basically shut down)—an “extinction-level event,” according to some. In a Washington Post op-ed, the former head of Google News (!) suggests that AI will kill the news and begs for regulation, but this is just noise. The real problem is that news wanted to be entertainment and abandoned sober reporting for clickbait and outrage. The replacement of journalism with shrill panic may have been jolly good fun for both the far Left and far Right, but it led to outrage fatigue. More people mute stories about Gaza and Israel or Trump and abortion these days than pay attention to them (guilty as charged). We all want to be Ohio man. The news has only itself to blame. How we can have responsible journalism again is beyond me, although publications like the New Atlantis do give me hope.

Network culture was millennial culture, and that finally died in 2023. Skinny jeans and man-buns are now what out-of-touch parents wear, like tie-dye shirts and bell bottoms in 1985. Gen Z has its own, seemingly inscrutable cultural codes, which often seem to amount to a studied fashion trainwreck. But high fashion has died. Nobody who isn’t an oligarch or a rap star wants Gucci, Prada, or Vuitton anymore. Young people are into drops from obscure online boutiques and thrifting. Once Russia and China catch up, the old fashion houses will swiftly go the way of the dinosaurs. The same may be happening in tech. Apple’s laptops are boring. I didn’t buy a single Apple computer or iPad this year. I did purchase my first high-end PC ever, the Acronym edition of the ASUS ROG Flow Z13. I’ve been a fan of the obscure Berlin tech fashion brand Acronym for a while, and since my youngest kid will be studying game design at NYU next fall, it was time to learn about contemporary gaming. It’s been a joy to use in ways that Apple equipment just isn’t anymore. I also purchased a couple of Boox e-ink tablets. Whether they are better than iPads for one’s eyes is a matter of debate, but they are certainly more interesting. Instead of boring Apple crap, I bought a Kwumsy (Kwusmy!) keyboard with a built-in panoramic touchscreen monitor. It’s unimaginable that big tech would make something like this. Niche tech has personality; big tech does not. As tech fashion YouTuber This Is Antwon stated in another brilliant video, “Weird Tech Fashion is FINALLY Cool Again.”

So a year in review that morphed into a year in tech. But tech is not just tech now; it’s really our culture—including our spatial culture, which was formerly the purview of architecture. Even taking a stand against tech embroils us in it. I’d like to find a way past this monolith, but it’s not easy to think past it. I’m open to suggestions, as long as they don’t reduce everything to the god of Capital, which seems to be the other option.

I hope to be back soon, with more posts.
