Hypernormalized AI

The push to normalize AI is getting frantic.

[Rick Wysocki]

HyperNormalisation, a BBC documentary by the film-maker Adam Curtis, is interesting to think about amidst all this AI talk. So much AI talk. An embarrassment, frankly, of AI talk. Or, at least, an embarrassment of AI talk that is, based on the arguments and ideas that tend to dominate the conversation, embarrassing.

I won’t spend too much time outlining the film. It’s quite weird, and the connections to AI are a little oblique. Essentially, Curtis takes up Alexei Yurchak’s theory that toward the end of the Soviet Union, a phenomenon of “hypernormalization” occurred in which everyone was compelled to uncritically, rapidly, and consistently accept new social fictions. The argument of the film is that since the ~1970s, technologists have given up dealing with the problems of the real world and instead invented the virtual world (mass media and the Internet) to benefit the neoliberal order.

Contrary to what you might think, the connection isn’t that the powerful are going to trap us inside an LLM, or that AI is going to enslave humanity, or any of the other delusive misdirections that Silicon Valley doomers are, honestly or cynically, pumping out. The connection is speed, a condition where society is purposefully denied deliberative time on technologies whose widespread use companies are heavily invested in securing. Much of the world around us is trying to hypernormalize AI.

One way this is being done in the social sphere is through AI increasingly being introduced “subtly” as a topic within media. While it didn’t gain much success (for unrelated, quality-based reasons), M3GAN 2.0 notably opens with a very “random” connection to Iranian villains before presenting a narrative about an AI regulator coming to learn that the problem isn’t with AI technology; it’s with the humans who employ it. Similarly, the most recent season of Only Murders in the Building represents LSTR as an AI robot that ends up a sort of sub-hero within the plot. As in M3GAN, LSTR has a monologue explaining that it’s humans who make each other’s lives unhappy, not technology. AI doesn’t hurt people, people hurt people. Or consider that AMC is showing an ad for Google Gemini that is designed to look like a movie trailer, and comes right between two real ones.

Just look at the technology you’re using. How many tools do you use that now have some GenAI feature popping up at you? I would hazard a guess that whether you like these tools or not, you weren’t consulted. They were, simply, there one day, normalized as part of the interface. Or think about Apple Intelligence. Or how ads for computers now emphasize being “Copilot+.” Or how AI is already in use in the military. Or how it’s used to target Palestinians for state execution. Or…

This is hypernormalization, accelerating because tech companies depend on these tools being socially normalized. It is increasingly reported that, as Rosenbush writes in the WSJ, “the economics of artificial intelligence have turned sharply against them, at least for now, and for reasons that weren’t widely anticipated.” Urgent concerns for AI tech companies include major capital spending, high valuations, high debt, and the circular nature of the sector, with “AI firms pouring money into other firms.” Investors are growing concerned that AI is a bubble showing signs of bursting. As Rosenbush writes, “a handful of players would probably escape a sector collapse and go on to change the world, just like the dot-com survivors did. But even the most likely eventual winners in AI are losing billions of dollars right now.” While data centers are being positioned as the next big push for AI corporate value (with all the geopolitical and environmental concerns that accompany them), these companies are getting itchy, and their attempts to normalize the tech for the purpose of “productivity” will get faster and more frantic.

What’s sad, to close this brief post, is that intellectual work and scholarship in many fields, including my own, is participating in this hypernormalization. Critiques of universities’ exceedingly close relationships with the tech industry notwithstanding (those are table stakes), the amount of scholarship being put out that seems to accept the premise that GenAI is here to stay, right at the moment that its bubble seems most likely to pop, is, to return to the beginning, embarrassing. Speaking for my field, rhetoric, it’s true that since Plato’s time we’ve been a fundamentally reactive field that acquiesces to technological shifts. Perhaps it’s due to our historical status as a “meta-discipline” that, honestly, gives us little to claim as our own. Or, more cynically, maybe it’s the academic careers being born that, like tech companies, rely on AI’s normalization…

Either way, it feels like much scholarship has gone beyond Sloterdijk’s enlightened false consciousness, the unhappy state where we know all the problems but have cynically given up finding solutions. Rather, we seem caught in an unstated but observable desire to accelerate the uptake of a new medium, not to study it or, much less, to change it. Same as it ever was, I suppose, but it would be nice to learn.