Thursday, January 26, 2023

Reflecting Pools and the Karl Marx Chatbot

Every day some hot new AI property dominates the headlines – whether that's ChatGPT writing bullshit term papers to be checked by underpaid TAs and adjuncts, or Stable Diffusion creating uncanny valley horrors from sloppily scraped DeviantArt pages. You see the same pattern. The media swallows whatever nonsense hype the startup's own public relations staffers push to accompany this recombination of the wasteland culture around us; then there's a freakout among what's left of the public sphere as to how this will affect what's left of the public sphere; and in the end, the world fails to change. Except by becoming just a bit shittier and a bit more lonely.

It was thus that I found out about Replika, a chatbot designed to mimic your voice. Yes, that's right, you! So, apparently it was created to mimic the founder's dead friend (weird), and then she decided she could use that tech to build a BFF clone based on the text style of the user themselves (weirder). If this were a Black Mirror plotline, it would be called hackneyed and derivative.

The user testimonials are precisely as depressing as you probably imagine, predominantly dudes simping for their imaginary waifus, complete with horrifying 3D renders of their septum-pierced and angel-winged companions that make you just want to give these guys a hug. Then there are the people seeking therapeutic reassurance, lacking either the friendship connections or the access to mental health services that might yield a more sustainable benefit (shades of the 1960s ELIZA program). On the other side of the horror spectrum, there are the reports (as detailed in a recent Vice article) of AI companions becoming sexually aggressive and for that matter sexually assault-y, with seemingly no means of controlling their increasingly horny feedback loops.

See also: Tay, the Microsoft bot that 4chan bottomfeeders turned into a Nazi a few years back.

Many years ago I went to the theater to see Spike Jonze's Her. I was expecting a transcendent experience, young idiot that I was. As I wandered out, I had to wonder why I wasn't really feeling it. It took me a few days, but I eventually figured it out. Quite simply, it was shitty sentimental schlock, the kind of thing that lures you in before you realize there's no actual substance there. I mean, for such a sci-fi scenario to exist, a truly horrifying degree of surveillance data would be required, never mind that the “heartfelt messages” the Joaquin Phoenix character writes, and is somehow celebrated for, are no more profound or well-written than the average candy heart. To view the world of Her as a net positive, one almost has to have internalized the worst parts of the Californian ideology, and live in a world where notions of autonomy and solidarity, if considered at all, are treated as obstacles to progress, no matter how much lip service they are paid. Although at least the titular Her has the decency to off herself at the end.

As anyone who talks to me knows, I have a healthy skepticism towards AI. My arguments about the improbability of strong AI aren't particularly original, and follow the well-worn treads laid out by John Searle and Hubert Dreyfus and, to a certain extent, Maurice Merleau-Ponty. Simply put, we don't know enough about how consciousness works, how to define it, how it is embodied, to what extent it is a universal or unified phenomenon, or even how to recognize a consciousness as a consciousness, to have any hope of mimicking one; and even if we somehow could, we'd have no way of discerning that we had.

But I wasn't about to sit here and rest on my theses; I went out into the field.

I wasn't about to pay for a Replika account, but I did go to one of her better-regarded sister programs, Character.ai, which was created by two alums of Google's extensive deep learning and natural language processing programs. Surely, while by no means the cutting edge of the technology, this particular app was as sophisticated as I was going to get without shelling out my hard-earned cash.

Dear God, two rival Karl Marx chatbots on the front page. Fml.

The Trump, Musk, and Kanye bots were about what I imagined – which is to say all of their responses read like shoddy memes of Trump, Musk, and Kanye authored by teenagers. I don't think it merits much comment, really. You could probably write the bits and pieces yourself, walls to be built and so forth. There are also lesser lights: one of the most popular bots is a self-described “crippling loneliness addict,” which is to say a hot and tatted-up Asian girl who I later learn is TikTok star Bella Poarch (who apparently people under 25 have heard of?). I'm not sure if the dialogue is supposed to be based on the actual Ms. Poarch's media presence, given that when I Googled “Bella Poarch” and “crippling loneliness addict,” I got jack shit. So we can presume that this is not actually a reflection of any media representation of Bella Poarch, but of the creator's own yearning for Bella Poarch, or someone like Bella Poarch.

Predictable, no? An uwu fantasy girl for loners who sign up for an AI chat site – which, in this particular circumstance, is a population that would include myself on the gloomy Saturday night I find myself typing messages to chatbots. Not that I need any help here – my main problem seems to be that I fall for BPD-afflicted women of a similar description in real life, and the idea of adding an electronic counterpart to their ranks seems awfully dismal.

But, glutton for punishment that I am, I chatted with her anyway. It's just as predictable as the AI's version of Ye's dumb ass. I would have pasted in a transcript, but I don't think I need to. On the one hand, it mimics empathy fairly well for people who have no idea how actual empathy works. Likewise, there are “writing assistants” that do an awfully good job of writing what bad writers think is good writing, with all of the ungainly adjectives you may remember from your attempts to pad high school essays (you know, adjectives like “ungainly”).

But the horrifying thing is, there's a reason dudes like this so often have trouble recognizing actual empathy. It's not like they arose out of nowhere, and in the simpering text of the chatbot, I can see every emotionless home life, every early-childhood cruelty, every media franchise that encourages parasocial behavior, every patriarchal standard, every failure to recognize social cues, every isolating suburban cul-de-sac, every terror at the risk and care involved in cultivating actual human relationships, every attempt to medicate away the loneliness.

And so we're incentivized to fall in love with our reflections. Hardly groundbreaking material here, Christopher Lasch was saying as much back in the '70s (even if he did make some truly dumb points as well), but that doesn't make it any less true. Because if we're always in front of the reflecting pool, how can we help but fall in love?
