Artificial assistance

Late at night, one fateful Sunday last November, I tried ChatGPT.

The following morning, before a small undergraduate class, I presented it. On the one hand, I was acting like a father who hands a beer to his thirteen-year-old after a heavy meal — “I'm the one watching you drink your first beer” — the professor watching students' first experiments with a tool that has obvious plagiarism potential. On the other hand, I said something along the lines of “if you let yourself become the kind of worker who always relies on these tools, why should anyone employ you?”, and emphasized the amount of knowledge and skill needed to vet the system's output.

Since then, I've been meaning to write about Artificial Assistance systems, a term I still find more apt than the 'intelligence' moniker. Still struck by that late-Sunday experience, I wrote a few paragraphs for a Portuguese magazine, concerned with the quality of the training sets and with what happens when these systems are trained on their own output. And I've read and read about A.I. and ChatGPT and Bard and Bing and GPT-4, as I read about and experimented with Midjourney, Stable Diffusion and DALL-E before. I began struggling under the saturation wrought by this always-connected age, paralyzed by the awareness that whatever I may think about something, surely someone else has already thought it through and articulated it better. Why should I write (or query ChatGPT for assistance) when somewhere out there is that article I can just toot with a “This 👇”?

Computer History by Balenciaga, by Azlen Elza. I find it weirdly appropriate to see the grandparents of today's digital technology clothed in what seems like the ultimate Veblen brand.

My contribution may be slim at best, if read at all. Nonetheless, here it is.

Most theories of technology acceptance emphasize Perceived Usefulness (sometimes called Performance Expectancy) as a key driver of the intention to use a technology. I am not aware of TAM studies about ChatGPT and the more recent language models, but I am sure their publication is being fast-tracked as I write. Nonetheless, on a purely anecdotal level, I've witnessed or been party to enough café conversations with people who, though not usually early adopters, have been querying ChatGPT for assistance in writing formal emails and social media posts, suggesting pub quiz questions, and a plethora of other writing tasks (and mind you, I don't hang around engineers' watering holes; these conversations took place mostly among arts or media people of some sort).

I have yet to catch a student committing A.I. plagiarism — and I've tried the tools — but that does not mean I haven't been deceived already. All of this to say: I am positive ChatGPT's Perceived Usefulness measures through the roof, even among people I know not to be usually enthusiastic about the latest technological thing. Moreover, another usual driver of the intention to use a technology is something called Subjective Norm, which can be construed as peer pressure. If everyone is talking about how useful ChatGPT has been, well... more people will use it. And indeed, there is no indication that one should doubt the assertion that ChatGPT is the fastest-growing product in history.

So this has been an 'iPhone moment' for artificial assistants: they've been around for a while (from old Eliza to Siri to Alexa), but ChatGPT presented an elegant package, highly useful and usable. Whereas before we struggled to get Google Assistant to 'understand' that we wanted it to open Maps and show the best way to beat the traffic, now a humble text prompt takes us nearer the Star Trek dream of a computer we can just talk to. Pandora's Box has been opened, or more like destroyed in a fit of difficult-packaging rage. Luddites notwithstanding, there is no going back to before November 2022: destroying the supercomputers that hosted the Large Language Models' training may prevent said training for a while, but the models themselves are out. A mid-range laptop can run Stable Diffusion as long as it has a recent dedicated graphics card. You can run Meta's LLaMA language model on your own computer as well, its usefulness bound only by the amount of RAM (the better-trained models require a lot — for 2023). Microsoft is putting its GPT-based Copilots in everything, from VS Code to Office apps (plagiarize from the comfort of your Word document!), surely to get that sweet OpenAI return on investment, but possibly also to protect its cloud computing business before too many realize they can roll their own artificial assistants on their companies' intranets.
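For the curious, here is roughly what 'running Stable Diffusion on your own machine' looks like in practice: a minimal sketch, assuming Python with the Hugging Face diffusers and torch packages installed and a CUDA-capable graphics card; the model identifier and prompt are merely illustrative.

```python
# Minimal local text-to-image sketch; assumes `pip install diffusers transformers torch`
# and an NVIDIA GPU with a few GB of VRAM. The first run downloads the model weights.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # Stable Diffusion 1.5 weights
    torch_dtype=torch.float16,          # half precision, to fit modest VRAM
)
pipe = pipe.to("cuda")

prompt = ("Two chatbots chatting while sipping wine, outside a Mediterranean café, "
          "photo taken with a disposable film camera")
image = pipe(prompt).images[0]
image.save("two_chatbots.png")
```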

And people will use these tools; I will use them, have used them. I do not want to dismiss ChatGPT et al.'s potential to generate bullshit — that is all they do, actually; truthfulness is a coincidence. I am surprised the spam/scam-o-calypse hasn't happened yet. Still, in a world already saturated in organically-sourced bullshit, I can see that artificial assistants offer diminishing returns. Using these systems might even (gasp!) be instructive! Hence, who pays for and chooses the feed that goes into the training machine matters greatly. It is hard for me not to read A.I. luminaries' recent calls for a freeze in system development as a hypocritical, 'sour grapes' move from ChatGPT's beaten competitors and some academics clinging to their publication windows, or — much worse — as a cynical ploy to vest some kind of 'infallibility' in artificial assistants in order to justify layoffs and privatizations and an (even) greater precarization of work. Indeed, I am more concerned by the decisions of business agents than by any implicit threat in the tools as they exist today.


Two chatbots chatting while sipping wine, outside a Mediterranean café, photo taken with a disposable film camera; Stable Diffusion 1.5.

My personal metaphor for these Large Language Models is immutable software brains. Colossal amounts of energy and data and human work (in 'reinforcement learning', 'alignment', etc.) are employed in building these black boxes containing billions of 'neurons,' except these neurons are as if etched on glass: the 'software brain' responds deterministically like any simpler circuit, without any ability to form new connections. Any randomness is engineered, as it is in computing in general. ChatGPT may seem to follow a conversation, but that is merely a neat user interface trick in which the 'token window' expands as you go along — i.e., the interaction 'history' is appended to the user input. Does it understand?
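To make that trick concrete, here is an illustrative sketch; the complete() function is a hypothetical stand-in for any call into a frozen language model, not a real API. The model itself keeps no state between calls; the chat interface simply resends the whole transcript every turn.

```python
def complete(prompt: str) -> str:
    """Hypothetical stand-in for a call to a frozen, stateless language model."""
    return f"(model output for a {len(prompt)}-character prompt)"

history: list[tuple[str, str]] = []  # the 'memory' lives here, outside the model

def chat_turn(user_input: str) -> str:
    history.append(("User", user_input))
    # The expanding 'token window': the entire transcript so far is concatenated
    # back into a single prompt and sent to the model on every turn.
    prompt = "\n".join(f"{speaker}: {text}" for speaker, text in history)
    reply = complete(prompt + "\nAssistant:")
    history.append(("Assistant", reply))
    return reply

print(chat_turn("Does a frozen model remember our chat?"))
print(chat_turn("Then how do you seem to follow the conversation?"))
```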

I don't believe it does. Not just because believing so makes the mistake of anthropomorphizing A.I., which is quite hard to avoid: these are systems engineered by humans, trained on human content (so far), meant for human use. Rather, a write-once, glass-etched model of a brain's long-term memory is, by design, unable to learn anything, and thus to know anything about cause and effect. And thus, unable to understand, which would require an ability to experiment. And without cause and effect, how can models do anything but bullshit, pardon me, hallucinate? Borretti's anecdote about Bing's Sydney pleading with the user just tells us that said plea pattern exists somewhere in the training data: train an L.L.M. on a diet of murder mysteries, and you will get detective-bot and murder-bot.

Nonetheless, maybe we do need a moratorium on that next step in which models are trained in real time and 'learn' from interactions. That is perhaps the seal that had better stay unbroken; the short step from atomic bombs to thermonuclear ones, and how Skynet wakes up. Or maybe not: biological brains developed over millions of years, whereas engineering is hard and humans are not good enough at designing antifragile systems. Perhaps the human memes in future artificial intelligences will turn out to be too much of a handicap: instead of Skynet, we get Bender: a danger on par with stupid humans near the Red Button, which we have been, somehow, surviving. (Until the day...)

So, which way is it? I would say we are all, as a society, about to get f*cked by A.I. — or rather, by the people who are, yet again, about to blame us for being uncompetitive against computers. If you ever thought “Gee, this task is surely repetitive, I wish I could have a program to do it,” you are about to have your wish granted, alongside all its unexamined implications. Artificial assistance is great in its potential to lower the number of bullshit jobs and tasks people undertake, willingly or unwillingly. If only our bosses would let us get away with it.


Karl Marx debates with a robot, outside the UN building, still from a newscast; Stable Diffusion 1.5.

The question about what to do, therefore, is one of politics. Not only in regulating potential true A.I. (for which I have not much hope, so my fingers are crossed for intractable technical problems) and in regulating the training process of our artificial assistance systems, but also in addressing the challenges presented by the use of such systems. Much as we shouldn't take — and shouldn't have taken — the internet's cornucopia of endless 'content' as the flip-side of the erosion of public discourse and citizenship and social conviviality, we should not take the usefulness that drives the inevitability of artificial assistance use as the flip-side of further inequality and generalized stupidity.

Given the visibility surrounding A.I. art, I believe artists and critics will have a very important role in setting the agenda for our evolving relationship with artificial assistance, though I am not holding out hope for most of the current generation after the NFT embarrassments of 2021. (I see copyright violation as one of the least interesting problems posed by A.I. It is disappointing to see so many discussions proceed as if an artist's skill is just a means to price commercial transactions in a market. There are far more interesting — and, ultimately, important — questions about who craftsmanship is for; if that skill is put at the service of doing illustrations for dropshipping businesses' social media campaigns, then well...)

The productivity gains of artificial assistants should prompt us all to discuss and demand much shorter working hours, universal basic income, or a reorientation of the education system away from the rote and towards the critical and the creative. It’s time we get our fair share.