AI-Phobia? Why you don’t need to fear AI

Nobody knows how to code image or voice recognition, and as Pedro Domingos said back in 2015 in The Master Algorithm, nobody needs to know. Learning algorithms train themselves to accurately guess what an image is, or how a sound wave of speech can be represented in text. Now that OpenAI has applied those principles to conversational computing and created the frustrating, sometimes useful ChatGPT and GPT-4, everyone is an AI researcher or critic. Many are signing a petition to pause AI experiments. They’d like OpenAI and its competitors to spend an “AI summer” pondering the Jurassic Park meme: “Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.”

More than image or voice recognition, a computer appearing to comprehend language feels like machines treading on the turf of what makes us human. But what exactly are humans afraid of? Terry Pratchett said that before you can kill a monster, you have to say its name. A name for the feeling of dread in your stomach may help you decide how justified it is.

What is the fear of AI called?

Many articles mention “AI Anxiety.” You might also see the neologism “AI-nxiety,” though that doesn’t seem likely to catch on. The closest dictionary word that encompasses the fear of AI is technophobia: the generalised fear and dislike of new technology, including AI.

That word has the connotation of irrationality, but is fear of tech irrational? Stephen King has admitted that commonplace tech scares him, but his flop eighties movie Maximum Overdrive, which explores that fear, can’t even take itself seriously. Skynet from the Terminator movies and HAL 9000 from 2001: A Space Odyssey are more embedded in the public psyche, partly because those are better movies, but also because AIs are different from trucks or toasters. The creators admit they can’t explain the logic behind ChatGPT’s utterances. They wouldn’t be able to debug or modify it like handwritten software. A selling point of machine learning is also what makes it scary: not only are we relieved of having to write or fix the tech, nobody can.

Understanding the unknown

Explainable AI aims to give visibility into how AI makes decisions. OpenAI seems to need it, because the way ChatGPT’s self-filtering works, for instance, seems rudimentary and has been bypassed using jailbreaking prompts. This suggests the filter is a separate superego layer, curbing the worst impulses of the core model, which can still produce offensive content.
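
As a purely illustrative sketch of that layered design (the function names and blocklist here are hypothetical, not OpenAI’s actual architecture), the filter can be pictured as a wrapper that vets the core model’s draft before the user ever sees it:

```python
# A stand-in blocklist; a real system would use a trained moderation model.
BLOCKLIST = {"offensive", "harmful"}

def core_model(prompt: str) -> str:
    # Stand-in for the unfiltered core model's raw completion.
    return "a draft reply that might contain offensive content"

def filtered_reply(prompt: str) -> str:
    draft = core_model(prompt)
    # The "superego" layer vets the draft and overrides it if needed.
    if any(word in draft.lower() for word in BLOCKLIST):
        return "I can't help with that."
    return draft

print(filtered_reply("tell me something"))
```

A jailbreak, in this picture, is any prompt that sneaks the core model’s draft past the outer check.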

This lizard brain of ChatGPT might be a black box even to OpenAI’s developers. But picture a future AI saying something inappropriate, and a developer receiving the bug report, sitting down for a therapy session with the bot, and talking it out of the toxic thought process that led to the offensive statement. The layperson might also find a read-only, visualised explanation of the logic behind AI decisions easier to follow than conventional source code, which might demystify AI. But your fear of AI might be replaced with disappointment when you see that no “thinking” is happening. And forget about AI taking your job.

Will AI take your job?

Not if your job requires consciousness. Your boss who wants a working website may not ponder the nature of consciousness, until her new robo-employee, who never needs sick leave or holidays, makes costly choices a conscious being wouldn’t.

Despite the flaws, we see high-profile developers tweeting that GPT has boosted their coding productivity. It can’t generate correct code on its own yet, but it’s only a matter of time before it can work independently, right? Well…

What seems like a small gap between what we have today and strong AI might be a chasm. Chomsky calls ChatGPT a glorified autocomplete and argues that the focus on machine learning is an obstacle to the research that might lead to machine comprehension. If you remove the “chat” from ChatGPT and directly use its older basis, the “davinci” model, which works like an autocomplete, you see that if you start an essay, GPT completes it with a plausible mashup of the similar writing it was trained on. You can tweak randomness parameters such as temperature, and GPT will find completions using words it knows can go together but are less likely, which sometimes leads to more “creative” responses, sometimes to nonsense.
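
To make that concrete, here is a minimal sketch of driving the base model as a plain autocomplete, assuming the legacy pre-1.0 openai Python package and an API key in the OPENAI_API_KEY environment variable:

```python
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# Ask the base "davinci" model to continue a prompt, autocomplete-style.
response = openai.Completion.create(
    model="davinci",              # base completion model, no chat tuning
    prompt="The fear of artificial intelligence is",
    max_tokens=60,                # how much text to generate
    temperature=0.9,              # higher values pick less likely words
)
print(response.choices[0].text)
```

Raise the temperature and the completions drift from the predictable toward the “creative” and, eventually, the nonsensical.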

ChatGPT doesn’t expose arcane parameters to the user. Instead, it has another step in its training in which human contractors manually labelled “good” responses. That feedback was incorporated into further iterations of training. OpenAI has yet to disclose how GPT-4 was trained, but it seems plausible that one motivation for making ChatGPT public was to crowdsource further data for reinforcement learning from human feedback (RLHF), so it’s no surprise that GPT-4 seems “smarter.” It’s more advanced smoke and mirrors, but if this educated brute-force approach is fundamentally incapable of understanding, the trending approach to AI will hit a brick wall at tasks requiring comprehension.
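
Here is a toy illustration of that feedback loop, with made-up data and a crude word-overlap score standing in for a real reward model; actual RLHF trains a neural reward model on human preference rankings and then updates the model’s weights against it:

```python
# Hypothetical contractor labels: 1 = "good" response, 0 = "bad".
labelled = [
    ("Here is a clear, step-by-step answer.", 1),
    ("lol idk figure it out yourself", 0),
    ("A polite and relevant explanation follows.", 1),
    ("Buy my crypto coin now!!!", 0),
]

# Crude stand-in for a reward model: reward overlap with "good" vocabulary.
good_words = {w for text, label in labelled if label for w in text.lower().split()}

def reward(completion: str) -> float:
    words = completion.lower().split()
    return sum(w in good_words for w in words) / max(len(words), 1)

# The policy step, reduced to best-of-n sampling: generate candidates and
# keep the one the reward model prefers. Real RLHF instead nudges the
# model's weights so preferred responses become more likely.
candidates = [
    "idk lol",
    "Here is a relevant, step-by-step explanation.",
    "Buy now!!!",
]
print(max(candidates, key=reward))
```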

In The Emperor’s New Mind, Roger Penrose describes problems that can be solved logically but not algorithmically and argues that consciousness can’t be replicated with computers, speculating that it might be rooted in the physics of the brain in a way current science cannot describe. If that’s true, machine learning chatbots are a distraction from work on machine comprehension, which might require a new type of computer.

But OpenAI’s research has produced results the average person can appreciate. No more AI winter, and even if machine learning chatbots turn out to be a wild goose chase, the world talking and thinking about AI might lead a researcher to strike gold.

Would that have to be a dystopia? Philip K. Dick is associated with dystopian visions, but in an interview he said he imagined a “bifurcated society with somebody who’s going to make it off the idea and somebody who’s going to be victimised by the idea.” Even to the master of dystopia, whether the future sucks depends on who you ask. You can be on either side of that equation. To again quote Pedro Domingos: “The best way not to lose your job is to automate it yourself. Then you’ll have time for all the parts of it that you didn’t before and that a computer won’t be able to do anytime soon.”
