The AI Apocalypse isn’t Coming

After all the hype and scary talk about AI taking over the world, it is becoming clear that it is just that: hype. As I predicted.

The first bit of evidence comes from this article: “Scary ‘Emergent’ AI Abilities Are Just a ‘Mirage’ Produced by Researchers, Stanford Study Says”. All of the people hyping up a Microsoft-produced paper claiming that GPT-4 is showing signs of Artificial General Intelligence (AGI), despite the actual study barely mentioning it, now have an answer from Stanford researchers, whose conclusion is: no signs of AGI.

In a new paper, Stanford researchers say they have shown that so-called “emergent abilities” in AI models—when a large model suddenly displays an ability it ostensibly was not designed to possess—are actually a “mirage” produced by researchers. 

Microsoft researchers, too, claimed that OpenAI’s GPT-4 language model showed “sparks of artificial general intelligence,” saying that the AI could “solve novel and difficult tasks…without needing any special prompting.” Such concerns not only hype up the AI models that companies hope to profit from, but stoke fears of losing control of an AI that suddenly eclipses human intelligence.

“What we found, instead, is that there’s no giant leap of capability,” the authors continued. “When we reconsidered the metrics we use to evaluate these tools, we found that they increased their capabilities gradually, and in predictable ways.”

The authors conclude the paper by encouraging other researchers to look at tasks and metrics distinctly, consider the metric’s effect on the error rate, and that the better-suited metric may be different from the automated one. The paper also suggests that other researchers take a step back from being overeager about the abilities of large language models. “When making claims about capabilities of large models, including proper controls is critical,” the authors wrote in the paper.
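To see how metric choice alone can manufacture an apparent leap, here is a toy sketch of my own (not code from the Stanford paper). It assumes a hypothetical model whose per-token accuracy improves smoothly with scale, and then scores it with an all-or-nothing exact-match metric:

# Illustrative only: made-up numbers showing how a discontinuous metric
# can turn a smooth improvement into an apparent "emergent" jump.
scales = [1e8, 1e9, 1e10, 1e11, 1e12]           # hypothetical parameter counts
per_token_acc = [0.70, 0.80, 0.88, 0.94, 0.98]  # assumed smooth improvement
answer_length = 20                              # tokens that must all be correct

for n, p in zip(scales, per_token_acc):
    exact_match = p ** answer_length            # all-or-nothing scoring
    print(f"{n:.0e} params | per-token acc {p:.2f} | exact-match {exact_match:.3f}")

# The per-token metric rises gradually, but exact-match sits near zero and
# then shoots up at the largest scales: the "sudden" ability is an
# artifact of the metric, not a leap in the model.

That is essentially the paper’s point: pick a smoother metric and the giant leap of capability disappears.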

As I have stated previously, labeling the various flavors of GPT as “artificial intelligence” is a misnomer and a sales tactic; they should be called “Artificial Impersonation” instead. GPT is a chatbot. Because it is designed to impersonate human speech, it can seem intelligent to us, but that is purely an illusion.

That is not to say GPT is useless or that its development was a waste of time. There are definitely practical and helpful uses for it, but any thought of it taking over the world is unfounded.

And it is not just Stanford saying this. OpenAI itself has had to admit there are limits to the technology. From a recent interview with Sam Altman, CEO of OpenAI:

When OpenAI co-founder and CEO Sam Altman speaks these days, it makes sense to listen. His latest venture has been on everyone’s lips since the release of GPT-4 and ChatGPT, one of the most sophisticated large language model-based interfaces created to date. But Altman takes a deliberate and humble approach, and doesn’t necessarily believe that when it comes to large language models (LLM), that bigger is always going to be better.

Altman, who was interviewed over Zoom at the Imagination in Action event at MIT yesterday, believes we are approaching the limits of LLM size for size’s sake. “I think we’re at the end of the era where it’s gonna be these giant models, and we’ll make them better in other ways,” Altman said.

What he was implying is that ever-larger Large Language Models (LLMs), which have been the driving force of GPT improvements up to now, are not showing gains proportional to their size. Multiplying the size of an LLM by ten does not make it ten times smarter; in fact, we may be reaching the limit of what can be done with this approach, and further improvements will need entirely new approaches.

Unlike CPU/GPU technology, AI improvement has no Moore’s Law: no exponential growth, no “singularity” in our near future. More and faster computers just produce the same GPT responses faster, and bigger training datasets only improve the quality of those responses a little.
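As a rough illustration of the diminishing returns, here is a back-of-the-envelope sketch. It assumes, purely for illustration, that a model’s loss falls off as a power law in parameter count, with a made-up exponent; the specific numbers are not measurements, only the shape of the curve matters:

# Hypothetical power-law scaling: loss ~ N**(-alpha). The exponent is an
# assumption chosen only to illustrate diminishing returns, not a measured value.
alpha = 0.08  # illustrative scaling exponent

def relative_loss(n_params: float) -> float:
    return n_params ** (-alpha)

for n in [1e9, 1e10, 1e11, 1e12]:
    improvement = relative_loss(1e9) / relative_loss(n)
    print(f"{n:.0e} params -> loss improves by {improvement:.2f}x vs. a 1e9-param model")

# Each 10x in parameters only buys about 10**alpha ~= 1.2x lower loss here,
# not 10x better: the returns shrink as the models grow.

Whatever the real exponent is, the point stands: scale alone gives sub-linear returns, which is exactly why Altman says the era of ever-bigger models is ending.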

Yes, GPT-4 is a noticeable improvement over GPT-3.5, but the not-yet-public GPT-5 is not showing the same kind of leap.

AI systems can write by impersonating millions of writers.

AI systems can do art by impersonating millions of artists.

The goal of AGI is to create an AI Einstein.

The problem is that there are not millions of Einsteins to impersonate.

ChatGPT sucks at writing

A lot of my favorite YouTube channels have run the experiment of having GPT write a script for a video and then producing the video from that script. John Green of Vlog Brothers, Peter Zeihan of Geo Politics, and Jill Bearup have all tried it.

The unanimous opinion: it is scarily similar to a typical script they would write, but it is dull, repetitive, and factually incorrect much of the time.

This was my conclusion as well when I attempted to have ChatGPT write a scene for my latest visual novel. It had good ideas to put in the scene, good examples that I never would have thought of, but the tone was all wrong, and I literally had to rewrite every line to get it to read like real dialogue.

It is so bad that I would not be scared of it writing student essays if I were still a teacher. Anything it produces will be correct on spelling and grammar, but earn at best a “C” on research and fact-checking. If a student asked why, I would say, “It’s as dull and boring as a ChatGPT essay,” which doesn’t sound accusatory, but if that is what they did, they would know they need to do better.

ChatGPT’s usefulness as a tool to get over writer’s block and provide good examples is unmatched, but its ability to grab the reader and write something entertaining just isn’t there. You may as well write it yourself.

Ultimately, the problem is that the AI just doesn’t understand what it is doing.

Further Reading:

AI Doomerism Is a Decoy By Matteo Wong
Big Tech’s warnings about an AI apocalypse are distracting us from years of actual harms their products have caused.
