2025: Where AI Went Wrong

Time magazine announced “The Architects of AI” as its Person of the Year. I had to laugh. Yes, these tech bro fools were among the most influential in making headlines this past year. And I think they will be behind most of the headlines next year, too. Just in the wrong direction.
Because as I will demonstrate, AI is not getting better! We have spent billions, maybe even trillions, to find this out. It is probably the most expensive lesson in history.
There’s a general pattern with new tech. It’s expensive to research, but it gets cheaper. At first it doesn’t work very well, but it gets better with time. We find uses for the technology, which spurs interest and popularity. Then the uses become so convenient that it’s a must-have for everyone.
The train, the light bulb, the electric grid, radio, cars, TV, the computer, the internet, the smartphone: they all follow the same pattern.
But there have been failures, especially recently. Technologies that followed the same pattern until they went wrong: Crypto never took off beyond being an unsecured libertarian investment not really backed by anything. NFTs turned out to be either fraud or a joke. Blockchain technology has never been fast enough to do more than handle crypto exchanges. And “Virtual Reality” and “Augmented Reality” have largely been bad investments unless you’re one of those rare gamer hobbyists who still likes to hang out in 3D chat rooms.
AI, or as I call it Artificial Impersonation, has so far proved popular enough to sit with the “good” tech, but it has yet to cement its permanence as a useful tech. Everyone assumes it will follow the same pattern, and that is what is getting investors excited, but it has been nearly three years since GPT-4 was released, and the technology has stagnated. A year ago, every AI company released its next generation of models, and there were no real improvements over the previous generation. OpenAI released GPT-5, and it landed with such a thud that they’ve gone back to GPT-4.
Did we reach the theoretical limit of LLM improvement from more data? Did scraping new sections of the internet, now filled with AI-generated slop, cause model collapse? Unknown, but something like that certainly occurred.
Since last year’s failures, AI has only improved thanks to programming tricks like “reasoning” models, which amount to running the query multiple times and picking out the best answer. These tricks improve answers, but the models still make mistakes. They are also much more expensive to run.
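For the non-programmers, here is roughly what that trick looks like. This is a minimal sketch of the best-of-N idea as I’ve described it; query_model and score_answer are made-up placeholders, not any vendor’s real API.

```python
import random

# Minimal sketch of the "run it several times, keep the best answer" trick
# described above. query_model and score_answer are made-up stand-ins for
# an LLM call and a grader/verifier -- not any vendor's real API.
def query_model(prompt: str) -> str:
    """Pretend LLM call: returns one sampled answer to the prompt."""
    return random.choice(["answer A", "answer B", "answer C"])

def score_answer(prompt: str, answer: str) -> float:
    """Pretend verifier: higher score means the answer looks better."""
    return random.random()

def best_of_n(prompt: str, n: int = 8) -> str:
    # n separate model calls means roughly n times the compute of a single
    # query -- which is why these tricks are so much more expensive to run.
    candidates = [query_model(prompt) for _ in range(n)]
    return max(candidates, key=lambda ans: score_answer(prompt, ans))

print(best_of_n("What is 17 * 24?"))
```

You pay for every one of those extra calls, whether or not the final answer is any better.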
Multiple companies offer LLM services, and across the board the LLMs are not differentiating themselves from one another. Any improvements are largely found in the software wrapped around the LLM, not in the model itself.
I wonder if the data models have gotten TOO big. The more data you collect, the more shit you collect, and it shows in the increased frequency of “hallucinations.”
Some more of AI’s Greatest Mistakes
Which introduces the next area where AI is not improving: it is not getting cheaper.
In the US, we have seven companies developing LLM technology right now, and they are competing to be the BEST! But users find all the models largely indistinguishable, so it really doesn’t matter which one you choose; you’re still going to get a lot of mistakes.
OpenAI is the most popular, but Gemini 3 by Google is the fastest growing, much to the chagrin of OpenAI. Not because it’s better than ChatGPT, but because the interface around Gemini 3 is more user friendly. Suddenly, OpenAI is looking at its own Armageddon. It has 800 million users, about 760 million of whom are not paying for it, and it is likely they won’t if they are ever forced to pay.
Meanwhile, Microsoft is trying to push AI into all of its products, causing massive revolts by a user base that just wants an operating system that runs programs, not one that talks to you.
I know what you are thinking: “AI is very cheap, it’s practically free.” Except that is a lie the AI companies are hiding from you. Every time you query ChatGPT with anything, it costs OpenAI money.
Yes, OpenAI is expected to earn $12 billion this year, but its expenses are in the $40 to $50 billion range. It continues to use investor dollars to subsidize its users, but for how long?
To meet all its goals, OpenAI needs $400 billion next year, mostly to build more data centers to keep up with demand.
No AI division is profitable right now. They are all using investor funds to cover the costs so they can give the illusion that AI is cheap or free, betting heavily on the idea that it will be so ubiquitous in the future that they will make trillions.
There is no evidence to back this up, and investors are finally starting to figure that out. Hence all the talk of an AI bubble about to burst.
Yet, it has become practically a religious belief in Silicon Valley, and this “theory” is what is driving the data center boom right now.
And so impossibly expensive data centers are being built.
To that end, fund manager Harris Kupperman has worked out rough calculations for the genuine cost of a data center, factoring in the inevitable breakdown of parts over time. Each data center, he says, is essentially made up of three components: the chips, which become obsolete in just a few years; the systems connecting the chips, which need to be replaced every decade or so; and the building itself, which should last for quite a while.
Add it all up, and time is not on the data center’s side. The finance guru estimates that the “AI datacenters to be built in 2025 will suffer $40 billion of annual depreciation, while generating somewhere between $15 and $20 billion of revenue.”
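To make the shape of that math concrete, here is a rough straight-line depreciation sketch. All of the capital-cost and lifetime figures below are my own illustrative assumptions, not Kupperman’s actual inputs; the point is only that short-lived chips dominate the bill.

```python
# Back-of-the-envelope version of that depreciation math. Every dollar
# figure and lifetime below is an illustrative assumption, not Kupperman's
# actual input; the point is that the fast-aging chips swamp the revenue.
capex_billions = {"chips": 140.0, "networking": 30.0, "building": 30.0}
lifetime_years = {"chips": 4, "networking": 10, "building": 30}

annual_depreciation = sum(
    capex_billions[part] / lifetime_years[part] for part in capex_billions
)
revenue = 17.5  # midpoint of the $15-20 billion revenue range quoted above

print(f"annual depreciation: ${annual_depreciation:.1f}B")   # ~ $39.0B
print(f"revenue:             ${revenue:.1f}B")
print(f"annual shortfall:    ${annual_depreciation - revenue:.1f}B")
```

However you shuffle the assumptions, the hardware wears out far faster than the revenue comes in.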
And it’s not just OpenAI. Anthropic, the maker of Claude, has a similar ratio of losses to earnings. No one is making money, and building AI data centers with thousands of AI processors keeps getting more expensive.
Claude’s big selling point is its usefulness for AI code generation. “Coding” was supposed to be a key job for AI, and it isn’t talked about much anymore. The savings from AI writing code, minus the cost of the AI service, minus the cost of humans debugging the AI-written code, more often than not comes out below zero. Humans may write code more slowly, but it is less buggy and easier to maintain. Human coders are simply cheaper, too.
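You can run that comparison yourself. Every number in this toy version is a made-up placeholder, not measured data; swap in your own team’s figures and see which side of zero you land on.

```python
# Toy version of the cost comparison above. Every number here is an assumed
# placeholder (rates and hours), not measured data -- use your own figures.
hours_saved_by_ai = 100      # developer-hours the AI supposedly saves per month
developer_rate   = 80        # dollars per developer-hour (assumed)
ai_service_cost  = 2_000     # dollars per month in API or seat fees (assumed)
debugging_hours  = 90        # extra hours spent fixing the AI-written code

net_savings = (hours_saved_by_ai * developer_rate
               - ai_service_cost
               - debugging_hours * developer_rate)

print(f"net monthly savings: ${net_savings}")   # -> $-1200, i.e. below zero
```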
Who is this AI Person Anyway?
Which brings up the next problem: AI is declining in popularity across the board. The number of people who like AI is shrinking fast, and those who remain are largely investors who have sunk so much into it that they can’t afford to see it fail.
Part of the problem is that we haven’t found any genuinely useful applications for AI that can’t already be done by a human. The few things it can do, it does faster, but the quality isn’t there. It generates slop.
Creative types were the first to turn vehemently against AI, calling it a “plagiarism machine.”
Now people are finding out that data centers are the reason their electricity bills are so high, and why severe water shortages are likely to happen more often.
Gamers are seeing formerly cheap computer components double, even triple, in price over the last few months, and that too is driven by data centers.
Businesses are finding out they don’t save money by giving their employees AI tools. In fact, use of AI in business is going down, not up.
The number of people discovering a hatred of AI is growing, turning once “potential” customers into “never” customers.
People hate the slop destroying the Internet.
People hate the AI tools being jammed into every program they use.
Well that about wraps it up for AI
In the last few years, scientists and academics have found use cases for “machine learning.” At about the same time, LLM technology was becoming available. For some reason, “machine learning” and generative LLMs have been grouped together and labeled AI. Neither really is.
The key goal of all of this is AGI, the theoretical point at which LLMs become smarter than humans. This is what every investor believes in and is spending billions to chase.
The theory was that if we built LLMs large enough, they would soon be more intelligent than people. That was proven wrong about a year ago.
The other theory was that building AGI would change the world, either for better or for worse. No, it won’t.
The AI industry refuses to define AGI so it can use the term as a marketing tool or a scare tactic. I’ll define it, then:
If an AI company fires all of its programmers because it doesn’t need them anymore, then it has achieved AGI.
It is not likely to happen anytime soon. My wager is that AI companies will go bankrupt before it happens, and will fire all their programmers because they can’t afford them anymore. Anyone who says AGI will get here by 2030 is living in a fantasy world.
LLMs are incapable of reaching AGI. They “guess the next word” based on context learned from the millions of documents fed into their models, and yet the AI industry is betting a couple of trillion dollars that they can.
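That “guess the next word” machinery is, at its core, just a probability distribution over a vocabulary. Here is a toy sketch with made-up numbers; real models do this over tens of thousands of tokens with billions of parameters, but the basic operation is the same.

```python
import numpy as np

# Stripped-down picture of "guess the next word": the model assigns a score
# (logit) to every word in its vocabulary, softmax turns scores into
# probabilities, and the most probable word wins. The vocabulary and logits
# here are toy values, not taken from any real model.
vocab  = ["mat", "moon", "banana", "quickly"]
logits = np.array([3.1, 1.2, -0.5, -2.0])   # scores for "The cat sat on the ..."

probs = np.exp(logits - logits.max())
probs /= probs.sum()                         # softmax

for word, p in zip(vocab, probs):
    print(f"{word:8s} {p:.3f}")
print("next word:", vocab[int(np.argmax(probs))])   # -> "mat"
```

It picks a plausible continuation. Nothing in that operation understands, reasons, or plans.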
Most of AI’s success stories, like protein folding, are not LLMs but “machine learning”: software trained on a specific topic, using trial and error to improve. Machine learning has proven more successful than LLMs, but its usefulness is limited and it’s not a big money maker. The limited pool of researchers who need this tech has already bought it.
So we are left with the fatal flaw of the AI industry: The one new technology that would make a profit (AGI) is not possible with the technology they are using (LLMs).
Stories of AGI arriving by 2030 and threatening humanity’s future are fantasies. That will probably become a widely recognized fact in the next year, much to the embarrassment, and the wealth, of the “Architects of AI.”
