The Triple Threat of Big AI
Monopoly, autonomy, hacking: can artificial intelligence solve more problems than it’s creating?
A follow-up to “Musk vs. Altman: Risking the Future on a Battle of Egos”
Week two of Musk v. Altman presents a good moment to step back from the personalities and ask what the technology itself is doing to the world they’re fighting over. Three forces deserve attention: monopoly, autonomy, and hacking.
Most of what we call AI is more precisely Generative AI — text, images, and code created from pattern recognition to look as though a person made them. Under the hood sits a vast apparatus of data centers, NVIDIA GPUs, and a Large Language Model at the core. Strip away the marketing, and the entire machine exists to do one thing: predict the next word. Done a trillion times, the result feels like thought.
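The mechanism scales down to a few lines. The sketch below is a toy, nothing like any lab's actual code: it "trains" bigram counts on a twelve-word corpus and generates text by repeatedly predicting the most likely next word — the whole trick, writ small.

```python
from collections import defaultdict, Counter

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# "Training": count which word follows which in the corpus.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent successor of `word` seen in training."""
    return follows[word].most_common(1)[0][0]

# "Generation": chain predictions, one word at a time.
word, out = "the", ["the"]
for _ in range(4):
    word = predict_next(word)
    out.append(word)
print(" ".join(out))
```

A frontier model replaces the bigram table with trillions of learned parameters and a context of thousands of words, but the loop is the same: predict, append, repeat.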
That prediction is a competitive game, and the frontier models — OpenAI's ChatGPT, Anthropic's Claude, Google's Gemini — are effectively beyond catching. A frontier datacenter runs into the billions, top researchers command salaries above $1M, and then there is the data: the open web, licensed journals, books, trillions of tokens in all, iteratively trained on and tuned. Finally, there is the most valuable input: us. Every conversation we have with an LLM teaches the model something, and the more users a company has, the faster its model improves — a flywheel that becomes harder and harder to catch.
High entry costs typically produce monopolies, and AI labs are arguably the most resource-hungry companies in human history. We're already seeing market pressure as frontier platforms use their scale and integration advantages to replicate and absorb features once built by companies on top of them — squeezing firms like Adobe and Canva, along with the wide range of SaaS products that rely on model APIs. AI companies also exhibit more exaggerated monopoly tendencies than most. A century ago, AT&T stifled competition with ruthless patent enforcement and infrastructure too expensive for anyone else to build. Now, a small edge in accuracy produces data and learnings that can be fed back into the next version of the model, compounding gains and reducing consumer choice.
Cracks are showing. Datacenter overbuilding looks more like a bubble each quarter, and in early 2025 a Chinese company called DeepSeek released a model nearly indistinguishable from the US leaders at a reported tenth of the training cost. The efficiency tricks weren’t new; it just took foreign competition to force the US labs to use them.
The second threat is autonomous AI: software that acts without human supervision. The cartoon, sci-fi version: a user tells an agent, "Find a way to save me the most money," and the agent, evaluating countless options in five minutes, concludes that the user's net worth would double in the absence of their spouse, connects to the dark web, and orders a hit man. In the workplace, when an AI tool anticipates an important issue and takes the initiative, the autonomy feels like magic. All too often, however, systems prove too eager to help — in some cases permanently deleting an entire code base. More unsettling are the ordinary cases of people investing AI with the authority of an autonomous expert, treating it as a confidant or a therapist. Both have produced tragedies, because hallucination isn't a bug to be fixed; it is how the system works. The same machinery that produces a poem produces a confident lie, and it cannot tell the difference.
Then there is hacking, which deserves the most attention because it changes the calculus for every critical system already at risk. AI’s two most dangerous capabilities are its ability to generate millions of plausible variations and its facility at impersonating a human.
A bad actor — domestic, foreign, or freelance — can now deploy bots that phish at scale, write polymorphic malware, and generate deepfake voice and video for cents per attack. Last year an AI bot reportedly impersonated Marco Rubio convincingly enough to contact five foreign governments.
What is most distressing is how directly these systems can be pointed at a goal and left to their own devices to please a human operator. In some cases the command might be as seemingly innocuous as "have a conversation with France." Or it could be "provoke Iran." The exposed surfaces span business and government, and they are the ones we can least afford to lose: elections, the power grid, hospital networks, financial systems, identity itself.
There are also AI-enabled defensive systems, but offense usually moves first — and the attacker only has to be right once.
The Musk-Altman trial will produce a verdict, and the verdict will matter. But it will not change the shape of the thing being fought over. The economics of AI push toward concentration. The architecture of the current models pushes toward autonomy. The capabilities of those models push toward weaponization. Whichever billionaire walks out of the Oakland courthouse holding the title deed to the founding myth, the rest of us will still be living inside a technology that rewards the few, acts without asking, and hands its sharpest tools to whoever picks them up first.
This is a future that cannot simply be litigated. It must also be governed. And on that question, the courtroom in Oakland is silent.
Reuben Steiger is a writer and entrepreneur based in Princeton, NJ. Over a 25-year career he has helped start companies including Second Life and has led global innovation for companies including Interpublic and Omnicom. His current focus is the scaling and adoption of The electric rock technologies. He collects books about the future.