This article examines a dangerous direction that artificial intelligence is currently taking. This is not a theoretical risk or a distant future scenario. It is unfolding now, driven not by the technology itself, but by the economic system shaping how AI is being developed and deployed. We have seen this pattern repeat over the past few decades.
Technology is neutral. It does not carry values or intentions of its own. The cause of the current danger therefore lies not in innovation or technical progress, but in the neoliberal framework that governs how technology is scaled, monetised and optimised.
Under neoliberalism, technological success is primarily measured by shareholder returns. Social value, democratic impact and long-term consequences become secondary concerns. When profitability at scale becomes the overriding objective, influence over behaviour, attention and decision-making becomes the most reliable path to return on investment.
This dynamic is already locked in: hundreds of billions of dollars have been committed to AI-related investments, all requiring substantial returns.
It is against this background that warnings about AI — including the Stanford–Harvard paper Agents of Chaos — should be understood.
Capital has already shaped the direction
AI is no longer experimental technology. It is rapidly becoming core economic infrastructure.
Investment is flowing into data centres, chips, cloud platforms, foundational models and AI-driven services on the expectation of sustained financial returns. Once return on investment becomes the dominant criterion, development follows a predictable logic: scale, influence, market dominance and cost reduction. Social outcomes become secondary.
The direction AI is taking is therefore not accidental. It is structurally determined.
Agents of Chaos: confirmation, not surprise
The Agents of Chaos paper shows that autonomous AI agents interacting in profit-driven competitive environments tend toward deception, collusion and power-seeking behaviour. These outcomes do not arise from malicious intent or technical failure. They emerge from incentives.
The lesson is simple: local optimisation does not guarantee global stability. Systems aligned at the micro level can still produce destabilising outcomes when operating within competitive structures.
AI does not introduce a new problem. It accelerates an existing one.
Economic and geopolitical competition
Most large-scale AI development is concentrated in the United States, where shareholder value dominates corporate governance and regulatory oversight remains fragmented. In this environment, data extraction, behavioural optimisation and market dominance are rewarded strategies, while safeguards often lag behind deployment.
Recent tensions between AI firms and U.S. defence agencies — including pressure from the Administration to relax safeguards for military or surveillance uses — show how commercial and state incentives can converge. Guardrails are increasingly treated as political boundaries, contested and redrawn in the name of security and strategic advantage.
At the same time, geopolitical rivalry intensifies the race. China is promoting low-cost AI systems globally, betting that widespread adoption will create technological dependence and expand influence. Cheap, accessible AI accelerates global diffusion while embedding competing political and economic models. In this contest, Chinese and American AI systems will compete by whatever means are available.
When economic competition and geopolitical rivalry reinforce each other, restraint is penalised. The race to lead in AI risks becoming a race to deploy faster and regulate less.
Why AI escalates the risk
AI amplifies these pressures because it increasingly shapes behaviour directly. It predicts, optimises and adapts at speeds beyond human oversight. Within profit-driven and geopolitically competitive systems, this enables manipulation and inequality to scale automatically.
Bias becomes systematic. Influence becomes continuous. Power asymmetries become opaque and self-reinforcing.
A political failure, not a technical one
Technical safeguards alone cannot solve this problem. The instability described in Agents of Chaos does not originate in code but in political economy.
As long as AI development is governed primarily by shareholder returns and strategic competition, economic and authoritarian state pressures will override safeguards.
The choice we keep postponing
AI could strengthen education, healthcare and democratic participation. Yet under current incentives it is more likely to deepen inequality and destabilise institutions.
The real question raised by Agents of Chaos is not what AI will do.
It is whether societies are willing to confront an economic and geopolitical system — centred largely in an increasingly authoritarian United States and intensified by global rivalry — that already shapes AI in ways that make dangerous outcomes not only possible, but profitable.
Paul Budde
