New OECD data shows that more than one-third of people across OECD countries used generative AI tools in 2025. Among students aged 16 and over, usage rises to around three-quarters. Artificial intelligence has moved from novelty to everyday utility at extraordinary speed.
Australia is unlikely to be lagging. If anything, we are probably above the OECD average when it comes to individual use. Australians are enthusiastic adopters of new digital tools, particularly when they arrive embedded in familiar platforms. But beneath these headline figures lies a more troubling reality: Australia’s AI uptake is largely unplanned, uneven and strategically hollow.
This is Australia’s AI paradox — mass adoption, minimal strategy.
The real divide is institutional, not generational
The OECD highlights a sharp age divide in AI usage, and that matters. Younger Australians are using generative AI instinctively, while many older Australians remain hesitant or excluded. As I have argued before in the context of digital exclusion and ageing, technology adoption without structured support risks widening social and economic gaps rather than closing them.
However, the more consequential divide is not generational — it is institutional.
Students are using AI at scale, yet education systems are scrambling to adapt assessment, curriculum and teaching methods. Workers encounter AI tools informally, but reskilling pathways are fragmented or absent. Employers experiment without guidance, while regulators remain largely on the sidelines. Once again, Australia is allowing technology to race ahead of policy.
We have seen this pattern before — with the internet, with social media and with cloud computing.
Heavy use, zero sovereignty
As I have written previously, Australia has a chronic weakness when it comes to digital sovereignty. We consume digital services enthusiastically, but we exert almost no influence over the infrastructure on which those services run. That weakness is now becoming critical in the age of AI.
Australia currently has almost no control over its data infrastructure. The vast majority of data generated by Australians — including data used to train, fine-tune and operate AI systems — flows through offshore platforms and cloud services owned by a small number of US hyperscalers. Decisions about data access, pricing, model behaviour and risk management are made elsewhere, under legal and political frameworks that do not reflect Australian democratic priorities.
As I have argued before, data control increasingly equates to policy control. When governments lack leverage over the infrastructure layer, regulation becomes reactive and symbolic rather than effective.
Europe chose a different path
Other jurisdictions have taken a more deliberate approach. Europe’s Digital Services Act and related digital regulation offer a more democratic and privacy-focused alternative to the largely market-driven model that dominates in the United States. While imperfect, the European framework explicitly recognises that digital platforms — and now AI systems — shape public discourse, economic power and civic life, and therefore require public-interest governance.
Australia, by contrast, has largely deferred to US regulatory norms without possessing the scale, bargaining power or legal leverage of the US market itself. We have adopted the technology, but not the governance model that might protect citizens, institutions and democratic accountability.
In the context of generative AI, this leaves Australia exposed. We are embedding AI systems into education, business and government while surrendering control over the data, infrastructure and rules that govern how those systems evolve.
Business adoption exposes a deeper weakness
OECD data also shows that firm-level AI adoption remains concentrated in ICT and knowledge-intensive industries. Just over 20% of firms reported using AI in 2025, with growth beginning to moderate.
For Australia, this is a warning sign. Our economy is dominated by small and medium-sized enterprises across services, construction, health, education and local government — precisely the sectors where structured AI adoption could lift productivity, and precisely where policy support is weakest.
As I have argued before in relation to digital infrastructure and productivity, Australia too often assumes that “the market will sort it out”. In practice, without sector-based programs, shared frameworks and public investment, adoption remains shallow and uneven.
Education is improvising — again
The OECD figures confirm what educators already know: AI is now a permanent feature of learning. Yet Australia’s education response remains largely reactive.
Teachers are expected to manage AI use without adequate training. Universities oscillate between embracing AI and policing it. Assessment systems designed for a pre-AI world are being stretched beyond their limits. This mirrors the early days of the internet, when institutions were left to improvise while policy lagged years behind reality.
The risk is not that students use AI. The risk is that we fail to teach them how to use it critically, ethically and responsibly.
Strategy is about capability, not control
Australia does not lack discussion papers or advisory bodies on AI. What it lacks is execution.
A credible national approach would include sector-based AI programs, public-interest digital infrastructure, reskilling pathways that include older workers, and clear data-sovereignty rules for public-sector AI use.
As I have argued in relation to energy systems and digital networks, strategy is not about resisting technology. It is about retaining agency.
The OECD data should be read as a warning. The window for shaping how AI is embedded in Australian society is narrowing. Without institutional leadership, Australia risks becoming a nation of enthusiastic users — and strategic bystanders.
Paul Budde
