Before we ask what AI can do, let’s ask what kind of AI we’re talking about


There’s no shortage of headlines heralding the power of artificial intelligence. From promises of economic productivity booms to fears of mass job displacement, AI is being hyped as both saviour and threat. But amid all the noise, one critical question often goes unasked: what kind of AI are we even talking about?

For most people, “AI” now means generative AI — systems like ChatGPT, Gemini, Claude, and others that can generate human-like text, images, or even code. But generative AI is just one branch of a much broader field and, importantly, one of the least deterministic. This matters a great deal, especially when we’re dealing with high-stakes decisions in sectors like health, justice, education, or energy. I touched on these developments previously when discussing Deepseek.

Generative means non-deterministic

Generative AI models, by design, are probabilistic. That’s why they can write poetry, generate recipes, or simulate debate — but also why they sometimes hallucinate facts or contradict themselves. These systems don’t “know” anything in the human sense. They generate outputs based on patterns in vast datasets, without true understanding or awareness. That’s not a bug — it’s the defining feature of their architecture.
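
To see what “probabilistic” means in practice, consider a toy sketch in Python. The words and probabilities below are invented purely for illustration and bear no relation to any real model, but the principle is the one that matters: each output is drawn from a probability distribution rather than looked up as a single correct answer, so the same prompt can produce different results on different runs.

```python
import random

# Toy next-word distribution a language model might assign after the prompt
# "The patient should" (illustrative numbers only, not from any real model).
next_word_probs = {"rest": 0.40, "recover": 0.30, "wait": 0.20, "celebrate": 0.10}

def sample_next_word(probs: dict[str, float]) -> str:
    """Pick the next word at random, weighted by its probability."""
    words = list(probs)
    weights = list(probs.values())
    return random.choices(words, weights=weights, k=1)[0]

# The same prompt can yield a different continuation each time it is run.
for run in range(3):
    print(f"Run {run + 1}: The patient should {sample_next_word(next_word_probs)}")
```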

Compare this to rule-based expert systems or computer vision algorithms. These are often deterministic, transparent, and purpose-built. An expert system might be designed to diagnose medical symptoms based on formal logic. A vision algorithm may determine whether an object is a car or a tree with high reliability. These systems aren’t flashy, but they’re often far more suited to critical applications.
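
The contrast can be sketched just as briefly. The thresholds and advice below are made up for illustration only (nobody should triage patients this way), but they show the property that matters for critical applications: identical inputs always produce identical outputs, and every outcome can be traced back to an explicit rule.

```python
def triage(temperature_c: float, has_rash: bool) -> str:
    """A tiny rule-based classifier: the same inputs always give the same answer."""
    if temperature_c >= 39.0 and has_rash:
        return "refer urgently"
    if temperature_c >= 38.0:
        return "see a GP within 24 hours"
    return "monitor at home"

# Deterministic and traceable: each result can be explained by pointing
# at the rule that fired.
print(triage(39.5, True))   # refer urgently
print(triage(38.2, False))  # see a GP within 24 hours
print(triage(36.8, False))  # monitor at home
```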

So when someone proposes “AI” as a solution, the immediate question should be: what kind of AI? Are we looking for pattern recognition? Reasoning under uncertainty? Natural language generation? Or something else entirely?

The second question: What are its strengths and limitations?

Once we’ve identified the method, we must confront its capabilities — and its blind spots.

Generative AI, for instance, is a master of imitation. It can mimic tone, suggest plausible arguments, and summarise huge volumes of content. But it’s not a search engine, a calculator, or a source of truth. It has no grounding in facts unless those facts are explicitly built into its training or accessed through an external database.

This is something I’ve observed firsthand in an organisation I know well — a services and advice provider currently developing its own AI system on top of existing tools. Rather than adopting a general-purpose model, they are building a customised AI tailored to their operations, feeding it internal reports, data, contracts, industry statistics, public research, and more, with the aim of creating a system that truly understands their work. It’s a large and evolving project that requires constant fine-tuning. Along the way, they are discovering gaps in their own documentation and the need to feed in more detail. They also report learning a great deal about their own organisation — the strengths, the weaknesses, the assumptions embedded in their own systems. It’s a revealing and rigorous exercise, not just in training an AI, but in better understanding their own internal complexity.
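
I don’t know the technical details of that organisation’s build, but the general pattern it describes, often called retrieval-augmented generation, can be sketched roughly as follows. The documents, the crude keyword “retrieval” and the generate() stub are all placeholders; the point is that the model is asked to answer from material retrieved out of the organisation’s own records, rather than from whatever patterns it absorbed during training.

```python
# A minimal sketch of grounding a generative model in an organisation's own
# documents. Everything here is illustrative: the documents, the keyword-overlap
# "retrieval" and the generate() stub stand in for far more sophisticated parts.

internal_documents = [
    "Contract 2023-17: support services are reviewed every quarter.",
    "Annual report: client enquiries rose 12 per cent in the last financial year.",
    "Policy note: all advice must cite the relevant internal guideline.",
]

def retrieve(question: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by crude keyword overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def generate(prompt: str) -> str:
    """Placeholder for a call to a generative model."""
    return f"[model answer based on a prompt of {len(prompt)} characters]"

question = "How often are support services reviewed?"
context = "\n".join(retrieve(question, internal_documents))
answer = generate(f"Answer using only this context:\n{context}\n\nQuestion: {question}")
print(answer)
```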

Meanwhile, more traditional symbolic AI systems excel in domains that require clear rules, traceability, and repeatability. But they lack flexibility. They don’t improvise. They’re not designed to handle ambiguity.

Too often, policymakers and tech executives conflate these very different systems under the same “AI” umbrella, leading to overpromises and misapplications.

The question that should come first

Before we even get to methods or tools, there’s a more fundamental question we should ask: what convinces you that any form of AI is the right answer here?

We’ve seen this play out in education, where AI is suddenly touted as the fix for teacher shortages or student disengagement — without any serious interrogation of whether the real problems are social, not technical. Or in policing, where predictive models are used without accounting for bias in the training data. In such cases, AI doesn’t solve the problem — it reinforces or obscures it.

In fact, part of the allure of AI may be its vagueness. It gives the appearance of innovation without the discomfort of institutional reform. But if we apply the wrong kind of AI — or apply it where no AI is needed at all — we risk not just wasting resources but entrenching flawed systems under the guise of progress.

Closing thoughts

AI is not magic. It’s a set of tools — some probabilistic, some deterministic — that require clarity, context, and critical evaluation. Asking “what can AI do?” is the wrong place to start. We should begin by asking: what problem are we trying to solve, and do we understand it well enough to even pick the right tool?

Until that becomes the default approach, the public debate around AI will continue to swing between utopian hype and dystopian fear — with little grounding in how these systems actually work.

Paul Budde
