Mary Shelley began writing Frankenstein at 18 and published it in 1818, a novel often reduced to a gothic horror story but, in reality, a profound philosophical critique of scientific ambition. Written in the long shadow of the Enlightenment, the book reflects both the optimism of that era and its blind spots.
Shelley’s story is not simply about a monster. It is about a creator—Victor Frankenstein—who pushes the boundaries of knowledge, succeeds beyond expectation, and then fails at the most critical moment: taking responsibility for what he has created.
That failure proves catastrophic.
Victor Frankenstein does not set out to create evil. His ambition is scientific progress, discovery, even the advancement of humanity. But once confronted with the consequences of his work—the living being he has brought into existence—he recoils. He abandons his creation, refuses accountability, and ignores the ethical obligations that arise from his own success.
The tragedy that follows is not the result of the creature’s existence alone, but of the creator’s negligence.
This central theme resonates with uncomfortable clarity today.
Enlightenment optimism and its limits
Shelley was writing in an era that celebrated reason, science and progress. The Enlightenment belief was that knowledge would lead to human betterment. Scientific advancement was seen as inherently positive, or at worst, neutral.
That assumption still underpins much of modern technological thinking.
We often hear that technology itself is neutral—that it is merely a tool, and that outcomes depend on how it is used. But Shelley’s narrative challenges this idea. Frankenstein’s creation is not neutral in its consequences because it enters a social world unprepared for it, shaped by the conditions of its creation and abandonment.
Technology, in this sense, is never entirely neutral. It is embedded in human systems—economic, political and cultural—and shaped by the intentions and responsibilities of its creators.
Artificial intelligence as a modern parallel
The parallels with artificial intelligence are striking.
AI systems are being developed at extraordinary speed, with capabilities that are still not fully understood. Like Frankenstein’s experiment, they represent a leap beyond incremental progress into something qualitatively different: systems that can learn, adapt, generate and increasingly act autonomously.
Yet the question Shelley raises remains largely unresolved: who is responsible for the consequences?
It would be simplistic to argue that individual engineers or scientists are indifferent to ethics. Many are deeply aware of the risks. The problem lies elsewhere—in the structures that govern technological development.
The dominant incentives in the AI sector are commercial: scale, speed, market dominance and shareholder value. Ethical considerations, while often acknowledged, are secondary to competitive pressures. Companies cannot afford to slow down if their competitors do not.
This creates a systemic version of Frankenstein’s dilemma. The “creator” is no longer a single individual but a network of corporations, investors and governments, each with partial responsibility and limited accountability.
Lessons from social media
We have seen this pattern before.
The rise of social media platforms was initially framed in utopian terms: global connection, democratisation of information, empowerment of individuals. For a time, these benefits appeared real.
But the negative consequences—misinformation, polarisation, erosion of democratic norms, and social harm—were either underestimated or ignored. Even after these harms became evident, meaningful corrective action has been slow, often absent, and frequently constrained by commercial interests.
This is not simply a failure of foresight. It is a failure of responsibility.
The lesson is clear: once a technology is deeply embedded in society, it becomes extremely difficult to regulate or reshape. The moment for ethical intervention is not after widespread deployment, but during development.
The responsibility gap
Shelley’s warning is not anti-science. It is anti-irresponsibility.
Victor Frankenstein’s failure was not that he created something new, but that he refused to engage with the consequences of that creation. He sought the prestige of discovery without accepting the burden of stewardship.
In today’s context, the responsibility gap is institutional rather than individual. It manifests in:
- fragmented accountability across global actors
- regulatory lag behind technological capability
- economic incentives that prioritise growth over caution
- a persistent belief that innovation should proceed first, with governance to follow
This gap is where risk accumulates.
Towards responsible innovation
If Frankenstein still has relevance today, it lies in its insistence that creation and responsibility are inseparable.
For AI, this means embedding ethical considerations into the design, deployment and governance of systems—not as an afterthought, but as a core requirement. It also requires stronger public oversight, international coordination, and a willingness to challenge the assumption that faster innovation is always better.
The alternative is not difficult to imagine. It is already partially visible.
Like Frankenstein’s creation, technologies released without sufficient responsibility do not remain under the control of their creators. They evolve within society, interacting with human behaviour in unpredictable ways.
And when the consequences emerge, it may already be too late to simply “switch them off”.
A 19th-century warning for a 21st-century world
Mary Shelley’s novel endures because it captures a timeless tension: the human drive to create, and the moral obligation to take responsibility for what is created.
In the age of artificial intelligence, that tension has become more urgent than ever.
We are no longer dealing with isolated inventions, but with systems that shape economies, societies and even the nature of truth itself.
The question is not whether we can build them.
It is whether we are prepared to take responsibility for what follows.
Paul Budde
