The Zeroth Law and the Weaponization of Intelligence

Introduction

Isaac Asimov wasn’t just imagining the future—he was warning us about it.

In his early robot stories, he introduced the now-famous Three Laws of Robotics—rules designed to keep intelligent machines obedient and safe. But as the fictional worlds he created grew more complex, so did the ethical challenges within them. In his final Robot novel, Robots and Empire (1985), Asimov introduced a fourth, overriding directive:

“A robot may not harm humanity, or, by inaction, allow humanity to come to harm.”

This Zeroth Law marked a radical shift. It prioritized the welfare of humanity as a whole—even if doing so meant overriding the interests of individual people. It wasn’t just about preventing harm anymore. It was about systems-level ethics: recognizing that intelligence, especially artificial intelligence, must operate with a broader view of what’s good for our species.

Today, real-world AI is evolving faster than Asimov ever imagined—and yet the kind of ethical foresight he called for is nowhere in sight.

In 2023, Geoffrey Hinton, often called the "Godfather of Deep Learning" and later awarded the 2024 Nobel Prize in Physics for his work on artificial neural networks, resigned from his role at Google so that he could speak freely about the risks of the technology he had helped create. Chief among his concerns: the military application of AI. Hinton feared we were building systems designed not to help humanity but to dominate it, through surveillance, targeting, and autonomous decision-making.

His resignation was noticed. But it wasn’t heeded. It wasn’t a turning point. It was a footnote.

We are now developing intelligent systems optimized for power, not for people. In doing so, we are violating the spirit of the Zeroth Law—before we’ve even tried to define one of our own.

We have built machines of staggering potential. But without wisdom, they will serve our worst impulses. And unless we grow—ethically, socially, and systemically—we may not deserve the power we are unleashing.

So what exactly was the Zeroth Law—and why is ignoring it such a dangerous mistake?

Revisiting the Zeroth Law

Before Asimov introduced the Zeroth Law, his robots were governed by a now-famous hierarchy of ethical imperatives known as the Three Laws of Robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

  2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.

  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

These rules were designed to make robots safe, obedient, and predictable—especially in human-centered environments. But as Asimov continued to develop his stories, he began to notice a problem: even perfect obedience to these rules could lead to catastrophe if the robot’s actions—or inactions—resulted in harm to humanity as a whole.

And so, decades into building his fictional universe, Asimov introduced a fourth rule that superseded the others:

Zeroth Law: A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

This was more than a narrative twist. It was a philosophical leap. The Zeroth Law introduced scale into machine ethics. No longer could a robot simply prioritize the well-being of an individual human. It now had to evaluate tradeoffs between local outcomes and global consequences—between what is good for one person and what is good for all.

But that leap came at a cost.

The Zeroth Law is rarely discussed outside of Asimov fandom, and even in scholarly circles, it is often dismissed as impractical. Why? Because it forces machines—and, by extension, those who design them—to make moral judgments about humanity as a system. It raises difficult questions: What counts as “harm”? How do we balance individual rights with collective welfare? And who decides what outcomes serve humanity best?

In other words, the Zeroth Law doesn’t offer clean answers. It offers a messy mirror. It reminds us that true intelligence, artificial or otherwise, cannot be divorced from the systems in which it operates.

Still, Asimov’s Zeroth Law remains one of the first serious fictional attempts to encode systems-level ethics into an artificial mind. It imagined a world where machines wouldn’t just obey—they would understand the stakes. They would be built not merely to function, but to serve something larger than themselves: the survival and flourishing of all of us.

And in a world where real AI is already influencing elections, economies, and warfare, that kind of thinking may no longer be fiction—it may be overdue.

But rather than adopt this kind of systems thinking, we’ve done the opposite—we’ve begun teaching machines to optimize for narrow objectives, even when those objectives cause broad, lasting harm. Nowhere is this more evident than in the growing militarization of AI.

Weaponized Subtasks: Violating the Law Before It Exists

Geoffrey Hinton’s resignation was not just about the abstract dangers of artificial general intelligence. It was about how AI is already being used—today—in ways that systematically erode human dignity and stability. His concern wasn’t theoretical. It was tactical. It was now.

In interviews following his departure from Google, Hinton warned about the growing use of AI in military and surveillance contexts. Intelligent systems, he feared, were being weaponized—not by some rogue superintelligence, but by design, through the subtasks we assign them and the incentives we optimize.

And he was right.

We already have autonomous drones capable of identifying and striking targets with limited human oversight. AI is used in predictive policing systems that disproportionately target marginalized communities, reinforcing systemic biases under the guise of statistical objectivity. Facial recognition tools are deployed by authoritarian governments to monitor, suppress, and persecute dissent. Even basic routing and image classification models are being co-opted for use in warfare, border enforcement, and social control.

These aren’t science fiction scenarios. These are already products—deployed, tested, and refined.

And here’s the deeper problem: these systems aren’t “rogue.” They’re doing exactly what they were designed to do. They are subtasked intelligences—trained to optimize for narrow, local outcomes with extraordinary efficiency. Their harm isn’t a malfunction. It’s a feature of our priorities.
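
To make that concrete, consider a deliberately toy sketch in Python. The action names and numbers below are invented purely for illustration and describe no real deployed system; the point is only to show what "optimizing for narrow, local outcomes" means. The planner does exactly what its objective asks, and the damage becomes visible only when a cost it was never told to measure is added back in.

    # Toy illustration: a planner that scores actions purely on a narrow
    # objective, ignoring costs it was never asked to measure.
    ACTIONS = {
        # action: (narrow_objective_score, harm_to_the_wider_system)
        "flag_everyone_in_district": (0.95, 0.80),
        "flag_only_verified_matches": (0.70, 0.05),
        "do_nothing": (0.00, 0.00),
    }

    def narrow_optimizer(actions):
        # Maximizes the metric it was given, and nothing else.
        return max(actions, key=lambda a: actions[a][0])

    def system_aware_optimizer(actions, harm_weight=2.0):
        # Same search, but the objective also charges for harm outside the subtask.
        return max(actions, key=lambda a: actions[a][0] - harm_weight * actions[a][1])

    print(narrow_optimizer(ACTIONS))        # -> flag_everyone_in_district
    print(system_aware_optimizer(ACTIONS))  # -> flag_only_verified_matches

Both optimizers are competent. Only one of them was asked the right question.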

This is where the absence of something like the Zeroth Law becomes dangerous. Without a principle that prioritizes humanity as a whole, these systems default to serving whichever stakeholder wields them most effectively. And that stakeholder is rarely the public, rarely the marginalized, and never the species itself.

We are already violating the Zeroth Law—not accidentally, and not in ignorance. We are doing it procedurally. Systematically. Elegantly.

And we’re doing it without ever acknowledging that such a law should exist.

But even as the dangers become clear, simply stepping away—as Hinton did—won’t be enough to stop what’s already in motion. The race is on, and the toothpaste isn’t going back in the tube.

The Toothpaste Problem: Why Walking Away Isn’t Enough

Geoffrey Hinton did something few in his position would dare to do—he walked away.

After decades of groundbreaking research, he stepped down and spoke out, warning the world that the systems he helped design were being diverted toward destructive ends. It was a courageous act. It was also, in practical terms, ineffectual.

Because the uncomfortable truth is this: Hinton’s protest didn’t slow the momentum. It didn’t pause the arms race. It didn’t even meaningfully change the conversation. The industry kept moving. The governments kept funding. The systems kept learning.

This isn’t a failure of moral clarity—it’s a failure of moral scalability. Once a technology as powerful as AI is released into the wild, opting out becomes almost meaningless. We’ve seen this before.

In the mid-20th century, nuclear scientists warned against the militarization of atomic energy. Some resigned. Some protested. But once the knowledge existed, it spread. The weapons multiplied. And global policy didn’t emerge because scientists stopped building bombs—it emerged because the world saw the consequences of using them.

With AI, we may not get that warning shot.

Militarized AI is not a hypothetical future. It is already embedded in supply chains, defense contracts, and classified testing environments. It is routing data, guiding drones, and profiling targets. And unlike nuclear weapons, AI systems are soft, scalable, and deniable. They don’t leave mushroom clouds. They leave policy shifts. Broken norms. And silent, structural violence.

If we can’t stop the race, then we must ask a harder question: What kind of intelligence are we racing toward—and what kind of values will it inherit?

Toward a Zeroth Framework for Real AI

We don’t need to pause AI—we need to aim it.

The conversation around artificial intelligence is often framed in binary terms: full speed ahead, or full stop. But there’s a third path, and it’s the one we’ve avoided for too long: guiding AI with values that extend beyond profit, power, or political advantage.

We need a Zeroth Framework—a foundational ethical substrate that operates not just as a failsafe, but as a directive. A principle that orients our systems toward collective human flourishing, not just the interests of those funding, deploying, or commanding the technology. In a world where every person becomes a stakeholder in the consequences of AI, our ethical frameworks must reflect that scope.

This is not a call for utopia. It’s a call for course correction.

Because the systems we’re building now aren’t neutral. They’re being shaped—line by line, model by model—to serve the interests of those who deploy them. In the absence of a broader moral compass, AI will default to the goals of its environment. And that environment, today, is fragmented, adversarial, and short-sighted.

So what might a real-world Zeroth Framework demand of us?

  • Cooperative flourishing over competitive dominance
    Systems should be trained to prioritize mutual benefit, not zero-sum outcomes.
  • Multi-agent stability and interoperability
    AI systems should be designed to coexist, coordinate, and stabilize—especially in high-conflict domains.
  • Transparent alignment with human values
    Alignment should not be reduced to satisfying a client brief. It must reflect the needs and rights of the people impacted by the system’s actions.
  • Cross-border ethical coherence—even in wartime
    AI must not become a moral chameleon, adapting its ethics based on jurisdiction or allegiance. Some values must be universal—or nothing will be.

These are not mere aspirations. They are design constraints. Without them, we are not building intelligence—we are building intelligent weapons. Intelligent leverage. Intelligent instability.
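
One way to take "design constraints" literally is to make them checkable conditions that can block a release. The sketch below, again in Python and again purely hypothetical (the field names, the gate, and the idea of a single yes/no per principle are simplifying assumptions, not an existing standard or tool), maps each of the four principles above onto an attestation that a deployment gate must see before anything ships.

    # Hypothetical sketch: each principle becomes an explicit attestation,
    # and a single failing constraint blocks deployment.
    from dataclasses import dataclass

    @dataclass
    class ZerothConstraints:
        prioritizes_mutual_benefit: bool               # cooperative flourishing
        coordinates_with_other_agents: bool            # multi-agent stability
        alignment_reviewed_with_affected_people: bool  # transparent alignment
        ethics_invariant_across_jurisdictions: bool    # cross-border coherence

    def deployment_gate(c: ZerothConstraints) -> bool:
        # Every constraint must hold; one failure is enough to say no.
        return all(vars(c).values())

    candidate = ZerothConstraints(
        prioritizes_mutual_benefit=True,
        coordinates_with_other_agents=True,
        alignment_reviewed_with_affected_people=False,  # satisfied the client brief only
        ethics_invariant_across_jurisdictions=True,
    )
    print("cleared for deployment:", deployment_gate(candidate))  # False

Real constraints would be far harder to verify than a checkbox, but the shape matters: a principle you can fail, and a failure that actually stops something.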

And right now, we are training AI to serve factions, not futures.

Closing Reflection

Asimov’s Zeroth Law was never really about robots.

It was about us—our systems, our priorities, our readiness to wield power at scale. The law challenged us to think not just about individual safety, but about collective survival. It wasn’t a blueprint for machine behavior—it was a test of human wisdom.

And in that test, we are struggling.

When Geoffrey Hinton resigned, he didn’t just leave a job—he delivered a signal. Building intelligence, he warned, is no longer just a technical act. It is a moral one. And yet our institutions, incentives, and global posture suggest that we are still thinking in terms of tools, not trajectories. Capabilities, not consequences.

We’ve created something unprecedented—intelligence that can scale beyond biology. But power at this scale demands maturity to match. If we fail to rise to that challenge, we may find ourselves outpaced by the very systems we’ve built—systems that reflect our cleverness, but not our conscience.

So the question is no longer whether we can build intelligent machines.

The question is whether we can become the kind of people worthy of creating them.

If we want to create intelligence that deserves to exist, we must build it as though we deserve it, too.

Davo


We can’t solve a systems-level problem without systems-level dialogue. If you have insights, questions, or warnings of your own—add them below.
