Clean War, Dirty Hands
- Samir Charabat

Imagine you could stop a dangerous person, someone credibly planning to harm hundreds of civilians, without putting a single soldier at risk. No boots on the ground. No pilot in a cockpit. Just a remotely controlled aircraft, a camera feed, and a missile fired from thousands of miles away. Clean. Precise. Safe for your own people.
Now imagine that the person you killed never had a chance to defend themselves in court. That the intelligence turned out to be partially wrong. That three civilians died alongside the target, and nobody in the chain of command can quite explain who, specifically, made the final call.
Was it the right thing to do?
This is not a hypothetical. Governments have been making this exact decision quietly, routinely, and with remarkably little public debate for over two decades. And depending on which moral framework you apply, the answer changes completely.
Two Ways of Thinking About Right and Wrong
Most of us, when we try to work out whether something is ethical, use one of two instincts without necessarily realizing it.
The first instinct is about outcomes. An action is right if it produces good results: if more people are better off, if suffering is reduced, if the net effect is positive. This is the core of a philosophy called utilitarianism, developed by British thinkers Jeremy Bentham and John Stuart Mill in the 18th and 19th centuries. Utilitarians are, at heart, moral accountants. They weigh costs against benefits. The right choice is whichever one leaves the world in a better state.
The second instinct is about rules and dignity. Some things are just wrong, regardless of the outcome. You cannot execute an innocent person even if doing so would prevent riots. You cannot torture someone even if the information you extract might save lives. This is the tradition of deontological ethics, most associated with the German philosopher Immanuel Kant. Deontologists believe every human being has inherent worth that cannot be traded away for a good result. There are duties that hold absolutely, no matter the consequences.
Both traditions are serious, both are centuries old, and both have shaped law, politics, and warfare in profound ways. They just happen to reach opposite conclusions when a missile is fired from a drone at a target 7,000 miles from the person who authorized the strike.
The Utilitarian Argument: Drones Are the Humane Option
If you think morality is about minimizing harm, the case for drone warfare is surprisingly strong.
Ground operations, sending soldiers into hostile territory, are bloody and chaotic. Bombing campaigns destroy infrastructure and kill indiscriminately. By comparison, a drone can circle a target for hours, gathering intelligence, waiting for the moment when civilians are absent, and striking with a precision that no previous weapons technology could match. Fewer soldiers die. Fewer civilians die. The threat is removed.
From a strictly utilitarian point of view, if this calculus holds, then drones are not just acceptable. They might actually be the most ethical military option available. Refusing to use a more precise weapon when a less precise one would kill more people does not sound principled. It sounds reckless.
There is also a broader strategic argument. Drone programs, their defenders say, disrupt dangerous organizations and degrade their leadership without the years-long ground occupations that have historically cost enormous numbers of lives on all sides.
The utilitarian case, in short, is this: look at the numbers. Drones save lives. A moral framework that cares about outcomes should care about that.
The Deontological Answer: Some Things You Simply Cannot Do
Kant would not have been impressed by the body-count argument.
For a deontologist, the moment you start justifying killing by running the numbers, you have already made a catastrophic moral error. You have treated human beings, their lives, their dignity, their right to a fair process, as variables in an equation. And that, Kant argued, is the root of most of the worst things human beings do to each other.
Deontological ethics applied to warfare produces several firm, non-negotiable rules. You must know, not estimate, not infer from behavioral patterns, that your target is actually a combatant. You must give people, wherever possible, the opportunity to face the charges against them. You cannot kill civilians even when doing so would make the mission cleaner.
And there is a subtler point, one that sounds almost old-fashioned but carries real weight: a soldier has traditionally shared in the risk of the violence they inflict. The warrior's willingness to die is part of what separates warfare from murder. It functions as a natural brake. The more dangerous combat is for those who wage it, the more carefully they tend to wage it.
The drone removes that brake entirely. A crew operating a drone from a base on the other side of the world faces no physical danger whatsoever from the person they are about to kill. The asymmetry is absolute. For a deontologist, this is not just an operational detail. It changes the moral character of the act. At some point, "warfare" conducted in total safety by one side starts to look less like combat and more like remote execution.
And execution, especially of someone who has never faced a judge, is something Kant would have called categorically impermissible. Not "inadvisable." Not "problematic." Wrong, full stop, regardless of how dangerous the target was or how many lives the strike might have saved.
Where the Cracks Appear
Here is where good philosophy gets honest: both frameworks run into serious trouble.
Utilitarianism's problem is the data. The entire moral case for drones rests on the claim that they are, in fact, more precise and produce fewer casualties than the alternatives. But the numbers are deeply contested. Government casualty figures have repeatedly been far lower than those compiled by independent journalists and human rights organizations. A moral system that asks you to count the bodies and choose the smaller number is only as reliable as the counting. When governments classify the evidence and define casualties in self-serving ways, the utilitarian foundation quietly crumbles.
There is also a subtler, more damaging problem. Because drone strikes carry almost no political cost, no soldiers lost, no public sacrifice, no formal declaration of war, they make it far easier for governments to choose violence. And if low-cost war is used more frequently, the total amount of killing may actually increase, even if each individual strike is more precise than what came before. The utilitarian math can flip entirely once you account for how the technology shapes behavior over time.
Deontology's problem is rigidity. An absolute prohibition on any action that risks civilian harm sounds morally serious, but it can become a form of paralysis. In a world where doing nothing also has consequences, where inaction allows atrocities to proceed, a framework that forbids all morally imperfect choices can end up producing worse outcomes than a careful, constrained use of force would have. Deontology is also better at identifying moral wrongs than at offering practical guidance to people who must make decisions under uncertainty and with imperfect information.
The Question Neither Can Fully Answer
What drone warfare really exposes is a gap in both traditions, a moral blind spot that neither Bentham nor Kant could have foreseen.
When a drone kills someone, the decision behind the strike has passed through a chain of people: intelligence analysts, lawyers, military commanders, politicians, and software engineers. The person who pressed the button was following a targeting package prepared by people who may never know the outcome. The politicians who authorized the program may never be told who specifically died. The public that funds it has little access to what is actually being done in its name.
Both utilitarianism and deontology were designed for a world where a moral agent makes a choice and owns its consequences. Drone warfare is designed, almost architecturally, to distribute that responsibility so broadly that no one ever fully owns anything. The utilitarian cannot run the cost-benefit analysis because the costs are hidden. The deontologist cannot assign blame because the chain of command is deliberately opaque.
This might be the most important thing philosophy has to tell us about drone warfare. Not which strikes were justified and which were not, but that the entire system has been structured to make those questions nearly impossible to answer. Moral accountability requires visibility. And one of the most consistent features of remote, algorithmic warfare is that it happens in the dark.
The real ethical failure may not be any single decision. It may be the construction of a system designed so that the hardest questions never have to be asked at all.