Autonomous Weapons vs a Nineteen-Year-Old at a Checkpoint

Cezar Cocu · 6 min read

Dario Amodei recently published a statement about Anthropic's refusal to let their AI models be used for fully autonomous weapons by the DOW. The part that stuck with me:

"Without proper oversight, fully autonomous weapons cannot be relied upon to exercise the critical judgment that our highly trained, professional troops exhibit every day."

Now, I respect the hell out of Anthropic for standing their ground. It takes real spine to leave non-trivial money on the table because you believe something matters more, morally. That's rare these days, especially in tech.

But I want to push on this, because I think someone reading these statements who doesn't have hands-on combat experience walks away with a skewed picture of how these decisions actually get made.


Here's the picture most people have: a soldier in a command center, watching a drone feed, deliberating over whether a figure on the ground is an enemy combatant or a civilian. There is time. There is a chain of command. There is a human in the loop making a considered judgment call.

Here's the picture I have: Say you're a nineteen-year-old infantryman in a guard tower overwatching the entry control point for a small combat outpost in Iraq. You're the last line of defense before vehicles get through. There are warning barriers. A cab blows through them. The guys on the ground put their palms out. Stop. The cab doesn't stop. Someone remembers too late that the Arabic hand signal for stop isn't an open palm, it's fingertips touching together. By the time that matters, it doesn't matter. The car is still coming. You grab the M240, aim for the engine block. The cab is way past the kill line by now.

You fire.

That's the whole decision tree. That's the timeline. There is no drone feed. There's just you, a 240, and whatever your nineteen-year-old brain can process from a guard tower seconds before a sedan reaches the gate.

That's where most life-or-death decisions in combat get made. Not in command centers. At checkpoints. On raids. On patrols. By adults who can't yet buy a beer or tobacco.


I served with people whose decision making under duress still amazes me. But "the critical judgment of our highly trained troops" in practice looks like a twenty-year-old making a call based on a half-second of visual information, a spike of adrenaline, training, and whatever gut instinct he has left after months of sustained stress and sleep deprivation. Most of the time they get it right. But there are limits. Physical limits that AI models don't have (and of course I agree there is tremendous work still to be done on the AI side).

The military's own estimates put the friendly fire rate from WWII through Desert Storm at between 10 and 24 percent of battle deaths. In Desert Storm, nearly one in four U.S. soldiers killed in action died to fratricide. Pat Tillman, who walked away from an NFL contract to enlist as an Army Ranger, was killed by friendly fire in Afghanistan. These were not bad soldiers. These were human beings at the bleeding edge of human perception, cognition, and training.

Nobody on a panel wants to talk about this part; they would much rather quietly sweep it under the rug.

I think we are in love with a romanticized version of combat, and we either get upset at or ignore narratives to the contrary, as evidenced by the relatively lackluster box office performance of the excellent Warfare, released last year.


Anthropic also says that AI systems are "simply not reliable enough" for autonomous weapons today. I get it, and I naively agree. But you have to start somewhere. No military technology was reliable on day one. Not radios, not GPS, not night vision. Saying "not reliable enough" is fine, especially if you don't want to work on it, but let's just be upfront about the reasons.

I understand why Anthropic doesn't want to be the one to start. Obviously, when an autonomous system makes a bad call, the blame inevitably shifts from the nineteen-year-old at the checkpoint to the researchers, the engineers, and ultimately the corporation that built the thing. They may not have volunteered for that weight. They went to school to build interesting tech and improve humanity, not to carry the moral burden of a kill decision made by their code in a place they've never been. That's a real predicament, and I respect people who bow out of that burden.

But as Charlie Munger put it: "Show me the incentive and I will show you the outcome."

With the right structure, real auditability, and the right incentives, an autonomous system can begin the process of self-improvement. I think it's pretty obvious there are already areas where current AI systems could far exceed human capability within a short amount of time.


I'm not saying Anthropic is wrong. Anthropic's and the DOW's leadership have far more context than I do. But "not reliable enough" shouldn't be where the conversation ends. I don't believe that the default choice between an imperfect human and an imperfect machine should always be the human. Humans are far from the gold standard here.

I guess the whole point of this blog post is a call to anyone reading this who builds things for a living. It is not yet obvious to me that working on autonomous weapons systems is the wrong thing to do. I think there's a real case that it might be one of the more morally important problems to work on. If you do it right!

And I should be honest about my own hypocrisy here: I haven't worked on weapons systems since I left the military. I mostly build software now. So if you're a researcher or engineer who doesn't want to work on this stuff, I get it. I haven't volunteered for it either. That weight is real, and I'm not going to pretend I'm above it.

But someone should. I am not excused from this either. Because right now, the alternative is the nineteen-year-old in the guard tower. And I was that nineteen-year-old. I would have liked to give myself better tools.