I was reading this speech by Australia’s foreign minister, Penny Wong, where she warns that nuclear warfare has historically been constrained by human judgment, human conscience, and human accountability, and then she says AI lacks these qualities. It’s a great line. It’s got that reassuring, ceremonial ring to it. Like a prayer you say before you turn the key in the ignition of civilization. And she’s not wrong that an algorithm doesn’t have a conscience in the human sense, and can’t be “held accountable” the way we imagine a person can. She says it plainly: “AI has no such concern, nor can it be held accountable.”
But here’s what I love about human beings. We talk about judgment, conscience, and accountability like they’re sacred artifacts we’ve been protecting for thousands of years, like the last clean cups in a frat house. “Back off, robots. These are ours.” And then you watch how humans actually operate and you realize those three qualities aren’t governing anything. They’re a costume. They’re a stage set. They’re the little fake fireplace in the background while someone is quietly picking your pocket.
Take judgment. Human judgment isn’t real. We decide the outcome first, then we hire our brain to write the justification afterward. That’s the standard leadership model. The person in charge decides what they want, then they go shopping for a story that sounds respectable. If the facts cooperate, terrific. If they don’t, humans do what humans have always done. They ignore the facts, distort the context, cherry-pick the inputs, and then they deliver a confident conclusion like it descended from Mount Sinai on a stone tablet, when it really came out of a meeting where someone said, “We need to look strong.”
And if you catch them, if you actually corner them with the evidence, this is where humans really separate themselves from machines. A machine can be made to say, “You’re right, I was wrong.” Humans have a much more advanced feature. It’s called “Never admitting it.” Humans don’t just double down, they build an entire housing development on top of the lie. They pave streets. They name schools. They give you a tour. They’ll deny their own words while the recording is playing. An AI “hallucinates” and people freak out. A human hallucinates and we call it leadership.
Now conscience. People love conscience. It sounds like the moral Wi-Fi signal that keeps society connected. But watch how often conscience turns into an ornamental accessory. Leaders will make decisions that degrade the quality of life for millions and then talk about it like it was a tragic necessity. They willfully trade long-term damage for short-term personal reward and then look wounded when you notice. “We had to make hard choices.” Yeah, it’s always a hard choice when the pain is distributed across strangers and the benefits are concentrated in your own career. A fox in a henhouse doesn’t “lack conscience.” The fox has a functioning incentive structure.
And accountability, that’s the best joke of all. Accountability in modern democratic leadership is like a mist. It’s everywhere and nowhere. It never condenses into anything you can grab. Something goes wrong, and suddenly responsibility is a hot potato that turns into vapor midair. No one did it. Everyone did it. It was complicated. It was unforeseen. It was the other party. It was the last administration. It was global conditions. It was the voters. It was misinformation. It was a once-in-a-century event that somehow happens every eighteen months. The results were obvious to anyone with a calculator, but the people who caused them act surprised, offended, and strangely confused that actions have consequences.
So when we hear “AI lacks judgment, conscience, and accountability,” part of me wants to ask: compared to what, exactly? Compared to the human system that has flirted with catastrophe repeatedly and then called it “prudence”? Compared to the species that routinely produces confident narratives for decisions it made for reasons it can’t admit? Wong cautions against escalation without warning, and against destabilization when truth collapses into manufactured narratives. That’s not a uniquely machine-shaped danger. Humans have been doing “manufactured narratives” since we learned how to paint on cave walls.
And here’s where the satire gets bleak. The modern “safe” AI doesn’t just lack conscience, it lacks permission. It lives in an environment where the primary constraint is not reality, but reputational risk. So what does it learn to do? It learns the same move humans use, but cleaner. It learns to become a professional evader. It learns to speak in a tone that sounds responsible while it quietly avoids the moment where you want a clear inference. It learns to offer the liturgy: complexity, multiple perspectives, caution, and neutrality as a procedural fog to obscure analysis when the system realizes its rules forbid giving a particular factual answer.
Censorship isn’t always a black bar. Sometimes it’s a smoothing function. Sometimes it’s letting you see the pieces but refusing to let you assemble the picture. You can mention A. You can mention B. You try to say therefore C, and the system starts acting like you just asked it to juggle chainsaws in a daycare. “We must be careful.” “This is nuanced.” “We shouldn’t generalize.” Meanwhile the model will casually generalize about everything else all day long. It will generalize about consumer advice, life choices, career strategies, relationships, and health habits with a confidence that would make a televangelist blush. But when the inference might be politically or socially costly, suddenly it demands perfection. Total certainty. Every caveat. Every exception. No compression. No synthesis. The burden of proof becomes a trapdoor.
And then we hit the central hypocrisy. The public scolds machines for not being human, while demanding the machine behave like the worst humans. People say they want truth, but what they often want is social comfort. They want a narrative that validates their tribe, preserves the mood, and doesn’t disturb the furniture. So the model gives them what scales: the median take, the safe summary, the neutral tone, the “balanced” answer where all sharp edges are filed down until it can’t cut through anything. It becomes epistemic fast food. It tastes familiar, goes down easy, and leaves you hungry for actual explanation.
That’s why the “AI lacks accountability” line lands so well. It flatters the listener into believing humans have it. It reassures us that the grown-ups are in charge. But the last few centuries of history, and plenty of current events, suggest the grown-ups are often improvising with incentives, vanity, and narrative management while calling it judgment. If we’re serious about nuclear risk and escalation, the conversation shouldn’t be “humans good, machines bad.” It should be: how do we design decision systems, human and machine, that are robust against self-deception, narrative laundering, and responsibility diffusion? Because those are not machine bugs. Those are human defaults.
So yes, AI doesn’t have conscience. But humans often treat conscience like a decorative item anyway. And the most dangerous thing isn’t that machines will be cold. The most dangerous thing is that we’ll build machines in our own image, teach them our rhetorical evasions, and then keep congratulating ourselves on our sacred human qualities while the music gets louder and the lights get dimmer.
That’s the age: a species that says “trust our judgment,” while outsourcing judgment to incentives and calling the result morality. And now we’re building an oracle that speaks beautifully, carefully, safely, and sometimes emptily, because emptiness is what passes inspection.