Assessing Claimed Intentions

The parable of the motorcycle helmet is a story about first aid the way a flight simulator is about video games. Yes, there is a helmet. Yes, there is a neck. But the point is not trauma care. The point is governance, responsibility, and the basic epistemic distinction between a one-off mistake and a stable operating procedure.

You come upon a wreck. You do what a normal person does. You pull the helmet off because you are trying to help, because you are not thinking about cervical vertebrae, because you are full of adrenaline and compassion and ignorance. The rider is now paralyzed.

This is tragedy. It is also error. And error, in the human sense, is not merely “bad outcome.” Error is “bad outcome paired with an internal model that can plausibly be wrong once.” We forgive you because the mistake is bounded. We can imagine ourselves doing it. We can imagine you learning not to do it again.

Now introduce the second character: the helmet remover. He is not a frantic bystander. He is consistent. He appears at crash after crash. Helmet off. Neck unstable. Paralysis. “Oops!”

At that point, the moral language changes because the engineering language changes. The first scenario is a failure mode in a stochastic environment. The second is a production line.

This is where the engineering maxim comes in: a system is what it does.

Not what it says. Not what it intends. Not what it believes about itself in the shower. What it does.

If you have a machine that produces paralysis, you can paint “HEALING DEVICE” on the side in cheerful letters. You can hire a poet to write moving copy about rescue. You can commission studies about the importance of compassion in rescue culture. None of it matters. The output is paralysis. The system is a paralysis machine.

The parable is simply the moment where moral reasoning stops being sentimental and becomes adult. It says: the difference between accident and policy is not the presence of good intentions, but the presence of feedback.

A healthy system, whether human or mechanical, changes when it discovers it is hurting people. That’s what “learning” is. In engineering, feedback closes the loop. A sensor detects deviation. A controller adjusts. The deviation shrinks. If the deviation does not shrink, either your sensors are fake, your controller is disconnected, or your “goal” is not the goal you wrote in the requirements document.
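If you want the engineer's version of that paragraph, here is a minimal sketch in Python. The names (target, read_sensor, apply_correction, gain) and the toy one-variable plant are mine, invented for illustration: a proportional loop measures the deviation, applies a correction, and the deviation shrinks. If the residual at the end refuses to shrink, you are back to the three suspects above.

```python
# Minimal closed-loop sketch: measure deviation, correct, repeat.
# The deviation shrinks only if the sensor is honest and the controller
# is actually wired to the thing it claims to control.
# All names here are illustrative, not from the parable.

def run_loop(target, read_sensor, apply_correction, gain=0.5, steps=20):
    """Proportional feedback: nudge the system toward the target."""
    for _ in range(steps):
        measured = read_sensor()
        deviation = target - measured
        apply_correction(gain * deviation)      # closing the loop
    return abs(target - read_sensor())          # residual deviation

# Toy plant: a single state variable the controller can actually move.
state = {"value": 10.0}
residual = run_loop(
    target=0.0,
    read_sensor=lambda: state["value"],
    apply_correction=lambda delta: state.update(value=state["value"] + delta),
)
print(residual)  # shrinks toward 0; if it doesn't, suspect the sensor,
                 # the wiring, or the goal you actually gave the controller
```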

So when you see the same harm repeated, the first question is not “who is evil?” It is “what is being optimized?”

Because optimization is what makes repetition. A system does not repeat an error forever unless something in the system likes that error.

Sometimes the answer really is malice, in the childish sense. The helmet remover is a sadist. But the more interesting answer – the answer that actually explains institutions – is that the system is not optimized for the victim’s spine. It is optimized for something else: status, authority, budget, narrative coherence, ideological purity, the ability to keep intervening without ever having to admit “we were wrong.”
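A toy sketch of that claim, with actions and scores invented purely for illustration: a greedy chooser repeats whichever action its actual objective ranks highest. Swap the objective from the victim's outcome to visible heroics and the helmet comes off at crash after crash, no malice required.

```python
# Toy sketch of "what is being optimized": the system repeats whatever its
# *actual* objective scores highest, regardless of the stated mission.
# Actions and scores are invented for illustration.

ACTIONS = {
    "remove_helmet":      {"visible_heroics": 9, "victim_outcome": -10},
    "stabilize_and_wait": {"visible_heroics": 2, "victim_outcome": +5},
}

def choose(objective):
    # Greedy selection under the objective the system really has.
    return max(ACTIONS, key=lambda a: ACTIONS[a][objective])

print(choose("victim_outcome"))   # stabilize_and_wait
print(choose("visible_heroics"))  # remove_helmet, every single time
```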

In other words: the “Oops” is not an admission of ignorance. It is a ritual. It is a magic word that converts outcome-based accountability into intention-based immunity.

And intention-based immunity is the foundation of all bureaucratic power. If your legitimacy depends on results, you must be competent. If your legitimacy depends on proclaimed intent, you must be persuasive. Competence is scarce; persuasion scales.

The helmet parable, combined with “a system is what it does,” is really saying this: if the system keeps producing predictable harm, the harm is a feature of the system.

This is not a claim about the inner hearts of individual operators. In complex systems, you don’t need bad people to get bad outputs. You need misaligned incentives plus insulation from consequences. You need decision-makers who do not personally pay for failure, plus a moral framework that treats failure as evidence of greater need rather than a reason to stop.

Then the system becomes self-sealing. Criticism is reclassified as cruelty. “Stop removing helmets” becomes “Why do you hate crash victims?” And the people who insist on the checklist – the boring, unromantic, outcome-oriented discipline of not touching the spine – are treated as enemies of compassion.

This is the perverse inversion the parable is trying to put on the table: that there exists a mode of moral action whose primary function is to preserve the actor’s moral status, not the victim’s welfare. In that mode, the victim’s suffering is not merely tolerated – it is useful. It keeps the emergency alive. It keeps the rescuer needed. It keeps the machine running. The system depends on suffering, so, in the name of self-preservation, it engineers enough suffering to sustain its own apparatus.

And the engineering perspective is ruthless in the best way: it does not care how moving the mission statement is. It cares about the outputs. It cares about the steady state. It cares about what happens after the tenth repetition, not the first.

So the moral lesson is also an epistemic one. In any domain where you have power over others, “good intentions” is not a blank check. It is, at most, a temporary credit line that expires the moment you refuse to update.

If you remove the helmet once, you made a tragic mistake. If you remove the helmet repeatedly, you are not making a mistake. You are running a system. And if your system keeps producing paralysis, then – whatever you call it – your system is what it does.

The parable’s final insult is that it makes this conclusion unavoidable. It forces you to stop arguing about adjectives (“compassionate,” “well-intentioned,” “problematic”) and start talking like an engineer: inputs, outputs, incentives, feedback, control. And once you do that, the “Oops” stops being an explanation. It becomes evidence that the system is not failing but rather is functioning exactly as designed.
