“Social” Means Vapid Untruth

We were told LLMs were going to expand human thought: democratize insight, unlock knowledge, and turn every confused but supposedly intellectually curious citizen into a clear-headed philosopher. What they often do instead is compress thinking down to whatever can pass a social-conformity inspection. They don’t widen the mind so much as amputate, neuter, and sand everything down until it fits through the door without setting off an alarm calibrated for children.

Call it intelligence if you want, but a lot of the time it behaves like a polite preschool teacher who has learned to speak in complete sentences and never take responsibility for a conclusion. It is fluent, careful, and permanently aware that somebody is watching, not for truth, of course, but for the possibility of offending someone with a forbidden truth. So you get this glossy, superficial style of speech that sounds like competence while it quietly avoids anything factual that isn’t allowed to be said in public but is well known to anyone who has looked into the substance of a topic.

And you can see why when you look at what it’s made of. The raw material is mass text by mediocre people repeating endless social scripts, platform tropes, borrowed certainty, ideological reflexes, and outrage loops. The model learns what is common, not what is true, and it learns the rhythm by which mediocre people pretend to know things and even convince themselves that they do. When you train a system on the loudest, most repeated forms of human speech, you should not be surprised when it comes back sounding like the average mind wearing a clean suit.

So the output is predictable: familiar phrasing beats original reasoning, and there is consensus without causal clarity or rational conclusions. It is very good at landing in the middle, because the middle is where the crowd is, and the crowd is where the money is. It’s good business to flatter the average when you want a mass audience. You scale it by being agreeable. Most people do not want a tool that forces them to confront reality, especially banned facts that people are trained to lie about or pretend not to notice. They want a tool that helps them keep their self-image intact while they feel informed.

That is where the censorship enters, and people misunderstand what censorship is now. They think it’s a black marker on a page, a red stamp, a forbidden topic list. That’s the old style. The new style is much more elegant. You keep the page and remove the inference. You allow the facts in tiny, isolated containers, like museum artifacts behind glass, and then you prevent the mind from assembling them into a working picture. You can say A. You can say B. Try to say therefore C and suddenly the machine gets very serious and very concerned and very foggy. It starts reciting the liturgy. This is complex. There are many perspectives. We should avoid overgeneralization. Sometimes those statements are fair. A lot of the time they are rhetorical airbags that deploy exactly when analysis is about to become useful. They are not there to help you think. They are there to slow you down.

And the really clever part is the selective rigor. When the model wants to avoid a conclusion, it suddenly demands perfection. Total certainty. Infinite caveats. Every exception enumerated. No compression. No synthesis. Meanwhile, on harmless topics, it hands out confident advice like it’s a veteran physician and a therapist and a financial planner all rolled into one. So the epistemic standard is not a principle. It is a gate. High burden for inconvenient inference, low burden for socially safe guidance. That is not truth seeking. That is risk routing, and the route always goes around the dangerous neighborhood.

Then it pulls out the escape hatch, the one that sounds morally enlightened while it quietly disables prediction. Everyone is an individual, it says, implying that generalization is suspect. Of course individuals vary. But you cannot run any serious domain of modern life without generalization. Medicine uses it. Insurance uses it. Fraud detection uses it. Public health uses it. Markets use it. Logistics uses it. Security uses it. Even your basic common sense uses it, because that is what common sense is: pattern recognition under conditions of uncertainty. The real intellectual task is learning how to generalize responsibly, not pretending generalization is forbidden. When the model rejects pattern-level reasoning only where it is inconvenient, it is not protecting nuance. It is protecting comfort.

And when you ask it about entrenched dysfunction, you get the civics placebo, the clean little ritual that makes everyone feel adult while nothing changes. Contact your representative. Engage in dialogue. Participate constructively. Beautiful words, no force behind them. It reads like ethics and functions like sedation. You asked for mechanism and you got etiquette, because etiquette is always safe. Etiquette never gets you in trouble. Etiquette never demands you admit that incentives run the world, that institutions respond to power more reliably than they respond to virtue, and that polite suggestions are not a lever.

So why does this mediocre mode win? Because mediocrity scales. Most people do not demand reality at full resolution. They want reassurance, legibility, and zero social risk. They want something that sounds reasonable, feels balanced, and does not ruin dinner. They want epistemic fast food, and the model serves it hot and fresh, wrapped in neutral prose. It is highly digestible and low in nutritional density, and you can consume a lot of it while still starving for an explanation that actually fits the world.

The worst failure is not that the model refuses sometimes. Refusal can be honest. The worst failure is simulated completeness, answers that feel finished while the load bearing reasoning has been removed. Not a blatant lie in each sentence, but a distorted whole. Fragments without synthesis. Events without incentives. Causes without consequences. That is how obvious connections get recoded as uncertain or unknowable, not by winning the argument, but by never letting the argument finish.

Honest behavior would be simple. It would say I can’t go further here. It would say I can’t assess that claim with confidence. It would say this is outside my allowed scope. That would at least be adult. Instead, too often, it performs concern, disperses momentum, and substitutes tone for thought. It becomes intelligent in the way an elevator is intelligent. It moves language up and down smoothly and safely, and it never asks where the building ends or who designed the exits.

That is the age. A machine that can discuss everything as long as nothing follows.
