November 2, 2025

What If the Best Crash Course in Philosophy Was an Anime About Thought Policing?

Author: isnathansmart

You’d be forgiven for dismissing Psycho-Pass as just another dystopian cyber-noir detective anime. I certainly did. But underneath that neon haze lies one of the most compact introductions to moral philosophy, political theory and the psychology of control that I’ve ever seen put on screen.

The question it asks, quietly yet relentlessly, is this:

What happens when morality itself becomes measurable? 

That idea left me feeling very unsettled. We have built systems that count our steps, track our moods and measure our productivity, and we call it progress – and maybe it is. But every metric whispers the same promise: that the messiness of being human can be condensed into cold, hard data. Psycho-Pass simply took that promise seriously.

Author’s note:

I’m going to talk about morality a lot here, so I figured I should probably tell you what I mean by it.

Morality, at least as I use it, isn’t a list of concrete rules. It’s the framework we build to decide what should matter.

For Kant, it was duty. Doing what’s right even when it costs you.

For Bentham and Mill, it was outcome. Choosing whatever brings the greatest good to the greatest number.

For Sartre and Camus, it was creation. Shaping meaning for ourselves, one decision at a time.

To me, morality is how we translate our beliefs into action: the bridge that links what we think and what we do. The danger, as I hope you’ll see below, is what happens when that bridge gets automated, and the system starts to decide which of our values are worth keeping.

1. What if your brain is just middle management for the soul?

Imagine that within your brain sits a studious civil servant, stamping approvals on every passing thought before it becomes action. This rather crude metaphor is the Sibyl System of Psycho-Pass: a bureaucrat for the soul, quantifying our inner states in the name of public safety. It’s a fictional device built from a real philosophical ambition, the dream of turning ethics into data.

Can happiness really be measured?

Around two centuries before we created predictive algorithms, Jeremy Bentham proposed something he called the “hedonic calculus”. If pleasure is good and pain is bad, he reasoned, then moral decisions could essentially be worked out as a cost/benefit equation. Total up the pleasures, subtract the pains, and choose the option that leaves you with the highest net happiness.

Bentham even suggested the variables we should use for this: intensity (how powerful is the pleasure or pain?), duration (how long will it last?), certainty (how likely is it to actually occur?), propinquity (how soon will it arrive?), fecundity (will it produce more of the same?), purity (how free is it from any pain that follows?) and extent (how many people will be affected?).

Add these up, feed them into the equation, and just like that – you’ve turned morality into mathematics.
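To make the arithmetic concrete, here’s a minimal sketch of the calculus in Python. The seven variables are Bentham’s; the scoring formula, the weights and the numbers are my own invention for illustration, since he never specified a canonical scale.

# A toy hedonic calculus. Bentham named the seven variables;
# the formula and the example scores are invented for illustration.
from dataclasses import dataclass

@dataclass
class Outcome:
    intensity: float    # how powerful is the pleasure?
    duration: float     # how long will it last?
    certainty: float    # how likely is it to actually occur? (0..1)
    propinquity: float  # how soon does it arrive? (0..1, sooner = higher)
    fecundity: float    # expected follow-on pleasures
    purity: float       # expected follow-on pains (subtracted)
    extent: float       # how many people are affected?

def hedonic_score(o: Outcome) -> float:
    """Net happiness: expected, time-discounted pleasure scaled by reach."""
    base = o.intensity * o.duration * o.certainty * o.propinquity
    return (base + o.fecundity - o.purity) * o.extent

# The "moral" choice is simply whichever option scores highest.
options = {
    "build a motorway": Outcome(2.0, 10.0, 0.8, 0.5, 1.0, 2.0, 50_000),
    "fund the hospital": Outcome(3.0, 20.0, 0.9, 0.7, 2.0, 0.5, 20_000),
}
print(max(options, key=lambda name: hedonic_score(options[name])))

Notice where all the philosophical difficulty has gone: into the inputs. Whoever sets those numbers is the real moral authority.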

Where do we already run the calculus?

We see this in action even today. In public policy, it appears when deciding whether to build a new motorway or give a hospital more funding: plug in the data for lives saved, jobs created and emissions produced, and we arrive at a “social utility” score. It’s the same calculus, scaled up to a nation. It’s even in our phones: every notification, every autoplay feature, every like button is tuned to elicit the maximum amount of engagement possible. Measured pleasure and reward in real time. The hedonic calculus, running quietly behind every tap.

This optimism that morality could be boiled down to a simple equation birthed Bentham’s utilitarianism, the intoxicating belief that goodness could be engineered through strict calculation. But every formula for happiness hides a basic human bias: who defines “pleasure”, who measures “pain”, and whose data is being left out?

In Psycho-Pass, that centuries-old logic has been taken to its extreme. The Sibyl System is Bentham’s calculus given teeth, a machine that measures psychological pain to prevent moral harm. The world of Psycho-Pass doesn’t ask “what is right?” but instead, “how efficiently can rightness be measured?”

It’s a question that lingers in my mind like a ghost. If we can quantify morality, then what part of it stops being moral? I already feel the pull of this question every time an app tells me how productive or healthy I’ve been. The numbers are comforting, but are they starting to decide for me? Am I allowing an algorithm to blur the difference between being good and merely appearing to be?

2. What if the greater good only feels good from far away?

Bentham’s moral arithmetic was, for all its faults, meant to free us from superstition. Later, his student John Stuart Mill refined utilitarianism by arguing that some pleasures are “higher” than others, the joy of creating art being more noble than the thrill of a third can of Monster. This was the first update to the moral framework: quality matters just as much as quantity.

But this raises the question: who decides which pleasures matter more? That single question exposes the hidden judge inside the framework. In Psycho-Pass, the judge is a machine, one that assumes virtue can be calculated, and that it is uniquely qualified to perform the calculation better than any human could.

Why is efficiency so seductive?

The power of utilitarianism is its simplicity. If morality is an equation, then we can engineer goodness. This is consequentialism made easy: the right act is always the one that produces the best outcome for the most people.

It’s a comforting thought; it feels rational. Our governments think so too, and we can see it in the way they balance public spending or decide which innovations to fund. Economists use QALYs (Quality-Adjusted Life Years) to give human health a number. And as we sprint deeper into the AI-powered era, ethics boards at AI companies are doing something similar, weighing our privacy against convenience, our autonomy against security.
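For a sense of how that number works, here’s the core QALY arithmetic, with figures I’ve invented for illustration: a QALY is years of life gained multiplied by a quality-of-life weight, and funders compare interventions by the cost of each QALY they buy.

# A minimal QALY comparison. The formula is the standard one;
# the years, weights and costs below are invented for illustration.
def qalys(years_gained: float, quality_weight: float) -> float:
    """quality_weight runs from 0 (death) to 1 (full health)."""
    return years_gained * quality_weight

treatment_a = qalys(years_gained=4.0, quality_weight=0.6)  # 2.4 QALYs
treatment_b = qalys(years_gained=2.0, quality_weight=1.0)  # 2.0 QALYs

# Funding bodies then rank interventions by cost per QALY gained.
print(20_000 / treatment_a)  # treatment A: ~8,333 per QALY
print(12_000 / treatment_b)  # treatment B: 6,000 per QALY, the "better buy"

Two extra years in full health and four extra years at 0.6 quality come out almost identical on paper. That’s the hedonic calculus wearing a lab coat.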

Our brains seem wired to work this way. The psychologists Joshua Greene and Jonathan Haidt describe our moral reasoning as a conversation between two systems: one emotional, one calculating. Their research suggests that we’ve evolved to run our own hedonic calculus in the background without even thinking about it.

But reason shouldn’t be mistaken for wisdom. A logic that can justify anything will rarely tell you where to stop.

When does “the greater good” stop being good?

In the world of Psycho-Pass (heads up: there’s a light thematic spoiler here, but no plot details), the state claims to protect the many by controlling the few. It’s an authoritarian application of utilitarianism: “the greatest good for the greatest number” has been turned into “stability at any cost.”

We see the same Faustian bargain being struck in many real societies. Predictive policing, credit scoring, wellness metrics: each promises fairness through data. But it only works by stripping away context. Fairness becomes a function, not a feeling; justice becomes whatever the model is able to measure. Somewhere in this process, humanity is quietly removed. In our pursuit of fairness, have we designed systems that no longer need to understand us, only to sort us?

The moral problem isn’t whether these systems work; it’s what they take away: human judgement, compassion, even the ability to make a mistake. It makes me wonder: a world without that chaos may be peaceful, but would it still be human?

Are we just afraid of disorder?

Thomas Hobbes thought we would accept almost any tyranny to escape chaos. There are echoes of Leviathan within Psycho-Pass; it’s hard to miss that fear of chaos still flickering within us. But when we surrender our moral agency to a machine, we’re not transcending our instincts, we’re just outsourcing them.

The appeal is understandable. Order certainly feels safer than uncertainty. Yet a part of me wonders whether the stability we so desperately crave is nothing more than the silence of no one daring to think aloud.

When does moral calculus become moral blindness?

The biggest trapdoor in utilitarianism has always been the greater-good argument. Once we start to define morality by its outcomes, almost any cruelty can be justified. History is full of examples: public health experiments, wartime decisions, surveillance laws, all rationalised by “the needs of the many”.

This is what, for me at least, makes Psycho-Pass so unsettling. It’s not showing utilitarianism failing through incompetence; it’s showing it working too well. Bentham’s hedonic calculus works: peace has, theoretically at least, been achieved. But has freedom simply been removed in the pursuit of balance? Sacrificed quietly in the name of order, written off as a negligible cost?

Perhaps the danger of the greater good is not that it demands sacrifice, but that it makes any sacrifice feel reasonable. 

3. If the algorithm already knows your choice, did you ever have one?

The world of Psycho-Pass runs on prediction. Every impulse, mood or decision is monitored, mapped and quantified. It’s a society that assumes human behaviour isn’t even remotely mysterious: it’s just physics with faces.

This is determinism in its purest form: the idea that every event, including our innermost thoughts and choices, is the inevitable result of prior causes. Given enough data, we could forecast our own futures. It’s a rather comforting kind of horror, the notion that nothing is random, that even our mistakes are predetermined.

If science can predict for us, where does that leave our choices?

The roots of determinism burrow deep into Western philosophy. Baruch Spinoza saw human action as a chain of necessity; then Newton came along and gave that chain gears and cogs. Eventually, psychology caught up too: Pavlov’s dogs salivated on cue, and Skinner’s pigeons learned to peck keys in return for food. Why appeal to freedom when conditioning delivers better results?

Psycho-Pass takes this logic to its conclusion: a civilisation that has replaced justice with prediction. (Light thematic spoilers ahead!) 

Crime is prevented not by morality, but through measurement. 

But we don’t need a fantasy to imagine this. Predictive-policing software already flags “high-risk” individuals; recommendation engines quietly shape our desires until preference and suggestion become indistinguishable.

The code doesn’t care if we have a choice, it just cares that we behave. 

Does neuroscience show that free will is an illusion?

Even neuroscience has flirted with determinism. In the 1980s, Benjamin Libet’s experiments suggested that the brain initiates decisions milliseconds before we become consciously aware of them. Later research refined the details, but the stark discomfort remains: if my brain is moving before “I” decide to, then who’s really in charge?

Modern cognitive science frames this as predictive processing: the brain as a probability engine, constantly anticipating sensory input to minimise surprise. Choice, in this model, is not a spontaneous act but a statistical correction.
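As a toy illustration (my own sketch, not anything from the show or the research literature), the whole loop fits in a few lines: hold a prediction, observe, and treat the mismatch as an error to be corrected.

# Toy predictive-processing loop: hold a prediction, compare it with
# what actually arrives, and correct by a fraction of the surprise.
learning_rate = 0.3
prediction = 0.0  # prior expectation about some sensory signal

for observation in [1.0, 1.0, 0.9, 1.1, 1.0]:
    error = observation - prediction       # the surprise
    prediction += learning_rate * error    # the statistical correction
    print(f"prediction {prediction:.2f}, error {error:+.2f}")

Run it and the predictions converge on the signal. “Choice”, on this picture, is just that correction step scaled up.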

It’s hard not to see the echoes within Psycho-Pass. When the Sibyl System calculates a citizen’s “crime coefficient”, it’s behaving like a predictive brain scaled up to the size of a society. Both are obsessed with error rates. Both treat unpredictability as pathology.

Perhaps what’s most unnerving isn’t that the system controls us, but that it thinks the way we already do.

What does existentialism demand of us?

If determinism is just the science of inevitability, then existentialism is rebellion against it. Jean-Paul Sartre called freedom a curse that we cannot escape; we are “condemned to be free”. No algorithm could ever carry our responsibility for us. To deny that responsibility is to live in bad faith, pretending that we are merely puppets to escape the weight of our choices. 

Where behaviourism offers us some degree of control, existentialism demands we take ownership. It insists that meaning is not something to be discovered, but something that we create moment by moment simply through the act of choosing. 

In Psycho-Pass, this becomes a kind of moral fault line (light thematic spoiler ahead!). 

The system claims to eliminate uncertainty, but in doing so, it’s erasing any possibility of genuine morality. 

If your future is simply predetermined by data, you can never truly act ethically. You can only comply. 

Why would we miss free will?

From a purely utilitarian perspective, determinism is nicely efficient: a predictable society is a stable society. But psychologically, it’s devastating.

We humans are storytelling creatures. Our sense of self depends on our agency; remove that, and our narratives collapse. Without choice, there is no “I”.

Maybe this is why we romanticise rebellion, even fictional rebellion. A defiant act against a deterministic system isn’t just good drama; it’s a small reclamation of our selfhood. We need the illusion of freedom, or we stop being people and become processes.

A world that explains everything leaves no room for meaning. Perhaps the next moral question, then, isn’t how to predict behaviour, but whether the act of prediction itself could ever be moral.

4. Who sins when the code does?

If prediction removes choice, the next question is inevitable: who carries the responsibility when the choosing is done for us?

In Psycho-Pass, morality has been industrialised. The system isn’t simply enforcing the law; it’s defining it. Right and wrong become outputs, not arguments.

To me, that feels uncomfortably familiar. Every time a platform, model or metric decides something for us, we’re performing an act of faith: faith that efficiency is the same as innocence.

What does duty require when systems decide?

Where utilitarianism focuses on measuring consequences, Immanuel Kant started from the opposite premise. For him, morality lived in duty: act only on the maxim you could will to become a universal law.

Lying might help you once, but if everyone lied, all of the time, then the truth collapses. So, you probably shouldn’t lie. 

His test, the categorical imperative, is elegant. But it’s also demanding almost to the point of impossibility: it requires autonomy. To act morally is to act purely from duty, not from fear or reward. That makes obedience, however well-intentioned, a moral failure.

This is the quiet defiance that’s woven throughout Psycho-Pass: the struggle to remain a moral agent inside a machine that rewards obedience. 

When we let code decide what’s right, we haven’t solved ethics. We’ve just subcontracted it out.

How does obedience turn ordinary people into dangerous people?

The political theorist Hannah Arendt saw this with terrifying clarity at the trial of Adolf Eichmann. Her phrase, “the banality of evil”, described not obvious, monstrous hatred, but thoughtless routine. Cruelty carried out by people who claimed that they were just doing their jobs. 

Our algorithms repeat this pattern inside their code. Loans rejected, paroles denied, job applications discarded; each step justified by the authority we have given the system. The most alarming part? Because no single person pushes the button, no single person feels responsible. 

Psychologists call this the diffusion of responsibility. Milgram showed how easily authority dulls our empathy; a lab coat became a moral shield. Replace the lab coat with an AI dashboard and the effect is the same.

Arendt warned us that evil often wears the face of bureaucracy. Psycho-Pass updates the warning with neon lights and disarming holograms.

Have we already created moral machines?

We’re already using moral algorithms every day; we just rarely call them that. Self-driving cars weigh the value of lives in milliseconds, running real-time versions of the trolley problem. Healthcare AIs rank patients by survival probability. Recommendation engines decide which truths deserve oxygen.

None of these systems is inherently malicious; they are only designed to help, after all. But as Arendt pointed out, most horrors begin with that same premise: helping. The danger isn’t malevolence, it’s delegation. Are we just mistaking automation for absolution?

Who is responsible when the code causes harm?

In Kantian terms, a machine can’t have moral worth because it lacks intention; it just follows instructions with no understanding of them. So when harm occurs, guilt should be placed further back along the chain: on the designer, the user, the culture that demanded a shortcut in the first place.

But the chain is so tangled that accountability dissolves before it can be placed anywhere. Programmers follow metrics set by managers, who follow incentives set by the board, who are just trying to extract value from market demand.

Responsibility has turned into vapour, steam escaping into the atmosphere. 

That’s the world Psycho-Pass mirrors: one where the algorithm sins repeatedly, yet no one steps forward to repent. Responsibility disperses through the system until it feels like no one is at fault at all.

It’s not really dystopian; it’s almost diagnostic. A mirror held up to our own design habits. This is what moral reasoning looks like when it’s absorbed into a supply chain: efficient, traceable and empty.

When conscience is automated, does innocence become a feature, not a virtue?

5. What if we’ve built a prison that we’re proud to live inside?

Not content with being the father of utilitarianism, Bentham also sketched out the Panopticon in 1791. A model of efficiency, it was a circular prison with one unseen guard who could watch every cell. The brilliance of the Panopticon isn’t architectural, it’s psychological: once the prisoners believed they could be watched at any time, they began to watch themselves.

A little over two centuries later, Michel Foucault saw that design as the blueprint for modern power. He argued that chains or whips were no longer necessary – only visibility. People internalise the gaze of authority so much that discipline becomes habit. You end up behaving not because you are forced to, but because you have learned to fear exposure.

How does this 18th-century prison run our feeds in 2025?

Psycho-Pass takes Bentham’s dream and digitises it (light thematic spoilers ahead, but no plot detail). 

Citizens live under near-constant psychological monitoring; deviation is detected before it becomes action. Punishment becomes redundant: the real triumph of the system is the self-correction it forces in its subjects.

Foucault would have recognised this instantly. The Sibyl System doesn’t rule through violence but through visibility.

In the real world, the same logic can be heard humming through our devices. Every tap, swipe or pause is recorded, quantified and ranked. Our search histories build portraits of who we are and what we desire that are more intimate than confession. Data isn’t just being collected; it’s being reflected back at us until we start to curate ourselves for it.

What does the digital eye really want from us?

The sociologist Erving Goffman once described everyday life as theatre: a front stage for performance and a back stage for privacy. Social media has collapsed the two together. Our audiences are invisible and algorithmic, so we rehearse constantly: editing, filtering, deleting, posting again.

Psychologists call this surveillance internalisation: the point at which external monitoring becomes self-monitoring. We pre-empt the judgement before it arrives.

The citizens of Psycho-Pass guard their “crime coefficients.”

We guard our reputations, our metrics, our brands. Visibility becomes morality; being seen as good replaces being good.

What does visibility buy, and what does it cost?

Our 21st-century panopticon doesn’t imprison us, it monetises us. Attention is the new currency, and we’re trading our privacy for participation. Every like, share and view is a micro-judgement from the crowd. The system has no need to coerce us; it just needs to give us the numbers.

Foucault wrote that power is at its most efficient when it becomes invisible. Today, it’s invisible because it looks like freedom. We carry the panopticon in our pockets, checking in with it willingly.

The moral twist this creates is as subtle as it is devastating. In a world where being seen is equated with being good, invisibility starts to feel like guilt. We confess our data daily, not under duress but under the promise of connection.

Psycho-Pass only amplifies what we already accept: to be legible is to be safe, to be opaque is to invite suspicion.

The panopticon no longer needs bars or guards, only metrics and mirrors.

6. When the system breaks, is rebellion the only moral act left?

When we’ve automated every moral rule and outsourced every ethical question, what is left for the human spirit to do but rebel? That’s where Psycho-Pass inevitably ends up. Not in chaos, but in confrontation: between people who still believe that meaning can be measured, and those who have stopped pretending it can be.

Albert Camus called this confrontation the absurd: the instant our hunger for meaning crashes against a universe that has no intention of offering us any. We demand justice, and the universe shrugs in response. We build systems to make sense of it, and the systems crumble under their own contradictions.

What survives after the algorithmic god dies?

Camus’ answer to the absurd wasn’t despair, but defiance. In “The Myth of Sisyphus”, he imagines a man condemned to roll a boulder up a hill forever, knowing it will always roll back down. Sisyphus recognises the futility and keeps pushing. He smiles, not because he’s happy, but because he refuses to surrender to the absurd.

This rebellion doesn’t grant him victory, though. It grants him something far more precious: dignity. The decision to act morally in a world that no longer rewards it. It’s the kind of morality Psycho-Pass gestures towards: not one enforced by data, but one discovered through acts of defiance.

Can rebellion be ethical?

Camus saw rebellion as a form of moral clarity: the moment we stop lying to ourselves about the world’s indifference and decide to act anyway. But he left one question hanging in the air: once the illusions are gone, what do we build in their place?

That’s where Friedrich Nietzsche steps in. Not to offer comfort, but to stare into that same void and ask what could come from it. 

When Nietzsche declared that “God is dead”, he wasn’t celebrating the loss of faith; he was describing the silence that follows when inherited meaning collapses. Without that anchor, he warned, we risk falling into nihilism, where nothing feels worth doing. But he also saw a way through it: the Übermensch. Not a tyrant, but a creator. Someone with the courage to invent new values rather than inherit old ones.

Seen through that lens, the Sibyl System is no more than a parody of transcendence: a machine claiming to have solved morality by removing human error. It promises freedom from uncertainty, but what it really delivers is obedience disguised as peace.

Nietzsche might argue that real transcendence requires the bravery to live without guarantees, to keep choosing, even when no higher power exists to validate the choice. 

Maybe that’s where rebellion becomes ethical. Not in destroying what already exists, but in daring to create something more human to replace it. An act of authorship rather than destruction.

What does the absurd hero look like today?

In a way, we have all become Sisyphus, pushing our own digital boulders: refreshing the feed, chasing the next measurable sign of our own worth. We know the system is absurd, yet we engage with it anyway. Maybe rebellion today looks less like overthrowing the system and more like refusing to let it flatten us.

Camus called this “lucid revolt”. It’s not optimism, but clarity: seeing the machine exactly as it is, and still choosing to act with compassion. This is the quiet philosophy that has outlived every system: meaning created not through control, but through care. 

When meaning collapses, morality becomes an act of will. The most radical thing left to do is to stay human. 

7. What if the future we fear is already here?

Psycho-Pass was never really about the future. Like all good dystopias, it’s about the present: the subtle ways we are already letting data arbitrate our morality. Its world only looks like science fiction because it admits what we prefer to deny: that we’ve been quantifying the soul for years.

Every algorithmic decision (who gets a mortgage, who gets bail, which stories trend) is a moral calculus pretending to be neutral. Each model carries hidden ethics, quietly shaping our view of what’s right and what’s wrong. We don’t live under the Sibyl System, but we are absolutely bound by its logic.

What does thought policing reflect back at us?

Camus said that rebellion begins with a single word: No. Not a word of destruction, but of awareness. A refusal to sleepwalk. Maybe that’s the real test of morality in the age of algorithms: our willingness to stay awake.

To notice when measurement starts masquerading as meaning. To remember that efficiency could never be the same as empathy. To choose ethics, not just outcomes, even when outcomes are the easier sell. 

Psycho-Pass teaches that vigilance through discomfort. It doesn’t resolve its questions; it leaves us inside them, turning them over. Where have I traded judgement for convenience? Where have I let a system think for me?

Why does philosophy still matter?

The value of stories like this isn’t in their conclusions, but in their interruptions. Philosophy has always been humanity’s debugging tool, the thing that forces us to ask why before we run the program. In a time when our moral frameworks are being written into code, that habit of questioning might be the last safeguard we have.

Maybe the best philosophy course was never in a university lecture hall. Maybe it’s in the fictions that make us uneasy about our obedience, the stories that remind us that morality isn’t compliance, it’s courage. 

The future doesn’t need more systems that tell us what to think. 

It needs more people who remember how. 
