October 24, 2025

What Happens When Your Brain Stops Trusting Itself?

Author: isnathansmart

Have you ever remembered a conversation so clearly, only to later learn that it never happened? 

I have. Many times throughout my life. I have been certain, to the point of near insanity, that I have spoken to someone about something, only to find out later that I had merely meant to have that conversation. How can you trust a brain that lies to you so well? How fragile is the boundary between what we think is real and what’s actually real? 

When your brain starts doubting its own perception, perhaps it’s not malfunctioning; it’s revealing just how fragile reality really is. 

What if reality isn’t something we see, but something we build?

I think we all like to believe our brain is a camera. We open the shutter, and it records what’s out there. However, neuroscience has spent the better part of the last two decades dismantling this idea. 

Neuroscientist Karl Friston’s Free Energy Principle (2010, Nature Reviews Neuroscience) describes our brains as prediction engines: they continually generate expectations about what they will encounter, then adjust those expectations when the world throws a curveball.

In other words, our grasp on reality is less about passively absorbing facts and more about navigating a shifting landscape between what we expect to see and the evidence of what’s actually there.

The philosopher Andy Clark later called this a controlled hallucination: perception is just the brain’s best guess, constantly being tested against the sensory evidence it’s being provided. 

In this view, what you see isn’t really reality itself, but your mind’s ongoing simulation of it. When that simulation matches the world, everything feels stable. When it slips through things like déjà vu, false memories or derealisation, you start to glimpse the man behind the curtain of your consciousness. 
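For the code-minded, here’s a toy illustration of that predictive loop. It’s only a sketch of the intuition, not Friston’s actual mathematics; the function name and numbers are mine. A “brain” keeps a guess about a hidden quantity and nudges it by a fraction of the prediction error each time a new, noisy observation arrives.

```python
import random

# Toy predictive-processing loop (illustrative only): the "brain" holds a
# guess about the world, compares it to each noisy observation, and updates
# by a fraction of the prediction error -- the "surprise".
def perceive(true_value=10.0, noise=2.0, learning_rate=0.3, steps=10):
    belief = 0.0  # the brain's starting model of the world
    for step in range(steps):
        observation = true_value + random.gauss(0, noise)  # noisy sensory evidence
        prediction_error = observation - belief            # how wrong the model was
        belief += learning_rate * prediction_error         # adjust the model, not the world
        print(f"step {step}: saw {observation:5.1f}, now believe {belief:5.1f}")
    return belief

perceive()
```

When the prediction errors stay small, the model feels like “reality”; the moments this essay cares about are the ones where the errors refuse to shrink.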

When Can We Trust Our Memories?

The Mandela Effect

Take the “Mandela Effect”: if you ask a roomful of people whether or not they remember Nelson Mandela dying in prison, you might be shocked to find that many of them believe he did, despite an abundance of evidence to the contrary. You could ask about the Berenstain Bears, or “Berenstein,” as some swear it was called. 

Every now and again, an entire group of people will confidently remember something that never happened. The Mandela Effect, named after that shared false memory of Mandela dying in prison decades before his actual death, is the perfect cultural demonstration of how memories and expectations blend.

Loftus and False Memories

Cognitive psychologist Elizabeth Loftus, who pioneered research on false memories, has shown in dozens of experiments that our recollections are reconstructive, not archival. When we recall something, we often fill in the gaps with inferences and social cues.

In the 1990s, Loftus’s “Lost in the Mall” study found that around a quarter of participants accepted an entirely fake childhood story as true after it was repeatedly suggested. Brain-imaging work since then (Schacter et al., 2011, Trends in Cognitive Sciences) shows that real and false memories recruit almost identical networks in the hippocampus and prefrontal cortex, suggesting that our brains can’t easily tell the difference between remembering and imagining.

The Mandela Effect scales this private quirk of ours into a collective illusion. Social reinforcement compounds the false trace until we truly believe that it’s a historical fact. It serves as a reminder that memory, like perception, depends on predictive processing: we remember what should have happened according to our model of the world. When those models end up synchronising across communities, that shared error then becomes the truth, and once again we find ourselves having to question how much we should trust our own brains. 

If everything can be faked, how do we know what’s real?

Digital Misinformation and Illusory Truth

The digital world has taken these factory-standard vulnerabilities of ours and scaled them up. Our ancestors evolved in small groups where seeing was believing. But today? The pixels lie.

In 2018, researchers at the MIT Media Lab analysed 126,000 stories shared on Twitter (now X). Published in Science, their study found that, alarmingly, fake news spread further, faster, and deeper than anything based in fact, and that humans, not bots, were responsible. Falsehoods apparently spark stronger emotions in us (such as surprise or disgust), which in turn drive sharing. The reward becomes a new social currency: being first, not being right.

Psychologists call this the illusory truth effect. In experiments by Lisa Fazio (2015, Journal of Experimental Psychology), repeating a claim, even one you know is false, makes it feel more true. Familiarity ends up standing in for accuracy. On social media, repetition is almost infinite, so belief tends to follow exposure rather than evidence.

Deepfakes have magnified this crisis too. Synthetic voices and faces exploit the cues we instinctively rely on: eyes, micro-expressions, tone. As humans, we make very unreliable deepfake detectors. The very existence of convincing fakes has fuelled what legal scholars call the liar’s dividend: the ability of a real offender to dismiss genuine evidence as fabricated. When everything can be faked, even authenticity itself starts to feel suspect.

It’s not only human minds that struggle to separate truth from invented fiction. Modern AI models are infamous for “hallucinating”: generating plausible but entirely fake images, stories or facts. When even our machines can’t tell the real from the artificial, the challenge for our own minds becomes all the more urgent.

The result is a kind of epistemic anxiety: a culture-wide inability to trust our brains, even when they whisper that this looks real.

The glitch of derealisation: What happens when our brain’s model fails?

Of all these glitches, derealisation offers the clearest glimpse of the man behind the curtain: the brain’s model keeps running, but it stops feeling real.

A 2020 Frontiers in Psychiatry study found altered activity in the orbitofrontal cortex and caudate nucleus in people experiencing derealisation. These are regions that integrate our predictions about the body and the external world. When the coordination between them fails, the brain still produces its internal model, but no longer believes it completely. This leads to the awareness that you are seeing a model of the world rather than the world itself, a phenomenon some philosophers call metacognitive vertigo.

Maybe that wrongness isn’t an error at all. Perhaps it’s your brain performing maintenance, checking its own internal map against the terrain. So what happens when this maintenance fails?

When this predictive coding falls apart, “realness” falls away too. The room you are in still has shape and colour, but it has lost its emotional gravity. It would be wholly untrue to say you’ve stopped perceiving. Instead, you’ve stopped trusting your brain to interpret what it’s seeing correctly.

What Does Doubt Cost Us?

Doubt is metabolically expensive. Functional MRI studies of cognitive dissonance, such as the 2009 paper by van Veen et al. in Nature Neuroscience, have shown that holding contradictory beliefs activates the dorsal anterior cingulate cortex and anterior insula, regions that detect and resolve our internal conflicts.

Our brains take up around 20% of the body’s energy despite making up around 2% of its mass (Clarke & Sokoloff, 1999). So when uncertainty persists, the demands of neural processing increase.

EEG studies, including Polich’s 2007 review in Clinical Neurophysiology and follow-ups by Käthner et al. (2014), show that this additional mental workload and fatigue are reflected in the amplitude of the P300 wave, a signature of attention and cognitive updating.

That energy drain goes some way to explaining why prolonged doubt feels so exhausting. We’ve evolved to prefer coherence over accuracy because, the science suggests, coherence is cheaper. It’s easier to accept a plausible story than to burn valuable energy reserves questioning it. Over time, this metabolic shortcut solidifies into bias.

How Do Our Brains Shape Our Beliefs?

When our predictive models start to crack, it’s not just perception that suffers; belief starts to feel the strain too. The same neurological wiring that misfires in déjà vu seems to reappear across ideology and identity.

As Daniel Kahneman describes in Thinking, Fast and Slow, our brains rely on two thought systems: one fast and intuitive, the other slower and more deliberate. The fast system (System 1) is especially prone to cognitive biases like confirmation bias and anchoring, shortcuts that favour speed and coherence over accuracy. This helps us make quick decisions, but often at the expense of rationality and objective truth.

The psychologist Ziva Kunda’s concept of motivated reasoning (1990) describes how we process new information through a filter of existing beliefs, rewarding coherence over anything that contradicts them. Neuroimaging by Jonas Kaplan and colleagues (2016, Scientific Reports) showed that “belief-threatening” evidence activates the brain’s conflict and emotional-regulation areas more than its reasoning networks. Our brains literally work to defend our convictions.

The mind, like the body, seeks equilibrium. It’s not malice; it’s just homeostasis.

Can We Live With Uncertainty?

Perhaps this means that trusting your brain doesn’t mean believing everything it tells you. Maybe it means knowing when it’s likely to be lying to you.

Socrates (through Plato) would probably agree. “I know that I know nothing” was said not as despair, but as discipline. In philosophy, this is referred to as epistemic humility. I think it’s fair to assume that a biologist might call this adaptation. 

In a 2007 Nature Neuroscience study, Tim Behrens and colleagues showed that people adjust how quickly they learn based on how volatile their environment is. Quick certainty only makes sense in a stable world; when the world is unpredictable, cautious doubt wins. The anterior cingulate cortex tracks this volatility around us. Doubt, if tuned properly, is a survival trait: it keeps our ability to learn flexible.
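Here’s a rough sketch of that idea in code. It’s my own simplification, not Behrens’ actual Bayesian model: the learner tracks how large its recent surprises have been and learns faster when the world seems volatile, slower when it seems stable.

```python
# Simplified volatility-tracked learning (an illustration, not the Behrens
# et al. model): big recent surprises push the learning rate up, a stable
# world lets it fall back down.
def update(belief, observation, volatility):
    error = observation - belief
    volatility = 0.9 * volatility + 0.1 * abs(error)   # running size of recent surprises
    learning_rate = min(1.0, 0.1 + 0.2 * volatility)   # volatile world -> learn faster
    belief += learning_rate * error
    return belief, volatility

belief, volatility = 0.0, 0.0
for obs in [1.0, 1.1, 0.9, 1.0, 5.0, 5.2, 4.8]:  # the environment shifts midway
    belief, volatility = update(belief, obs, volatility)
    print(f"saw {obs:.1f} -> believe {belief:.2f} (volatility {volatility:.2f})")
```

The point is only that a well-tuned learner doesn’t hold one fixed level of confidence; its doubt expands and contracts with the world.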

So when certainty begins to fray, perhaps it’s not a sign of decay. Maybe it’s our mind’s way of performing quality control. Mistrust might not be failure at all; it might just be maintenance.

What Happens When We Lose Our Shared Truths?

If our senses are indeed as fallible as the evidence seems to be suggesting, if our memories are reconstructive, and our information environments are corruptible, how on earth should we live?

Philosopher Lee McIntyre, in his book Post-Truth (2018, MIT Press), claims we are living through an epistemic crisis: the collapse of our shared standards for what counts as real. People no longer feel obliged to align their beliefs with evidence; belief itself signals belonging.

Hannah Arendt saw where that could lead. In The Origins of Totalitarianism (1951), she warned that when the line between fact and fiction blurs, people often retreat into cynicism. To her, the ideal subject of authoritarian rule is not the zealot, but the person who believes nothing can be true. It is cynicism, not zealotry, that creates the opening for control.

Our modern algorithmic culture accelerates that cynicism. Social feeds reward outrage and confirmation, and we each occupy a finely personalised reality tunnel. When the collective “we” can’t agree on basic facts, cooperation collapses into tribal performance.

How Can We Learn To Trust…But Not Too Much?

So how are we meant to trust our brains in a world that seems designed to confuse them? By recognising that trust can never mean certainty. Instead, it has to mean calibration: a confidence that’s proportionate to the evidence.
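If calibration sounds abstract, a plain Bayesian update makes it concrete. This is a textbook sketch rather than anything from the studies above, and the numbers are invented: a belief should move exactly as far as the strength of the new evidence warrants, and no further.

```python
# Minimal Bayesian update as a picture of calibrated belief: confidence
# shifts in proportion to the evidence. All numbers here are illustrative.
def update_belief(prior, likelihood_if_true, likelihood_if_false):
    numerator = prior * likelihood_if_true
    denominator = numerator + (1 - prior) * likelihood_if_false
    return numerator / denominator

# Start 70% confident a claim is true, then see evidence that is twice as
# likely to appear if the claim is true as if it is false.
posterior = update_belief(prior=0.70, likelihood_if_true=0.8, likelihood_if_false=0.4)
print(f"confidence after the evidence: {posterior:.0%}")  # roughly 82%, not 100%
```

Moving to about 82% rather than leaping to certainty is the whole discipline: the evidence earned that shift, and nothing more.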

Epistemic humility doesn’t require paralysis; it invites curiosity over conviction. The philosopher Bertrand Russell observed, “Fools and fanatics are always so certain, and wiser people are always so full of doubts.” Uncertainty, when handled well, is a sign of intellect, not weakness.

Recent behavioural work by Gordon Pennycook and David Rand (2021, Trends in Cognitive Sciences) supports this: prompting people to reflect on accuracy before sharing appears to reduce the spread of falsehoods.

Practical steps to build epistemic humility

  • Pause before sharing information that evokes a strong emotion. 
  • Ask yourself, “How do I know this?”
  • Seek out reputable alternative sources for verification. 

To live with the glitch is to stop demanding that the map IS the territory. To stop punishing our brains for guessing and instead observe how they update when surprised. The goal isn’t perfect trust but a resilient doubt: a relationship with uncertainty that neither collapses into cynicism nor retreats into denial.

What If Noticing the Glitch Is the Point?

Reality is only as trustworthy as the mind perceiving it. When that trust falters, our instinct is to panic, to force coherence back into place. But certainty has never been real; it was just an evolutionary convenience.

When your brain stops trusting itself, it doesn’t always mean it’s breaking down. It’s waking up. The ability to notice the glitch is proof of your awareness, not a failure. 

So perhaps we should trust our brains a little more. Not to be right, but to keep questioning. Because noticing, not knowing, keeps the lights on in our consciousness. 
