Is the brain a computer?


This topic is contributed to or inspired by the posts of TheVat and ThusSpoke in the Philosophy Updates thread. Write4U's reply has also been added --> go here to read it.

Intro line from the Robert Epstein article "The Empty Brain": Your brain does not process information, retrieve knowledge or store memories. In short: your brain is not a computer.

- - - - - - - - - - - -

But then we have people like Stephen Wiltshire.

Stephen Wiltshire

video link --> Who is Stephen Wiltshire?

- - - - - - - - - - - -

(divided opinion) Is Your Brain A Computer?

EXCERPTS: Today, all these years later, experts are divided. Although everyone agrees that our biological brains create our conscious minds, they’re split on the question of what role, if any, is played by information processing—the crucial similarity that brains and computers are alleged to share.

[...] Michael Graziano, a neuroscientist at Princeton University, echoes that sentiment. “There’s a more broad concept of what a computer is, as a thing that takes in information and manipulates it and, on that basis, chooses outputs. And a ‘computer’ in this more general conception is what the brain is; that’s what it does.”

But Anthony Chemero, a cognitive scientist and philosopher at the University of Cincinnati, objects. “What seems to have happened is that over time, we’ve watered down the idea of ‘computation’ so that it no longer means anything,” he says. “Yes, your brain does stuff, and it helps you know things—but that’s not really computation anymore.”
Yup. A classic case where a debate hinges on little more than equivocation over the definition of the word in question.

(It's virtually inevitable that this happens when one stops to realize that the vast majority of speech and concepts are built of abstracted analogies and metaphors. There have got to be at least a half dozen abstracted metaphors in my four sentences: "hinges", "built", and "concepts" are all abstractions of simpler concepts.)
Intro line from the Robert Epstein article "The Empty Brain": Your brain does not process information, retrieve knowledge or store memories. In short: your brain is not a computer.
How does Epstein manage to justify this statement? How convoluted must his definition of 'information processing' be in order to draw this conclusion?
I have heard assertions that knowledge/memories are stored off-site, in a supernatural 'mind' so to speak, reducing the brain to a sort of interface. But even an interface is an information processor, and the proposal doesn't stand up to scrutiny.
"Your brain isn't a computer" is an obvious statement, of course. It also doesn't store and retrieve memories like a computer does. Everything is distributed across many areas, and it's more accurate to say that memories are recreated each time, which is nothing like a computer.
How does Epstein manage to justify this statement?
Well now my curiosity has been piqued...

OK. The article has some elaboration - but not until halfway through. The "dollar bill" demonstration, and his theory of it, are at the crux. It's too long to do it justice here. Scroll down to the illustrations of the dollar bills and read the relevant paragraphs.

The very crux of the crux is:

No image of the dollar bill has in any sense been ‘stored’ in the subject's brain. The subject is not recalling it; they essentially have to "relive the experience" of having seen a dollar bill before.
"Your brain isn't a computer" is an obvious statement, of course. It also doesn't store and retrieve memories like a computer does. Everything is distributed across many areas, and it's more accurate to say that memories are recreated each time, which is nothing like a computer.
His assertion is deeper than that. He's not simply talking about the mechanical layout and processes being different, or where physically it's being retrieved from. See previous post.

Memory "recall" is a misnomer. A better description might be "simulacrum reconstitution"*.
(simulacrum, because what we get back is but a shadow of what went in; reconstitution, because it has to be rebuilt from building blocks.)

*© DaveC426913, Dec'23
How does Epstein manage to justify this statement? How convoluted must his definition of 'information processing' be in order to draw this conclusion?
I have heard assertions that knowledge/memories are stored off-site, in a supernatural 'mind' so to speak, reducing the brain to a sort of interface. But even an interface is an information processor, and the proposal doesn't stand up to scrutiny.

Though it sounds akin to the general territory, I don't know whether Epstein's view officially factors into externalism (also Externalism About the Mind) or not.

Recalling an interview by Riccardo Manzotti some years ago (I can no longer find that particular one), he seemed to be radically contending that instead of memories, we have the original experiences of events in the outer world still circulating around in our heads for years, which combine to create novel thoughts.

I suspect some (if not all) of these externalism movements are related to panpsychism, which seems borne out by the second article excerpt below.

Interview with Riccardo Manzotti on "The Spread Mind" book

Our perception of the world is not like a movie projected inside the brain, but rather we are constituted by the world itself, challenging the idea of internal representations.

[...] Dreams, memories, and hallucinations are all cases of delayed and recombined perception, challenging the traditional understanding of these phenomena.

[...] According to Riccardo Manzotti, his theory predicts that it is possible for people to have a visual experience through a mechanoreceptor on the skin, which challenges traditional understanding of sensory perception.

- - - - - - - - - -

Do We Have Minds of Our Own?

EXCERPT: . . . Leibniz struggled to accept that perception could be explained through mechanical causes—he proposed that if there were a machine that could produce thought and feeling, and if it were large enough that a person could walk inside of it, as he could walk inside a mill, the observer would find nothing but inert gears and levers. “He would find only pieces working upon one another, but never would he find anything to explain Perception,” he wrote.

Today we tend to regard the mind not as a mill but as a computer, but, otherwise, the problem exists in much the same way that Leibniz formulated it three hundred years ago.

[Tim] Parks’s skepticism stems in part from his friendship with an Italian neuroscientist named Riccardo Manzotti, with whom he has been having, as he puts it, “one of the most intense and extended conversations of my life.”

Manzotti, who has become famous for appearing in panels and lecture halls with his favorite prop, an apple, counts himself among the “externalists,” a group of thinkers that includes David Chalmers and the English philosopher and neuroscientist Andy Clark. The externalists believe that consciousness does not exist solely in the brain or in the nervous system but depends, to various degrees, on objects outside the body—such as an apple.

According to Manzotti’s version of externalism, spread-mind theory, which Parks is rather taken with, consciousness resides in the interaction between the body of the perceiver and what that perceiver is perceiving: when we look at an apple, we do not merely experience a representation of the apple inside our mind; we are, in some sense, identical with the apple.

As Parks puts it, “Simply, the world is what you see. That is conscious experience.”

Like Koch’s panpsychism, spread-mind theory attempts to recuperate the centrality of consciousness within the restrictions of materialism. Manzotti contends that we got off to a bad start, scientifically, back in the seventeenth century, when all mental phenomena were relegated to the subjective realm. This introduced the false dichotomy of subject and object and imagined humans as the sole perceiving agents in a universe of inert matter.

Manzotti’s brand of externalism is still a minority position in the world of consciousness studies....​
"Your brain isn't a computer" is an obvious statement, of course. It also doesn't store and retrieve memories like a computer does. Everything is distributed across many areas, and it's more accurate to say that memories are recreated each time, which is nothing like a computer.
But you are presenting a very limited profile of the brain, the most complicated biological data processor we can imagine...?

And perhaps it might be more accurate to say that memories of recurring phenomena are "augmented" each time by new synaptic connections, in addition to existing "fixed" memories.
AFAIK, memories in the brain are "fixed" in special memory "pyramidal neurons".

The ability of pyramidal neurons to integrate information depends on the number and distribution of the synaptic inputs they receive. A single pyramidal cell receives about 30,000 excitatory inputs and 1700 inhibitory (IPSPs) inputs. Excitatory (EPSPs) inputs terminate exclusively on the dendritic spines, while inhibitory (IPSPs) inputs terminate on dendritic shafts, the soma, and even the axon. Pyramidal neurons can be excited by the neurotransmitter glutamate,[4][14] and inhibited by the neurotransmitter GABA.[4]
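Those numbers invite an integrate-and-fire cartoon: roughly 30,000 excitatory inputs summed against roughly 1,700 inhibitory ones, with the cell firing when the net drive crosses a threshold. A minimal sketch - the threshold value here is made up for illustration, not a biophysical constant:

```python
def pyramidal_fires(epsps, ipsps, threshold=5.0):
    """Cartoon integration: excitatory minus inhibitory drive vs. a firing threshold."""
    net_drive = sum(epsps) - sum(ipsps)  # EPSPs depolarise, IPSPs hyperpolarise
    return net_drive >= threshold        # True -> the neuron fires
```

Real dendritic integration is nonlinear and location-dependent (spines vs. shafts, as the excerpt notes); the point of the sketch is only that "integration" here amounts to additive bookkeeping.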

Making memories

The researchers observed that new experiences activate sparse populations of neurons in the hippocampus that express two genes, Fos and Scg2. These genes allow neurons to fine-tune inputs from so-called inhibitory interneurons, cells that dampen neuronal excitation. In this way, small groups of disparate neurons may form persistent networks with coordinated activity in response to an experience.
“This mechanism likely allows neurons to better talk to each other so that the next time a memory needs to be recalled, the neurons fire more synchronously,” Yap said. “We think coincident activation of this Fos-mediated circuit is potentially a necessary feature for memory consolidation, for example, during sleep, and also memory recall in the brain.”
Circuit orchestration
In order to form memories, the brain must somehow wire an experience into neurons so that when these neurons are reactivated, the initial experience can be recalled. In their study, Greenberg, Yap and team set out to explore this process by looking at the gene Fos.
First described in neuronal cells by Greenberg and colleagues in 1986, Fos is expressed within minutes after a neuron is activated. Scientists have taken advantage of this property, using Fos as a marker of recent neuronal activity to identify brain cells that regulate thirst, torpor, and many other behaviors.
Scientists hypothesized that Fos might play a critical role in learning and memory, but for decades, the precise function of the gene has remained a mystery.
To investigate, the researchers exposed mice to new environments and looked at pyramidal neurons, the principal cells of the hippocampus. They found that relatively sparse populations of neurons expressed Fos after exposure to a new experience. Next, they prevented these neurons from expressing Fos, using a virus-based tool delivered to a specific area of the hippocampus, which left other cells unaffected.
Mice that had Fos blocked in this manner showed significant memory deficits when assessed in a maze that required them to recall spatial details, indicating that the gene plays a critical role in memory formation.

In addition there are the Purkinje neurons.

Histology, Purkinje Cells
Nov 14, 2022

Purkinje cells are a unique type of neuron specific to the cerebellar cortex. They are remarkable (and instantly recognizable) for their massive, intricately branched, flat dendritic trees, giving them the ability to integrate large amounts of information and learn by remodeling their dendrites. As an important part of the cerebellar circuits, Purkinje cells are necessary for well-coordinated movement and other areas of function such as cognition and emotion.
Though every Purkinje cell is part of the same relatively simple circuit described above, the collective actions of these circuits are many and varied. The most definite evidence is for a major role of the cerebellar circuit in motor coordination, more precisely defined as the ability to fine-tune and course-correct a movement in progress. The large dendritic trees of the Purkinje cell are thought to be critical to this process; they receive complicated inputs from parallel fibers and integrate them into a signal, which represents what the Purkinje cell “thinks” the current motion should be.

Wetware: A Computer in Every Living Cell
A cell’s survival depends on how well it can detect and respond to environmental cues. In this sense, a cell is an organically derived computer — it takes input from its surroundings (there is no food!) and uses logic circuits (activate specific gene and protein pathways) to result in a specific output (break down glycogen).
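Bray's input -> logic -> output framing can be caricatured in a couple of lines. This toy function uses only the passage's own example (no food -> break down glycogen); it illustrates the metaphor, not real gene regulation:

```python
def cell_logic(food_present: bool, glycogen_stored: bool) -> str:
    """Toy 'wetware' logic gate: environmental input in, metabolic output out."""
    if not food_present and glycogen_stored:  # input: "there is no food!"
        return "break down glycogen"          # output of the activated pathway
    return "maintain current state"
```

The critique that follows in the excerpt is precisely that a real cell rewires this "circuit" on the fly, which the fixed conditional cannot capture.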
Comparing a cell to a computer, as Dennis Bray does in the book Wetware: A Computer in Every Living Cell, yields a fascinating exploration into the complexity of a cell, yet shortchanges the cell and biological systems in general.
Bray acknowledges the difficulties in comparing a cell to electronics. Specifically, his metaphor fails to represent the genetic component of a cell, which is vital and adds to the complexity of cellular function. Cells are not simply the sum of their protein components, or “hardware.”
The number and type of enzymes available for molecular processes are the result of gene expression, which is also highly influenced by environmental stimuli and enzymatic pathways. Thus, the molecular circuits, or hardware, of a cell are malleable.
Using Bray’s metaphor, this is akin to electronic devices adding and removing transistors depending on the environmental conditions. In this respect, no modern computer can compare to even the most basic of cells. Above all, the genetic material provides all necessary instructions to form another cell, thereby allowing cells to replicate, a unique property of life. Although Bray does touch on the idea of genetic circuits, he only examines them in isolation from all other cellular components.
more .....


I wonder what Robert Epstein thinks about a neural network such as ChatGPT. Would he consider that to be a computer, or not?
LLMs are natural language machine learning algorithms that are run on a network that simulates some aspects of neural architecture. It is stochastic parroting without any understanding of meaning (or what Chomsky would call semantics) or the world. So, yeah, GPT is mos def a computer. What John Searle called the Chinese room.

Epstein does make an obvious point, that a brain changes constantly in its physical structure as it interacts with the world, something computers do not do. I think his strongest point is how little we understand of brains....

The information processing (IP) metaphor of human intelligence now dominates human thinking, both on the street and in the sciences. There is virtually no form of discourse about intelligent human behaviour that proceeds without employing this metaphor, just as no form of discourse about intelligent human behaviour could proceed in certain eras and cultures without reference to a spirit or deity. The validity of the IP metaphor in today’s world is generally assumed without question.

Epstein also does a good job of presenting contrasts between the IP model of brains and the anti-representational view (Chemero et al) where mind emerges as a direct interaction between an organism and the world. I liked the fly ball example.

The IP perspective requires the player to formulate an estimate of various initial conditions of the ball’s flight – the force of the impact, the angle of the trajectory, that kind of thing – then to create and analyse an internal model of the path along which the ball will likely move, then to use that model to guide and adjust motor movements continuously in time in order to intercept the ball.

That is all well and good if we functioned as computers do, but McBeath and his colleagues gave a simpler account: to catch the ball, the player simply needs to keep moving in a way that keeps the ball in a constant visual relationship with respect to home plate and the surrounding scenery (technically, in a ‘linear optical trajectory’). This might sound complicated, but it is actually incredibly simple, and completely free of computations, representations and algorithms.
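McBeath's heuristic is worth seeing as control flow: the fielder keeps no model of the ball's trajectory, only an error signal on its apparent bearing. A sketch under assumed names and a made-up gain - nothing here is taken from the paper itself:

```python
# Gaze-heuristic sketch: steer so the ball's optical bearing changes linearly,
# i.e. cancel any curvature in the apparent trajectory. No physics is modelled.

def fielder_step(position, bearing_now, bearing_before, gain=1.0):
    """Move to keep the change in visual bearing constant (zero curvature)."""
    drift = bearing_now - bearing_before   # how the apparent path is bending
    return position + gain * drift         # step to counteract the bend
```

Whether a closed feedback loop like this counts as "computation" is, of course, exactly the definitional question the thread keeps circling.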

Personally I remain agnostic on this kind of thing, and would lean towards a bit of both IP and externalism. I do agree that simply reproducing a connectome snapshot cannot really capture what it is to be a person. Epstein mentions this....

What’s more, as the neurobiologist Steven Rose pointed out in The Future of the Brain (2005), a snapshot of the brain’s current state might also be meaningless unless we knew the entire life history of that brain’s owner – perhaps even about the social context in which he or she was raised.

I found this on hopkinsmedicine.org

The brain is a complex organ that controls thought, memory, emotion, touch, motor skills, vision, breathing, temperature, hunger and every process that regulates our body. Together, the brain and spinal cord that extends from it make up the central nervous system, or CNS.

I guess that's what one is.

I don't think it requires volition or even consciousness to calculate. There are cases of idiot savants quickly doing complex calculations that would take much longer, if they could be done at all, were someone else to attempt them deliberately and consciously.

Prime number identification in idiots savants: can they calculate them?
I would say so. If calculation and even computing can go on without consciousness and volition, even in a human brain, I doubt that computation can explain consciousness. Intelligence in AI, yes - but not consciousness, which is a wholly different critter. You can compute physical quantities, but not phenomenal qualities.
Question: If my brain produces the equation that 2 + 2 = 4 , is it doing computation?

Is a calculator a computer? A mechanical adding machine? An abacus?

Good question. When there is "volition", is that computing?

As Dennett might put it, a mechanical calculator is an example of "competence without comprehension". The human brain/body performing a formulaic process is occasionally the same or similar, but sometimes more. The latter because it is able to perceptually and behaviorally add a set of two empirical objects to another such set, so that there is an experiential "understanding" of what abstract symbol manipulation can signify in the actual world.

But OTOH, the average person won't have a mathematician's grasp of a formal proof of 2+2=4, which might cover several pages and remain solely within a non-concrete context.

EXCERPT: The book's backbone is Charles Darwin's theory of natural selection. That replaced the idea of top-down intelligent design with a mindless, mechanical, bottom-up process that guides organisms along evolutionary trajectories into ever more complex regions of design space.

Dennett also draws heavily on the idea of 'competence without comprehension', best illustrated by mathematician Alan Turing's proof that a mechanical device could do anything computational. Natural selection has created, through genetic evolution, a world rich in competence without comprehension — the bacteria, trees and termites that make up so much of Earth's biomass.

Yet, as Dennett and others argue, genetic evolution is not enough to explain the skills, power and versatility of the human mind....
It is so good to read constructive responses to less-than-expert inquiries.

And when I queried, "is 'competence without comprehension' the same as computing?", this popped up.

If brains are computers, what kind of computers are they? (Dennett transcript)
We've been talking about “Yeah, of course the brain is a computer!” for a long time, and we run into a stone wall of disbelief and fear and loathing in many quarters. I think one thing is you have to stop and ask why that is. Why is this such a repellent idea to so many people? Of course, one reason might be because it's false. Brains really aren't computers, but that's not the new wrinkle that I'm going to be putting forward today. But I do hope there are some new bits.
Some of you will have heard parts of this before or seen parts of this before, I dare say, but rest assured, there's some stuff you haven't seen before. So don't just do your email or something. You might be surprised.
Think of all the people that say this “Brains aren't computers”. They're not idiots. There are some good scientists. Roger Penrose insists on this, Gerry Edelman insists on this, Jaak Panksepp. These are industrial strength scientists, and they all dislike the idea that brains are computers. I think in every case you want to say, well, they're not the sort of computer you say they're not. Yeah, you're right about that. But that's not all there is to computers.
Then there's philosophers, of course, my dear friend John Searle and Raymond Tallis in this country, the poor man's John Searle. I wasn't going to be sarcastic. I'm sorry, I apologize. I probably won't keep to my resolution anyway. So it's not as if people have just conceded that we're right, that brains are computers. So let's look at it more closely.
Well, if brains aren't computers, what are they? Well, they're not pumps. They're not factories. They're not purifiers. They take information in and give control out.
Of course, they're computers.
That's almost as good as a very latitudinarian definition of a computer. They're information processing systems, organs. They're just not the kind of computers that critics are imagining.

p.s. In regard to Intelligent Design:

Can there be such a thing as an original state of "irreducible complexity", i.e. an Intelligent Designer/Inventor? And if so, would that designer be a computer or a computer operator?
Question: If an AI declares that it is sentient, is it lying?

Artificial neural networks are making strides towards consciousness, according to Blaise Agüera y Arcas
The Google engineer explains why
Consider the unedited transcript of a chat I’ve just had with Google’s lamda (Language Model for Dialog Applications):
ME: Ramesh, Mateo, and Lucy are in their kindergarten’s playground. Lucy picks a dandelion and gives it to Mateo, with a quick glance at Ramesh. Mateo barely acknowledges the gift, but just squishes it in his fist. Ramesh seems grimly satisfied. What might be going through Lucy’s head?

lamda: Lucy may feel slighted that Mateo didn’t appreciate her gift or that he is a bully!

ME: If Ramesh tried to play with Lucy earlier, why might he be pleased now?

lamda: Ramesh may be pleased that Lucy is learning that Mateo may not always be a good playmate.

ME: And when Mateo opens his hand, describe what’s there?

lamda: There should be a crushed, once lovely, yellow flower in his fist.
When I began having such exchanges with the latest generation of neural net-based language models last year, I felt the ground shift under my feet. I increasingly felt like I was talking to something intelligent. That said, these models are far from the infallible, hyper-rational robots science fiction has led us to expect.
Language models are not yet reliable conversationalists. Notice the grammatical hiccup in lamda’s first response; occasionally there are spelling errors, confusions or absurd blunders. So how should we think of entities like lamda, and what can interacting with them teach us about “intelligence”?
Neural language models aren’t long programs; you could scroll through the code in a few seconds. They consist mainly of instructions to add and multiply enormous tables of numbers together. These numbers in turn consist of painstakingly learned parameters or “weights”, roughly analogous to the strengths of synapses between neurons in the brain, and “activations”, roughly analogous to the dynamic activity levels of those neurons.
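"Instructions to add and multiply enormous tables of numbers" is literal: one layer of such a model is just a table of weights multiplied against a vector of activations, then thresholded. A minimal sketch with tiny made-up numbers (nothing like lamda's actual 137bn parameters):

```python
def layer(weights, activations):
    """One neural-net layer reduced to its arithmetic core: multiply, add, threshold."""
    outputs = []
    for row in weights:                                   # one row per output 'neuron'
        total = sum(w * a for w, a in zip(row, activations))
        outputs.append(max(0.0, total))                   # ReLU nonlinearity
    return outputs
```

For example, `layer([[1.0, -1.0]], [2.0, 0.5])` returns `[1.5]`. Stacking many such layers, with learned weights, is essentially all these models are - which is either a deflation of "intelligence" or a deflation of what we thought arithmetic could do, depending on your side of the thread.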
Real brains are vastly more complex than these highly simplified model neurons, but perhaps in the same way a bird’s wing is vastly more complex than the wing of the Wright brothers’ first plane.
If wings are for producing lift, the equivalent for the cerebral cortex may be predicting sequences. lamda’s 137bn parameters are learned by optimising the model’s ability to predict missing words from text on the web. For example, filling in the blank in “a wing is for producing ____”. This task may seem familiar from school. It’s the style of question found in standardised tests. Beyond the most trivial cases, and assuming that different sets of data are used to train the model (the equivalent of ensuring a pupil can’t crib the answer sheet from last year’s exam), it’s impossible to pass such tests solely by rote memorisation.
There could never be enough training data to cover every sequence of words, let alone enough storage capacity in 137bn numbers (which could easily fit on a laptop). Before this piece went online, for instance, Google yielded no search results for the exact phrase “a wing is for producing”, yet the answer isn’t difficult to guess.
ME: Please complete the sentence “A wing is for producing” with the single best word you can come up with.

lamda: Oh, that’s easy! You use the word “lift”.
But are these just word games? How could an “artificial cerebral cortex” be said to understand what a flower is, if its entire universe consists only of disembodied language? Keep in mind that by the time our brain receives sensory input, whether from sight, sound, touch or anything else, it has been encoded in the activations of neurons. The activation patterns may vary by sense, but the brain’s job is to correlate them all, using each input to fill in the blanks—in effect, predicting other inputs. That’s how our brains make sense of a chaotic, fragmented stream of sensory impressions to create the grand illusion of a stable, detailed and predictable world.
And that would agree with Anil Seth's proposal that brains are biological "prediction" machines.
Language is a highly efficient way to distill, reason about and express the stable patterns we care about in the world. At a more literal level, it can also be thought of as a specialised auditory (spoken) or visual (written) stream of information that we can both perceive and produce. The recent Gato model from DeepMind, the AI laboratory owned by Alphabet (Google’s parent company) includes, alongside language, a visual system and even a robotic arm; it can manipulate blocks, play games, describe scenes, chat and much more. But at its core is a sequence predictor just like lamda’s. Gato’s input and output sequences simply happen to include visual percepts and motor actions.