The Myth of the Unified Mind
Published: 2021-10-21

This blog post is about the mental models that we have of our own minds—and how these models affect our behaviors.

How do we think about our minds? This is a murky and fascinating question. When I was a very small child, I had a partial mental model of my mind: Inside my head, I imagined that there was another little person and something like a TV screen which displayed to them the contents of my vision. That this was how vision worked seemed unavoidable. Indeed, perhaps it isn’t as silly as it might sound that I would explain my mind to myself by positing the existence of another mind inside it. Minds are unique. They have properties shared only by other minds. To what else could I possibly have appealed to explain these properties than… another mind?

As we get older, some of us may find analogies for the mind that seem somewhat suitable, such as a tablet of paper or a computer. As models, these avoid the circularity of the five-year-old’s “mind within a mind,” but they also turn out to be very bad models and we do not usually rely on them to help us think about ourselves.

Where does this leave us? It may be that most of us never obtain a particularly good model of how our minds operate at a detailed level. We simply have thoughts, experience mental phenomena, vaguely attribute these thoughts and phenomena to some operation of the brain, and move on. But there is also something of the five-year-old’s “mind within a mind” which survives and which is important for us to understand.

As children, there is a point in our lives when we develop what is called a theory of mind. This does not usually refer to the kind of model we are talking about here, i.e. a model of one’s own mind. Rather, it refers to the gradual deduction over time that other people in our lives have their own minds and experiences, distinct from our own. You can tell that a toddler has not yet developed a theory of mind when they ask questions like, “Can you tell me the name of that thing; the one in the refrigerator?” with the implicit expectation that you know exactly what they are thinking about.

A theory of mind is born when we project attributes of our own mind, such as thoughts, feelings, intentions, and moods, onto another person. But there is something of the reverse as well: only through language and socialization do we even develop an understanding of many of these concepts, often in the context of observing other people. We learn what an angry or selfish person is like, and we might later project those attributes back onto ourselves and our minds. An evolutionary perspective can help us to see why this is perhaps the more important direction: we can pursue our own wants and needs without much need for introspection; only when we need to predict other people’s actions does a theory of mind become critical. Thus, the models that we have for our own minds are hugely shaped by the requirements that evolution has levied on our models of other people’s minds.

What is it that we need to understand about other people in order to survive? Most importantly, we need to understand what it is that others are after and what they are likely to do in order to obtain those things. Let us use the term “agent” to describe an entity that has a set of wants and needs (generically, goals) which it seeks to fulfill, and a set of behaviors and responses (generically, policies) for obtaining these goals. Under this definition, it seems that evolution primes us to model other people, and consequently ourselves, as agents. (It may not be clear at this point what else a model of the mind could look like, but this should become clear in a little bit.) Importantly, there is something singular about this model. Just as we perceive others as singular, unified agents, so too we think of our minds as essentially singular and unified, whatever this means.
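To make this definition concrete, here is a minimal sketch in Python. The names (Goal, Agent, the policy signature) are illustrative choices of mine, not anything standard: the point is only that, under this model, an agent reduces to a set of goals plus a policy for pursuing them.

```python
from dataclasses import dataclass
from typing import Callable, List

# A goal: a named want or need, with some notion of how strongly it is wanted.
@dataclass
class Goal:
    name: str
    importance: float

# An agent, under the definition above: a set of goals plus a policy,
# i.e. a rule mapping the current situation to a behavior.
@dataclass
class Agent:
    goals: List[Goal]
    policy: Callable[[str], str]  # situation -> chosen behavior

# A crude example agent whose policy serves its most important goal.
forager = Agent(
    goals=[Goal("eat", importance=0.9), Goal("rest", importance=0.4)],
    policy=lambda situation: "seek food" if situation == "hungry" else "rest",
)

print(forager.policy("hungry"))  # -> "seek food"
```

The unified-mind assumption, in these terms, is that a single such object suffices to model a whole person.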

To illustrate what this notion of unification might mean to us, let’s consider a quote attributed to F. Scott Fitzgerald: “The test of a first-rate intelligence is the ability to hold two opposing ideas in mind at the same time (…).” The idea of the quote seems to be that, while it is easy to inhabit a single, consistent view of the world, it is harder to fit one’s mind into multiple conflicting but self-consistent views, and it takes still more mental agility to inhabit multiple conflicting views at the same time. While the focus of the quote is on intelligence, for our purposes it is more interesting to focus on what the quote takes for granted about our minds: that they don’t naturally allow room for opposing ideas; that they are designed to admit only one internally consistent view at a time, as it were. Let us call this the unified property.

Under the “agent” model of the mind, the unified property seems quite natural and expected. But to what extent is it true? We will now undertake an exercise in introspection, with the goals of obtaining a more nuanced understanding of the unified property and an adjusted mental model that better explains the nuances. Suppose that I tried to convince you of some fact flatly at odds with your view of the world. For example, I find that many of my friends do not share my view that the earth is flat. To me, a scientist with extensive training in both deductive and inductive reasoning and the scientific method, this fact sits right alongside many other well-accepted theories such as general relativity.

What has happened now in your mind? Quite possibly, your mind has begun to suggest reasons why my view cannot be correct: Many satellites orbit the earth. There are pictures of the earth from space, and it is a sphere. A flat earth makes no sense in the greater context of cosmology, gravity, or anything else. (If you only knew… ok, just kidding, I’m not a flat earther.) This may seem to support Fitzgerald’s quote in a way, but does it support the “agent” view of the mind? After all, we may ask, where did these suggestions come from? You certainly didn’t, as a single unified agent, undergo some conscious process of eliciting them. Only once they are in your consciousness can you try to reason about these ideas in a sequential, linear manner, as an “agent” would do. But as you do this, new suggestions, inductions, and inferences again avail themselves, as if they had been outsourced beyond your consciousness (…to other agents? We’ll come back to this).

Who or what is doing the work of providing these suggestions, performing the inductions, and working out the intuitive inferences? This is the piece that is missing from the “agent” model of the mind: the vast, unknowable subconscious. Because it is unknowable, we make the mistake of leaving it out of our mental model of the mind. But this is indeed a mistake. Our minds are not simple, unified “agents.” They are more like agents aided and abetted by a diverse set of fallible, subconscious processes, which attend to the contents of our conscious mind and intermittently interject with relevant thoughts and ideas. Indeed, it is only by the vigilant activity of these processes that our minds are protected from inconsistencies and incongruities. (It would seem that Fitzgerald’s quote ought to have another side: we can’t always take it for granted that we will detect when two ideas are opposing; this itself might require a first-rate intelligence.)

Having brought the subconscious into the picture, it is perhaps not a large leap to suggest that, even as the mind is not simple and unified (having both conscious and subconscious parts), neither is the subconscious mind. I have used the plural “processes” in the above to emphasize this idea of multiple parts, but it may be more appropriate to think of these parts also as agents. A post like this one is a good starting point for understanding this view from a biological perspective, and a good primer for the rest of this article.

We now get to the main point of this article, which is to ask the following question: What are the consequences of thinking of ourselves as singular, unified agents rather than as a collection of parts which actively work to achieve a sort of integration? In many situations, the answer might be nothing at all. But I have come to think that in other situations we may do ourselves a grave disservice by failing to consider the processes by which our minds achieve the illusion of perfect unity and integration.

In particular, I think I’ve identified three classes of common mistakes (among possibly many more) which stem from our belief in the myth of a unified mind. I will briefly list them before discussing each in more detail and providing examples:
  • The error of presumed integrity: we assume that two of our espoused views or attitudes are consistent simply because we are able to hold them simultaneously.
  • Reprojection error: we assume that a thought, desire, or inclination we have at a moment in time reflects our entire self rather than just a part.
  • The non-mixture-of-experts error: we fail to take full advantage of the ways we can access the different parts of our minds through different modes of processing information.

Let us now consider each of these error types in more detail, beginning with the error of presumed integrity. As mentioned, this error happens when we assume that some of our espoused views or attitudes are consistent simply because we are able to hold them simultaneously. A common outcome of this error is what we recognize as hypocrisy.

The myth of the unified mind provides an interesting perspective on hypocrisy: one can sit, reflect on some state of affairs, and think charitable and generous thoughts about another person or group. If one is under the assumption that one’s attitudes and actions toward that group are and will be (must be) consistent with these thoughts, then the job is done. The subconscious chain of inference is something like: I am having charitable thoughts about this person => I am a charitable person with regard to this person => I act according to my charitable nature toward this person. In short, I assume that my actions toward this person are aligned with my charitable thoughts. Is this something that people do? I think I have done it.

I think this framing in terms of inference is helpful. Our minds are constantly making inferences about themselves. What are the errors in that process of inference when it is subject to the fallacious assumption that the common cause of our thoughts, actions, and attitudes is something unified and consistent rather than a network of connected parts, exhibiting varying levels of integration? Hypocrisy may be one type of error.

Another type of error that may fall into this category is what has been called “belief in belief.” (More to be added on this.)

Next, let us consider “reprojection error,” so called because it occurs when we receive a thought from some part of our mind and then reproject the source of that thought onto the whole rather than the part. I believe this error occurs commonly when we decide how to act on some desire or urge. For example, suppose that I am about to start on a difficult project when (as usual) I feel an impulse to check social media or toggle through internet tabs. How do I interpret this desire?

  • Here is one way: I should be working on this project, but I sort of want to check social media instead.
  • Here is another way: part of me wants to work on this project, but part of me wants to check social media instead.

The great part about having a multi-part or multi-agent model of one’s mind is that one can, to a large extent, take the second formulation quite literally (see the sketch below). Perhaps there is a subagent that is set on getting the reward from working on the project and another that is addicted to social media. The next question is then this: which agent do you want to encourage?
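For the computationally inclined, here is a toy sketch of what taking the second formulation literally might look like. Everything here (the subagent names, the urgency-times-weight rule) is a hypothetical illustration, not a claim about how minds actually arbitrate.

```python
from dataclasses import dataclass
from typing import List

# Each subagent lobbies for an action with some urgency; its weight
# reflects how much it has been "encouraged" historically.
@dataclass
class Subagent:
    name: str
    action: str
    urgency: float
    weight: float = 1.0

def choose(subagents: List[Subagent]) -> Subagent:
    # The mind's "decision" is just whichever bid is loudest right now.
    return max(subagents, key=lambda a: a.urgency * a.weight)

def encourage(agent: Subagent, amount: float = 0.1) -> None:
    # Acting in a subagent's favor strengthens its future bids.
    agent.weight += amount

worker = Subagent("worker", "start the project", urgency=0.5)
scroller = Subagent("scroller", "check social media", urgency=0.7)

print(choose([worker, scroller]).action)  # -> "check social media"
encourage(worker, amount=0.5)             # deliberately side with the worker
print(choose([worker, scroller]).action)  # -> "start the project"
```

On this picture, “which agent do you want to encourage?” becomes a concrete question about which part’s weight you are incrementing, rather than a referendum on your whole self.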

I personally find this question, especially when imbued with a biological backing, to be a much easier one than the first. After all, if I want to check social media, who am I to deny myself? Thinking of myself as a single unified agent inevitably requires a much more complicated and effortful set of mental exercises to convince me to forgo social media. Am I really going to analyze my policy for achieving my overall objectives and ask what I should be doing in this particular instant to maximize my utility? No! That’s too difficult. I’m just going to quickly check social media and move on.

Finally, the third error. Thinking of our minds as unified prevents us from making full use of our minds’ resources. We can put this better in positive terms: when we dispel the myth of unification, it becomes more obvious that the way in which we interact with some question or issue may be quite different depending on whether we are thinking about it, reading about it, writing about it, or talking about it, as these different activities engage the different parts of our mind in different ways.

I personally find that I often have breakthroughs on a hard problem when I’m talking through the problem with someone else. I know this as an empirical fact. But I’m sometimes tempted to think that I should be able to achieve the same breakthroughs alone, merely by organizing my thoughts better. Ceasing to think of my mind as a fictional, unified agent quickly dispels this notion.