
The Nature of Thought: Why Our Minds Cannot Be Replicated in Machines

The human ability to think—and to think about thinking (metacognition)—has been the subject of philosophical debate for millennia. Yet despite the fact that we all think continuously, few of us pause to consider what a thought really is, the nature of the relationship between thoughts and objects, and the metaphysical, political, and ethical implications for our self-conceptions as thinking beings.

In The Meaning of Thought, philosopher Markus Gabriel explores the nature of thinking and how, in the modern world, our false conceptions of thought are actually creating the conditions for civilizational and cultural demise. Gabriel investigates how we came to hold these false beliefs and proposes how we can replace them with a richer, more humanistic conception of thought.

Let’s start with a simple definition of what we can know is real. Reality, according to Gabriel, is anything we can be wrong about. I might believe, for example, that the earth is shaped like a flat disc, but believing this does not make it true; the preponderance of evidence—and the existence of irrefutable scientific reasoning—forces me to conclude, beyond any reasonable doubt, that the earth is, in fact, an oblate spheroid. This is an established fact that we can either believe or falsely deny. If you think the earth is flat, the thought itself—the very fact that you believe it—is indeed a fact, but the belief is false. Our shared perceptual and rational faculties are the means by which we establish the veracity of facts in our shared reality.

Likewise, since thought itself is a part of reality, there is no reason to suppose that it is not also something we can be wrong about. And in fact, according to Gabriel, most of us are very wrong about the nature of thought, and in a very specific way: we think that thinking is nothing other than information processing.

The common view that thinking is simply information processing—along with the idea that we can therefore simulate thinking in machines—has resulted in, among other things, the belief that the only knowledge of any value in the modern world is scientific and technological. In contrast to the richer conception of the human being in the ancient world, with its robust traditions of public philosophical discourse, the modern technocratic world is defined by its emphasis on scientism and reductionism and by the lack of any substantial philosophical dialogue at all.

To understand how we got to this point is to understand how our conceptions of thought have changed. As Gabriel explains, our thinking about thought changed alongside the advances in logic and mathematics that produced modern computer science. Ever since we began comparing the process of thinking to information processing in machines, we have steadily eroded and diminished our self-conceptions as human beings.

To see why human thought cannot be reduced to information processing alone, we need to think more deeply about what thought really is, including its relationship to the world. 

As Gabriel explains, all thinking can be thought of as a form of “complexity reduction.” As we navigate the world, we are confronted by an infinite variety of information and stimulation, some of which registers in our conscious awareness and some of which does not, depending on what our sensory apparatus is primed to pick up. Consequently, there is no objective “view from nowhere”; rather, different perspectives must be adopted to simplify the infinite complexity of reality. Humans and bats, for example, inhabit the same reality but can only cognize it partially, each according to its biological makeup. Biology is key here: rather than saying that different species inhabit different realities, it is more accurate to say that they inhabit the same reality but are attuned to different aspects of it through selective perception. Thinking is our biologically programmed way of navigating the infinity of reality in a simplified and manageable way. But we should not forget that what we can directly experience is only a small sliver of reality (e.g., visible light constitutes only 0.0035 percent of the entire electromagnetic spectrum).

Gabriel’s key insight here, I think, is to recognize that the act of thinking, along with the act of thinking about thinking, is essentially a sixth sense, alongside our senses of sight, hearing, smell, taste, and touch. (In fact, there are far more than five senses—proprioception and temperature sensitivity being two examples—but the overall point remains the same.)

This helps, I think, to explain why computers are not conscious and do not “think” in the sense that humans can. If thought is itself a sense, then it becomes clearer why we cannot create artificial intelligence (AI) that truly thinks like humans. While computers are very good at processing information, it’s hard to imagine what computer code could make a computer feel pain, taste food, or see an image, in the sense of actually experiencing subjective color and texture. Likewise with thought: how could any conceivable computer code get a computer—a series of transistors in on or off positions—to reflect on its own information processing without the support of biological tissue and billions of years of evolution? (Could even humans think without the other senses of sight, sound, touch, and so on, which all seem to be integrated in a synergistic way?)

When we think, we experience our thoughts in the same way we experience sound, for example, and this is entirely dependent on, though not necessarily exclusive to, our biology. Our brains, equipped with hundreds of trillions of synaptic connections built over hundreds of millions of years of evolution, are a prerequisite for the sense of thought, and the idea that we can reproduce this using silicon alone is a misconception of the highest order. But it’s not only that we are misguided in our belief that we can build computers that can “think”; in holding this belief, we also diminish our own self-conceptions, coming to see ourselves as robotic information processors without direction, morality, or purpose.

A better way to think about AI, as Gabriel writes, is as follows:

“Given that we have no real evidence that AIs are thinking (although we have recently acquired a manner of speaking about them in this way), it is more rational to think of AIs as unintelligent, unthinking devices used in human contexts than as truly autonomous agents who compete with us in cognitive tasks.”

Computers can process information as an extension of our own intelligence, but the output still requires human interpretation. The statement that a computer can “play” chess, for example, is not entirely true; the computer simulates moves according to its algorithms, but it is not conscious of its choices, and it is only “playing” chess to the extent that a human is present to interpret its moves according to the human-invented rules of chess. The computer can beat the human player at chess, but it cannot desire to win or feel pleased with itself for doing so. Since human thought cannot be separated from our other senses and emotions, computers will never be able to think unless we can simulate and integrate these other senses as well.
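To make this concrete, here is a minimal sketch of what machine “play” amounts to computationally. It uses tic-tac-toe rather than chess so that the example stays self-contained, and Python purely for illustration; none of this comes from Gabriel’s book. The program does nothing but assign numbers to board positions via the standard minimax algorithm; there is no desire to win anywhere in it.

```python
# Illustrative sketch (not from Gabriel's book): machine "play" as pure
# position evaluation, using tic-tac-toe so the example is self-contained.
# The board is a list of 9 cells, each holding 'X', 'O', or None.

def winner(board):
    """Return 'X' or 'O' if a player has three in a row, else None."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals
    for a, b, c in lines:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Score the position from X's point of view: +1 win, -1 loss, 0 draw."""
    w = winner(board)
    if w is not None:
        return 1 if w == 'X' else -1
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0  # board full: a draw
    other = 'O' if player == 'X' else 'X'
    scores = []
    for m in moves:
        board[m] = player          # try the move...
        scores.append(minimax(board, other))
        board[m] = None            # ...and undo it
    return max(scores) if player == 'X' else min(scores)

def best_move(board, player='X'):
    """Return the index of the move with the best minimax score."""
    other = 'O' if player == 'X' else 'X'
    def score(m):
        board[m] = player
        s = minimax(board, other)
        board[m] = None
        return s
    moves = [i for i, cell in enumerate(board) if cell is None]
    return max(moves, key=score) if player == 'X' else min(moves, key=score)

if __name__ == "__main__":
    print(best_move([None] * 9))  # prints a cell index; nothing here "wants" to win
```

The output is just an integer between 0 and 8. It takes a human, applying the human-invented rules of the game, to read that integer as a “move” at all.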

The bottom line is that if thinking is truly a biological sense tied to our other senses, then the prospect of creating thinking beings without the complex biology developed over eons of evolution (which is far too complex for us to understand completely) is extremely unlikely, to say the least.

Notice, however, that Gabriel’s arguments do not conclusively refute substrate-independence, the idea that thinking and consciousness could occur in non-biological material. It could be the case that information processing of sufficient complexity creates consciousness without the prerequisite of biological evolution, and we simply have not accomplished it yet. While one must take this possibility seriously, I have to agree with Gabriel that, given the complexity involved, it seems rather unlikely we will ever achieve it.

Let’s not forget that we do not have a satisfactory scientific explanation of our own consciousness, so the idea that we can replicate in a machine something we do not even understand is borderline ridiculous.

This doesn’t mean that AI is not dangerous. As Gabriel repeats throughout the book, the way AI is created and deployed can change the way we think, the way we think about thinking, and our very self-conceptions and values. In fact, it has already done so, helping to exacerbate an already philosophically stunted society by further limiting public discourse to technological-scientific debates, with little consideration paid to the philosophical, moral, artistic, and humanistic aspects of our lives and reality. We must first transcend this limited technocratic view of human nature before we can hope to restore a sense of common humanity and create the conditions for more substantial political dialogue.

Overall, Gabriel’s New Realism and enlightened humanism offer a refreshing alternative to the superficial discussions of AI that are far too commonplace among contemporary scientists and AI researchers. Rather than thinking of our thought as somehow “unreal”—a cheap imitation of an actual reality that is “out there” and removed from our minds—Gabriel posits that our conscious experience is real in the fullest sense, and that it’s the only reality we can ever access. Even if, for example, the Interface Theory of Perception is true—and our minds are desktop interfaces, with our perceptions acting as icons in an artificial graphical user interface—this interface is the only reality available to us. Whether we perceive the “real world” as it is or only as a series of representational icons that distort some underlying reality, we can never transcend our own consciousness to compare it with the “actual world.” Consciousness is our reality, and our subjective experience is every bit as real as the physical descriptions of the natural sciences.

Once we understand that consciousness is a fundamental, irreducibly complex part of our reality as biological beings, we can begin to see through the fantasy of creating consciousness in machines. And only then can we shift our focus away from the imaginary dangers of AI superintelligence and toward the ways humans are using AI today to insidiously perpetuate a value system that jeopardizes our very self-conceptions as autonomous, thinking beings with moral value and purpose.


The Meaning of Thought is available on Amazon.com.