Konrad Urban. This is a draft and work in progress. Feedback welcome.
The Knowledge Argument holds that conscious experiences involve non-physical properties. Frank Jackson (1982) constructed a thought experiment featuring Mary, a brilliant colour scientist and neurophysiologist. All her life, she has been confined to a completely colourless room. She possesses all physical information about colour vision (wavelengths of colours, which neural connections are at work under which conditions, etc.). Jackson claims that Mary would learn something new after her release from the room and her first experience of colour. More formally,
(KA P1) Mary knows all the physical facts concerning human colour vision before her release.
(KA P2) But there are some facts about human colour vision that Mary does not know before her release.
(KA C1) Therefore, there are non-physical facts concerning human colour vision. (Nida-Rümelin, 2009)
An influential strategy against the Knowledge Argument rests on the Ability Hypothesis (Lewis, 2004; Nemirow, 2007, 2008). It states that knowing what an experience is like is an ability (e.g. to remember, recognise, etc.). If the Ability Hypothesis is true, then Mary does not gain new knowledge but a new ability, which saves physicalism from the Mary case. I would like to show that an ability of a different kind is a problem for the Knowledge Argument: Mary is unable to know all the relevant physical facts (KA P1 is impossible). I am not defending physicalism; I am only stating that either Mary is impossible and should not be used as a case in either direction, or (if construed such that Mary is possible) the case collapses into a different case, better described by Thomas Nagel's problem of the objective/subjective perspective (1974).
A lot of attention has been directed at what Mary's case entails and how misguided our initial intuitions are. For instance, Daniel Dennett argues that Mary's case is not readily imaginable, and that it is bound to misguide us for that reason (Dennett, 1993, p. 399; Robinson, 1993). He also argues that a proper understanding of Mary would lead us to believe that she had already been able to imagine everything about colours before her release (Dennett, 2007). George Graham and Terence Horgan (2000) claim that she would not be able to imagine everything, leaving some unimagined phenomenal residue whose content would be new to her after exposure to colour. The defining question of the debate is what Mary is able to know before her release. I agree with Dennett's preliminary analysis that Mary's case is not readily imaginable, and that the case is constructed to mislead intuitions: Jackson's misleadingly labelled "brilliant scientist" is actually Super Mary, at least a demi-god, with abilities far beyond humans'.
Experimentally redescribing Mary's case in a different terminology, such as demi-god, daemon or Super Mary, would be illuminating. In fact, I am convinced that if Jackson's version had used such descriptors instead of a human(oid) Mary, initial intuitions would not be anti-physicalist. As much fun as "Super Mary" would be, it is probably enough to mention this and stick to her original name, while disentangling what Jackson demands of us. My motivation is different from Dennett's. I hope to show that Mary's case is not readily imaginable because knowing all the relevant physical facts is conceptually impossible. The brains Mary can know all physical facts about are necessarily simpler than hers, which collapses the case into Nagel's problem of perspectives.
First, I will show that "all physical facts relevant to colour vision" must either include all facts about Mary's brain state when experiencing colour, or the thought experiment is unwarranted as it collapses into Nagel's qualia problem (1). Then, I hope to show that Mary must be able to operate on the facts she knows in a specific way that I call emulation (1.2). Next, I will explain why an informationally poorer system cannot emulate (perfectly simulate) an informationally richer system (2). Finally, I will show that Mary's brain cannot emulate itself, because it would have to be informationally richer than itself (3). I shall conclude that either the Mary case is conceptually impossible and therefore should have no weight in supporting physicalism or anti-physicalism, or it collapses into Nagel's qualia problem.
1 Knowledge
Mary investigating simpler brains (such as human brains) would prevent the Knowledge Argument from working. Mary must be able to emulate her own brain, because it is Mary's phenomenal experience that she is after, not that of some simpler brain. Simply put, if Mary were to analyse only her visual cortex, she would not be analysing Mary's experience. On a dialectical level, if she were after the phenomenal experience of a simpler brain, the whole case would collapse into Thomas Nagel's subjective/objective gap in "What Is It Like to Be a Bat?" (1974), which posits that the jump from the subjective to the objective is impossible. Conversely, if Mary were a non-human demi-god with full physical knowledge about human brains, her question would be about what it's like to see red for humans, not whether there are new (physical or non-physical) facts. After all, Jackson asks whether pre-release Mary would know what it's like to see red. He does not ask whether she knows what it's like for something else (e.g. a simpler brain) to see red. This is not a problem for the physicalist, but it makes Mary dialectically superfluous (and more confusing) as a case separate from Nagel's problem of perspective.
A key term in my discussion is "emulation", a term I borrow from computing. Let an emulation be a perfect simulation or a perfect mental representation of an object. An emulated brain is not just a model of a brain, because models necessarily simplify (Cartwright, Shomar, & Suárez, 1995). An emulated brain is not just a simulation, because simulations can use representational shortcuts (e.g. a simulation of a billiard game does not need to contain representations at the quantum level, given that the observable billiard-relevant results are equivalent).
I avoid familiar terms like "imagining", because Mary's mental processes are unfamiliar to us. We usually approximate, model and abstract, losing data in favour of comprehension. For instance, redness could be described by some cultural model ("a symbol of aggression") or some physical model ("wavelengths of 620-700nm"). Neither of these models covers the content of the other. Mary, by design, never faces this trade-off, because her knowledge is full. In Parts 1.1–2, I will elaborate on what knowledge Mary must possess for the Knowledge Argument to make sense.
1.1 The Scope of Relevant Knowledge
Mary is defined to know all physical facts relevant to colour vision. The key word is "relevant". Depending on commitments in metaphysics and one's theory of relevance, Mary must have, minimally, knowledge about Mary's brain states when exposed to colours, or, maximally, knowledge about all physical facts, including those concerning possible worlds.
The minimum cannot be reduced, because the Mary case needs at least this much for its intended anti-physicalist intuitions. It is about the phenomenal experience of being someone like Mary and seeing colour for the first time: despite all her factual knowledge, she was unable to imagine what it's like. This minimum is, of course, implausibly optimistic, because it is unlikely that brain states alone fix phenomenal experiences, if such experiences exist. Furthermore, counterfactual theories of causation such as David Lewis' (Menzies, 2009) require possible-worlds analysis; under such commitments, Mary must know about possible worlds too. For my argument, we need only accept the plausible premise that Mary must at least know the state of her brain when experiencing colours. Most likely, however, she must know much more.
1.2 Knowledge and Emulation
For full knowledge, all relevant facts must be known, a truism underappreciated ever since Dennett's case (Dennett, 1993, pp. 399–401). If Mary had only physical (or encyclopaedic) knowledge without full understanding, the Knowledge Argument would not work: the physicalist could easily argue that her 'phenomenal' experiences are just emergent physical properties that are beyond Mary's comprehension (rendering KA C1 a disjunction: there exist physical facts or facts inaccessible to Mary). For instance, knowing all the coordinates of all trees in Europe does little to establish knowledge of the Białowieża forest. That would require the cognitive power to map such a list and to apply auxiliary knowledge of how forests emerge from trees. For full knowledge of the forest, one would have to effectively reproduce the forest in one's mind. This is what I refer to as emulation.
2 Emulation Capacities
A key premise of my argument is that an informationally poorer system cannot emulate an informationally richer system. Imagine a binary two-node net in which, every second, the nodes randomise their states. This allows for four two-bit states, A = {{0,0},{0,1},{1,0},{1,1}}. At any second, the network only produces two bits (e.g. {0,1}). However, to emulate the network one needs information beyond two bits and beyond A (8 bits in total). The emulator must at least include the procedures of the emulated network. Because of these auxiliaries, the emulator is informationally richer than the emulated network (call this the Auxiliaries Assumption). Depending on one's metaphysical commitments (1.1), the auxiliaries can form a much larger set than A, making my case even more intuitive.
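The toy net above can be sketched in a few lines of Python (a mere illustration of the Auxiliaries Assumption; the class and function names are my own and carry no philosophical weight):

```python
import random

# The two-node net: every tick, both nodes randomise their binary states,
# so the net itself carries only two bits at any moment.
def step_network():
    return (random.randint(0, 1), random.randint(0, 1))

# A: all four possible two-bit states of the net.
A = {(0, 0), (0, 1), (1, 0), (1, 1)}

# An emulator must hold more than the two bits of the current state:
# it must also encode the state space and the update procedure.
class Emulator:
    def __init__(self):
        self.state_space = A            # auxiliary: which states are possible
        self.procedure = step_network   # auxiliary: how states are produced
        self.current = (0, 0)           # the two bits being emulated

    def tick(self):
        self.current = self.procedure()
        return self.current

emu = Emulator()
state = emu.tick()
assert state in A  # the emulated state is always one of the four states in A
```

The emulator reproduces only two bits at a time, yet its own description includes A and the procedure, i.e. strictly more information than the net it emulates.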
But what does this have to do with Mary? Let us use an implausibly minimal example to make the case. Let Mary-Emulator be the system capable of emulating Mary's brain. I hope to show that Mary-Emulator must be informationally richer than Mary's brain. Recall (1.1–2) that Mary's knowledge must at least encompass knowledge of the state S, i.e. the state her brain would be in if she were to experience colours. S can be produced by Mary-Emulator, just as the elements of A were produced by the emulator in the previous example. To do that, Mary-Emulator must invoke the same procedures that guide colour vision in Mary's world and apply them to an emulated version of Mary's brain.
So far, the Knowledge Argument remains undisturbed by the Mary-Emulator. After all, Mary needs something like the Mary-Emulator to have full knowledge of all physical facts relevant to colour vision. However, what the Knowledge Argument needs is to let Mary-Emulator be Mary’s brain. Otherwise, as discussed previously, Mary would be investigating a different brain, which collapses the case into Nagel’s perspective problem.
Note that my argument is neutral about multiple realizability, the idea that mental kinds can be realised by various physical kinds (Bickle, 2016). Brains in my argument could refer to brain tokens but also to types realised in different ways as tokens. Even when I say "Mary's brain" I can refer to any system that realises Mary's brain; depending on commitments, this could be fixed by what it's like to be Mary, brain-related properties, powers, etc. What matters for my argument is the informational richness of the brain, i.e. how much information there is about it metaphysically (not how much it stores). My argument relies on the assumption that, for any brain, its informational richness is always the same (call this the Equality Assumption). So, whether realised in silicon, neurons or magnets, and whether largely active or inactive, the informational richness of whatever makes it a brain is equal.
The Equality Assumption implies that the informational richness of all of Mary's brain states is equal. It may seem that some states contain more information: say, in state B1 Mary's brain is mostly inactive, while in state B2 it is very active. It might seem that B1 is informationally poorer, but that is only because B1 could be described using less information, not because there is less information about it. E.g. the set C = {1, 2, 3, …, 99} could be described as C = {x | x ∈ ℕ, 1 ≤ x ≤ 99}, which is much shorter. Yet both descriptions pick out the same set. The mistake here is conflating description with content. I claim that a set of carbon atoms has the same amount of information whether it is arranged into a neat crystal that is elegantly describable or arranged chaotically. In both cases, there is still information about every atom. Likewise, there is information about every neuron.
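The point that a shorter description does not mean less content can be checked mechanically; the following Python fragment merely restates the set example above:

```python
# The explicitly listed set and the compactly described set are one and
# the same object; only the length of the description differs.
C_listed = set(range(1, 100))              # stands in for {1, 2, 3, ..., 99}
C_described = {x for x in range(1, 100)}   # "x ∈ N, 1 ≤ x ≤ 99"

assert C_listed == C_described   # identical content
assert len(C_listed) == 99       # 99 elements under either description
```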
3 Mary-Emulator’s Capacities
Mary-Emulator cannot be contained in Mary's brain, because it is informationally richer. Recall that, minimally, Mary-Emulator must produce a brain in S. Producing S can be done either by externally constructing S or by adopting it. These correspond to the hotly debated third-person (objective/perspectival) and first-person (subjective/sympathetic) perspectives, i.e. Nagel's objective/subjective distinction. Perspectival emulation views the brain from an external perspective, not unlike a scientist. Sympathetic emulation simply puts the emulator in the emulated state, together with the information that it is an emulation. According to Dennett, Mary can emulate sympathetically and be unsurprised by colour after her release, as she had a priori access to the experience (2007). Regardless of whether such sympathetic emulations are possible, I will deny that any emulation of any of Mary's brain states is possible for Mary.
In both cases, Mary-Emulator's information must contain S plus all the relevant auxiliary facts about emulating (per the Auxiliaries Assumption), e.g. that it is an emulator. Without these facts, the sympathetic emulator would simply be a copy, whilst the perspectival emulator would not know how to interpret the data.
Mary's brain cannot be Mary-Emulator, because the emulator would have to be a set of the brain's size plus something more. I will show this by a reductio ad absurdum. Were Mary's brain to be Mary-Emulator, it would be in the state Emulating-Mary's-Brain. Yet S is already one of Mary's brain states. By the Equality Assumption, S and any other of Mary's brain states (including Emulating-Mary's-Brain) are informationally equally rich, because both are confined to Mary's brain and thus there is as much information about them. Yet we have shown that an emulator must contain more information than the emulated system, which contradicts S and Emulating-Mary's-Brain being informationally equally rich. More colloquially, Mary cannot create a conceptual copy (emulation) of her brain, because such a copy would require more brain than Mary has.
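The reductio can be miniaturised as a toy formalisation in Python, with finite sets standing in for collections of facts (the fact names are illustrative placeholders, not claims about what a brain actually contains):

```python
# Placeholder facts about Mary's brain in state S.
brain_facts = {"fact_about_S", "fact_about_neuron_1", "fact_about_neuron_2"}

# Auxiliaries Assumption: an emulator needs the emulated facts plus
# auxiliary facts (e.g. that it is an emulation, the relevant procedures).
auxiliaries = {"this_is_an_emulation", "colour_vision_procedure"}
emulator_facts = brain_facts | auxiliaries

# The emulator is strictly informationally richer than what it emulates ...
assert emulator_facts > brain_facts   # strict superset

# ... while the Equality Assumption says every state of Mary's brain is
# equally rich. Were Mary's brain the emulator, its fact-set would have
# to be a strict superset of itself, which no set can be:
assert not (brain_facts > brain_facts)
```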
4 Conclusion
I hope to have shown that Mary cannot emulate her own brain. After all, even a minimally conceived set of Mary's knowledge must include all physical facts about Mary's brain. To know fully, Mary must emulate her own brain. Yet an informationally poorer system cannot emulate an informationally richer system, and a system emulating Mary's brain is informationally richer than Mary's brain. It follows that Mary cannot know all the relevant physical facts (rendering KA P1 impossible). However, if Mary were to look at simpler brains to make the case work, the thought experiment would add nothing new to Nagel's subjective/objective problem, which would then make the better pivot for the debate. Unfortunately, we will never hear whether (Super) Mary was surprised to see red, because she is conceptually impossible.
5 Bibliography
Bickle, J. (2016). Multiple Realizability. In Stanford Encyclopedia of Philosophy (Spring 2016).
Cartwright, N., Shomar, T., & Suárez, M. (1995). The Tool‐Box of Science. In Theories and Models in Scientific Processes [Poznań Studies in the Philosophy of the Sciences and the Humanities 44] (pp. 137–149).
Dennett, D. (1993). Consciousness Explained. London: Penguin.
Dennett, D. (2007). What RoboMary Knows. In Phenomenal Concepts and Phenomenal Knowledge: New Essays on Consciousness and Physicalism. https://doi.org/10.1093/acprof:oso/9780195171655.003.0001
Graham, G., & Horgan, T. (2000). Mary Mary, Quite Contrary. Philosophical Studies, 99(1), 59–87.
Jackson, F. (1982). Epiphenomenal Qualia. The Philosophical Quarterly, 32(127), 127. https://doi.org/10.2307/2960077
Lewis, D. (2004). What Experience Teaches. There’s Something about Mary: Essays on Phenomenal Consciousness, 77–103. https://doi.org/10.1017/CBO9780511625343.018
Menzies, P. (2009). Counterfactual Theories of Causation. In Stanford Encyclopedia of Philosophy (Fall 2009). Retrieved from http://plato.stanford.edu/archives/fall2009/entries/causation-counterfactual/
Nagel, T. (1974). What Is It Like to Be a Bat? The Philosophical Review, 83(4), 435. https://doi.org/10.2307/2183914
Nemirow, L. (2007). A Defense of the Ability Hypothesis. In Phenomenal Concepts and Phenomenal Knowledge: New Essays on Consciousness and Physicalism. https://doi.org/10.1093/acprof:oso/9780195171655.003.0002
Nemirow, L. (2008). Physicalism and the Cognitive Role of Acquaintance. In Mind and Cognition: An Anthology (pp. 490–499).
Nida-Rümelin, M. (2009). Qualia: The Knowledge Argument. In Stanford Encyclopedia of Philosophy.
Robinson, H. (1993). Dennett on the knowledge argument. Analysis (United Kingdom), 53(3), 174–177. https://doi.org/10.1093/analys/53.3.174