Drawn In Perspective

Fading qualia arguments can be more finely grained than the neural level

This is a blog post about the nomological claim that, in this universe, phenomenal consciousness supervenes on functional organisation. I don't think its conclusion has a strong bearing either way on the metaphysical or logical possibility of p-zombies, or on the question of the explanatory gap. Separately, I think the absent qualia argument provides reasonable evidence that p-zombies are metaphysically possible. I write more about what these terms mean in the philosophy of mind section of this post.

The fading qualia argument is an argument for the thesis that "experience is invariant across systems with the same fine-grained functional organization". Modulo making sense of what exactly "functional organisation" means, I think it is the most compelling argument for the position of computational functionalism about phenomenal consciousness. Specifically I think it is an argument for the position that the phenomenal consciousness of a being supervenes1 on its functional structure and organisation.

The fading qualia argument is normally presented as a "neural replacement argument". It asks us to imagine a scenario where the neurons in a human brain are replaced one by one by small functionally identical circuits made of silicon.

Many responses to the fading qualia argument therefore focus on whether there might be something going on inside human neurons that is relevant to the brain's overall phenomenal consciousness in a way that cannot be captured by functionally identical2 silicon chips.

I think these are fair responses to the specific neural case. If some process takes place inside neurons which, when enough neurons are assembled, leads to the overall system being phenomenally conscious, and that process is not carried out by the replacement silicon chips, then the neural replacement cases would, by definition, fail.

I still, however, find the overall form of the fading qualia argument convincing. My reason for this is that the argument was never specifically about neurons to begin with. To clarify this point, let's assume that we do live in a world where phenomenal consciousness in humans is due in part to some process going on inside neurons, say metabolic processes inside the cells, and therefore grant that neural replacement processes would fail to preserve phenomenal consciousness.

I claim that in such a case we can imagine running a similar process to neural replacement, where all the functionally relevant parts at some finer-graining than neurons are replaced with identical circuits; for example, the structures inside cells responsible for their metabolism. If, in response, someone were to argue that, in fact, lower level structures matter too, we can imagine a finer grained replacement as well. Indeed, Chalmers encourages us to consider this case in his paper, even though most responses in the literature focus primarily on the version of the argument which takes place at the neural level.

    We might even run the experiment at a finer grain within the neuron, so that ultimately the replacement of a few molecules produces a sudden disappearance of experience. As always in these matters, the hypothesis cannot be disproved, but its antecedent plausibility is very low.

One might worry that the process becomes impossible at arbitrarily fine grainings. Note, however, that physical scale specifically is not the issue here. If it were, the concern would be that we hit a scale so small that no replacement is physically possible: for example, if we reach a fine-graining which depends on the specific motions of carbon molecules, there is just no way to physically implement a functionally identical circuit in the same amount of space. But this worry does not affect the argument, since there is no requirement that the circuits we build be no bigger than the structures they replace. I do grant that speed may be an issue, though: we would need to assume that whichever circuit we build can "keep up" with the other parts of the brain it communicates with, for the precise intermediate stages the argument describes to remain sound.

I should also note that, while I do think considerations like these might matter, in general the fading qualia argument does not depend on an assumption that we have the resources or expertise to actually carry out such replacements. The argument is not a recipe for building conscious minds; rather, it is an argument that, since we are conscious, certain other kinds of minds that share our structural organisation would be conscious too.

If fading qualia arguments need to be run at finer grainings than neurons, what does this show? I claim that:

  1. These finer grained versions of the argument would still provide good evidence in general for computational functionalism.
  2. With current knowledge, we may struggle significantly to know in practice which fine-graining is sufficient.
  3. In particular we should not assume that preserving neural structure is sufficient for preserving phenomenal consciousness.

Some questions about this argument that do remain open for me are:

  • Does the generalised version of this argument beg the question? That is, does it require that you already accept the premise that there is some level of functional organisation on which phenomenal consciousness supervenes?
  • If we were to assume that the only fine-grainings at which this argument would work are ones where it is physically impossible to build equivalent circuits that operate as fast as the corresponding biological structures, would this show that the argument fails? This matters specifically for considering whether the argument shows that a sufficiently high-fidelity whole-brain emulation would be conscious.

  1. Specifically, nomologically supervenes. Full definitions of these terms here.

  2. In general, I've been writing a lot about how this notion of "functionally identical" is slippery and ambiguous. In this case, I think there is a charitable reading which makes sense of both sides of the ongoing debates. That reading is: "a black box which computes the same outputs for any given input in the same amount of time as the original".


Comments
  1. Julius — Nov 12, 2025:

    Admittedly I haven’t read the arguments, but I would have assumed people who argued against neural replacement were claiming neurons were doing something non-functional (whatever that might mean) which was essential to consciousness?

  2. mynamelowercase — Nov 13, 2025:

    Yes - I think people do make this claim too. However, it is important to distinguish between:

    1. Claiming that fading qualia arguments fail because neurons have some load-bearing internal structure that is essential to consciousness
    2. Claiming that fading qualia arguments fail because neurons are performing some form of hypercomputation

    It seems to me that (1) isn't a particularly strong argument if you allow for arbitrary fine-grainings, and that for (2) the burden of proof is to show that hypercomputation is possible.

    There are other options, but I think they get quite weird. For example, the Leibniz / Malebranche approach:

    The physical world we observe does not causally interact at all with our conscious experience - God simply guarantees (either by constant divine intervention, or by a divine miracle of setting everything up just right at the start of the universe) that everything operates as if it did.

    In cases like that, I guess all bets are off as to what happens in neural replacement scenarios.