Drawn In Perspective

Maybe to understand function you need to understand malfunction

Imagine you press the following keys on an old-style calculator (the kind that displays only one input or output at a time):

  • 2
  • +
  • 2
  • =

You are using your calculator to compute the output of some function.

[Image: a vintage Casio fx-82 scientific calculator]

Now imagine your calculator has a bug where sometimes, when you press the "+" button, it reads that as a press of the "×" button instead. Your calculator is clearly malfunctioning; however, you would not know it from having carried out the above computation, since 2 + 2 and 2 × 2 give the same answer. You would need to try a variety of other inputs to find this out.
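To make the bug concrete, here is a toy Python sketch (the names `faulty_calc` and `glitch_probability` are mine, purely for illustration, not any real calculator's firmware) of a calculator whose "+" key is sometimes read as "×":

```python
import random

def faulty_calc(a, op, b, glitch_probability=0.5):
    """Evaluate a two-operand key sequence, sometimes misreading '+' as 'x'."""
    if op == "+" and random.random() < glitch_probability:
        op = "x"  # the fault: a '+' keypress registers as multiplication
    if op == "+":
        return a + b
    if op == "x":
        return a * b
    raise ValueError(f"unknown key: {op}")

print(faulty_calc(2, "+", 2))  # always 4, since 2 + 2 == 2 * 2: the fault is hidden here
print(faulty_calc(2, "+", 3))  # 5 or 6, depending on whether the fault fires
```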

Some bugs in such a calculator might even be invisible no matter how many inputs you try. For example, a calculator might read each press of the "+" button as the sequence "+0+"; that is, asked to compute 2+3, it actually calculates 2+0+3. Since adding zero never changes the result, this malfunction would be completely invisible across all possible inputs. I would still consider this a case of malfunction, though some people who have written about these cases seem to prefer the term "miscomputation".
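Here is a similar hypothetical sketch (again, the function names are mine) of why no input could expose this second bug: a correct adder and the "+0+" miscomputing adder agree on every input we test.

```python
def correct_add(a, b):
    return a + b

def miscomputing_add(a, b):
    # the fault: every '+' keypress is read as '+0+', so 2+3 becomes 2+0+3
    return a + 0 + b

# The two calculators agree on every input tried, even though one of them is,
# internally, doing the wrong thing.
assert all(correct_add(a, b) == miscomputing_add(a, b)
           for a in range(-100, 101)
           for b in range(-100, 101))
print("no tested input distinguishes the two calculators")
```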

I think these kinds of examples can help us better understand the conditions under which two computers might or might not count as the same.

The main papers I've been reading on this topic are this Fresco & Primiero paper on "Miscomputation" and the Piccinini paper on Computing Mechanisms which it cites.

Both papers discuss the concept of "miscomputation" at various levels of abstraction. The first paper presents a very detailed taxonomy, but broadly the main kinds of miscomputation discussed are:

  • Malfunctions of the physical operation of a given computing machine
  • Programs which are designed or implemented in ways that fail to achieve their intended teleological function

One puzzle for the implementation debate, which both papers note, is that any notion of implementation needs to handle both kinds of miscomputation properly. Both papers also note that cases of miscomputation are often overlooked by other parts of the literature on these topics.

However, one thing I am wondering is whether both papers could go further. It might be that our notion of malfunction isn't just a puzzle for accounts of implementation to solve. It could in fact be doing a lot of the work to make our intuitive notion of implementation work at all. In particular, in order for a physical system to implement an abstract program, the physical system needs to display some kind of normal lawlike behaviour, and that concept of normal behaviour needs to be distinguished from abnormal / malfunctioning behaviour.

If this crucial process of distinguishing "normal" from "malfunctioning" behaviour of a physical system turns out to rely on a mind-dependent or constructed sense of "normal", it would imply that questions of implementation are more relative to an observer, or community of observers, than we might assume. On the other hand, if there is some canonical, mind-independent way to draw this distinction, this might be all that is required to pin down a definition of implementation which overcomes triviality objections.

I am optimistic that there must be some theory closer to the second (mind-independent) kind; however, theories of the first (mind-dependent) kind have an interesting implication which I thought was worth noting.

That implication would be that the mathematical and logical structure demonstrated by computing machines is a structure that humans project onto those machines. In particular, we project the conditions for the machine's possible behaviours as part of our definition of what its normal operation looks like. Any deviation from this definition would be dismissed as "malfunction".[1]

I'm not entirely sure this kind of view can be made to work; in particular, computers seem able to surprise us with their outputs in ways that aren't quite compatible with this kind of account. I do, however, think something like this would help a neo-Kantian philosophy of mathematics explain how computers are able to automate synthetic a priori mathematical reasoning.


  [1] I've been having trouble expressing this thought clearly; this is my current best attempt.
