What have I been blogging about?
A lot of my blog posts this year have been on relatively specific, object-level questions in the history and philosophy of computer science or the philosophy of mind. I try to write these so that they'll be of general interest and use to readers whether or not they agree with my overall tastes. Where possible I try to communicate one useful idea or approach for thinking about the world per blog post.
At the same time, a few people following along with my blogging spree this month have asked me variations of the same questions: "Why are you interested in all these topics? Is there an overall theory or position you are building towards expressing?" The answer to both questions is "yes, sort of".
"Yes", because I am motivated to write these posts because I think they are all relevant to the evaluative philosophy of minds, especially questions relating to artificial minds. "Sort of", because there is no single "grand theory" I have to communicate, just a series of object-level views which add up to a way of seeing the world that has fairly stably guided my own decision making about questions relating to the philosophy of AI for about the last five years.
Why am I blogging about it?
I'm writing these views out for a couple of reasons. Firstly, because people have asked me to. I'll often get into philosophical discussions about object-level scientific questions, like which research directions to pursue, and I'll start talking through technical philosophical topics to explain my answers. At some point people's eyes will glaze over a little and they'll ask, "could you write this down and send it to me?"... and this blog is the result of several false starts in trying to carry out that exercise. Secondly, because I think I am likely wrong about a few of the positions I hold, either in terms of understanding the original arguments I appeal to, or their applications to the problems at hand, in which case I'd like people to point that out to me!
You've been reading it! Hello :) Also, thank you <3
All the same, I'm somewhat flattered to discover that I now have regular readers (hi everyone! thank you so much for reading along so far) and so writing this blog has become a much less lonely activity. I've also been incredibly grateful to discover that people haven't just been reading along passively. I've had people reach out to help me transcribe 350-year-old letters, to introduce me to cool new topics like second-order cybernetics, to challenge my ideas on whether the "hard" sciences really are so unfriendly to DIY tinkering as I first imagined, and even to volunteer to help me write new questions for general knowledge poker.
As a courtesy to all these cool people actually taking the time to read and engage with what I've been writing, I figured it would be nice to step back and say a bit more about the general themes I'm writing about, and why I am writing about them. I'll reference previous blog posts where I've already covered a topic, but also post external links to further reading. Maybe if you are reading this you'd like to explore these topics ahead of my posts about them. I'd also really welcome people reaching out to suggest related things I might like to read, and especially sending me their own writing if it's on related topics.
Support me to write more
I also wanted to mention here that you can subscribe to receive emails every time I publish a new post. If you do sign up, your email only gets used to deliver posts I write. If you have friends who you think would enjoy my writing, I'd also really appreciate it if you share my work with them too. This might be a good first post to share with them to give them an idea of what the rest of the blog is like.
I find it incredibly motivating whenever someone new subscribes. I find it even more motivating whenever someone reaches out to share some thoughts with me about a post they read. Doing either of these things will almost certainly encourage me to write more content, and the latter will help me shape it to be better.
An overview of topics related to minds and artificial minds, and why I am interested in them
I think that minds are some of the most fascinating and valuable things that exist in this universe. The title of this blog, "Drawn In Perspective" and its tagline, "The stars may be large, but they cannot think or love.", are both from a quote by the philosopher and mathematician Frank Ramsey expressing this sentiment, and possibly offending some astronomers and cosmologists in the process.
I don't feel the least humble before the vastness of the heavens. The stars may be large, but they cannot think or love; and these are qualities which impress me far more than size does. My picture of the world is drawn in perspective, and not like a model to scale. The foreground is occupied by human beings and the stars are all as small as threepenny bits.

When we redraw the world in this way the heavens are still out there to contemplate. Indeed, understanding their motions is essential to understanding how the world works. What motivates me personally to understand how the world works is the hope that we can make it a better place for the beings who live here.
As a value system I think this amounts to a kind of romantic, naturalist, utopianism. The core idea is that human labour should be directed towards the improvement of life in this world, judged from the point of view of the individuals who lead those lives. I wrote a bit about the historical roots of this idea in my blog post on Kant's perpetual peace, and a bit about some of the ways the concept of "improving life" breaks down in my blog post on housecats.
I think there are some powerful critiques of this worldview out there. I've been influenced in the past by Adorno, Horkheimer and Marcuse[1] to think about the ways in which second-order effects of this kind of utopian striving can end up flattening out human culture and creativity, and ultimately disempower individuals in favour of institutions and machines. I touched on some of these ideas in my blog post on Ove Arup's "Computer Christening Speech".
I'm personally quite impressed by how prescient some of the writings of Foucault[2] and Nietzsche[3] were in providing tools for making sense of these effects. Freud also writes about similar themes in Civilization and Its Discontents, but I personally did not get as much from reading it as from the other writers mentioned above. The specific tools in question look at the ways in which social forces can result in organisations, cultures, or even a whole society marching on relentlessly towards increasingly non-human ends, with nobody really at the wheel.
There's been a recent and very influential paper titled "Gradual Disempowerment" discussing similar effects in the age of artificial intelligence. In particular, the authors of that paper describe worlds where a series of connected social pressures result in humans giving up control to AI systems, and these systems perhaps even end up pushing society to replace human lives with artificial lives on the margin.
I also write about the science fiction genre of cyberpunk, not because I think it makes useful predictions, or even because it introduces new useful concepts, but because as a genre of 20th century fiction it captures very well the zeitgeist of anxiety felt in the face of these kinds of forces. Cyberpunk narratives are about tensions that emerge between totalizing systems where progress has become disconnected from human flourishing, and the all-too-human individuals who notice this and refuse to fit into such a world. I'm especially interested in the way these narratives shaped and were shaped by "hacker culture" in the late 20th century. I've written a post about the virtues of this culture, and how it can channel cyberpunk anxieties in ways which can help drive innovation. I've also written about some of the ways it can break down, and about searching for its roots in the early days of the European enlightenment. Inspired by some of this I've also tried my hand at some DIY replications of experiments outside the field of computer science, but which are still relevant to my overall interests.
One of the hallmarks of cyberpunk as a genre is that it is dystopian. There is no rule that this has to be the case, but in the hands of an author dedicated to writing stories that feel true, any traces of optimism tend to dissolve into a backdrop of cynical resignation. I think this is because, in the limit, collective action is always more powerful than individual action, and cyberpunk stories pit individuals against collectives. All the same, I personally read cyberpunk as a fundamentally hopeful genre. What the genre expresses is that we can hope for better worlds, ones where the choice is not "good individual" versus "evil collective", but where there are better collectives in the first place. I think the real world is in many ways still much closer to one which validates this hope than one which crushes it.
I think that questions about collective agency and questions about moral progress go hand in hand. For a start, I think that the correct way to make sense of moral progress is by starting along the lines of the pragmatist philosophy of writers like C.S. Peirce and Frank Ramsey[4]. In Ramsey's words:
The essence of pragmatism I take to be this, that the meaning of a sentence is to be defined by reference to the actions to which asserting it would lead, or, more vaguely still, by its possible causes and effects
However, I think that taking pragmatism seriously motivates an account of human language and concepts that eventually leads you to conclude that a lot of our mental contents, like "beliefs" and perhaps even "desires", are not confined solely to our heads. Instead such mental contents are a combination of what goes on in our heads and the dynamics of the overall society with which we interact. This position is described as semantic externalism.
A natural next step, closely related to the intersection of pragmatism and semantic externalism, is the extended mind hypothesis and the concept of a "cyborg". This, entertainingly, brings us back to cyberpunk fiction. The image we normally associate with the term, of a grotesque human/machine tissue hybrid, is imported from these sci-fi stories. However, the term "cyborg" predates those stories and comes from the technical study of cybernetics. The term originally described a shift in worldview wherein we understand the homeostatic relationships that arise between humans and machines as they are situated within broader environments. The philosophical concept of an extended mind helps make this idea formal, arguing that for many kinds of mental phenomena we cannot draw sharp boundaries between where our minds end and where the rest of the world, including the machines we use, begins.
Donna Haraway's A Cyborg Manifesto embraces this worldview as a way to motivate social change. By blurring the boundaries between humans, their technologies and their broader societies, new forms of cooperation are made possible, while avoiding essentialising narratives about social roles.
Cyborgs as a concept also appear in transhumanist literature about ways in which humanity might avoid gradual disempowerment by AI. The hope, presumably, is that by using machines to uplift human agency we can keep humans empowered, relevant, and productive.

However, the cyberpunk portrayal of a cyborg is rarely one of harmony. The boundaries between machine and flesh are often exaggerated and grotesque. I think this is a good symbolic representation of both a societal and a metaphysical worry.
The societal worry is that the more we let our guards down and make technology a part of us, the less like ourselves we become. Allowing a technology to become a part of our extended minds, or, in futuristic scenarios, even grafting it onto our flesh, serves to make the technology a part of us, but is no guarantee that the technology will become good for us. These worries are all too present if you observe the ways in which we can become glued to our smartphones, or worse, if you watch the way a gambling addict becomes one with a slot machine.
The metaphysical worry is that no human/machine merger ever seems totally complete. There is always some part of ourselves that seems to remain "natural", "core" or "inside", and which is distinct from the "outside". This goes back to an issue with the original idea of semantic externalism: while some mental contents are not confined to our heads, some of the most important ones are, and those are the ones relating to phenomenal consciousness. David Chalmers presents a revised account of semantic externalism that accounts for this in his version of Two-Dimensional Semantics.
If you accept that phenomenal consciousness is a real and valuable thing worth preserving, you are also forced to accept that most forms of cyborgism can only go so far. What's more, you might want to give a scientific account of phenomenal consciousness so that we can make sure to preserve its presence and valence in the world, or at least design AI systems which support this.
The accounts of phenomenal consciousness I am most compelled by are ones which adopt nonreductive forms of computational functionalism. I find these accounts compelling for two reasons. On the one hand, absent qualia arguments make a good case that there is an explanatory gap between the accounts of brain function which physics gives and the internal phenomenal feel of conscious experience, ruling out reductive physicalist accounts. On the other hand, fading qualia arguments make a good case that it is the process implemented in causal structure, rather than the specific substance of brains, which makes phenomenal consciousness possible, motivating computational functionalist accounts. I've written some fairly narrow technical posts on both of these types of arguments, though neither is a particularly good introduction to the topic.
Unfortunately I think that the term "function" as it appears in "computational functionalism", is slippery and ambiguous, and I am part way through a series of posts arguing that contemporary philosophy of computer science is currently not sufficiently sophisticated to support this position. Ultimately I think the crux of the issue is solvable but it comes down to threading the needle between two especially complex topics: on the one hand the Kripke-Wittgenstein rule-following paradox, and on the other hand, the foundations of mathematical logic, and how computation relates to the physical world.
I've written some blog posts on definite descriptions and the problem of universals, several blog posts on Kant's and Leibniz's philosophy of mathematics, and on Donald Davidson's anomalous monism, all of which are intended to help support this discussion. My hope is that with more reps of writing on this topic I'll get better at presenting the ideas in ways which are clear and easy to digest.
This overall position, once it is all laid out, ends up, loosely, predicting the following things:
- Agency does not require phenomenal consciousness: it's possible to have agency without being conscious. However, some forms of phenomenal consciousness may be different in kind from others, in virtue of being the conscious experiences of self-aware agents.
- The instantiation of computational functions (and therefore of minds, conceived as computational functions) is partly context dependent, partly dependent on causal structure. In particular the semantics of function instantiation and function equivalence are a combination of local causal structure and the external context in which the structure is deployed.
- However phenomenal consciousness is not context dependent. In particular, if it depends on causal structure at all, it depends only on causal structure and not on broader contextual facts like the society or environment surrounding that structure.
- At some levels of complexity, agents become able to model themselves and, as a result, to self-correct their behaviour based on external context towards normative ideals.
- As a result, norms and rules for behaviour require broader context in order to be interpreted, and likely behave in ways which are anomalous and cannot easily be reduced to strict lawlike analysis.
- These contextual processes explain value change and moral progress (but not phenomenal consciousness). They are also the processes which will provide the best tools for understanding the psychology of AI systems such as large language models.
- We can't rely on these processes to lead to good outcomes, and so we need to work to understand them better in order to be able to shape them deliberately.
1. In particular Dialectic of Enlightenment and One Dimensional Man.
2. Most of the time when I talk about Foucault I'm talking about The History of Sexuality and Discipline and Punish. However, I often quote from a variety of the passages that show up in Paul Rabinow's anthology The Foucault Reader.
3. I think Nietzsche's concept of the last man is especially interesting in this context. I find the Genealogy of Morality (so long as you don't take it too seriously as a work of history) helps introduce a series of additional concepts for thinking about how agents and their values can change over time. I am also influenced a lot by secondary literature about Nietzsche's moral psychology, especially e.g. page 3 of this paper, where it introduces the concept of "dynamic, sentient self-systems", and Brian Leiter's concept of a "type-fact", though I disagree with Leiter that we should think of type-facts as immutable; my view is that we should instead think of them as dynamic and evolving.
4. Simon Blackburn writes a good exposition of Peirce's pragmatism in his book "On Truth". Cheryl Misak gives a good introduction to Frank Ramsey's life and thought in her interview on the podcast "Philosophy Bites".
5. The painting featured in this post is Two Men Contemplating the Moon by Caspar David Friedrich. Image courtesy of the Met Museum.
6. The "cyborg" picture is the Sandevistan from the TV show Cyberpunk: Edgerunners.