A collage featuring a vintage illustration of a woman’s head mapped with labeled sections resembling a phrenology chart. The mapped sections are overlaid by a neural network diagram depicting crisscrossing black lines. Two anonymous hands extend from the left side, pulling on two wires from the diagram. In the background is a panel of a Turing machine with numerous knobs and switches, highlighting a connection between the history of computing, psychology, biology, and artificial intelligence.

While I expected studying Philosophy as an undergraduate to be personally useful and indirectly useful to my professional career, I didn’t foresee how relevant it would be to our increasingly AI-infused world.

In this post, Matt Mandel, after “[coming] to realize that using LLMs is pushing all of us to more closely examine our philosophical assumptions”, has sketched out “a rough attempt at laying out what in philosophy is most relevant for AI.”

I’ve taken the list — covering things with which I’m familiar and those I’d like to follow up on — and added links. The “What is agency?” section is particularly timely, as I’ve been thinking about this a lot recently but need to do more reading.

Part 1: Philosophical Concepts

Part 2: How should we build AI?

For Part 1, it’s definitely worth adding Turing’s foundational 1950 paper, “Computing Machinery and Intelligence”, to the What is a mind? section. It’s the paper that asks “can machines think?”, introduces the “Turing test”, and directly sets up the questions that Searle and Nagel are responding to.

I’d also add a section entitled What is knowledge? and include:

  • Dreyfus, What Computers Can’t Do — argues that embodied, situated knowledge resists formalisation.
  • Wittgenstein, Philosophical Investigations — not the easiest of reads, but it’s where we get notions of the impossibility of a private language and the difficulty of defining terms such as “game”.

I had to ask Perplexity for help with Part 2. I haven’t read any of these but apparently they’re relevant additions:

  • How should we build AI?
    • Vallor, Technology and the Virtues — A virtue-ethics framework for emerging technology which Perplexity describes as “arguably the most important contemporary work bridging classical ethics and AI practice.” The original list focuses on alignment as a technical problem, whereas Vallor asks what kind of people we need to be to build good technology.
  • How might AI impact human flourishing?
    • Crawford, Atlas of AI — I’ve had this book on my shelf for a while but haven’t read it yet. It’s a materialist analysis of AI’s supply chains, labour exploitation, and environmental costs, which sounded too depressing for me to read last year.
  • What does AI mean for how we know things?
    • Floridi, The Ethics of Artificial Intelligence — A framework of five principles for ethical AI (beneficence, non-maleficence, autonomy, justice, and explicability) which apparently is “the standard reference in AI governance circles.”
    • Nguyen, Games: Agency as Art — Explores how gamification narrows our values into simplified metrics, and is therefore relevant to AI alignment. If we have to specify values precisely enough for machines to optimise, do we inevitably impoverish them? (There’s a toy sketch of this worry just after this list.)
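
To make that last question concrete, here is a minimal, purely illustrative Python sketch of the metric-flattening worry (often discussed as Goodhart’s law): optimise a simple proxy for a richer value and you can end up with something worthless by the standard you actually cared about. The true_value and proxy_metric functions and the length-based scoring are hypothetical, invented for this sketch rather than taken from Nguyen or any other book above.

    # A rich objective (true_value) gets replaced by a simple,
    # optimisable proxy (proxy_metric); optimising the proxy hard
    # makes things worse by the richer standard. All names here are
    # hypothetical, invented for this sketch.

    # Suppose the "true" value of an answer peaks at a moderate length
    # and declines as the answer gets bloated.
    def true_value(length: int) -> float:
        return length * max(0.0, 1.0 - length / 200.0)  # peaks at 100

    # The proxy we can actually measure and optimise: raw length.
    def proxy_metric(length: int) -> float:
        return float(length)

    candidates = range(1, 1000)
    best_by_proxy = max(candidates, key=proxy_metric)
    best_by_value = max(candidates, key=true_value)

    print(f"proxy picks {best_by_proxy} words; true value there: "
          f"{true_value(best_by_proxy):.1f}")   # 999 words, value 0.0
    print(f"true optimum is {best_by_value} words; true value there: "
          f"{true_value(best_by_value):.1f}")   # 100 words, value 50.0

    # The proxy optimiser chooses the longest possible answer, whose
    # true value is zero: specifying the value as a single number has
    # "impoverished" it.

It’s a cartoon example, of course, but it has the same structure as the reward-hacking failures that come up in AI alignment discussions.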

Source: Substack Notes

Image: Hanna Barakat