Philosophy and Science of Learning (in progress; not final version)

INSTRUCTOR

Daniel Rothschild

ABOUT

The recent advances in AI are powered by learning algorithms that allow computers to develop abilities through training on data. These advances provide an unprecedented opportunity to study learning from a new angle: we now have powerful learning systems whose internals we can inspect, whose training we can control, and whose performance we can measure precisely. This gives us new traction on foundational questions about the nature of learning that have long resisted resolution.

This module uses machine learning as a window into those questions. It covers the foundations of machine learning from a theoretical and conceptual perspective, develops a taxonomy of modern ML paradigms, and asks what the remarkable recent successes of AI tell us about how learning works — in machines and, more cautiously, in biological minds.

MECHANICS

Classes will mostly be held on Tuesdays from 1-4pm in the seminar room (Room 101, 19 Gordon Square). There will always be a 20-30 minute break.

The module will be assessed by a 3500-word essay.

All unlinked readings are available here (ask the instructor for the password).

All staff and students are welcome at any session.

BACKGROUND READING

This class will cover quite a few topics in machine learning and a few in human learning. If you want to read or listen to a semi-popular book that covers much of the relevant background, I highly recommend Tom Griffiths’ new book, Laws of Thought, which introduces symbolic AI, neural networks, language acquisition, and Bayesianism.

Other useful resources are the various online machine learning courses, such as Andrew Ng’s machine learning course, the classic version of which is on YouTube. There are many such courses; their practical programming side will not be useful for this module, but the basic theory will be.

SCHEDULE

28 APRIL: LEARNING AS SEARCH

Learning, across all its forms, can be understood as search through a space of possible systems guided by experience — a framework broad enough to encompass Bayesian updating, standard paradigms of machine learning, and human cognitive development.

Optional: Leibniz, New Essays (selections); Rothschild, “The Scope of Bayesianism”
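
To make the search framing concrete, here is a minimal Python sketch (written for this page, not taken from the readings) of Bayesian updating as search through a hypothesis space; the candidate coin biases and the flip data are invented for illustration.

    # Bayesian updating as search: experience redistributes weight
    # over a space of hypotheses rather than eliminating them outright.
    hypotheses = [0.1, 0.3, 0.5, 0.7, 0.9]                    # candidate P(heads)
    posterior = {h: 1 / len(hypotheses) for h in hypotheses}  # uniform prior

    def update(dist, flip):
        # One step of "search": reweight each hypothesis by its likelihood.
        unnorm = {h: p * (h if flip == "H" else 1 - h) for h, p in dist.items()}
        z = sum(unnorm.values())
        return {h: p / z for h, p in unnorm.items()}

    for flip in "HHTHH":                                      # invented data
        posterior = update(posterior, flip)

    for h, p in posterior.items():
        print(f"P(bias = {h}) = {p:.3f}")

Nothing is ever ruled out here: experience merely redistributes weight over the space, which is one way of cashing out the idea of search guided by experience.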

5 MAY: LEARNING AS A COMPUTATIONAL PROCESS

The Church-Turing thesis and the theory of computation give precise content to the idea of learning as search. The key insight, however, is that universality is nearly vacuous as a constraint: what matters is efficiency and inductive bias. This session introduces neural networks and backpropagation as the answer that has actually worked. (A toy sketch follows the background reading below.)

Background: Valiant, Probably Approximately Correct; Valiant, “A Theory of the Learnable” (1984)
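
As a preview of the technical material, here is a minimal Python sketch, again invented for illustration, of learning by gradient descent: a single sigmoid neuron fit to made-up data, with the gradient written out by hand via the chain rule, which is all that backpropagation does at scale.

    # A single sigmoid neuron trained by gradient descent on invented data.
    import math

    data = [(0.0, 0.0), (1.0, 1.0), (2.0, 1.0)]   # invented (x, y) pairs
    w, b, lr = 0.0, 0.0, 0.5                      # parameters, learning rate

    def sigmoid(z):
        return 1 / (1 + math.exp(-z))

    for step in range(2000):
        grad_w = grad_b = 0.0
        for x, y in data:
            p = sigmoid(w * x + b)
            # Chain rule for d/dw of the squared error (p - y)**2:
            grad_w += 2 * (p - y) * p * (1 - p) * x
            grad_b += 2 * (p - y) * p * (1 - p)
        w -= lr * grad_w                          # step downhill in the loss
        b -= lr * grad_b

    print(f"learned w = {w:.2f}, b = {b:.2f}")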

12 MAY: A TAXONOMY OF MACHINE LEARNING

Gradient descent over parameterized functions is the unifying engine behind the apparent diversity of modern AI successes: language models, image generation, game play. This session develops a taxonomy of machine learning paradigms that reveals the underlying unity, with supervised learning as the central and most powerful paradigm. (A toy sketch follows the optional reading below.)

Optional: Smolensky, “On the Proper Treatment of Connectionism” (1988)
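
As a toy illustration of that unifying claim, the following Python sketch (invented for this page) runs the same gradient-descent engine on two tasks that differ only in where the targets come from: supplied labels versus a sequence that labels itself, as in next-token prediction.

    # One engine, two paradigms: fit y = w * x (approximately) by gradient
    # descent on squared error; only the source of the targets changes.

    def fit(pairs, steps=200, lr=0.01):
        w = 0.0
        for _ in range(steps):
            for x, y in pairs:
                w -= lr * 2 * (w * x - y) * x   # gradient of (w*x - y)**2
        return w

    # Supervised: (input, label) pairs provided from outside.
    labelled = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]
    print("supervised w:", round(fit(labelled), 2))

    # Self-supervised: the sequence labels itself; each element predicts
    # its successor, as in next-token prediction.
    seq = [1.0, 2.0, 4.0, 8.0, 16.0]
    print("self-supervised w:", round(fit(list(zip(seq, seq[1:]))), 2))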

19 MAY: REINFORCEMENT LEARNING AND MOTIVATION

Reinforcement learning is introduced technically (temporal difference learning, value functions, Deep Q-learning) before the session pivots to ask what the reward signal actually is for human learners, whether understanding itself can be intrinsically rewarding, and what kinds of values are coherent enough to specify an objective function at all. A toy sketch of the temporal difference update follows.
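
Here is a minimal Python sketch of the temporal difference idea, using an invented three-state chain environment: tabular Q-learning, the simple ancestor of Deep Q-learning.

    import random

    actions = (-1, +1)                     # move left or right along a chain
    n = 3                                  # states 0, 1, 2; exiting right pays 1
    alpha, gamma, eps = 0.5, 0.9, 0.3      # learning rate, discount, exploration
    Q = {(s, a): 0.0 for s in range(n) for a in actions}

    for episode in range(500):
        s = 0
        for step in range(100):            # cap episode length
            if random.random() < eps:
                a = random.choice(actions)
            else:
                a = max(actions, key=lambda b: Q[(s, b)])
            s2 = max(0, s + a)
            done = s2 == n
            r = 1.0 if done else 0.0
            target = r if done else r + gamma * max(Q[(s2, b)] for b in actions)
            # The TD update: nudge Q(s, a) toward the reward plus the
            # discounted estimated value of the next state.
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            if done:
                break
            s = s2

    print({k: round(v, 2) for k, v in Q.items()})

Deep Q-learning replaces the table with a neural network trained, once again, by gradient descent.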

2 JUNE: LLMS: ASSOCIATIONISM AND FAST INFERENCE

The taxonomy from previous sessions might suggest that the dominant learning mechanism in modern AI is essentially associationist: gradual, error-driven, domain-general. This session asks how slow associationist training produces systems capable of fast, flexible, apparently reasoning-like behavior at inference time, with language emerging as the key to the answer. A toy sketch of the asymmetry follows.
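
To fix ideas, here is a deliberately crude Python sketch of the slow-training, fast-inference asymmetry: a character bigram model over an invented corpus. Real language models learn by gradient descent rather than by counting, so only the shape of the two phases carries over.

    import random
    from collections import defaultdict

    corpus = "the cat sat on the mat the cat ran"   # invented corpus
    counts = defaultdict(lambda: defaultdict(int))

    # Slow phase: training accumulates associations one pair at a time.
    for a, b in zip(corpus, corpus[1:]):
        counts[a][b] += 1

    # Fast phase: inference is a cheap lookup-and-sample loop.
    def generate(start, length=20):
        out = start
        for _ in range(length):
            nxt = counts[out[-1]]
            out += random.choices(list(nxt), weights=list(nxt.values()))[0]
        return out

    print(generate("t"))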

4 JUNE, 1-3pm: STUDENT PRESENTATIONS

9 JUNE: LANGUAGE AND LEARNING

Only AI systems trained extensively on natural language exhibit powerful domain-general reasoning. This session argues that the explanation lies in language’s properties as a compression system, which makes general inference computationally tractable, with implications for the longstanding debate about the role of language in human thought. (A toy illustration follows the supplementary readings below.)

Supplementary: Fedorenko et al., “Language is Primarily a Tool for Communication Rather than Thought” (2024); Griffiths et al., “Whither Symbols in the Era of Advanced Neural Networks?”
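
As a toy illustration of the compression point (invented here, not drawn from the readings), the following Python snippet compares how well a standard compressor does on repetitive, structured text versus random bytes of the same length.

    import random
    import zlib

    structured = ("the cat sat on the mat " * 8).encode()   # invented text
    noise = bytes(random.randrange(256) for _ in range(len(structured)))

    # Structured language compresses far better than noise; compressibility
    # is a crude proxy for the exploitable regularity that makes inference
    # over language tractable.
    print("structured:", len(zlib.compress(structured)), "bytes")
    print("noise:     ", len(zlib.compress(noise)), "bytes")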


Image: Lace pattern woodcut by Isabella Catanea Parasole, 1600