Undecidability proof of the halting problem using lambda calculus

As a teaching assistant for a graduate-level “Theory of Computation” course, I don’t understand why we use Turing machines as the model for all the computability and complexity theorems. It is much easier if we just argue about functions in an ordinary programming language. One good choice of such a language is the lambda calculus.

Basically, a function in lambda calculus is equivalent to a Turing machine. The “halting” of a Turing machine is equivalent to the corresponding lambda calculus term reducing to its “normal form”. From this viewpoint, I came up with the following undecidability proof of the halting problem using the lambda calculus.
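
As a quick, concrete illustration of that correspondence (my own example, not part of the proof), consider the term Ω = (λx. x x)(λx. x x), which has no normal form. A direct Python transcription never produces a value either:

# Ω = (λx. x x)(λx. x x) has no normal form: its beta reduction never ends.
# Python is a strict, stack-based evaluator, so the same self-application
# does not literally loop forever but dies with a RecursionError, the
# practical analogue of "never reaches a value".
omega = lambda: (lambda x: x(x))(lambda x: x(x))
# omega()  # raises RecursionError instead of returning a result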

Proof. Suppose that we have a function Halting(M,w), a hypothetical solver for the halting problem. Halting takes two arguments, a function M and its input w, and it tells you whether or not M(w) will output True. Note that although M may go into infinite loops, our magical solver Halting never will: Halting always returns True or False within a finite amount of time.

Now we show a simple contradiction, which proves that the magical solver Halting cannot really exist. The question is: does the following expression E return True or False?

E = Halting(λm.not(Halting(m,m)), λm.not(Halting(m,m)))

It turns out that this question cannot be answered. Suppose E returns True. By the definition of Halting, that means applying the function λm.not(Halting(m,m)) to the argument λm.not(Halting(m,m)) outputs True. But if we actually perform that application, we get

not(Halting(λm.not(Halting(m,m)), λm.not(Halting(m,m))))

Alas, this is exactly the negation of the original expression E, so the application actually outputs not(True) = False, contradicting what E told us. Symmetrically, if E returns False, the application outputs not(False) = True, again contradicting E. Either way we have a contradiction (or call it a “paradox” if you like), which shows that the halting problem solver Halting cannot exist, which means that the halting problem cannot be solved.

QED.
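
To make the construction above concrete, here is a minimal Python sketch of the same argument. The names halting and d are mine, and the body of halting is a placeholder; the whole point of the proof is that no such body can be written.

# Hypothetical solver: halting(m, w) is assumed to answer, in finite
# time, whether m(w) outputs True.  Illustrative only.
def halting(m, w):
    raise NotImplementedError("hypothetical solver; cannot exist")

# The diagonal function D = λm. not(Halting(m, m))
def d(m):
    return not halting(m, m)

# E = Halting(D, D).  If halting were real, E would tell us what d(d)
# outputs; but evaluating d(d) yields not(halting(d, d)) = not(E),
# i.e. the opposite answer.  Either value of E contradicts itself.
# E = halting(d, d)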

To connect back to the usual terminology of computing theory (as in Sipser’s classic textbook), the functions Halting and M correspond to two Turing machines, one of which takes the other as input. True corresponds to a Turing machine reaching its accept state, and False corresponds to reaching the reject state.

I’m surprised how much shorter this proof is compared to [Church 1935, 1936] and [Turing 1936]. What I actually did in the proof was just make a loop in a circuit. Almost every logical paradox has a loop involved, and it is pretty much the same kind of loop: the Möbius strip. Every logic question expects the answer to be one of two sides, either True or False, but the Möbius strip has only one side!

If you want to learn more about related topics, a nice book by Neil Jones shows that you can prove every theorem in the theory of computation (including both computability and complexity results) using a simple programming language. It’s not really necessary to use Turing machines for the theory of computation; they make things awkward and complicated.

Now, a fun exercise may be: what is the connection between the above expression E and the Y combinator?

λf.(λx.f(x x)) (λx.f(x x))
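
For readers who want to play with this, here is a hedged Python sketch of the same self-application pattern. Since Python evaluates arguments eagerly, a literal transcription of the Y combinator above would recurse forever, so the sketch uses its eta-expanded, call-by-value variant (often called the Z combinator); the names Z and fact are mine.

# Z = λf.(λx.f(λv.(x x) v)) (λx.f(λv.(x x) v)), the call-by-value
# cousin of the Y combinator shown above.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Example: factorial defined without naming itself, via the fixed point.
fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
print(fact(5))  # prints 120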
