Monthly Archives: April 2012

Propositions as programs

The Curry-Howard correspondence says that propositions are types, and proofs are programs. I had been wondering if there is a simpler way to think about it, so I came up with this:

Propositions are programs; proofs are “abstract interpretation” of them to True.

Here True is a meta-level truth value. A theorem prover is then an “abstract interpreter” that tries to evaluate the proposition to True. The computational process that derives True from the proposition is called a judgment. I may be stretching the concept “abstract interpretation” here. I chose this term just because the two words “abstract” and “interpretation” convey the general idea I hope to capture, but the term as I use it may not mean what you usually take it to mean. I could have used “supercompilation”, but I don’t think that word conveys the concept very well.

Anyway, this is a special kind of evaluation. It may be partial evaluation, supercompilation, or some sort of static analysis, whatever you call it. It can also take human input interactively, as long as it is semantics-preserving: it returns True whenever an actual interpreter would return True. But it is a lot more efficient, because unlike a normal interpreter it takes shortcuts (e.g. induction hypotheses). If it thus evaluates the proposition to True, then the proposition is proved. This is much like fortune-telling: it immediately tells you the result that an actual interpreter would eventually give you. It has a limited number of basic “reduction operations” such as unfolding, induction and rewriting, so we can record the reduction sequences as “proofs” and verify them later on.
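To make this concrete, here is a minimal sketch in Scheme (the name prop is hypothetical), reading the proposition forall n, n + 0 = n as a program with a boolean return type:

;; The proposition "forall n : nat, n + 0 = n", read as a boolean program.
(define (prop n)
  (= (+ n 0) n))

;; An ordinary interpreter can only test it pointwise:
(prop 0)   ; => #t
(prop 42)  ; => #t

;; The "abstract interpreter" takes a shortcut instead: by induction on n
;; it concludes that (prop n) evaluates to #t for every n, and records the
;; induction steps as the proof.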

This seems to match what we have been doing with proof assistants like Coq, and it may be a more natural way of thinking about this kind of theorem proving. I feel that it is a little far-fetched to call the propositions in Coq the “types” of the proof terms (note: proof terms, not to be confused with tactics). For example, we can have a proof term like

fun (n' : nat) (IHn' : n' + 0 = n') => ...

Is n' + 0 = n' the type of IHn'? Well, you can call it whatever you like, but calling it a “type” doesn’t feel natural at all. What we want is just a way to bind an equation to a name, so that we can refer to it. It feels better if we just think of the propositions as programs with boolean return types, and of the proof terms as reduction sequences of those programs to True. If you take a look at Coq’s proof terms, you may find that this is the case. For example, take a look at this simple theorem and the tactics that prove it:

Theorem plus_0_r : forall n:nat, n + 0 = n.
Proof.
  intros n. induction n as [| n'].
  reflexivity.
  simpl. rewrite -> IHn'. reflexivity.  Qed.

They produce the following proof term:

plus_0_r = 
fun n : nat =>
nat_ind (fun n0 : nat => n0 + 0 = n0) eq_refl
  (fun (n' : nat) (IHn' : n' + 0 = n') =>
   eq_ind_r (fun n0 : nat => S n0 = S n') eq_refl IHn') n
     : forall n : nat, n + 0 = n

You may think of this proof term as a program with the theorem as its type, but you can also think of it as a reduction of the program n + 0 = n to True, where n is a natural number. It is saying: “Do an induction where the first branch executes an equivalence judgment, and the other branch does an unfolding, a rewriting using the induction hypothesis, and an equivalence judgment.” I think the second way of thinking of it is much more natural.

This interpretation can also explain the operation of Boyer-Moore style theorem provers (e.g. ACL2), as this is almost exactly how they work. They just don’t normally output a machine-verifiable reduction sequence.

You may have noticed an important difference from the original Curry-Howard correspondence here:

Proofs are no longer considered programs.

At least proofs are not object-level programs. They are the “operation histories” in the abstract interpreter, which are at the meta-level. We can give this operation history to an independent verifier, so that it can “replay” the abstract interpretation and verify the correctness of the proof.
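As a Scheme sketch (the representation is hypothetical, not Coq’s actual proof format), such an operation history could be recorded as plain data for an independent verifier to replay:

;; A hypothetical record of the abstract interpreter's operations for
;; plus_0_r. A verifier would replay each step and check it.
(define plus-0-r-proof
  '(induction n
     (base (reflexivity))     ; 0 + 0 = 0
     (step (simpl)            ; S n' + 0   ~>  S (n' + 0)
           (rewrite IHn')     ; S (n' + 0) ~>  S n', using IHn' : n' + 0 = n'
           (reflexivity))))   ; S n' = S n'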

Alternatively, if you consider the verifier as an interpreter, then proofs are its input programs. In this sense you can also say that proofs are programs for the proof verifier, and both propositions and proofs can be considered programs. But they are programs at two different semantic levels: the proofs are at a higher level than the propositions. This is very different from Curry-Howard.

Thus we have arrived at a unified way of thinking about the two very different styles of theorem proving: Curry-Howard correspondence (as in Coq), and Boyer-Moore (as in ACL2).

 

On software design patterns

In this post I try to answer the controversial question

Are software design patterns good or bad?

To understand this question, just look at the following two pictures.

  1. [image] Here are some nice design patterns. You can use them to decorate the floors in your house.
  2. [image] Here is a masterpiece. Can you construct it using the above patterns, or any patterns at all?

I think the answer is already obvious. One cannot produce a masterpiece by just putting patterns together. Artists know that technique is never as important as vision. That is why they spend their lives studying the nature of things such as the human body. Once they grasp the essence, they use whatever techniques they have at hand to express what they want. They can also change the techniques, or invent new ones, as needed.

The same holds for computer programming. We need to keep in mind that design patterns are means and not ends. True programmers are never constrained by existing patterns. They observe and grasp the essence of things, inventing new techniques and patterns as needed.

 
 

Concurrent stack does not exist

Some weeks ago I read this thought-provoking article by Nir Shavit, Data Structures in the Multicore Age. The topic is the efficiency of concurrent data structures. The reading set off a lot of thinking, but the most interesting point to me is that the author started out trying to use a “concurrent stack”, but ended up happily using a pool.

“In the end, we gave up the stack’s LIFO ordering in the name of performance.”

This raises several interesting questions. Can we give up a property of a data structure when that property is crucial to the correctness of the program? If we can give up the LIFO ordering, why did we think we needed a stack in the first place? But the most interesting question is probably:

Does a “concurrent stack” really exist?

I think the answer is No — a “concurrent stack” hasn’t ever existed. Here is why:

  1. If the threads are allowed to push and pop concurrently, then the “stack” can’t really maintain a LIFO order, because the order of the “ins” and “outs” is then indeterminate and contingent on the relative execution speeds of the threads.
  2. If the executions of two threads strictly interleave, so that each time a “pusher” pushes an element a “popper” immediately pops it out, then this is a FIFO order, i.e. a queue (see the sketch after this list).
  3. If a pusher pushes all the elements before a popper starts to pop all of them out, then this is indeed a LIFO order, a stack. But notice that there is no concurrency in this case — the executions of the threads must be completely sequential, one after another.
  4. If two threads interleave randomly, or there are more than two threads accessing the “stack” at the same time, then nothing can be said about the access order.
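Here is a minimal Scheme sketch of case 2 (the names are hypothetical). We simulate a stack under a strictly alternating schedule; even though every operation is an honest LIFO push or pop, the elements come out in FIFO order:

;; Simulate a stack accessed under a strictly alternating schedule: each
;; push by the pusher is immediately followed by a pop by the popper.
;; The stack is a plain list whose car is the top.
(define (simulate-alternating xs)
  (let loop ((xs xs) (stack '()) (popped '()))
    (if (null? xs)
        (reverse popped)
        (let* ((stack1 (cons (car xs) stack))  ; pusher pushes
               (top (car stack1)))             ; popper pops at once
          (loop (cdr xs) (cdr stack1) (cons top popped))))))

(simulate-alternating '(a b c d))  ; => (a b c d), the push order: a queue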

From the above, we can see that there is a fundamental conflict between the two notions, “concurrency” and a “stack”.

If a “stack” can be accessed concurrently, then there is no way we can maintain a LIFO order. On the other hand, if we enforce a LIFO order, then the stack cannot be accessed concurrently.

Once we realize that the essence of a stack is a continuation, and that a continuation (by itself) is sequential, it is no surprise that we arrive at this conclusion.

Since a LIFO order is essential to a stack, we can’t call the data structure a “stack” if we can’t maintain a LIFO order.

We can’t call a data structure a “stack” just because it has the methods named “push” and “pop” — we have to look at what the methods actually do.

Even if we continue to think that we are using a stack, the threads are in effect just distributing messages, with the operations “push = send” and “pop = receive”. So in essence this data structure is a pool. This exactly justifies the author’s relaxation to a pool, although no actual relaxation happens: the data structure has been a pool from the beginning. It was just disguised as a stack, and a less efficient one at that.

So we see that the concept of a “concurrent stack” does not really exist. We also see that some data structures we have in the sequential world may not have concurrent counterparts.

 

Posted on April 17, 2012 in concurrency, data structures, reviews

 

How to reinvent the Y combinator

I don’t believe in the doctrine “Don’t reinvent the wheel”. I believe that one can never learn the essence of anything without reinventing it (intentionally or not). Once you reinvent a thing, you can never forget it, because if you ever do, you can just reinvent it again.

Today I found that I had forgotten how to derive the definition of the Y combinator. I learned it several years ago from an online article, but now the search term “Y combinator” only brings up news about startups (sigh). I tried for two hours but still couldn’t make the leap from the “poor man’s Y” to the real Y. Finally, I opened my good old friend The Little Schemer. Alas, Chapter 9 tells me exactly how to reinvent Y. Thank you Dan Friedman and Matthias Felleisen!

To keep myself from forgetting how to derive Y again, I made a set of slides recording my understanding of it. It takes a slightly different route from the one in the recent version of The Little Schemer, but I think it is easier to understand. I hope it can help some people (including the future me). So here it is.
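For reference, here is where the derivation ends up: the applicative-order Y combinator in Scheme. (The fact example below is my own usage sketch, not part of the slides.)

;; The applicative-order Y combinator. The self-application (x x) is
;; eta-expanded so that the definition terminates under call-by-value.
(define Y
  (lambda (f)
    ((lambda (x) (f (lambda (v) ((x x) v))))
     (lambda (x) (f (lambda (v) ((x x) v)))))))

;; Usage: factorial defined without referring to itself by name.
(define fact
  (Y (lambda (self)
       (lambda (n)
         (if (zero? n) 1 (* n (self (sub1 n))))))))

(fact 5)  ; => 120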

Exercise: the Y combinator derived in this tutorial only works for direct recursion. Try to derive a Y combinator for mutually recursive functions, for example the following two functions even and odd.

;; even and odd are defined by mutual recursion: each recurs through the other.
(define even
  (lambda (x)
    (cond
     [(zero? x) #t]
     [(= 1 x) #f]
     [else (odd (sub1 x))])))

(define odd
  (lambda (x)
    (cond
     [(zero? x) #f]
     [(= 1 x) #t]
     [else (even (sub1 x))])))
 

Posted on April 9, 2012 in lambda calculus, programming languages, semantics, tutorials