
Category Archives: philosophy

Three famous quotes

These three quotes are logically related to each other. Have you figured out their logical connections? ;-)

UNIX is simple, it just takes a genius to understand its simplicity. —Dennis Ritchie

This suit of clothes is invisible to those unfit for their positions, stupid, or incompetent. —the emperor’s weavers

If you can’t explain it to a six year old, you don’t understand it yourself. —Albert Einstein


Posted on February 18, 2014 in culture, philosophy, programming, religion, wisdom

 

Programs may not be proofs

[Image: Haskell Curry]

Dear Haskell Curry and W.A. Howard,

Like many others, I have a deep appreciation of the beauty in your work, but I think your discoveries about the Curry-Howard correspondence have been taken too far by the programming languages research community. Following several of your examples showing the similarity between types and logical theorems, some researchers came to believe that all programs are just proofs and that their types are the corresponding theorems. In their own words: “Types are theorems. Programs are proofs.”

I think it is just the other way around: “Proofs are programs, but programs may not be proofs”. All proofs are programs, but only some programs are proofs. In mathematical terms, programs are a strict superset of proofs. Many programs are not proofs simply because proving things is not why they are made. They exist to serve other purposes. Some of them (such as operating systems) may not even terminate, thus failing the most important criterion of being a proof, but they are perfectly legitimate and important programs. Programs are more physical than math and logic—they are real things similar to flowers, trees, animals (or their excrement), cars, paintings, buildings, mountains, the earth and the sun. Their existence is beyond reason. Calling all of them proofs is a bit far-fetched in my opinion :-)
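
For instance, here is a minimal Haskell sketch (a made-up example, not drawn from any real system) of such a program: an event loop that is meant to run forever.

-- A tiny event loop intended to run forever, like the main loop of an
-- operating system or a server. It is a perfectly legitimate program.
eventLoop :: IO a
eventLoop = do
  event <- getLine                    -- wait for the next "event"
  putStrLn ("handled: " ++ event)     -- respond to it
  eventLoop                           -- and keep going, by design

main :: IO ()
main = eventLoop

Because it never returns, its result type can be IO a for any a; under a naive proofs-as-programs reading, such a term would “prove” anything, which is precisely why non-terminating programs fail to count as proofs, even though as programs they are doing their job.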

Sometimes we can derive theorems from programs and say something about their properties, but far too often we cannot. This is similar to the fact that sometimes we can derive mathematical theorems from the real world, but we don’t always succeed. When we fail to predict the future, we can still live it. Similarly, when math and logic fail to predict the behavior of programs, the programs may still run without problems. It would be nice if more programs were designed in ways that let us predict how they will behave, but there is a limit to how far we can see into their future. We should be thankful for this limit: if the whole future could be predicted, programs might not be worth writing and life might not be worth living.

 

Posted on January 17, 2014 in logics, philosophy, programming languages, proofs

 

On pureness

It is usually a pleasure to use a functional programming language, but it is doubtful whether we should care about the “pureness” of the language. To write programs in a completely pure functional programming language (e.g. Haskell) is much like living in a wired world.

There are no electromagnetic waves or sound waves in such a world, so you don’t have wifi, cell phones, satellite television, radios, etc. You don’t even have light or sound. In other words, everything is blind and deaf.

All information must pass through wires or pipes, connected by switch boxes which we call “monads”. You must carefully route all the wires and pipes and connect them correctly before any information-processing device, including your eyes and ears, can work properly.

So I think a language needs some parts that are not pure, in order to represent effects such as electromagnetic waves naturally. Such effects are usually called “benign effects”.
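
As a small illustration in Haskell: one effect that is often cited as benign is debug logging via Debug.Trace.trace (a real library function); the sketch below contrasts the two styles, with the other names made up for the example.

import Debug.Trace (trace)   -- an impure escape hatch for logging

-- Wired version: the logging effect is routed through the IO monad,
-- so the type of every caller changes to IO as well.
stepWired :: Int -> IO Int
stepWired x = do
  putStrLn ("stepping from " ++ show x)
  return (x + 1)

-- Benign-effect version: trace prints as a side effect while the
-- function keeps an ordinary pure type.
stepTraced :: Int -> Int
stepTraced x = trace ("stepping from " ++ show x) (x + 1)

main :: IO ()
main = do
  y <- stepWired 1       -- the effect shows up in the type
  print (stepTraced y)   -- the effect is invisible in the type

Whether such escapes deserve first-class support in the language is exactly the question of benign effects.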

 

Posted on March 31, 2013 in opinions, philosophy, programming languages, semantics

 

Propositions as programs

The Curry-Howard correspondence says that propositions are types, and proofs are programs. I had been wondering if there is a simpler way to think about it, so I came up with this:

Propositions are programs; proofs are “abstract interpretation” of them to True.

Here True is a meta-level truth value. A theorem prover is then an “abstract interpreter” which tries to evaluate the proposition to True. This computational process, which derives True from the proposition, is called a judgment. I may be overloading the term “abstract interpretation” here. I chose it just because the two words “abstract” and “interpretation” convey the general idea I hope to capture, but “abstract interpretation” here may not mean what you usually take it to mean. I could have used “supercompilation”, but I don’t think that word conveys the concept very well.

Anyway, this is a special kind of evaluation. It may be partial evaluation, supercompilation, or some sort of static analysis, whatever you call it. It can also take human input interactively, as long as it is semantics-preserving — it returns True whenever an actual interpreter would return True. But it is a lot more efficient, because unlike a normal interpreter it takes shortcuts (e.g. induction hypotheses). If the proposition thus evaluates to True, then it is proved. This is much like fortune-telling: it immediately tells you the result that an actual interpreter would eventually give you. It has a limited number of basic “reduction operations” such as unfolding, induction and rewriting, so we can record reduction sequences as “proofs” and verify them later on.
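
As a toy illustration (a Haskell sketch with made-up names, not how any particular prover represents things), take the proposition n + 0 = n written as an ordinary boolean-valued program:

-- Natural numbers, addition, and equality, defined from scratch.
data Nat = Z | S Nat

add :: Nat -> Nat -> Nat
add Z     m = m
add (S n) m = S (add n m)

eqNat :: Nat -> Nat -> Bool
eqNat Z     Z     = True
eqNat (S a) (S b) = eqNat a b
eqNat _     _     = False

-- The proposition "n + 0 = n" as a program returning Bool.
prop :: Nat -> Bool
prop n = eqNat (add n Z) n

An ordinary interpreter can only check prop at particular inputs, e.g. prop (S (S Z)) evaluates to True. The abstract interpreter takes a shortcut (induction on n) and concludes that prop returns True for every n, recording the induction as the proof.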

This view seems to match what we have been doing with proof assistants like Coq, and it may be a more natural way of thinking about this kind of theorem proving. I feel that the propositions in Coq are a little far-fetched to be called “types” of the proof terms (note: proof terms, not to be confused with tactics). For example, we can have a proof term like

fun (n' : nat) (IHn' : n' + 0 = n') => ...

Is n' + 0 = n' the type of IHn'? Well, you can call it whatever you like, but calling it a “type” doesn’t feel natural at all. What we want is just a way to bind an equation to a name, so that we can refer to it. It feels better if we think of propositions as programs with boolean return types, and of the proof terms as reduction sequences that take them to True. If you look at Coq’s proof terms, you may find that this is the case. For example, take this simple theorem and the tactics that prove it:

Theorem plus_0_r : forall n:nat, n + 0 = n.
Proof.
  intros n. induction n as [| n'].
  reflexivity.
  simpl. rewrite -> IHn'. reflexivity.  Qed.

They produce the following proof term:

plus_0_r = 
fun n : nat =>
nat_ind (fun n0 : nat => n0 + 0 = n0) eq_refl
  (fun (n' : nat) (IHn' : n' + 0 = n') =>
   eq_ind_r (fun n0 : nat => S n0 = S n') eq_refl IHn') n
     : forall n : nat, n + 0 = n

You may think of this proof term as a program with the theorem as its type, but you can also think of it as a reduction of the program n + 0 = n to True, where n is a natural number. It is saying: “Do an induction where the first branch executes an equivalence judgment, and the other branch does an unfolding, a rewriting using the induction hypothesis, and an equivalence judgment.” I think the second way of thinking of it is much more natural.

This interpretation can also explain the operation of Boyer-Moore style theorem provers (e.g. ACL2), as this is almost exactly how they work. They just don’t normally output a machine-verifiable reduction sequence.

You may have noticed that we have an important difference from the original Curry-Howard correspondence here:

Proofs are no longer considered programs.

At least proofs are not object-level programs. They are the “operation histories” in the abstract interpreter, which are at the meta-level. We can give this operation history to an independent verifier, so that it can “replay” the abstract interpretation and verify the correctness of the proof.
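
To sketch what such an operation history might look like, here is a hypothetical Haskell data type (this is not Coq’s actual proof-term format) together with the history recorded by the proof of plus_0_r above:

-- A proof as a recorded sequence of reduction steps.
data Step
  = Induction String [Proof]   -- induct on a variable; one sub-proof per case
  | Unfold String              -- unfold a definition
  | Rewrite String             -- rewrite with a named hypothesis
  | Equivalence                -- both sides are now syntactically equal
  deriving Show

type Proof = [Step]

-- The proof of plus_0_r, read as such a script.
plus0r :: Proof
plus0r =
  [ Induction "n"
      [ [Equivalence]                                 -- base case: 0 + 0 = 0
      , [Unfold "plus", Rewrite "IHn'", Equivalence]  -- inductive case
      ]
  ]

A verifier replays each step against the proposition n + 0 = n and checks that every step is justified, which, in this reading, is all that checking the Coq proof term above amounts to.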

Alternatively, if you consider the verifier as an interpreter then proofs are its input programs. In this sense you can also say that proofs are programs for the proof verifier, and both propositions and proofs can be considered programs. But they are programs at two different semantic levels: the proofs are at a higher level than the propositions. This is very different from Curry-Howard.

Thus we have arrived at a unified way of thinking about the two very different styles of theorem proving: Curry-Howard correspondence (as in Coq), and Boyer-Moore (as in ACL2).

 

On software design patterns

In this post I try to answer the controversial question

Are software design patterns good or bad?

To understand this question, just look at the following two pictures.

  1. Here are some nice design patterns. You can use them to decorate the floors in your house.
  2. Here is a masterpiece. Can you construct it using the above patterns, or any patterns at all?

I think the answer is already obvious. One cannot produce a masterpiece by just putting patterns together. Artists know that techniques are never as important as vision. That is why they spend their lives studying the nature of things such as the human body. Once they grasp the essence, they use whatever techniques they have at hand to express what they want. They can also change those techniques or invent new ones as needed.

The same holds for computer programming. We need to keep in mind that design patterns are means and not ends. True programmers are never constrained by existing patterns. They observe and grasp the essence of things, inventing new techniques and patterns as needed.

 
 

ADTs and objects

After reading William Cook’s essay On Understanding Data Abstraction, Revisited, I will try to condense the difference between abstract data types (ADTs) and objects into a few words.

(To avoid ambiguity, I use “instances” to stand for data created by instantiating ADTs.)

  • “Instances” created by the same ADT share the same functions (methods). The functions may be parameterized, but the code is the same for every instance. Because of this, all instances need to share the same representation.
  • “Objects” (as proposed by Cook) don’t necessarily share function code. Each object may carry a completely different set of functions (with matching names and types). This is in essence the same as “callbacks” in GUI widgets. Because of this diversity, each object may have a completely different representation.

Ironically, mainstream object-oriented languages like Java and C++ usually give you the former. In Java you can approximate the latter with interfaces and anonymous classes, but it is awkward. JavaScript’s prototype-based system seems closer to the essence of the latter, but it still doesn’t feel natural.

But departing from Cook’s view, I think it is better not to treat binary operations like set union as methods. Union may take two sets with different representations and create a new one, so it can operate using only the “true methods” of both arguments (possibly iterators and membership), as sketched below.
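
To make the contrast concrete, here is a minimal Haskell sketch (all names are made up for the example): an ADT-style set with one shared representation, an object-style set as a record of functions in the spirit of Cook’s essay, and a union that uses only the public operations of its arguments.

-- ADT style: one hidden representation shared by every instance;
-- all operations are written against that representation.
newtype SetADT = SetADT [Int]

emptyADT :: SetADT
emptyADT = SetADT []

insertADT :: Int -> SetADT -> SetADT
insertADT x (SetADT xs) = SetADT (x : xs)

memberADT :: Int -> SetADT -> Bool
memberADT x (SetADT xs) = x `elem` xs

-- Object style: each object carries its own operations, so two objects
-- with the same interface may have completely different representations.
data SetObj = SetObj
  { contains :: Int -> Bool
  , insert   :: Int -> SetObj
  }

-- One object backed by a list...
listSet :: [Int] -> SetObj
listSet xs = SetObj
  { contains = \x -> x `elem` xs
  , insert   = \x -> listSet (x : xs)
  }

-- ...and another backed by nothing but a predicate.
evenSet :: SetObj
evenSet = SetObj
  { contains = even
  , insert   = \x -> unionObj (listSet [x]) evenSet
  }

-- Union never sees either representation; it uses only the "true
-- methods" (here, membership) of its two arguments.
unionObj :: SetObj -> SetObj -> SetObj
unionObj a b = SetObj
  { contains = \x -> contains a x || contains b x
  , insert   = \x -> unionObj (listSet [x]) (unionObj a b)
  }

Here listSet and evenSet share nothing but their interface, which is the diversity described in the second bullet point above, and unionObj works on any two such objects without caring how they are represented.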