
Category Archives: types

Null reference may not be a mistake


The null pointer is considered to be a “billion-dollar mistake”. I had been wondering why there was such a notion until I saw the video where Tony Hoare claims it to be his mistake. In fact, he didn’t really say that the null pointer should not be used.

From this video, you can see that introducing the null reference was not really a mistake. On the contrary, null references are helpful and sometimes indispensable. The mistake is not the existence of null pointers, but the way the type system treats them. Unfortunately, most languages (C++, Java, C#, …) don’t treat them correctly.

Every class type A in Java is in fact a union type {A, null}, because you can use null wherever an A object is expected. {A, null} is almost equivalent to the Maybe type of Haskell, where null corresponds to Haskell’s Nothing. So the trouble really is that an annotation like {String, null} should be distinguished from String, so that it is clear whether null can possibly be a value.
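As a concrete illustration of this correspondence, here is a minimal Haskell sketch (findName and nameLength are made-up names): Maybe String plays the role of {String, null}, and the type system forces you to handle the Nothing case before you can use the String.

findName :: Bool -> Maybe String
findName ok = if ok then Just "okay" else Nothing

nameLength :: Bool -> Int
nameLength ok =
  case findName ok of
    Just s  -> length s   -- here s is a plain String
    Nothing -> 0          -- the "null check" is forced by the type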

Unfortunately most languages don’t provide a convenient union type with which you can put String and null together (Typed Racket is an exception). If Java were to have union types, we could say something like:

{String, null} find1() {
  if (...) {
    return "okay";
  } else {
    return null;
  }
}

This is saying: find1 may return a name, which is a String, or it may return nothing. Because of the union type {String, null}, the type system knows that the result must be checked for null after a call to find1(), so it forces you to write a null check:

String s = find1();
if (s != null) {
  int x = s.length();
}

In comparison, if we define a slightly different function find2, with a different return type:

String find2() {
  ...
  return "okay";
}

From the return type we know that find2 will never return null, so the type checker lets you use the String without a check:

String s = find2();
int x = s.length();


Posted on June 3, 2013 in programming languages, types

 

Propositions as programs

The Curry-Howard correspondence says that propositions are types, and proofs are programs. I had been wondering if there is a simpler way to think about it, so I came up with this:

Propositions are programs; proofs are “abstract interpretation” of them to True.

Here True is a meta-level truth value. A theorem prover is then an “abstract interpreter” which tries to evaluate the proposition to True. This computational process, which derives True from the proposition, is called a judgment. I may be stretching the term “abstract interpretation” here. I chose it just because the two words “abstract” and “interpretation” convey the general idea I hope to capture, but the term may not mean what you usually take it to mean. I could have used “supercompilation”, but I don’t think that word conveys the concept very well.

Anyway, this is a special kind of evaluation. It may be partial evaluation, supercompilation, or some sort of static analysis, whatever you call it. It can also take human inputs interactively, as long as it is semantics-preserving — it returns True whenever an actual interpreter would return True. But it is a lot more efficient, because unlike a normal interpreter, it takes shortcuts (e.g. induction hypotheses). If it thus evaluates to True, then the proposition is proved. This is much like fortune-telling: it immediately tells you the result that an actual interpreter would eventually give you. It has a limited number of basic “reduction operations” such as unfolding, induction and rewriting, so we can record reduction sequences as “proofs”, and we can verify them later on.
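To make this view concrete, here is a minimal Haskell sketch (all names are invented for illustration): the proposition is an ordinary program with a boolean result, and a “proof” is nothing but the recorded sequence of shortcut steps the abstract interpreter took to reach True.

-- The basic "reduction operations" the abstract interpreter may take.
data Step = Unfold | Induct | Rewrite String | ReflCheck
  deriving Show

-- A proof is a recorded reduction sequence that can be replayed later.
type Proof = [Step]

-- The proposition "n + 0 = n", as a program with a boolean result.
prop :: Integer -> Bool
prop n = n + 0 == n

-- A recorded justification that prop reduces to True for every n,
-- without actually running it on every n.
proofPlus0R :: Proof
proofPlus0R =
  [ Induct             -- case split on n
  , ReflCheck          -- base case: 0 + 0 = 0
  , Unfold             -- step case: unfold (+) once
  , Rewrite "IHn'"     -- rewrite with the induction hypothesis
  , ReflCheck          -- both sides are now identical
  ]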

This seems to match what we have been doing with proof assistants like Coq, and it is possibly a more natural way of thinking about this kind of theorem proving. It feels a little far-fetched to me to call the propositions in Coq the “types” of the proof terms (note: not to be confused with tactics). For example, we can have a proof term like

fun (n' : nat) (IHn' : n' + 0 = n') => ...

Is n' + 0 = n' the type of IHn'? Well, you can call it whatever you like, but calling it a “type” doesn’t feel natural at all. What we want is just a way to bind an equation to a name, so that we can refer to it. It feels better to think of propositions as programs with boolean return types, and of proof terms as reduction sequences of those programs into True. If you take a look at the proof terms of Coq, you may find that this is the case. For example, take a look at this simple theorem and the tactics that prove it:

Theorem plus_0_r : forall n:nat, n + 0 = n.
Proof.
  intros n. induction n as [| n'].
  reflexivity.
  simpl. rewrite -> IHn'. reflexivity.  Qed.

They produce the following proof term:

plus_0_r = 
fun n : nat =>
nat_ind (fun n0 : nat => n0 + 0 = n0) eq_refl
  (fun (n' : nat) (IHn' : n' + 0 = n') =>
   eq_ind_r (fun n0 : nat => S n0 = S n') eq_refl IHn') n
     : forall n : nat, n + 0 = n

You may think of this proof term as a program with the theorem as its type, but you can also think of it as a reduction of the program n + 0 = n to True, where n is a natural number. It is saying: “Do an induction where the first branch executes an equivalence judgment, and the other branch does an unfolding, a rewriting using the induction hypothesis, and an equivalence judgment.” I think the second way of thinking of it is much more natural.

This interpretation can also explain the operation of Boyer-Moore style theorem provers (e.g. ACL2), as this is almost exactly how they work. They just don’t normally output a machine-verifiable reduction sequence.

You may have noticed that we have an important difference from the original Curry-Howard correspondence here:

Proofs are no longer considered programs.

At least proofs are not object-level programs. They are the “operation histories” in the abstract interpreter, which are at the meta-level. We can give this operation history to an independent verifier, so that it can “replay” the abstract interpretation and verify the correctness of the proof.

Alternatively, if you consider the verifier as an interpreter then proofs are its input programs. In this sense you can also say that proofs are programs for the proof verifier, and both propositions and proofs can be considered programs. But they are programs at two different semantic levels: the proofs are at a higher level than the propositions. This is very different from Curry-Howard.

Thus we have arrived at a unified way of thinking about the two very different styles of theorem proving: Curry-Howard correspondence (as in Coq), and Boyer-Moore (as in ACL2).

 

ADTs and objects

After reading William Cook’s essay On Understanding Data Abstraction, Revisited, I will try to condense the difference between abstract data types (ADTs) and objects into a few words.

(To avoid ambiguity, I use “instances” to stand for data created by instantiating ADTs)

  • “Instances” created by the same ADT share the same functions (methods). Functions may be parameterized, but the code is the same in each instance. Because of this, they need to share the same representation.
  • “Objects” (as proposed by Cook) don’t necessarily share function code. Each object may have a completely different set of functions (with matching names and types). This is in essence the same as “callbacks” in GUI widgets. Because of this diversity, each object may have a completely different representation (see the sketch after this list).
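Here is a minimal Haskell sketch of this contrast (the IntSet names are invented for illustration): the ADT fixes one representation shared by all instances, while each “object” carries its own function code behind a common interface.

-- ADT style: one shared representation; the functions are defined once
-- over that representation.
newtype IntSetADT = IntSetADT [Int]

memberADT :: Int -> IntSetADT -> Bool
memberADT x (IntSetADT xs) = x `elem` xs

-- Object style: an object is just its interface. Each value may carry
-- completely different function code (and representation) underneath.
newtype IntSetObj = IntSetObj { member :: Int -> Bool }

insertObj :: Int -> IntSetObj -> IntSetObj
insertObj x s = IntSetObj (\y -> y == x || member s y)

-- This object has no stored representation at all, yet it supports the
-- same interface.
evens :: IntSetObj
evens = IntSetObj even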

Ironically, mainstream object-oriented languages like Java and C++ usually implement the former. In Java you can somewhat achieve the latter with interfaces and anonymous classes, but it is awkward. JavaScript’s prototype-based system seems closer to the essence of the latter, but it still doesn’t feel natural.

Departing from Cook’s view, I think it is better not to consider binary operations like set union as methods. Union may take two sets with different representations and create a new one, so it can operate through the “true methods” of both arguments (possibly iterators and membership).
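Under the object encoding sketched above, union can indeed be written as a free function rather than a method; it only needs the membership interface of its two arguments, whatever their representations are (again an illustrative sketch, not from Cook’s essay):

-- Union over the interface: works for any two objects, including ones
-- with completely different underlying representations.
unionObj :: IntSetObj -> IntSetObj -> IntSetObj
unionObj a b = IntSetObj (\y -> member a y || member b y)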

 

A bug in GHC’s type system

Several days ago, I implemented an experimental type inference system with first-class polymorphism. When comparing it with other systems, I found a possible bug in GHC’s type system regarding universal quantification. The phenomenon was confirmed and reproduced by people on the #haskell IRC channel for GHC versions above 7.0.1. The code that causes trouble is:

{-# LANGUAGE ImpredicativeTypes #-}  -- needed for the type [forall a. a -> a]

gen :: [forall a. a -> a]
gen = [id]
test1 = head gen 1

Obviously this program should typecheck, since:

  • id has the type forall a. a -> a.
  • A list gen containing id should have type [forall a. a -> a] (as in the annotation).
  • head has the type forall a. [a] -> a.
  • head gen should have the type forall a. a -> a.
  • head gen should be able to be applied to 1.

But GHC rejects this program with a strange error:

Couldn't match expected type `t1 -> t0'
            with actual type `forall a. a -> a'
Expected type: [t1 -> t0]
  Actual type: [forall a. a -> a]
In the first argument of `head', namely `gen'
In the expression: head gen 1

On the other hand, it works if head gen is first bound by a let:

test2 = let hg = head gen in hg 1

This doesn’t break the soundness of the type system, since it only rejects some correct programs, but such peculiarities of type systems can be painful when they add up. I guess this may be caused by the interaction between GHC’s internal type system and the legacy let-polymorphism.

 

Posted on March 5, 2012 in bugs, programming languages, types

 

Sum types and union types

In a new type system I’m designing, I was trying to find a good reason to get rid of sum types (as commonly used in ML and Haskell). Well, I don’t hate them that much, but for simplicity’s sake, I always try to remove things unless there are undeniable reasons for them to exist.

I think sum types overlap heavily in functionality with product types, causing inelegance and inefficiency, so I hope to replace them with union types (what some people call “open unions”). Union types are more orthogonal with respect to product types. I seem to have found the origin of sum types and the reason why we don’t need them in a programming language.

It appears that sum types originated from mathematics. In mathematics, there are no named product types. All we have are Cartesian products. Cartesian products have no constructors (or type tags) in them. How do we distinguish a student represented as (Height x Weight) from a professor, also represented as (Height x Weight)? We put tags on them before putting them into a disjoint union. This process is called injection. We do this every time we put something into a disjoint union, and we project the elements out when we need their values.

From a programming point of view, the injections and projections are inefficient and inconvenient. This is equivalent to constructing new objects whenever we need to put them into a list, destructing them when we need their components, then constructing new objects (again) when we need to put them into another list, and so on.
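Haskell’s built-in Either type makes these injections and projections visible; here is a small illustration (the names xs and ints are mine):

-- Injection: every value must be wrapped with Left or Right on its way
-- into the disjoint union.
xs :: [Either Int String]
xs = [Left 1, Right "two", Left 3]

-- Projection: every value must be unwrapped again on its way out.
ints :: [Int]
ints = [n | Left n <- xs]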

In order to avoid these shortcomings, programming language researchers decided to attach type tags to objects when they are created. The tags tell us what an object is, and they stay with the object throughout its lifetime. This results in record types. But somehow PL researchers seem not to have completely freed themselves from the influence of mathematics. They decided that each variant of a union needs to define a new constructor (injection). Thus sum types were born. For example,

data T1 = Foo Int | Bar String

Here Foo and Bar are essentially injections. Why do we need the injections and projections in a language where type tags always tell us what an object is? Another problem is that a constructor can only belong to one sum type, and there is no way we can reuse it in another. For example, there is no way you can define another sum type like:

data T2 = Foo Bool | Baz Float

because Foo is already defined in T1. What we want to express is that Foo belongs to the union T1 and also to the union T2. This is a very reasonable need, but most Hindley-Milner style type systems don’t allow it.

If we relax the requirement that each variant of a sum type must define a new constructor, we get union types. They are more general and more orthogonal with respect to product types. We can always choose to put another level of type tags upon the variants, so we lose nothing by not having sum types.
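To illustrate that last point, here is a Haskell sketch (Haskell itself lacks union types, so the extra tagging layer has to be simulated with wrapper constructors): make each variant a standalone tagged value, and a sum type becomes nothing more than a union of independently reusable variants.

-- Each variant is a self-contained tagged value, reusable anywhere.
newtype Foo = Foo Int
newtype Bar = Bar String
newtype Baz = Baz Float

-- With real union types, T1 and T2 would simply be {Foo, Bar} and
-- {Foo, Baz}; in Haskell we must add the extra injections T1Foo, T2Foo,
-- etc., which is exactly the redundancy described above.
data T1 = T1Foo Foo | T1Bar Bar
data T2 = T2Foo Foo | T2Baz Baz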

 

Posted on August 28, 2011 in programming languages, types

 

PySonar: a type inferencer and indexer for Python


PySonar is a type inferencer and indexer for Python. It includes a powerful type system and a sophisticated inter-procedural analysis. Compared to style-checking tools or IDEs, PySonar analyzes programs more deeply and produces more accurate results. PySonar resolves more names than typical IDEs; the current resolution rate is about 97% for Python’s standard library.

Demos

To get a quick feel about what PySonar can do, here is a sample analysis result for a small fraction of Python’s standard library.

What’s in there

  1. A powerful type system. In addition to the usual types you can find in programming languages, PySonar’s type system has union types and intersection types — two of the most powerful elements I have found during my PL research. They are rarely found in programming languages. I know of only two languages with statically checked union types: Typed Racket and Ceylon. Unlike these languages, PySonar works without any type annotations; it infers all the types by inter-procedural analysis.
  2. Control-flow-aware inter-procedural analysis. Because Python has very dynamic and polymorphic semantics and no type annotations, a modular type inference system such as Hindley-Milner will not work. I actually implemented an HM-like system in the first version of PySonar, but it didn’t work well. As a consequence, all types are inferred by an inter-procedural analysis which follows the control flow and some other aspects of the semantics.
  3. Handling of Python’s dynamism. Static analysis for Python is hard because the language has many dynamic features. They help make programs concise and flexible, but they also make automated reasoning about Python programs hard. Some of these features can be handled reasonably, but others cannot. For code that cannot be decided statically, PySonar attempts to report all known possibilities. For example, it can infer a union type which contains all the types a value may possibly have.
  4. High-accuracy semantic indexing. PySonar builds code indexes that respect scopes and types. Because it performs inter-procedural analysis, it is often able to find the definitions of attributes inside function parameters, and this works across functions, classes and modules. For example, it can accurately locate a field access x.z that refers to the “z” fields in classes B1 and C1, but not in A1.

Availability

The code is open source, available from my GitHub repository.

Users

Here are some of PySonar’s users:

  • Google. Google uses PySonar 1.0 to index millions of lines of Python code, serving internal code search and analysis services such as Grok and Code Search.
  • SourceGraph. SourceGraph is a semantic code search engine. They use PySonar to index hundreds of thousands of open-source Python repositories. They started with PySonar 1.0 as the Python analysis for their site; I recently joined them and finished integrating PySonar 2.0.
 

Posted on September 12, 2010 in algorithms, programming languages, semantics, static analysis, types

 
