Category Archives: programming languages

RubySonar: a type inferencer and indexer for Ruby

Following PySonar, I have built a similar static analysis tool for Ruby. The result is a new open-source project: RubySonar. RubySonar can now process the full Ruby standard library, Ruby on Rails, and Ruby apps such as Homebrew.

RubySonar’s analysis is inter-procedural and is sensitive to both data flow and control flow, which makes it highly accurate. RubySonar uses the same type inference technique as PySonar, and thus can resolve some of the difficult cases that challenge even a good Ruby IDE.


Posted on February 3, 2014 in programming languages, static analysis


Programs may not be proofs


Dear Haskell Curry and W.A. Howard,

Like many others, I have a deep appreciation of the beauty in your work, but I think your discoveries about the Curry-Howard correspondence have been taken too far by the programming languages research community. Following several of your examples showing the similarity between types and logic theorems, some researchers started to believe that all programs are just proofs and their types the corresponding theorems. In their own words: “Types are theorems. Programs are proofs.”

I think it’s just the other way around: “Proofs are programs, but programs may not be proofs.” All proofs are programs, but only some programs are proofs. In mathematical terms, programs are a strict superset of proofs. Many programs are not proofs simply because proving things is not why they are made. They exist to serve other purposes. Some of them (such as operating systems) may not even terminate, thus failing the most important criterion of being a proof, but they are perfectly legitimate and important programs. Programs are more physical than math and logic—they are real things similar to flowers, trees, animals (or their excrement), cars, paintings, buildings, mountains, the earth and the sun. Their existence is beyond reason. Calling all of them proofs is a bit far-fetched in my opinion :-)

Sometimes we can derive theorems from programs and say something about their properties, but far too often we can’t. This is similar to the fact that sometimes we can derive math theorems from the real world, but we don’t always succeed. When we fail to predict the future, we can still live it. Similarly, when math and logic fail to predict the behavior of programs, the programs may still run without problems. It would be nice if more programs were designed in such ways that we can predict how they will behave, but there is a limit to how far we can see into their future. And thankfully there is such a limit, because if all of the future could be predicted, programs might not be worth writing and life might not be worth living.


Posted on January 17, 2014 in logics, philosophy, programming languages, proofs


On object-oriented programming

[written at the end of 2013 AD, during the Dark Ages of programming]

The programmer’s world is full of fads and superstitions. Every now and then somebody comes along and announces: “I can save the world!” No matter how bad the ideas are, there will always be followers, and the ideas soon become their religion. They then develop their community or camp, try to make those ideas dominate the world, and try to make them live forever.

Object-oriented programming (OOP) is such a religion. It claimed to be able to save the world from the so-called “software crisis”, but in hindsight, so many years after it was introduced, not only did OOP not save us, it brought us far more confusion and harm than benefit. Unfortunately its dogmas and bad practices have become widespread and deeply entrenched. In this article, I hope to offer my viewpoint on this matter and try to draw out the lessons we can learn.

Like every article on my blog, the opinions here are completely personal and do not represent my employers or professors.

Is everything an object?

“Everything is an object” is the core dogma of OOP and is deemed the highest standard of OO language design. Now let’s take a careful look to see whether it is true, or whether it is a good idea to make things that way.

Many people take “everything is an object” for granted, because when this sentence is taken literally it matches their everyday experience. Since the word “object” in English basically means “a thing”, how can “everything is an object” not be true? But be careful: “object” in OOP has a specific technical meaning which is very different from its meaning in English.

OOP’s definition of an object is “a combination of data fields and associated procedures known as methods”. Can you really fit everything into this model?

First let’s look at the real world and see whether this definition can capture everything. Cars, trees and animals may sometimes be thought of as objects, but what about a change in an object’s position, its velocity, or a duration of time? What methods do they have? Well, you may define classes called Velocity or Time, with methods such as addition, but do velocity and time really contain the things that you call “methods”? They don’t. They are just your imagination. You can add velocities or times, but how can velocities or times contain the addition procedure? This is like saying that the bullets contain the gun.

So the most you can say is that “everything is an object” is a good way of thinking, but that is not true either. The definition of an object implies that a method can belong to only one object, yet most of the time it doesn’t make sense to think of an operation as belonging to any particular object. Say we have the expression 1+2: does the operator ‘+’ belong to 1, or does it belong to 2? You have to make an arbitrary choice. Since you can make a choice, the ‘+’ operator doesn’t really belong to either of them. The operation is inherently outside of the objects.
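A small illustration in Python, where both views happen to be visible side by side:

import operator

# In an OO language the method has to live somewhere, so '+' is arbitrarily
# attached to the left operand and dispatched on it:
print((1).__add__(2))       # 3 -- "ask the object 1 to add 2 to itself"

# A plain function treats both numbers symmetrically; neither of them
# "contains" the operation:
print(operator.add(1, 2))   # 3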

So thinking of some things as objects may be helpful, but thinking of everything as an object is neither true nor useful. Unfortunately “everything is an object” has been taken as a dogma and the highest standard of OO language design. Some OO languages claim that everything is an object in them. Whenever you notice that something is not an object, somebody will try to make it one. They may succeed in that, but things get very complicated that way, because that’s not how things work.

The idealism of “everything is an object” is similar to “everything is a function” in the functional programming world and “everything is a set” in the math world. Before computer science was conceived there was a thing called the lambda calculus. Some people encoded everything — numbers and their operations, various data structures and control structures, … — all in lambdas. One such encoding of numbers is called the Church numerals. Every programming language researcher has played with them during their education. But unlike “everything is an object”, “everything is a function” has never become a dogma or a marketing phrase. Those formulations sometimes provide thought experiments and inspiration to researchers, but nobody really uses them for actual computation, because they are inefficient and they are not really how things work. They are just approximations (models) of some essence of computation that we can’t see. If you really use them for practical projects, things become complicated.
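For the curious, here is roughly what such an encoding looks like when transliterated into Python lambdas (a sketch for illustration only — nobody computes this way):

# Church numerals: the number n is represented as "apply f, n times".
ZERO = lambda f: lambda x: x
SUCC = lambda n: lambda f: lambda x: f(n(f)(x))
ADD  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    # Decode a Church numeral by counting how many times f gets applied.
    return n(lambda k: k + 1)(0)

ONE = SUCC(ZERO)
TWO = SUCC(ONE)
print(to_int(ADD(TWO)(ONE)))   # 3 -- it works, but it is a model, not a tool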

Mathematicians have a similar concept: set theory. Some geniuses encoded everything — numbers, operations on numbers, mathematical structures, … — all in sets. Everything is just sets containing sets containing sets, and so on. What’s the problem, then? When they actually tried to carry out their proofs using those sets, the proofs collapsed under their own weight. They were too complicated. Even with all that complexity, set theory is not expressive enough to capture everything the mathematicians have to say. Many people tried to fix it, but they all failed.

So “everything is an object” is, in some sense, on the same track as “everything is a function” and “everything is a set”. A good thought exercise, but it doesn’t really work well in practice. I don’t think there is some “one true language”. Compared to the “absolute truth”, every theory is wrong, but some theories are more wrong than others. The model of OOP is too far from correct or practical. It’s somewhat like the flat earth theory. Even today some people still believe that the earth is flat and construct all kinds of theories to prove it. Some of their arguments look very scientific, but do you believe in their formulas or in a picture of the earth from a spaceship? When you get the fundamental things wrong and don’t throw them away, you have to patch them endlessly with ever more complicated theories. You end up with theories that bend light and gravity in weird ways. And that’s what happened to OOP.

Are functions objects?

From what I know, the original motivation for putting functions inside objects was to support GUI applications. You click on a button and some function (a callback) is invoked. For the convenience of referring to the button, the callback takes the clicked object as its first argument. Since the callback does nothing more than this, it seems convenient to just store it inside the button. And thus we had an “object” which combines the attributes of the button and a method (the callback). Indeed this is a good idea, but this limited use case can’t really justify a universal notion of “everything is an object”, just like a two-mile walk can’t prove that the earth is flat.

If you really understand what abstraction is, you may have noticed that even the above story contains a subtle mistake: the callback in the button is not really a method. The true purpose of a method is to provide abstraction over the attributes, but the callback’s purpose is not to provide abstraction. It is just an ordinary function triggered by the button, which happens to take the button as its first argument.

Very few functions should be considered methods of an object. If you look carefully, most of the time objects just serve as a namespace (or module) in which you can store attributes and functions, but those functions don’t logically belong to the objects. They just take the objects as inputs and produce some output. Only the functions that are most intimately connected to the attributes and provide an abstraction layer over them should be considered methods. Most of those are called “getters” or “setters”. Some others hide implementation details of more complex data structures such as lists, hash tables, sets, etc.
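A rough sketch of the distinction, using a made-up Stack class: push and pop exist to hide the internal representation, so they deserve to be methods; a function that merely takes the object as input is not really a method, no matter where you put it.

class Stack:
    def __init__(self):
        self._items = []          # hidden representation

    # Genuine methods: they provide an abstraction layer over _items.
    def push(self, x):
        self._items.append(x)

    def pop(self):
        return self._items.pop()

# Not really a method, even if you paste it into the class: it just takes
# a stack as input and uses its public interface.
def pop_two(stack):
    return (stack.pop(), stack.pop())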

In some languages such as Scala or Python, functions are also treated as objects, but they actually just wrap the function inside an object and give it a name such as “apply” or “call”, so that when the object is “invoked” you know which function to call. But putting a function into an object doesn’t really make functions objects, just like inviting friends to your house doesn’t make them your family.
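In Python, for example, the wrapping looks roughly like this:

class Adder:
    # A function wrapped in an object: the function body moves into __call__.
    def __init__(self, n):
        self.n = n

    def __call__(self, x):
        return self.n + x

add3 = Adder(3)
print(add3(10))             # 13 -- looks like a function call
print(add3.__call__(10))    # 13 -- what the "call" actually dispatches to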

Functions are fundamental constructs. They don’t belong to objects. They describe a change, transition or transformation of objects. They are not objects and can’t be simulated by objects. They are like a base case of an inductive definition. They are where the illusion of “everything is an object” ends.

The cost of excessive abstraction

The major appeal of OOP is abstraction (and thus code reuse and DRY), but most of those abstraction facilities are already provided by traditional procedural languages and functional languages. Some of them do it even better than OO languages. OO claims originality by emphasizing abstraction much more strongly than other languages. The result is that OO programmers usually overdo it. Some of them pursue abstraction and code reuse to such a degree that you would think they are all there is to programming.

For the sake of code reuse, OO encourages a level of abstraction which makes programs hard to understand and hard to analyze. I often see Java programs with multiple levels of inheritance, overloading and design patterns, which nevertheless do very little. And because there is so much code that doesn’t do useful things, it is really hard to find the part of the code that does the thing you want. It is like going through a maze. Another nice word for this is “robustness”. If I have to go to all this trouble to make code reusable or robust, I’d rather just make copies of the code and modify them, but keep each copy simple and easy to understand.

Whenever you criticize Java or C++ for their verbosity, OO proponents will tell you that they are not authentic OO languages. They will ask you to look at Smalltalk. If Smalltalk’s ways are that good, why is almost nobody using Smalltalk now? Because there are real problems in its approach. I think Smalltalk is the origin of the over-abstraction and over-complication you find in other OO languages.

The “authentic” OO style of Smalltalk promotes the notion of “extremely late binding”, which basically means that the meaning of program constructs is determined as late as possible. Late binding gives you a chance to swap out the underlying implementation without forcing the upper levels to change, but it also means that you are no longer sure what a piece of code means. When I look at expressions such as ‘1+2’ and ‘if (t) then … else …’ in Java or C++, I at least know for sure that they mean an integer addition and an ordinary conditional. But I’m no longer sure about this in an “extremely late binding” language, because the meaning of ‘+’ and ‘if’ can be redefined. Giving programmers the power to define control structures is a bad idea, because soon your language will abound with quirky control structures designed by programmers who are trying to be clever. It will no longer be the language that you used to know.

An example of this feature is Smalltalk’s conditional construct, which looks like this:

result := a > b
    ifTrue:[ 'greater' ]
    ifFalse:[ 'less or equal' ]

You send the message ifTrue:ifFalse: to a Boolean object, passing as arguments the blocks of code to be executed depending on whether the Boolean receiver is true or false.

First of all, if you really have a well-designed language, you shouldn’t need to define your own control structures. As a seasoned Lisp/Scheme programmer, I have seen many custom-designed control structures (such as the various looping macros) over the years, but none of them turned out to be good ideas. I’d rather write slightly longer and more verbose code in the vanilla language than learn those weird control structures. Second, even if you really are genius enough to have invented another good control structure, the late binding feature of Smalltalk probably won’t provide you the power necessary to define it. The power of functions as an abstraction tool is limited; it is strictly less powerful than Lisp/Scheme’s macros. Third, this feature of Smalltalk is not really a novel approach, and it has a serious problem. A similar but more beautiful conditional construct was defined in the lambda calculus before computer science was born:

TRUE = λx.λy.x
FALSE = λx.λy.y
IF = λb.λt.λf.b t f

This is very beautiful and can be done in any functional language, so why does no functional language implement conditionals this way? Because when you see an expression IF b t f, you have no idea whether it is a conditional or not, since IF can be redefined in the program. Also, because IF is just a function, it may accept unexpected values other than TRUE or FALSE. Such a value may happen to make the conditional construct work, but cause trouble later on. This is called “unintentional semantics”, and this kind of bug can be very hard to track down.
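Transliterated into Python lambdas, the problem looks like this:

TRUE  = lambda x: lambda y: x
FALSE = lambda x: lambda y: y
IF    = lambda b: lambda t: lambda f: b(t)(f)

print(IF(TRUE)("then")("else"))    # then

# Nothing stops IF from accepting something that is neither TRUE nor FALSE;
# the "conditional" silently computes whatever that function computes.
WEIRD = lambda x: lambda y: (x, y)
print(IF(WEIRD)("then")("else"))   # ('then', 'else') -- unintentional semantics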

This approach also makes compilation and static analysis hard. When the compiler sees IF b t f, it no longer knows that it is a conditional, so it can’t optimize it as one. It has to treat it as an ordinary function call. Similarly, when the type checker sees it, it doesn’t know what type to expect of b, because it may not be a conditional at all. The above argument against the lambda calculus can easily be adapted to Smalltalk.

So abstraction is a powerful weapon when used moderately, but when you do it in excess, it backfires. Not only does it make it hard for humans to understand the code, it makes automated analysis tools and compiler optimizations difficult or impossible to make.

Design patterns, the brain eater

Although OO languages are touted for their abstraction capabilities, they are actually not strong in terms of abstraction and expressiveness. There are certain things that are very easy to do in traditional procedural languages and functional languages, but that have been made unnecessarily hard in OO languages. This is why design patterns appeared. Design patterns owe their origin mostly to the dogma of “everything is an object”, the lack of higher-order functions (or a correct implementation of them) and OO’s tendency to mystify things.

When I first heard about design patterns I was already a PhD student at Cornell doing some PL research. I mostly used Standard ML and Haskell. After hearing my friends’ high opinions of the Design Patterns book (the “GoF” book) I got curious, so I borrowed one from the library. Within a few hours I found a mapping from all the weird names it introduced to programming techniques I had been using all the time. Some of them are so fundamental that they exist in every high-level language and don’t really need names. Most of the advanced ones (such as visitor) are transcriptions of functional programming concepts into a convoluted form that gets around OO languages’ limitations. Later on I found that Peter Norvig gave a talk on design patterns as early as 1998, pointing out that almost all of the design patterns become “transparent” once you have first-class functions. This confirmed my observations — I don’t need them.

I have to admit that some of the design patterns are cleverly designed and contain some ingenuity. You really need to get to the essence of the OO languages’ internal magic and also understand a lot of functional programming techniques in order to create them. But intelligence =/= wisdom. Even if they can achieve what functional languages can do, they are usually a lot more complicated. Choosing the hard way doesn’t prove your genius. When you have first-class functions, things become so much easier that you won’t even notice the design patterns’ existence. As Peter Norvig said, they become “transparent”. So what a good language designer should do is add first-class functions to the language instead of proposing design patterns as workarounds.
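To see what “transparent” means, take Strategy, one of the simpler GoF patterns, as an example (a sketch in Python with made-up names):

# The Strategy pattern, spelled out the GoF way: an interface plus one
# class per behavior.
class DiscountStrategy:
    def apply(self, price):
        raise NotImplementedError

class TenPercentOff(DiscountStrategy):
    def apply(self, price):
        return price * 0.9

def checkout(price, strategy):
    return strategy.apply(price)

# With first-class functions the pattern disappears:
def checkout2(price, discount):
    return discount(price)

print(checkout(100, TenPercentOff()))      # 90.0
print(checkout2(100, lambda p: p * 0.9))   # 90.0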

Every time I remove a design pattern (that somebody else wrote), the code becomes simpler and more manageable. I just removed the last visitor pattern from my Java code a few days ago and felt so relieved. It gave me nothing but extra work while it existed. It hindered my progress. By deeply understanding how OO languages are implemented, you can write more advanced things than design patterns provide, without actually using them. I owe these insights to some functional programming people. If you really want to understand the essence of OO design patterns and how NOT to use them, this little book may be a good starting point.

Unfortunately design patterns somehow got really popular in companies, to an unbearable degree. I saw the GoF book on almost every bookshelf when I interned at Google. Even if you don’t write them yourself, there is almost no way to stop other people from slipping design patterns into your code. Design patterns’ marketing strategy is much like that of weight-loss products: “It can burn your fat without you doing any work!” They appeal to some new programmers’ hope that they can write programs without understanding the fundamental concepts of computer science. Just by applying several patterns and patching things together, they hope to have a good program. This is too good to be true. You end up doing more work than you hoped to avoid. Design patterns eat programmers’ brains. After using design patterns for a while, they no longer see things or write programs in clear and straightforward ways.

What is an OO language anyway?

Up to this point we haven’t talked about what makes a language an “OO language” and what doesn’t. Is it an OO language just because I can put both data fields and functions into a record? Or is it an OO language only if it also provides extremely late binding? How about inheritance, overloading, and so on? Must I have all of them? Any of them?

It turns out that there is no good answer to this question. There really is no such thing as an “object-oriented language”. Objects can be part of a language, but they are just a small part of it. You can’t really say that a language is object-oriented just because it provides objects as a feature. The so-called OO languages are solidly rooted in traditional procedural programming (PP). OOP basically stole everything from PP, renamed the terminology and acted as if the ideas were its own.

Historically, the term OO was used mainly for marketing reasons. Claiming a language to be OO could give it an advantage in attracting people, but this advantage is diminishing now that more and more people have realized the problems of OO’s methodology.

Harm in education and industry

Although OO has lots of problems, it has been very successful at marketing and has risen to a dominant position over the years. Under social and market pressure, many colleges started using OO languages such as Java as their introductory language, replacing traditional procedural languages such as Pascal and functional languages such as Scheme. To a large degree this has caused students to fail to learn the most essential concepts of programming. The only thing OO emphasizes is code reuse, but how can you teach reuse to students who can’t even write usable code yet — not to mention that code reuse is not really as important as some people believe.

At both Cornell and Indiana, I have been a teaching assistant for introductory programming courses in Java. I still remember how confused the students were. Most of them had trouble understanding things such as the meaning of “this”, why everything needs to be put inside classes, why every field should be made private with getters around it, the difference between a method and a static method, and so on.

There is a good reason they don’t understand — OO is not how things work. Most of the time I felt that I was teaching design flaws and dogmas to them. Many of them learned very little in the end. Worse, some of those students really came to believe in OO. They ended up being proud of writing over-engineered and convoluted code, and they no longer see things or write programs in straightforward ways. This is sad. I feel that we are no longer educating students as creative and critical thinkers, but as mindless assembly-line workers.

In industry, OO hasn’t really proved its effectiveness with evidence. Good systems may be built in an “OO language”, but the code is often written by people who understand the problems of OO and don’t embrace “everything is an object” or “design patterns”. Good programmers usually use workarounds in OO languages and are essentially writing in a traditional procedural style combined with bits of functional programming. So some OO languages and their tools may be pretty widely used, but the OO style doesn’t really have much influence on the advancement of programming as a field.

Final word

So what does this post have to say? A jihad against OO languages? Advocacy of functional programming? Neither. As I said, there is no such thing as an “OO language”, so where is the war? Every so-called OO language also contains good elements that it borrowed (or stole) from procedural languages or sometimes functional languages, so they are not completely useless.

It is the extra features OO added on top of procedural programming that cause most of the problems. Those extra “true OO techniques” contain far more confusion than real value. In my experience, accepting even one or two of those ideas may lead you into a series of troubles and wrong ways of thinking which can take a long time to examine and recover from. They are like diseases.

Thus I suggest not buying into OO’s way of thinking and not trying to exploit its “features”. By eschewing those problematic features you can still produce acceptable programs in an OO language, because you are essentially using it as a non-OO procedural language.

(Chinese translation by ZoomQuiet)


Posted on December 24, 2013 in oop, programming languages


Purely functional languages and monads

In general, functional languages are pleasant to work with because they support first-class functions, which are a very powerful modeling tool. But if you pursue the extreme — a purely functional language — you get adverse effects similar to OO design patterns. In a conventional OO language, having to use OO design patterns makes hard things even harder; but in a purely functional language, having to use pure functions to model side-effects makes trivial things hard and hard things impossible. So of the two evils, OOP is the lesser one, because at least easy things are still easy in OO languages.

The problem with pure FP is: there exist things that are not pure.

Side-effects are real

Electrical engineers probably have the best understanding of purely functional languages, because electric wires are pure by default (if you are not paranoid enough to count heat as a side-effect). Purely functional languages are the analog of combinational logic circuits. They are very useful and an important part of a system, but you can’t use them alone to build complex systems. This is why people invented flip-flops and sequential logic circuits. If you look at the design of a flip-flop, you will start to appreciate its ingenuity. It creates memory out of “pure circuits”. And memory is where the side-effects come from.

So pureness comes for free, but side-effects took effort to invent. Unfortunately, a large portion of the physical capabilities provided by sequential circuits are disregarded by purely functional languages. Instead, purely functional languages are obsessed with simulating those effects on their own. There is a difference between the physical and the simulated. The simulated can’t be as efficient as the physical.

For a simple example, many efficient data structures rely on mutation to create connections between their components (think of circular data structures and networks). In such structures, side-effects act very much like a physical force that holds the parts together and updates them as needed. To implement this in a purely functional language, you have to take indirect routes. Every time you change the structure, you need to create a new one which shares most of the old one. But then you get into the trouble of keeping track of where the “current structure” is. It moves through the dataflow and you have to pass it on. Had you used side-effects, the structure could stay in the same location, so you would not need to worry about losing track of it, and you would not need to pass it on. This saves lots of administrative code.
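A tiny sketch of the difference, using a dictionary of lists as a stand-in for a real linked structure:

# With side-effects the structure stays put; one in-place change and every
# reference sees the update.
graph = {"a": ["b"], "b": []}
graph["b"].append("a")

# The purely functional version builds a new structure that shares most of
# the old one, and now *you* must keep passing the "current" version around.
def add_edge(g, src, dst):
    return {**g, src: g[src] + [dst]}    # new dict, new list for one key

g0 = {"a": ["b"], "b": []}
g1 = add_edge(g0, "b", "a")              # remember to use g1 from here on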

Also, sharing the old structure introduces extra levels of indirection into the data structure, which causes significant overhead. Thus purely functional data structures can’t really match the performance of their impure counterparts. They also cause lots of object creation and thus stress the garbage collector.

Some people say that purely functional data structures reduce contention when there is lots of concurrency, but notice that if the observers of the old data structure “purely update” it, they will hold a structure different from its current state. After that point the universe is “forked” and they live in completely different worlds. You lose consistency. If you really want all threads to see the same data structure, then the contention is unavoidable, because you need a channel for the information to pass through. Nothing can save you from the contention, because there is a data dependency.

Everything is persistent in purely functional data structures, but how often do you need the outdated versions? With impure data structures, you still have the choice of making copies when necessary and achieving the same persistence effect, just with better performance.

Monads are design patterns

Purely functional languages also complicate programs and incur a huge cognitive cost. Modeling side-effects with pure functions comes from the same mentality as wrapping functions inside objects in OOP. Both are over-engineering and cause unnecessary manual labor.

One of the major “design patterns” in purely functional languages is called “monads”, a highly stylized way of structuring side-effects. If you look deeply into them, monads make programs complicated and hard to write, and monad transformers are in essence a hack to get around monads’ limitations — they are not a principled way of composing monads. Representing side-effects with monads is as convoluted as writing interpreters or compilers with visitor patterns. On this matter, I wrote a short article some time ago which not many people have read:

To write programs in a purely functional programming language is much like living in a wired world. In such a world, there are no electromagnetic waves nor sound waves, so you don’t have wifi, cell phones, satellite TV, radios, etc. You don’t even have light or sound. In other words, everything is blind and deaf.

All information must pass through wires or pipes, connected by switch boxes which we call “monads”. You must carefully route all the wires and pipes and connect them correctly before any information processing device, including your eyes and ears, can properly work.

Trivial things in other languages (such as random number generators or circular data structures) become non-trivial in a purely functional language. Easy things often become research problems when you try to write them using monads, so you often see papers with titles like “A Monadic Approach to a-solved-problem”.

About this I have an interesting story to tell. Once upon a time, my PhD advisor Amr Sabry tried to reimplement Dan Friedman’s miniKanren (a logic programming language) in Haskell, but he couldn’t figure out how to compose the monads. He asked for help from Oleg Kiselyov, arguably the world’s most knowledgeable person about getting around Haskell’s type system. And if you don’t know, Amr Sabry is probably the world’s most knowledgeable person about purely functional programming languages and side-effects; his paper What is a Purely Functional Language is often referred to as the official definition of “pureness”. After solving the problem with Oleg’s help, they coauthored a Functional Pearl paper. Ironically, Dan Friedman, the original author of that piece of code, had no such trouble writing it in Scheme in the first place. Of course, there is no reason Amr should have had to figure out how to compose the monads — he and Oleg just wrote the Haskell code for fun — but this story tells me something about monads: they make things unnecessarily complicated.

How did Dan Friedman write the code so easily? He just passed the states directly in and out, without using any monads — or you could say that he used the essence of monads without actually using them. Following Dan’s style, I rewrote miniKanren and added constraint logic programming and a highly sophisticated negation operator to it. All this was done within three weeks during my first semester as a PhD student, alongside Dan’s B521, Amr’s B522, other course loads and teaching duties. I would certainly have been bogged down had I used monads, and I don’t see any point in translating it into a monadic style. It is just so much simpler without monads.

(A correction of the historical facts, by request from Prof. Friedman: he’d like to give much of the credit for the current miniKanren code’s simplicity to Chung-chieh Shan, who at one point simplified the code into its current style.)

Equational reasoning can’t save the world

On the safety side, purely functional languages don’t have much of an advantage either. Some people claim that the value of monads is that they explicitly delimit side-effects and support equational reasoning, but programs are not always as easy as algebraic formulas such as a*(b+c) = a*b + a*c. If all programs were that easy, we would not need monads at all. Monads don’t really make your programs easier to reason about.

Pure functions always return the same results when they are given the same input, but every monadic function has an extra “implicit” argument which is different at every call to the same function, so although it is still true that “pure functions always return the same results when they are given the same input”, the problem is, you never get the same input because of that always-changing extra argument! You don’t know what’s inside that argument. That argument is called the “state”.
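To make that extra argument concrete, here is what a state-passing function looks like when written out by hand (a sketch in Python rather than Haskell, but the shape is the same):

# A "stateful" function in the pure style is an ordinary function that takes
# the current state and returns (result, new_state). That state is the
# implicit, always-changing extra argument.
def fresh_name(prefix, state):
    n = state["counter"]
    return prefix + str(n), {**state, "counter": n + 1}

s0 = {"counter": 0}
name1, s1 = fresh_name("x", s0)   # ('x0', ...)
name2, s2 = fresh_name("x", s1)   # ('x1', ...) -- same visible call, new state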

There is no way you can statically know the values inside state monads, for the same reason that you don’t know the values inside the heap: state monads are in essence the same as the heap. The heap maps memory addresses to values, and state monads map variables to values. If you have ever written a static analysis, it will be clear that monadic code essentially pushes part of a static analysis into the user’s code. In other words, every piece of monadic code reimplements part of a static analysis. So monadic code is as hard to analyze as code that uses side-effects, only it takes a lot more work to write. You can write horribly side-effecting code with monads which nobody can understand and no static analysis tool can help with. There is nothing monads can make easier that can’t be done by static analysis. Static analysis researchers know this very well.

You can write pure functions in any language

Monads are contagious. Once code gets into monads, it is not easy to get out. Having to explicitly specify side-effects in the types is similar to having to explicitly declare exceptions in Java: you must either “handle” an effect or declare that you have passed it on. Why should programmers write this down when it can easily be inferred by static analysis? Static analyses use what are essentially monads (or even more advanced techniques), and they take the burden of writing monadic code away from programmers.

Of course, overusing side-effects will make programs harder to analyze, but you can reduce side-effects and write pure functions even in an impure language. For example, the following C function is a pure function, satisfying every requirement of the definition of “pureness”:

int f(int x) {
    int y = 0;
    int z = 0;
    y = 2 * x;          /* only local variables are mutated */
    z = y + 1;
    return z / 3;       /* the same input always gives the same output */
}

Advanced static analysis tools would have no trouble figuring out things about this kind of code. They would know that this function is pure without you writing any annotations, and they can do a lot more for you than this. So pure functions don’t just belong to purely functional languages. You can write pure functions in any language (including assembly), but the point is, you should be allowed to use side-effects too, especially when they make things easier.

Thinking critically about mathematics as a language

Looking back into history, dogma from mathematics has been the driving force behind purely functional languages. Mathematical functions are simple and beautiful, but unfortunately they work well only when the thing you are trying to model is pure by nature. Purely functional language proponents like to use buzzwords like “category theory”, and call those who don’t understand it “uninitiated”. I know a considerable amount of category theory. Even the category theorists themselves call it “abstract nonsense”, because it is to a large extent a grotesque way of saying what other mathematicians already know, in much the same sense that GoF design patterns are a grotesque way of saying what most decent programmers already know. So category theory is the analog of design patterns in mathematics.

If you read Gottlob Frege’s article Function and Concept, you will be surprised that most mathematicians got functions wrong before his writing, and that was just a little more than a hundred years ago. Mathematics has done lots of things wrong with its language. This was pointed out a long time ago by Gerald Sussman in his Structure and Interpretation of Classical Mechanics and recently in his InfoQ talk. There is a lot of truth in his words. There is no reason programming language designers should blindly follow the ways of mathematics, because it is just another quirky language.

What is functional programming, really?

The above is not a disagreement with functional programming in general. On the contrary, I value functional programming highly. I just disagree with the dogma of the “purely functional” camp. Impure functional languages such as Scheme and ML don’t have these problems. They have “benign side-effects”. In fact, in Scheme functions are called “procedures”, not “functions”, because its designers knew that they are not functions in the mathematical sense and intended them to be not necessarily pure. The purely functional language community often tries to steal the term “functional programming” from traditional functional languages (Lisp, Scheme, ML), as if only purely functional languages deserved the name. This is not fair, and it is harmful. We should be able to use the term “functional programming language” for any language with a correct implementation of first-class functions.

Don’t fall in love with your model

Everything starts to do harm when it is pursued to the extreme. Purely functional programming tries to fit the world into its model, but the world works in a completely independent way. It is wrong to treat everything as a nail when you have a hammer. Only by observing reality can we get out of the religions that are limiting us. Don’t fit the world to your model. Fit your model to the world.


Posted on November 16, 2013 in functional programming, programming languages


A pure logic negation operator for miniKanren

Have you ever noticed that some examples from The Reasoned Schemer are not so reasonable? If so, you may want to read this post.


miniKanren is an educational logic programming language designed by Dan Friedman, William Byrd and Oleg Kiselyov. To teach logic programming, they also co-authored the book The Reasoned Schemer (TRS). As a person who has hugely benefited from this book (and every other book in the “little book” series), I highly recommend TRS to you.

While elegantly designed, miniKanren doesn’t have a “pure” negation operator. There is a ‘conda’ operator which is similar to Prolog’s cut, but it is not pure: once it is used, we may miss possible answers even if they exist. Thus although the ‘conda’ operator exists, it is not recommended for serious use.

But now we have a problem: without a ‘cond’-like operator that implicitly negates the conditions of the previous lines, we will have trouble interpreting the results of some code from The Reasoned Schemer (TRS). For example, on “Frame 30” of TRS we have the following program, which invokes rembero, a “logic function” for deleting an item from a list.

The definition of rembero is (notice the ‘conde’ operator):

(define rembero
  (lambda (x l out)
    (conde
      ((nullo l) (== '() out))
      ((caro l x) (cdro l out))
      ((fresh (res)
         (fresh (d)
           (cdro l d)
           (rembero x d res))
         (fresh (a)
           (caro l a)
           (conso a res out)))))))

If it is used this way:

(run* (out)
 (fresh (y)
   (rembero y `(a b ,y d peas e) out)))

Running it, we get the following 7 answers:

;; =>
;; ((b a d peas e)               ; y == a
;;  (a b d peas e)               ; y == b
;;  (a b d peas e)               ; y == y
;;  (a b d peas e)               ; unreasonable beyond this point
;;  (a b peas d e)
;;  (a b e d peas)
;;  (a b _.0 d peas e))

Have you ever been surprised that there are 7 answers? Is it really possible that y fails to remove itself, but goes on to remove ‘d’, ‘peas’ or ‘e’ (answers 4, 5 and 6), or removes none of them at all (answer 7)? Have you noticed that only the first 3 answers are reasonable, and the last 4 shouldn’t really happen?

For this particular example, the result from The Reasoned Schemer was not so reasonable.

A pure negation operator

As a student in Dan’s class (B521), I was puzzled by the above results. I asked Dan and Will why this happens. They told me it is because the ‘conde’ operator of miniKanren is not exactly like ‘cond’ in Scheme. In Scheme, every line of a ‘cond’ expression implicitly negates all the previous conditions. That is to say, we execute the second line only if the first condition fails, we execute the third line only if the conditions on the first and second lines both fail, and so on.

On the other hand, the ‘conde’ operator of miniKanren doesn’t implicitly insert the negation of the conditions on the previous lines. The reason miniKanren doesn’t do this is that there is no easy way to do “negation” in logic programming. According to Will Byrd, this is a thorny subject that has been researched for over 30 years.

As a daredevil who never believes how difficult things are, I thought: “Why not try my luck and see how far I can get competing with these 30 years of research?” Out of this evil-minded motivation, I independently reimplemented miniKanren and added a negation operator to it (named “noto”, naturally). Unlike ‘conda’ and Prolog’s cut, noto is pure in the sense that it doesn’t cut out possible answers if they exist. Using noto, I defined a new conditional construct named ‘condc’, which implicitly inserts the negations of all previous conditions on each line. It is designed to behave as an exact logic counterpart of Scheme’s ‘cond’.

If we use the ‘condc’ operator to redefine rembero (only one character is changed), we get the following (more reasonable) results:

;; redefine rembero using condc operator
(define rembero
  (lambda (x l out)
    (condc
      ((nullo l) (== '() out))
      ((caro l x) (cdro l out))
      ((fresh (res)
         (fresh (d)
           (cdro l d)
           (rembero x d res))
         (fresh (a)
           (caro l a)
           (conso a res out)))))))

(run* (out)
 (fresh (y)
   (rembero y `(a b ,y d peas e) out)))

;; =>
;; (((b a d peas e) ())
;;  ((a b d peas e) ())
;;  ((a b d peas e)
;;   (constraints:
;;    ((noto (caro (b #1(y) d peas e) #1(y)))
;;     (noto (caro (a b #1(y) d peas e) #1(y)))))))

Notice that we got only 3 answers (instead of 7), plus two constraints for the third answer. In fact each answer is paired with a constraint list, but the constraint lists are empty for the first two answers. This is why they are displayed as ((b a d peas e) ()) and  ((a b d peas e) ()).

Now I briefly describe what these three answers mean. The first answer (b a d peas e) happens when “y is a”, thus it removes the first item (a) from the list. The second answer (a b d peas e) happens when “y is b”, thus it removes the second item (b) from the list. If you are confused why we still have a ‘b’ here after ‘a’, this is because the third item (y) is now ‘b’!

The third answer is more interesting. It not only has an answer (a b d peas e), but also has two constraints attached to this answer:

(noto (caro (b #1(y) d peas e) #1(y)))
(noto (caro (a b #1(y) d peas e) #1(y)))

(Here #1(y) is a special notation to say that y is a logic variable.)

These constraints are in conjunctive form. They are saying: If we are to have this answer, neither (caro (b y d peas e) y) nor (caro (a b y d peas e) y) should hold, which is basically saying “y is neither a nor b”. This is correct, because if y is either a or b, we would not have reached this answer because y would have removed one of the first two items, and the iteration would have stopped.

We have no more answers beyond the third, because under no condition can y remove ‘d’, ‘peas’ or ‘e’: the logic variable y will definitely remove y, no matter what it is! The iteration will definitely stop at the point where “y meets y”. Clear?

How does it work?

The principle behind the negation operator (“noto”) is to propagate the negation of goals as constraints (as in constraint logic programming) down the execution paths of the miniKanren program.

Before I tell you further details, I want to describe the intuition behind its design. Looking at the details without knowing the design principles will not be very useful. To see how you can design a negation operator, just think about how you can make the goal (noto G) succeed. First of all, you want to make the goal G fail, so that (noto G) can succeed, right? But G may contain unbound logic variables, and you can’t just randomly assign them values. This is why a more elaborate mechanism was devised. It is there to ensure the soundness of the logic.

So now we can proceed to look at the details how this works:

  1. When the negation of a goal G is first encountered, as (noto G), a specially designed “evil unifier” (unify-evil) is invoked. As its name suggests, unify-evil works similarly to unify, but in a “negative way”. The goal of unify-evil is to take every chance to make the goal G fail. Basically, it tries its best to find values that can be bound to the free logic variables such that G fails. But notice that unify-evil doesn’t permanently associate those values with the logic variables. It just tries out those values, and as soon as it knows that G can fail, it discards those associations. Thus the free logic variables remain free.
  2. If unify-evil cannot make G fail no matter how hard it tries, then we know that G will succeed, thus we know that the goal (noto G) will fail. This means, we have failed to produce answers on this path. We should backtrack and explore other paths.
  3. If unify-evil succeeds in making G fail, then (noto G) has a chance to succeed. But at this moment it is too early to declare success, because the unbound logic variables may pick up some other values later, which could make G succeed, and consequently make (noto G) fail.
  4. Because of (3), we have to propagate the negation of G as a constraint down the path of execution, checking that G still fails every time we gain new information about the unbound logic variables (e.g. some fresh variables are later bound). (A toy sketch of this step follows the list.)
  5. If at the end of the execution path unify-evil can still succeed in making G fail, then we can safely declare the success of (noto G). This is because the free logic variables will have no more chances to make G succeed. This (noto G), if it is not subsumed by the current substitution state, should be included in the final answer.
  6. A reified value, together with the non-subsumed constraints on the logic variables, will be output together as the answer.
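Here is a toy sketch of step 4 in Python — emphatically not the actual implementation of noto, just the general idea of carrying a negated goal along as a constraint and re-checking it whenever a variable gets bound:

def walk(v, subst):
    # Follow variable bindings until we reach a value or an unbound variable.
    while v in subst:
        v = subst[v]
    return v

def constraints_ok(constraints, subst):
    # Every negated goal must still fail under the current substitution.
    return all(not goal(subst) for goal in constraints)

def bind(var, value, subst, constraints):
    # Extend the substitution, rejecting bindings that satisfy a negated goal.
    new_subst = {**subst, var: value}
    return new_subst if constraints_ok(constraints, new_subst) else None

# The constraint produced by something like (noto (== x 'a)):
# the goal "x is a" must never come to hold.
negated = [lambda s: walk("x", s) == "a"]

print(bind("x", "a", {}, negated))   # None -- this branch is pruned
print(bind("x", "b", {}, negated))   # {'x': 'b'} -- allowed, the goal still fails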

From the above mechanism, can you see why the example of rembero produces these results?

(run* (out)
 (fresh (y)
   (rembero y `(a b ,y d peas e) out)))

;; =>
;; (((b a d peas e) ())
;;  ((a b d peas e) ())
;;  ((a b d peas e)
;;   (constraints:
;;    ((noto (caro (b #1(y) d peas e) #1(y)))
;;     (noto (caro (a b #1(y) d peas e) #1(y)))))))

Why doesn’t the second answer have a constraint saying “y is not a”? Because for the second answer we already know that “y is b”, which implies “y is not a”. The system is intelligent enough to omit “y is not a” from the answer’s constraints, because it knows it is subsumed by the current substitution (“y is b”).


Nested negations do not work properly, so if you have (noto (noto (== x 10))), you are not guaranteed to have x bound to 10. I had a later version of the negation operator that did make this work, but it caused non-termination problems, and I ran out of allocated time soon after that. More work needs to be done to make nested negations work.

Several years after this experiment, I had an interesting discussion with Oleg Kiselyov on this topic. An excerpt of our conversation is included as comments at the bottom of the code. In his words, although the implementation of noto works to some degree, it is not perfect. To the best of his knowledge, no negation operator works perfectly to this day.

So, did I beat 30 years of hardcore research? Probably not. But consider this — it took me less than a month to think of and implement all of it. I worked completely independently, day and night. This happened in 2008, when I first learned miniKanren and logic programming. Today, as a mature programming languages researcher, I take it as an amusement to revisit this and see how far I can go down the path I started exploring 5 years ago.


The reimplemented miniKanren, together with the negation operator, has been available on my GitHub for years without being noticed. Now I have made it an independent project and hope to have the time (and public pressure) to develop it further. I also hope to gather ideas from real logic programming gurus about other ways of implementing pure negation operators.

If you are interested in playing with it, or you want to research on this topic, my code is here for free:


Posted on July 6, 2013 in logic programming, programming languages, research


psydiff: a structural comparison tool for Python


Psydiff is a structural comparison tool for Python, written in Python itself. The main algorithm and UI of psydiff are almost the same as ydiff’s.

If interested, you can see a demo of it here (psydiff comparing itself):

All the source code can be downloaded from my GitHub repo:

It’s still in an early stage of development. I appreciate your bug reports, feature requests and contributions.


Posted on July 6, 2013 in programming languages, software, tools


Null reference may not be a mistake


The null pointer is considered a “billion-dollar mistake”. I had been wondering why there is such a notion until I saw the video where Tony Hoare claims it to be his mistake. In fact, he didn’t really say that null pointers should not be used.

From this video, you can see that introducing null references is not really a mistake. On the contrary, null references are helpful and sometimes indispensable. The mistake is not in the existence of null pointers, but in how the type system treats them. Unfortunately, most languages (C++, Java, C#, …) don’t treat them correctly.

Every class type A in Java is in fact a union type {A, null}, because you can use null wherever an A object is expected. {A, null} is almost equivalent to Haskell’s Maybe type, where null corresponds to Nothing. So what we really need is for an annotation like {String, null} to be distinguished from String, so that it is clear whether null can possibly be its value.

Unfortunately most languages don’t provide a convenient union type that lets you put String and null together (Typed Racket is an exception). If Java had union types, we could write something like:

{String, null} find1() {
  if (...) {
    return "okay";
  } else {
    return null;
  }
}

This says: find1 may return a name, which is a String, or it may return nothing. Because of the union type {String, null}, the type system knows that you should check for null after calling find1(), so it will force you to write a null check:

String s = find1();
if (s != null) {
  x = s.length();
}

In comparison, if we define a slightly different function find2, with a different return type:

String find2() {
    return "okay";
}

From the return type we know that find2 will never return null, so the type checker lets you use the String without checking:

String s = find2();
x = s.length();
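For what it’s worth, Python’s optional type hints express exactly this kind of union, and a checker such as mypy enforces the distinction (a small sketch; these find1/find2 are just re-creations of the Java versions above):

from typing import Optional

def find1(key: str) -> Optional[str]:    # Optional[str] is Union[str, None]
    names = {"a": "okay"}
    return names.get(key)                # None when the key is missing

def find2() -> str:                      # can never be None
    return "okay"

s = find1("a")
if s is not None:        # a checker such as mypy insists on this test
    x = len(s)

y = len(find2())         # fine without a check; the type says so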


Posted on June 3, 2013 in programming languages, types