Why self-driving cars are not going to happen


Google has been developing and testing its self-driving cars for quite a while, frequently making headlines, but from some simple observations you can see that those cute robotic cars are never going to work in the real world.

Problem 1: Difficulty in computer cognition

Driving a car takes a lot of intelligence. Interestingly enough, this daily activity requires a lot more intelligence than playing chess or Go. To be safe, a self-driving car must not only obey traffic rules; it must also actively predict and avoid potential dangers on the road. It shouldn’t rely only on last-second reactions.


Here is a simple example of such a danger: a mattress about to fall off the roof of the car in front. A human recognizes this immediately. He wouldn’t wait for the mattress to hit the ground, because by then it would be too late. He would steer clear of that car, quickly accelerate and pass it, and possibly honk to warn the other driver. How can a self-driving car do that?

How can a computer know that there is a mattress strapped to the roof of the other car, and that it is likely to fall off? This calls for not only intelligence, but also experience and common sense. Will machine learning, deep learning or neural networks make this work? Very unlikely. A computer cannot know what a “falling mattress” is even after crashing into falling mattresses a thousand times. Yet it is amazing how humans (and animals) recognize the danger almost immediately, without any previous exposure to such a situation. This kind of animal intelligence is very general. It doesn’t require much “learning”. It is instinct.

True human cognition is needed for a self-driving car to recognize objects and predict potential dangers. This is the biggest obstacle to making a self-driving car work even marginally as well as a human. Yes, we have seen Google’s cars on the roads, but they are not yet handling the real, complex world. Far from it. They only come out on nice sunny days, driving around a few simple and uncrowded roads in the Mountain View area. I have never seen self-driving cars on a freeway or in a busy city. Even in such ideal conditions, Google’s self-driving cars had 272 “disengagement events” in the past year, which required human intervention. Just recently, a self-driving car hit a bus because of the computer’s misjudgment.

From the recent victory of AlphaGo over the human Go champion, you may think that the powerful AI technologies behind AlphaGo, such as neural networks and deep learning, can also be used to improve self-driving cars. Unfortunately, they will not help much here. If you take a deep look into them, the current machine learning and deep learning technologies have little to do with human learning, intelligence or cognition. You don’t need true intelligence to play Go? Unfortunately, you are right. AlphaGo just proved that this is the case ;) Chess and Go are just board games. No matter how hard those games seem, they are played in very simple and controlled environments, so it is easy for computers to apply their brute force. A self-driving car has to deal with the real world, and the real world is very complex and full of surprises. This is where only true human intelligence can survive.

So for a self-driving car to handle the variety of road situations and emergencies, it needs true human cognition. Unfortunately, nobody has any idea how to implement human cognition in a machine, because we still have little idea how our own cognitive system works.

Problem 2: Liability issues


You may think that for practical purposes, self-driving cars don’t need to be 100% perfect in their judgments and actions. Unfortunately, they do need to be perfect before they can be allowed on the roads. They can’t just be better than the average human driver. They must be better than the best human drivers. That is, they must be the best of the best.

Thus they must satisfy two requirements:

  1. Self-driving cars must not cause any accident.
  2. Self-driving cars must not fail to avoid potential dangers that a human would avoid.

Why should they satisfy such strict requirements? 1) For every accident, somebody must be held liable for the property losses and injuries. If the drivers are human, one of them will be liable. If a self-driving car caused the accident, Google will be liable; Google’s software will be considered the driver at fault. 2) A human can predict and avoid potential dangers on the road, for example a mattress falling off the car in front. Thus a self-driving car must be able to do the same. If it fails to predict a danger and an accident happens, the self-driving car will also be liable for the losses that result from that inability.

So Google is going to be liable for every accident that is either 1) caused by the incorrect actions of the self-driving car or 2) caused by its failure to avoid a danger that a human would have avoided. The second point is especially difficult to achieve, because it requires very high-level human cognition. How much money does Google have to pay for all the damages, medical expenses and lost lives? Can any amount of money pay for people’s lives? Nope.

This is why a self-driving car should never cause an accident, and should never fail to avoid an accident that would have been avoided by a human. It needs to be 100% correct in its cognition, prediction and handling of dangerous situations. 99.9999% correctness is not enough. But as I already told you, we have no idea when or whether such high-level human cognition can be implemented in a machine. Thus, I don’t see how (fully) self-driving cars can ever be put into daily use without a fundamental breakthrough.

Choose easier problems

Although fully self-driving cars are very hard to achieve, and I can’t see how Google (or Tesla, Mercedes, or anybody else) could possibly make them happen in the near future, there is indeed something we can do with the current technologies. Luckily, automatic braking, distance keeping, and many other features short of taking over the car’s steering can probably be achieved.

Why can’t a computer control the car’s steering? Because of exactly the same issues I have noted above. Steering the car requires true human cognition and common sense, and it also implies serious liability issues. Once the computer takes control of the steering, technically speaking the human is no longer the driver of the car. Thus the software must be liable for any resulting accidents. The same principles apply to Tesla’s Autopilot. It is wrong for Tesla’s Autopilot to take full control of the steering.

If you restrict the problem to automatic braking, things are much easier. Automatic braking can be implemented in a dumb way: the computer just needs to apply the brakes when the radar detects an obstacle ahead (a minimal sketch of this logic follows below). It doesn’t need to understand what the obstacle really is; it only matters that the obstacle is large enough to cause a collision. Since automatic braking only assists the driver by preventing collisions that he failed to react to, and the human is still the driver of the car, he is still responsible for the consequences. So Problems 1 and 2 no longer apply to automatic braking.
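
To make this concrete, here is a minimal sketch of such “dumb” braking logic. It is only an illustration with assumed names and thresholds, not any vendor’s actual implementation; the point is that it never asks what the obstacle is, only whether there is still room to stop.

    # A toy sketch of radar-triggered automatic braking (hypothetical names
    # and thresholds, for illustration only).

    def should_brake(distance_m: float, closing_speed_mps: float,
                     reaction_time_s: float = 0.5,
                     max_decel_mps2: float = 6.0) -> bool:
        """Brake if the obstacle would be reached before the car can stop.

        The logic never asks what the obstacle is; it only compares the
        measured distance with the distance needed to stop.
        """
        if closing_speed_mps <= 0:
            return False  # the obstacle is not getting closer
        reaction_distance = closing_speed_mps * reaction_time_s
        braking_distance = closing_speed_mps ** 2 / (2 * max_decel_mps2)
        return distance_m <= reaction_distance + braking_distance

    # Example: an obstacle 25 m ahead while closing at 17 m/s (about 38 mph).
    print(should_brake(25.0, 17.0))  # True: roughly 33 m are needed to stop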



Posted on March 22, 2016 in artificial intelligence, car, wisdom


Why “Falcon Wing Doors” is a bad idea

Tesla’s new SUV, the Model X, has a very special rear door design. The doors open upward like “falcon wings”, and thus are called “Falcon Wing Doors”.


Although it looks futuristic and may help in tight parking spaces, this design has several potential problems.

Delayed escape during an accident

If an accident happens and there is a fire, how quickly can you escape? You could be quite trapped in a Model X. When the car loses power, according to the Model X’s Emergency Response Guide, there is a very complicated procedure to follow before you can open the Falcon Wing Doors.

First of all, notice that without power, there is no way the door can be opened from the outside. This means that the firefighters can’t help you much! If you are lucky enough (you are not injured, you are not panicking, and you manage to free yourself from the airbags), you still have to perform the following three complicated steps:

  1. Remove the speaker grille from the door.
  2. Pull the mechanical release cable.
  3. Manually lift up the doors.


How do you remove the speaker grille? Doh... Honestly, keeping a hammer and a screwdriver ready in a Model X may be a good idea ;-)

Reduced utility

  1. You can’t install a roof rack and carry a kayak. This makes this SUV (Sport Utility Vehicle) less useful than other brands’.
  2. There can’t be bottle or utility holders on the rear doors, because anything in them would fall out when the doors open. As a consequence, rear passengers can’t keep their water, snacks or cell phones on the doors.

Vulnerability to weather


  1. When the doors open, the back of the car is almost completely exposed on the sides and top, giving rain and snow a good chance to get into the car in large amounts.
  2. When the doors open, dirt and snow that has gathered on the roof can fall into the car.
  3. If the car is covered in heavy snow or ice, the doors won’t open. You have to clear the roof very carefully before the rear doors can be opened.
  4. The long seams of these doors are more prone to leaks. As the seals wear out, water could drip down from the roof.
  5. If dirt, snow, tree leaves or twigs get trapped in the top seals, you may have trouble sealing the doors, and water will drip in.
  6. It is tricky to open or close the doors in strong wind, and wind may cause them to malfunction.
  7. Because the open doors raise the car’s center of gravity, the car may shake quite a bit when the doors are open against a strong wind.
  8. You can’t get in or out easily with half-open doors, but fully open doors cause a big heat loss on a cold, windy day.


Although the doors may help with parking in horizontally tight spaces, they are troublesome in vertically tight spaces. They may hit the ceiling in some garages, like this one. Even if the sensors prevent the doors from hitting the ceiling, the rear passengers may have trouble getting out through the half-open doors.

Indeed, they are easier to open in horizontally tight parking spaces. But how often do you park in such tight spaces? And if that happens, can’t you just drop off the rear passengers before pulling in? Can’t sliding doors provide the same benefit?

Mazda5, 2011

Manufacturing and maintenance

The machinery of these doors is overly complicated. They are difficult and expensive to manufacture, prone to mechanical problems, and difficult to repair. Take a look at the news, and see how a reputable supplier of hydraulic lifters to other famous brands (such as Cadillac) failed to meet the Falcon Wing Doors’ ridiculous requirements.



Because of the Falcon Wing Doors’ complexity, with their machinery mounted high in the roof, they raise the car’s center of gravity. This decreases the car’s stability and cornering ability. Also, when the car is parked on uneven ground, the tall open doors make it unstable.

Not beautiful or fancy

With this novel door design, the Model X doesn’t really look beautiful, friendly, or fancy. It looks like a Prius. It’s nowhere close to a Ferrari, Lamborghini or McLaren. Notice that the “scissor doors” of those hypercars don’t really have some of the problems of the Model X’s Falcon Wing Doors.


The Model X (with the doors open) looks like a falcon ready for an aggressive move. It does not feel friendly.

With these points in mind, and the fact that SpaceX’s rockets are named “Falcon”, the Falcon Wing Doors feel more like a gimmick and over-engineering than a useful or beautiful design. There really is no need to make car doors look like falcon wings.


Posted on January 20, 2016 in car, design


Some observations about Tesla’s Autopilot


After analyzing the Tesla Model S’s various design problems, I was shocked again, this time by a popular video showing Tesla’s Autopilot nearly causing a head-on collision.

Some observations:

  1. Autopilot sees tree shadows on the ground and mistakes them for obstacles. This may not be the right kind of road on which to use Autopilot, but in principle the same thing could happen on a highway with trees or other shadow-casting objects, such as clouds. The failure to distinguish shadows from objects suggests that Tesla hasn’t done basic computer vision research, specifically on a subject called “image shadow removal” (see the sketch after this list). I also doubt whether Autopilot uses stereo vision or color at all.

  2. When it saw the tree shadow, Autopilot tried to avoid it as if it were an obstacle. It didn’t brake (the speed remained at ~38 mph). It steered the car to the left, trying to cross the double yellow line, nearly causing a head-on collision with an oncoming vehicle. This shows that Autopilot has no idea about basic road rules or the correct emergency strategy. An experienced driver would brake instead of swerving around the obstacle without slowing down.
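
As an aside, here is a toy illustration (certainly not Tesla’s method) of the cue that “shadow removal” exploits: a cast shadow darkens the road but roughly preserves its chromaticity (the color ratios), whereas a real object usually changes the chromaticity too. The function name and tolerance are assumptions made for the example.

    import numpy as np

    def looks_like_shadow(region_rgb, road_rgb, chroma_tol=0.05):
        """Return True if the suspicious region is plausibly just a shadow.

        region_rgb, road_rgb: arrays of shape (N, 3) with pixel values in
        0..255, sampled from the suspicious region and from nearby open road.
        """
        region_mean = region_rgb.mean(axis=0)
        road_mean = road_rgb.mean(axis=0)
        # Normalized chromaticity: r + g + b sums to 1, independent of brightness.
        region_chroma = region_mean / (region_mean.sum() + 1e-9)
        road_chroma = road_mean / (road_mean.sum() + 1e-9)
        darker = region_mean.sum() < road_mean.sum()
        same_color = np.abs(region_chroma - road_chroma).max() < chroma_tol
        return darker and same_color

    # Synthetic example: the "shadow" is just the road dimmed to 40% brightness.
    road = np.tile(np.array([120.0, 118.0, 110.0]), (100, 1))
    shadow = road * 0.4
    print(looks_like_shadow(shadow, road))  # True: darker, same chromaticity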

There should be no public “beta testing” of such mission-critical software. It needs to be perfectly safe before it can be released. Nobody wants to test such software with their lives. Putting complicated “warnings” or “conditions” in the software license, calling it “beta” and asking users to keep a hand on the steering wheel can’t really get Tesla out of liability if someone actually gets hurt.


Posted on January 16, 2016 in car, design


Design Flaws of the Tesla Model S


Many people admire Tesla cars. They think they are the cars of the future. Although electric cars are environmentally friendly, quiet and performant, I find the design of Tesla’s Model S particularly awkward and user-unfriendly, with potential safety problems.

Lacking physical controls

A major design problem of the Model S is that the designers put too much faith in the touch screen. There are very few physical controls; almost everything is controlled through the touch screen. Take a look at the interior: it is almost bare. This hurts both aesthetics and functionality.

There is no switch on the dome for operating the sunroof (see picture). When people see a door or window, they expect a switch right next to it. Not the case with the Model S. You look up from the seat, and there is nothing you can press…


This “simplicity” comes at a cost. How do you open the sunroof? Answer: from the touch screen. You tap the Control tab at the top, then tap the Sunroof tab on the left, and then you tap and hold a slider on the right and drag it down…


But this is not really simple. It just makes simple things complicated. It violates a very important design principle: controls should be close to the objects they control, and they should be natural to discover and intuitive to use. The touch screen controls too many objects. It is overly multiplexed in functionality, it is nowhere near the objects it controls, and there is a deep menu path to find each control. All of this makes the car confusing and cumbersome.

Compare this with other cars: they usually have a dedicated switch for the sunroof, right there above your head. You pull it back to open the sunroof, push it forward to close it, and push it upward to tilt it. The control is easy to discover, direct to access, and intuitive to use. Designers call this kind of design “natural mapping”, because the motion of the control naturally corresponds to the motion of the sunroof.

Similarly, in the Model S there is no physical dial for the air vents and no physical switches for the headlights, fog lights, ambient lights, … You have to use the touch screen for all of them.

Single point of failure

From a system designer’s perspective, the Model S has a single point of failure. Because the touch screen controls almost everything, if it fails, you lose control of pretty much everything: sunroof, vents, door handles, windows, …

This indeed happened to some Tesla users. Take a look at this article. To quote the most important part of it:

“Just before the car went in for its annual service, at a little over 12,000 miles, the center screen went blank, eliminating access to just about every function of the car…”

Ergonomics not well thought out

I also noticed that when I sit back in the driver’s seat, the touch screen is not quite within my arm’s reach. I have to sit up a little and reach out with my right arm. Because the screen lacks tactile feedback, you must look at it in order to hit the correct button. This is neither efficient nor comfortable, and it may not be safe while driving.

There is also a space-utilization issue. Underneath the touch screen, there is a flat surface.


This is the usual place where other cars put the shifter, cup holders and utility compartments. In the Model S, it’s just a flat, wide-open space. If you put small objects on it, they fly around, collide with each other, and make noise as you drive. The surface sits so low that you have to bend over to pick things up. This wastes the most ergonomically convenient space in the car: the space right under the occupants’ arms at a comfortable sitting position.

Some users have also reported that the Model S’s cup holders are placed in a diabolical location, making it easy for elbows to knock over the coffee cups. Thus one expert user DIY’ed his own cup holder using a 3D printer…


Troublesome door handle

The Model S has a very special door handle design. In its rest position, the door handle retracts, sitting flush with the door surface.


As the driver walks up, the door handle extends, like a friend extending his hand for a handshake.


Emotionally this is a nice design, but practically there are problems.

  • The door handle is a flat metal plate. This is neither ergonomic nor comfortable.
  • In cold weather, the door handle can freeze in ice and fail to extend. In that situation, you will not be able to open the door!

There have been discussions about how to deal with the door handles in cold weather. Suggested solutions include:

  • Remotely start the car and melt the ice with interior heat.
  • Pour hot water on the door handle.
  • Put a hot water bag on the door handle.
  • Use a hair dryer.
  • Put packing tape on the door handle, then peel the tape off to remove the ice.

Now maybe you understand why no other car, from the cheapest to the most expensive, uses this door handle design.

Reliability issues

Despite its high price, the Model S has more than its share of reliability problems. Reports say that, because of power system failures, two-thirds of the early Model S cars won’t outlive 60,000 miles. Consumer Reports rated the Tesla Model S “the most unreliable EV”.

Safety problems


On January 1, 2016, a Model S caught fire for no obvious reason at a Supercharger station in Norway. Firefighters were not able to put out the fire with water. They covered the car with a special foam and waited for it to burn out.

This is not the first Model S fire incident; there have already been three such incidents. Compared to the number of gasoline cars that catch fire, this is indeed a small number, but the cause of these fires is more mysterious. There doesn’t need to be an accident: a Model S can just start to burn mysteriously in your garage!

Despite Elon Musk’s claims that the Model S is very safe, the fire incidents should be taken seriously. Lithium batteries are a known fire hazard. Take a look at the fire incidents of the Boeing 787 Dreamliner and you will see why Model S fires shouldn’t be taken lightly.

Safety issue of the autopilot

Please refer to my new post on this issue.



Posted on December 11, 2015 in car, design


Three famous quotes

These three quotes are logically related to each other. Have you figured out their logical connections? ;-)

UNIX is simple, it just takes a genius to understand its simplicity. —Dennis Ritchie

This suit of clothes is invisible to those unfit for their positions, stupid, or incompetent. —the emperor’s weavers

If you can’t explain it to a six year old, you don’t understand it yourself. —Albert Einstein


Posted on February 18, 2014 in culture, philosophy, programming, religion, wisdom


RubySonar: a type inferencer and indexer for Ruby

I have built a static analysis tool for Ruby similar to PySonar. The result is a new open-source project, RubySonar. RubySonar can now process the full Ruby standard library, Ruby on Rails, and Ruby apps such as Homebrew.

RubySonar’s analysis is inter-procedural and is sensitive to both data flow and control flow, which makes it highly accurate. RubySonar uses the same type inference technique as PySonar, and thus can resolve some of the difficult cases that challenge even a good Ruby IDE.
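
As a hypothetical illustration (written in Python, since RubySonar shares PySonar’s technique, and not taken from either project), this is the kind of code where flow-sensitive, inter-procedural inference pays off:

    # The type of `x` depends on which branch is taken.  A flow-sensitive
    # analysis tracks each branch separately and unions the results at the
    # join point; inter-procedural analysis then propagates that union into
    # the callers of pick().
    def pick(flag):
        if flag:
            x = "hello"        # x : str on this branch
        else:
            x = [1, 2, 3]      # x : [int] on this branch
        return x               # inferred return type: str | [int]

    def use(flag):
        v = pick(flag)         # v : str | [int], propagated from pick()
        return len(v)          # fine: both str and [int] support len()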


Posted on February 3, 2014 in programming languages, static analysis


Programs may not be proofs


Dear Haskell Curry and W.A. Howard,

Like many others, I have a high appreciation of the beauty in your work, but I think your discoveries about the Curry-Howard correspondence have been taken too far by the programming languages research community. Following several of your examples showing the similarity between types and logical theorems, some researchers started to believe that all programs are just proofs and their types the corresponding theorems. In their own words: “Types are theorems. Programs are proofs.”

I think it’s just the other way around: “Proofs are programs, but programs may not be proofs.” All proofs are programs, but only some programs are proofs. In mathematical terms, programs are a strict superset of proofs. Many programs are not proofs simply because proving things is not why they were made. They exist to serve other purposes. Some of them (such as operating systems) may not even terminate, thus failing the most important criterion for being a proof, yet they are perfectly legitimate and important programs. Programs are more physical than math and logic: they are real things, similar to flowers, trees, animals (or their excrement), cars, paintings, buildings, mountains, the earth and the sun. Their existence is beyond reason. Calling all of them proofs is a bit far-fetched in my opinion :-)
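
As a tiny illustration of the gap (my own example, not part of the original argument): the following program shape is perfectly legitimate and useful, yet it intentionally never terminates, so under the Curry-Howard reading it proves nothing at all.

    import time

    def event_loop():
        """A useful program that never terminates, in the spirit of a server
        or an operating system scheduler.  It is not a proof of anything."""
        while True:
            # poll for work, handle it, and repeat forever
            time.sleep(0.01)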

Sometimes we can derive theorems from programs and say something about their properties, but far too often we can’t. This is similar to the fact that we can sometimes derive mathematical theorems from the real world, but we don’t always succeed. When we fail to predict the future, we can still live it. Similarly, when math and logic fail to predict the behavior of programs, the programs may still run without problems. It would be nice if more programs were designed in ways that let us predict how they will behave, but there is a limit to how far we can see into their future. And thank goodness for that limit, because if all of the future could be predicted, programs might not be worth writing and life might not be worth living.


Posted on January 17, 2014 in logics, philosophy, programming languages, proofs