
Why “Falcon Wing Doors” is a bad idea

Tesla’s new SUV, the Model X, has an unusual rear door design: the doors open upward like falcon wings, hence the name “Falcon Wing Doors”.

[Image: Model X with Falcon Wing Doors open]

Although it looks futuristic and may help in tight parking spaces, this design has several potential problems.

Delayed escape during accident

If an accident happens and there is a fire, how quickly can you escape? In a Model X, you may well be trapped. When the car loses power, according to the Model X Emergency Response Guide, there is a very complicated procedure to follow before you can open the Falcon Wing Doors.

[Image: the powerless-release procedure for the Falcon Wing Doors]

First of all, notice that without power there is no way to open the door from the outside. This means that the firefighters can’t help you much! If you are lucky enough (you are not injured, you are not scared, and you manage to free yourself from the airbags), you still have to perform the following three complicated steps:

  1. Remove the speaker grill from the door.
  2. Pull the mechanical release cable.
  3. Manually lift up the doors.

 

How do you remove the speaker grille? Doh… Honestly, keeping a hammer and a screwdriver ready in a Model X may be a good idea ;-)

Reduced utility

  1. You can’t install a roof rack and carry a kayak. This makes this SUV (Sport Utility Vehicle) less useful than other brands’.
    [Image: an SUV carrying a kayak on a roof rack]
  2. There can’t be bottle or utility holders on the back doors, because anything in them would fall out when the doors swing open. As a consequence, rear passengers can’t keep their water, snacks, or cell phones on the doors.
    [Image: utility holders on a conventional SUV’s back door]

Vulnerability in weather

[Image: an SUV covered in snow]

  1. When the doors open, the back of the car is almost completely exposed, on the sides and the top. This gives rain and snow a good chance to get into the car in large amounts.
  2. When the doors open, dirt and snow that has gathered on the roof can fall into the car.
  3. If the car is covered in heavy snow or ice, the doors won’t open. You have to clear the roof very carefully before the back doors can be opened.
  4. The long seams of these doors are more prone to leaks. As the seals weather, water could drip down from the roof.
  5. If dirt, snow, tree leaves, or twigs get trapped in the top seals, the doors may not seal properly, and water will drip in.
  6. It will be tricky to open and close the doors in strong wind, and wind may cause the doors to malfunction.
  7. Because the raised doors increase the car’s center of gravity, the car may shake quite a bit when the doors are open against strong wind.
  8. You can’t get in and out easily through half-open doors, but fully open doors will cause a big heat loss on a cold, windy day.

Parking

Although the doors may help with parking in horizontally tight spaces, they are troublesome in vertically tight spaces. They may hit the roof in some garages, like this one. Even if the sensors prevent the doors from hitting the roof, rear passengers may have trouble getting out through the half-open doors.
[Image: a garage with a low roof]

Indeed, they are easier to open in horizontally tight parking spaces. But how often do you park in such tight spaces? If that happens, can’t you just drop off the back passengers before pulling in? Can’t sliding doors provide the same benefit?

[Image: Mazda5 (2011), with sliding doors]

Manufacturing and maintenance

[Image: the machinery inside a Falcon Wing Door]

The machinery of these doors is overly complicated. They are difficult and expensive to manufacture, prone to mechanical problems, and difficult to repair. Take a look at the news, and see how a legitimate supplier of hydraulic lifters to other famous brands (such as Cadillac) failed to meet the Falcon Wing Doors’ ridiculous requirements.

To look into this issue, you may find the Glassdoor review from a Tesla component engineer interesting ;)


Handling

Because of their complexity and weight, the Falcon Wing Doors raise the car’s center of gravity. This reduces the car’s stability and cornering ability. Also, when the car is parked on uneven ground, the high profile of the open doors makes it unstable.

Not beautiful or fancy

With this novel door design, the Model X doesn’t really look beautiful, friendly, or fancy. It looks like a Prius. It’s nowhere close to a Ferrari, Lamborghini, or McLaren. Notice that the “scissor doors” of those hypercars don’t really have some of the problems of the Model X’s Falcon Wing Doors.

[Image: the doors of a LaFerrari]

The Model X (with the doors open) looks like a falcon ready for an aggressive move. It does not feel friendly.

With all this in mind, and the fact that SpaceX’s rockets are named “Falcon”, the Falcon Wing Doors feel more like a gimmick and over-engineering than a useful or beautiful design. There really is no need to make car doors look like falcon wings.

 
Comments Off on Why “Falcon Wing Doors” is a bad idea

Posted by on January 20, 2016 in car, design

 

Some observations about Tesla’s Autopilot

 

After analyzing the Tesla Model S’s various design problems, I was shocked again, this time by a popular video showing Tesla’s Autopilot nearly causing a head-on collision.

Some observations:

  1. The Autopilot sees tree shadows on the road and mistakes them for obstacles. This may not be the right kind of road on which to use the Autopilot, but in principle the same thing could happen on a highway with trees or other shadow-casting objects, such as clouds. The failure to distinguish shadows from objects suggests that Tesla hasn’t done basic computer vision research, specifically on the subject called “image shadow removal”. I also doubt whether the Autopilot uses stereo vision or color at all.

  2. When it saw the tree shadow, the Autopilot tried to avoid it as if it were an obstacle. It didn’t brake (the speed remained at ~38 mph). It steered the car to the left, trying to cross the double yellow line, nearly causing a head-on collision with an oncoming vehicle. This shows that the Autopilot has no idea about basic road rules or the correct emergency strategy. An experienced driver would brake instead of swerving around the obstacle without slowing down.
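To make the shadow-removal point concrete, here is a minimal sketch (my own illustration, not Tesla’s actual vision pipeline): a shadow darkens a surface while roughly preserving its chromaticity (its normalized color), whereas a real obstacle usually changes the color itself. The RGB values below are made up for illustration.

```python
# Hypothetical illustration: a shadow darkens a region but roughly
# preserves its chromaticity (normalized color), while a real obstacle
# usually changes the color itself. A brightness-only detector cannot
# tell the two apart; a chromaticity check can.

def chromaticity(rgb):
    """Normalize an (R, G, B) triple so brightness cancels out."""
    total = sum(rgb)
    return tuple(c / total for c in rgb)

def looks_like_shadow(road_rgb, region_rgb, tol=0.05):
    """A darker region whose chromaticity matches the road is
    probably a shadow, not an obstacle."""
    darker = sum(region_rgb) < sum(road_rgb)
    same_color = all(
        abs(a - b) < tol
        for a, b in zip(chromaticity(road_rgb), chromaticity(region_rgb))
    )
    return darker and same_color

asphalt = (120, 120, 125)          # sunlit road surface
tree_shadow = (60, 60, 63)         # same surface, half as bright
cardboard_box = (140, 100, 60)     # a brownish obstacle

print(looks_like_shadow(asphalt, tree_shadow))    # True: darker, same hue
print(looks_like_shadow(asphalt, cardboard_box))  # False: different hue
```

A real system would need far more than this, of course, but even this crude heuristic shows that a shadow is distinguishable from an obstacle without any exotic sensors.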

There should be no public “beta testing” of such mission-critical software. It needs to be perfectly safe before it can be released. Nobody wants to test such software with their lives. Putting complicated “warnings” or “conditions” in the software license, calling it “beta”, and asking users to keep a hand on the steering wheel can’t really get Tesla out of liability if someone is actually hurt.

 
Comments Off on Some observations about Tesla’s Autopilot

Posted by on January 16, 2016 in car, design

 

Design Flaws of the Tesla Model S

[Image: Model S interior]

Many people admire Tesla cars and think they are the cars of the future. Although electric cars are environmentally friendly, quiet, and performant, I found the design of Tesla’s Model S particularly awkward and user-unfriendly, with potential safety problems.

Lacking physical controls

A major design problem of the Model S is that the designers put too much faith in the touch screen. There are very few physical controls; almost everything is operated through the touch screen. Take a look at the interior: it is almost bare. This hurts both aesthetics and functionality.

There is no switch on the dome for operating the sunroof (see picture). When people see a door or a window, they expect a switch right next to it. Not so in the Model S. You look up from the seat, and there is nothing you can press…


This “simplicity” comes at a cost. How do you open the sunroof? Answer: from the touch screen. You tap the Control tab on the top, then the Sunroof tab on the left, and then you tap and hold a slide bar on the right and drag it down…

[Image: the Model S touch screen]

But this is not really simple; it just makes simple things complicated. It violates a very important design principle: controls should be close to the objects they control, and should be natural for people to discover and intuitive to use. The touch screen controls too many objects. It is overly multiplexed in functionality, it is nowhere near the objects it controls, and its menu items sit at the end of a deep path. All of this makes the car confusing and cumbersome.

Compare this with other cars: they usually have a dedicated switch for the sunroof, right there above your head. You pull it back to open the sunroof, push it forward to close it, and push it upward to tilt it. The control is easy to discover, direct to access, and intuitive to use. Designers call this kind of design “natural mapping”, because the motion of the control naturally corresponds to the motion of the sunroof.

Similarly, in the Model S there is no physical dial for the air vents and no physical switches for the headlights, fog lights, ambient lights, … You have to use the touch screen for all of them.

Central point of failure

From a system designer’s perspective, the Model S has a central point of failure. Because the touch screen controls almost everything, if it fails you lose control of pretty much everything: sunroof, vents, door handles, windows, …

This indeed happened to some Tesla users. Take a look at this article. To quote the most important part of it:

“Just before the car went in for its annual service, at a little over 12,000 miles, the center screen went blank, eliminating access to just about every function of the car…”

Ergonomics not well thought out

I also noticed that when I sit back in the driver’s seat, the touch screen is not quite within arm’s reach. I have to sit up a little and reach out with my right arm. Because the screen lacks tactile feedback, you must look at it in order to hit the correct button. This is not efficient or comfortable, and may not be safe while driving.

There is also a space-utilization issue. Underneath the touch screen, there is a flat surface.

[Image: the Model S center console]

This is the usual place where other cars put the shifter, cup holders, and utility compartments. In the Model S, it’s just a flat, wide-open space. If you put small objects there, they will slide around, collide with each other, and make noise as you drive. The platform sits so low that you have to bend over to reach things. This wastes the most ergonomically convenient space in the car: the space right under the occupants’ arms in a comfortable sitting position.

Some users also reported that the cup holders of the Model S are placed in a diabolical location, making it easy for elbows to knock over coffee cups. Thus one expert user DIY’ed his own cup holder using a 3D printer…

[Image: a 3D-printed cup holder for the Model S]

Troublesome door handle

The Model S has a very special door handle design. At its rest position, the door handle retracts, sitting flush with the door surface.

[Image: Model S door handle, retracted]

As the driver walks close, the door handle extends, like a friend extending a hand for a handshake.

[Image: Model S door handle, extended]

Emotionally this is a nice design, but practically there are problems.

  • The door handle is shaped like a flat metal plate. This is neither ergonomic nor comfortable.
  • In cold weather, the door handle can freeze in ice and fail to extend. In that situation, you will not be able to open the door!

There have been discussions about how to deal with the door handles in cold weather. Solutions include:

  • Remotely start the car. Melt the ice with internal heat.
  • Pour hot water on the door handle.
  • Put hot water bag on the door handle.
  • Use a hair dryer.
  • Put packing tape on the door handle. Peel the tape off to remove the ice.

Now maybe you have understood why no other car, from the cheapest to the most expensive, uses this door handle design.

Reliability issues

Despite its high price, the Model S has more than its share of reliability problems. Reports say that, because of power system failures, two thirds of the early Model S cars can’t outlive 60,000 miles. Consumer Reports rated the Tesla Model S “the most unreliable EV”.

Safety problems

[Image: a burned-out Model S]

On Jan 1, 2016, a Model S caught fire for no obvious reason at a Supercharger station in Norway. Firefighters were not able to put out the fire with water. They covered the car with a special foam and waited for it to burn down.

This is not the first Model S fire; there have already been three such incidents. Compared to gasoline cars on fire, this is indeed a small number, but the reason these cars caught fire is more mysterious. No accident is needed. The Model S can just start to burn mysteriously in your garage!

Despite Elon Musk’s claims that the Model S is very safe, the fire incidents should be taken seriously. Lithium batteries are a known fire hazard. Take a look at the fire incidents of the Boeing 787 Dreamliner and see why Model S fires shouldn’t be taken lightly.

Safety issue of the autopilot

Please refer to my new post on this issue.

 

 
Comments Off on Design Flaws of the Tesla Model S

Posted by on December 11, 2015 in car, design

 

Three famous quotes

These three quotes are logically related to each other. Have you figured out their logical connections? ;-)

UNIX is simple, it just takes a genius to understand its simplicity. —Dennis Ritchie

This suit of clothes is invisible to those unfit for their positions, stupid, or incompetent. —the emperor’s weavers

If you can’t explain it to a six year old, you don’t understand it yourself. —Albert Einstein

 
Comments Off on Three famous quotes

Posted by on February 18, 2014 in culture, philosophy, programming, religion, wisdom

 

RubySonar: a type inferencer and indexer for Ruby

I have built a static analysis tool for Ruby, similar to PySonar. The result is a new open-source project, RubySonar. RubySonar can now process the full Ruby standard library, Ruby on Rails, and Ruby apps such as Homebrew.

RubySonar’s analysis is inter-procedural and is sensitive to both data flow and control flow, which makes it highly accurate. RubySonar uses the same type inference technique as PySonar, and thus can resolve some of the difficult cases that would challenge a good Ruby IDE.

 
Comments Off on RubySonar: a type inferencer and indexer for Ruby

Posted by on February 3, 2014 in programming languages, static analysis

 

Programs may not be proofs

[Image: Haskell Curry]

Dear Haskell Curry and W.A. Howard,

Like many others, I have a high appreciation of the beauty in your works, but I think your discoveries about the Curry-Howard correspondence have been taken too far by the programming languages research community. Following several of your examples showing the similarity between types and logic theorems, some researchers started to believe that all programs are just proofs and their types the corresponding theorems. In their own words: “Types are theorems. Programs are proofs.”

I think it’s just the other way around: “Proofs are programs, but programs may not be proofs.” All proofs are programs, but only some programs are proofs. In mathematical terms, programs are a strict superset of proofs. Many programs are not proofs simply because proving things is not why they are made; they exist to serve other purposes. Some of them (such as operating systems) may not even terminate, thus failing the most important criterion for being a proof, but they are perfectly legitimate and important programs. Programs are more physical than math and logic—they are real things similar to flowers, trees, animals (or their excrement), cars, paintings, buildings, mountains, the earth and the sun. Their existence is beyond reason. Calling all of them proofs is a bit far-fetched in my opinion :-)
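To make the point concrete (this example is mine, not from Curry or Howard): under the correspondence, only terminating (normalizing) terms qualify as proofs, yet some of the most important programs are intentionally infinite loops:

```python
# A legitimate, useful program that is not a proof: an event loop that
# is *designed* never to terminate. Under the Curry-Howard reading, a
# proof must be a terminating (normalizing) term, so this function
# "proves" nothing -- yet looping forever is exactly what an OS or a
# server should do.

def event_loop(handle, events):
    while True:                  # intentional non-termination
        handle(next(events))

# By contrast, a trivially terminating function like this one does
# correspond to a proof (of the theorem A -> A, roughly):
def identity(x):
    return x
```

The event loop is the better program of the two, and the worse proof.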

Sometimes we can derive theorems from programs and say something about their properties, but far too often we can’t. This is similar to the fact that we can sometimes derive math theorems from the real world, but we don’t always succeed. When we fail to predict the future, we can still live it. Similarly, when math and logic fail to predict the behavior of programs, the programs may still run without problems. It would be nice if more programs were designed in such a way that we could predict how they will behave, but there is a limit to how far we can see into their future. And thankfully there is such a limit, because if all of the future could be predicted, programs might not be worth running and life might not be worth living.

 
Comments Off on Programs may not be proofs

Posted by on January 17, 2014 in logics, philosophy, programming languages, proofs

 

Tests and static analysis

Ever since I made a static analysis tool for Python called PySonar, I have been asked the question: “What is the difference between testing and static analysis?” When I worked at Coverity, my coworkers told me that they also spent quite some time explaining the difference to people. My answer to this question has evolved as my understanding of the area has deepened. Recently I replied to a comment asking a similar question, so I think it’s a good time to write down a systematic answer.

Static analysis is static, tests are dynamic

Static analysis and tests are similar in purpose: they are both tools for improving code quality. But they are very different in nature: static analysis is (of course) static, while tests are dynamic. “Static” basically means “without running the program”.

Static analysis is similar to a compiler’s type checker, but usually a lot more powerful. Static analysis finds more than type errors. It can find defects such as resource leaks, array indexes out of bounds, security risks, etc. Advanced static analysis tools may contain some of the capabilities of an automatic theorem prover. In essence, a type checker can be considered a static analyzer with coarse precision.

Static analysis is like predicting the future, while testing is like doing small experiments in real life. Static analysis has a “reasoning power” that tests lack, so it can find problems that tests may never detect. For example, a security-oriented static analysis may show you how your website can be hacked after a series of events that you may never have thought of.

On the other hand, tests just run the program with certain inputs. They are fully dynamic, so you can’t test all cases, only some of them. But because tests run the program for real, they may detect bugs that static analysis can’t find. For example, tests may find that your autopilot program produces wrong results at a certain altitude and speed. Static analysis tools can’t check this kind of complex dynamic property, because they don’t have access to the actual running situation.

But notice that although tests can tell you that your algorithm is wrong, they can’t tell you that it is correct. Guaranteeing the correctness of a program is far harder than testing or static analysis. You need a mechanical proof of the program’s correctness, which at the moment means you need a theorem prover (or proof assistant) such as Coq, Isabelle, or ACL2; lots of knowledge of math and logic; lots of experience dealing with those tools’ quirks; and lots of time. Even with all that you may not be able to prove it, because your program can easily encode something like the Collatz conjecture. So a program’s passing the tests doesn’t mean it is correct. It only means that you haven’t done anything terribly stupid.
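The Collatz point is easy to demonstrate. The loop below terminates for every input anyone has ever tried, but whether it terminates for all positive integers is an open problem, so no amount of testing can establish its termination in general:

```python
# A program whose termination nobody can prove: the Collatz iteration.
# Every input ever tried reaches 1, but whether *all* positive integers
# do is an open problem -- so passing tests for a million inputs still
# proves nothing about the general case.

def collatz_steps(n):
    """Count the steps the Collatz iteration takes to reach 1."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

print(collatz_steps(6))   # 8 steps: 6 -> 3 -> 10 -> 5 -> 16 -> 8 -> 4 -> 2 -> 1
print(collatz_steps(27))  # 111 steps, despite the tiny input
```

If a subroutine like this sits on some path of your program, neither your test suite nor your static analyzer can certify that the program always finishes.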

Difference in manual labor

Testing requires lots of manual work. Tests for “silly bugs” (such as null pointer dereferences) are very boring and tedious to write. Because of the design flaws of many programming languages, those bugs can happen anywhere in the code, so you need good coverage in order to prevent them.

It’s not enough to make sure that every line of code is covered by the tests; you need good path coverage. But in the worst case, the number of execution paths of a program is exponential in its size, so it is almost impossible to get good path coverage no matter how careful you are.
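A sketch of the blow-up: each independent branch doubles the number of execution paths, so k branches require 2^k test cases for full path coverage.

```python
# Path explosion in miniature: each independent `if` doubles the number
# of execution paths, so k branches give 2**k paths to cover.

from itertools import product

def paths_through(k):
    """Enumerate every taken/not-taken combination of k independent
    branches -- i.e. the test cases needed for full path coverage."""
    return list(product([False, True], repeat=k))

print(len(paths_through(3)))   # 8 paths for just 3 branches
print(len(paths_through(20)))  # 1048576 -- already beyond hand-written tests
```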

On the other hand, static analysis is fully automatic. It explores all paths in the program systematically, so you get very high path coverage for free. Because of the exponential cost of exploring all those paths, static analysis tools may use heuristics to cut down running time, so the coverage may not be 100%, but it’s still enormously higher than any human test writer can achieve.

Static analysis is symbolic

Even when you get good path coverage from tests, you may still miss lots of bugs, because you can only pass specific values into the tests, and the code can still crash at values that you haven’t tested. In comparison, static analysis processes the code symbolically. It doesn’t assume specific values for variables; it reasons about all possible values of every variable. This is a bit like a computer algebra system (e.g. Mathematica), although it doesn’t do sophisticated math.

The most powerful static analysis tools can keep track of the specific ranges of the numbers that variables represent, so they may statically detect bugs such as “array index out of bounds”. Tests may detect those bugs too, but only if you pass in specific values that hit the boundary conditions. Such tests are painful to write, because the index may come from a series of arithmetic operations, and you will have a hard time finding the inputs whose final result hits the boundary.
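Here is a toy version of that range tracking (a sketch, vastly simpler than what real analyzers do): propagate a [lo, hi] interval for each variable through the arithmetic, and warn when the index interval can leave the array bounds.

```python
# A toy interval analysis: track a (lo, hi) range for each variable and
# flag an array access whenever the index range can leave the bounds.
# Real static analyzers do this (and much more) without running the code.

def add(a, b):
    """Interval addition: [a0+b0, a1+b1]."""
    return (a[0] + b[0], a[1] + b[1])

def mul_const(a, c):
    """Interval times a non-negative constant."""
    return (a[0] * c, a[1] * c)

def may_be_out_of_bounds(index_range, array_len):
    """True if some value in the range could index outside the array."""
    lo, hi = index_range
    return lo < 0 or hi >= array_len

# Suppose user input i is known to be in [0, 9], and the code computes
# j = 2 * i + 1, then accesses arr[j] where len(arr) == 16.
i = (0, 9)
j = add(mul_const(i, 2), (1, 1))    # j is in [1, 19]
print(j)
print(may_be_out_of_bounds(j, 16))  # True: j can reach 19 >= 16
```

No test suite stumbles on this bug unless someone happens to feed in i = 8 or 9; the interval analysis flags it for every run of the analyzer.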

Static analysis has false positives

Some static analysis tools are designed to be conservative. That is, whenever the tool is unsure, it assumes that the worst can happen and issues a warning: “You may have a problem here.” Thus in principle it can tell you whenever some code may cause trouble. But a lot of the time the warned-about bug can never actually happen; this is called a false positive. It is like a doctor misdiagnosing you with a disease you don’t have. Much of the work in building static analysis tools goes into reducing the false positive rate, so that users don’t lose faith in the diagnostic reports.
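A small illustration of a false positive (the coarse modeling here is hypothetical, but typical of how precision gets lost): if the analyzer models `%` so coarsely that it loses the sign of the result, it must warn about an array access that is in fact always safe.

```python
# A false positive: the analyzer only knows that the index came from
# `abs(x) % 8`. If it models `%` coarsely -- losing the sign of the
# result -- it must warn about arr[i], even though at run time the
# index is always within [0, 7] and the access is safe.

def coarse_mod_range(_a, m):
    """A (hypothetical) conservative model of `x % m` that loses the
    sign information and assumes the result is anywhere in (-m, m)."""
    return (-(m - 1), m - 1)

def may_be_out_of_bounds(index_range, array_len):
    lo, hi = index_range
    return lo < 0 or hi >= array_len

i_range = coarse_mod_range((-100, 100), 8)  # (-7, 7): too coarse
print(may_be_out_of_bounds(i_range, 8))     # True -> a warning is issued

# But every concrete run is safe (Python's % with a positive modulus is
# non-negative), so the warning is a false positive:
print(all(0 <= abs(x) % 8 < 8 for x in range(-100, 101)))  # True
```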

Tests don’t have false positives, because when a test fails, your program really does fail under those conditions.

The value of static analysis

Although static analysis tools don’t have the power to guarantee the correctness of programs, they are the most powerful bug-finding tools that don’t require lots of manual labor. They can prevent many of the silly bugs that we spend a lot of time and energy writing tests for. Some of those bugs are stupid but very easy to make, and once they happen they may crash an airplane or launch a missile. So static analysis is a very useful and valuable tool. It takes the mindless and tedious jobs away from human testers so that they can focus on more intellectual and interesting tests.

 
Leave a comment

Posted by on December 27, 2013 in static analysis, testing