After analyzing the Tesla Model S’s various design problems, I was shocked again, this time by a popular video showing Tesla’s Autopilot nearly causing a head-on collision.
- The Autopilot sees tree shadows on the road and mistakes them for obstacles. This may not be the right kind of road for using the Autopilot, but in principle the same thing could happen on a highway with trees or other shadow-casting objects, such as clouds. The failure to distinguish shadows from objects suggests that Tesla hasn’t done basic computer vision research, specifically on the subject called “shadow detection and removal”. I also doubt whether the Autopilot uses stereo vision or color at all.
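To illustrate the basic idea behind shadow detection (this is not Tesla’s actual pipeline, which is not public): a cast shadow darkens a surface roughly uniformly across the color channels, so the region’s normalized chromaticity stays close to that of the sunlit road, whereas a real object usually differs in chromaticity as well as brightness. A minimal sketch, with made-up pixel values and an illustrative threshold:

```python
def chromaticity(rgb):
    """Normalized chromaticity (r, g, b each divided by their sum).
    Invariant to a uniform brightness change, such as a shadow."""
    s = sum(rgb) or 1e-9
    return tuple(c / s for c in rgb)

def looks_like_shadow(road_rgb, region_rgb, chroma_tol=0.05):
    """A region that is darker than the road but matches its chromaticity
    is likely a shadow, not an obstacle. Threshold is illustrative."""
    darker = sum(region_rgb) < 0.8 * sum(road_rgb)
    diff = max(abs(a - b) for a, b in
               zip(chromaticity(road_rgb), chromaticity(region_rgb)))
    return darker and diff < chroma_tol

# Sunlit gray asphalt, the same asphalt at half brightness (a shadow),
# and a dark brown object with a different color balance:
road = (120.0, 118.0, 115.0)
shadow = tuple(c * 0.5 for c in road)
obstacle = (60.0, 40.0, 30.0)

print(looks_like_shadow(road, shadow))    # -> True  (shadow, keep driving)
print(looks_like_shadow(road, obstacle))  # -> False (treat as obstacle)
```

Real systems use far more cues (texture, geometry, motion), but even this crude chromaticity test separates the two cases in the example above.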
- On seeing the tree shadow, the Autopilot tried to avoid it as if it were an obstacle. It didn’t brake (the speed remained at ~38 mph). Instead it steered the car to the left, trying to cross the double yellow line, nearly causing a head-on collision with an oncoming vehicle. This shows that the Autopilot has no notion of basic road rules or of the correct emergency strategy. An experienced driver would brake rather than swerve around the obstacle without slowing down.
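The emergency strategy an experienced driver follows can be stated as a simple priority rule: braking in your own lane is almost always safer than steering into a lane that may carry oncoming traffic. A toy sketch of that rule (the function and its inputs are invented for illustration, not taken from any real autonomous-driving stack):

```python
def emergency_action(obstacle_ahead, oncoming_lane_clear, double_yellow):
    """Toy decision rule: prefer braking in-lane; never cross a double
    yellow line, and never steer into a lane that may be occupied."""
    if not obstacle_ahead:
        return "continue"
    if oncoming_lane_clear and not double_yellow:
        return "slow_and_steer_around"
    return "brake_in_lane"

# The situation in the video: perceived obstacle, oncoming car,
# double yellow line -- the only safe choice is to brake.
print(emergency_action(True, False, True))  # -> brake_in_lane
```

Under this rule the car in the video would have braked in its own lane; the actual behavior (swerving across the double yellow at full speed) violates both conditions at once.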
There should be no public “beta testing” of such mission-critical software. It needs to be perfectly safe before it can be released; nobody wants to test such software with their lives. Putting complicated “warnings” or “conditions” in the software license, calling it a “beta”, and asking users to keep a hand on the steering wheel can’t really free Tesla from liability if someone is actually hurt.