Dan Bulwinkle

Innovation, Startups, Finance, Robotics, Cognitive Science, Computer Science, ἀλήθεια

Full Self Driving Cars

Version 0.95

Stanley, winner of the DARPA Grand Challenge. Photo: Steve Jurvetson / Wikipedia

I had planned to write a version of this in 2018 specifically addressing Tesla, but I think it is important to include the rest of today’s autonomous car industry as well.

Since I haven’t heard it said explicitly, I will go over three specific reasons why autonomous cars aren’t and can’t be autonomous, and why companies like Tesla are in worse shape than those like Waymo.

I am absolutely an optimist. I bought an iPhone the day it came out, at a time when several critics wrote that it was NGMI. Without test-driving one, I bought a 4-digit-VIN Model S in 2013 because of the New York Times article from that February with a photo of a Model S being towed. With robotics I’m no less optimistic; the thing to remember about robotics is that there is a line between engineering and fantasy.

You can engineer a robot to clean debris from a pool because both the task and the environment are clearly defined. For autonomous cars, the task is nearly (with some edge cases) perfectly defined, but the environment is not a grid with traffic lights and stop signs; it is a very complex world with unpredictable factors including weather, humans, animals, and road conditions. To look at this and believe it is merely a difficult engineering problem is as foolish as the predictions of the 1956 Dartmouth conference.

Lidar, Cameras, and Sensor Fusion

The trigger for this blog post was a clip that Musk re-xed1 in which the former CEO of the defunct self-driving company Cruise claimed Tesla was right to chiefly utilize cameras and steer clear of lidar sensors. According to him, Waymo is heading in that direction, and over a long enough time horizon he believes autonomous cars will all be camera-based.

Robotics is not a computer science efficiency problem. The reason for an array of sensor types is not to train a camera system, nor to otherwise bootstrap a self-driving car into using fewer sensors. The real world is very noisy, and different sensors work well for different use cases at different times. Critically, autonomous robots in high-stakes environments like roadways require redundant sensors.

I’ve heard several experts2 shy away from that reality, saying things like “it’s complementary, not redundant.” It may be true that the primary use case is complementary, but if a rock smashes into the middle of the windshield (where the main stereo camera system is housed) while in the left lane of a busy highway, I hope the autonomous car I’m in has lidar so it can safely pull over. If you’ve driven through Vermont or Tahoe in the winter in a Tesla3, you get messages asking you to clean one camera or another. The engineers thought about the “de-icing problem” but failed to put little windshield wipers by each camera to clear salt residue. Even a radar sensor could mean the difference between a safe halt and a multi-car accident.
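To make the redundancy argument concrete, here is a minimal sketch of a perception health monitor that degrades gracefully instead of driving blind. The mode names and the three-sensor setup are my assumptions, not any shipping stack:

```python
from dataclasses import dataclass
from enum import Enum, auto


class Mode(Enum):
    NOMINAL = auto()         # all sensors healthy, full operation
    DEGRADED = auto()        # a primary sensor lost, plan a safe pull-over
    EMERGENCY_STOP = auto()  # not enough sensing left to move safely


@dataclass
class SensorStatus:
    camera_ok: bool
    lidar_ok: bool
    radar_ok: bool


def select_mode(s: SensorStatus) -> Mode:
    """Redundancy, not just complementarity: any single failure
    still leaves enough sensing to reach a safe state."""
    healthy = sum([s.camera_ok, s.lidar_ok, s.radar_ok])
    if healthy == 3:
        return Mode.NOMINAL
    if healthy == 2:
        # e.g., rock through the windshield: cameras gone, but
        # lidar + radar can still localize a shoulder and stop.
        return Mode.DEGRADED
    return Mode.EMERGENCY_STOP


# A camera-only stack has no row in this table below NOMINAL:
print(select_mode(SensorStatus(camera_ok=False, lidar_ok=True, radar_ok=True)))
# Mode.DEGRADED: pull over safely instead of driving blind
```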

Tesla removed radar sensors, claiming that when the car approached a bridge, the radar would give an inaccurate reading and cause problems. I’ve owned an ICE car that was decked out with a radar sensor for following the car in front of it on the highway without intervention. That worked out rather well. If anything, no matter the company, every autonomous vehicle should start out with a fusion of sensors. To remove a sensor, it should first be proven that the sensor is not necessary. For example, maybe you don’t need three lidar sensors; one might be enough.

Musk’s quip that “we don’t have lasers shooting out of our eyes” is telling: robots are not humans, and as long as they are built from electronic components they will need to be treated as machines. If they are in an uncontrolled environment, they are going to need redundant hardware.4

Software Verification

Autonomous cars are stochastic systems. A neural network, end-to-end or not, cannot be treated as deterministic in practice. You can run camera input through a Gaussian filter for preprocessing, and some variation in stimulus, like the way the sun shines on or reflects off a color, will cause the result to be different.
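A toy illustration of that instability (random placeholder weights, nothing to do with any real perception stack): put an input near a decision boundary, and the label flips under jitter far smaller than real-world glare.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny stand-in "classifier": one fixed linear layer over three
# made-up classes. The weights are random placeholders.
W = rng.normal(size=(3, 8))
CLASSES = ["car", "truck", "ghost"]

def classify(x: np.ndarray) -> str:
    return CLASSES[int(np.argmax(W @ x))]

# Solve for an input that sits exactly on the car/truck decision
# boundary; real scenes land near boundaries all the time.
x, *_ = np.linalg.lstsq(W, np.array([1.0, 1.0, 0.0]), rcond=None)

# The same scene, with tiny variation in glare or reflection:
for trial in range(6):
    jitter = rng.normal(scale=1e-3, size=8)
    print(trial, classify(x + jitter))
# The label flips between "car" and "truck" across trials. The function
# itself is deterministic, but its output is unstable under the input
# variation the real world supplies for free: the ghost-object effect.
```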

If you have driven in a Tesla, you’ve seen the objects on the screen transform into different objects, sometimes ghost objects that aren’t there.

Tesla Vision classifies its own car as a truck. Nov 2023

Tesla’s console software is complex and buggy. A few weeks ago the Tesla audio app was stuck in a weird state where the display didn’t reflect what I was hearing on the speakers. After trying many ways to recover, I had to hold both steering wheel buttons down to restart the computer. You don’t want to do that if the autonomous driving software falls into some local minimum! Which makes me wonder: why doesn’t Tesla verify its console software? Bugs like that in the console don’t inspire confidence, and I’m surprised other people haven’t run into them and made the analogy: what if there is a bug in the autonomous driving software, which, by the way, can’t have software verification applied to it in the first place due to its non-deterministic nature?
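For what it’s worth, the usual answer to that gap is to wrap the unverifiable part in something you can verify. A minimal sketch, with hypothetical names and a toy two-second-gap rule standing in for a real safety spec:

```python
# Sketch of the verification gap (all names and the two-second rule
# are illustrative, not any company's actual spec). A deterministic
# guard like this can be checked exhaustively over a discretized input
# space, or formally verified; a stochastic planner whose output
# shifts with pixel noise cannot.

def safe_gap_m(speed_mps: float) -> float:
    """Deterministic rule: keep roughly a two-second following gap."""
    return 2.0 * speed_mps

def guard(planner_gap_m: float, speed_mps: float) -> float:
    """Clamp whatever the neural planner proposes to the rule."""
    return max(planner_gap_m, safe_gap_m(speed_mps))

# Exhaustive check over a grid of inputs: tractable for the guard,
# meaningless for the planner behind it.
for s in range(0, 451):            # 0.0 .. 45.0 m/s in 0.1 steps
    speed = s / 10.0
    for g in range(0, 1001):       # 0.0 .. 100.0 m proposed gap
        gap = g / 10.0
        assert guard(gap, speed) >= safe_gap_m(speed)
print("safety property holds everywhere on the grid")
```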

Sentience

In order to have complete self-driving, without any human intervention whatsoever on the roads as they exist today, you need sentience.5 Or, to put it another way: if self-driving cars truly exist, you’ll have many more types of robots operating completely autonomously.

A good illustration of the degree of cognitive ability required is in the movie Short Circuit, when Johnny 5 lands on a grasshopper and asks Stephanie to reassemble it, quickly learning the value of life. It’s also what Detective Spooner reflects on in I, Robot: don’t calculate the odds of survival, save the girl. An LLM, despite largely being a reflection of society’s social mores, is not sufficient because it lacks authenticity. Another way to say that is that as long as OpenAI has a disclaimer below its input box, there’s a chance the resulting decision will be garbage. The cars need to be an incarnation of Takumi, otherwise there will be no trust.

Today, every single autonomous car company has humans in the loop ready to take over. The extent likely varies, and in the case of Cruise we saw that there are limitations to this way of solving the last 5% of autonomy. It is very unlikely that this will change in the near future, at least not until almost every car is autonomous. Getting there requires a fundamental breakthrough on the order of a sentient human.

About 10 years ago in Mountain View I was riding my bike through a roundabout. A Google car (as it were) had entered, so I thought I’d wait for it to make its way through before I went. I guess most bikers claim right of way, because the car came to a halt. The two testers had to hit a button to restart the car. So I went off to the side again to wait for it to continue, and rode in circles. The car halted once again. The testers laughed, hit the start button again, and I went on my way. It was an important lesson: self-driving cars have no way to deal with people in the real world. In the recent LA unrest, Waymo cars were abused. If a mob descends on a car in New York, the driver will get scared and attempt to flee; would a self-driving car do that today? Could you issue a voice command, “my safety is at stake, be more aggressive,” and would it obey?

Rodney Brooks has been criticizing the hype of self-driving cars for years. There are few people in AI who seem grounded (no pun intended) in the nature of intelligence (LeCun, Mitchell, and Brockman are a few others). Brooks has been at it since nearly the beginning, starting out at Stanford’s SAIL and engaging with robots of various types at MIT for decades. To boot, he is one of the few people who has actually created a massively successful consumer robotics company, and that, too, was decades ago. It’s worth reading Brooks’s latest prediction scoresheet.

What’s the solution?

In 2020 I considered emailing the board6 of Comma AI, a company which makes a dashboard device that connects to a car’s CAN bus – an ad hoc self-driving technology. At the time it wasn’t yet working as anticipated, and egos clashed between the heads of Tesla and Comma AI over who would achieve self-driving first. (Why the chutzpah? Prove it.)
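For context, the lowest layer of a Comma-style device is just reading frames off the CAN bus. A sketch using the python-can library on Linux/SocketCAN; the arbitration ID and byte encoding below are made up, since real layouts vary by make and model and are often proprietary:

```python
import can

# Assumes a SocketCAN interface named can0 is already up on the host.
bus = can.interface.Bus(channel="can0", interface="socketcan")

HYPOTHETICAL_SPEED_ID = 0x158  # placeholder arbitration ID, not a real one

for msg in bus:  # python-can buses are iterable; blocks on each frame
    if msg.arbitration_id == HYPOTHETICAL_SPEED_ID:
        # Fictional encoding: first two bytes, big-endian, 0.01 km/h units.
        speed_kph = int.from_bytes(msg.data[:2], "big") * 0.01
        print(f"speed ~ {speed_kph:.1f} km/h")
```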

My suggestion was going to be to steer the company in a direction where the customers are not drivers but insurance companies. It could be a system where the more people who joined, the steeper your insurance discount would be. Why? Because if everyone had a form of self-driving technology, there’d be an enormous reduction in accidents. There would still be accidents, but the kind where two cars collide head-on or side-swipe would approach zero.
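A back-of-the-envelope model of that network effect (the avoidance assumption is mine, not actuarial data):

```python
# Toy model of the insurance network effect. Assumption: a two-car
# collision is avoided if at least one car carries collision-avoidance
# tech, so only the (1 - p)^2 of pairs where neither car has it
# remain exposed.
for pct in (10, 25, 50, 75, 90):
    p = pct / 100
    avoided = 1 - (1 - p) ** 2
    print(f"adoption {pct:>2}% -> {avoided:.0%} of two-car collisions avoided")
# adoption 10% -> 19% ... adoption 90% -> 99%: the discount you can
# justify grows superlinearly with membership.
```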

Rodney Brooks pioneered the subsumption architecture, which is essentially layered levels of robot control, and it was adapted into the hierarchical control systems of Stanley, which won the DARPA Grand Challenge, and other autonomous vehicles. At a basic level a car should avoid hitting anything around it and communicate with other cars. If such a system were implemented in vehicles, you could imagine a worst case of spring-like behavior as people drove, but the best case would mirror an ideal computer simulation of traffic on the roadway, since one person braking for no particular reason wouldn’t cause a domino effect of cars braking.
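Part of subsumption’s appeal is how simple it is in code. A sketch in that spirit, with layers and thresholds of my own choosing rather than Stanley’s actual stack:

```python
# Minimal subsumption-style controller: independent behavior layers,
# where a higher layer subsumes (overrides) the ones below it.

def avoid_collision(obstacle_dist_m: float):
    """Highest layer: refuse to hit anything."""
    if obstacle_dist_m < 10.0:
        return "brake"
    return None  # no opinion; defer to lower layers

def yield_to_v2v(neighbor_braking: bool):
    """Middle layer: react to what nearby cars broadcast."""
    if neighbor_braking:
        return "coast"  # ease off instead of panic-braking
    return None

def cruise():
    """Lowest layer: default behavior."""
    return "hold_speed"

def control(obstacle_dist_m: float, neighbor_braking: bool) -> str:
    # First layer with an opinion wins, top to bottom.
    for decision in (avoid_collision(obstacle_dist_m),
                     yield_to_v2v(neighbor_braking),
                     cruise()):
        if decision is not None:
            return decision

print(control(50.0, neighbor_braking=False))  # hold_speed
print(control(50.0, neighbor_braking=True))   # coast: damps the domino effect
print(control(5.0,  neighbor_braking=False))  # brake
```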

I’m not sure why every company seems so focused on full self-driving rather than a robust network of partially self-driving cars. Society would be better off. After more than 10 years of this, people still buy into the hype despite daily fatal collisions with cars, bicycles, and pedestrians. It’s unfathomable that leaders of these companies are flexing when, imho, they have proven nothing. If your Cybertruck is safe, it shouldn’t be flipping over a retaining wall in Oakland under any circumstances. I hope either that I’m wrong and full self-driving cars become standard in short order, or that something changes enabling every car with a CAN bus to at least avoid serious accidents.


  1. Original Stripe interview ↩︎

  2. I mean, there really is no such thing as a self-driving expert, because self-driving cars do not exist. ↩︎

  3. Still unfortunately the best EV. I feel like I’m waving $50,000 in the air and dozens of automakers shrug indifferently. ↩︎

  4. Tesla does have redundant AI chips! ↩︎

  5. This is a cognitive science term. It doesn’t mean “senses” but rather a higher order cognitive function. ↩︎

  6. I could have networked my way in or cold-emailed, but I decided it just wasn’t worth it, though I wish I had anyway. ↩︎