The testimony also brings up other uncomfortable realities about the Autopilot program, but I think what’s not being discussed enough is how this 2016 video really defined the path that Autopilot (and later, the so-called Full Self-Driving) would take, and how that path is proving to be the wrong one. If you haven’t seen the video, you’re not really missing all that much. It’s primarily a driver’s seat view (with cuts to other camera views) of a Model S making a drive, with some sections sped up so the whole trip could fit into the 3:45 duration of the video. The Rolling Stones’ Paint It Black plays in the background, and eventually, the Tesla parks itself at Tesla’s offices, all on its own. It’s impressive looking, no question! But I think the most significant part of the video is at the beginning, when this text is displayed:
The text “THE PERSON IN THE DRIVER’S SEAT IS ONLY THERE FOR LEGAL REASONS. HE IS NOT DOING ANYTHING. THE CAR IS DRIVING ITSELF.” is displayed, and I think these three sentences represent the core of all of Tesla’s missteps with Autopilot and FSD from that moment on to the present. This text became sort of popular for independent Tesla-fan YouTubers to replicate in their own Autopilot videos, as you can see here:
The text isn’t ambiguous at all. It says, very clearly, that the human in the car is “not doing anything,” and were it not for the cruel killjoy of a legal system, would not even need to be present in the first place. And, of course, it states, unquestionably, that the car is “driving itself.” In Elluswamy’s testimony, he made clear a number of details about this video, including, according to the Reuters story:

Drivers intervened to take control in test runs, he said. When trying to show the Model X could park itself with no driver, a test car crashed into a fence in Tesla’s parking lot, he said. “The intent of the video was not to accurately portray what was available for customers in 2016. It was to portray what was possible to build into the system,” Elluswamy said, according to a transcript of his testimony seen by Reuters.

Of course, this is all a huge deal because Autopilot, and later FSD, are not systems where the car is ever completely driving itself, and where the human should ever be doing nothing. They’re Level 2 systems. They require a human being in the driver’s seat, alert and aware and ready to take control with minimal or even no warning. I’ve discussed the inherent flaws of Level 2 systems here multiple times before, and portraying a Level 2 system as something more capable than it is, with no disclaimers at all, only adds to the already existing problems of how humans interact with and over-trust these systems, which is at the root of what goes so wrong when Level 2 systems fail.

Other parts of Elluswamy’s testimony brought up more concerns, like how he apparently doesn’t understand some very basic autonomous driving concepts, like Operational Design Domain (ODD), which just means the set of conditions (road types, weather, speeds, and so on) within which an automated vehicle is expected to operate. These issues were noted in a series of tweets by automated vehicle engineer Mahmood Hikmet:
This quote is from a deposition of Ashok Elluswamy, Tesla’s Head of Autopilot Software relating to the 2018 fatal Autopilot crash of Walter Huang. He doesnt know what an Operational Design Domain (ODD) is. pic.twitter.com/SqeE7xp2m4 — Mahmood Hikmet (@MoodyHikmet) January 15, 2023 Hikmet finds multiple examples in the testimony where Elluswamy professes unfamiliarity with basic autonomous vehicle design and operational concepts, like perception-reaction time:
I honestly don’t know how to process this. It just doesn’t seem possible that someone in charge of Autopilot wouldn’t at least be aware of what these concepts are. They’re really ground-floor kinds of ideas: what conditions the system can operate in, and how long a handover to a human driver might take. This is day one stuff. I’m genuinely baffled.

Now, all of this is important to note, but it’s also tangential to the main point here, the conclusion that all this renewed scrutiny on that 2016 video has brought me to. My conclusion is that this video represents a way of thinking that has doomed Tesla’s entire automated driving program, and, I think, needlessly, because so much of the underlying technology is actually genuinely impressive, and has massive potential to do good.

You can think of it like this. Once Tesla decided to develop Autopilot and FSD, it had two paths it could choose to go down. One path would be to use the technology as a backup to the human driver, a pair of AI hands hovering over the driver’s, invisibly, ready to leap into action when it becomes clear that the driver has done something that could cause harm. In this approach, Autopilot would have become a safety feature, pretty much exclusively. There would never be an expectation that a driver is not paying attention and, you know, driving, but if fatigue or poor conditions or distractions or having to pee or whatever caused the driver to make a poor decision, Autopilot would be there, always ready and alert, to take over and help correct potentially fatal mistakes.

There is actually one company I know to be taking this approach: Volvo, with their partnership with Luminar. (I have an extensive interview with the Luminar CEO about this very subject that I really need to get published here soon. I’ll get on that.) Of course, this method isn’t very sexy. 
It doesn’t suggest you can relax and not really pay attention, it doesn’t hint that you don’t really need to watch the road, that maybe you could check out your phone or play a game or even ride in the back seat, because, hey, really, the car is driving. The other method, the one Tesla has chosen and still sticks to, is what I just described up there. It’s Autopilot as we know it, where the human is secondary, monitoring the AI, waiting to see if they need to take control—something humans are terrible at. It’s the sexier approach that, while legally saying all the right things about paying attention and being ready to take over, still implies that you’re living in the future, in a car that drives you around like a robot chauffeur.

That 2016 video, with its misleading opening text and (as we now know) fundamentally disingenuous nature, came to define Tesla’s Autopilot path, and that path was one that played up the “this car will drive for you” and “this car will be a robotaxi that will make you money” angles instead of the safety angles. Sure, Tesla touts alleged safety advantages plenty, but had they made Autopilot something that worked with a driver instead of a half-assed replacement for a driver, they would have had something with unquestionable safety advantages. They could have leaned into that. This isn’t the 1950s anymore; safety does sell cars. Had Tesla taken this approach, Autopilot’s considerable merits could have been appreciated without caveats, because its deficiencies and potential for misuse simply wouldn’t be such an ever-present factor. Autopilot is technically very impressive. It does some amazing things, and when you look back at how far this has all come since Stanley the VW Touareg won the DARPA Grand Challenge in 2005, it’s really quite staggering. 
And that’s what makes all of this so frustrating: The tech works in many contexts, but Tesla’s entire marketing of Autopilot and now FSD has always been about pushing expectations of it into places it’s just not suited for. Is this all Musk’s decision? Or a collective decision from Tesla’s marketing teams? I really don’t know for sure, but I do know that Musk has consistently pushed people’s expectations of what Autopilot/FSD are capable of, with tweets like these:
— Elon Musk (@elonmusk) July 8, 2019
— Elon Musk (@elonmusk) July 16, 2019 Or when he said nailing Full Self-Driving is “really the difference between Tesla being worth a lot of money or worth basically zero” last year. There are many more examples, of course. But it’s clear that Tesla’s loudest, best-known voice has consistently suggested that the automated features of Teslas are far more capable than they really are, and, again, we can see this from that first frame of that 2016 video. And that video pushed the rest of the auto industry and its associated tech startups to dump $100 billion into this field without much to show for it beyond a few robotaxi services here and there and glorified cruise control. So, with everyone talking about this video again, I just want to frame it for what it really is: a manifesto of how Tesla would approach their semi-automated driving systems. It’s like a historical artifact now, a record of a choice made at a crossroads. I think as time goes on, it’ll become more and more clear that it’s also a symbol of the wrong path taken. Autopilot could have been a true safety revolution, instead of good tech being presented in a messy and borderline negligent way.
I’m not really surprised by this. Maybe surprised that he’d admit everything in court rather than waffle around to avoid getting himself implicated in this whole mess?
As to why I’m not surprised, remember in ’18 when Tesla stopped doing that brake test (or was it steering?) on one of their models? This basic test that every manufacturer puts their vehicles through before they leave the factory and Tesla just stopped in order to save a few minutes per vehicle so they could eke out a few more per month?
Yeah, that’s when I stopped being surprised by anything Tesla did in the name of cutting corners.
Imagine you go see a cardiac surgeon and they’re like “Myocardial Infarction? I’m familiar with the words, but not much more than that. It’s some kind of heart thing, right?”
If you have no grasp of the fundamental technical language of your field, whatever it may be, your happy ass should not be employed in that field, much less placed in a position of authority.
Kudos to you for calling it out and presenting the situation in a fair manner, Jason.
Also, Robot Take the Wheel is a fantastic book!
This is an overlooked Musk tweet admitting that without real AI, not the fake stuff we have now, his method of using cameras alone will not work. And no one can predict when we will have real AI.
And a staged promo video isn’t quite the same thing as a faked test.
And here’s the thing… anyone who knows anything about the term ‘Autopilot’ as it’s used in Aviation would know that using Autopilot does not mean the pilot can stop paying attention.
And Tesla makes it clear that their system requires that the driver has to continue to pay attention.
It is useful tech to at least some drivers even though I personally think it isn’t worth the cost.
Tesla makes it clear to who that the system requires that? (derailing for a second, is that an example of when I should use “whom”? I’ve never understood that particular rule.)
Fairly sure that the courts have taken the stance that many standardized contracts (such as EULAs) are too unwieldy and dense for the average consumer to fully understand. This is backed up by examples of customers sleeping in, or having sex in, Teslas as they drive down the road.
As to the who/whom question, an easy trick is to throw he/him in its place. “Tesla makes it clear to he” vs “Tesla makes it clear to him.” If it is “him,” it’s “whom.” There are, of course, some situations where nothing is going to sound right, but this probably covers 90% of uses. That said, common practice is really making “who” acceptable in most usage, anyway, and it’s not like using the wrong one sacrifices any clarity.
This is faked.
Levels 0 through 2: “You ARE driving”
Adaptive cruise, lane keeping, emergency braking and so on are all there to help you, the driver, who is driving. Did I mention the driver is supposed to be driving with Level 2 systems…?
The problem has always been Tesla (and others) pretending that their Level 2 systems were capable of so much more and they were just labeling them Level 2 on some legal technicality. That’s right, Joe Consumer, we’re just saying you should keep your hands on the wheel because of litigation, but you can totally take your hands off the wheel and pretend our system is Level 3 or even higher. Go ahead, stick that water bottle in the steering wheel and watch videos on your phone. Everything will be fine and we’ll get tons of data on our beta s/w. Who cares if you crash because we’ll still get good data (unless it’s so bad the transmission stops). Of course, when you do crash, we’ll blame you because you should have been driving. It’s in the beta agreement after all. I’m sure you read it…
This guy is covering up sensors and seeing how well the FSD handled it. As if FSD wasn’t horrifying enough… It’s incredible just how much he can cover up and the car will still let him use FSD. At one point it fails to start, apparently due to the obstructed sensors, but then he tries again and it works.
Anyway, it’s just a small point. Should he be working in such a field, where safety is everything? I’d say hell no. There’s too much risk of misunderstandings.
And once again I wonder how long before Tesla drops the pretense over FSD? And what happens when they do? The company’s rep will take a big hit, but would it affect ongoing operations much? One has to think Musk will be pushed out eventually, and that would be a good time for it. Blame him, then bring out some refreshed designs to make everyone forget the weird past.
As to Tesla dropping the bullshit, I think that would be the smart move if they can push Musk out. Start referring to things as driving assistance or something, stop promising robotaxis, and update the cars. The brand is still the first thing people think of for electric cars, so they could pivot and probably do well.
Does autopilot empty the Gatorade bottle for you then place it where needed?
Forward collision detection
Green light chime
Speed limit warning
Automatic emergency braking
Multi-collision braking
Lane departure warning
Blind spot collision warning
Obstacle aware acceleration
It’s all detailed on their website.
As for the admission that the video was faked: that is horrendous, and it opens up Elon and Co. to a whole new range of fraud lawsuits. It is perfectly legal to be overly optimistic about your company’s prospects, but when you start producing fake evidence that you have achieved them, you have created evidence that you know the product does not work as portrayed.
I’m guessing he knew there had been conversations that brushed aside the concerns but thought that there would be no documentation of them and/or that he could say those conversations were outside his purview. Which is wild, considering that he is the head of autopilot software and human response times should be an important bit of information to accommodate in programming.