Tesla Model S driver sleeping at the wheel on Autopilot

Sleeping? Fake? Safe? Is the technology mature enough yet? You decide.


Electrek:
While daily commuters may relatively soon be able to fall asleep in their cars and wake up at their destinations, there is currently no commercially available autonomous driving system that allows the driver to give up that much control – not even Tesla’s fairly advanced semi-autonomous system, Autopilot.

Yet that doesn’t stop people from abusing the technology, as evidenced by a Tesla Model S driver caught on camera apparently sleeping at the wheel.



In a GIF posted to Imgur earlier today, Autopilot’s Autosteer and active cruise control appear to be moving the vehicle through heavy traffic while the driver looks like he is asleep.

Tesla’s Autopilot requires the driver to always monitor the vehicle and be ready to take control. If the system lacks data to continue to actively steer the vehicle safely, it will show an alert on the dashboard.

If the driver ignores the alert for too long, the car will emit a sound and decelerate while activating the hazard lights and moving to the side of the road. The vehicle essentially assumes the driver is unconscious if he can’t take control after visual and audible alerts.
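
In rough code, that escalation sequence might look like this; the stages follow the description above, but the thresholds and names are invented purely for illustration:

def escalation_action(driver_responded, seconds_since_alert):
    """Toy sketch of the alert escalation; thresholds are hypothetical."""
    if driver_responded:
        return "resume_normal_operation"
    if seconds_since_alert < 10:
        return "show_dashboard_alert"        # visual warning only
    if seconds_since_alert < 25:
        return "sound_audible_alert"         # escalate to a chime
    # Driver presumed unconscious: slow down, hazards on, pull over.
    return "decelerate_hazards_pull_over"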

In this case, it seems Autopilot is still very much in control and therefore isn’t bothering the sleeping driver – now effectively a passenger.

(..)
There’s also the possibility that the video is fake, but the situation is entirely plausible considering plenty of people fall asleep at the wheel, with or without Autopilot.

Regardless, Tesla’s Autopilot is always coupled with active safety features like automatic emergency braking and emergency auto-steering, which at low speed should be able to prevent the worst from happening, but the system is still not safe enough to sleep at the wheel.
ChaosEngine says...

Probably fake, but the technology is absolutely mature enough.

Self-driving cars are a solved problem. It's a matter of regulation, not research at this point.

In fact, once they reach critical mass, the problem actually becomes a lot easier from a technological standpoint. If all the cars on the road are AI, their behaviour becomes much more predictable, and a highway full of self-driving cars could easily communicate with each other, allowing increased traffic flow and reducing accidents.

Think about a simple scenario right now. You're driving in the fast lane on a multilane highway and your exit is coming up in a km or two. You need to cross 3 lanes, so you indicate and wait for a safe gap. You're completely dependent on the drivers in the other lane to let you in. But human nature being what it is, they might not want to let you in. Even if the first lane lets you through, the outer lanes have no idea what you want until they see you, so you have to repeat this manoeuvre a few times.

But with a highway of self-driving cars? Your car broadcasts its intentions on a localised network, and the other cars create a gap all the way to the exit. You move through and traffic resumes.
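
Sketched in code, with the message fields and the 30 m gap rule invented purely for illustration (this isn't any real vehicle-to-vehicle standard):

from dataclasses import dataclass

@dataclass
class IntentMessage:
    car_id: str
    current_lane: int
    target_lane: int
    exit_distance_m: float    # how far ahead the sender's exit is

def react(receiver_lane, receiver_gap_m, msg):
    """How a receiving car might respond to a broadcast lane-change intent."""
    lo, hi = sorted((msg.current_lane, msg.target_lane))
    lanes_to_clear = set(range(lo, hi + 1)) - {msg.current_lane}
    if receiver_lane in lanes_to_clear and receiver_gap_m < 30.0:
        return "ease_off_to_open_gap"    # cooperate: widen the gap ahead
    return "maintain_speed"

# A car in lane 2 with a 12 m gap hears a lane 4 -> lane 1 request:
print(react(2, 12.0, IntentMessage("car_42", 4, 1, 1500.0)))   # opens a gap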

greatgooglymoogly says...

As far as I know, you have to have both hands on the wheel, and it will beep at you and eventually stop if you don't. It wouldn't be too hard to point a camera at the driver to check whether his eyes are closed, though.
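
For what it's worth, that camera idea is a standard trick in driver-monitoring research: track six landmarks around each eye and compute an "eye aspect ratio" that collapses toward zero when the lid closes. A minimal sketch, assuming some face tracker supplies the landmark coordinates (the threshold and frame count are illustrative):

from math import dist

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks around one eye, in the usual p1..p6 order."""
    p1, p2, p3, p4, p5, p6 = eye
    # Vertical eyelid distances shrink toward zero as the eye closes.
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

def looks_asleep(ear_history, threshold=0.2, min_frames=48):
    """Eyes closed for ~2 seconds of 24 fps video suggests a dozing driver."""
    recent = ear_history[-min_frames:]
    return len(recent) == min_frames and all(e < threshold for e in recent)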

RedSky says...

Whoa, whoa, you're way overstating it. The tech is nowhere near ready for full hands-off driving in non-ideal driving scenarios. For basic navigation Google relies on maps and GPS, but the crux of autonomous navigation is machine learning algorithms. Through many hours of logged driving data, the algorithm learns to associate certain sensor inputs with certain hazards more and more accurately, via model selection and coefficient fitting. The assumption is that at some point the algorithm will be able to accurately and reliably identify and react to pedestrians, potholes, construction areas, temporary traffic lights and police stops, among an almost endless litany of possible hazards.

They're nowhere near there, though, and there's simply no guarantee that it will ever be sufficiently reliable to be truly hands-off. As mentioned, the algorithm is just an equation with certain coefficients. Our brains don't work that way when we drive. An algorithm may never have the necessary complexity or flexibility to capture the possibility of novel and unexpected events in all driving scenarios. The reliability numbers Google quotes from its test driving come from well-mapped, simple-to-navigate roads like highways, with few of these kinds of challenges, but real life is not like that. In practice, the algorithm may be safer than humans in something like 99% of scenarios (which I agree could in itself make driving safer), but those exceptional 1% of scenarios that our brains are uniquely able to process will still require us to be ready to take over.

As for Tesla, all it has is basically auto-cruise, auto-steer and lane changing on request. The first two are just the car keeping in lane based on lane-marker input from sensors, and slowing down and speeding up based on the follow distance you give it. The most advanced part is changing lanes when you indicate, which will effectively avoid other cars and merge. It doesn't navigate, it's basically just for highways, and even on those it won't take your exit for you (and from what I've read it will apparently sometimes dive into exits you didn't want, confused by lane markers). So basically this is either staged or this guy is an idiot.

ChaosEngine says...

I wasn't talking about Tesla, but the technology in general. Google's self-driving cars have driven over 1.5 million miles in real-world traffic conditions. Right now, they're limited to inner city driving, but the tech is fundamentally usable.

There is no algorithm for driving. It's not
if (road.isClear())
    keepDriving();
else if (child.runsInFront())
    brakeLikeHell();

It's based on machine learning and pattern recognition.
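
To make that concrete, here's a toy version of the learned approach: a classifier inferring a braking rule from labelled examples rather than anyone coding the rule by hand. The features, data and model are all invented for illustration; real systems use vastly richer inputs and models.

import numpy as np

rng = np.random.default_rng(0)
# Features: [obstacle distance (m), closing speed (m/s)]; label: 1 = brake.
X = rng.uniform([0.0, 0.0], [100.0, 30.0], size=(500, 2))
y = (X[:, 0] < 2.0 * X[:, 1]).astype(float)    # brake if under ~2 s to impact

mu, sd = X.mean(axis=0), X.std(axis=0)
Xs = (X - mu) / sd                             # standardise for stable training

w, b = np.zeros(2), 0.0
for _ in range(2000):                          # plain logistic regression
    p = 1.0 / (1.0 + np.exp(-(Xs @ w + b)))
    w -= 0.1 * Xs.T @ (p - y) / len(y)
    b -= 0.1 * np.mean(p - y)

def should_brake(distance_m, closing_speed_ms):
    x = (np.array([distance_m, closing_speed_ms]) - mu) / sd
    return 1.0 / (1.0 + np.exp(-(x @ w + b))) > 0.5

print(should_brake(10, 20))   # ~0.5 s to impact -> True
print(should_brake(90, 5))    # 18 s to impact   -> False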

This guy built one in his garage.

Is it perfect yet? Nope. But it's already better than humans and that's good enough. The technology is a lot closer than you think.

RedSky says...

@ChaosEngine

I'm not sure you understand what machine learning is. As I said, the trigger for your child.runsInFront() is based on numerical inputs from sensors, fed into a formula with certain parameters and coefficients. This has been optimized over many hours of driving data, but ultimately it can't predict novel events, because it can only optimize from existing data. There is a base level of error from the bias-variance tradeoff in this model that you cannot avoid. It's not simply a matter of logging enough hours of driving. If that base error level is not low enough, then autonomous cars may never be deemed reliable enough to be unsupervised.

See: https://en.wikipedia.org/wiki/Bias-variance_tradeoff
Or specifically: http://scott.fortmann-roe.com/docs/docs/BiasVariance/biasvariance.png
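
If you want to see that tradeoff numerically rather than in a chart, here's a toy experiment (synthetic data, numpy only, nothing to do with driving): fit polynomials of increasing degree to noisy samples and the held-out error typically falls, then climbs again as variance takes over.

import numpy as np

rng = np.random.default_rng(1)

def sample(n):
    x = rng.uniform(-1, 1, n)
    return x, np.sin(3 * x) + rng.normal(0, 0.2, n)   # true signal + noise

x_train, y_train = sample(30)
x_test, y_test = sample(200)

for degree in (1, 3, 9, 12):
    coeffs = np.polyfit(x_train, y_train, degree)
    mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: held-out MSE = {mse:.3f}")
# Low degree underfits (bias); high degree chases the noise (variance).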

It's the same reason a stock market simulator using the same method (but different inputs) is not accurate. The difference is that while 55% accuracy may be sufficient to be profitable in the stock market, a driving algorithm needs to be near perfect. It's true that a sensor's reaction time to someone braking unexpectedly may be much better than a human's and prevent a crash, so yes, in certain cases autonomous driving will be safer. But because of the exceptional cases, it may never be truly hands-off, and you may always need to be ready to intervene, just like how Tesla works today (and why it passed muster at the regulatory level).

The combination of Google hyping its project and poor public understanding of math and machine learning is why news reports just parrot Google's reliability numbers. Tesla, too, has managed to convince many people that it already offers autonomous driving, but auto-steer, auto-cruise and lane-changing tech has existed for around a decade; Volvo, Mercedes and Audi all have similar features. There is a tendency to treat this technology as magical or inevitable when there are unavoidable limitations behind it that may never be surmounted.

ChaosEngine says...

Actually, I would say I have a pretty good understanding of machine learning. I'm a software developer and while I don't work on machine learning day-to-day, I've certainly read a good deal about it.

As I've already said, Tesla's solution is not autonomous driving; I completely agree on that (which is why I said the video is probably fake, or the driver was just messing with people).

A stock market simulator is a different problem. It's trying to predict trends in an inherently chaotic system.

A self-driving car doesn't need perfect prediction; it can be reactive as well as predictive. Again, the point is not whether self-driving cars can be perfect. They don't have to be; they just have to be as good as or better than the average human driver, and frankly, that's a pretty low bar.

That said, I don't believe the first wave of self-driving vehicles will be passenger cars. It's far more likely to be freight (specifically small freight, i.e. courier vans).

I guess we'll see what happens.

bremnet says...

The inherently chaotic event that exists in the otherwise predictable / trainable environment of driving a car is the unplanned / unmeasured disturbance. In control systems that are adaptive or self-learning, the unplanned disturbance is the killer: a short-duration, unpredictable event to which the system is unable to respond within the control limits that have been defined through training, programming and/or adaptation. The response to an unplanned disturbance is often to default to an instruction that is very much human-derived (i.e. stop, exit gracefully, terminate the instruction, wait until conditions return to controllable boundary conditions, or freeze in place), which, depending on the disturbance, can be catastrophic. In our world, with humans behind the wheel, let's call the unplanned disturbance the "mistake". A tire blows, a load comes undone, an object falls out of or off of another vehicle (human, dog, watermelon, gas cylinder), etc.
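
A crude sketch of what I mean, with every bound and action invented for illustration: the learned policy applies only inside the envelope it was trained on, and anything outside falls through to a fixed, human-derived default whose appropriateness depends entirely on the situation.

# All bounds and actions here are invented; this is not any production system.
TRAINED_ENVELOPE = {
    "lateral_accel_g": (-0.4, 0.4),
    "tire_pressure_psi": (28.0, 40.0),
    "sensor_confidence": (0.7, 1.0),
}

def within_envelope(reading):
    return all(lo <= reading[key] <= hi
               for key, (lo, hi) in TRAINED_ENVELOPE.items())

def control_step(reading):
    if within_envelope(reading):
        return "learned_policy_output"        # conditions the model has seen
    # Unplanned disturbance (blowout, dropped load, ...): fall back to a
    # fixed default, which may or may not be the right call this time.
    return "decelerate_and_pull_over"

print(control_step({"lateral_accel_g": 1.1,   # say, a blown tire
                    "tire_pressure_psi": 12.0,
                    "sensor_confidence": 0.9}))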

The concern from my perspective (and I work directly with adaptive / learning control systems every day: fundamental models, adaptive neural-type predictors, genetic algorithms, etc.) is the response to these short-duration / short-response-time unplanned disturbances. The videos I've seen and the examples I have reviewed don't deal with these very short-timescale events and how to manage the response, which in many cases is an event-dependent response. I would guess that the first death that results from the actions or inaction of a self-driving vehicle will put a major dent in the program, if not halt it. Humans may be fallible, but we are remarkably (infinitely?) more adaptive in combined conscious / subconscious responses than any computer is or will be in the near future, in both the appropriateness of the response and the timescale of generating it.

In the partially controlled environment (i.e. there is no such thing as 100%) of an automated warehouse and distribution center, self-driving works. In the partially controlled environment where ONLY self-driving vehicles are present on the roadways, then again, this technology will likely succeed. The mixed environment, with self-driving vehicles co-mingled with humans (see "fallible" above), is not presently viable, and I don't think it will be for a decade or two, partially due to safety risk and partially due to the management of these short-timescale unplanned disturbances that can call for vastly different responses depending on the specific situation at hand. In the flow of traffic we encounter the majority of the time, I agree this may not be an issue to some (in 44 years of driving, I've been in 2 accidents, so I'll leave the risk assessment to the actuaries). But one death, and we'll see how high the knees jerk. And it will happen.

My 2 cents.
TB

