The Dangers of Semi-Automated Driving

Last week, a Tesla driver was killed in a crash while using Autopilot. Neither the driver nor Autopilot noticed a tractor trailer crossing the lanes ahead, and Tesla explained that the vehicle’s sensors had failed to recognize “the white side of the tractor trailer against a brightly lit sky”. The accident will undoubtedly raise necessary and important questions about fully autonomous vehicles, but it should also raise questions about semi-automated driving.

As Tesla points out, Autopilot isn’t a fully autonomous driving system, but rather “an assist feature that requires you to keep your hands on the steering wheel at all times,” and “you need to maintain control and responsibility for your vehicle” while using it. Because of this, Tesla (and many media outlets covering the accident) were quick to shift the blame from Autopilot to the driver, who didn’t comply with the instructions and placed too much trust in the feature. Reportedly, the driver was even watching a Harry Potter movie while being chauffeured by his car.

While it is true that the driver apparently didn’t follow the instructions for Autopilot, it would be too simple to absolve the technology of all fault, because semi-autonomous driving features blatantly disregard how humans function: we don’t have unlimited attention spans, we get tired, and we easily lose focus when we’re bored. Advanced-but-imperfect partial automation lulls humans into a false sense of safety, yet demands human intervention at the most critical moments, precisely when imminent danger or catastrophe looms. Don Norman has argued for many years that the transition from partial to full automation is most dangerous when it is almost complete, and in May 2016 he published a paper on the challenges of partially automated driving, which I highly recommend to anyone interested in the topic.