
Both Uber and Tesla suffered major self-driving setbacks in recent weeks. Both companies were involved in fatal traffic accidents: Uber, when its vehicle struck a pedestrian, and Tesla, after a Model X driver, Walter Huang, died when his vehicle struck a concrete median while in Autopilot mode. Huang's family has hired the law firm Minami Tamaki to investigate the situation and has claimed a preliminary study shows the Autopilot system is clearly deficient. Tesla, meanwhile, continues to put the blame entirely on the driver.

"(Our) preliminary review indicates that the navigation system of the Tesla may have misread the lane lines on the roadway, failed to detect the concrete median, failed to brake the automobile, and collection the car into the median," Minami said.

The family has claimed that Huang complained about problems with Autopilot in that specific area. Tesla argues that these very points undermine any argument that Autopilot was to blame for Huang's death. The company released the following statement to ABC News:

[Image: Tesla's statement to ABC News]

According to telemetry from the Model X, the driver's hands were not on the wheel for six seconds preceding his death, despite multiple warnings to engage with it.
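For context, driver-assist systems of this kind typically infer hand position from steering-wheel torque and escalate warnings the longer the driver stays disengaged. Here is a minimal sketch of that pattern; the thresholds, function names, and escalation steps are assumptions for illustration, not Tesla's actual Autopilot logic:

```python
# Minimal sketch of an escalating hands-on-wheel monitor. All thresholds
# and names are hypothetical; this is not any manufacturer's real code.

import time

VISUAL_ALERT_AFTER = 3.0   # seconds hands-off before a visual warning (assumed)
AUDIBLE_ALERT_AFTER = 6.0  # seconds hands-off before an audible warning (assumed)
DISENGAGE_AFTER = 10.0     # seconds hands-off before handing control back (assumed)

def monitor_hands(read_wheel_torque, alert, disengage, poll_interval=0.1):
    """Poll a steering-torque sensor and escalate alerts the longer
    the driver's hands stay off the wheel."""
    hands_off_since = None
    while True:
        if read_wheel_torque() > 0.5:     # torque implies hands on the wheel
            hands_off_since = None        # reset the escalation clock
        else:
            now = time.monotonic()
            hands_off_since = hands_off_since or now
            elapsed = now - hands_off_since
            if elapsed >= DISENGAGE_AFTER:
                disengage()               # return control to the driver
                return
            elif elapsed >= AUDIBLE_ALERT_AFTER:
                alert("audible")
            elif elapsed >= VISUAL_ALERT_AFTER:
                alert("visual")
        time.sleep(poll_interval)
```

The telemetry claim in the dispute amounts to a log of exactly this kind of loop: how long the hands-off timer ran and which warnings fired before the crash.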

The problem here goes deeper than the question of whether Tesla's Autopilot is linked to Huang's death. Proponents of self-driving cars have often pointed to the fact that tens of thousands of people die in automotive accidents every year. Human-controlled driving isn't pretty, and your average human isn't particularly good at it. Self-driving vehicles could very well improve on a bad situation.

But not much ink gets spilled on the inevitable transition periods, during which self-driving cars aren't going to be as good at driving as their human counterparts. It's easy to explain Level 5 self-driving to people, because that's when the car is going to be capable of doing everything. The lower levels, which give the vehicle partial control in certain circumstances, can only function properly if the driver is completely aware of their limitations and capabilities.
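The levels in question come from the SAE J3016 standard. A rough summary in code form, with one-line descriptions paraphrased rather than quoted from the standard:

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE J3016 driving-automation levels, paraphrased in brief."""
    NO_AUTOMATION = 0           # human does everything
    DRIVER_ASSISTANCE = 1       # system steers OR controls speed (e.g. cruise control)
    PARTIAL_AUTOMATION = 2      # system steers AND controls speed; driver must supervise
    CONDITIONAL_AUTOMATION = 3  # system drives in limited domains; driver takes over on request
    HIGH_AUTOMATION = 4         # no driver needed within a defined operating domain
    FULL_AUTOMATION = 5         # no driver needed anywhere, under all conditions

# Autopilot is a Level 2 system: the driver remains responsible at all
# times, which is exactly the point the liability dispute turns on.
print(SAELevel.PARTIAL_AUTOMATION)  # SAELevel.PARTIAL_AUTOMATION
```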

A recent video shot by someone from the Chicago area attempted to replicate the California accident and very nearly succeeded. The vehicle heads directly toward a slab of concrete before the driver takes the wheel again. In the absence of any indication that Huang was attempting to commit suicide or suffered a heart attack, stroke, or equivalent event, we can at least conclude his death was unintentional and that he took his hands off the wheel because he believed the Tesla Autopilot would accurately guide the vehicle. And while the Uber crash from last month isn't the principal focus of this story, we can also assume that the driver in that incident had no intention of killing a pedestrian.

The central problem with self-driving vehicles that aren't capable of full, robust, Level 5 performance (and none of them currently are) is that at some point, the vehicle is going to decide it can't handle road conditions. The human driver may or may not be aware that decision has been made. Even if the driver is aware of it, he or she might not be able to react quickly enough to prevent an accident. And the smarter these systems become, the greater the chance that the car might make one decision to evade a catastrophe while the driver attempts to take a different, confounding action.
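To make that timing problem concrete, here is a minimal sketch of the handoff arithmetic. The numbers are illustrative assumptions (roughly 1.5 seconds is a commonly cited human brake-reaction time), not measurements from any real system:

```python
# Minimal sketch of the takeover-request timing gap. All numbers are
# illustrative assumptions, not data from Tesla, Uber, or any regulator.

TIME_TO_HAZARD = 2.0        # seconds between detection and impact (assumed)
ALERT_LATENCY = 0.5         # seconds for the system to decide and sound an alert (assumed)
DRIVER_REACTION_TIME = 1.5  # typical human brake-reaction time, commonly cited

def handoff_margin(time_to_hazard, alert_latency, reaction_time):
    """Seconds left for an actual evasive maneuver after the driver
    retakes control. Zero or negative means there is no time to react."""
    return time_to_hazard - alert_latency - reaction_time

margin = handoff_margin(TIME_TO_HAZARD, ALERT_LATENCY, DRIVER_REACTION_TIME)
print(f"Margin for evasive action: {margin:+.1f} s")  # prints +0.0 s: no margin at all
```

Under these assumptions the driver regains control at the moment of impact, which is why a system that detects trouble only a second or two out is arguably worse than one that never hands off at all.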

Self-driving cars really could revolutionize transport long-term. They could change the dynamics of vehicle ownership, help older people retain self-sufficiency, and slash the rate of death associated with drunk driving, distracted driving, and exhausted driving. The fact that some of these gains could take several decades to fully arrive, given how long it takes vehicle fleets to turn over, is no reason not to pursue them. But uncertainty around self-driving vehicle intelligence and operational characteristics is still a problem today, and it's going to be a problem for the foreseeable future. The liability questions aren't going to go away any time soon.

The NTSB has revoked Tesla's status as a party to the investigation of the crash. In order to operate alongside the agency, Tesla is required to respect the confidentiality of the investigation. In taking the position that Huang was solely responsible for the incident, Tesla broke that requirement. The NTSB has a less rosy view of Autopilot's current functionality than Tesla does.