In March 2018, a self-driving Uber car struck and killed a pedestrian in Tempe, Arizona. The US National Transportation Safety Board’s preliminary report on the incident is horrifying.

As the vehicle and pedestrian paths converged, the self-driving system software classified the pedestrian as an unknown object, as a vehicle, and then as a bicycle with varying expectations of future travel path.

So the system didn’t know what to do. Not to worry: the Volvo’s emergency braking system would kick in, right?

The vehicle was a modified Volvo XC90 SUV. That model ships with automatic emergency braking, but Uber disabled it whenever its self-driving software was active.

Oh. Well, surely Uber’s own software includes emergency braking?

At 1.3 seconds before impact, the self-driving system determined that an emergency braking maneuver was needed to mitigate a collision.

Phew! But wait…

According to Uber, emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior. The vehicle operator is relied on to intervene and take action.

Huh. Well, they’re testing, so I guess it makes sense that the operator would be concentrating on the road, ready to intervene.

The operator is responsible for monitoring diagnostic messages that appear on an interface in the center stack of the vehicle dash and tagging events of interest for subsequent review.

And:

The system is not designed to alert the operator.
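Put the pieces together and you get a decision chain like the following. This is a hypothetical sketch of the logic the report describes, not Uber’s actual code; every name in it is mine:

```python
# A hypothetical sketch of the failure chain the NTSB report describes.
# None of these names come from Uber's codebase; they are illustrative only.

VOLVO_EMERGENCY_BRAKING = False  # factory braking disabled under computer control
UBER_EMERGENCY_BRAKING = False   # "not enabled while the vehicle is under computer control"
ALERT_OPERATOR = False           # "the system is not designed to alert the operator"

def on_imminent_collision():
    """What happens when the system decides, 1.3 seconds out, that it must brake."""
    if UBER_EMERGENCY_BRAKING:
        brake()        # never reached
    elif ALERT_OPERATOR:
        sound_alarm()  # never reached either
    # Otherwise: do nothing, and rely on an operator who is busy
    # monitoring diagnostic messages on the center-stack screen.

def brake():
    print("braking")

def sound_alarm():
    print("alert!")

on_imminent_collision()  # prints nothing; the car does not slow down
```

Run it and nothing happens, which is precisely the point.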

This is absolutely horrifying. The way this system was designed, it was inevitable that Uber would kill someone. Whoever is responsible for these decisions needs to be tried for manslaughter.