A software misreading was reportedly responsible for the death of a pedestrian struck by an Uber vehicle. Image: Compfight

Earlier this year, on March 18, one of Uber’s self-driving vehicles struck and killed a pedestrian in Arizona. The accident has been under investigation since it occurred, and on Monday Uber stated that it was caused by a software error: the system reportedly detected the person but failed to react.

Uber has reportedly discovered the reason for the failure that took the life of Elaine Herzberg as she crossed a road in Arizona. According to some reports, the Volvo XC90’s software system flagged the pedestrian as a false positive and decided to ignore her, taking no evasive action as a result.

Having identified the bug, Uber decided to halt its self-driving tests on public roads. Autonomous car companies acknowledge that driverless vehicles are not yet fully mature: their computer vision software, which detects and classifies the objects around the car, must constantly be calibrated to be more or less cautious.

With technology this new, companies tend to err heavily on the side of caution, and self-driving cars are the clearest case. That can make rides clumsy, with abrupt braking, because the vehicle will stop for almost any object on the road in order to prevent a tragedy.
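To make that tradeoff concrete, here is a minimal sketch of how a braking decision might hinge on a single detection-confidence threshold. The pipeline, names, and numbers are hypothetical, chosen only to illustrate the kind of calibration at issue, not to describe Uber’s actual system.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # what the vision system thinks the object is
    confidence: float  # 0.0 to 1.0, how sure the classifier is

# Hypothetical tuning knob: detections below this confidence are
# treated as false positives (e.g. plastic bags, sensor noise).
BRAKE_THRESHOLD = 0.7

def should_brake(detections: list[Detection]) -> bool:
    """Brake if any detection clears the confidence threshold."""
    return any(d.confidence >= BRAKE_THRESHOLD for d in detections)

# A cautious threshold (say 0.3) would brake for this uncertain
# detection; the "smoother ride" threshold above dismisses it.
print(should_brake([Detection("pedestrian", 0.55)]))  # prints False
```

Lowering the threshold makes the car brake for noise and clutter; raising it smooths the ride but risks dismissing a real obstacle, which is the failure mode at the center of this case.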

What has Uber done to prevent this from happening again?

Uber had reportedly dialed down this caution in order to provide a smoother, more fluid ride, and that tuning is cited as the main reason the Arizona crash occurred. After the accident, and after halting its public-road testing, the company brought on former NTSB chair Christopher Hart to help it strengthen the safety protocols for its self-driving vehicles.

An Uber spokesperson commented on the move: “We have initiated a top-to-bottom safety review of our self-driving vehicles program, and we have brought on former NTSB Chair Christopher Hart to advise us on our overall safety culture. Our review is looking at everything from the safety of our system to our training processes for vehicle operators, and we hope to have more to say soon.”

How often do false positives occur?

Even though the technology is new and essentially limited to three notable companies, false positives occur regularly in self-driving rides. Alphabet’s self-driving company Waymo, for example, averages one false positive every 5,600 miles.

Meanwhile, debate continues over how humans can serve as a backup in the event of software failures.

Source: Recode