When an Uber SUV in self-driving mode struck an Arizona pedestrian in March, it was the first fatal crash of its kind. But it was far from the only accident involving an autonomous vehicle. In fact, such accidents have dominated the headlines in recent months, raising questions about the safety of self-driving technologies.
How Do Autonomous Vehicle Computers Work?
There are three different modules in the computer systems used to operate self-driving cars—the perception module, the prediction module, and the response module.
- The perception module uses sensors—such as cameras, radar, and lidar (pulses of light)—to identify nearby objects.
- The prediction module determines how these objects are likely to “behave.” For example, is the truck ahead going to switch lanes?
- The response module uses information from the other two modules to determine the most appropriate response. For example, should the autonomous vehicle accelerate, decelerate, or change lanes?
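The three-stage pipeline described above can be sketched in simplified code. Everything here is illustrative—the class names, thresholds, and the constant-closing-speed assumption are hypothetical, not Uber's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class TrackedObject:
    label: str                # e.g. "vehicle", "bicycle", "unknown"
    distance_m: float         # current gap to the object, in meters
    closing_speed_mps: float  # how fast the gap is shrinking (m/s)

def perceive(raw_detections: list) -> list:
    """Perception module: turn raw sensor returns into labeled objects."""
    return [TrackedObject(d.get("label", "unknown"),
                          d["distance_m"],
                          d["closing_speed_mps"])
            for d in raw_detections]

def predict(obj: TrackedObject, horizon_s: float = 2.0) -> float:
    """Prediction module: estimate the gap `horizon_s` seconds from now,
    assuming the object keeps its current closing speed."""
    return obj.distance_m - obj.closing_speed_mps * horizon_s

def respond(objects: list) -> str:
    """Response module: choose a maneuver based on the worst predicted gap."""
    if not objects:
        return "maintain"
    worst_gap = min(predict(o) for o in objects)
    if worst_gap < 5.0:
        return "brake"
    if worst_gap < 15.0:
        return "decelerate"
    return "maintain"
```

For example, a bicycle 20 meters ahead with a closing speed of 10 m/s yields a predicted gap of zero within the two-second horizon, so this sketch would choose to brake.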
These technologies are currently being tested on public roadways in states across the nation. Arizona’s dry climate has played a major role in making it a hotbed for testing self-driving vehicles (which still perform better on dry roads). But Arizona’s loose laws and regulations concerning self-driving vehicles may change now that someone has been killed. A Boston car accident lawyer can help you determine how to proceed if you’ve been injured due to another’s negligence.
So, what actually caused the Uber self-driving crash that killed an Arizona woman this spring?
According to Sebastian Thrun, the Stanford professor who formerly led Google’s autonomous-vehicle department, the most challenging of the three modules mentioned above is perception. Although bicycles, pedestrians, and other vehicles are relatively easy to identify, rarely seen objects (think of a plastic bag floating across the road) pose a problem. Thrun says that when Google first began testing autonomous vehicles, its “perception module could not distinguish a plastic bag from a flying child.”
The National Transportation Safety Board (NTSB) investigated the Arizona accident and determined that the autonomous Uber’s computer system failed to identify Elaine Herzberg as she walked across the road with her bicycle. Her presence was detected six seconds prior to the crash, but the perception module identified her first as an unknown object, then as a vehicle, and then as a bicycle, with a path the system was unable to predict. A MA car accident attorney can help you recover damages if you’ve been injured due to another’s negligence.
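One way repeated reclassification can defeat path prediction is if the tracker throws away an object’s motion history each time its label changes, so it never accumulates enough observations to extrapolate a trajectory. The sketch below is a hypothetical simplification of that failure mode, not Uber’s actual code:

```python
class NaiveTracker:
    """Illustrative tracker that discards motion history whenever the
    perception label changes, so a frequently reclassified object never
    gets a predictable path. (Hypothetical simplification.)"""

    def __init__(self):
        self.label = None
        self.history = []  # recent positions recorded under the current label

    def update(self, label: str, position: float):
        if label != self.label:
            # Reclassification wipes the track and starts over.
            self.label = label
            self.history = []
        self.history.append(position)

    def can_predict(self) -> bool:
        # Need at least a few consistent observations to fit a path.
        return len(self.history) >= 3
```

An object relabeled on every update (unknown, then vehicle, then bicycle) never retains more than one observation, so `can_predict()` stays false even though the object was detected the whole time.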
According to the NTSB report, at 1.3 seconds before the crash, the computer system recognized the need for emergency braking, but the emergency braking had been disabled due to a potential conflict with the autonomous system. In such an event, the human driver is expected to react. Unfortunately, the safety operator was looking at the self-driving display screen at the time of the accident, and was unable to brake in time.
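The sequence the NTSB report describes can be condensed into a small decision sketch. The 1.3-second figure comes from the report; the function name, return values, and overall structure are illustrative assumptions:

```python
def braking_decision(time_to_impact_s: float,
                     emergency_braking_enabled: bool) -> str:
    """Illustrative decision logic (hypothetical structure): the system
    recognizes an emergency inside 1.3 seconds, but with automatic
    emergency braking disabled it can only defer to the human operator."""
    if time_to_impact_s > 1.3:
        return "monitor"
    if emergency_braking_enabled:
        return "emergency_brake"
    # Emergency braking disabled: the safety operator is expected to act.
    return "defer_to_human_operator"
```

The danger in such a design is the last branch: it silently assumes an attentive human, which is exactly the assumption that failed in the Arizona crash.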
It was determined that, although the Arizona accident had multiple causes, the fault ultimately lay in the design of the system. The AV system should slow down, for example, when its perception module becomes confused. Of course, unexpected braking has consequences of its own: confused self-driving vehicles have been rear-ended by human drivers after slowing unexpectedly. This is exactly why responsibility for emergency braking was assigned to the human safety operator, who is tasked with acting as the safety net when the AV system malfunctions or gets confused. For this arrangement to work, however, the human driver must watch the road as closely as the driver of a non-autonomous vehicle.