Can We Steer Self-Driving Vehicles To A Future With Less Bias?
The case for autonomous (self-driving) vehicles is clear. Experts estimate that 81% to 94% of serious crashes resulting in fatalities are caused by human error. As many as 1.3 million people are killed worldwide each year in crashes. People fall asleep at the wheel, get bored or distracted, text while driving, become impaired by alcohol or drugs, are blinded by glare or bad weather, or make poor split-second judgments that turn out badly.
You might conclude that people are terrible drivers. Yet 99.95% of all miles driven pass without incident. That’s a high standard for self-driving vehicles to meet and, better still, exceed. Given the level of investment, they likely will in the very near future.
Remarkable advances in driving safety have come from innovations in autonomous driving. Safety improvements from self-driving research and development appear in many new cars equipped with navigation and guidance systems such as GPS mapping, lane assist, backup cameras, approaching-vehicle warnings, and automatic braking for hazards. In addition to enhanced safety, autonomous vehicles could reduce gas consumption, ease traffic congestion, and cut greenhouse gas emissions.
The cost of the technology in autonomous vehicles may exceed the cost of everything else in the car combined. Most manufacturers include sophisticated functions like LIDAR for depth perception, advanced driver assistance systems (ADAS) for collision prevention, sensors, and camera-based guidance devices powered by neural networks.
Toyota, Mercedes-Benz, Volvo, Audi, Tesla, Honda, Nissan, and many others are betting their companies on self-driving because they recognize the market is shifting, and they see huge potential payoffs. The market is estimated to reach $550 billion by 2024.
Autonomous vehicles represent an outsized market opportunity with enormous upside ROI. Equally important are the investments required to make vehicles safe and win consumer confidence. Competitive pressures and financial demands on manufacturers and technology suppliers are immense. In other words, this market is a perfect storm for bias to take root and derail progress.
Why worry about bias?
Bias in data and technology is a cornerstone of unintended consequences and unforced errors. In an industry striving for perfection, there is little room for harmful externalities. Bias determines who benefits from innovation and who suffers from exclusion or elevated risk. Bias leads to faulty insights from data analysis, no matter how sophisticated the tools. The problem of recognizing and mitigating bias can’t be solved by programming, machine learning, or AI alone. Bias is corrosive and discriminatory. It is a human problem requiring human intervention.
Bias creeps in when:
- Historical data is used for mapping and navigation.
- Engineers don’t come from diverse populations.
- Developers make decisions based on their limited perspectives.
- Regulations take a narrow view of people’s needs.
- Public policy is subordinate to profit motives.
- Driving conditions are only tested in suburban settings.
- Sensors or AI systems don’t recognize urban hazards.
- Racial differences in pedestrians aren’t correctly interpreted.
- Testing is done by simulation with a limited number of models.
- Assisted driving technologies do not adapt to people with disabilities.
- Machine learning attempts to understand the state of mind of other drivers.
There are indeed many ways in which bias could undermine the success of self-driving vehicles. But this doesn’t have to be a foregone conclusion. Let’s look at a few specific ways to steer self-driving vehicles toward a future with less bias and more fairness.
Testing the brakes on simulations
Highway safety is currently measured in deaths per billion miles traveled. Toyota estimates that 8.8 billion miles of road testing are needed to validate self-driving vehicles fully. Waymo, the current industry leader in road testing, has logged about 10 million miles. At that rate, it would take an unacceptably long time to reach even one billion miles of road testing.
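A quick back-of-the-envelope calculation, using only the figures cited above, shows just how wide the gap is:

```python
# How far road testing alone falls short, using the figures above:
# 8.8 billion miles needed (Toyota's estimate) versus roughly
# 10 million cumulative road-test miles by the industry leader.
MILES_NEEDED = 8.8e9   # Toyota's estimated requirement
MILES_DRIVEN = 10e6    # Waymo's cumulative road-test miles

shortfall_factor = MILES_NEEDED / MILES_DRIVEN
print(f"Road testing so far covers 1/{shortfall_factor:.0f} "
      f"of the estimated requirement.")  # 1/880
```

In other words, the industry leader has driven less than one-eighth of one percent of the miles needed, which is why simulation must carry most of the load.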
Computer simulation with AI and machine learning will close the gap. Bias can be controlled in three ways:
- Create diverse models for the simulation, including urban and suburban, wealthy and poor neighborhoods, with hazards unique to each environment.
- Engage a diverse group of analysts to interpret the testing data to ensure a wide cross-section of perspectives on what should result from each simulation.
- Avoid trying to model the state of mind of an oncoming driver; computer simulations can’t do this without bias. Instead, use the vehicle’s AI to make split-second decisions by weighing dozens of possible outcomes.
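To make the last point concrete, here is a minimal, purely illustrative sketch of outcome weighing: the planner scores each candidate maneuver by its probability-weighted severity and picks the lowest. The maneuvers, probabilities, and severity values are invented for illustration and do not reflect any manufacturer’s actual logic.

```python
# Illustrative only: choose among maneuvers by expected risk,
# i.e., the sum of probability-weighted outcome severities.

def expected_risk(outcomes):
    """Sum of probability * severity across possible outcomes."""
    return sum(p * severity for p, severity in outcomes)

# Each maneuver maps to (probability, severity) pairs.
# All numbers below are made-up placeholder values.
maneuvers = {
    "brake_hard":  [(0.90, 0.0), (0.10, 3.0)],  # likely safe stop
    "swerve_left": [(0.70, 0.0), (0.30, 5.0)],  # riskier evasion
    "maintain":    [(0.40, 0.0), (0.60, 8.0)],  # likely collision
}

best = min(maneuvers, key=lambda m: expected_risk(maneuvers[m]))
print(best)  # brake_hard (lowest expected risk: 0.3)
```

The point of the sketch is that the system reasons over outcomes it can quantify, rather than guessing at what another driver is thinking.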
A future with less bias integrates diversity into testing.
Reverse recognition bias
Recognizing things that move is proven technology. How they behave depends in large part on the human operator. Bikes, mopeds, scooters, wheelchairs, baby strollers, and tricycles are just the start of the list. The issue is more complex when it comes to pedestrians. People vary widely in skin color, movement patterns, height, weight, walking gait, and awareness of their surroundings. Safe self-driving vehicles must be programmed to recognize the full diversity of pedestrians. A future with less bias builds technology that recognizes all races.
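One practical way to check for recognition bias is to report detection performance separately for each pedestrian subgroup instead of a single aggregate number. The sketch below is a hypothetical evaluation with invented labels and counts; the technique, disaggregated per-group recall, is the point, not the data.

```python
# Hypothetical fairness check: compute detection recall per
# pedestrian subgroup. Labels and results below are invented.
from collections import defaultdict

# (subgroup, was the pedestrian detected?) from an imagined test set
results = [
    ("lighter_skin", True), ("lighter_skin", True), ("lighter_skin", False),
    ("darker_skin", True), ("darker_skin", False), ("darker_skin", False),
]

hits = defaultdict(int)
totals = defaultdict(int)
for group, detected in results:
    totals[group] += 1
    hits[group] += detected  # True counts as 1

for group in totals:
    print(f"{group}: recall = {hits[group] / totals[group]:.2f}")
```

An aggregate recall of 0.50 would hide the gap between subgroups here; the disaggregated view surfaces it, which is what lets engineers fix it.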
Most of us accept the driving directions our phones give us as the definitive, most accurate way to navigate to parts unknown. Yet each year drivers plow into rivers and lakes or drive into abandoned construction sites because the pleasant voice on their GPS gave bad directions. Self-driving cars require far more accurate maps, which must plot not only navigation routes but also road signs, rough road surfaces, crosswalks, and detailed contours of the terrain. This type of mapping is highly labor-intensive and expensive.
Because of the cost and complexity of detailed mapping, most manufacturers choose to map small towns or suburban areas with consistent landscapes and relatively low populations. Cities need to be mapped as well; otherwise, urban residents who may need the enhanced mobility of autonomous vehicles most are excluded. The requirements of urban areas are different, but the need may be even greater. A future with less bias makes balanced choices about where to map.
Accelerating ethical issues
A future with less bias requires policymakers to tackle many ethical questions. Market leadership should not dictate governance, regulatory, and compliance decisions. Self-driving vehicles could reduce greenhouse emissions, curb gas consumption, and ease traffic jams, but only if vehicles are shared rather than locked in suburban garages most of the time.
Ethical questions that could become roadblocks for autonomous vehicle adoption include:
- Should self-driving vehicles require shared ownership?
- Will self-driving services be available to the poor, elderly, or disabled?
- Will road testing be permitted near schools, hospitals, or senior citizen housing?
- What are the tolerance levels for fatal accidents? How safe is safe enough?
- Should self-driving vehicles always have a licensed driver in the car as a backup?
- If an accident is unavoidable and the vehicle must choose between hitting an older person in a wheelchair or a child on a bike, whose life is more valuable? That decision has to be programmed in advance.
- Is self-driving technology a public or private good?
A future with less bias requires accelerating answers to difficult ethical questions.
Data and technology bias appear in many other use cases besides self-driving vehicles: lending practices, home mortgage applications, job searches, medical practice, university admissions, and many more situations where human decision making has been turned over to AI systems. Controlling bias begins with understanding its sources and recognizing unfair outcomes. This topic is so important that we created a Bootcamp about it for technology and business leaders seeking answers for curbing bias.
Register today for the Skillsoft Bootcamp “Understanding Bias in Data” September 29-30, 2020.
Join Princeton University’s Dr. Ruha Benjamin, Data Society’s Merav Yuravlivker, and Harvard’s Matthew Finney for an insightful two-day Bootcamp.
Become an active contributor in crafting a future with less bias.