Skillsoft Blog

Learn To Fall In Love Again With AI Systems

September 21, 2020 | by Skillsoft Blog

Mistakes are a natural part of human behavior. Most of our mistakes are small, inconsequential miscalculations or temporary lapses in good judgment. We all have bad days, mood swings, or senior moments that cause unforced errors. Impulsiveness and fatigue don’t help us make the right decisions either. No wonder artificial intelligence (AI) was so warmly embraced as a remedy for the human condition.

As AI adoption accelerated, we handed over high-stakes decisions to technology solutions. We expect AI systems to determine fair prison sentences, approve credit applications equitably, and perform facial recognition without racial or gender bias.

The advantages of AI

We fell in love with AI systems as the source for predictive modeling and automated decision-making. Visionaries imagined an AI utopia without humans messing things up. Developers rushed to build AI systems that operated efficiently, used sophisticated algorithms to deliver consistent results, and got smarter over time with machine learning. Some 64% of all businesses depend on AI for productivity growth.

AI systems are versatile, accurate, reliable, autonomic (self-correcting), fast, and affordable. That’s a strong list of advantages to justify an AI solution for nearly every computing need.

AI systems are successful at some pretty tricky assignments, such as:

  • Handling hazardous situations with AI-guided robots to fight fires, disable bombs, or clean up chemical spills.
  • Building better customer relationships with service applications like ‘Einstein’ from Salesforce or ‘Erica’ from Bank of America, helping millions of customers get the help they need and improving marketing ROI by as much as 4-5 times through personalized marketing messages.
  • Innovating in ways not imaginable without AI. Domino’s Pizza uses AI to integrate weather data into staffing and supply chain management. Don’t be surprised if your pizza gets delivered by a self-driving vehicle powered by AI.

Advances in the use of AI were the start of a perfect love story…until the harmful effects of bias were recognized.

We became aware of bias in AI systems that performed employment screenings, university admissions, criminal justice, bank lending practices, medical services, and more. Bias fueled a growing outcry against AI systems. Were we trading fairness for consistency? Social justice for affordability?

Bias in technology is a root cause of unfair outcomes. Biased data leads to faulty insights and poor-quality recommendations. It doesn’t matter if you use the best, most sophisticated AI tools. Bias is corrosive and discriminatory.

Falling out of love

We learned the hard way that AI systems don’t mask human biases. Decisions by AI systems may be just as unfair, prejudiced, or discriminatory as the humans who conceived and encoded bias in them. The scale of AI applications makes undesirable outcomes more impactful and troubling.

Across job roles and levels, the software development community lacks diversity, and that narrowness is a primary source of bias. After all, most AI systems are modeled on human behavior, and narrow perspectives increase the probability of bias.

Machine learning (ML) is another entry point for bias. AI systems are trained on data. When historical data is used for training, it often reflects the thinking of the time period in which it was collected. Personal information such as military service, arrest records, or even zip code can be used in a racially biased way. The data selected for ML training may create an echo chamber that amplifies bias.
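The proxy problem above can be shown with a minimal sketch. The data, zip codes, and "model" here are entirely hypothetical: even when a protected attribute never appears in the training data, a correlated feature like zip code lets a model reproduce the historical disparity.

```python
# Toy illustration with hypothetical data: a biased historical process produced
# different approval rates in two zip codes. The protected attribute is absent,
# yet zip code acts as a proxy for it.
history = (
    [("10001", True)] * 90 + [("10001", False)] * 10 +   # zip correlated with group A
    [("60601", True)] * 30 + [("60601", False)] * 70     # zip correlated with group B
)

def train_majority_rule(records):
    """'Train' a stand-in model by memorizing the majority outcome per zip code."""
    by_zip = {}
    for zip_code, approved in records:
        by_zip.setdefault(zip_code, []).append(approved)
    return {z: sum(v) > len(v) / 2 for z, v in by_zip.items()}

model = train_majority_rule(history)
# The model's decisions split along the zip-code proxy: approve 10001, deny 60601,
# echoing the bias in the historical data it was trained on.
print(model)  # {'10001': True, '60601': False}
```

Dropping the sensitive column is not enough; the bias survives in correlated features, which is why training data itself must be audited.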

Unlike human mistakes, the mistakes AI systems make are often large, consequential, and harmful. Such errors are followed by public outrage that damages brands and reputations. It’s risky business to depend exclusively on AI systems where bias isn’t checked.

Love lessons learned

Nearly every problem that required a technology solution looked like a recipe for AI. That is, until we recognized the disruptive impacts of bias. Awareness is the first step toward controlling bias. Along the journey, we’ve learned some important lessons, such as:

  1. Don’t take fairness for granted. Fairness is a design characteristic that should be built into every aspect of algorithm and AI system development.
  2. High-stakes applications require side-by-side participation of human and AI decision-makers. The risk of unexplainable or unfavorable outcomes is too great.
  3. Choose outreach over outrage. AI systems that have strong community impacts require an understanding of public sentiment and an opportunity for sharing different points of view.
  4. Avoid technology lightning rods. Facial recognition applications are not at an acceptable level of accuracy in a highly diverse society.
  5. Personal-data security is a vulnerability that must be addressed, or we risk losing public trust.
  6. Employees have agency to limit bias. For example, Google employees refused to assist with family separations at the borders.

With greater awareness of bias, and by applying these lessons, we can begin to restore trust in algorithms and AI systems. Controlling bias and regaining trust requires some interventions.

Learn to love AI again

No system or process is purely technical; our technology has a social component. To reduce bias, new rules must ensure that our AI systems don’t repeat the mistakes of the past and that they deliver the promise of AI within a social context of fairness and equity.

New rules for AI systems to rebuild trust:

  1. Create fairness metrics and measure fairness at each step of the technology development process: design, coding, testing, feedback, analysis, reporting, and risk mitigation.
  2. Design tests that challenge AI system results, such as counterfactual testing. Ensure that outcomes can be repeated and explained.
  3. Perform side-by-side AI and human testing. Use third-party judges to challenge the accuracy and possible biases in the results.
  4. Assign lines of responsibility across an organization. Educate employees that driving out bias is everyone’s mandate.
  5. Keep ahead of the bad guys with up-to-date data security solutions and procedures. Hacking is a possible entry point for bias.
  6. Invest in the diversity of the software development community. Support community colleges and certificate programs to attract a more diverse group of software development candidates.

Encouraging and hopeful things are happening with AI. The lessons learned and new rules are being demonstrated in exciting ways. A future with less bias can help us fall in love with AI once again. Here are examples of the new AI you can’t help but love:

Improve the news

An MIT AI lab under Max Tegmark built an app to help consumers gain better control over the news with a much broader range of choices. The app was developed to fill a need for more accurate news classification with a clearer understanding of point of view. Before launch, the development team involved a wide community of interested participants and crowd-sourced a rating system for 500+ news outlets. After the initial human assessment, ML took over and now ranks and rates news stories. The app empowers users to select the viewpoints they want.

LinkedIn Fairness Tool (LIFT)

LIFT is an open-source toolkit to help identify bias. LinkedIn is the world’s largest professional networking site, with more than 700 million members. LIFT was developed to identify bias in job search algorithms. LinkedIn joins IBM and Accenture in building toolkits to combat bias in business.

Wastewater surveillance

An app enables rapid DNA testing of wastewater for COVID-19. This AI system may provide one of the earliest signals of a hot spot without any community bias. Once COVID-19 is detected in wastewater, hospitals and first responders can gear up for an increased caseload. COVID-19 is a difficult disease to control because some people have no symptoms, and large segments of a community may be excluded from individual testing. Wastewater surveillance carries no individual bias, permitting one of the earliest forms of detection.

These examples demonstrate that technology solutions can solve for bias. Controlling bias is difficult. We are at a tipping point in our commitment to fairness and the ethical use of landmark technologies. Let’s keep the love story moving forward.
