MIT Sloan Management Review Article on Diversity in AI: The Invisible Men and Women

  • Ayanna Howard, Charles Isbell
  • MIT Sloan Management Review
  • 2020

In June, a crisis erupted in the artificial intelligence world. Conversation on Twitter exploded after a new tool for creating realistic, high-resolution images of people from pixelated photos showed its racial bias, turning a pixelated yet recognizable photo of former President Barack Obama into a high-resolution photo of a white man. Researchers soon posted images of other famous people of color — Black, Asian, and Indian — being turned white.

The conversation became intense. Two well-known AI corporate researchers — Facebook’s chief AI scientist, Yann LeCun, and Google’s co-lead of AI ethics, Timnit Gebru — expressed strongly divergent views about how to interpret the tool’s error. A heated, multiday online debate ensued, dividing the field into two distinct camps: Some argued that the bias shown in the results came from bad (that is, incomplete) data being fed into the algorithm, while others argued that it came from bad (that is, short-sighted) decisions about the algorithm itself, including what data to consider.

About the Author

Ayanna Howard (@robotsmarts) is the Linda J. and Mark C. Smith Professor and Chair of the School of Interactive Computing in the College of Computing at Georgia Tech. She also serves as the director of the Human-Automation Systems Lab in the School of Electrical and Computer Engineering. Charles Isbell (@isbellhfh) is the dean and the John P. Imlay Jr. Chair of the College of Computing at Georgia Tech.
