MIT Sloan Management Review Article on What Humans Lose When We Let AI Decide

  • Christine Moser, Dirk Lindebaum, Frank den Hond
  • MIT Sloan Management Review
  • 2022

It’s been more than 50 years since HAL, the malevolent computer in the movie 2001: A Space Odyssey, first terrified audiences by turning against the astronauts he was supposed to protect. That cinematic moment captures what many of us still fear in AI: that it may gain superhuman powers and subjugate us. But instead of worrying about futuristic sci-fi nightmares, we should wake up to an equally alarming scenario that is unfolding before our eyes: We are increasingly, unsuspectingly yet willingly, abdicating our power to make decisions based on our own judgment, including our moral convictions. What we believe is “right” risks becoming no longer a question of ethics but simply the “correct” result of a mathematical calculation.

Day to day, computers already make many decisions for us, and on the surface, they seem to be doing a good job. In business, AI systems execute financial transactions and help HR departments assess job applicants. In our private lives, we rely on personalized recommendations when shopping online, monitor our physical health with wearable devices, and live in homes equipped with “smart” technologies that control our lighting, climate, entertainment systems, and appliances.

About the Authors

Christine Moser (@tineadam) is an associate professor of organization theory at Vrije Universiteit Amsterdam in the Netherlands. Frank den Hond is the Ehrnrooth Professor in Management and Organisation at the Hanken School of Economics in Finland and is affiliated with Vrije Universiteit Amsterdam. Dirk Lindebaum is a senior professor in organization and management at Grenoble Ecole de Management.
