MIT Sloan Management Review Article on How AI Skews Our Sense of Responsibility

  • Ryad Titah
  • MIT Sloan Management Review
  • 2024

Research shows how using an AI-augmented system may affect humans’ perception of their own agency and responsibility.

As artificial intelligence plays an ever-larger role in automated systems and decision-making processes, the question of how it affects humans’ sense of their own agency is becoming less theoretical — and more urgent. It’s no surprise that humans often defer to automated decision recommendations, with exhortations to “trust the AI!” spurring user adoption in corporate settings. However, there’s growing evidence that AI diminishes users’ sense of responsibility for the consequences of those decisions.

This question is largely overlooked in current discussions about responsible AI. In reality, responsible AI practices are intended to manage legal and reputational risk, a limited view of responsibility if we draw on German philosopher Hans Jonas's useful conceptualization. He defined three types of responsibility, but AI practice appears concerned with only the first two. The first is legal responsibility, wherein an individual or corporate entity is held responsible for repairing damage or compensating for losses, typically via civil law. The second is moral responsibility, wherein individuals are held accountable via punishment, as in criminal law.

About the Author

Ryad Titah is associate professor and chair of the Academic Department of Information Technologies at HEC Montréal. The research in progress described in this article is being conducted with Zoubeir Tkiouat, Pierre-Majorique Léger, Nicolas Saunier, Philippe Doyon-Poulin, Sylvain Sénécal, and Chaïma Merbouh.

