Artificial Intelligence Safety and Security

  • 14h 43m
  • Roman V. Yampolskiy
  • CRC Press
  • 2019

The history of robotics and artificial intelligence is in many ways also the history of humanity's attempts to control such technologies. From the Golem of Prague to the military robots of today, the debate continues over what degree of independence such entities should have and how to ensure that they do not turn on us, their inventors. Numerous recent advances in the research, development, and deployment of intelligent systems are well publicized, but the safety and security issues related to AI are rarely addressed. This book aims to mitigate this fundamental problem. It comprises chapters by leading AI safety researchers addressing different aspects of the AI control problem as it relates to the development of safe and secure artificial intelligence. It is the first edited volume dedicated to the challenges of constructing safe and secure advanced machine intelligence.

The chapters vary in length and technical content, from broad-interest opinion essays to highly formalized algorithmic approaches to specific problems. All chapters are self-contained and can be read in any order, or skipped, without loss of comprehension.

About the Editor

Dr. Roman V. Yampolskiy is a tenured Associate Professor in the Department of Computer Engineering and Computer Science at the Speed School of Engineering, University of Louisville. He is the founding and current director of the Cyber Security Lab and the author of many books, including Artificial Superintelligence: A Futuristic Approach. During his tenure at UofL, Dr. Yampolskiy has been recognized as Distinguished Teaching Professor, Professor of the Year, Faculty Favorite, Top 4 Faculty, Leader in Engineering Education, Top 10 Online College Professor of the Year, and winner of the Outstanding Early Career in Education award, among many other honors and distinctions. Yampolskiy is a Senior Member of IEEE and AGI, a member of the Kentucky Academy of Science, a former Research Advisor for MIRI, and an Associate of GCRI.

Dr. Yampolskiy's main areas of interest are AI Safety, Artificial Intelligence, Behavioral Biometrics, Cybersecurity, Genetic Algorithms, and Pattern Recognition. He is the author of over 150 publications, including multiple journal articles and books. His research has been cited by more than 1,000 scientists and profiled in American and foreign popular magazines (New Scientist, Poker Magazine, Science World Magazine), on dozens of websites (BBC, MSNBC, Yahoo! News), on radio (German National Radio, Swedish National Radio), and on TV. Dr. Yampolskiy's research has been featured more than 1,000 times in media reports in 30 languages.

In this Book

  • Why the Future Doesn't Need Us
  • The Deeply Intertwined Promise and Peril of GNR
  • The Basic AI Drives
  • The Ethics of Artificial Intelligence
  • Friendly Artificial Intelligence: The Physics Challenge
  • MDL Intelligence Distillation: Exploring Strategies for Safe Access to Superintelligent Problem-Solving Capabilities
  • The Value Learning Problem
  • Adversarial Examples in the Physical World
  • How Might AI Come About?: Different Approaches and their Implications for Life in the Universe
  • The MADCOM Future: How Artificial Intelligence will Enhance Computational Propaganda, Reprogram Human Culture, and Threaten Democracy … and what can be Done About it
  • Strategic Implications of Openness in AI Development
  • Using Human History, Psychology, and Biology to Make AI Safe for Humans
  • AI Safety: A First-Person Perspective
  • Strategies for an Unfriendly Oracle AI with Reset Button
  • Goal Changes in Intelligent Agents
  • Limits to Verification and Validation of Agentic Behavior
  • Adversarial Machine Learning
  • Value Alignment Via Tractable Preference Distance
  • A Rationally Addicted Artificial Superintelligence
  • On the Security of Robotic Applications Using ROS
  • Social Choice and the Value Alignment Problem
  • Disjunctive Scenarios of Catastrophic AI Risk
  • Offensive Realism and the Insecure Structure of the International System: Artificial Intelligence and Global Hegemony
  • Superintelligence and the Future of Governance: On Prioritizing the Control Problem at the End of History
  • Military AI as a Convergent Goal of Self-Improving AI
  • A Value-Sensitive Design Approach to Intelligent Agents
  • Consequentialism, Deontology, and Artificial Intelligence Safety
  • Smart Machines ARE a Threat to Humanity