Uncertainty

Artificial Intelligence
  • 13 Videos | 45m 30s
  • Includes Assessment
  • Earns a Badge
Many problems aren't fully observable and involve some degree of uncertainty, which makes them challenging for AI to solve. Discover how agents can deal with uncertainty and still make the best possible decisions.

WHAT YOU WILL LEARN

  • describe uncertainty and how it applies to AI
  • describe how probability theory is used to represent knowledge and help an intelligent agent make decisions
  • describe utility theory and how an agent can calculate the expected utility of its decisions
  • describe how preferences are involved in decision making and how the same problem can have different utility functions for different agents
  • describe how risks are taken into account when calculating utility and how an agent's attitude toward risk can change the utility function
  • describe the utility of information gain and how information gain can influence decisions
  • define Markov chains
  • define the Markov Decision Process and how it applies to AI
  • describe the value iteration algorithm for deciding on an optimal policy for a Markov Decision Process (a minimal sketch follows this list)
  • define the partially observable Markov Decision Process and contrast it with a regular Markov Decision Process
  • describe how the value iteration algorithm is used with the partially observable Markov Decision Process
  • describe how a partially observable Markov Decision Process can be implemented with an intelligent agent
  • describe the Markov Decision Process and how it can be used by an intelligent agent
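
As a taste of the value iteration objective above, here is a minimal Python sketch of the algorithm on a toy, fully observable Markov Decision Process. The two states, actions, rewards, and transition probabilities are invented for illustration only and are not taken from the course videos; the Bellman backup inside the loop is where the expected utility of each action is computed.

    # Minimal value iteration sketch for a toy, fully observable MDP.
    # States, actions, rewards and transition probabilities are hypothetical.

    GAMMA = 0.9      # discount factor
    THETA = 1e-6     # convergence threshold

    states = ["sunny", "rainy"]
    actions = ["go", "stay"]

    # transitions[state][action] = list of (probability, next_state, reward)
    transitions = {
        "sunny": {
            "go":   [(0.8, "sunny", 5), (0.2, "rainy", -1)],
            "stay": [(1.0, "sunny", 1)],
        },
        "rainy": {
            "go":   [(0.6, "sunny", 5), (0.4, "rainy", -1)],
            "stay": [(1.0, "rainy", 0)],
        },
    }

    V = {s: 0.0 for s in states}   # value estimates, initialised to zero

    while True:
        delta = 0.0
        for s in states:
            # Expected utility (Bellman backup) of each action in state s.
            q = {
                a: sum(p * (r + GAMMA * V[s2]) for p, s2, r in transitions[s][a])
                for a in actions
            }
            best = max(q.values())
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < THETA:          # stop once value estimates have converged
            break

    # Greedy policy derived from the converged values.
    policy = {
        s: max(actions,
               key=lambda a: sum(p * (r + GAMMA * V[s2])
                                 for p, s2, r in transitions[s][a]))
        for s in states
    }
    print(V)       # converged state values
    print(policy)  # optimal action in each state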

IN THIS COURSE

  1. What Is Uncertainty? (3m 29s)
  2. Uncertainty Representation (5m 7s)
  3. Utility Theory (1m 18s)
  4. Utility and Preferences (3m 9s)
  5. Utility and Risks (3m 45s)
  6. Value of Information (2m 29s)
  7. Markov Chains (3m 20s)
  8. Markov Decision Process (2m 27s)
  9. MDP Value Iteration (2m 29s)
  10. Partially Observable Markov Decision Process (POMDP) (2m 46s)
  11. POMDP Value Iteration (3m 30s)
  12. Applying POMDPs (3m 7s)
  13. Exercise: Describe the Markov Decision Process (2m 35s)

EARN A DIGITAL BADGE WHEN YOU COMPLETE THIS COURSE

Skillsoft is providing you the opportunity to earn a digital badge upon successful completion of this course; the badge can be shared on any social network or business platform.

Digital badges are yours to keep, forever.
