Azure AI Fundamentals: Evaluating Models with the ML Designer

Azure 2020
  • 16 Videos | 2h 9m 34s
  • Includes Assessment
  • Earns a Badge
To build a powerful and useful machine learning deployment, you must be able to evaluate and verify the AI model and its data, as well as the accuracy and effectiveness of its predictions. Azure Machine Learning Studio and the Designer provide several easy-to-use methods for scoring and evaluating a model. In this course, you'll learn how to score and evaluate models and how to interpret the results produced by some common model types. You'll also explore how to create an inference pipeline, add a web service output to provide external access to the model, and deploy and test a predictive web service. This course is one of a collection that prepares learners for the Microsoft Azure AI Fundamentals (AI-900) exam.

WHAT YOU WILL LEARN

  • discover the key concepts covered in this course
  • add a Scoring model component in the ML Designer
  • describe model evaluation types such as MAE and R2 (see the metrics sketch after this list)
  • use an evaluator on a model and interpret the metrics
  • run and monitor a complete pipeline
  • analyze the evaluation results in the output and logs section in the ML Designer
  • identify and investigate the details of the evaluation results
  • visualize the scoring data from the Scoring model
  • investigate the logs and results that are significant when running a Regression model
  • interpret the results from running a Classification model
  • interpret the results and logs from running a Clustering model
  • create an inference pipeline using a Python script (see the script skeleton after this list)
  • add a web service output to provide external access to the model
  • deploy the model as a predictive service
  • test the predictive service from an external app (see the request example after this list)
  • summarize the key concepts covered in this course
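
The course contrasts regression metrics such as MAE (mean absolute error) and R2 (coefficient of determination). As a point of reference, here is a minimal Python sketch computing both with scikit-learn outside the Designer; the sample values are illustrative only and are not taken from the course.

    # Minimal sketch: computing MAE and R2 with scikit-learn.
    # y_true/y_pred stand in for labels and Score Model predictions.
    from sklearn.metrics import mean_absolute_error, r2_score

    y_true = [3.0, 5.0, 2.5, 7.0]   # observed target values
    y_pred = [2.8, 5.3, 2.9, 6.6]   # scored (predicted) values

    print("MAE:", mean_absolute_error(y_true, y_pred))  # average absolute error
    print("R2: ", r2_score(y_true, y_pred))             # variance explained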
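
In the ML Designer, custom Python logic runs inside the Execute Python Script component, whose entry point is a function named azureml_main that receives up to two pandas DataFrames and returns a DataFrame (or a tuple of DataFrames). The skeleton below is a minimal illustration of that contract, not code from the course; the drop-missing-rows step is an assumed example transformation.

    # Minimal sketch of an Execute Python Script entry point in the
    # ML Designer. The component calls azureml_main with the connected
    # dataset(s) and passes the returned DataFrame downstream.
    import pandas as pd

    def azureml_main(dataframe1=None, dataframe2=None):
        # Assumed example step: drop rows with missing values before scoring.
        cleaned = dataframe1.dropna()
        return cleaned,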
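
A deployed predictive service is exposed as a REST endpoint, so an external app can test it with a plain HTTP POST. The sketch below uses only the Python standard library; the endpoint URL, API key, and input schema are placeholders, so copy the real values and payload format from the endpoint's Consume tab in Azure Machine Learning Studio.

    # Minimal sketch: testing a deployed predictive service from an
    # external app. URL, key, and payload schema are placeholders.
    import json
    import urllib.request

    scoring_uri = "https://<your-endpoint>/score"  # placeholder URL
    api_key = "<your-api-key>"                     # placeholder key

    body = json.dumps({"Inputs": {"data": [{"feature1": 1.0}]}}).encode("utf-8")
    headers = {
        "Content-Type": "application/json",
        "Authorization": "Bearer " + api_key,
    }

    req = urllib.request.Request(scoring_uri, body, headers)
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read()))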

IN THIS COURSE

  1. Course Overview (1m 27s)
  2. Adding and Using Scoring on Models (7m 32s)
  3. Model Evaluation Types (7m 14s)
  4. Using Evaluators on the Model (6m 10s)
  5. Running a Pipeline (10m 44s)
  6. Analyzing the Evaluation Results Output and Logs (8m 32s)
  7. Exploring the Evaluation Results Details (9m 24s)
  8. Visualizing the Data in the Scoring Model (9m 2s)
  9. Investigating Results from a Regression Model (10m 17s)
  10. Interpreting Results from a Classification Model (10m 22s)
  11. Investigating Results from a Clustering Model (10m 50s)
  12. Creating an Inference Pipeline (10m 56s)
  13. Adding a Web Service Output (6m 59s)
  14. Deploying a Predictive Service (6m 29s)
  15. Testing a Predictive Service (5m 41s)
  16. Course Summary (55s)

EARN A DIGITAL BADGE WHEN YOU COMPLETE THIS COURSE

Skillsoft is providing you the opportunity to earn a digital badge upon successful completion of this course, which can be shared on any social network or business platform.

Digital badges are yours to keep, forever.