Linear Models & Gradient Descent: Managing Linear Models

Intermediate
  • 11 Videos | 52m 16s
  • Includes Assessment
  • Earns a Badge
Explore the concept of machine learning linear models, the classifications of linear models, and the prominent statistical approaches used to implement them. This 11-video course also explores the concepts of bias, variance, and regularization. Key concepts covered here include: linear models and the classifications used in predictive analytics; the statistical approaches used to implement linear models, including single regression, multiple regression, and analysis of variance (ANOVA); and the essential components of a generalized linear model (random component, linear predictor, and link function). Next, discover the differences between the ANOVA and analysis of covariance (ANCOVA) approaches to statistical testing; learn to implement linear regression models using Scikit-learn; and learn how bias, variance, and regularization are used to evaluate predictive models. Learners then explore ensemble techniques, see how bagging and boosting algorithms are used to manage predictions, and implement a bagging algorithm with the random forest approach using Scikit-learn. Finally, observe how to implement a boosting ensemble algorithm using the AdaBoost classifier in Python.
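As a quick taste of the Scikit-learn workflow the "Linear Model Implementation" video covers, a multiple regression fit looks roughly like the sketch below. The synthetic data, feature count, and train/test split are illustrative assumptions, not the course's own example:

```python
# Minimal sketch: fitting a multiple linear regression model with Scikit-learn.
# The toy dataset below is made up for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Generate toy data: y depends linearly on two features plus noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.5, size=200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LinearRegression()
model.fit(X_train, y_train)

print("coefficients:", model.coef_)       # estimated slopes for each feature
print("intercept:", model.intercept_)     # estimated bias term
print("R^2 on test data:", model.score(X_test, y_test))
```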

WHAT YOU WILL LEARN

  • discover the key concepts covered in this course
  • define linear models and the classifications of linear models used in predictive analytics
  • recognize the different statistical approaches used to implement linear models (single regression, multiple regression, and ANOVA)
  • define generalized linear models and their essential components (random component, linear predictor, and link function)
  • compare the ANOVA and ANCOVA approaches to statistical testing
  • demonstrate the implementation of linear regression models using Scikit-learn
  • describe the concepts of bias, variance, and regularization and their use in evaluating predictive models
  • define ensemble techniques and illustrate how bagging and boosting algorithms are used to manage predictions
  • implement bagging algorithms with the random forest approach using Scikit-learn (see the sketch after this list)
  • implement boosting ensemble algorithms using the AdaBoost classifier in Python
  • list the classifications of linear models, recall the essential components of generalized linear models, and implement a boosting algorithm using the AdaBoost classifier
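The last few objectives name specific Scikit-learn estimators. A minimal sketch of how those bagging (random forest) and boosting (AdaBoost) implementations typically look is below; the dataset (a built-in Scikit-learn sample set) and hyperparameters are assumptions for illustration, not the course's exact code:

```python
# Minimal sketch: bagging via RandomForestClassifier and boosting via AdaBoostClassifier.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Bagging: a random forest averages many decision trees fit on bootstrap samples.
bagging = RandomForestClassifier(n_estimators=100, random_state=42)
bagging.fit(X_train, y_train)
print("random forest accuracy:", bagging.score(X_test, y_test))

# Boosting: AdaBoost fits weak learners sequentially, reweighting misclassified samples.
boosting = AdaBoostClassifier(n_estimators=100, random_state=42)
boosting.fit(X_train, y_train)
print("AdaBoost accuracy:", boosting.score(X_test, y_test))
```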

IN THIS COURSE

  1. Course Overview (53s)
  2. Linear Model and its Classification (7m 9s)
  3. Linear Modeling Approach (4m 24s)
  4. Generalized Linear Model (2m 55s)
  5. ANOVA and ANCOVA (3m 39s)
  6. Linear Model Implementation (3m 49s)
  7. Bias, Variance and Regularization (6m 51s)
  8. Ensemble Techniques (7m 28s)
  9. Bagging Implementation (3m 35s)
  10. Implementing Boosting Algorithm (4m 10s)
  11. Exercise: Linear Models and Ensemble (2m 55s)

EARN A DIGITAL BADGE WHEN YOU COMPLETE THIS COURSE

Skillsoft is providing you the opportunity to earn a digital badge upon successful completion of this course, which can be shared on any social network or business platform.

Digital badges are yours to keep, forever.