Working with Google BERT: Elements of BERT

Artificial Intelligence
  • 15 Videos | 1h 20m
  • Includes Assessment
  • Earns a Badge
Adopting foundational natural language processing (NLP) techniques, together with the Bidirectional Encoder Representations from Transformers (BERT) model developed by Google, allows developers to integrate NLP pipelines into their projects efficiently and without large-scale data collection and processing. In this course, you'll explore the concepts and techniques that lay the foundation for working with Google BERT. You'll start by examining aspects of NLP useful in developing advanced pipelines: supervised and unsupervised learning, language models, transfer learning, and transformer models. You'll then identify how BERT relates to NLP, examine its architecture and variants, and review some of its real-world applications. Finally, you'll work with BERT on an Amazon review dataset and a Twitter dataset to develop sentiment predictors and classifiers.

WHAT YOU WILL LEARN

  • discover the key concepts covered in this course
  • compare approaches to supervised and unsupervised learning in NLP
  • define the concept of language models and recognize their purpose
  • list multiple legacy language models and their use cases
  • describe how deep learning neural networks can create language models
  • name state-of-the-art language models and recognize their utility
  • describe the purpose of language representation in NLP pipelines and neural network models
  • outline how developers make use of transfer learning
  • describe the concept and purpose of transformer models
  • describe Google BERT and how it is used in NLP products
  • outline Google BERT's architecture and list use cases of its variants
  • name multiple real-world problems in NLP that are solved by Google BERT
  • work with an Amazon review dataset and Google BERT to develop sentiment predictors
  • work with a Twitter dataset and Google BERT to create disaster Tweet classifiers
  • summarize the key concepts covered in this course
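Before either of the hands-on objectives above, raw text must be converted into BERT's input tokens, which is done with WordPiece subword tokenization. As a rough illustration of the idea, here is a toy sketch of the greedy longest-match scheme WordPiece uses, with a tiny made-up vocabulary standing in for BERT's real vocabulary of roughly 30,000 entries:

```python
# Toy sketch of BERT-style WordPiece tokenization: greedily match the
# longest vocabulary entry, prefixing continuation pieces with "##".
# VOCAB is a made-up example, not BERT's actual vocabulary.

VOCAB = {"un", "##aff", "##able", "play", "##ing", "the", "[UNK]"}

def wordpiece_tokenize(word, vocab=VOCAB):
    """Split a single lowercase word into WordPiece subtokens."""
    tokens, start = [], 0
    while start < len(word):
        end, match = len(word), None
        while start < end:
            piece = word[start:end]
            if start > 0:
                piece = "##" + piece  # mark continuation of a word
            if piece in vocab:
                match = piece
                break
            end -= 1  # shrink the candidate until it is in the vocabulary
        if match is None:
            return ["[UNK]"]  # no subword match: map the whole word to [UNK]
        tokens.append(match)
        start = end
    return tokens

print(wordpiece_tokenize("playing"))    # ['play', '##ing']
print(wordpiece_tokenize("unaffable"))  # ['un', '##aff', '##able']
```

In the real pipeline this splitting is handled by a library tokenizer; the sketch only shows why BERT can represent words it has never seen by composing known subword pieces.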

IN THIS COURSE

  1. Course Overview (1m 11s)
  2. Supervised vs. Unsupervised NLP (5m 13s)
  3. The Purpose of Language Models (5m)
  4. Legacy Language Models (6m 45s)
  5. Deep Learning-based Language Models (6m 21s)
  6. Current State-of-the-Art Language Models (6m 37s)
  7. The Purpose of Language Representation (2m 6s)
  8. Transfer Learning in NLP (3m 34s)
  9. The Purpose of Transformer Models (4m 59s)
  10. Google BERT and NLP (5m 34s)
  11. BERT Architecture and Variants (4m)
  12. NLP Problems Solved by BERT (1m 51s)
  13. Developing an Amazon Review Sentiment Predictor (7m 2s)
  14. Creating a Disaster Tweet Classifier (6m 56s)
  15. Course Summary (52s)

EARN A DIGITAL BADGE WHEN YOU COMPLETE THIS COURSE

Skillsoft is providing you the opportunity to earn a digital badge upon successful completion of this course, which can be shared on any social network or business platform.

Digital badges are yours to keep, forever.
