Advanced NLP: Introduction to Transformer Models

Natural Language Processing 2022    |    Intermediate
  • 12 Videos | 40m 55s
  • Includes Assessment
  • Earns a Badge
With recent advances in cheap GPU compute and natural language processing (NLP) research, companies and researchers have introduced many powerful models and architectures that have taken NLP to new heights. In this course, learn about Transformer models like BERT and GPT and how these models have matured AI across NLP applications. Next, examine the fundamentals of Transformer models and their architectures. Finally, discover the importance of attention mechanisms in the Transformer architecture and how they help achieve state-of-the-art results in NLP tasks. Upon completing this course, you'll be able to understand different aspects of Transformer architectures, such as the self-attention layer and encoder-decoder models.
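The self-attention layer mentioned above can be illustrated with a minimal sketch of scaled dot-product attention, the core operation inside Transformer models. This is an illustrative NumPy toy example, not course material; the function name and identity Q/K/V projections are assumptions chosen for brevity.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # (seq_len, seq_len) token-to-token scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V  # each output row is a weighted sum of value vectors

# Toy example: 3 tokens, embedding dimension 4
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))
# In real self-attention, Q, K, and V are learned linear projections of the
# same input; identity projections are used here to keep the sketch short.
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # (3, 4)
```

Each output row mixes information from every input token, weighted by how strongly the tokens attend to one another.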

WHAT YOU WILL LEARN

  • discover the key concepts covered in this course
  • outline and apply sequence-to-sequence (Seq2Seq) encoder-decoder network architecture
  • recall how to use the attention method to improve NLP model results
  • identify the fundamentals of the Transformer architecture
  • recall the fundamentals of the self-attention layer in the Transformer architecture
  • state the fundamentals of multi-head attention in the Transformer architecture
  • identify the encoder block in the Transformer architecture
  • identify the decoder block in the Transformer architecture
  • outline the fundamentals of Transformer models
  • recall industry use cases for Transformer models
  • state the challenges for Transformer models
  • summarize the key concepts covered in this course
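The multi-head attention objective above can be sketched as well: the model dimension is split into several independent attention heads, each attends separately, and the head outputs are concatenated and projected. This is a hedged NumPy illustration with randomly initialized weights; the function and variable names are assumptions, not the course's code.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def multi_head_attention(x, W_q, W_k, W_v, W_o, num_heads):
    seq_len, d_model = x.shape
    d_head = d_model // num_heads
    Q, K, V = x @ W_q, x @ W_k, x @ W_v
    heads = []
    for h in range(num_heads):
        s = slice(h * d_head, (h + 1) * d_head)  # this head's slice of d_model
        scores = Q[:, s] @ K[:, s].T / np.sqrt(d_head)
        heads.append(softmax(scores) @ V[:, s])
    # Concatenate head outputs and apply the output projection
    return np.concatenate(heads, axis=-1) @ W_o

rng = np.random.default_rng(1)
d_model, seq_len, num_heads = 8, 5, 2
x = rng.normal(size=(seq_len, d_model))
W_q, W_k, W_v, W_o = (rng.normal(size=(d_model, d_model)) for _ in range(4))
out = multi_head_attention(x, W_q, W_k, W_v, W_o, num_heads)
print(out.shape)  # (5, 8)
```

Running several smaller heads in parallel lets the model attend to different kinds of relationships (e.g. syntactic vs. positional) at the same time.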

IN THIS COURSE

  1. Course Overview (1m 20s)
  2. Sequence-to-Sequence (Seq2Seq) Models (6m 52s)
  3. Attention in Seq2Seq Models (7m 19s)
  4. Transformer Architecture (1m)
  5. Self-Attention Layer in Transformer Architecture (6m 52s)
  6. Multi-head Attention in Transformer Architecture (3m 32s)
  7. Transformer Encoder Block (1m 56s)
  8. Transformer Decoder Block (2m 14s)
  9. Transformer Model Architecture (1m 41s)
  10. Industry Use Cases for Transformer Models (4m 3s)
  11. Transformer Model Challenges (3m 24s)
  12. Course Summary (43s)

EARN A DIGITAL BADGE WHEN YOU COMPLETE THIS COURSE

Skillsoft is providing you the opportunity to earn a digital badge upon successful completion of this course, which can be shared on any social network or business platform.

Digital badges are yours to keep, forever.