Hadoop Design Principles

Apache Hadoop 2.0    |    Intermediate
  • 11 videos | 42m 45s
  • Includes Assessment
  • Earns a Badge
Hadoop's HDFS is a highly fault-tolerant distributed file system suitable for applications that have large data sets. Explore the principles of supercomputing and Hadoop's open source software components.

WHAT YOU WILL LEARN

  • recall the design principles of Hadoop
  • describe the design principle of sharing nothing
  • describe the design principle of embracing failure
  • describe the components of the Hadoop Distributed File System (HDFS)
  • describe the four main HDFS daemons
  • describe Hadoop YARN
  • describe the role of the YARN ResourceManager daemon
  • describe the YARN NodeManager and ApplicationMaster daemons
  • define MapReduce and describe its relation to YARN
  • describe data analytics
  • describe the reasons for the complexity of the Hadoop ecosystem
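To illustrate the MapReduce programming model the objectives above refer to, here is a minimal, self-contained sketch of the classic word-count pattern. This is a hypothetical single-process illustration only: real Hadoop jobs are typically written against the Java MapReduce API, read their input from HDFS, and are scheduled across a cluster by YARN; the function names below are this sketch's own, not Hadoop APIs.

```python
from itertools import groupby
from operator import itemgetter

def map_phase(line):
    """Map step: emit an intermediate (word, 1) pair for every word."""
    for word in line.split():
        yield (word.lower(), 1)

def reduce_phase(word, counts):
    """Reduce step: combine all counts emitted for one word."""
    return (word, sum(counts))

def run_job(lines):
    """Run map, shuffle/sort, and reduce in-process (illustration only)."""
    # Shuffle/sort: group intermediate pairs by key, as the framework would
    # between the map and reduce phases.
    intermediate = sorted(
        (pair for line in lines for pair in map_phase(line)),
        key=itemgetter(0),
    )
    return dict(
        reduce_phase(word, (count for _, count in pairs))
        for word, pairs in groupby(intermediate, key=itemgetter(0))
    )

print(run_job(["the quick brown fox", "the lazy dog"]))
```

In a real cluster the map tasks run in parallel on the nodes holding the input blocks, and the shuffle/sort routes each key to exactly one reducer; the structure of the program, however, is the same three phases shown here.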

IN THIS COURSE

  • 1.  4m 17s
  • 2.  3m 12s
  • 3.  The Principle of Embracing Failure
    4m 22s
  • 4.  Hadoop Distributed File System (HDFS)
    4m 46s
  • 5.  Introducing HDFS Daemons
    3m 23s
  • 6.  Introducing Hadoop YARN
    3m 37s
  • 7.  YARN Daemons on the Master Server
    2m 33s
  • 8.  YARN Daemons on the Data Server
    3m 9s
  • 9.  Introducing MapReduce
    2m 30s
  • 10.  Introducing Data Analytics
    4m 54s
  • 11.  Mastering the Hadoop Ecosystem
    6m 3s

EARN A DIGITAL BADGE WHEN YOU COMPLETE THIS COURSE

Skillsoft is providing you the opportunity to earn a digital badge upon successful completion of some of our courses, which can be shared on any social network or business platform.

Digital badges are yours to keep, forever.
