Accessing Data with Spark: An Introduction to Spark

Apache Spark is an open-source cluster-computing framework for data science that has become the de facto standard for big data processing. In this Skillsoft Aspire course, you will explore the basics of Apache Spark, an analytics engine for working with big data that is built on top of Hadoop, and discover how it lets you operate on data both through its own library methods and through SQL, all while delivering excellent performance.

Lesson Objectives


  • Course Overview
  • recognize where Spark fits in with Hadoop and its components
  • describe Spark RDDs and their characteristics, including what makes them resilient and distributed
  • identify the types of operations that are permitted on an RDD and describe how RDD transformations are lazily evaluated
  • distinguish between RDDs and DataFrames and describe the relationship between the two
  • list the crucial components of Spark, describe the relationships between them, and recognize the functions of the Spark session and the master and worker nodes
  • install PySpark and initialize a Spark Context
  • create and load data into an RDD
  • initialize a Spark DataFrame from the contents of an RDD
  • work with Spark DataFrames containing both primitive and structured data types
  • define the contents of a DataFrame using the SQLContext
  • apply the map() function on an RDD to configure a DataFrame with column headers
  • retrieve required data from within a DataFrame and define and apply transformations on a DataFrame
  • convert Spark DataFrames to Pandas DataFrames and vice versa
  • describe basic Spark concepts