SKILL BENCHMARK

Snowflake Data Pipelines Proficiency (Advanced Level)

  • 24m
  • 24 questions
The Snowflake Data Pipelines Proficiency (Advanced Level) benchmark measures your ability to evaluate Snowflake constructs for continuous data ingestion and transformation, analyze streams for change data capture, and configure and use dynamic tables. You will be assessed on your skills in using standard and append-only streams to process incremental jobs on table data, creating and configuring tasks that run on a fixed schedule or that process stream data, and implementing task graphs with complex task dependencies. Learners who score high on this benchmark demonstrate expertise in performing advanced data transformations in Snowflake and can work on projects without supervision.

Topics covered

  • analyze the behavior of streams from within transactions (see the transaction sketch after this list)
  • build and execute a scheduled serverless task (see the scheduled task sketch after this list)
  • build a task graph with a child task that depends upon a parent task (see the task graph sketch after this list)
  • configure a dynamic table to refresh only on demand (see the on-demand refresh sketch after this list)
  • construct an architecture that utilizes streams, tasks, stages, and dynamic tables to feed into a dynamic dashboard
  • create a dashboard to consume data from different tables and dynamic tables in Snowflake
  • create a dynamic table and configure its refresh_mode and initialize properties (see the dynamic table sketch after this list)
  • create a dynamic table that depends on another dynamic table and ensure their refresh modes are compatible
  • create an append-only stream and analyze how it deals with insert, update, and delete operations on the base table
  • create and use a triggered task that executes whenever a standard stream has data that can be consumed (see the triggered task sketch after this list)
  • create a standard stream and analyze how it is updated when data is inserted into the base table (see the streams sketch after this list)
  • create a user-managed scheduled task using a cron expression
  • design and partially implement a task graph with two root nodes and a single child node that depends upon both root nodes
  • extend a data pipeline by adding a triggered task to consume stream data and a dynamic table to compute base table analytical queries
  • identify how to use and configure dynamic tables in Snowflake
  • implement an architecture with a stage, scheduled task, and table
  • implement a task graph with one root node and two child nodes that each depend upon the root
  • implement streams on views and analyze whether they work
  • outline repeatable read isolation in streams
  • outline Snowflake's support for continuous data loading, continuous data transformation, change data capture, and recurring operations
  • perform insert, update, and delete operations on standard stream contents
  • use dummy root nodes to circumvent Snowflake's restriction on multiple root nodes in a task graph (see the dummy root sketch after this list)
  • use variants of the MERGE INTO command to combine stream contents with the target table for inserts and updates (see the MERGE INTO sketch after this list)
  • verify the change tracking property of a base table and discover how turning it off impacts connected dynamic tables (see the change tracking sketch after this list)
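
Illustrative sketches

The sketches below are minimal illustrations of the techniques named above; every table, stream, task, stage, and warehouse name in them is hypothetical.

A standard stream records inserts, updates, and deletes on its base table, while an append-only stream records inserts only. A minimal sketch, assuming a hypothetical orders table:

    CREATE OR REPLACE TABLE orders (id INT, amount NUMBER(10,2));

    -- Standard stream: captures inserts, updates, and deletes.
    CREATE OR REPLACE STREAM orders_std ON TABLE orders;

    -- Append-only stream: captures inserts only.
    CREATE OR REPLACE STREAM orders_ins ON TABLE orders APPEND_ONLY = TRUE;

    INSERT INTO orders VALUES (1, 100.00);
    UPDATE orders SET amount = 150.00 WHERE id = 1;

    -- A stream query returns the base columns plus METADATA$ACTION,
    -- METADATA$ISUPDATE, and METADATA$ROW_ID. The standard stream shows
    -- the net change since its offset; the append-only stream records
    -- the insert and ignores the update.
    SELECT * FROM orders_std;
    SELECT * FROM orders_ins;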
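
Streams offer repeatable read isolation inside an explicit transaction: every statement in the transaction sees the same stream contents, and the offset advances only when a transaction that consumes the stream commits. A sketch reusing the hypothetical orders_std stream and assuming a hypothetical orders_history target:

    BEGIN;
    -- Repeated reads here return the same rows, even if new DML lands
    -- on the base table in the meantime.
    SELECT * FROM orders_std;
    INSERT INTO orders_history SELECT id, amount FROM orders_std;
    COMMIT;  -- only now does the stream offset advance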
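
Variants of MERGE INTO can fold stream contents into a target in one statement, updating matched keys and inserting new ones; because DML consumes the stream, its offset advances when the statement commits. A sketch assuming a hypothetical orders_current target:

    MERGE INTO orders_current t
    USING (
      SELECT id, amount
      FROM orders_std
      WHERE METADATA$ACTION = 'INSERT'   -- new rows and post-update images
    ) s
    ON t.id = s.id
    WHEN MATCHED THEN UPDATE SET t.amount = s.amount
    WHEN NOT MATCHED THEN INSERT (id, amount) VALUES (s.id, s.amount);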
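
Serverless and user-managed scheduled tasks differ in who supplies compute: a serverless task omits WAREHOUSE and can size Snowflake-managed compute, while a user-managed task names a warehouse. A sketch with hypothetical names, one fixed-interval and one cron schedule:

    -- Serverless task on a fixed interval.
    CREATE OR REPLACE TASK hourly_rollup
      USER_TASK_MANAGED_INITIAL_WAREHOUSE_SIZE = 'XSMALL'
      SCHEDULE = '60 MINUTE'
    AS
      INSERT INTO rollups SELECT CURRENT_TIMESTAMP(), COUNT(*) FROM orders;

    -- User-managed task on a cron schedule (09:00 UTC daily).
    CREATE OR REPLACE TASK daily_rollup
      WAREHOUSE = reporting_wh
      SCHEDULE = 'USING CRON 0 9 * * * UTC'
    AS
      INSERT INTO rollups SELECT CURRENT_TIMESTAMP(), COUNT(*) FROM orders;

    -- Tasks are created suspended; RESUME starts the schedule.
    ALTER TASK hourly_rollup RESUME;
    ALTER TASK daily_rollup RESUME;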
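
A triggered task omits SCHEDULE altogether and runs whenever its WHEN condition reports consumable stream data. A sketch, again with hypothetical names:

    CREATE OR REPLACE TASK consume_orders
      USER_TASK_MANAGED_INITIAL_WAREHOUSE_SIZE = 'XSMALL'
      WHEN SYSTEM$STREAM_HAS_DATA('ORDERS_STD')
    AS
      -- Consuming the stream in DML advances its offset.
      INSERT INTO orders_history SELECT id, amount FROM orders_std;

    ALTER TASK consume_orders RESUME;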
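
In a task graph only the root carries a schedule; each child declares its dependency with AFTER. A sketch of one root fanning out to two children, with hypothetical names:

    CREATE OR REPLACE TASK root_task
      WAREHOUSE = etl_wh
      SCHEDULE = '30 MINUTE'
    AS
      INSERT INTO staging SELECT * FROM raw_landing;

    CREATE OR REPLACE TASK load_eu
      WAREHOUSE = etl_wh
      AFTER root_task
    AS
      INSERT INTO eu_orders SELECT * FROM staging WHERE region = 'EU';

    CREATE OR REPLACE TASK load_us
      WAREHOUSE = etl_wh
      AFTER root_task
    AS
      INSERT INTO us_orders SELECT * FROM staging WHERE region = 'US';

    -- Resume children before the root so the whole graph is active.
    ALTER TASK load_eu RESUME;
    ALTER TASK load_us RESUME;
    ALTER TASK root_task RESUME;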
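
Because a task graph allows only one root, a design with two logical roots and a shared child can hang both "roots" off a do-nothing dummy root that carries the schedule. A sketch assuming hypothetical stages @stage_a and @stage_b:

    CREATE OR REPLACE TASK dummy_root
      WAREHOUSE = etl_wh
      SCHEDULE = '60 MINUTE'
    AS
      SELECT 1;  -- no-op; exists only to anchor the graph

    CREATE OR REPLACE TASK ingest_a
      WAREHOUSE = etl_wh
      AFTER dummy_root
    AS
      COPY INTO raw_a FROM @stage_a;

    CREATE OR REPLACE TASK ingest_b
      WAREHOUSE = etl_wh
      AFTER dummy_root
    AS
      COPY INTO raw_b FROM @stage_b;

    -- The shared child lists both logical roots; it runs after both finish.
    CREATE OR REPLACE TASK merge_ab
      WAREHOUSE = etl_wh
      AFTER ingest_a, ingest_b
    AS
      INSERT INTO combined SELECT * FROM raw_a UNION ALL SELECT * FROM raw_b;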
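
A dynamic table's refresh_mode and initialize properties are set at creation, and a dependent dynamic table can use TARGET_LAG = DOWNSTREAM so it refreshes only when its consumers need it; its refresh mode must be compatible with its upstream table's. A sketch with hypothetical names:

    CREATE OR REPLACE DYNAMIC TABLE order_totals
      TARGET_LAG = '5 minutes'
      WAREHOUSE = dt_wh
      REFRESH_MODE = INCREMENTAL   -- or FULL, or AUTO
      INITIALIZE = ON_CREATE       -- or ON_SCHEDULE
    AS
      SELECT id, SUM(amount) AS total
      FROM orders
      GROUP BY id;

    -- Dependent dynamic table; refreshed only as its consumers require.
    CREATE OR REPLACE DYNAMIC TABLE big_totals
      TARGET_LAG = DOWNSTREAM
      WAREHOUSE = dt_wh
    AS
      SELECT * FROM order_totals WHERE total > 1000;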
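
One reading of "refresh only on demand" is to suspend a dynamic table's automatic refreshes and trigger them manually:

    ALTER DYNAMIC TABLE order_totals SUSPEND;   -- stop scheduled refreshes
    ALTER DYNAMIC TABLE order_totals REFRESH;   -- refresh manually when needed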
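
Streams and dynamic tables both depend on the base table's CHANGE_TRACKING property; it can be inspected with SHOW TABLES and toggled with ALTER TABLE:

    SHOW TABLES LIKE 'ORDERS';   -- the change_tracking column reads ON or OFF
    -- Turning it off stops change capture for dependent streams and can
    -- break incremental refresh of dependent dynamic tables.
    ALTER TABLE orders SET CHANGE_TRACKING = FALSE;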
