SKILL BENCHMARK

DP-100: Prepare Azure Model Deployment Competency (Intermediate Level)

  • 40m
  • 40 questions
The Prepare Azure Model Deployment Competency (Intermediate Level) benchmark measures your ability to identify Azure Data Factory concepts, run model training scripts, and implement model training. You will be evaluated on your recognition of pipelines, model deployment and monitoring, evaluation metrics, and model disaster recovery. A learner who scores high on this benchmark demonstrates solid experience in building and deploying machine learning solutions on the Azure platform.

Topics covered

  • access diagnostic logs to monitor Data Lake Storage Gen2
  • analyze model fairness using the Fairlearn Python package to identify imbalances in predictions and prediction performance across sensitive groups (see the Fairlearn sketch after this list)
  • collect and analyze Azure resource log data
  • configure Azure Virtual Machines for disaster recovery
  • configure, manage, and view activity log alerts using Azure Monitor
  • consume a real-time service that can be used to predict labels
  • create an explainer and upload the explanation so it is available for later analysis
  • create Azure Data Factory linked services and datasets
  • create Azure Data Factory pipelines and activities
  • deploy a model as a real-time service to different compute targets (see the deployment sketch after this list)
  • describe cloud optimization, as well as best practices for optimizing data using data partitions, Azure Data Lake Storage tuning, Azure Synapse Analytics tuning, and Azure Databricks auto-optimization
  • describe data privacy problems and how differential privacy works
  • describe high availability and disaster recovery and how they are related to SQL Server on Azure Virtual Machines
  • describe how model explanations can quantify the importance of each feature at both the global and local level
  • describe how model explainers can be created using the Azure Machine Learning SDK (see the explainer sketch after this list)
  • describe how to optimize Azure Cosmos DB using indexing and partitioning
  • describe how training models can be biased due to biases in the training data
  • describe methods for optimizing Azure Blob Storage
  • describe the concepts of linked services and datasets and how they relate to Azure Data Factory
  • describe the concepts of pipelines and activities and how they relate to Azure Data Factory
  • describe the Integration Runtime and how it works with Azure Data Factory
  • describe the options for backing up, storing, and restoring SQL Server databases on virtual machine instances
  • describe the purpose and features of Azure Always On availability groups
  • describe the technical options for providing business continuity to SQL Server
  • monitor a model that is deployed as an Azure Machine Learning real-time service using Jupyter Notebook and Python
  • monitor Azure Blob Storage
  • monitor Azure Cosmos DB using the portal and resource logs
  • monitor Azure Synapse Analytics jobs and the adaptive cache
  • perform queries against Azure Monitor logs
  • publish and track Machine Learning pipelines and share them with others
  • query Azure Log Analytics and filter, sort, and group query results
  • schedule a Machine Learning pipeline based on elapsed time or file system changes (see the scheduling sketch after this list)
  • trigger a pipeline manually or using a schedule
  • use a Jupyter Notebook and Python to detect and mitigate unfairness in a trained model
  • use a Jupyter Notebook and Python to generate explanations that are part of a model training experiment
  • use Azure Machine Learning pipelines to import, transform, and move data between steps
  • use Azure Machine Learning Studio to visualize data drift
  • use the Azure Data Factory Analytics solution to monitor pipelines
  • use the Azure Machine Learning SDK to create and run machine learning pipelines (see the pipeline sketch after this list)
  • use visualizations in Azure Machine Learning Studio to visualize model explanations
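
The sketches below illustrate a few of the topics above. First, the Fairlearn item: a minimal, self-contained example that trains a model on synthetic data and uses Fairlearn's MetricFrame to disaggregate selection rate and accuracy by a sensitive feature. All data and names here are illustrative.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from fairlearn.metrics import MetricFrame, selection_rate

    # Synthetic stand-in data: 200 samples, 3 features, a binary label, and a
    # binary sensitive feature that is deliberately held out of training.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))
    y = (X[:, 0] + rng.normal(scale=0.5, size=200) > 0).astype(int)
    sensitive = rng.integers(0, 2, size=200)

    model = LogisticRegression().fit(X, y)
    y_pred = model.predict(X)

    # MetricFrame disaggregates metrics by sensitive group, surfacing imbalances
    # in both predictions (selection rate) and prediction performance (accuracy).
    mf = MetricFrame(
        metrics={"selection_rate": selection_rate, "accuracy": accuracy_score},
        y_true=y,
        y_pred=y_pred,
        sensitive_features=sensitive,
    )
    print(mf.overall)
    print(mf.by_group)

The by_group table is the usual starting point for mitigation work with Fairlearn's reduction techniques.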
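
Next, the deployment item: a sketch assuming Azure Machine Learning SDK v1 (azureml-core), an existing workspace config, a registered model, a scoring script, and a registered environment; every name below is a placeholder, not a real resource.

    import json
    from azureml.core import Workspace
    from azureml.core.model import InferenceConfig, Model
    from azureml.core.webservice import AciWebservice

    ws = Workspace.from_config()
    model = Model(ws, name="my-model")  # placeholder registered model

    # The entry script defines init() and run() for scoring requests.
    inference_config = InferenceConfig(entry_script="score.py",
                                       environment=ws.environments["my-env"])

    # Azure Container Instances target; swap in AksWebservice.deploy_configuration()
    # to deploy the same model to an AKS cluster instead.
    deployment_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)

    service = Model.deploy(ws, "my-service", [model],
                           inference_config, deployment_config)
    service.wait_for_deployment(show_output=True)

    # Consume the real-time service: send features, get predicted labels back.
    payload = json.dumps({"data": [[0.1, 2.3, 4.5]]})
    print(service.run(input_data=payload))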
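
The explainer item: a sketch assuming the azureml-interpret package and that the script executes inside an Azure ML experiment run, so the uploaded explanation lands on that run for later analysis in the studio. The tiny synthetic dataset keeps it self-contained.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from azureml.core import Run
    from azureml.interpret import ExplanationClient
    from interpret.ext.blackbox import TabularExplainer

    # Tiny synthetic problem; feature names are illustrative.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(100, 4))
    y = (X[:, 1] > 0).astype(int)
    model = RandomForestClassifier(n_estimators=10).fit(X, y)

    # TabularExplainer selects an appropriate SHAP-based explainer for the model.
    explainer = TabularExplainer(model, X, features=["f0", "f1", "f2", "f3"])
    global_explanation = explainer.explain_global(X)
    print(global_explanation.get_feature_importance_dict())

    # Upload the explanation to the current run so it is available for later
    # analysis in Azure Machine Learning Studio.
    run = Run.get_context()
    client = ExplanationClient.from_run(run)
    client.upload_model_explanation(global_explanation, comment="global importance")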
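
The scheduling item: a sketch assuming SDK v1 and an already published pipeline (the pipeline ID is a placeholder). Schedules can fire on a time recurrence or when files change on a datastore path.

    from azureml.core import Datastore, Workspace
    from azureml.pipeline.core import PublishedPipeline, Schedule, ScheduleRecurrence

    ws = Workspace.from_config()
    published = PublishedPipeline.get(ws, id="<published-pipeline-id>")

    # Time-based trigger: run the pipeline once per day.
    recurrence = ScheduleRecurrence(frequency="Day", interval=1)
    Schedule.create(ws, name="daily-training",
                    pipeline_id=published.id,
                    experiment_name="scheduled-training",
                    recurrence=recurrence)

    # Change-based trigger: poll a datastore path and run when its files change.
    ds = Datastore.get(ws, "workspaceblobstore")
    Schedule.create(ws, name="on-new-data",
                    pipeline_id=published.id,
                    experiment_name="scheduled-training",
                    datastore=ds,
                    path_on_datastore="training-data",
                    polling_interval=5)  # minutes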
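
Finally, the pipeline item: a sketch assuming SDK v1, a compute cluster named "cpu-cluster", and two local scripts (prep.py, train.py); all names are illustrative.

    from azureml.core import Experiment, Workspace
    from azureml.pipeline.core import Pipeline
    from azureml.pipeline.steps import PythonScriptStep

    ws = Workspace.from_config()

    prep_step = PythonScriptStep(name="prepare data",
                                 script_name="prep.py",
                                 compute_target="cpu-cluster",
                                 source_directory=".")
    train_step = PythonScriptStep(name="train model",
                                  script_name="train.py",
                                  compute_target="cpu-cluster",
                                  source_directory=".")
    train_step.run_after(prep_step)  # enforce step ordering

    pipeline = Pipeline(workspace=ws, steps=[prep_step, train_step])
    run = Experiment(ws, "pipeline-demo").submit(pipeline)
    run.wait_for_completion(show_output=True)

    # Publishing makes the pipeline shareable and schedulable
    # (see the scheduling sketch above).
    run.publish_pipeline(name="training-pipeline",
                         description="prep + train",
                         version="1.0")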
