Aspire Journeys

MLOps

  • 15 Courses | 23h 9m 39s
Rating 5.0 (2 users)
This is a transformative journey through the world of MLOps (Machine Learning Operations), where data science meets engineering excellence. Our comprehensive MLOps journey is designed to equip you with the skills and knowledge to seamlessly transition from machine learning experimentation to real-world deployment. Explore the principles, tools, and best practices that bridge the gap between data science and operational success. Whether you're new to MLOps or seeking to enhance your expertise, this journey will empower you to create efficient, reproducible, and scalable machine learning pipelines while overcoming the unique challenges of managing machine learning models and data at scale.

Track 1: Intro to MLOps

In this track of the MLOps Aspire Journey, the focus will be on understanding the fundamental concepts and principles that underpin this transformative field. Explore the evolution of MLOps, dissect the MLOps workflow, and delve into the challenges and best practices that await you on this exciting journey.

  • 1 Course | 1h 27m 21s

Track 2: MLflow

In this track of the MLOps Aspire Journey, the focus will be on how to track, manage, and deploy your machine learning models efficiently. From MLflow Tracking and Models to model deployment and CI/CD integration, this track empowers you with essential MLOps skills.

  • 7 Courses | 11h 49m 40s

Track 3: Data Version Control

In this track of the MLOps Aspire Journey, you will discover the power of Data Version Control (DVC) and its role in simplifying experiment tracking, model management, and automation in MLOps. Explore DVC's VS Code extension, command-line tools, and open-source version control system. Learn to streamline your machine learning workflows and enable continuous machine learning with DVC.

  • 7 Courses | 9h 52m 38s

COURSES INCLUDED

Getting Started with MLOps
MLOps is the integration of machine learning (ML) with DevOps, focusing on streamlining the end-to-end machine learning life cycle. It emphasizes collaboration, automation, and reproducibility to deliver reliable and scalable machine learning solutions. By implementing MLOps practices, organizations can efficiently manage and govern their machine learning workflows, leading to faster development cycles, better model performance, and enhanced collaboration among data scientists and engineers. In this course, you will delve into the theoretical aspects of MLOps and understand what sets it apart from traditional software development. You will explore the factors that affect ML models in production and gain insights into the challenges and considerations of deploying machine learning solutions. Next, you will see how the Machine Learning Canvas can help you understand the components of ML development. You will then explore the end-to-end machine learning workflow, covering stages from data preparation to model deployment. Finally, you will look at the different stages of MLOps maturity in an organization: levels 0, 1, and 2. You will learn how organizations evolve in their MLOps journey and the key characteristics of each maturity level.
11 videos | 1h 27m | Assessment | Badge

COURSES INCLUDED

MLOps with MLflow: Getting Started
MLflow plays a crucial role in systemizing the machine learning (ML) workflow by providing a unified platform that seamlessly integrates different stages of the ML life cycle. In this course, you will delve into the theoretical aspects of the end-to-end machine learning workflow, covering data preprocessing and visualization. You will learn the importance of data cleaning and feature engineering to prepare datasets for model training. You will explore the MLflow platform that streamlines experiment tracking, model versioning, and deployment management, aiding in better collaboration and model reproducibility. Next, you will explore MLflow's core components, understanding their significance in data science and model deployment. You'll dive into the Model Registry that enables organized model versioning and explore MLflow Tracking as a powerful tool for logging and visualizing experiment metrics and model performance. Finally, you'll focus on practical aspects, including setting up MLflow in a virtual environment, understanding the user interface, and integrating MLflow capabilities into Jupyter notebooks.
13 videos | 1h 27m | Assessment | Badge
MLOps with MLflow: Creating & Tracking ML Models
With MLflow's tracking capabilities, you can easily log and monitor experiments, keeping track of various model runs, hyperparameters, and performance metrics. In this course, you will dive hands-on into implementing the ML workflow, including data preprocessing and visualization. You will focus on loading, cleaning, and analyzing data for machine learning. You will visualize data with box plots, heatmaps, and other plots and use the Pandas profiling tool to get a comprehensive view of your data. Next, you will dive deeper into MLflow Tracking and explore features that enhance experimentation and model development. You will create MLflow experiments to group runs and manage them effectively. You will compare multiple models and visualize performance using the MLflow user interface (UI), which can aid in model selection for further optimization and deployment. Finally, you will explore the capabilities of MLflow autologging to automatically record experiment metrics and artifacts and streamline the tracking process.
15 videos | 1h 45m | Assessment | Badge
MLOps with MLflow: Registering & Deploying ML Models
The MLflow Model Registry enables easy registration and deployment of machine learning (ML) models for future use, either locally or in the cloud. It streamlines model management, facilitating collaboration among team members during model development and deployment. In this course, you will create classification models using the regular ML workflow. You'll see that visualizing and cleaning data, running experiments, and analyzing model performance using SHapley Additive exPlanations (SHAP) will provide valuable insights for decision-making. You'll also discover how programmatic comparison will aid in selecting the best-performing model. Next, you'll explore the powerful MLflow Models feature, enabling efficient model versioning and management. You'll learn how to modify registered model versions, work with different versions of the same model, and serve models to Representational State Transfer (REST) endpoints. Finally, you'll explore integrating MLflow with Azure Machine Learning, leveraging the cloud's power for model development.
15 videos | 1h 57m | Assessment | Badge
MLOps with MLflow: Hyperparameter Tuning ML Models
Hyperparameter tuning, an essential step to improve model performance, involves modifying a model's parameters to find the best combination for optimal results. The integration of MLflow with Databricks unlocks a powerful combination that enhances the machine learning (ML) workflow. First, you will explore the collaborative potential between MLflow and Databricks for machine learning projects. You will learn to create an Azure Databricks workspace and run MLflow models using notebooks in Databricks, establishing a robust foundation for model development in a scalable environment. Additionally, you will set up Databricks File System (DBFS) as a source of model input files. Next, you will implement hyperparameter tuning using MLflow and its integration with the hyperopt library. You will define the objective function, search space, and algorithm to optimize model performance. Through systematic tracking and comparison of hyperparameter configurations with MLflow, you will find the best-performing model setups. Finally, you will integrate SQLite with MLflow, allowing efficient management and storage of experiment-run data. You will create a regression model using scikit-learn and statsmodels, comparing the processes for the two.
12 videos | 1h 37m | Assessment | Badge
MLOps with MLflow: Creating Time-series Models & Evaluating Models
MLflow integrates with Prophet, a powerful time-series model that considers seasonal effects. MLflow provides a variety of model evaluation capabilities, empowering you to thoroughly assess and analyze model performance. First, you will use Prophet in combination with MLflow for time-series forecasting. Integrating Prophet with MLflow's tracking capabilities, you will seamlessly manage and evaluate your time-series models. Running the Prophet model and viewing metrics will allow you to assess its forecasting performance. Cross-validation will enhance the evaluation process, ensuring reliability across different temporal windows. Then, you will use MLflow to evaluate machine learning (ML) models effectively. MLflow's evaluation capabilities, including lift curves, Receiver Operating Characteristic-Area Under the Curve (ROC-AUC) curves, precision-recall curves, and beeswarm charts, provide valuable insights into model behavior and performance. Finally, you will use MLflow to configure thresholds for model metrics and validate only those models that meet them.
10 videos | 1h 23m | Assessment | Badge
MLOps with MLflow: Tracking Deep Learning Models
Deep learning models have revolutionized computer vision and natural language processing, enabling powerful image and text-based predictions. You will start with image-based predictions using TensorFlow. You will visualize and clean data to generate datasets ready for machine learning (ML). You will train an image classification model with TensorFlow and track metrics and artifacts using MLflow. You will register the model in MLflow for local deployment and deployment on Azure. Next, you will explore PyTorch Lightning to simplify deep learning model development and training. You will use it for image classification, setting up your model with little effort. You will then train an image classification model with MLflow for tracking, deploy it locally, and expose it for predictions using a REST endpoint. Finally, you will get an overview of large language models (LLMs) like Transformers. You will load a pre-trained Transformers-based sentiment analysis model from Hugging Face and use MLflow to track its performance and artifacts.
10 videos | 1h 31m | Assessment | Badge
MLOps with MLflow: Using MLflow Projects & Recipes
MLflow Projects enable you to package machine learning code, data, and environment specifications for reproducibility and easy sharing. Registering projects in MLflow simplifies version control and enhances collaboration within data science teams. MLflow Recipes, on the other hand, automate and standardize machine learning tasks with pre-defined templates and configurations, promoting consistency and repeatability while allowing customization for specific applications. With recipes and projects combined, MLflow becomes a powerful tool for impactful and consistent results, streamlining data science workflows. You will start this course by learning how MLflow Projects enable you to package, share, and reproduce machine learning code. Next, you will learn about MLflow Recipes that automate machine learning tasks in reproducible environments. You will explore the MLflow Regression Template, customize its files for model training, and run the recipe to view the model's performance. Finally, you will explore running a classification recipe in Databricks and modifying YAML and code files for configuration.
17 videos | 2h 8m | Assessment | Badge
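An MLflow Project of the kind described above is declared in an MLproject file at the repository root. A hypothetical example; the project name, entry point parameters, and script name are all illustrative:

```yaml
# Hypothetical MLproject file (names and paths are illustrative).
name: demo-project
python_env: python_env.yaml          # environment specification
entry_points:
  main:
    parameters:
      alpha: {type: float, default: 0.5}
    command: "python train.py --alpha {alpha}"
```

Such a project can then be run reproducibly with `mlflow run . -P alpha=0.7`, which resolves the environment and invokes the entry point with the given parameter.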

COURSES INCLUDED

MLOps with Data Version Control: Getting Started
Data Version Control (DVC) is a technology that simplifies and enhances data versioning and management. It provides Git-like capabilities to track, share, and reproduce changes in data while optimizing storage and facilitating collaboration in data-centric projects. In this course, you will discover how DVC simplifies the intricate components of ML projects: code, configuration files, data, and model artifacts. Next, you will embark on hands-on DVC exploration by installing Git locally and establishing a remote repository on GitHub. Then you will install DVC, set up a local repository, configure DVC remote storage, and add and track data using DVC. Finally, you will create Python-based machine learning (ML) models and track them with DVC and Git integration. You will create metafiles pointing to DVC-stored data and artifacts and commit these files to GitHub, tagging different model and data versions. Through Git tags, you will access specific model iterations for your work. This course will empower you with theoretical insights and practical proficiency in employing DVC and Git.
16 videos | 1h 51m | Assessment | Badge
MLOps with Data Version Control: Working with Pipelines & DVCLive
Data Version Control (DVC) pipelines enable the construction of end-to-end data processing workflows, connecting data and code stages while maintaining version control. DVCLive is a Python library for logging machine learning metrics in simple file formats and is fully compatible with DVC. In this course, you will configure and employ pipelines in DVC, modularizing and coordinating each step while leveraging the dvc.yaml file for stage management and the dvc.lock file for project consistency. Next, you will dive into practical DVC utilization with Jupyter notebooks. You will track model parameters, metrics, and artifacts using DVCLive log statements in your Python code. Then you will explore the user-friendly Iterative Studio interface. Finally, you will leverage DVCLive for comprehensive model experimentation. By pushing experiment files to DVC and employing Git branches, you will manage parallel developments. You will use pull requests to streamline merging experiment branches and register model artifacts with the Iterative Studio registry. This course will equip you with the foundational knowledge of DVC and enable you to automate the tracking of model metrics and parameters with DVCLive.
17 videos | 2h 11m | Assessment | Badge
MLOps with Data Version Control: Tracking & Serving Models with DVC & MLEM
Data Version Control (DVC) enables model tracking by versioning machine learning (ML) models alongside their associated data and code, allowing seamless reproducibility of model training and evaluation across different environments and collaborators. MLEM is a tool that easily packages, deploys, and serves ML models. In this course, you will compare ML model performance using DVC. You will create multiple churn-prediction classification models employing various algorithms, including logistic regression, random forests, and XGBoost, and you will track metrics, parameters, and artifacts. Then you will leverage the Iterative Studio interface to visually contrast models' metrics and performance graphs and perform comparisons using the command line. Next, you will unlock the potential of hyperparameter tuning with the Optuna framework. You will tune your ML model, compare the outcomes of hyperparameter-tuned models, and select the optimal model for deployment. Finally, you will codify your ML model and deploy it through REST endpoints and Docker-hosted containers, solidifying your understanding of serving MLEM models for predictions. This course will equip you with comprehensive knowledge of codifying and serving ML models.
15 videos | 1h 53m | Assessment | Badge
MLOps with Data Version Control: Tracking & Logging Deep Learning Models
Data Version Control (DVC) offers robust support for deep learning models by effectively managing large model files and their dependencies, allowing versioned tracking of complex architectures. This ensures reproducibility in training, evaluation, and deployment pipelines, even in deep learning projects. In this course, you will discover how to track deep learning models through DVC. Using PyTorch Lightning, you will construct a convolutional neural network (CNN) for image classification. Then you will use DVCLive to log and visualize sample images and use the DVCLiveLogger to monitor model metrics in real time via Iterative Studio. Next, you will undertake deep learning model training with TensorFlow. You will set up a CNN for image classification and train your model while leveraging DVCLive to record and display training-related metrics. Finally, you will use the DVCLiveCallback to dynamically visualize metrics during training. This course will equip you with the expertise to effectively build and track deep learning models within DVC's ecosystem.
12 videos | 1h 30m | Assessment | Badge
MLOps with Data Version Control: Creating & Using DVC Pipelines
Data Version Control (DVC) pipelines empower data practitioners to define, automate, and version complex data processing workflows. By streamlining end-to-end processes, pipelines enhance collaboration, maintain data lineage, and enable efficient experimentation and deployment in data-centric projects. In this course, you will discover the intricacies of machine learning (ML) pipelines within DVC. You will set up a pipeline with data cleaning, training, and evaluation stages and run these stages using the dvc repro command. Then you will use DVC to track the status of the pipeline with the help of the dvc.lock file. Next, you will run and track a DVC pipeline as an experiment using DVCLive and view metrics and artifacts of your pipeline in the Iterative Studio user interface. Finally, you will queue DVC experiments so they can be run later, either in parallel or sequentially. This course gives you an in-depth understanding of DVC pipelines, equipping you to seamlessly orchestrate and manage your ML workloads.
12 videos | 1h 21m | Assessment | Badge
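A pipeline of the kind described above is declared in a dvc.yaml file. A hypothetical three-stage example; the script and file names are illustrative:

```yaml
# Hypothetical dvc.yaml with cleaning, training, and evaluation stages.
stages:
  clean:
    cmd: python clean.py data/raw.csv data/clean.csv
    deps:
      - clean.py
      - data/raw.csv
    outs:
      - data/clean.csv
  train:
    cmd: python train.py data/clean.csv model.pkl
    deps:
      - train.py
      - data/clean.csv
    outs:
      - model.pkl
  evaluate:
    cmd: python evaluate.py model.pkl metrics.json
    deps:
      - evaluate.py
      - model.pkl
    metrics:
      - metrics.json:
          cache: false
```

Running `dvc repro` executes only the stages whose dependencies have changed and records the resulting state in dvc.lock.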
MLOps with Data Version Control: CI/CD Using Continuous Machine Learning
Continuous integration and continuous deployment (CI/CD) are crucial in machine learning operations (MLOps) as they automate the integration of ML models into software development. Continuous machine learning (CML) refers to an ML model's ability to learn continuously from a stream of data. In this course, you will build a complete Data Version Control (DVC) machine learning pipeline in preparation for continuous machine learning. You will modularize your machine learning workflow using DVC pipelines, configure DVC remote storage on Google Drive, and set up authentication for DVC to access Google Drive. Next, you will configure CI/CD through CML and use the open-source CML framework to implement CI/CD within your machine learning project. Finally, you will see how, for every push to your remote repository, a CI/CD pipeline executes your experiment and generates a CML report with model metrics for each GitHub commit. At the end of this course, you will be able to use DVC's integration with CML to build CI/CD pipelines.
9 videos | 1h 2m | Assessment | Badge
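A CI/CD pipeline of the kind described above is typically a GitHub Actions workflow that uses CML. A hypothetical sketch; the job name, file names, and reproduction command are illustrative:

```yaml
# Hypothetical .github/workflows/train.yml using CML with DVC.
name: train
on: [push]
jobs:
  run:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: iterative/setup-cml@v2
      - uses: iterative/setup-dvc@v1
      - name: Reproduce pipeline and post a report
        env:
          REPO_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          dvc repro
          echo "## Metrics" > report.md
          cat metrics.json >> report.md
          cml comment create report.md
```

On each push, the workflow reruns the DVC pipeline and posts the resulting metrics back to the commit as a CML report.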
Final Exam: MLOps
Final Exam: MLOps will test your knowledge and application of the topics presented throughout the MLOps journey.
1 video | 32s | Assessment | Badge

EARN A DIGITAL BADGE WHEN YOU COMPLETE THESE TRACKS

Skillsoft is providing you the opportunity to earn a digital badge upon successful completion of some of our courses, which can be shared on any social network or business platform.

Digital badges are yours to keep, forever.
