MLOps

Courses

Many analytics teams face challenges delivering analytic solutions to the business and integrating them into operating workflows. MLOps is the collection of best practices for deploying, monitoring, and governing analytics solutions.
MLOps 101
Operationalizing, delivering, and supporting analytics solutions presents new and unique challenges for teams. MLOps is the set of best practices and standards to deploy, monitor, and improve analytics solutions. This course provides an overview of MLOps and how to leverage best practices for operationalizing analytics solutions.
AI Model Monitoring & Refresh 
Maintaining the quality of a model is critical for performance and accuracy. Learn how to define, operationalize, and support model monitoring.
AI Model Governance & Risk
Understanding governance and ethics for AI builds trust and mitigates risk. Learn how to define and implement governance and compliance.
Analytics Architecture
Analytics architecture needs to support the full life cycle of the analytic solution including exploration, development, and production. This course reviews the layers of abstraction necessary to support building and using analytics at scale.

MLOps 101

Overview
Operationalizing, delivering, and supporting analytics solutions presents new and unique challenges for teams. MLOps is the set of best practices and standards to deploy, monitor, and improve analytics solutions. This course provides an overview of MLOps and how to leverage best practices for operationalizing analytics solutions.

Learning Outcomes
After this course, students will be able to:
  • Understand and utilize the terminology, best practices, and key concepts for deploying, monitoring, and governing analytic solutions.
  • Describe the role and responsibilities of the teams involved with operationalizing analytic solutions.
  • Explain and leverage the stages of the lifecycle of analytic solutions.
  • Understand and manage alerts and notifications for the components of analytic solutions.

Length
1 Day (8 hours)

Pre-Requisites 
This course does not require prerequisite knowledge.

Course Content
This course contains the following modules:
Module 1: Intro to MLOps
  • This module covers the key terminology & components of enterprise MLOps.
  • Students will learn the best practices and keys to success, as well as pitfalls to avoid, for successfully deploying, monitoring, and governing analytic solutions.
Module 2: Roles & responsibilities
  • Students will learn the key roles and skill sets involved in MLOps in order to coordinate work in this capability.
Module 3: Lifecycle of an analytics solution
  • Students will learn the stages and steps an analytics solution goes through from idea through deployment and iterative improvement. This helps provide context and awareness of the processes and capabilities required for successful MLOps.
Module 4: Working with MLOps
  • This module focuses on the daily responsibilities and tasks the team will perform to manage analytic solutions through their lifecycle.
Exercises
  • Identify the team’s roles and responsibilities.
  • Map out the lifecycle of a team’s typical analytics solution.
  • Simulate the daily responsibilities of working with analytic solutions, including deployment, managing alerts, refresh, and governance.

AI Model Monitoring & Refresh

Overview
Maintaining the quality of an analytic solution is critical for performance and accuracy. Learn how to define, operationalize, and support model monitoring. This class focuses on Machine Learning and predictive models.

Learning Outcomes
After this course, students will be able to:
  • Understand and explain the importance of monitoring analytic solutions.
  • Define metrics for statistical performance, data drift, and bias.
  • Utilize the process for champion/challenger and re-training/redeployment.
  • Define the requirements for a monitoring pipeline including thresholds and frequency.

Length
2 Days (8 hours/day)

Pre-Requisites 
This course requires a basic understanding of modeling and statistics.

Course Content
This course contains the following modules:
Module 1: Introduction to monitoring
  • Monitoring is key to providing reliable, quality, and relevant predictions to the business.
  • This module covers the importance, best practices, and types of monitoring to increase performance and impact.
Module 2: Monitoring metrics
  • Students will learn the various types of monitoring metrics, including statistical performance, data drift, and bias.
  • This module will also cover how to define the metrics and do initial model testing (a drift-metric sketch follows the exercises below).
Module 3: Model refresh
  • Monitoring provides insights into model performance over time.
  • Students will learn how to improve and compare model performance through re-training and champion/challenger.
Module 4: Operational monitoring 
  • Many teams fail to translate monitoring metrics into the day-to-day operational support model for an analytic solution.
  • This module covers how to embed monitoring into daily operations, including performance thresholds, alerting, and sampling.
Exercises
  • Define statistical performance, data drift, and bias metrics for a model.
  • Work through a simulated champion/challenger comparison.
  • Define the requirements for an operational monitoring pipeline including performance thresholds and sampling.
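Note: to make the monitoring-metrics and operational-monitoring modules concrete, below is a minimal sketch of a data-drift check using the population stability index (PSI). The bin count, sample data, and the 0.2 alert threshold are illustrative assumptions, not values prescribed by the course.

  # Minimal data-drift check using the population stability index (PSI).
  # Bin count, sample data, and the alert threshold are illustrative assumptions.
  import numpy as np

  def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
      """Population stability index between a baseline sample and a current sample."""
      edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
      e = np.clip(expected, edges[0], edges[-1])
      a = np.clip(actual, edges[0], edges[-1])
      e_pct = np.histogram(e, edges)[0] / len(e)
      a_pct = np.histogram(a, edges)[0] / len(a)
      e_pct = np.clip(e_pct, 1e-6, None)   # avoid log(0) for empty bins
      a_pct = np.clip(a_pct, 1e-6, None)
      return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

  # Stand-ins for the training baseline and the latest production scoring batch.
  rng = np.random.default_rng(0)
  baseline = rng.normal(0.0, 1.0, 5000)
  current = rng.normal(0.3, 1.0, 5000)

  ALERT_THRESHOLD = 0.2   # assumed threshold; tune per solution and review cadence
  drift = psi(baseline, current)
  status = "ALERT: investigate drift" if drift > ALERT_THRESHOLD else "OK"
  print(f"PSI = {drift:.3f} -> {status}")

In practice a check like this would run on a schedule against each scored batch or feature, with results and threshold breaches fed into the operational monitoring pipeline covered in Module 4.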

AI Model Governance & Risk

Overview
Understanding governance and ethics for AI builds trust and mitigates risk. Learn how to define and implement governance and compliance.

Learning Outcomes
After this course, students will be able to:
  • Understand the importance, challenges, and benefits of model governance.
  • Apply a governance and compliance framework to their models.
  • Understand the value and methods for tracking and communicating model lineage and meta-data. 
  • Understand ethical AI and how to monitor for bias.

Length
2 Days (8 hours/day)

Pre-Requisites 
This course does not require prerequisite knowledge.

Course Content
This course contains the following modules:
Module 1: Model governance
  • This module introduces the terminology, concepts, and benefits of model governance.
  • Students will learn approaches to compliance and how to apply them to their models.
Module 2: Model lineage and meta-data
  • Students will learn how to capture the lineage and meta-data for a model, including the key events required for compliance and auditing (a minimal lineage-record sketch follows the exercises below).
Module 3: Ethical AI
  • This module provides a deep dive into the ramifications of bias and fairness for AI.
  • It includes best practices for applying these concepts to operational bias monitoring for models.
Exercises
  • Apply a governance and compliance framework for sample models to generate a sample compliance report.
  • Create the capturing process for model lineage and meta-data for a sample model.
  • Outline and implement a plan to monitor a model for bias.
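Note: as a point of reference for the model lineage and meta-data module, here is a minimal sketch of the kind of record a team might capture for each training run. The field names, example values, and the append-only JSON file are illustrative assumptions; many teams write these records to a model registry or governance system instead.

  # Minimal sketch of capturing lineage/meta-data for a single training run.
  # Field names, example values, and the JSON destination are illustrative assumptions.
  import hashlib
  import json
  from dataclasses import asdict, dataclass, field
  from datetime import datetime, timezone

  @dataclass
  class ModelRecord:
      model_name: str
      model_version: str
      training_data_uri: str
      training_data_hash: str          # fingerprint of the exact training inputs
      trained_at: str
      metrics: dict = field(default_factory=dict)
      approved_by: str = ""            # sign-off event captured for compliance/audit

  def fingerprint(path: str) -> str:
      """Hash a training-data file so the exact inputs can be traced later."""
      digest = hashlib.sha256()
      with open(path, "rb") as f:
          for chunk in iter(lambda: f.read(8192), b""):
              digest.update(chunk)
      return digest.hexdigest()

  record = ModelRecord(
      model_name="churn_classifier",                              # hypothetical model
      model_version="1.4.0",
      training_data_uri="s3://example-bucket/churn/train.csv",    # hypothetical location
      training_data_hash="<output of fingerprint() on the training file>",
      trained_at=datetime.now(timezone.utc).isoformat(),
      metrics={"auc": 0.87},                                      # hypothetical evaluation result
      approved_by="model.risk@example.com",
  )

  with open("model_lineage.jsonl", "a") as f:                     # append-only log of lineage events
      f.write(json.dumps(asdict(record)) + "\n")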

Analytics Architecture

Overview
Analytics architecture needs to support the full life cycle of the analytic solution including exploration, development, and production. This course reviews the layers of abstraction necessary to support building and using analytics at scale.

Learning Outcomes
After this course, students will be able to:
  • Describe and utilize the abstractions and architectural components for batch, on-demand/REST, and streaming model deployments.
  • Create architectural diagrams of model deployments, data pipelines, and explore/testing/promotion environments.

Length
2 Days (8 hours/day)

Pre-Requisites 
This course requires basic experience with and an understanding of architecture and data pipelining.

Course Content
This course contains the following modules:
Module 1: Architecture primer
  • The course begins with an overview of technical architecture concepts to normalize understanding and terminology, including networking, tooling/infrastructure, and scalability topics.
Module 2: Analytics solution deployment architecture
  • Students will learn standard deployment patterns for batch, on-demand/REST, and streaming use cases (a minimal REST-scoring sketch follows the exercises below).
Module 3: Designing data pipelines
  • Students will learn to create data pipelines for the end solution while architecting for efficient data movement and redundancy.
  • This module also provides an overview of the DataOps development, test, and deployment cycle.
Module 4: Explore, test, and promotion architecture
  • Students will design environments to support the MLOps and DataOps processes.
Module 5: Security & governance
  • Students will learn to design architecture within security and governance constraints.
Exercises
  • Create architectural diagrams of analytic solution deployments.
  • Create examples of data pipelines.
  • Create explore/testing/promotion environments with critical characteristics to support MLOps and DataOps.
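Note: as a reference point for the on-demand/REST deployment pattern in Module 2, below is a minimal sketch of a scoring endpoint using Flask. The route, payload shape, and the stand-in scoring function are illustrative assumptions; a production deployment would add authentication, logging, and model loading from a registry, and would run behind a proper WSGI server.

  # Minimal sketch of an on-demand/REST scoring service using Flask.
  # The route, payload shape, and the stand-in model are illustrative assumptions.
  from flask import Flask, jsonify, request

  app = Flask(__name__)

  def score(features: dict) -> float:
      """Stand-in for a real model; returns a toy score from one numeric feature."""
      return min(1.0, max(0.0, 0.1 * float(features.get("usage_hours", 0))))

  @app.route("/predict", methods=["POST"])
  def predict():
      payload = request.get_json(force=True)      # e.g. {"usage_hours": 4.2}
      return jsonify({"score": score(payload)})

  if __name__ == "__main__":
      app.run(port=8080)                          # local exploration only

Batch and streaming deployments typically wrap the same scoring logic differently: a batch job scores a file or table on a schedule, while a streaming deployment scores events as they arrive from a message queue.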
