AWS SageMaker Training

Introduction to the AWS SageMaker Training

Due to COVID-19, our training courses will be taught via an online classroom.

Receive in-depth knowledge from industry professionals, test your skills with hands-on assignments & demos, and get access to valuable resources and tools.

This course is an introduction to using Amazon SageMaker to build, train, and deploy ML models in the AWS Cloud. After this course, you will be able to train and deploy ML models in SageMaker using built-in and custom estimators, leverage SageMaker Experiments to compare performance across models, capture real-time training metrics with SageMaker Debugger, and explain the difference between deploying to a live endpoint and running batch transform jobs for inference. This course is ideal for anyone interested in creating ML solutions in the cloud. Experience with Python and scikit-learn is required.

Are you interested? Contact us and we will get in touch with you.

Get in touch for more information

Fill in the form and we will contact you about the SageMaker training:


About the training & classes

The AWS SageMaker training is split into four days. Click below for a detailed description of each class:
AWS SageMaker Introduction

The first session provides a brief theoretical overview of the general machine learning cycle, including data preprocessing, model training and deployment, model monitoring and how to set up and run this workflow, all within the Amazon SageMaker environment for a simple Linear Learner (logistic regression) model.

The training includes theory, demos, and hands-on exercises. After this training, you will have gained knowledge about:

  • Model training and deployment within SageMaker
  • SageMaker built-in algorithms
  • Deploying to an endpoint vs. batch transform for inference
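As a sketch of this workflow, the snippet below trains the built-in Linear Learner (as a logistic-regression binary classifier) and then shows both inference options. The role ARN, S3 paths, and instance types are illustrative assumptions, not values from the course.

```python
# Sketch: training SageMaker's built-in Linear Learner, then comparing the
# two inference options. Role ARN and S3 paths below are placeholders.
import sagemaker
from sagemaker.estimator import Estimator

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerRole"  # hypothetical role ARN
bucket = session.default_bucket()

# Resolve the container image for the built-in Linear Learner algorithm
image_uri = sagemaker.image_uris.retrieve("linear-learner", session.boto_region_name)

estimator = Estimator(
    image_uri=image_uri,
    role=role,
    instance_count=1,
    instance_type="ml.m5.large",
    output_path=f"s3://{bucket}/linear-learner/output",
    sagemaker_session=session,
)
# predictor_type="binary_classifier" gives the logistic-regression variant
estimator.set_hyperparameters(predictor_type="binary_classifier", mini_batch_size=100)
estimator.fit({"train": f"s3://{bucket}/linear-learner/train"})

# Option 1: live endpoint for real-time, low-latency predictions
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")

# Option 2: batch transform job for offline scoring of a whole dataset in S3
transformer = estimator.transformer(instance_count=1, instance_type="ml.m5.large")
transformer.transform(f"s3://{bucket}/linear-learner/batch-input", content_type="text/csv")
```

The endpoint stays running (and billing) until deleted, which suits interactive applications; a batch transform job spins up instances only for the duration of the job, which suits periodic offline scoring.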

SageMaker AutoTuning Jobs

In the second session, we will expand our use case and focus on improving the performance of our simple model from the first session by setting up SageMaker AutoTuning jobs for hyperparameter tuning. We will also explore more complex built-in algorithms that SageMaker offers, such as XGBoost, in order to achieve better model performance.

The training includes theory, demos, and hands-on exercises. After this training, you will have gained knowledge about:

  • Setting up SageMaker AutoTuning jobs
  • SageMaker Experiments to compare models
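A minimal sketch of an AutoTuning job, assuming an XGBoost estimator (`xgb_estimator`) and training/validation S3 URIs have already been configured; the metric name and hyperparameter ranges are illustrative.

```python
# Sketch: wrapping an existing estimator in a SageMaker AutoTuning
# (hyperparameter tuning) job. `xgb_estimator`, `train_s3_uri`, and
# `validation_s3_uri` are assumed to be defined earlier.
from sagemaker.tuner import (
    HyperparameterTuner,
    ContinuousParameter,
    IntegerParameter,
)

# Ranges the tuner is allowed to search over (illustrative values)
hyperparameter_ranges = {
    "eta": ContinuousParameter(0.01, 0.3),   # learning rate
    "max_depth": IntegerParameter(3, 10),
    "num_round": IntegerParameter(50, 300),
}

tuner = HyperparameterTuner(
    estimator=xgb_estimator,
    objective_metric_name="validation:auc",
    objective_type="Maximize",
    hyperparameter_ranges=hyperparameter_ranges,
    max_jobs=20,           # total training jobs the tuner may launch
    max_parallel_jobs=3,   # how many run concurrently
)
tuner.fit({"train": train_s3_uri, "validation": validation_s3_uri})

# Retrieve the estimator from the best-performing training job
best = tuner.best_estimator()
```

Each training job the tuner launches shows up as a trial, which is where SageMaker Experiments becomes useful for comparing runs side by side.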

SageMaker Debugger with XGBoost

For the third session, we continue with the XGBoost algorithm and explore how to configure and use SageMaker Debugger to check for class imbalance, collect real-time evaluation metrics and calculate feature importance using SHAP values during training.

The training includes theory, demos, and hands-on exercises. After this training, you will have gained knowledge about:

  • Using SageMaker Debugger to check class imbalance effect on training
  • Capturing real-time training evaluation metrics
  • Model explainability and feature importance for “black-box” models using SHAP
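The Debugger setup described above might be wired up roughly as follows; the S3 output path is a placeholder, and the collection names (`metrics`, `average_shap`) come from the smdebug XGBoost collections.

```python
# Sketch: attaching SageMaker Debugger to an XGBoost training job to save
# evaluation metrics and SHAP values, and to run the built-in
# class-imbalance rule. The S3 path is a placeholder.
from sagemaker.debugger import (
    DebuggerHookConfig,
    CollectionConfig,
    Rule,
    rule_configs,
)

hook_config = DebuggerHookConfig(
    s3_output_path="s3://my-bucket/debugger-output",  # hypothetical bucket
    collection_configs=[
        # Real-time evaluation metrics, captured every 5 steps
        CollectionConfig(name="metrics", parameters={"save_interval": "5"}),
        # Per-feature SHAP values for model explainability
        CollectionConfig(name="average_shap", parameters={"save_interval": "5"}),
    ],
)

# Built-in rule that flags a skewed label distribution during training
rules = [Rule.sagemaker(rule_configs.class_imbalance())]

# Both are passed to the estimator at construction time, e.g.:
# estimator = Estimator(..., debugger_hook_config=hook_config, rules=rules)
```
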

Custom SKLearn Estimator with SageMaker

In the previous sessions, we were only using SageMaker’s built-in algorithms, but what if we want to use something other than what SageMaker offers? In our final session, we will create, train, and deploy our own custom estimator using scikit-learn’s Random Forest classifier and examine how this process differs from using the built-in algorithms.

The training includes theory, demos, and hands-on exercises. After this training, you will have gained knowledge about:

  • How to build custom, scikit-learn-based algorithms in SageMaker
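As a rough sketch, a custom scikit-learn estimator can be set up with SageMaker's SKLearn framework estimator. Here `train.py` is a hypothetical entry-point script that would contain the Random Forest training code; the role ARN and S3 path are likewise placeholders.

```python
# Sketch: training a custom scikit-learn Random Forest via SageMaker's
# SKLearn framework estimator. `train.py` is a hypothetical script that
# reads training data from SM_CHANNEL_TRAIN and saves the fitted model
# to SM_MODEL_DIR (standard SageMaker container environment variables).
from sagemaker.sklearn.estimator import SKLearn

sklearn_estimator = SKLearn(
    entry_point="train.py",       # hypothetical training script
    framework_version="1.2-1",    # scikit-learn container version
    instance_type="ml.m5.large",
    role="arn:aws:iam::123456789012:role/SageMakerRole",  # hypothetical role ARN
    # Hyperparameters are passed to train.py as command-line arguments
    hyperparameters={"n_estimators": 100, "max_depth": 10},
)
sklearn_estimator.fit({"train": "s3://my-bucket/rf/train"})  # hypothetical S3 path
```

Unlike the built-in algorithms, where the container and training logic are fixed, the framework estimator runs your own script inside a managed scikit-learn container, which is the main difference this session explores.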