



Basic Machine Learning Training

Due to COVID-19, our training courses will be taught via an online classroom.

Receive in-depth knowledge from industry professionals, test your skills with hands-on assignments & demos, and get access to valuable resources and tools.

This course is an introduction to general ML concepts such as supervised vs. unsupervised models, classification vs. regression models, dimensionality reduction, workflows, cross-validation, hyper-parameter tuning, model evaluation, and the bias-variance trade-off. After this course, you will be able to design basic machine learning models in Python, cross-validate their outcomes, and optimize hyper-parameters using scikit-learn. This course is ideal for everyone with a keen interest in machine learning. Experience with Python is a prerequisite.

Are you interested? Contact us and we will get in touch with you.


Get in touch for more information

Fill in the form and we will contact you about the Basic ML training.


About the training & classes

The Basic Machine Learning training is split into three days. Click below to see a detailed description of each class:

Python Machine Learning Basics: I

This training provides a theoretical introduction to the basics of Machine Learning and its different sub-fields, as well as a hands-on look at how it is applied in practice. At the core of this training is the scikit-learn library, one of the most powerful and versatile tools for Machine Learning in Python.

The training includes theory, demos, and hands-on exercises.

After this training you will have gained knowledge about:

  • Machine Learning, its goals and potential applications
  • Different types of Machine Learning: supervised, unsupervised and reinforcement learning
  • Classification and regression problems
  • Techniques such as clustering and dimensionality reduction
  • A minimal example workflow of a prediction model in scikit-learn (see the sketch after this list)
  • Splitting datasets into training and test sets
  • How to train, predict, and score a classification prediction model
  • The standard interfaces of scikit-learn classes
  • Transformers and estimators in scikit-learn
  • Some of the machine learning algorithms you’ll have at your disposal, such as k-nearest neighbors, logistic regression, support vector machines, neural networks, etc.
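
To give a flavour of the workflow covered in this class, here is a minimal sketch of a classification model in scikit-learn. The dataset (iris) and the k-nearest neighbors classifier are illustrative choices, not necessarily the ones used in the training:

```python
# Minimal sketch of a scikit-learn classification workflow,
# using the bundled iris dataset and a k-nearest neighbors classifier.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

# Split the data into a training set and a held-out test set
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Every scikit-learn estimator follows the same fit / predict / score interface
model = KNeighborsClassifier(n_neighbors=5)
model.fit(X_train, y_train)
predictions = model.predict(X_test)
print("Test accuracy:", model.score(X_test, y_test))
```
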
Python Machine Learning Basics: II

In the second training, we build on what we learned previously and expand our workflow by showing how to optimize prediction models with parameter tuning. We discuss how and why to perform cross-validation and how to prevent information leakage. Bringing everything together, we finally show how to combine the steps of a machine learning workflow into pipelines, making the process more organized, more efficient and less error-prone.

The training includes theory, demos, and hands-on exercises.

After this training you will have gained knowledge about:

  • Cross-Validation
  • Commonly used Cross-Validation strategies
  • The importance of a validation set
  • Information leakage
  • The workflow of grid search and cross-validation (see the sketch after this list)
  • Standard interfaces of the GridSearchCV class
  • Pipelines and their role in combining transformers and estimators
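
As an illustration of how these topics fit together, the sketch below combines a pipeline with a cross-validated grid search in scikit-learn. The scaler, the SVM classifier and the parameter grid are illustrative choices, not necessarily the ones used in the training:

```python
# Illustrative sketch: combining a scaler and an SVM classifier in a pipeline,
# then tuning hyper-parameters with cross-validated grid search.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Keeping preprocessing inside the pipeline means it is refit on each
# training fold, which helps prevent information leakage.
pipeline = Pipeline([
    ("scaler", StandardScaler()),
    ("svc", SVC()),
])

# Hyper-parameters of pipeline steps are addressed as <step>__<parameter>
param_grid = {
    "svc__C": [0.1, 1, 10],
    "svc__gamma": [0.01, 0.1, 1],
}

# 5-fold cross-validation over every parameter combination
search = GridSearchCV(pipeline, param_grid, cv=5)
search.fit(X_train, y_train)

print("Best parameters:", search.best_params_)
print("Best cross-validated score:", search.best_score_)
print("Test-set score:", search.score(X_test, y_test))
```
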
Python Machine Learning Basics: III

In the last training of the series, we expand our knowledge of how to score machine learning models, discuss common pitfalls, and show how to deal with them. We first examine the concepts of bias, variance, overfitting and underfitting, then dive into important performance metrics for classification, such as accuracy, precision, recall, F1 scores and ROC curves, and elaborate on commonly used metrics for regression. This last part of our basic toolkit allows us to properly assess a prediction model that we train to recognize images of handwritten digits during the hands-on lab session.

The training includes theory, demos, and hands-on exercises.

After this training you will have gained knowledge about:

  • Overfitting, underfitting and the bias-variance tradeoff
  • Model evaluation in practice using scikit-learn (see the sketch after this list)
  • Evaluation metrics for classification, such as accuracy, precision, recall, F1, area under curve
  • Interpreting confusion matrices, classification reports and ROC curves
  • Decision function and classification probabilities
  • Dealing with unbalanced datasets
  • Evaluation metrics for regression, such as MAE, RMSE, R^2
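
As an illustration, the sketch below evaluates a classifier on the handwritten-digits dataset with some of the metrics listed above. The logistic regression model is an illustrative baseline, not necessarily the one used in the lab session:

```python
# Illustrative sketch: evaluating a classifier on the handwritten-digits dataset
# with per-class precision, recall and F1 scores plus a confusion matrix.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0
)

# A simple baseline classifier for the digits
model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)
y_pred = model.predict(X_test)

# Per-class precision, recall and F1 scores
print(classification_report(y_test, y_pred))

# Rows are true digits, columns are predicted digits
print(confusion_matrix(y_test, y_pred))
```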