Deployment of Machine Learning Models
Sub Category
- Data Science
Objectives
- Define and understand the different deployment scenarios, whether Edge or Server deployment
- Understand the constraints on each deployment scenario
- Be able to choose the scenario suited to your practical case and design the proper system architecture for it
- Deploy ML models to Edge and Mobile devices using TensorFlow Lite (TFLite) tools
- Deploy ML models to browsers using TensorFlow.js (TFJS)
- Define the different model serving qualities and understand their settings for production-level systems
- Define the landscape of model serving options and be able to choose the proper one based on the needed qualities
- Build a server model using hubs and APIs such as TF Hub, TorchHub, or the TF-API, fine-tune it on custom data, or even build it from scratch
- Serve a model using Flask, Django, or TF Serving, on custom infrastructure or in the cloud (e.g., AWS EC2), using Docker containers
- Convert models built in different frameworks to a common runtime format using ONNX
- Understand the full ML development cycle and phases
- Be able to define MLOps, model drift and monitoring
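To make the serving objectives above concrete, here is a minimal sketch of exposing a model behind an HTTP `/predict` endpoint. It uses only the Python standard library (not the course's Flask/Django/TF Serving setups), and the "model" is a placeholder linear scorer with made-up weights; in practice you would load a trained model instead.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Placeholder stand-in for a trained model: a simple linear scorer.
# WEIGHTS and BIAS are illustrative assumptions, not real trained values.
WEIGHTS = [0.4, 0.6]
BIAS = 0.1

def predict(features):
    """Score a feature vector with the (placeholder) model."""
    return sum(w * x for w, x in zip(WEIGHTS, features)) + BIAS

class PredictHandler(BaseHTTPRequestHandler):
    """Answers POST /predict with a JSON body like {"features": [1.0, 2.0]}."""

    def do_POST(self):
        if self.path != "/predict":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        score = predict(payload["features"])
        body = json.dumps({"score": score}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To run the server (blocks until interrupted):
# HTTPServer(("127.0.0.1", 8080), PredictHandler).serve_forever()
```

Frameworks like Flask or TF Serving add routing, batching, and model-versioning conveniences on top of this same request/response pattern.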
Pre Requisites
- Machine Learning Basics, including model building process
- Deep learning basics and neural networks training process
- Computer vision basics, including ConvNets, transfer learning and pre-trained models architectures
FAQ
- Q. How long do I have access to the course materials?
- A. You can view and review the lecture materials indefinitely, like an on-demand channel.
- Q. Can I take my courses with me wherever I go?
- A. Definitely! If you have an internet connection, courses on Udemy are available on any device at any time. If you don't have an internet connection, some instructors also let their students download course lectures. That's up to the instructor though, so make sure you get on their good side!
Coupon Code(s)