Transfer Learning for Images Using PyTorch: Essential Training

English | MP4 | AVC 1280×720 | AAC 48 kHz 2ch | 0h 58m | 145 MB

After its debut in 2017, PyTorch quickly became the tool of choice for many deep learning researchers. In this course, Jonathan Fernandes shows you how to leverage this popular machine learning framework for a similarly buzzworthy technique: transfer learning. Using a hands-on approach, he explains the basics of transfer learning, which lets you reuse the pretrained parameters of an existing deep learning model for new tasks. He then shows how to implement transfer learning for images in PyTorch, including how to create a fixed feature extractor by freezing neural network layers, and how to fine-tune with learning rates and differential learning rates.

Topics include:

  • What is transfer learning?
  • Using autograd
  • Creating a fixed feature extractor
  • Training an extractor
  • Fine-tuning the ConvNet
  • Learning rates and differential learning rates
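The last topic above, differential learning rates, is typically implemented in PyTorch with optimizer parameter groups: earlier, more general layers get a smaller learning rate than the newly added head. The sketch below uses a tiny stand-in model (hypothetical; in the course the base network is VGG16) just to show the mechanism.

```python
import torch.nn as nn
import torch.optim as optim

# Hypothetical stand-in for a pretrained "body" and a new classifier "head".
body = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU())
head = nn.Linear(8, 10)

# Differential learning rates via parameter groups: the pretrained body
# is nudged gently (1e-4) while the fresh head learns faster (1e-2).
optimizer = optim.SGD(
    [
        {"params": body.parameters(), "lr": 1e-4},
        {"params": head.parameters(), "lr": 1e-2},
    ],
    momentum=0.9,
)
```

Each group's learning rate can also be adjusted independently during training, which is how gradual unfreezing schemes are usually driven.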

Table of Contents

1 Welcome
2 What you should know before watching this course
3 What is transfer learning
4 VGG16
5 CIFAR-10 dataset
6 Creating a fixed feature extractor
7 Understanding loss: CrossEntropyLoss() and NLLLoss()
8 Autograd
9 Using autograd
10 Training the fixed feature extractor
11 Optimizers
12 CPU to GPU
13 Train the extractor
14 Evaluating the network and viewing images
15 Viewing images and normalization
16 Accuracy of the model
17 Fine-tuning
18 Using fine-tuning
19 Training from the fully connected network onwards
20 Unfreezing and training over the last CNN block onwards
21 Unfreezing and training over the last two CNN blocks onwards
22 Learning rates
23 Differential learning rates
24 Next steps