PyTorch for Deep Learning

English | MP4 | AVC 1920×1080 | AAC 48KHz 2ch | 324 Lessons (48h 59m) | 15.9 GB

Learn PyTorch from scratch! This PyTorch course is your step-by-step guide to developing your own deep learning models using PyTorch. You’ll learn Deep Learning with PyTorch by building a massive 3-part real-world milestone project. By the end, you’ll have the skills and portfolio to get hired as a Deep Learning Engineer.

Learn PyTorch. Become a Deep Learning Engineer. Get Hired.

We can guarantee (with, like, 99.57% confidence) that this is the most comprehensive, modern, and up-to-date course you will find to learn PyTorch and the cutting-edge field of Deep Learning. Daniel takes you step-by-step from an absolute beginner to becoming a master of Deep Learning with PyTorch.

WHAT YOU’LL LEARN

  • Everything from getting started with PyTorch to building your own real-world models
  • Why PyTorch is a fantastic way to start working in machine learning
  • Understand how to integrate Deep Learning into tools and applications
  • Create and utilize machine learning algorithms just like you would write a Python program
  • Build and deploy your own custom trained PyTorch neural network accessible to the public
  • How to take data, build an ML algorithm to find patterns, and then use that algorithm as an AI to enhance your applications
  • Master deep learning and become a top candidate for recruiters seeking Deep Learning Engineers
  • How to expand your Machine Learning and Deep Learning skills and toolkit
  • The skills you need to become a Deep Learning Engineer and get hired with a chance of making US$100,000+ / year

Table of Contents

1 PyTorch for Deep Learning
2 Course Welcome and What Is Deep Learning
3 Why Use Machine Learning or Deep Learning
4 The Number 1 Rule of Machine Learning and What Is Deep Learning Good For
5 Machine Learning vs. Deep Learning
6 Anatomy of Neural Networks
7 Different Types of Learning Paradigms
8 What Can Deep Learning Be Used For
9 What Is and Why PyTorch
10 What Are Tensors
11 What We Are Going To Cover With PyTorch
12 How To and How Not To Approach This Course
13 Important Resources For This Course
14 Getting Setup to Write PyTorch Code
15 Introduction to PyTorch Tensors
16 Creating Random Tensors in PyTorch
17 Creating Tensors With Zeros and Ones in PyTorch
18 Creating a Tensor Range and Tensors Like Other Tensors
19 Dealing With Tensor Data Types
20 Getting Tensor Attributes
21 Manipulating Tensors (Tensor Operations)
22 Matrix Multiplication (Part 1)
23 Matrix Multiplication (Part 2): The Two Main Rules of Matrix Multiplication
24 Matrix Multiplication (Part 3): Dealing With Tensor Shape Errors
25 Finding the Min, Max, Mean and Sum of Tensors (Tensor Aggregation)
26 Finding The Positional Min and Max of Tensors
27 Reshaping, Viewing and Stacking Tensors
28 Squeezing, Unsqueezing and Permuting Tensors
29 Selecting Data From Tensors (Indexing)
30 PyTorch Tensors and NumPy
31 PyTorch Reproducibility (Taking the Random Out of Random)
32 Different Ways of Accessing a GPU in PyTorch
33 Setting Up Device-Agnostic Code and Putting Tensors On and Off the GPU
34 PyTorch Fundamentals: Exercises and Extra-Curriculum
35 Introduction and Where You Can Get Help
36 Getting Setup and What We Are Covering
37 Creating a Simple Dataset Using the Linear Regression Formula
38 Splitting Our Data Into Training and Test Sets
39 Building a Function to Visualize Our Data
40 Creating Our First PyTorch Model for Linear Regression
41 Breaking Down What’s Happening in Our PyTorch Linear Regression Model
42 Discussing Some of the Most Important PyTorch Model Building Classes
43 Checking Out the Internals of Our PyTorch Model
44 Making Predictions With Our Random Model Using Inference Mode
45 Training a Model Intuition (The Things We Need)
46 Setting Up an Optimizer and a Loss Function
47 PyTorch Training Loop Steps and Intuition
48 Writing Code for a PyTorch Training Loop
49 Reviewing the Steps in a Training Loop Step by Step
50 Running Our Training Loop Epoch by Epoch and Seeing What Happens
51 Writing Testing Loop Code and Discussing What’s Happening Step by Step
52 Reviewing What Happens in a Testing Loop Step by Step
53 Writing Code to Save a PyTorch Model
54 Writing Code to Load a PyTorch Model
55 Setting Up to Practice Everything We Have Done Using Device-Agnostic Code
56 Putting Everything Together (Part 1): Data
57 Putting Everything Together (Part 2): Building a Model
58 Putting Everything Together (Part 3): Training a Model
59 Putting Everything Together (Part 4): Making Predictions With a Trained Model
60 Putting Everything Together (Part 5): Saving and Loading a Trained Model
61 Exercise: Imposter Syndrome
62 PyTorch Workflow: Exercises and Extra-Curriculum
63 Introduction to Machine Learning Classification With PyTorch
64 Classification Problem Example: Input and Output Shapes
65 Typical Architecture of a Classification Neural Network (Overview)
66 Making a Toy Classification Dataset
67 Turning Our Data into Tensors and Making a Training and Test Split
68 Laying Out Steps for Modelling and Setting Up Device-Agnostic Code
69 Coding a Small Neural Network to Handle Our Classification Data
70 Making Our Neural Network Visual
71 Recreating and Exploring the Insides of Our Model Using nn.Sequential
72 Setting Up a Loss Function, Optimizer and Evaluation Function for Our Classification Network
73 Going from Model Logits to Prediction Probabilities to Prediction Labels
74 Coding a Training and Testing Optimization Loop for Our Classification Model
75 Writing Code to Download a Helper Function to Visualize Our Model’s Predictions
76 Discussing Options to Improve a Model
77 Creating a New Model with More Layers and Hidden Units
78 Writing Training and Testing Code to See if Our New and Upgraded Model Performs Better
79 Creating a Straight Line Dataset to See if Our Model is Learning Anything
80 Building and Training a Model to Fit on Straight Line Data
81 Evaluating Our Model’s Predictions on Straight Line Data
82 Introducing the Missing Piece for Our Classification Model: Non-Linearity
83 Building Our First Neural Network with Non-Linearity
84 Writing Training and Testing Code for Our First Non-Linear Model
85 Making Predictions with and Evaluating Our First Non-Linear Model
86 Replicating Non-Linear Activation Functions with Pure PyTorch
87 Putting It All Together (Part 1): Building a Multi-Class Dataset
88 Creating a Multi-Class Classification Model with PyTorch
89 Setting Up a Loss Function and Optimizer for Our Multi-Class Model
90 Going from Logits to Prediction Probabilities to Prediction Labels with a Multi-Class Model
91 Training a Multi-Class Classification Model and Troubleshooting Code on the Fly
92 Making Predictions with and Evaluating Our Multi-Class Classification Model
93 Discussing a Few More Classification Metrics
94 PyTorch Classification: Exercises and Extra-Curriculum
95 What Is a Computer Vision Problem and What We Are Going to Cover
96 Computer Vision Input and Output Shapes
97 What Is a Convolutional Neural Network (CNN)
98 Discussing and Importing the Base Computer Vision Libraries in PyTorch
99 Getting a Computer Vision Dataset and Checking Out Its Input and Output Shapes
100 Visualizing Random Samples of Data
101 DataLoader Overview: Understanding Mini-Batches
102 Turning Our Datasets Into DataLoaders
103 Model 0: Creating a Baseline Model with Two Linear Layers
104 Creating a Loss Function and Optimizer for Model 0
105 Creating a Function to Time Our Modelling Code
106 Writing Training and Testing Loops for Our Batched Data
107 Writing an Evaluation Function to Get Our Model’s Results
108 Setting Up Device-Agnostic Code for Running Experiments on the GPU
109 Model 1: Creating a Model with Non-Linear Functions
110 Model 1: Creating a Loss Function and Optimizer
111 Turning Our Training Loop into a Function
112 Turning Our Testing Loop into a Function
113 Training and Testing Model 1 with Our Training and Testing Functions
114 Getting a Results Dictionary for Model 1
115 Model 2: Convolutional Neural Networks High Level Overview
116 Model 2: Coding Our First Convolutional Neural Network with PyTorch
117 Model 2: Breaking Down Conv2D Step by Step
118 Model 2: Breaking Down MaxPool2D Step by Step
119 Model 2: Using a Trick to Find the Input and Output Shapes of Each of Our Layers
120 Model 2: Setting Up a Loss Function and Optimizer
121 Model 2: Training Our First CNN and Evaluating Its Results
122 Comparing the Results of Our Modelling Experiments
123 Making Predictions on Random Test Samples with the Best Trained Model
124 Plotting Our Best Model’s Predictions on Random Test Samples and Evaluating Them
125 Making Predictions Across the Whole Test Dataset and Importing Libraries to Plot a Confusion Matrix
126 Evaluating Our Best Model’s Predictions with a Confusion Matrix
127 Saving and Loading Our Best Performing Model
128 Recapping What We Have Covered Plus Exercises and Extra-Curriculum
129 What Is a Custom Dataset and What We Are Going to Cover
130 Importing PyTorch and Setting Up Device-Agnostic Code
131 Downloading a Custom Dataset of Pizza, Steak and Sushi Images
132 Becoming One With the Data (Part 1): Exploring the Data Format
133 Becoming One With the Data (Part 2): Visualizing a Random Image
134 Becoming One With the Data (Part 3): Visualizing a Random Image with Matplotlib
135 Transforming Data (Part 1): Turning Images Into Tensors
136 Transforming Data (Part 2): Visualizing Transformed Images
137 Loading All of Our Images and Turning Them Into Tensors With ImageFolder
138 Visualizing a Loaded Image From the Train Dataset
139 Turning Our Image Datasets into PyTorch DataLoaders
140 Creating a Custom Dataset Class in PyTorch: High-Level Overview
141 Creating a Helper Function to Get Class Names From a Directory
142 Writing a PyTorch Custom Dataset Class from Scratch to Load Our Images
143 Compare Our Custom Dataset Class to the Original ImageFolder Class
144 Writing a Helper Function to Visualize Random Images from Our Custom Dataset
145 Turning Our Custom Datasets Into DataLoaders
146 Exploring State-of-the-Art Data Augmentation With Torchvision Transforms
147 Building a Baseline Model (Part 1): Loading and Transforming Data
148 Building a Baseline Model (Part 2): Replicating Tiny VGG from Scratch
149 Building a Baseline Model (Part 3): Doing a Forward Pass to Test Our Model Shapes
150 Using the Torchinfo Package to Get a Summary of Our Model
151 Creating Training and Testing Loop Functions
152 Creating a Train Function to Train and Evaluate Our Models
153 Training and Evaluating Model 0 With Our Training Functions
154 Plotting the Loss Curves of Model 0
155 Discussing the Balance Between Overfitting and Underfitting and How to Deal With Each
156 Creating Augmented Training Datasets and DataLoaders for Model 1
157 Constructing and Training Model 1
158 Plotting the Loss Curves of Model 1
159 Plotting the Loss Curves of All of Our Models Against Each Other
160 Predicting on Custom Data (Part 1): Downloading an Image
161 Predicting on Custom Data (Part 2): Loading In a Custom Image With PyTorch
162 Predicting on Custom Data (Part 3): Getting Our Custom Image Into the Right Format
163 Predicting on Custom Data (Part 4): Turning Our Model’s Raw Outputs Into Prediction Labels
164 Predicting on Custom Data (Part 5): Putting It All Together
165 Summary of What We Have Covered Plus Exercises and Extra-Curriculum
166 What Is Going Modular and What We Are Going to Cover
167 Going Modular Notebook (Part 1): Running It End to End
168 Downloading a Dataset
169 Writing the Outline for Our First Python Script to Setup the Data
170 Creating a Python Script to Create Our PyTorch DataLoaders
171 Turning Our Model Building Code into a Python Script
172 Turning Our Model Training Code into a Python Script
173 Turning Our Utility Function to Save a Model into a Python Script
174 Creating a Training Script to Train Our Model in One Line of Code
175 Going Modular: Summary, Exercises and Extra-Curriculum
176 Introduction: What is Transfer Learning and Why Use It
177 Where Can You Find Pretrained Models and What We Are Going to Cover
178 Installing the Latest Versions of Torch and Torchvision
179 Downloading Our Previously Written Code from Going Modular
180 Downloading Pizza, Steak, Sushi Image Data from GitHub
181 Turning Our Data into DataLoaders with Manually Created Transforms
182 Turning Our Data into DataLoaders with Automatically Created Transforms
183 Which Pretrained Model Should You Use
184 Setting Up a Pretrained Model with Torchvision
185 Different Kinds of Transfer Learning
186 Getting a Summary of the Different Layers of Our Model
187 Freezing the Base Layers of Our Model and Updating the Classifier Head
188 Training Our First Transfer Learning Feature Extractor Model
189 Plotting the Loss Curves of Our Transfer Learning Model
190 Outlining the Steps to Make Predictions on the Test Images
191 Creating a Function to Predict On and Plot Images
192 Making and Plotting Predictions on Test Images
193 Making a Prediction on a Custom Image
194 Main Takeaways, Exercises and Extra-Curriculum
195 What Is Experiment Tracking and Why Track Experiments
196 Getting Setup by Importing Torch Libraries and Going Modular Code
197 Creating a Function to Download Data
198 Turning Our Data into DataLoaders Using Manual Transforms
199 Turning Our Data into DataLoaders Using Automatic Transforms
200 Preparing a Pretrained Model for Our Own Problem
201 Setting Up a Way to Track a Single Model Experiment with TensorBoard
202 Training a Single Model and Saving the Results to TensorBoard
203 Exploring Our Single Model’s Results with TensorBoard
204 Creating a Function to Create SummaryWriter Instances
205 Adapting Our Train Function to Be Able to Track Multiple Experiments
206 What Experiments Should You Try
207 Discussing the Experiments We Are Going to Try
208 Downloading Datasets for Our Modelling Experiments
209 Turning Our Datasets into DataLoaders Ready for Experimentation
210 Creating Functions to Prepare Our Feature Extractor Models
211 Coding Out the Steps to Run a Series of Modelling Experiments
212 Running Eight Different Modelling Experiments in 5 Minutes
213 Viewing Our Modelling Experiments in TensorBoard
214 Loading In the Best Model and Making Predictions on Random Images from the Test Set
215 Making a Prediction on Our Own Custom Image with the Best Model
216 Main Takeaways, Exercises and Extra-Curriculum
217 What Is a Machine Learning Research Paper?
218 Why Replicate a Machine Learning Research Paper?
219 Where Can You Find Machine Learning Research Papers and Code?
220 What We Are Going to Cover
221 Getting Setup for Coding in Google Colab
222 Downloading Data for Food Vision Mini
223 Turning Our Food Vision Mini Images into PyTorch DataLoaders
224 Visualizing a Single Image
225 Replicating a Vision Transformer – High Level Overview
226 Breaking Down Figure 1 of the ViT Paper
227 Breaking Down the Four Equations Overview and a Trick for Reading Papers
228 Breaking Down Equation 1
229 Breaking Down Equations 2 and 3
230 Breaking Down Equation 4
231 Breaking Down Table 1
232 Calculating the Input and Output Shape of the Embedding Layer by Hand
233 Turning a Single Image into Patches (Part 1: Patching the Top Row)
234 Turning a Single Image into Patches (Part 2: Patching the Entire Image)
235 Creating Patch Embeddings with a Convolutional Layer
236 Exploring the Outputs of Our Convolutional Patch Embedding Layer
237 Flattening Our Convolutional Feature Maps into a Sequence of Patch Embeddings
238 Visualizing a Single Sequence Vector of Patch Embeddings
239 Creating the Patch Embedding Layer with PyTorch
240 Creating the Class Token Embedding
241 Creating the Class Token Embedding – Less Birds
242 Creating the Position Embedding
243 Equation 1: Putting it All Together
244 Equation 2: Multihead Attention Overview
245 Equation 2: Layernorm Overview
246 Turning Equation 2 into Code
247 Checking the Inputs and Outputs of Equation 2
248 Equation 3: Replication Overview
249 Turning Equation 3 into Code
250 Transformer Encoder Overview
251 Combining Equations 2 and 3 to Create the Transformer Encoder
252 Creating a Transformer Encoder Layer with In-Built PyTorch Layers
253 Bringing Our Own Vision Transformer to Life – Part 1: Gathering the Pieces of the Puzzle
254 Bringing Our Own Vision Transformer to Life – Part 2: Putting Together the Forward Method
255 Getting a Visual Summary of Our Custom Vision Transformer
256 Creating a Loss Function and Optimizer from the ViT Paper
257 Training our Custom ViT on Food Vision Mini
258 Discussing What Our Training Setup Is Missing
259 Plotting a Loss Curve for Our ViT Model
260 Getting a Pretrained Vision Transformer from Torchvision and Setting it Up
261 Preparing Data to Be Used with a Pretrained ViT
262 Training a Pretrained ViT Feature Extractor Model for Food Vision Mini
263 Saving Our Pretrained ViT Model to File and Inspecting Its Size
264 Discussing the Trade-Offs of Using a Larger Model for Deployment
265 Making Predictions on a Custom Image with Our Pretrained ViT
266 PyTorch Paper Replicating: Main Takeaways, Exercises and Extra-Curriculum
267 What is Machine Learning Model Deployment and Why Deploy a Machine Learning Model
268 Three Questions to Ask for Machine Learning Model Deployment
269 Where Is My Model Going to Go?
270 How Is My Model Going to Function?
271 Some Tools and Places to Deploy Machine Learning Models
272 What We Are Going to Cover
273 Getting Setup to Code
274 Downloading a Dataset for Food Vision Mini
275 Outlining Our Food Vision Mini Deployment Goals and Modelling Experiments
276 Creating an EffNetB2 Feature Extractor Model
277 Creating a Function to Make an EffNetB2 Feature Extractor Model and Transforms
278 Creating DataLoaders for EffNetB2
279 Training Our EffNetB2 Feature Extractor and Inspecting the Loss Curves
280 Saving Our EffNetB2 Model to File
281 Getting the Size of Our EffNetB2 Model in Megabytes
282 Collecting Important Statistics and Performance Metrics for Our EffNetB2 Model
283 Creating a Vision Transformer Feature Extractor Model
284 Creating DataLoaders for Our ViT Feature Extractor Model
285 Training Our ViT Feature Extractor Model and Inspecting Its Loss Curves
286 Saving Our ViT Feature Extractor and Inspecting Its Size
287 Collecting Stats About Our ViT Feature Extractor
288 Outlining the Steps for Making and Timing Predictions for Our Models
289 Creating a Function to Make and Time Predictions with Our Models
290 Making and Timing Predictions with EffNetB2
291 Making and Timing Predictions with ViT
292 Comparing EffNetB2 and ViT Model Statistics
293 Visualizing the Performance vs Speed Trade-off
294 Gradio Overview and Installation
295 Gradio Function Outline
296 Creating a Predict Function to Map Our Food Vision Mini Inputs to Outputs
297 Creating a List of Examples to Pass to Our Gradio Demo
298 Bringing Food Vision Mini to Life in a Live Web Application
299 Getting Ready to Deploy Our App: Hugging Face Spaces Overview
300 Outlining the File Structure of Our Deployed App
301 Creating a Food Vision Mini Demo Directory to House Our App Files
302 Creating an Examples Directory with Example Food Vision Mini Images
303 Writing Code to Move Our Saved EffNetB2 Model File
304 Turning Our EffNetB2 Model Creation Function Into a Python Script
305 Turning Our Food Vision Mini Demo App Into a Python Script
306 Creating a Requirements File for Our Food Vision Mini App
307 Downloading Our Food Vision Mini App Files from Google Colab
308 Uploading Our Food Vision Mini App to Hugging Face Spaces Programmatically
309 Running Food Vision Mini on Hugging Face Spaces and Trying it Out
310 Food Vision Big Project Outline
311 Preparing an EffNetB2 Feature Extractor Model for Food Vision Big
312 Downloading the Food 101 Dataset
313 Creating a Function to Split Our Food 101 Dataset into Smaller Portions
314 Turning Our Food 101 Datasets into DataLoaders
315 Training Food Vision Big: Our Biggest Model Yet!
316 Outlining the File Structure for Our Food Vision Big
317 Downloading an Example Image and Moving Our Food Vision Big Model File
318 Saving Food 101 Class Names to a Text File and Reading them Back In
319 Turning Our EffNetB2 Feature Extractor Creation Function into a Python Script
320 Creating an App Script for Our Food Vision Big Model Gradio Demo
321 Zipping and Downloading Our Food Vision Big App Files
322 Deploying Food Vision Big to Hugging Face Spaces
323 PyTorch Model Deployment: Main Takeaways, Extra-Curriculum and Exercises
324 Thank You!

Homepage