Machine Learning with scikit-learn LiveLessons

English | MP4 | AVC 1280×720 | AAC 44 kHz 2ch | 7h 17m | 3.30 GB

Introduction

Lesson 1: What is Machine Learning?

Learning objectives

1.1 Install

1.2 Understand the ML Libraries (new lesson, title TBD)

1.3 Describe the techniques used in machine learning

1.4 Understand the difference between “deep learning” and other ML techniques

1.5 Understand classification versus regression versus clustering, and over/underfitting

1.6 Perform dimensionality reduction, explain feature engineering, and utilize feature selection

1.7 Distinguish categorical versus ordinal versus continuous variables

1.8 Perform one-hot encoding

1.9 Utilize hyperparameters and grid search

1.10 Understand how to choose and apply metrics
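
To make these ideas concrete, here is a minimal sketch of scikit-learn's fit/predict workflow (a toy illustration of my own using the bundled iris data, not material from the course): install the library, split the data, fit a classifier with one hyperparameter, and score it with a metric.

    # Installation (objective 1.1): pip install scikit-learn
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.metrics import accuracy_score

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = KNeighborsClassifier(n_neighbors=5)   # n_neighbors is a hyperparameter
    model.fit(X_train, y_train)                   # learn from the training split
    predictions = model.predict(X_test)           # predict on held-out data
    print(accuracy_score(y_test, predictions))    # one possible metric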

Lesson 2: Exploring a Data Set

Learning objectives

2.1 Uncover anomalies and data integrity problems

2.2 Clean and massage your data

2.3 Choose features and a target

2.4 Implement a train/test split and choose a model
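
A small self-contained sketch of this workflow (the toy DataFrame, column names, and cleaning rules below are invented for illustration; the course explores its own data set): surface anomalies, clean them, pick features and a target, then split.

    import numpy as np
    import pandas as pd
    from sklearn.model_selection import train_test_split

    df = pd.DataFrame({
        "age":    [22, 35, np.nan, 29, 41, 250],      # a NaN and an impossible age
        "income": [30_000, 52_000, 48_000, np.nan, 61_000, 58_000],
        "bought": [0, 1, 0, 1, 1, 0],
    })

    print(df.describe())                     # 2.1: summary stats expose anomalies
    df = df[df["age"].between(0, 120)]       # 2.2: drop impossible values
    df = df.dropna()                         # 2.2: drop rows with missing data

    X = df[["age", "income"]]                # 2.3: features
    y = df["bought"]                         # 2.3: target

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0)  # 2.4: hold out a test set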

Lesson 3: Classification

Learning objectives

3.1 Understand feature importances

3.2 Establish cut points in a decision tree

3.3 Utilize a common API

3.4 Use a more encouraging dataset

3.5 Compare multiple classifiers

3.6 Understand more about feature importances

3.7 Use multiclass classification

3.8 Understand prediction probabilities and decision boundaries
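
A hedged sketch of these classification ideas, using the bundled iris data as a stand-in for the course's data sets: a common fit/predict/score API shared by classifiers, feature importances from a decision tree, and multiclass prediction probabilities.

    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.neighbors import KNeighborsClassifier

    X, y = load_iris(return_X_y=True)                      # three classes
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # 3.3/3.5: every classifier exposes the same fit/predict/score API
    for clf in (DecisionTreeClassifier(max_depth=3, random_state=0),
                KNeighborsClassifier()):
        clf.fit(X_train, y_train)
        print(type(clf).__name__, clf.score(X_test, y_test))

    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
    print(tree.feature_importances_)        # 3.1/3.6: which features matter
    print(tree.predict_proba(X_test[:3]))   # 3.8: per-class probabilities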

Lesson 4: Regression

Learning objectives

4.1 Sample data sets in scikit-learn

4.2 Compare a gaggle of regressors

4.3 Use linear models

4.4 Understand the pitfalls of linear models

4.5 Use non-linear regressors
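
A minimal version of the kind of comparison Lesson 4 performs (the data set and the particular regressors are my own choices, for illustration): fit a few linear and non-linear regressors on a bundled data set and compare their test-set R^2 scores.

    from sklearn.datasets import load_diabetes
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LinearRegression, Ridge
    from sklearn.ensemble import RandomForestRegressor

    X, y = load_diabetes(return_X_y=True)      # 4.1: a sample data set
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    regressors = [LinearRegression(), Ridge(alpha=1.0),            # 4.3: linear
                  RandomForestRegressor(n_estimators=100, random_state=0)]  # 4.5
    for reg in regressors:                      # 4.2: a (small) gaggle
        reg.fit(X_train, y_train)
        print(type(reg).__name__, reg.score(X_test, y_test))  # R^2 on test data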

Lesson 5: Clustering

Learning objectives

5.1 Compare clustering algorithms

5.2 Cluster to test a hypothesis

5.3 Cluster into N classes

5.4 Cluster into an unknown number of categories

5.5 Use density-based clustering: DBSCAN and HDBSCAN

5.6 Evaluate clustering
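
A minimal clustering sketch along these lines (the synthetic blobs and parameter values are invented for illustration; HDBSCAN itself lives in the separate hdbscan package or, in recent scikit-learn releases, sklearn.cluster.HDBSCAN): KMeans when the number of clusters is known, DBSCAN when it is not, and a silhouette score to evaluate.

    from sklearn.datasets import make_blobs
    from sklearn.cluster import KMeans, DBSCAN
    from sklearn.metrics import silhouette_score

    X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

    kmeans_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)  # 5.3
    dbscan_labels = DBSCAN(eps=0.8, min_samples=5).fit_predict(X)   # 5.4/5.5: finds N itself

    print("KMeans silhouette:", silhouette_score(X, kmeans_labels))   # 5.6: evaluation
    print("DBSCAN clusters found:", len(set(dbscan_labels) - {-1}))   # -1 marks noise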

Lesson 6: Hyperparameters

Learning objectives

6.1 Explore one hyperparameter

6.2 Explore many hyperparameters

6.3 Use GridSearchCV
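
A toy GridSearchCV sketch (the grid values and estimator are my own choices, not the course's): search a small grid of hyperparameters with cross-validation and report the best combination.

    from sklearn.datasets import load_iris
    from sklearn.model_selection import GridSearchCV
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)

    param_grid = {"max_depth": [2, 3, 5, None],        # 6.2: many hyperparameters
                  "min_samples_leaf": [1, 3, 5]}

    search = GridSearchCV(DecisionTreeClassifier(random_state=0),
                          param_grid, cv=5)             # 6.3: GridSearchCV
    search.fit(X, y)
    print(search.best_params_, search.best_score_)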

Lesson 7: Feature Engineering and Feature Selection

Learning objectives

7.1 Understand a synthetic example

7.2 Understand dimensionality reduction

7.3 Use principal component analysis (PCA)

7.4 Use other decompositions: NMF, LDA, ICA, t-dist

7.5 Implement feature selection: Univariate

7.6 Implement feature selection: Model-based

7.7 Understand dimensionality expansion (polynomial features)

7.8 Use one-hot encoding

7.9 Scale with StandardScaler, RobustScaler, MinMaxScaler, Normalizer, and others

7.10 Bin values with quantiles or binarize
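
A compact sketch touching several of these transformers (toy data and parameter choices are mine, for illustration): scaling, PCA, univariate selection, polynomial expansion, one-hot encoding, and quantile binning.

    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.preprocessing import (StandardScaler, OneHotEncoder,
                                       PolynomialFeatures, KBinsDiscretizer)
    from sklearn.decomposition import PCA
    from sklearn.feature_selection import SelectKBest, f_classif

    X, y = load_iris(return_X_y=True)

    X_scaled = StandardScaler().fit_transform(X)              # 7.9: scaling
    X_pca = PCA(n_components=2).fit_transform(X_scaled)       # 7.3: PCA
    X_best = SelectKBest(f_classif, k=2).fit_transform(X, y)  # 7.5: univariate selection
    X_poly = PolynomialFeatures(degree=2).fit_transform(X)    # 7.7: dimensionality expansion

    colors = np.array([["red"], ["green"], ["red"]])
    print(OneHotEncoder().fit_transform(colors).toarray())    # 7.8: one-hot encoding

    X_binned = KBinsDiscretizer(n_bins=4, encode="ordinal").fit_transform(X)  # 7.10: quantile bins
    print(X_pca.shape, X_best.shape, X_poly.shape, X_binned.shape)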

Lesson 8: Pipelines

Learning objectives

8.1 Understand imperative sequential processing

8.2 Use pipelines

8.3 Do pipelines with grid search
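
A minimal pipeline sketch (the particular steps are my own choices) showing the step__parameter naming convention that lets grid search reach inside a pipeline, in contrast to writing each transformation as an imperative sequence of statements.

    from sklearn.datasets import load_iris
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import GridSearchCV

    X, y = load_iris(return_X_y=True)

    # 8.2: scale, reduce, classify as one estimator
    pipe = make_pipeline(StandardScaler(), PCA(), LogisticRegression(max_iter=1000))

    param_grid = {"pca__n_components": [2, 3],
                  "logisticregression__C": [0.1, 1.0, 10.0]}

    search = GridSearchCV(pipe, param_grid, cv=5)   # 8.3: pipelines with grid search
    search.fit(X, y)
    print(search.best_params_, search.best_score_)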

Lesson 9: Robust Train/Test Splits

Learning objectives

9.1 Understand splitting

9.2 Understand multiple splitting: KFold, LeaveOneOut, StratifiedKFold, etc.

9.3 Use cross validation
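
A short sketch of explicit splitters plus the cross_val_score convenience function (the model and splitter choices are mine, for illustration): each splitter yields multiple train/test partitions, and cross-validation averages the scores across them.

    from sklearn.datasets import load_iris
    from sklearn.model_selection import KFold, StratifiedKFold, cross_val_score
    from sklearn.linear_model import LogisticRegression

    X, y = load_iris(return_X_y=True)
    model = LogisticRegression(max_iter=1000)

    # 9.2: different splitting strategies; 9.3: cross-validated scoring
    for splitter in (KFold(n_splits=5, shuffle=True, random_state=0),
                     StratifiedKFold(n_splits=5, shuffle=True, random_state=0)):
        scores = cross_val_score(model, X, y, cv=splitter)
        print(type(splitter).__name__, scores.mean())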

Summary
