scikit-learn – Test Predictions Using Various Models

English | MP4 | AVC 1920×1080 | AAC 44 kHz 2ch | 2h 12m | 578 MB

A one-stop solution to test model accuracy with cross-validation

Scikit-learn has evolved into a robust library for machine learning applications in Python, with support for a wide range of supervised and unsupervised learning algorithms.

This course begins with linear models: using scikit-learn, you will take a machine learning approach to linear regression. As you progress, you will explore logistic regression, and then build models with distance metrics, including clustering. You will also look at cross-validation and post-model workflows, where you will learn how to select a model that predicts well. Finally, you will work with Support Vector Machines (SVMs) to get a rough idea of how they work, and learn about the radial basis function (RBF) kernel.
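To give a flavor of the course's starting point, here is a minimal sketch (not taken from the course videos) of fitting a line through synthetic data with scikit-learn and scoring it with cross-validation:

```python
# Sketch: fit a line through noisy data, then evaluate with 5-fold cross-validation.
# The synthetic data (y = 3x + 2 plus noise) is an assumption for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 1))
y = 3.0 * X.ravel() + 2.0 + rng.normal(scale=1.0, size=200)

model = LinearRegression()
# cross_val_score refits the model on each fold and returns one R^2 per fold
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(scores.mean())
```

Because the data is almost perfectly linear, the cross-validated R² lands close to 1; with messier real-world data this is exactly where the course's discussion of shortfalls and regularization comes in.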

Style and Approach
This course consists of practical videos on scikit-learn aimed at novices as well as intermediate users. It explores technical issues in depth, covers additional workflows, and supplies many real-life examples so that you can apply scikit-learn in your daily work.

What You Will Learn

  • Evaluate and overcome shortfalls of the linear regression model.
  • Use sparsity to regularize models.
  • Handle data and quantize an image.
  • Search with scikit-learn.
  • Optimize an SVM.
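As a rough illustration of the last two points, optimizing an SVM in scikit-learn typically means searching over its hyperparameters. This sketch (an assumption about the course's approach, using the bundled breast cancer dataset mentioned in the contents) tunes an RBF-kernel SVM with GridSearchCV:

```python
# Sketch: grid-search the C and gamma hyperparameters of an RBF-kernel SVM.
# Scaling features first matters for SVMs, hence the pipeline.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
pipe = make_pipeline(StandardScaler(), SVC(kernel="rbf"))

# Parameter names are prefixed with the pipeline step name ("svc__")
param_grid = {"svc__C": [0.1, 1, 10], "svc__gamma": ["scale", 0.01, 0.1]}
search = GridSearchCV(pipe, param_grid, cv=5)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```

GridSearchCV combines the search and cross-validation themes of the course: each parameter combination is scored by cross-validation, and the best estimator is refit on the full dataset.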
Table of Contents

01 The Course Overview
02 Fitting a Line Through Data
03 Evaluating and Overcoming Shortfalls of the Linear Regression Model
04 Optimizing the Ridge Regression Parameter
05 Using Sparsity to Regularize Models
06 Fundamental Approach to Regularization with LARS
07 Exploring Various Repositories and Datasets
08 Logistic Regression and Confusion Matrix
09 Varying the Classification Threshold in Logistic Regression
10 Analyzing and Plotting an ROC Curve Without Context
11 UCI Breast Cancer Dataset
12 Using k-means to Categorize Points into Clusters
13 Handling Data and Quantizing an Image
14 Finding the Closest Object in the Feature Space
15 Probabilistic Clustering with Gaussian Mixture Models
16 Using k-means for Outlier Detection
17 Using KNN for Regression
18 Cross-Validation
19 Search with scikit-learn
20 Metrics
21 Dummy Estimators and Persisting Models with joblib
22 Feature Selection
23 Classifying Data with a Linear SVM
24 Optimizing an SVM
25 Multiclass Classification with SVM
26 Support Vector Regression