Machine Learning Specialization by DeepLearning.AI

English | MP4 | AVC 1280×720 | AAC 44 kHz 2ch | 157 Lessons (23h 5m) | 3.45 GB

#BreakIntoAI with the Machine Learning Specialization. Master fundamental AI concepts and develop practical machine learning skills in this beginner-friendly, 3-course program by AI visionary Andrew Ng.

WHAT YOU WILL LEARN

Build ML models with NumPy & scikit-learn, and build & train supervised models for prediction & binary classification tasks (linear and logistic regression; see the sketch after this list)

Build & train a neural network with TensorFlow to perform multi-class classification, & build & use decision trees & tree ensemble methods

Apply best practices for ML development & use unsupervised learning techniques, including clustering & anomaly detection

Build recommender systems with a collaborative filtering approach & a content-based deep learning method & build a deep reinforcement learning model
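
As a taste of the first bullet, here is a minimal sketch, not course material, of the kind of scikit-learn workflow it refers to: fitting a linear regression for prediction and a logistic regression for binary classification. The tiny synthetic datasets and the single-feature setup are illustrative assumptions.

    # Minimal sketch of the scikit-learn workflow described above (illustrative only).
    import numpy as np
    from sklearn.linear_model import LinearRegression, LogisticRegression

    # Regression: predict a numeric target from one feature.
    X = np.array([[1.0], [2.0], [3.0], [4.0]])   # feature (e.g. house size)
    y = np.array([300.0, 500.0, 700.0, 900.0])   # target (e.g. price)
    reg = LinearRegression().fit(X, y)
    print(reg.predict([[5.0]]))                  # -> [1100.] for this exactly linear data

    # Binary classification: predict a 0/1 label from one feature.
    Xc = np.array([[0.5], [1.0], [1.5], [3.0], [3.5], [4.0]])
    yc = np.array([0, 0, 0, 1, 1, 1])
    clf = LogisticRegression().fit(Xc, yc)
    print(clf.predict([[2.0]]), clf.predict_proba([[2.0]]))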

The Machine Learning Specialization is a foundational online program created in collaboration between DeepLearning.AI and Stanford Online. This beginner-friendly program will teach you the fundamentals of machine learning and how to use these techniques to build real-world AI applications.

This Specialization is taught by Andrew Ng, an AI visionary who has led critical research at Stanford University and groundbreaking work at Google Brain, Baidu, and Landing.AI to advance the AI field.

It provides a broad introduction to modern machine learning, including supervised learning (multiple linear regression, logistic regression, neural networks, and decision trees), unsupervised learning (clustering, dimensionality reduction, recommender systems), and some of the best practices used in Silicon Valley for artificial intelligence and machine learning innovation (evaluating and tuning models, taking a data-centric approach to improving performance, and more).

By the end of this Specialization, you will have mastered key concepts and gained the practical know-how to quickly and powerfully apply machine learning to challenging real-world problems. If you’re looking to break into AI or build a career in machine learning, the new Machine Learning Specialization is the best place to start.

By the end of this Specialization, you will be ready to:

  • Build machine learning models in Python using popular machine learning libraries NumPy and scikit-learn.
  • Build and train supervised machine learning models for prediction and binary classification tasks, including linear regression and logistic regression.
  • Build and train a neural network with TensorFlow to perform multi-class classification (see the sketch after this list).
  • Apply best practices for machine learning development so that your models generalize to data and tasks in the real world.
  • Build and use decision trees and tree ensemble methods, including random forests and boosted trees.
  • Use unsupervised learning techniques, including clustering and anomaly detection.
  • Build recommender systems with a collaborative filtering approach and a content-based deep learning method.
  • Build a deep reinforcement learning model.
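
For the TensorFlow skill listed above, here is a minimal sketch (not code from the course) of training a small neural network for multi-class classification with a softmax output. The layer sizes, the synthetic three-class data, and the training settings are illustrative assumptions.

    # Minimal sketch: a small TensorFlow network for multi-class classification.
    import numpy as np
    import tensorflow as tf

    # Synthetic data: 300 examples, 2 features, 3 classes (0, 1, or 2) -- illustrative only.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 2)).astype("float32")
    y = (X[:, 0] + X[:, 1] > 0).astype("int32") + (X[:, 0] > 1).astype("int32")

    # A small dense network with a linear output layer (logits).
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(16, activation="relu"),
        tf.keras.layers.Dense(3),
    ])
    # from_logits=True lets TensorFlow apply the softmax inside the loss,
    # which is the numerically stabler formulation.
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    )
    model.fit(X, y, epochs=10, verbose=0)

    # At prediction time, convert logits to class probabilities explicitly.
    probs = tf.nn.softmax(model(X[:5]))
    print(tf.argmax(probs, axis=1).numpy())

Computing the softmax from logits in the loss, rather than inside the output layer, mirrors the numerically stable approach covered in the course's "improved implementation of softmax" lesson.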
Table of Contents

advanced-learning-algorithms

neural-networks

neural-networks-intuition
1 welcome
2 neurons-and-the-brain
3 demand-prediction
4 example-recognizing-images

neural-network-model
5 neural-network-layer
6 more-complex-neural-networks
7 inference-making-predictions-forward-propagation

tensorflow-implementation
8 inference-in-code
9 data-in-tensorflow
10 building-a-neural-network

neural-network-implementation-in-python
11 forward-prop-in-a-single-layer
12 general-implementation-of-forward-propagation

speculations-on-artificial-general-intelligence-agi
13 is-there-a-path-to-agi

vectorization-optional
14 how-neural-networks-are-implemented-efficiently
15 matrix-multiplication
16 matrix-multiplication-rules
17 matrix-multiplication-code

neural-network-training

neural-network-training
18 tensorflow-implementation
19 training-details

activation-functions
20 alternatives-to-the-sigmoid-activation
21 choosing-activation-functions
22 why-do-we-need-activation-functions

multiclass-classification
23 multiclass
24 softmax
25 neural-network-with-softmax-output
26 improved-implementation-of-softmax
27 classification-with-multiple-outputs-optional

additional-neural-network-concepts
28 advanced-optimization
29 additional-layer-types

back-propagation-optional
30 what-is-a-derivative-optional
31 computation-graph-optional
32 larger-neural-network-example-optional

advice-for-applying-machine-learning

advice-for-applying-machine-learning
33 deciding-what-to-try-next
34 evaluating-a-model
35 model-selection-and-training-cross-validation-test-sets

bias-and-variance
36 diagnosing-bias-and-variance
37 regularization-and-bias-variance
38 establishing-a-baseline-level-of-performance
39 learning-curves
40 deciding-what-to-try-next-revisited
41 bias-variance-and-neural-networks

machine-learning-development-process
42 iterative-loop-of-ml-development
43 error-analysis
44 adding-data
45 transfer-learning-using-data-from-a-different-task
46 full-cycle-of-a-machine-learning-project
47 fairness-bias-and-ethics

skewed-datasets-optional
48 error-metrics-for-skewed-datasets
49 trading-off-precision-and-recall

decision-trees

decision-trees
50 decision-tree-model
51 learning-process

decision-tree-learning
52 measuring-purity
53 choosing-a-split-information-gain
54 putting-it-together
55 using-one-hot-encoding-of-categorical-features
56 continuous-valued-features
57 regression-trees-optional

tree-ensembles
58 using-multiple-decision-trees
59 sampling-with-replacement
60 random-forest-algorithm
61 xgboost
62 when-to-use-decision-trees

conversations-with-andrew-optional
63 andrew-ng-and-chris-manning-on-natural-language-processing

acknowledgments
64 acknowledgements_instructions

machine-learning

week-1-introduction-to-machine-learning

overview-of-machine-learning
65 welcome-to-machine-learning
66 applications-of-machine-learning

supervised-vs-unsupervised-machine-learning
67 what-is-machine-learning
68 supervised-learning-part-1
69 supervised-learning-part-2
70 unsupervised-learning-part-1
71 unsupervised-learning-part-2
72 jupyter-notebooks

practice-quiz-supervised-vs-unsupervised-learning

regression-model
73 linear-regression-model-part-1
74 linear-regression-model-part-2
75 cost-function-formula
76 cost-function-intuition
77 visualizing-the-cost-function
78 visualization-examples

practice-quiz-regression-model

train-the-model-with-gradient-descent
79 gradient-descent
80 implementing-gradient-descent
81 gradient-descent-intuition
82 learning-rate
83 gradient-descent-for-linear-regression
84 running-gradient-descent

practice-quiz-train-the-model-with-gradient-descent

week-2-regression-with-multiple-input-variables

multiple-linear-regression
85 multiple-features
86 vectorization-part-1
87 vectorization-part-2
88 gradient-descent-for-multiple-linear-regression

practice-quiz-multiple-linear-regression

gradient-descent-in-practice
89 feature-scaling-part-1
90 feature-scaling-part-2
91 checking-gradient-descent-for-convergence
92 choosing-the-learning-rate
93 feature-engineering
94 polynomial-regression

practice-quiz-gradient-descent-in-practice

week-2-practice-lab-linear-regression
95 week-2-practice-lab-linear-regression_instructions

week-3-classification

classification-with-logistic-regression
96 motivations
97 logistic-regression
98 decision-boundary

practice-quiz-classification-with-logistic-regression

cost-function-for-logistic-regression
99 cost-function-for-logistic-regression
100 simplified-cost-function-for-logistic-regression

practice-quiz-cost-function-for-logistic-regression

gradient-descent-for-logistic-regression
101 gradient-descent-implementation

practice-quiz-gradient-descent-for-logistic-regression

the-problem-of-overfitting
102 the-problem-of-overfitting
103 addressing-overfitting
104 cost-function-with-regularization
105 regularized-linear-regression
106 regularized-logistic-regression

practice-quiz-the-problem-of-overfitting

week-3-practice-lab-logistic-regression
107 week-3-practice-lab-logistic-regression_instructions

conversations-with-andrew-optional
108 andrew-ng-and-fei-fei-li-on-human-centered-ai

acknowledgments
109 acknowledgments_instructions

unsupervised-learning-recommenders-reinforcement-learning

unsupervised-learning

welcome-to-the-course
110 welcome

clustering
111 what-is-clustering
112 k-means-intuition
113 k-means-algorithm
114 optimization-objective
115 initializing-k-means
116 choosing-the-number-of-clusters

anomaly-detection
117 finding-unusual-events
118 gaussian-normal-distribution
119 anomaly-detection-algorithm
120 developing-and-evaluating-an-anomaly-detection-system
121 anomaly-detection-vs-supervised-learning
122 choosing-what-features-to-use

recommender-systems

collaborative-filtering
123 making-recommendations
124 using-per-item-features
125 collaborative-filtering-algorithm
126 binary-labels-favs-likes-and-clicks

recommender-systems-implementation-detail
127 mean-normalization
128 tensorflow-implementation-of-collaborative-filtering
129 finding-related-items

content-based-filtering
130 collaborative-filtering-vs-content-based-filtering
131 deep-learning-for-content-based-filtering
132 recommending-from-a-large-catalogue
133 ethical-use-of-recommender-systems
134 tensorflow-implementation-of-content-based-filtering

principal-component-analysis
135 reducing-the-number-of-features-optional
136 pca-algorithm-optional
137 pca-in-code-optional

reinforcement-learning

reinforcement-learning-introduction
138 what-is-reinforcement-learning
139 mars-rover-example
140 the-return-in-reinforcement-learning
141 making-decisions-policies-in-reinforcement-learning
142 review-of-key-concepts

state-action-value-function
143 state-action-value-function-definition
144 state-action-value-function-example
145 bellman-equation
146 random-stochastic-environment-optional

continuous-state-spaces
147 example-of-continuous-state-space-applications
148 lunar-lander
149 learning-the-state-value-function
150 algorithm-refinement-improved-neural-network-architecture
151 algorithm-refinement-greedy-policy
152 algorithm-refinement-mini-batch-and-soft-updates-optional
153 the-state-of-reinforcement-learning

summary-and-thank-you
154 summary-and-thank-you

conversations-with-andrew-optional
155 andrew-ng-and-chelsea-finn-on-ai-and-robotics

acknowledgments
156 acknowledgments_instructions
157 optional-opportunity-to-mentor-other-learners_instructions
