Machine Learning with Core ML in iOS 11

English | MP4 | AVC 1920×1080 | AAC 48KHz 2ch | 2h 02m | 400 MB

Machine learning: a branch of artificial intelligence based on the idea that machines should be able to learn and adapt through experience. Core ML is Apple's framework for running such models on iOS and macOS.

Core ML is an exciting new framework that makes running machine learning and statistical models on macOS and iOS feel natively supported. It helps developers integrate pre-trained statistical and machine learning models into their apps, so you can create applications with machine learning functionality built in.

Developers want to learn how to use the features of Core ML to make their applications smarter. These videos will show you how to integrate machine learning into real-world applications. You will design the UI and create a tap gesture recognizer using AVFoundation. You will import Python ML libraries such as TensorFlow, Keras, and scikit-learn into the Spyder IDE, connect Caffe dependencies, and configure Caffe.
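Before starting the conversion work, it helps to confirm the Python ML libraries mentioned above are actually importable. This is a minimal sketch of such an environment check (the module list matches the libraries named in the course; run it from any Python console, e.g. Spyder's):

```python
# Check which of the course's Python ML libraries are importable
# in the current environment, without actually importing the heavy ones.
import importlib.util

def available(module_name):
    """Return True if the named module can be found by the import system."""
    return importlib.util.find_spec(module_name) is not None

for name in ("sklearn", "tensorflow", "keras"):
    status = "available" if available(name) else "not installed"
    print(f"{name}: {status}")
```

Any library reported as "not installed" can typically be added through Anaconda or pip before continuing.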

You will convert a scikit-learn model trained on the Iris dataset into a Core ML model and use it in Xcode in your apps. You can also search for existing models and convert them into Core ML models so that you can explore them inside Xcode and add their functionality to your apps. You will have the power to build apps that appear to learn from the information provided by these models. Wow! This is powerful.
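As a sketch of the conversion workflow described above: train a scikit-learn classifier on the Iris dataset, then export it with coremltools. This assumes scikit-learn and coremltools are installed; the feature and output names (`sepal_length`, `species`, etc.) are illustrative choices, not names prescribed by the course.

```python
# Train a simple scikit-learn classifier on the Iris dataset.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

iris = load_iris()
clf = DecisionTreeClassifier(random_state=0)
clf.fit(iris.data, iris.target)

# Convert the trained model to a .mlmodel file that Xcode can import.
# coremltools is only needed for this export step, so it is guarded.
try:
    import coremltools

    mlmodel = coremltools.converters.sklearn.convert(
        clf,
        # Hypothetical feature names for the four Iris measurements:
        input_features=["sepal_length", "sepal_width",
                        "petal_length", "petal_width"],
        output_feature_names="species",
    )
    mlmodel.save("Iris.mlmodel")  # drag this file into your Xcode project
except ImportError:
    print("coremltools not installed; skipping the Core ML export")
```

Once `Iris.mlmodel` is added to an Xcode project, Xcode generates a Swift class for it, which is how the model's predictions become callable from your app code.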

By the end of this course, you will be fluent in the Core ML framework. The videos provide the tools you need to get up and running as quickly as possible.

This course is a perfect mix of concepts and practice that will help you develop a real-world iOS 11 application from scratch. With a firm grounding in the fundamentals of the Swift language, and knowledge of how to use its key frameworks, you will be able to build an interesting application.

What You Will Learn

  • Use the Core ML framework in your applications
  • Choose the right type of project to build in Xcode 9
  • Apply best practices for building and using trained ML models
  • Use pre-existing trained models suitable for your applications
  • Analyze images using the Vision framework
  • Follow best practices for building great ML experiences
  • Develop an application with ML functionality

Table of Contents

Introduction and Designing the UI
1 The Course Overview
2 How Is Machine Learning Changing the World We Live In
3 Designing the UI
4 Coding Custom Classes

AVFoundation, AVCaptureSession, and TapGestureRecognizer
5 AVFoundation and AVCaptureSession
6 Coding Our Do-Catch Block
7 Optionals Review and Testing Our App
8 Instantiating a TapGestureRecognizer

Training and Converting an ML Model to a Core ML Model
9 Training a Core ML Model and Using Vision Framework
10 Learn How to Use Core ML Model Inside Our Project UI
11 Converting a Trained Model to .mlmodel

Importing Python ML Libraries in Spyder and Configuring Caffe
12 Installing Anaconda IDE and Spyder Console
13 Exploring the Python ML Libraries from the Terminal
14 Installing and Configuring Caffe Dependencies and Packages
15 Editing the OpenCV File in Python

Converting a Scikit-Learn Model to Core ML and Using It in Xcode
16 Converting a Scikit-Learn Model to Core ML
17 Using Coremltools for Conversion
18 Exploring the Converted Machine Learning Model
19 The Bonus Video — A Discussion on iOS Processes