English | 2020 | ISBN: 978-1800203587 | 374 Pages | PDF, EPUB, MOBI | 885 MB
A practical guide to learning visual perception for self-driving cars, written for computer vision and autonomous-systems engineers
The visual perception capabilities of a self-driving car are powered by computer vision. The work behind self-driving cars can be broadly classified into three components: robotics, computer vision, and machine learning. This book gives computer vision engineers and developers the opportunity to move into this booming field.
You will learn about computer vision, deep learning, and depth perception applied to driverless cars. The book provides a structured and thorough introduction, as making a real self-driving car is a huge cross-functional effort. As you progress, you will cover relevant cases with working code, before going on to understand how to use OpenCV, TensorFlow, and Keras to analyze video streams from car cameras. Later, you will learn how to interpret and make the most of lidar (light detection and ranging) data to identify obstacles and localize your position. You'll even be able to tackle core challenges in self-driving cars such as finding lanes, detecting pedestrians and crossing lights, performing semantic segmentation, and writing a PID controller.
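To give a flavor of the lane-finding work mentioned above, here is a minimal pure-Python sketch of one step in a typical pipeline: fitting a straight line to edge pixels that belong to a lane marking. In practice OpenCV routines (edge detection and line extraction) would supply those pixels; the coordinates below are hypothetical and the `fit_line` helper is illustrative, not code from the book.

```python
def fit_line(points):
    """Least-squares fit of x = m*y + b through (x, y) edge pixels.
    Lane lines are near-vertical in image space, so we regress x on y
    to avoid the infinite slope a vertical line would have in y = m*x + b."""
    n = len(points)
    mean_x = sum(p[0] for p in points) / n
    mean_y = sum(p[1] for p in points) / n
    num = sum((p[1] - mean_y) * (p[0] - mean_x) for p in points)
    den = sum((p[1] - mean_y) ** 2 for p in points)
    m = num / den
    b = mean_x - m * mean_y
    return m, b

# Hypothetical edge pixels from a left lane marking; y grows downward in images
left_edge = [(100, 300), (110, 350), (120, 400), (130, 450)]
m, b = fit_line(left_edge)

# Extrapolate to the bottom row of a 480-pixel-tall image to anchor the overlay
x_bottom = m * 479 + b
```

Regressing x on y is a common trick for near-vertical lane lines; a full pipeline would add perspective warping and curvature fitting on top of this.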
By the end of this book, you’ll be equipped with the skills you need to write code for a self-driving car running in a driverless car simulator, and be able to tackle various challenges faced by autonomous car engineers.
What you will learn
- Understand how to perform camera calibration
- Become well-versed with how lane detection works in self-driving cars using OpenCV
- Explore behavioral cloning by training a car to drive itself in a video-game simulator
- Get to grips with using lidars
- Discover how to configure the controls for autonomous vehicles
- Use object detection and semantic segmentation to locate lanes, cars, and pedestrians
- Write a PID controller to control a self-driving car running in a simulator
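The PID controller mentioned in the last bullet can be sketched in a few lines. This is a generic textbook PID, not the book's implementation; the gains and the one-line plant model in the demo are made-up values chosen only so the toy loop converges.

```python
class PID:
    """Classic PID controller: output = Kp*e + Ki*integral(e) + Kd*de/dt."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy usage: steer toward the lane center (cross-track error -> steering command).
# Gains and the plant model below are illustrative, not tuned for any real car.
pid = PID(kp=1.0, ki=0.01, kd=0.1)
position, target, dt = 1.0, 0.0, 0.1
for _ in range(100):
    steering = pid.update(target - position, dt)
    position += steering * dt  # crude stand-in for the vehicle dynamics
```

In a simulator, `position` would come from perception (e.g. the fitted lane center) and `steering` would be sent to the car, with gains tuned empirically.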