Data Engineering Foundations LiveLessons Part 1: Using Spark, Hive, and Hadoop Scalable Tools

English | MP4 | AVC 1280×720 | AAC 48KHz 2ch | 30 Lessons (6h 46m) | 3.92 GB

The perfect way to get started with scalable data engineering tools. All tools and examples are presented using a practical hands-on approach that can be reproduced on a freely provided virtual machine. By the completion of these LiveLessons, participants will have gained the understanding and experience to begin working within the big data engineering ecosystem.

Data Engineering Foundations Part 1: Using Spark, Hive, and Hadoop Scalable Tools LiveLessons provides over six hours of video introducing you to the Apache Hadoop big data ecosystem. The tutorial includes background information and demonstrates the core components of data engineering and scalability, including Apache PySpark, Hadoop, the Hadoop Distributed File System (HDFS), MapReduce, Hive, and the Zeppelin web notebook. It also covers the use of basic Linux command line analytic tools. All lesson examples and open-source software used in these LiveLessons are freely available on a companion virtual machine that enables continued exploration of the lesson examples.

Learn How To

  • Understand basic data engineering concepts
  • Understand Apache Hadoop, MapReduce, and Spark operation
  • Understand scalable systems
  • Use Linux command line analytic tools
  • Use Apache Zeppelin web notebooks with different tools
  • Use Apache Hadoop and the Hadoop Distributed File System
  • Use Apache Hadoop MapReduce with Python
  • Use the Apache Hive Scalable Database
  • Use Apache PySpark with MapReduce
  • Use Apache PySpark with dataframes and Hive tables

Lesson 1: Background Concepts
In Lesson 1, Doug introduces you to the important concepts you need to know to understand the big data, Hadoop, and Spark ecosystem. He begins with a description of big data and big data analytics concepts and then presents Hadoop as a big data platform. He finishes the lesson with the basics of Hadoop MapReduce and the Spark language.

Lesson 2: Working with Scalable Systems
In Lesson 2, Doug introduces you to working with scalable systems. The lesson starts with scalable computing concepts and then turns to a freely available Linux-based virtual machine that runs on most laptop and desktop systems. Using this virtual machine, you can run most of the examples in the lessons. Doug also uses the virtual machine to demonstrate some of the Linux command line analytic tools and to introduce the Zeppelin web notebook.
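
As a taste of the command line analytics this lesson demonstrates, here is a minimal Python sketch that reproduces the classic shell word-count pipeline; the input file name is a placeholder.

```python
#!/usr/bin/env python3
# Pure-Python equivalent of the classic shell pipeline:
#   tr -s ' ' '\n' < war-and-peace.txt | sort | uniq -c | sort -rn | head
# The input file name is a placeholder; any plain-text file works.
from collections import Counter

with open("war-and-peace.txt") as f:
    words = f.read().split()          # split on whitespace, like tr -s ' ' '\n'

counts = Counter(words)               # tally occurrences, like sort | uniq -c

# Print the ten most common words, like sort -rn | head
for word, count in counts.most_common(10):
    print(f"{count:7d} {word}")
```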

Lesson 3: Using the Hadoop HDFS File System
Doug explains the Hadoop Distributed File System (HDFS) in Lesson 3. He also presents a quick start on using the HDFS command line tools and concludes the lesson by explaining how to use the HDFS web interface.
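
Because the HDFS tools mirror familiar Unix file commands, a few lines are enough to show the pattern. The Python sketch below drives some common `hdfs dfs` subcommands with subprocess; it assumes a working Hadoop client such as the one on the companion virtual machine, and all paths are placeholders.

```python
#!/usr/bin/env python3
# Drive a few common HDFS file commands from Python via subprocess.
# Assumes the `hdfs` client is on the PATH (as on the companion VM);
# the local and HDFS paths below are placeholders.
import subprocess

def hdfs(*args):
    """Run an `hdfs dfs` subcommand and stop on any error."""
    subprocess.run(["hdfs", "dfs", *args], check=True)

hdfs("-mkdir", "-p", "/user/hands-on/data")            # create a directory
hdfs("-put", "local-file.txt", "/user/hands-on/data")  # copy a local file in
hdfs("-ls", "/user/hands-on/data")                     # list the directory
hdfs("-cat", "/user/hands-on/data/local-file.txt")     # print the file
hdfs("-get", "/user/hands-on/data/local-file.txt", "copy-back.txt")  # copy out
```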

Lesson 4: Using Hadoop MapReduce
In this lesson, Doug explains and demonstrates how to use Hadoop MapReduce. He begins with an explanation of the MapReduce algorithm and how it operates in a clustered parallel environment. Doug then demonstrates how to run MapReduce examples and use the Hadoop streaming interface on your local machine. He concludes the lesson by demonstrating Hadoop performance using a four-node Hadoop cluster and the web-based MapReduce jobs interface.
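
To make the streaming model concrete, below is a minimal word-count sketch of the kind the Hadoop streaming interface expects; the script name, HDFS paths, and sample invocation are illustrative assumptions, not the course's exact materials.

```python
#!/usr/bin/env python3
# Word count for the Hadoop streaming interface: run as `wordcount.py map`
# for the mapper and `wordcount.py reduce` for the reducer. Streaming sends
# records on stdin and reads results from stdout, one tab-separated
# key/value pair per line. Names and paths here are illustrative.
#
# Sample invocation (the streaming jar path varies by installation):
#   hadoop jar hadoop-streaming.jar \
#     -input /user/hands-on/books -output /user/hands-on/counts \
#     -mapper "wordcount.py map" -reducer "wordcount.py reduce" \
#     -file wordcount.py
import sys

def mapper():
    # Emit "word<TAB>1" for every word on every input line.
    for line in sys.stdin:
        for word in line.split():
            print(f"{word}\t1")

def reducer():
    # Input arrives sorted by key, so equal words are adjacent;
    # sum the counts over each run of identical words.
    current, total = None, 0
    for line in sys.stdin:
        word, count = line.rsplit("\t", 1)
        if word != current:
            if current is not None:
                print(f"{current}\t{total}")
            current, total = word, 0
        total += int(count)
    if current is not None:
        print(f"{current}\t{total}")

if __name__ == "__main__":
    mapper() if sys.argv[1:] == ["map"] else reducer()
```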

Lesson 5: Using the Hive Scalable Database
In Lesson 5, Doug introduces the Hive scalable database. Based on Hadoop MapReduce, Hive is used to derive a new feature from an existing dataset. This important data engineering process is demonstrated from both the command line and the Zeppelin web notebook.
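
To give a sense of this feature-derivation step, the hedged HiveQL sketch below adds a computed column to a hypothetical table; it is run here through PySpark's Hive support, and all table and column names are placeholders.

```python
# A hedged HiveQL sketch: derive a new feature (trip speed) from existing
# columns of a hypothetical `trips` table. Assumes a Hive-enabled Spark
# build, as on the companion VM; table and column names are placeholders.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("hive-feature")
         .enableHiveSupport()
         .getOrCreate())

# Create a derived table with a new column computed from existing ones.
spark.sql("""
    CREATE TABLE IF NOT EXISTS trips_enriched AS
    SELECT *,
           distance_km / (duration_min / 60.0) AS speed_kmh
    FROM trips
""")

spark.sql("SELECT speed_kmh FROM trips_enriched LIMIT 5").show()
```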

Lesson 6: Using Apache PySpark
In the final lesson of Part 1, Doug introduces PySpark. Based on the underlying Spark language, PySpark enables Python programmers to learn scalable data engineering. Before the hands-on lessons, Doug provides a solid introduction to Spark and PySpark operations. This background includes using the Spark web interface and managing a SparkSession and a SparkContext for distributed operation. Examples of MapReduce programming and DataFrame operations are presented from both the command line and a Zeppelin notebook. The lesson concludes with the operations needed to transfer data between PySpark and Hive database tables.
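
As a preview of that flow, here is a hedged PySpark sketch that opens a SparkSession, uses its SparkContext for an RDD-style MapReduce word count, converts the result to a DataFrame, and saves it as a Hive table; the file, table, and application names are placeholders.

```python
from pyspark.sql import SparkSession

# A SparkSession is the single entry point; its SparkContext exposes the
# low-level RDD (MapReduce-style) API. Names and paths are placeholders.
spark = (SparkSession.builder
         .appName("pyspark-preview")
         .enableHiveSupport()
         .getOrCreate())
sc = spark.sparkContext

# MapReduce-style word count on an RDD.
counts = (sc.textFile("sample.txt")
            .flatMap(lambda line: line.split())   # map: one record per word
            .map(lambda word: (word, 1))          # map: (word, 1) pairs
            .reduceByKey(lambda a, b: a + b))     # reduce: sum per word
print(counts.take(5))

# The same data as a DataFrame, then saved as a Hive table.
df = counts.toDF(["word", "count"])
df.write.mode("overwrite").saveAsTable("word_counts")
spark.sql("SELECT * FROM word_counts ORDER BY count DESC LIMIT 5").show()
```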

Table of Contents

1 Data Engineering Foundations – Introduction
2 Learning objectives
3 Understand big data and data analytics concepts
4 Understand Hadoop as a big data platform
5 Understand Hadoop MapReduce basics
6 Understand Spark language basics
7 Learning objectives
8 Understand scalable concepts
9 Emulate scalable systems
10 Use Linux command line analytics tools
11 Use the Zeppelin web notebook
12 Learning objectives
13 Understand HDFS basics
14 Use HDFS command line tools
15 Use the HDFS web interface
16 Learning objectives
17 Understand the MapReduce paradigm and platform
18 Understand parallel MapReduce
19 Run MapReduce examples
20 Use the Streaming interface
21 Use the MapReduce (YARN) web interface
22 Learning objectives
23 Run a Hive ‘SQL’ example using the command line
24 Run a Hive example using a Zeppelin notebook
25 Learning objectives
26 Understand Spark language basics
27 Understand SparkSession and Context
28 Use PySpark for MapReduce programming
29 Run a PySpark example using a Zeppelin notebook
30 Data Engineering Foundations – Summary
