Python Parallel Programming Solutions

English | MP4 | AVC 1280×720 | AAC 48KHz 2ch | 4 Hours | 2.12 GB

Master efficient parallel programming to build powerful applications using Python

This course will teach you parallel programming techniques using examples in Python and help you explore the many ways in which you can write code that runs more than one task at once.

We start by introducing you to the world of parallel computing and then cover the fundamentals in Python. Next, we explore the thread-based parallelism model with the Python threading module, synchronizing threads using locks, mutexes, semaphores, and queues, and looking at the GIL and the thread pool. You will then move on to process-based parallelism, where you will synchronize processes using message passing and learn about the performance of the mpi4py Python module.
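
As a taste of what the threading part covers, here is a minimal sketch (the worker function and shared counter are illustrative, not code from the course) that combines a threading.Lock for synchronization with a queue.Queue for thread communication:

    import threading
    import queue

    counter = 0                                # shared state, guarded by counter_lock
    counter_lock = threading.Lock()
    work_queue = queue.Queue()                 # thread-safe channel for handing out work

    def worker():
        global counter
        while True:
            item = work_queue.get()
            if item is None:                   # sentinel value: shut this worker down
                break
            with counter_lock:                 # only one thread updates the counter at a time
                counter += item

    threads = [threading.Thread(target=worker) for _ in range(4)]
    for t in threads:
        t.start()

    for i in range(100):
        work_queue.put(i)
    for _ in threads:
        work_queue.put(None)                   # one sentinel per worker thread

    for t in threads:
        t.join()

    print(counter)                             # sum of 0..99 == 4950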

Moving on, you'll get to grips with the asynchronous parallel programming model using the Python asyncio module and see how to handle exceptions. You will discover distributed computing with Python, learning how to install a broker, use the Celery Python module, and create a worker. You will also get to know PyCSP, the SCOOP framework, and remote-object modules such as Pyro4 and RPyC. Further on, you will get hands-on with GPU programming in Python using the PyCUDA module and evaluate its performance limitations.
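
As a flavour of the asynchronous model, a minimal asyncio sketch follows (the fetch coroutine, task names, and delays are made up for illustration); it shows one way to handle exceptions by gathering tasks with return_exceptions=True so a single failure does not cancel the rest:

    import asyncio

    async def fetch(name, delay, fail=False):
        await asyncio.sleep(delay)             # stand-in for real asynchronous I/O
        if fail:
            raise RuntimeError(f"{name} failed")
        return f"{name} done"

    async def main():
        tasks = [
            fetch("task-1", 0.1),
            fetch("task-2", 0.2, fail=True),
        ]
        # return_exceptions=True turns raised exceptions into results we can inspect
        results = await asyncio.gather(*tasks, return_exceptions=True)
        for result in results:
            if isinstance(result, Exception):
                print("handled:", result)
            else:
                print(result)

    asyncio.run(main())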

What You Will Learn

  • Synchronize multiple threads and processes to manage parallel tasks
  • Implement message passing communication between processes to build parallel applications (see the sketch after this list)
  • Program your own GPU cards to address complex problems
  • Manage computing entities to execute distributed computational tasks
  • Write efficient programs by adopting the event-driven programming model
  • Explore cloud technology with Django and Google App Engine
  • Apply parallel programming techniques that can lead to performance improvements
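
For the message-passing point above, here is a minimal multiprocessing sketch (the square_worker function and queue names are illustrative, not code from the course) in which a parent process and a child process exchange data over queues:

    from multiprocessing import Process, Queue

    def square_worker(inbox, outbox):
        # receive work over one queue and send results back over another
        for value in iter(inbox.get, None):    # None acts as the stop sentinel
            outbox.put(value * value)

    if __name__ == "__main__":
        inbox, outbox = Queue(), Queue()
        proc = Process(target=square_worker, args=(inbox, outbox))
        proc.start()

        for n in range(5):
            inbox.put(n)
        inbox.put(None)                        # tell the worker to finish

        results = [outbox.get() for _ in range(5)]
        proc.join()
        print(results)                         # [0, 1, 4, 9, 16]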

Table of Contents

Getting Started with Parallel Computing and Python
01. The Parallel Computing Memory Architecture
02. Memory Organization
03. Memory Organization (Continued)
04. Parallel Programming Models
05. Designing a Parallel Program
06. Evaluating the Performance of a Parallel Program
07. Introducing Python
08. Working with Processes in Python
09. Working with Threads in Python

Thread-Based Parallelism
10. Defining a Thread
11. Determining the Current Thread
12. Using a Thread in a Subclass
13. Thread Synchronization with Lock
14. Thread Synchronization with RLock
15. Thread Synchronization with Semaphores
16. Thread Synchronization with a Condition
17. Thread Synchronization with an Event
18. Using the with Statement
19. Thread Communication Using a Queue
20. Evaluating the Performance of Multithread Applications

Process-Based Parallelism
21. Spawning a Process
22. Naming a Process
23. Running a Process in the Background
24. Killing a Process
25. Using a Process in a Subclass
26. Exchanging Objects between Processes
27. Synchronizing Processes
28. Managing a State between Processes
29. Using a Process Pool
30. Using the mpi4py Python Module
31. Point-to-Point Communication
32. Avoiding Deadlock Problems
33. Using Broadcast for Collective Communication
34. Using Scatter for Collective Communication
35. Using Gather for Collective Communication
36. Using Alltoall for Collective Communication
37. The Reduction Operation
38. Optimizing the Communication

Asynchronous Programming
39. Using the concurrent.futures Python Modules
40. Event Loop Management with Asyncio
41. Handling Coroutines with Asyncio
42. Manipulating a Task with Asyncio
43. Dealing with Asyncio and Futures

Distributed Python
44. Using Celery to Distribute Tasks
45. Creating a Task with Celery
46. Scientific Computing with SCOOP
47. Handling Map Functions with SCOOP
48. Remote Method Invocation with Pyro4
49. Chaining Objects with Pyro4
50. Developing a Client-Server Application with Pyro4
51. Communicating Sequential Processes with PyCSP
52. A Remote Procedure Call with RPyC

GPU Programming with Python
53. Using the PyCUDA Module
54. Building a PyCUDA Application
55. Understanding the PyCUDA Memory Model with Matrix Manipulation
56. Kernel Invocations with GPU Array
57. Evaluating Element-Wise Expressions with PyCUDA
58. The MapReduce Operation with PyCUDA
59. GPU Programming with NumbaPro
60. Using GPU-Accelerated Libraries with NumbaPro
61. Using the PyOpenCL Module
62. Building a PyOpenCL Application
63. Evaluating Element-Wise Expressions with PyOpenCL
64. Testing Your GPU Application with PyOpenCL