

Introduction to Parallel Computing

Table of Contents

  1. Abstract
  2. Overview
    1. What is Parallel Computing?
    2. Why Use Parallel Computing?
  3. Concepts and Terminology
    1. von Neumann Computer Architecture
    2. Flynn's Classical Taxonomy
    3. Some General Parallel Terminology
  4. Parallel Computer Memory Architectures
    1. Shared Memory
    2. Distributed Memory
    3. Hybrid Distributed-Shared Memory
  5. Parallel Programming Models
    1. Overview
    2. Shared Memory Model
    3. Threads Model
    4. Message Passing Model
    5. Data Parallel Model
    6. Other Models
  6. Designing Parallel Programs
    1. Automatic vs. Manual Parallelization
    2. Understand the Problem and the Program
    3. Partitioning
    4. Communications
    5. Synchronization
    6. Data Dependencies
    7. Load Balancing
    8. Granularity
    9. I/O
    10. Limits and Costs of Parallel Programming
    11. Performance Analysis and Tuning
  7. Parallel Examples
    1. Array Processing
    2. PI Calculation
    3. Simple Heat Equation
    4. 1-D Wave Equation
  8. References and More Information


Abstract


This presentation covers the basics of parallel computing. Beginning with a brief overview and some concepts and terminology associated with parallel computing, the topics of parallel memory architectures and programming models are then explored. These topics are followed by a discussion on a number of issues related to designing parallel programs. The last portion of the presentation is spent examining how to parallelize several different types of serial programs.

Level/Prerequisites: None



Overview

What is Parallel Computing?



Overview

Why Use Parallel Computing?



Concepts and Terminology

von Neumann Architecture



Concepts and Terminology

Flynn's Classical Taxonomy

Single Instruction, Single Data (SISD):
  • A serial (non-parallel) computer
  • Single instruction: only one instruction stream is being acted on by the CPU during any one clock cycle
  • Single data: only one data stream is being used as input during any one clock cycle
  • Deterministic execution
  • This is the oldest and, until recently, the most prevalent form of computer
  • Examples: most PCs, single CPU workstations and mainframes
Single Instruction, Multiple Data (SIMD):
  • A type of parallel computer
  • Single instruction: All processing units execute the same instruction at any given clock cycle
  • Multiple data: Each processing unit can operate on a different data element
  • This type of machine typically has an instruction dispatcher, a very high-bandwidth internal network, and a very large array of very small-capacity processing units.
  • Best suited for specialized problems characterized by a high degree of regularity, such as image processing (a simple sketch appears after this list).
  • Synchronous (lockstep) and deterministic execution
  • Two varieties: Processor Arrays and Vector Pipelines
  • Examples:
    • Processor Arrays: Connection Machine CM-2, Maspar MP-1, MP-2
    • Vector Pipelines: IBM 9000, Cray C90, Fujitsu VP, NEC SX-2, Hitachi S820


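To make the lockstep idea concrete, here is a small illustrative C loop (not part of the original tutorial): the same add instruction is applied to every element of the arrays, so a SIMD processor array or vector pipeline can carry out many of these element-wise additions simultaneously on different data. The array names and size are arbitrary.

    #include <stdio.h>

    #define N 8

    int main(void)
    {
        float a[N], b[N], c[N];

        /* set up two input arrays */
        for (int i = 0; i < N; i++) {
            a[i] = (float)i;
            b[i] = 2.0f * (float)i;
        }

        /* one instruction (add), many data elements: each iteration performs
           the same operation on a different element, which is exactly the
           pattern a SIMD machine executes in lockstep */
        for (int i = 0; i < N; i++)
            c[i] = a[i] + b[i];

        for (int i = 0; i < N; i++)
            printf("c[%d] = %f\n", i, c[i]);

        return 0;
    }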

Multiple Instruction, Single Data (MISD):
  • A single data stream is fed into multiple processing units.
  • Each processing unit operates on the data independently via independent instruction streams.
  • Few actual examples of this class of parallel computer have ever existed. One is the experimental Carnegie-Mellon C.mmp computer (1971).
  • Some conceivable uses might be:
    • multiple frequency filters operating on a single signal stream
    • multiple cryptography algorithms attempting to crack a single coded message.



Multiple Instruction, Multiple Data (MIMD):
  • Currently, the most common type of parallel computer. Most modern computers fall into this category.
  • Multiple Instruction: every processor may be executing a different instruction stream
  • Multiple Data: every processor may be working with a different data stream
  • Execution can be synchronous or asynchronous, deterministic or non-deterministic
  • Examples: most current supercomputers, networked parallel computer "grids" and multi-processor SMP computers - including some types of PCs.





Concepts and Terminology

Some General Parallel Terminology

Like everything else, parallel computing has its own "jargon". Some of the more commonly used terms associated with parallel computing are listed below. Most of these will be discussed in more detail later.

Task
A logically discrete section of computational work. A task is typically a program or program-like set of instructions that is executed by a processor.

Parallel Task
A task that can be executed by multiple processors safely (i.e., it yields correct results).

Serial Execution
Execution of a program sequentially, one statement at a time. In the simplest sense, this is what happens on a one-processor machine. However, virtually all parallel programs have sections that must be executed serially.

Parallel Execution
Execution of a program by more than one task, with each task being able to execute the same or different statement at the same moment in time.

Shared Memory
From a strictly hardware point of view, describes a computer architecture where all processors have direct (usually bus based) access to common physical memory. In a programming sense, it describes a model where parallel tasks all have the same "picture" of memory and can directly address and access the same logical memory locations regardless of where the physical memory actually exists.
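As a minimal sketch of the programming-model sense of shared memory (illustrative only, not taken from the tutorial), the OpenMP loop below lets every thread write directly into the same logical array; no explicit data movement is needed. The array name and size are arbitrary.

    #include <omp.h>
    #include <stdio.h>

    #define N 100

    int main(void)
    {
        int data[N];

        /* every thread sees the same "picture" of memory: each one stores
           results directly into its portion of the shared array */
        #pragma omp parallel for shared(data)
        for (int i = 0; i < N; i++)
            data[i] = i * i;

        printf("data[%d] = %d\n", N - 1, data[N - 1]);
        return 0;
    }

Built with an OpenMP-capable compiler (for example, gcc -fopenmp), the loop iterations are divided among threads that all address the same memory locations.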

Distributed Memory
In hardware, refers to network based memory access for physical memory that is not common. As a programming model, tasks can only logically "see" local machine memory and must use communications to access memory on other machines where other tasks are executing.
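A minimal MPI sketch in C (illustrative only, not taken from the tutorial) shows the programming-model consequence: each task has its own private copy of the variable, so the only way for task 1 to see task 0's data is an explicit message. The variable name and message tag are arbitrary.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, value = 0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* task 0's memory is invisible to task 1; the data must be
           communicated explicitly */
        if (rank == 0) {
            value = 42;
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("Task 1 received %d from task 0\n", value);
        }

        MPI_Finalize();
        return 0;
    }

Run with at least two tasks (for example, mpirun -np 2 ./a.out).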

Communications
Parallel tasks typically need to exchange data. There are several ways this can be accomplished, such as through a shared memory bus or over a network; however, the actual event of data exchange is commonly referred to as communications, regardless of the method employed.

Synchronization
The coordination of parallel tasks in real time, very often associated with communications. Often implemented by establishing a synchronization point within an application where a task may not proceed further until another task(s) reaches the same or logically equivalent point.

Synchronization usually involves waiting by at least one task, and can therefore cause a parallel application's wall clock execution time to increase.
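As an illustrative sketch (not from the tutorial) of such a synchronization point, the OpenMP barrier below prevents any thread from starting its second phase until every thread has finished the first; threads that arrive early simply wait, which is the source of the added wall-clock time noted above.

    #include <omp.h>
    #include <stdio.h>

    int main(void)
    {
        #pragma omp parallel
        {
            int id = omp_get_thread_num();

            printf("thread %d: phase 1 complete\n", id);

            /* synchronization point: no thread proceeds past the barrier
               until all threads have reached it */
            #pragma omp barrier

            printf("thread %d: starting phase 2\n", id);
        }
        return 0;
    }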

Granularity
In parallel computing, granularity is a qualitative measure of the ratio of computation to communication.

Observed Speedup
Observed speedup of a code which has been parallelized, defined as:

     wall-clock time of serial execution
    ---------------------------------------
    wall-clock time of parallel execution

One of the simplest and most widely used indicators for a parallel program's performance. For example, a code that runs in 200 seconds serially and in 50 seconds in parallel shows an observed speedup of 200 / 50 = 4.

Parallel Overhead
The amount of time required to coordinate parallel tasks, as opposed to doing useful work. Parallel overhead can include factors such as:

  • Task start-up time
  • Synchronizations
  • Data communications
  • Software overhead imposed by parallel compilers, libraries, tools, operating system, etc.
  • Task termination time

Massively Parallel
Refers to the hardware that comprises a given parallel system - having many processors. The meaning of "many" keeps increasing; currently, the largest systems, such as IBM's BlueGene/L (BG/L), push this number into six digits.

Scalability
Refers to a parallel system's (hardware and/or software) ability to demonstrate a proportionate increase in parallel speedup with the addition of more processors. Factors that contribute to scalability include:

  • Hardware - particularly memory-CPU bandwidths and network communications
  • Application algorithm
  • Parallel overhead
  • Characteristics of your specific application and coding



Parallel Computer Memory Architectures

Shared Memory

General Characteristics:

Uniform Memory Access (UMA):

Non-Uniform Memory Access (NUMA):

Advantages:

Disadvantages:



Parallel Computer Memory Architectures

Distributed Memory

General Characteristics:

Advantages:

Disadvantages:



Parallel Computer Memory Architectures

Hybrid Distributed-Shared Memory



Parallel Programming Models

Overview



Parallel Programming Models

Shared Memory Model

Implementations:



Parallel Programming Models

Threads Model

Implementations:



Parallel Programming Models

Message Passing Model

Implementations:



Parallel Programming Models

Data Parallel Model


Implementations:



Parallel Programming Models

Other Models

Hybrid:

Single Program Multiple Data (SPMD):

Multiple Program Multiple Data (MPMD):



Designing Parallel Programs

Automatic vs. Manual Parallelization



Designing Parallel Programs

Understand the Problem and the Program



Designing Parallel Programs

Partitioning

Domain Decomposition:

Functional Decomposition:



Designing Parallel Programs

Communications

Who Needs Communications?

Factors to Consider:



Designing Parallel Programs

Synchronization

Types of Synchronization:



Designing Parallel Programs

Data Dependencies

Definition:

Examples:

How to Handle Data Dependencies:



Designing Parallel Programs

Load Balancing

How to Achieve Load Balance:



Designing Parallel Programs

Granularity

Computation / Communication Ratio:

Fine-grain Parallelism:

  • Relatively small amounts of computational work are done between communication events

  • Low computation to communication ratio

  • Facilitates load balancing

  • Implies high communication overhead and less opportunity for performance enhancement

  • If granularity is too fine, it is possible that the overhead required for communications and synchronization between tasks takes longer than the computation.

Coarse-grain Parallelism:

  • Relatively large amounts of computational work are done between communication/synchronization events

  • High computation to communication ratio

  • Implies more opportunity for performance increase

  • Harder to load balance efficiently

Which is Best?

  • The most efficient granularity is dependent on the algorithm and the hardware environment in which it runs.

  • In most cases the overhead associated with communications and synchronization is high relative to execution speed, so it is advantageous to have coarse granularity (the sketch after this list illustrates the tradeoff).

  • Fine-grain parallelism can help reduce overheads due to load imbalance.
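The following MPI sketch (illustrative only; the constants and variable names are arbitrary) sums a distributed array in a coarse-grained way: each task performs many additions locally and then communicates a single partial sum. A fine-grained alternative would send every element to task 0 as it was produced, generating one message per element and a much lower computation-to-communication ratio.

    #include <mpi.h>
    #include <stdio.h>

    #define N_LOCAL 1000   /* elements owned by each task (arbitrary size) */

    int main(int argc, char *argv[])
    {
        int rank;
        double local[N_LOCAL], partial = 0.0, total = 0.0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        for (int i = 0; i < N_LOCAL; i++)
            local[i] = rank + 1.0;               /* arbitrary data */

        /* coarse grain: N_LOCAL additions of local work, followed by a
           single communication event (the reduction below) */
        for (int i = 0; i < N_LOCAL; i++)
            partial += local[i];

        MPI_Reduce(&partial, &total, 1, MPI_DOUBLE, MPI_SUM, 0,
                   MPI_COMM_WORLD);

        if (rank == 0)
            printf("global sum = %f\n", total);

        MPI_Finalize();
        return 0;
    }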


Designing Parallel Programs

I/O

The Bad News:

The Good News:



Designing Parallel Programs

Limits and Costs of Parallel Programming

Amdahl's Law:

Complexity:

Portability:

Resource Requirements:

Scalability: