Computer Architecture

This document contains a presentation by Baavana Bandarupalli, Balaharihanth B, Balaji G, Bhupesh Sidharth A, and Chandarshini S on the topic of parallel processing and its challenges. It defines parallel processing as executing two or more instructions simultaneously to improve computer system performance. The goals of parallel processing are to reduce the wall-clock time needed to solve a problem and to solve larger problems that may not fit in the memory of a single CPU. The two most commonly used types discussed are SIMD (single instruction, multiple data) and MIMD (multiple instruction, multiple data). The challenges are presented as the limits of instruction-level parallelism (ILP), studied under the idealized assumptions of register renaming, branch prediction, jump prediction, memory address alias analysis, and perfect caches.


SEM-3

PRESENTATION
COMPUTER ARCHITECTURE

MEMBERS
• BAAVANA BANDARUPALLI
• BALAHARIHANTH B
• BALAJI G
• BHUPESH SIDHARTH A
• CHANDARSHINI S
TOPIC

PARALLEL PROCESSING
&
CHALLENGES
WHAT IS PARALLEL PROCESSING?

PARALLEL PROCESSING IS A METHOD TO IMPROVE
COMPUTER SYSTEM PERFORMANCE BY EXECUTING
TWO OR MORE INSTRUCTIONS SIMULTANEOUSLY.

PARALLEL PROCESSING WAS INTRODUCED
BECAUSE THE SEQUENTIAL PROCESS OF
EXECUTING INSTRUCTIONS TOOK A LOT OF TIME.
THE GOALS OF PARALLEL PROCESSING

ONE GOAL IS TO REDUCE THE “WALL CLOCK”
TIME, OR THE AMOUNT OF REAL TIME THAT
YOU NEED TO WAIT FOR A PROBLEM TO BE SOLVED.

ANOTHER GOAL IS TO SOLVE BIGGER
PROBLEMS THAT MIGHT NOT FIT IN THE
LIMITED MEMORY OF A SINGLE CPU.
TYPES OF PARALLEL PROCESSING

There are multiple types of parallel
processing; two of the most commonly used
types are SIMD and MIMD.

SIMD - Single Instruction, Multiple Data
MIMD - Multiple Instruction, Multiple Data
Single Instruction, Multiple Data (SIMD)

It is a form of parallel processing in
which a computer has two or more
processors that follow the same instruction
set while each processor handles
different data.

SIMD is typically used to analyze large
data sets that are based on the same
specified benchmarks.
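
As a rough illustration of the SIMD idea (one instruction applied to many data elements at once), here is a small Python sketch using NumPy. The arrays, their sizes, and the use of NumPy are illustrative assumptions only; real SIMD is carried out by vector instructions in hardware (e.g. SSE/AVX/NEON), which NumPy merely hints at.

# Sketch of the SIMD idea: one operation applied to many data elements.
import numpy as np

# Two data sets processed under the same instruction ("add").
a = np.arange(100_000, dtype=np.float64)
b = np.arange(100_000, dtype=np.float64)

# Scalar (sequential) version: one element pair per iteration.
scalar_sum = [x + y for x, y in zip(a, b)]

# Vectorized version: the same "add" is applied across all elements,
# which the underlying loops can map onto SIMD hardware units.
vector_sum = a + b

assert np.allclose(scalar_sum, vector_sum)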
Multiple Instruction, Multiple Data (MIMD)

It is another common form of parallel
processing in which a computer has two
or more of its own processors, each
executing its own instructions and getting
data from separate data streams.
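
As a rough illustration of the MIMD idea, here is a small Python sketch in which two processes each run a different function on different data. The worker functions, the data, and the use of the multiprocessing module are illustrative assumptions, not taken from the presentation.

# Sketch of the MIMD idea: independent processors each run their own
# instruction stream on their own data stream.
from multiprocessing import Process, Queue

def sum_squares(data, out):
    # First "processor": its own instructions, its own data.
    out.put(("sum_squares", sum(x * x for x in data)))

def count_evens(data, out):
    # Second "processor": different instructions, different data.
    out.put(("count_evens", sum(1 for x in data if x % 2 == 0)))

if __name__ == "__main__":
    results = Queue()
    p1 = Process(target=sum_squares, args=(range(1, 10_000), results))
    p2 = Process(target=count_evens, args=(range(20_000, 40_000), results))
    p1.start()
    p2.start()          # both instruction streams run at the same time
    p1.join()
    p2.join()
    for _ in range(2):
        print(results.get())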
PARALLEL PROCESSING CHALLENGES
Limitations of ILP

To study the limits of instruction-level parallelism (ILP),
an ideal processor is assumed, with:
1. Register renaming
2. Branch prediction
3. Jump prediction
4. Memory address alias analysis
5. Perfect caches
1. Register renaming
There are an infinite number of virtual
registers available, and hence all WAW (write-after-write)
and WAR (write-after-read) hazards are avoided and an
unbounded number of instructions can begin execution
simultaneously.
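
As a toy illustration of how renaming removes WAW and WAR hazards, here is a Python sketch in which every write is assigned a fresh physical register. The instruction format and register names are invented for illustration only.

# Toy sketch of register renaming: each new write to an architectural
# register gets a brand-new physical register, so WAW and WAR hazards
# between instructions disappear.
from itertools import count

phys_ids = count()          # unlimited pool of physical registers
rename_table = {}           # architectural register -> current physical register

def rename(dest, srcs):
    """Return the renamed form of 'dest = op(srcs)'."""
    # Sources read the current mapping, so later writes cannot disturb them (no WAR).
    renamed_srcs = [rename_table.get(r, r) for r in srcs]
    # The destination always gets a fresh physical register (no WAW).
    new_phys = f"p{next(phys_ids)}"
    rename_table[dest] = new_phys
    return new_phys, renamed_srcs

# Two instructions that both write r1 (a WAW hazard in program order):
print(rename("r1", ["r2", "r3"]))   # ('p0', ['r2', 'r3'])
print(rename("r1", ["r4", "r5"]))   # ('p1', ['r4', 'r5'])  -- now independent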

2. Branch prediction
Branch prediction is perfect. All
conditional branches are predicted exactly.
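
The slide assumes prediction is perfect. For contrast, here is a hedged sketch of one common real predictor, a 2-bit saturating counter per branch; this particular scheme is not part of the presentation and is shown only to make the idea of branch prediction concrete.

# Minimal sketch of a 2-bit saturating-counter branch predictor.
counters = {}   # branch address -> counter in 0..3 (0,1 predict not-taken; 2,3 predict taken)

def predict(pc):
    return counters.get(pc, 1) >= 2          # True means "predict taken"

def update(pc, taken):
    c = counters.get(pc, 1)
    counters[pc] = min(c + 1, 3) if taken else max(c - 1, 0)

# A loop branch that is taken 9 times and then falls through:
outcomes = [True] * 9 + [False]
correct = 0
for taken in outcomes:
    correct += predict(0x40) == taken
    update(0x40, taken)
print(f"{correct}/{len(outcomes)} predicted correctly")   # a real predictor is not perfect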

3. Jump prediction
All jumps (including jump register used for
return and computed jumps) are perfectly predicted.
When combined with perfect branch prediction, this is
equivalent to having a processor with perfect
speculation and an unbounded buffer of instructions
available for execution.
4. Memory address alias analysis
All memory addresses are known exactly, and
a load can be moved before a store provided that the
addresses are not identical. Note that this implements
perfect address alias analysis.
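
As a toy illustration of the rule stated above, the sketch below checks whether a load may be moved ahead of an earlier store. The addresses and instruction tuples are invented for illustration only.

# Toy sketch of perfect alias analysis: a load may be hoisted above an
# earlier store exactly when the two addresses are known to differ.
def can_hoist_load_above_store(load_addr, store_addr):
    # Perfect analysis: both addresses are known exactly.
    return load_addr != store_addr

store  = ("store", 0x1000)    # store to address 0x1000
load_a = ("load",  0x1008)    # different address -> may execute before the store
load_b = ("load",  0x1000)    # same address      -> must wait for the store

print(can_hoist_load_above_store(load_a[1], store[1]))   # True
print(can_hoist_load_above_store(load_b[1], store[1]))   # False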

5. Perfect caches
All memory accesses take 1 clock cycle. In
practice, superscalar processors will typically consume
large amounts of ILP hiding cache misses, making these
results highly optimistic.
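
To make the contrast concrete, here is a small Python sketch comparing the perfect-cache assumption (every access costs 1 cycle) with a tiny direct-mapped cache that charges a miss penalty. The cache size, block size, latencies, and access pattern are invented numbers for illustration only.

# Sketch contrasting the "perfect cache" model with a small real cache.
HIT_CYCLES, MISS_CYCLES = 1, 100
NUM_SETS, BLOCK_BYTES = 64, 64

def simulate(addresses):
    tags = [None] * NUM_SETS
    perfect = real = 0
    for addr in addresses:
        block = addr // BLOCK_BYTES
        index, tag = block % NUM_SETS, block // NUM_SETS
        perfect += HIT_CYCLES                 # ideal model: every access is 1 cycle
        if tags[index] == tag:
            real += HIT_CYCLES                # hit in the small direct-mapped cache
        else:
            real += MISS_CYCLES               # miss: pay the memory latency
            tags[index] = tag
    return perfect, real

# Strided accesses that miss often make the two models diverge sharply:
print(simulate(range(0, 1_000_000, 4096)))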
THANK YOU!!!
