Computer Architecture Formulas
1. $\text{CPU time} = \text{Instruction count} \times \text{Clock cycles per instruction} \times \text{Clock cycle time}$
2. X is $n$ times faster than Y: $n = \text{Execution time}_Y / \text{Execution time}_X = \text{Performance}_X / \text{Performance}_Y$
3. Amdahl's Law: $\text{Speedup}_{\text{overall}} = \dfrac{\text{Execution time}_{\text{old}}}{\text{Execution time}_{\text{new}}} = \dfrac{1}{(1 - \text{Fraction}_{\text{enhanced}}) + \text{Fraction}_{\text{enhanced}} / \text{Speedup}_{\text{enhanced}}}$
4. $\text{Energy}_{\text{dynamic}} \approx 1/2 \times \text{Capacitive load} \times \text{Voltage}^2$
5. $\text{Power}_{\text{dynamic}} \approx 1/2 \times \text{Capacitive load} \times \text{Voltage}^2 \times \text{Frequency switched}$
7. $\text{Availability} = \text{Mean time to fail} / (\text{Mean time to fail} + \text{Mean time to repair})$
8. $\text{Die yield} = \text{Wafer yield} \times 1 / (1 + \text{Defects per unit area} \times \text{Die area})^N$
where Wafer yield accounts for wafers that are so bad they need not be tested and $N$ is a parameter called the process-complexity factor, a measure of manufacturing difficulty. $N$ ranges from 11.5 to 15.5 in 2011.
9. Means—arithmetic (AM), weighted arithmetic (WAM), and geometric (GM):
$\text{AM} = \dfrac{1}{n}\sum_{i=1}^{n}\text{Time}_i \qquad \text{WAM} = \sum_{i=1}^{n}\text{Weight}_i \times \text{Time}_i \qquad \text{GM} = \sqrt[n]{\prod_{i=1}^{n}\text{Time}_i}$
where $\text{Time}_i$ is the execution time for the $i$th program of a total of $n$ in the workload and $\text{Weight}_i$ is the weighting of the $i$th program in the workload.
10. $\text{Average memory-access time} = \text{Hit time} + \text{Miss rate} \times \text{Miss penalty}$
11. $\text{Misses per instruction} = \text{Miss rate} \times \text{Memory accesses per instruction}$
12. Cache index size: $2^{\text{index}} = \text{Cache size} / (\text{Block size} \times \text{Set associativity})$
13. $\text{Power Utilization Effectiveness (PUE) of a Warehouse Scale Computer} = \dfrac{\text{Total Facility Power}}{\text{IT Equipment Power}}$
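The formulas above are simple enough to check in a few lines of code. The following is a minimal Python sketch, not from the book, that evaluates four of them (formulas 3, 5, 8, and 10) with illustrative, made-up parameter values; the function names are chosen only for readability.

```python
# Minimal sketch (not from the book): a few of the formulas above in Python,
# evaluated with illustrative, made-up numbers.

def amdahl_speedup(fraction_enhanced, speedup_enhanced):
    """Formula 3: overall speedup when a fraction of execution time is enhanced."""
    return 1.0 / ((1.0 - fraction_enhanced) + fraction_enhanced / speedup_enhanced)

def dynamic_power(capacitive_load, voltage, frequency_switched):
    """Formula 5: dynamic power ~ 1/2 * C * V^2 * f."""
    return 0.5 * capacitive_load * voltage ** 2 * frequency_switched

def average_memory_access_time(hit_time, miss_rate, miss_penalty):
    """Formula 10: AMAT = hit time + miss rate * miss penalty."""
    return hit_time + miss_rate * miss_penalty

def die_yield(wafer_yield, defects_per_area, die_area, n):
    """Formula 8: die yield = wafer yield / (1 + defects per area * die area)^N."""
    return wafer_yield * 1.0 / (1.0 + defects_per_area * die_area) ** n

if __name__ == "__main__":
    # Enhance 40% of execution time by 10x.
    print(f"Amdahl speedup: {amdahl_speedup(0.4, 10):.2f}x")
    # 1 nF of switched capacitance at 1.0 V and 2 GHz.
    print(f"Dynamic power: {dynamic_power(1e-9, 1.0, 2e9):.2f} W")
    # 1-cycle hit time, 2% miss rate, 100-cycle miss penalty.
    print(f"AMAT: {average_memory_access_time(1, 0.02, 100):.1f} cycles")
    # 94% wafer yield, 0.03 defects/cm^2, 2 cm^2 die, N = 13.5.
    print(f"Die yield: {die_yield(0.94, 0.03, 2.0, 13.5):.2f}")
```

Running the sketch shows, for example, that enhancing 40% of execution time by a factor of 10 yields only about a 1.56× overall speedup — the classic Amdahl's Law lesson.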
Rules of Thumb
1. Amdahl/Case Rule: A balanced computer system needs about 1 MB of main memory capacity and 1
megabit per second of I/O bandwidth per MIPS of CPU performance.
2. 90/10 Locality Rule: A program executes about 90% of its instructions in 10% of its code.
3. Bandwidth Rule: Bandwidth grows by at least the square of the improvement in latency.
4. 2:1 Cache Rule: The miss rate of a direct-mapped cache of size N is about the same as a two-way set-
associative cache of size N/2.
5. Dependability Rule: Design with no single point of failure.
6. Watt-Year Rule: The fully burdened cost of a Watt per year in a Warehouse Scale Computer in North
America in 2011, including the cost of amortizing the power and cooling infrastructure, is about $2.
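To make the rules concrete, here is another small illustrative sketch (again not from the book) that applies the Amdahl/Case rule and the Watt-Year rule to hypothetical machine parameters; all numbers are invented for the example.

```python
# Illustrative sketch (not from the book): applying two rules of thumb
# to hypothetical machine parameters.

def amdahl_case_balance(mips):
    """Rule 1: ~1 MB of memory and ~1 Mbit/s of I/O bandwidth per MIPS."""
    return {"memory_mb": mips, "io_mbit_per_s": mips}

def burdened_power_cost(average_watts, years, dollars_per_watt_year=2.0):
    """Rule 6: ~$2 per Watt-year in a 2011 North American WSC, fully burdened."""
    return average_watts * years * dollars_per_watt_year

if __name__ == "__main__":
    # A hypothetical 10,000-MIPS machine would want ~10 GB of memory and ~10 Gbit/s of I/O.
    print(amdahl_case_balance(10_000))
    # A server averaging 300 W over 3 years costs roughly $1,800 in power and cooling.
    print(f"${burdened_power_cost(300, 3):,.0f}")
```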
In Praise of Computer Architecture: A Quantitative Approach
Sixth Edition
“Although important concepts of architecture are timeless, this edition has been
thoroughly updated with the latest technology developments, costs, examples,
and references. Keeping pace with recent developments in open-sourced architec-
ture, the instruction set architecture used in the book has been updated to use the
RISC-V ISA.”
—from the foreword by Norman P. Jouppi, Google
“Hennessy and Patterson wrote the first edition of this book when graduate stu-
dents built computers with 50,000 transistors. Today, warehouse-size computers
contain that many servers, each consisting of dozens of independent processors
and billions of transistors. The evolution of computer architecture has been rapid
and relentless, but Computer Architecture: A Quantitative Approach has kept pace,
with each edition accurately explaining and analyzing the important emerging
ideas that make this field so exciting.”
—James Larus, Microsoft Research
“Another timely and relevant update to a classic, once again also serving as a win-
dow into the relentless and exciting evolution of computer architecture! The new
discussions in this edition on the slowing of Moore's law and implications for
future systems are must-reads for both computer architects and practitioners
working on broader systems.”
—Parthasarathy (Partha) Ranganathan, Google
“I love the ‘Quantitative Approach’ books because they are written by engineers,
for engineers. John Hennessy and Dave Patterson show the limits imposed by
mathematics and the possibilities enabled by materials science. Then they teach
through real-world examples how architects analyze, measure, and compromise
to build working systems. This sixth edition comes at a critical time: Moore’s
Law is fading just as deep learning demands unprecedented compute cycles.
The new chapter on domain-specific architectures documents a number of prom-
ising approaches and prophesies a rebirth in computer architecture. Like the
scholars of the European Renaissance, computer architects must understand our
own history, and then combine the lessons of that history with new techniques
to remake the world.”
—Cliff Young, Google
John L. Hennessy is a Professor of Electrical Engineering and Computer Science at Stanford
University, where he has been a member of the faculty since 1977 and was, from 2000 to
2016, its 10th President. He currently serves as the Director of the Knight-Hennessy Fellow-
ship, which provides graduate fellowships to potential future leaders. Hennessy is a Fellow of
the IEEE and ACM, a member of the National Academy of Engineering, the National Acad-
emy of Sciences, and the American Philosophical Society, and a Fellow of the American Acad-
emy of Arts and Sciences. Among his many awards are the 2001 Eckert-Mauchly Award for
his contributions to RISC technology, the 2001 Seymour Cray Computer Engineering Award,
and the 2000 John von Neumann Award, which he shared with David Patterson. He has also
received 10 honorary doctorates.
In 1981, he started the MIPS project at Stanford with a handful of graduate students. After
completing the project in 1984, he took a leave from the university to cofound MIPS Com-
puter Systems, which developed one of the first commercial RISC microprocessors. As of
2017, over 5 billion MIPS microprocessors have been shipped in devices ranging from video
games and palmtop computers to laser printers and network switches. Hennessy subse-
quently led the DASH (Directory Architecture for Shared Memory) project, which prototyped
the first scalable cache coherent multiprocessor; many of the key ideas have been adopted
in modern multiprocessors. In addition to his technical activities and university responsibil-
ities, he has continued to work with numerous start-ups, both as an early-stage advisor and
an investor.
David A. Patterson became a Distinguished Engineer at Google in 2016 after 40 years as a
UC Berkeley professor. He joined UC Berkeley immediately after graduating from UCLA. He
still spends a day a week in Berkeley as an Emeritus Professor of Computer Science. His
teaching has been honored by the Distinguished Teaching Award from the University of
California, the Karlstrom Award from ACM, and the Mulligan Education Medal and Under-
graduate Teaching Award from IEEE. Patterson received the IEEE Technical Achievement
Award and the ACM Eckert-Mauchly Award for contributions to RISC, and he shared the IEEE
Johnson Information Storage Award for contributions to RAID. He also shared the IEEE John
von Neumann Medal and the C & C Prize with John Hennessy. Like his co-author, Patterson is
a Fellow of the American Academy of Arts and Sciences, the Computer History Museum,
ACM, and IEEE, and he was elected to the National Academy of Engineering, the National
Academy of Sciences, and the Silicon Valley Engineering Hall of Fame. He served on the
Information Technology Advisory Committee to the President of the United States, as chair
of the CS division in the Berkeley EECS department, as chair of the Computing Research
Association, and as President of ACM. This record led to Distinguished Service Awards from
ACM, CRA, and SIGARCH. He is currently Vice-Chair of the Board of Directors of the RISC-V
Foundation.
At Berkeley, Patterson led the design and implementation of RISC I, likely the first VLSI
reduced instruction set computer, and the foundation of the commercial SPARC architec-
ture. He was a leader of the Redundant Arrays of Inexpensive Disks (RAID) project, which led
to dependable storage systems from many companies. He was also involved in the Network
of Workstations (NOW) project, which led to cluster technology used by Internet companies
and later to cloud computing. His current interests are in designing domain-specific archi-
tectures for machine learning, spreading the word on the open RISC-V instruction set archi-
tecture, and in helping the UC Berkeley RISELab (Real-time Intelligent Secure Execution).
Computer Architecture
A Quantitative Approach
Sixth Edition
John L. Hennessy
Stanford University
David A. Patterson
University of California, Berkeley
Much of the improvement in computer performance over the last 40 years has been
provided by computer architecture advancements that have leveraged Moore’s
Law and Dennard scaling to build larger and more parallel systems. Moore’s
Law is the observation that the maximum number of transistors in an integrated
circuit doubles approximately every two years. Dennard scaling refers to the reduc-
tion of MOS supply voltage in concert with the scaling of feature sizes, so that as
transistors get smaller, their power density stays roughly constant. With the end of
Dennard scaling a decade ago, and the recent slowdown of Moore’s Law due to a
combination of physical limitations and economic factors, the sixth edition of the
preeminent textbook for our field couldn’t be more timely. Here are some reasons.
First, because domain-specific architectures can provide equivalent perfor-
mance and power benefits of three or more historical generations of Moore’s
Law and Dennard scaling, they now can provide better implementations than
may ever be possible with future scaling of general-purpose architectures. And
with the diverse application space of computers today, there are many potential
areas for architectural innovation with domain-specific architectures. Second,
high-quality implementations of open-source architectures now have a much lon-
ger lifetime due to the slowdown in Moore’s Law. This gives them more oppor-
tunities for continued optimization and refinement, and hence makes them more
attractive. Third, with the slowing of Moore’s Law, different technology compo-
nents have been scaling heterogeneously. Furthermore, new technologies such as
2.5D stacking, new nonvolatile memories, and optical interconnects have been
developed to provide more than Moore’s Law can supply alone. To use these
new technologies and nonhomogeneous scaling effectively, fundamental design
decisions need to be reexamined from first principles. Hence it is important for
students, professors, and practitioners in the industry to be skilled in a wide range
of both old and new architectural techniques. All told, I believe this is the most
exciting time in computer architecture since the industrial exploitation of
instruction-level parallelism in microprocessors 25 years ago.
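As a rough back-of-the-envelope reading of the numbers above (an illustration, not part of the original foreword): if the maximum transistor count doubles about every two years, then starting from a budget $N_0$ it grows after $t$ years to roughly

$$N(t) \approx N_0 \cdot 2^{\,t/2} \quad\Rightarrow\quad \frac{N(6)}{N_0} \approx 2^{3} = 8,$$

so the "three or more historical generations" of benefit attributed to domain-specific architectures correspond to a factor of about 8× or more over roughly six years of general-purpose scaling.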
The largest change in this edition is the addition of a new chapter on domain-
specific architectures. It’s long been known that customized domain-specific archi-
tectures can have higher performance, lower power, and require less silicon area
than general-purpose processor implementations. However, when general-purpose
*****
It was fortunate after all that she had taken no one into her confidence, that she had spoken to no one of her intention. No one could even guess what she was planning for the sake of her son's happiness and life. She alone knew it, and the others would never come to know.
He was so dear, so dear! He was so good and gentle!
»No, never.»
Then he left. He wrote once. He did not know then when he would come. She wrote back and told him how matters stood with her. No reply ever came.
Then she committed her desperate act. What else could she have done? Her own disgrace, and the whole family's, lay in wait. So she resolved to suffer and to take Portaankorva's Aapeli as her husband…
A constant fear had tormented her besides: that Aapeli would one day hear — if he did not in the end work it out himself — that the boy was not his at all, a constant fear that he would then crush the boy… And though in the depths of her heart she felt that one day he would find out, one day she would break the chains of her suffering and then everything would be settled… But first the boy had to be brought to safety, to certain safety…
And now the way had become clear to her. If only she could keep up this life of lies for a few more weeks! If only everything would still go well and Aapeli's suspicion would not grow! Soon, soon the right moment would come!
— Yes! Yes!
From this she drew reassurance, new strength for what she had resolved to do.
She had waited for this time all summer — for this week, when she knew that the rest of the household, the master included, would be away on a week-long haymaking trip. Now it had come, and the mistress had turned it to her advantage.
Oskari had learned the secret of his whole life; the mother had wept and the son had wept, but the mother's decision was to be carried through.
*****
So she comes to the house as if without knowing it, climbs onto the porch, locks the doors, and goes inside. In the main room she gets a candle lit, and now she can think again and feels herself coming to her senses.
»Where is the boy?» came the roar again, so loud that the whole room shook.