The dramatic increase in computer performance has been extraordinary, but not for all computations: it has key limits and structure. Software architects, developers, and even data scientists need to understand how to exploit the fundamental structure of computer performance to harness it for future applications.
Using a principled approach, Computer Architecture for Scientists covers the four key pillars of computer performance and provides a high-level basis for reasoning about and understanding these concepts. These principles and models offer approachable insights and quantitative modeling without distracting low-level detail. The pillars are:
Small is fast: how size scaling drives performance (miniaturization)
Hidden parallelism: how a sequential program can be executed faster with parallelism (instruction-level parallelism)
Dynamic locality: skirting physical limits, by arranging data in a smaller space (caches and reuse/locality)
Parallelism: increasing performance with teams of workers (multicore and cloud)
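The "dynamic locality" pillar can be illustrated with a minimal sketch (not from the book): traversing a matrix row by row touches memory contiguously and reuses cached data, while traversing column by column strides across rows. Python lists of lists only approximate the contiguous arrays the book discusses, so the effect here is illustrative rather than a faithful cache experiment.

```python
# Illustrative sketch of access-order (locality) differences; the matrix
# size N and the helper names are arbitrary choices, not from the book.
N = 500
matrix = [[1] * N for _ in range(N)]

def sum_row_major(m):
    # Visits consecutive elements of each inner list: cache-friendly order.
    return sum(x for row in m for x in row)

def sum_col_major(m):
    # Strides across rows: each access lands in a different inner list,
    # the cache-unfriendly order on real contiguous arrays.
    rows, cols = len(m), len(m[0])
    return sum(m[i][j] for j in range(cols) for i in range(rows))

# Both orders compute the same result; only the memory-access pattern differs.
print(sum_row_major(matrix) == sum_col_major(matrix))
```

On genuinely contiguous data (e.g. NumPy arrays or C arrays), the row-major order typically runs substantially faster, which is exactly the reuse/locality effect caches exploit.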
Finally, the text covers the GPU and machine-learning accelerators that have become important for an increasing range of mainstream applications. The book gives the many computer scientists practicing data science, software development, or machine learning a longer-term understanding of computer capabilities, performance, and limits.