{{GPU621/DPS921 Index | 20207}}
= History of Parallel Computing and the Advantage of Multi-core Systems =
We will be looking into the history and evolution of parallel computing by focusing on three distinct timelines:
== Preface ==
=== Parallel Programming vs. Concurrent Programming ===
[[File:P v c1.jpeg|thumb|left|500px|Source: https://miro.medium.com/max/1170/1*cFUbDHxooUtT9KiBy-0SXQ.jpeg]]
[[File:Parallel v concurrent2.png|thumb|right|500px|Source: https://i.stack.imgur.com/V5sMZ.png]]<br clear=all/>
== Demise of Single-Core and Rise of Multi-Core Systems ==
=== Transition from Single to Multi-Core ===
The transition from single to multi-core systems came from the need to address the limitations of manufacturing technologies for single-core systems. Single-core systems suffered from several limiting factors, including:
* individual transistor gate size,
* physical limits in the design of integrated circuits, which caused significant heat dissipation, and
* synchronization issues with the coherency of data.
A common metric for the number of instructions a processor can execute simultaneously for a given program is Instruction-Level Parallelism (ILP). In single-core processors, some ILP techniques were used to improve performance, such as superscalar pipelining, speculative execution, and out-of-order execution. The superscalar technique enables the processor to execute multiple instruction pipelines concurrently within a single clock cycle, but it, along with the other two techniques, was not suitable for many applications, as the number of instructions that can be run simultaneously for a specific program may vary. Such issues with instruction-level parallelism were predominantly dictated by the disparity between the speed at which the processor operated and the access latency of system memory, which cost the processor many cycles as it stalled waiting for fetch operations from system memory to complete.
As manufacturing processes evolved in accordance with Moore’s Law which saw the size of a transistor shrink, it allowed for the number of transistors packed onto a single processor die (the physical silicon chip itself) to double roughly every two years. This enabled the available space on a processor die to grow, allowing more cores to fit on it than before. This led to an increased demand in thread-level parallelism which many applications benefitted from and were better suited for. The addition of multiple cores on a processor also increased the system's overall parallel computing capabilities.
[[File:Moore law graph.png|thumb|left|500px|Source: https://en.wikipedia.org/wiki/File:Moore%27s_Law_Transistor_Count_1971-2018.png]][[File:Single v multi.png|thumb|right|620px|Source: https://www.researchgate.net/publication/332614728/figure/fig5/AS:751235892269058@1556120002090/Memory-management-of-single-core-and-multi-core-systems.png]]<br clear=all />
=== Developments in the first Multi-Core Processors ===
Fast forward to the 2000s, which saw a huge boom in the number of processors working in parallel, with counts in the tens of thousands. One example of this evolution in parallel computing, High Performance Computing, and multi-core systems is the fastest supercomputer today, Japan's Fugaku. It boasts an impressive 7.3 million cores, all of which are, for the first time in a leading supercomputer, ARM-based. It uses a hybrid memory model and a new network architecture that provides higher cohesion among all the nodes. The success of the new system marks a radical paradigm shift, a departure from traditional supercomputing architectures toward ARM-powered systems, and demonstrates, as its designers intended, that HPC still has much room for improvement and innovation.
 
== How multicore products were marketed ==
 
Since multicore systems offered far more processing power than a single-core processor could deliver, many companies leapt at the opportunity to gain more power for their servers. To capitalize on this, IBM developed the first dual-core processor on the market, the POWER4, released in 2001. It became highly successful and gave IBM a very strong foothold in the industry when sold as part of its eServer pSeries server, the IBM Regatta. IBM iterated further on the POWER series of processors, and in 2010 expanded the core count from 2 to 8 with the release of the POWER7.
 
While IBM was dominating the market for server CPUs, there was still a hole in the market for multicore desktop computers. In May of 2005, AMD became the first company to release a dual-core desktop CPU, the Athlon 64 X2. With the cheapest model in the line priced at $500 and the most powerful at $1,000, it did not quite match IBM's "twice the performance for half the cost". However, it was still another major innovation in the industry by AMD, and a top competitor for the most powerful CPU on the market.
== References ==

No, J., Choudhary, A., Huang, W., Tafti, D., Resch, M., Gabriel, E., . . . Pressel, D. (n.d.). Parallel Computer. Retrieved November 18, 2020, from https://www.sciencedirect.com/topics/computer-science/parallel-computer

Wang, W. (n.d.). The Limitations of Instruction-Level Parallelism and Thread-Level Parallelism. Retrieved November 30, 2020, from https://wwang.github.io/teaching/CS5513_Fall19/lectures/ILP_Limitation.pdf