CPU vs. GPU





Initially, computers had graphics accelerators rather than dedicated GPUs. The system would process graphics through its central processing unit (CPU), and the graphics accelerator would speed up that processing. The implication is that graphics could be handled by hardware decoupled from the central processor, a paradigm that later evolved into dedicated GPUs and, eventually, GPU cloud computing.









Not only has performance increased, but the quality of computation and the flexibility of graphics programming have also steadily improved over the same period. Before GPUs, PCs and computer workstations had graphics accelerators, not graphics processing units. The implication was that a graphics accelerator merely did that: it accelerated graphics. The word “accelerator” implied that the computer performed the same rendering operations as before, only faster. As GPUs replaced graphics accelerators, the old concept of graphics acceleration was abandoned in favor of graphics processing: processors that enable entirely new graphics algorithms and effects.



GPUs and CPUs process tasks differently. A CPU handles only a few software threads at a time because it executes instructions serially; its architecture consists of a few cores optimized with large cache memories. GPUs, by contrast, comprise thousands of smaller cores designed to handle many tasks simultaneously, enabling parallel processing. A GPU can therefore run thousands of threads at once, accelerating suitable software well beyond what a CPU can achieve.
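As an illustrative sketch of the two execution models, consider a batch of independent tasks. Here a Python thread pool merely stands in for a GPU's many cores, and a plain loop stands in for the CPU's serial model; this is conceptual, not benchmark-accurate.

```python
# Illustrative sketch: serial vs. parallel execution of independent tasks.
# A thread pool stands in for the GPU's many cores; a plain loop stands
# in for the CPU's serial model. (Conceptual only, not a benchmark.)
from concurrent.futures import ThreadPoolExecutor

def square(x):
    return x * x

data = list(range(8))

# Serial model: one worker walks the data element by element.
serial = [square(x) for x in data]

# Parallel model: each element may be handled by a different worker.
with ThreadPoolExecutor(max_workers=8) as pool:
    parallel = list(pool.map(square, data))

assert serial == parallel  # same results, different execution model
```

The key property that makes the parallel version possible is that each task depends only on its own input, so the tasks can be distributed across as many workers (or cores) as are available.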



Essentially, parallel processing attacks a problem from multiple angles at once, whereas serial processing works from a single one. If reading a book were a computation, a CPU would start at the first page and read through to the last; a GPU would tear out the pages and read them all at the same time. This model suits highly regular calculations such as floating-point arithmetic and matrix arithmetic, which is why GPUs perform functions like video conversion and post-processing more effectively than a CPU.
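Matrix arithmetic makes the "many independent sub-problems" point concrete: every cell of the output matrix is a dot product that depends on no other cell, so a GPU can compute them all concurrently. A minimal pure-Python sketch of that structure (the function name `matmul` is our own, chosen for illustration):

```python
# Sketch: matrix multiplication decomposes into independent dot
# products, one per output cell, which is why it maps so well onto
# a GPU's many cores. Pure Python, just to show the computation's shape.
def matmul(a, b):
    rows, inner, cols = len(a), len(b), len(b[0])
    # Every (i, j) cell is independent of every other cell, so all
    # rows * cols dot products could in principle run simultaneously.
    return [[sum(a[i][k] * b[k][j] for k in range(inner))
             for j in range(cols)]
            for i in range(rows)]

a = [[1, 2], [3, 4]]
b = [[5, 6], [7, 8]]
print(matmul(a, b))  # [[19, 22], [43, 50]]
```

A GPU kernel would assign each (i, j) cell to its own hardware thread instead of looping over them, which is precisely the parallelism the serial CPU loop above leaves on the table.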