What is Accelerated Computing?

Accelerated computing uses specialized hardware to dramatically speed up work, often with parallel processing that bundles frequently occurring tasks. It offloads demanding work that can bog down CPUs, processors that typically execute tasks in serial fashion.
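To make the serial-versus-parallel contrast concrete, here is a minimal sketch (illustrative only, not from the article; the function names are hypothetical) that performs the same work two ways in CUDA: once as a serial CPU loop and once as a GPU kernel in which many threads each handle a single element.

    #include <cuda_runtime.h>
    #include <cstdio>

    // Serial: one CPU thread walks the array element by element.
    void scale_serial(int n, const float *in, float *out) {
        for (int i = 0; i < n; ++i)
            out[i] = 2.0f * in[i];
    }

    // Parallel: on the GPU, many threads each process one element at the same time.
    __global__ void scale_parallel(int n, const float *in, float *out) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // this thread's element index
        if (i < n)
            out[i] = 2.0f * in[i];
    }

    int main() {
        const int n = 1 << 20;
        float *in, *out;
        cudaMallocManaged(&in, n * sizeof(float));      // unified memory, visible to CPU and GPU
        cudaMallocManaged(&out, n * sizeof(float));
        for (int i = 0; i < n; ++i) in[i] = 1.0f;

        scale_serial(n, in, out);                       // CPU does the work one step at a time
        scale_parallel<<<(n + 255) / 256, 256>>>(n, in, out);  // GPU spreads it over 4,096 blocks of 256 threads
        cudaDeviceSynchronize();                        // wait for the GPU to finish

        printf("out[0] = %.1f\n", out[0]);              // expected: 2.0
        cudaFree(in);
        cudaFree(out);
        return 0;
    }

Compiled with nvcc, the sketch shows the division of labor: the CPU orchestrates, while the accelerator chews through the bulk of the repetitive arithmetic in parallel.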

Accelerated computing was born in the PC and came of age in supercomputers. It is now available in every cloud service and smartphone, and companies of all stripes are using it to transform their businesses with data.

In an architecture often called heterogeneous computing, accelerated computers combine CPUs with other types of processors as equals.

Accelerated Computers: An Inside Look

The most popular accelerators are GPUs. Data processing units (DPUs), a newer class, enable enhanced, accelerated networking. Each plays an important role in creating a balanced, unified system.

Today’s accelerated computing serves both commercial and technical systems, handling data analytics, machine learning, simulation, and visualization. This modern style of computing delivers high performance with low energy consumption.

How the PC Popularized Accelerated Computing

Coprocessors, specialized hardware that accelerates the work of a host CPU, have been around for a long time. Among the earliest were the floating-point processors introduced in 1980, which added advanced math capabilities to the PC.

Graphics accelerators came into high demand with the growth of video games and graphical user interfaces. By 1993, nearly 50 companies were making graphics chips or cards.

NVIDIA launched the GeForce 256 in 1999. It was the first chip to offload from the CPU key tasks of rendering 3D images, and it used four parallel graphics pipelines.

NVIDIA called it a graphics processing unit (GPU), staking its claim on a new class of computer accelerators.

How Researchers Harness Parallel Processing

By 2006, NVIDIA had sold 500 million GPUs. Having outlasted nearly every other graphics vendor in the field, the company saw the next big thing on the horizon.

Researchers were already writing their own code to harness the power of GPUs for tasks beyond the reach of CPUs. A Stanford team led by Ian Buck developed Brook, the first widely adopted programming model to extend the C language for parallel processing.

Buck, who started at NVIDIA as an intern, is now the company’s vice president of accelerated computing. In 2006 he led the launch of CUDA, a programming model that harnesses the GPU’s parallel-processing engines for any computing task.

Paired with the G80 processor at the heart of a new line of NVIDIA GPUs, CUDA opened accelerated computing to a growing range of scientific and industrial applications.
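As a rough sketch of how that offload works in practice (an illustrative example using the standard CUDA runtime API, not code from the article; the kernel and buffer names are hypothetical), the host program copies data to the GPU, launches a kernel across thousands of threads, then copies the results back.

    #include <cuda_runtime.h>
    #include <cstdio>
    #include <cstdlib>

    // Kernel: each GPU thread adds one pair of elements.
    __global__ void vec_add(int n, const float *a, const float *b, float *c) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 20;                  // about a million elements
        const size_t bytes = n * sizeof(float);

        // Host (CPU) buffers.
        float *ha = (float *)malloc(bytes);
        float *hb = (float *)malloc(bytes);
        float *hc = (float *)malloc(bytes);
        for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

        // Device (GPU) buffers, plus copies of the inputs.
        float *da, *db, *dc;
        cudaMalloc(&da, bytes);
        cudaMalloc(&db, bytes);
        cudaMalloc(&dc, bytes);
        cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

        // Launch enough 256-thread blocks to cover all n elements.
        const int threads = 256;
        const int blocks = (n + threads - 1) / threads;
        vec_add<<<blocks, threads>>>(n, da, db, dc);
        cudaDeviceSynchronize();

        // Copy the result back to the host and spot-check it.
        cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
        printf("c[0] = %.1f\n", hc[0]);         // expected: 3.0

        cudaFree(da); cudaFree(db); cudaFree(dc);
        free(ha); free(hb); free(hc);
        return 0;
    }

The design follows the pattern described above: the CPU stays in charge of setup and data movement, while the GPU’s parallel-processing engines handle the arithmetic-heavy inner work.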

HPC + GPUs = Accelerated Sciences

The GPU family destined for the data center expanded with a series of new architectures named after pioneers: Tesla, Fermi, Kepler, Maxwell, Pascal, Volta, Turing and Ampere.

Like the graphics accelerators of the 1990s, these new GPUs faced many competitors, including innovative parallel processors such as the Inmos transputer.

Only the GPU survived, said Kevin Krewell, an analyst at TIRIAS Research, because the others lacked a software ecosystem; that proved to be their death knell.

High performance computing experts around the globe have built GPU-accelerated HPC systems for pioneering science. Their research today spans everything from astrophysics and black holes to genomic sequencing and beyond.

Oak Ridge National Laboratory even published a guide to accelerated computing for HPC users.

InfiniBand Brings Accelerated Networks

Many of these supercomputers use InfiniBand, a fast, low-latency link for building large, distributed networks of GPUs. In April 2020, NVIDIA acquired Mellanox, a pioneer of InfiniBand.

Six months later, NVIDIA unveiled its first DPU, a data processing unit that marks a breakthrough in networking, storage, and security acceleration. BlueField DPUs are already gaining traction in cloud services, supercomputers, OEM systems, and third-party software.
