If you’re pricing out a cloud server, you’re probably comparing a certain number of virtual CPUs (central processing units), along with RAM, storage, and perhaps network fees. If you were building a gaming PC, you’d be pricing out all of those items too, but you’d also be setting aside a major chunk of money for a graphics card, or GPU. GPUs were originally designed to handle the rendering of digital graphics in visually intensive tasks like gaming and animation.
With the rise of big data analytics and machine learning, however, GPUs are playing an increasingly important part in high-performance computing. Cloud providers have started getting in on the game, offering GPU-accelerated cloud servers aimed at big data processing and other intensive applications.
For applications that support GPU acceleration, compute-intensive functions are offloaded from the CPU to the GPU. Usually this is only a small piece of the overall code; the rest continues to run on the CPU.
Even multi-core CPUs are built for fast sequential processing, with a handful of powerful cores. GPUs, on the other hand, operate at lower clock frequencies but contain thousands of smaller cores designed to execute many operations simultaneously, an approach known as “massively parallel” architecture.
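The difference is easy to see in code. Below is a minimal sketch of the same element-wise array addition written as a sequential CPU loop and as a CUDA kernel; the function names are illustrative, not from any particular library.

```cuda
// Sequential CPU version: a single core walks the array element by element.
void add_cpu(const float *a, const float *b, float *c, int n) {
    for (int i = 0; i < n; i++)
        c[i] = a[i] + b[i];
}

// CUDA version: the loop disappears. Thousands of GPU threads run this
// function at once, each computing its own index and handling one element.
__global__ void add_gpu(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)  // guard threads whose index falls past the end of the array
        c[i] = a[i] + b[i];
}
```

The `if (i < n)` guard is the idiomatic way to launch more threads than there are elements without reading out of bounds.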
Before the rise of general-purpose computing on graphics processing units (GPGPU), using a GPU for high-performance computing (HPC) meant translating data into a graphical form the GPU could process, then translating the results back for the CPU, and the graphics pipeline itself was strictly one-way: the CPU handed graphics-specific tasks to the GPU, which sent its output directly to the display after processing. GPGPU instead relies on programming frameworks such as OpenCL or Nvidia CUDA, which allow general-purpose code to run on the GPU’s shader cores and return results to the CPU.
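That CPU-to-GPU-and-back round trip is visible in CUDA host code: allocate GPU memory, copy the data over, launch a kernel, and copy the results back. A minimal sketch, with an illustrative kernel and arbitrary sizes:

```cuda
#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

// Illustrative kernel: each thread scales one element of the array.
__global__ void scale(float *data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main(void) {
    const int n = 1 << 20;            // 1M floats, an arbitrary example size
    size_t bytes = n * sizeof(float);

    float *host = (float *)malloc(bytes);
    for (int i = 0; i < n; i++) host[i] = 1.0f;

    float *dev;
    cudaMalloc(&dev, bytes);                               // allocate GPU memory
    cudaMemcpy(dev, host, bytes, cudaMemcpyHostToDevice);  // CPU -> GPU

    // Launch enough 256-thread blocks to cover all n elements.
    scale<<<(n + 255) / 256, 256>>>(dev, 2.0f, n);

    cudaMemcpy(host, dev, bytes, cudaMemcpyDeviceToHost);  // GPU -> CPU
    printf("host[0] = %f\n", host[0]);

    cudaFree(dev);
    free(host);
    return 0;
}
```

Those two `cudaMemcpy` calls are exactly the transfer cost discussed below: for large datasets, moving data between host and device (or between a local machine and the cloud) can dominate the computation itself.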
GPUs have been used for general-purpose computing tasks in applications like load-balancing clusters, physics engines, statistical physics, bioinformatics, audio signal processing, digital image and video processing, weather forecasting, climate research, financial analysis, medical imaging, databases, cryptography, cryptocurrency mining, antivirus scanning, and intrusion detection.
The largest cloud providers have recently announced GPU instances in their clouds, and VMware offers some virtualized GPU features as well, supporting Nvidia CUDA and other platforms for GPU-accelerated HPC.
As with any high-performance application, GPGPU faces latency issues when delivered from the cloud, since the datasets involved tend to be very large. It is best to keep the data in the cloud, process it there, and generate a report, rather than shuttling data back and forth. Storage is another factor, as HPC applications are often IOPS-intensive. The best cloud-based HPC platform for GPGPU is therefore one with custom high-speed interconnects, flash-based storage, and ample RAM. Even the environments tailored to HPC from SoftLayer or AWS may not deliver ideal performance for your application, however; it may be best to find a cloud provider that can tailor a system to your requirements to maximize performance.
While these applications may not be used daily by enterprises and mid-market businesses, HPC is reaching the mainstream. GPU manufacturers continue to push the ways in which GPU architecture can be used to process atypical workloads, helping to bring what used to be limited to supercomputers into the world of cloud, on-premises data centers, and even mobile and distributed computing.