Why Do Graphics Cards Matter in the Cloud?

Last updated: 9.16.2020

If you’re pricing out a cloud server, you’re probably comparing the number of virtual CPUs (Central Processing Units) along with RAM, storage, and perhaps network fees. If you were building a gaming PC, you’d be pricing out all of those components, but you’d also be setting aside a major chunk of money for a graphics card, or GPU (Graphics Processing Unit). GPUs were originally designed to process digital graphics for visually intensive tasks like gaming and animation.

With the rise of big data analytics and machine learning, however, GPUs are playing an increasingly important part in high performance computing. Cloud providers have started getting in on the game, offering GPU-accelerated cloud servers aimed at big data processing and other intensive applications.

How do GPUs accelerate compute tasks?

In applications that support GPU acceleration, compute-intensive functions are offloaded from the CPU to the GPU. Usually these functions make up only a small portion of the overall code, with the rest continuing to run on the CPU.

Even multi-core CPUs are designed for sequential processing of code. GPUs, by contrast, run at lower clock frequencies but contain thousands of smaller cores built to handle many operations at once, an approach known as “massively parallel” architecture.
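To make that contrast concrete, here is a minimal, illustrative sketch in CUDA C++ (not drawn from any specific product or benchmark): the same vector addition written as a sequential CPU loop and as a GPU kernel that spreads the work across thousands of threads.

```cuda
#include <cuda_runtime.h>

// CPU version: a single core steps through the array one element at a time.
void add_cpu(const float *a, const float *b, float *out, int n) {
    for (int i = 0; i < n; ++i) {
        out[i] = a[i] + b[i];
    }
}

// GPU kernel: the same work is divided among thousands of lightweight threads,
// each handling exactly one element -- the "massively parallel" model.
__global__ void add_gpu(const float *a, const float *b, float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) {
        out[i] = a[i] + b[i];
    }
}
```

Launched with enough thread blocks to cover the array, every element is computed at essentially the same time rather than one after another. The host-side code that actually moves data and launches a kernel is sketched in the next section.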

For High Performance Computing (HPC) applications, data is handed off to the GPU for processing and then returned to the CPU. Before the rise of general-purpose computing on graphics processing units (GPGPU), this was a one-way process: the CPU translated tasks into graphical form, handed them to the GPU, and the results went straight to the display once processing finished. GPGPU instead relies on programming platforms that allow general-purpose code to run on the GPU, such as OpenCL or Nvidia CUDA.
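The sketch below illustrates that round trip using the CUDA runtime API, with a simple made-up kernel that squares each element: data is copied from CPU memory to the GPU, processed in parallel, and copied back to the CPU rather than sent to a display.

```cuda
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

// Illustrative kernel: square each element in place.
__global__ void square(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        data[i] *= data[i];
    }
}

int main() {
    const int n = 1 << 20;                       // ~1 million elements
    std::vector<float> host(n, 3.0f);
    const size_t bytes = n * sizeof(float);

    // 1. Copy the dataset from CPU (host) memory to GPU (device) memory.
    float *device = nullptr;
    cudaMalloc(&device, bytes);
    cudaMemcpy(device, host.data(), bytes, cudaMemcpyHostToDevice);

    // 2. Launch enough 256-thread blocks to cover every element.
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    square<<<blocks, threads>>>(device, n);

    // 3. Copy the results back to the CPU -- the step that distinguishes GPGPU
    //    from classic graphics work, where output went straight to the screen.
    cudaMemcpy(host.data(), device, bytes, cudaMemcpyDeviceToHost);
    cudaFree(device);

    printf("host[0] = %.1f\n", host[0]);         // prints 9.0
    return 0;
}
```

In a real HPC workload the kernel would be far more complex, but the host-to-device-and-back pattern stays the same.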

GPUs have been used for general-purpose computing in applications such as load-balancing clusters, physics engines, statistical physics, bioinformatics, audio signal processing, digital image and video processing, weather forecasting, climate research, financial analysis, medical imaging, databases, cryptography, cryptocurrency mining, antivirus software, and intrusion detection.

Challenges of GPU acceleration in the cloud

The largest cloud providers have recently announced GPU instances on their platforms. VMware also offers some virtualized GPU features, supporting Nvidia CUDA and other platforms for GPU-accelerated HPC.

As with any high-performance application, GPGPU faces latency issues when delivered from the cloud, as the datasets in use tend to be very large. It is best to keep all data in the cloud, process it there, and generate a report, rather than sending data back and forth. Storage is another factor, as HPC applications are often IOPS-intensive. The best cloud-based HPC platform for GPGPU is therefore one with custom high-speed interconnects, flash-based storage, and plenty of RAM. Even the environments tailored to HPC from SoftLayer or AWS may not deliver ideal performance for your application, however. It may be best to find a cloud provider that can tailor a system to your requirements to maximize performance.

While these applications may not be used daily by enterprises and mid-market businesses, HPC is reaching the mainstream. GPU manufacturers continue to expand the ways GPU architecture can be applied to atypical workloads, helping bring what was once limited to supercomputers into the world of cloud, on-premises data centers, and even mobile and distributed computing.
