Last week, we were joined by two speakers from NVIDIA Corporation to present the live webinar, “Maximizing the utilization of GPU resources on-premise and in the cloud.” Users are increasingly taking advantage of GPUs, containers, and cloud resources for high-performance applications, so effective management of GPU workloads is more important than ever. Rob Lalonde, Univa’s VP and General Manager, Cloud, discussed the results of our 2020 State of the GPU Survey: approximately 83% of our customers currently use GPUs to run applications, and 80% run at least three distinct GPU workloads.
Barton Fiske, Senior Alliances Manager, Cloud, NVIDIA Corporation, shared the latest news from NVIDIA and discussed computing in the age of AI. Adam Tetelman, Deep Learning Solutions Architect, NVIDIA Corporation, delved into the powerful Multi-Instance GPU (MIG) capability of the NVIDIA A100 Tensor Core GPU. Instead of simply sending a job to a whole GPU, MIG can carve an A100 into up to seven isolated instances, each with its own dedicated streaming multiprocessors, memory, L2 cache, and memory bandwidth, improving hardware utilization. Univa’s Bill Bryce, VP of Products, explored Univa Grid Engine’s robust GPU sharing policies on-prem and in the cloud.
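For readers curious what MIG partitioning looks like in practice, here is a minimal sketch using the `nvidia-smi mig` commands. It assumes a MIG-capable GPU such as the A100 on a Linux host with a recent NVIDIA driver, and uses the 1g.5gb profile (profile ID 19 on the 40 GB A100) to create seven equal instances; profile IDs vary by GPU model, so check the output of `-lgip` on your own system.

```shell
# Enable MIG mode on GPU 0 (requires admin privileges; a reset may be needed)
sudo nvidia-smi -i 0 -mig 1

# List the GPU instance profiles available on this GPU
sudo nvidia-smi mig -lgip

# Create seven 1g.5gb GPU instances (profile ID 19 on a 40 GB A100),
# with -C also creating a default compute instance inside each
sudo nvidia-smi mig -cgi 19,19,19,19,19,19,19 -C

# List the resulting MIG devices; each appears with its own UUID
nvidia-smi -L
```

Each MIG device can then be targeted independently (for example via `CUDA_VISIBLE_DEVICES` or a container runtime), which is what allows a scheduler to treat one physical A100 as several schedulable GPU resources.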
The webinar then moved to an insightful panel discussion, which covered expanding MIG support to other GPUs and taking advantage of the A100’s MIG support in the cloud.
We’d like to thank Barton Fiske and Adam Tetelman for their insights and participation. We’d also like to thank all attendees for their time and fantastic questions for the panel. The “Maximizing the utilization of GPU resources on-premise and in the cloud” webinar is an excellent opportunity to gain valuable insights and knowledge to help you optimize your GPUs. Download the webinar today.