Millions of people today rely on companies like Amazon, Spotify and Netflix for book, music, movie and even grocery recommendations, normalizing artificial intelligence (AI) capabilities in our everyday lives. Powering these seemingly simple interactions, however, is a complex, innovative workflow involving numerous algorithms drawn from a highly pragmatic subset of AI known as Machine Learning.
Although Machine Learning has been known to computer scientists for over three decades, general interest in the concept has only begun to emerge in the past couple of years. Today, it is being used more and more for tasks involving extremely large volumes of data (“Big Data”), such as financial trading, airport security, medical diagnosis, and fraud detection. These copious volumes of (often streamed) data have spurred the development of increasingly sophisticated algorithms aimed at Deep Learning, i.e., Machine Learning algorithms that employ deeply layered neural networks as the (virtual) ‘brains’ behind the approach.
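To make “deeply layered” concrete, here is a minimal sketch of a small deep neural network built with TensorFlow’s Keras API. It assumes TensorFlow 2.x, and the input shape, layer sizes, and ten-class output are illustrative placeholders rather than details from any particular workload:

```python
# A minimal sketch of a "deeply layered" neural network (TensorFlow 2.x assumed).
# The input shape, layer widths, and ten-class task are illustrative only.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(784,)),                     # e.g., a flattened 28x28 image
    tf.keras.layers.Dense(256, activation="relu"),    # hidden layer 1
    tf.keras.layers.Dense(128, activation="relu"),    # hidden layer 2
    tf.keras.layers.Dense(64, activation="relu"),     # hidden layer 3
    tf.keras.layers.Dense(10, activation="softmax"),  # class probabilities
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()  # prints the stack of layers that makes the network "deep"
```

Each Dense layer feeds the next, and it is this stacking of many trainable layers, scaled up to far larger architectures in practice, that distinguishes Deep Learning from shallower Machine Learning methods.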
This combination of Deep Learning and Big Data introduces computational demands that well exceed the processing potential of even the best multicore central processing units (CPUs). Thus, General Purpose Graphics Processing Units (GPGPUs) have emerged as the de facto standard for Deep Learning. GPUs are well suited to executing Deep Learning algorithms over Big Data because they deliver results with compelling performance and energy efficiency. Their rise in popularity has been fueled by a thriving Deep Learning ecosystem, which has produced GPU support and enhancements for popular frameworks such as TensorFlow.
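As a rough illustration of how a framework like TensorFlow takes advantage of GPUs, the sketch below (again assuming TensorFlow 2.x) lists the GPUs visible to the framework and pins a matrix multiplication to one of them; the device count and names depend entirely on the machine it runs on:

```python
# A minimal sketch of GPU discovery and device placement in TensorFlow 2.x.
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
print(f"GPUs visible to TensorFlow: {len(gpus)}")

# Pin a computation to the first GPU when one is present; otherwise the
# same matrix multiply simply runs on the CPU.
device = "/GPU:0" if gpus else "/CPU:0"
with tf.device(device):
    a = tf.random.normal([1024, 1024])
    b = tf.random.normal([1024, 1024])
    c = tf.matmul(a, b)
print("matmul ran on:", c.device)
```

The framework handles the device placement; the application code is essentially unchanged whether it runs on a laptop CPU or a multi-GPU server.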
Whether or not they make use of GPUs, most organizations are not as advanced as Amazon, Apple, Google or Facebook in employing Deep Learning and Big Data in their production workflows. In contrast to these innovators, early adopters are focused on validating a single application (or even a single aspect of an application) as they seek to embrace Machine Learning.
Most Univa customers seeking to employ Machine Learning do so by refactoring existing code or introducing new algorithms and programs into net-new or existing applications. They turn to us once they have had success with R&D prototyping on isolated laptops and workstations and want to discuss how to implement Machine Learning in production and across their organization. When they hit this stage, the typical problems they run into are:

- Scaling Machine Learning workloads beyond a single laptop or workstation
- Sharing limited (and expensive) GPU resources across users and teams
- Integrating Machine Learning frameworks and workflows with existing infrastructure
- Containerizing applications and their dependencies
Scaling, sharing, integration and containerization are just the beginning, and we’re looking forward to discussing all of these and more at our booth (#309) and on the conference floor at GTC this week.