Our recent webinar on the viability of High Performance Computing (HPC) in the cloud generated some interesting questions. In this first part of two, and in no particular order, here are some of those questions along with our answers. Click here to download the webinar.
I have an HPC application that I’d like to ‘cloud-ify’. Is there an approach you can outline for me?
If you viewed the webinar, you won’t be surprised by my recommendation: Start with an application for which latency is not a factor. In practice, serial or embarrassingly parallel applications are ideal. In other words, at the outset, stick to the upper half of the granularity versus concurrency matrix. By focusing on high-granularity use cases (read: applications tolerant of latency) you are more likely to achieve early wins in the cloud, while being able to tease out the details regarding other constraining requirements like data locality. By systematically introducing more demanding requirements, such as latency, you can progress to tackling workloads that plot in the lower half of the granularity-concurrency matrix.
Can you quantify when the degree of latency becomes problematic from an application's perspective? For instance, for a layout GUI, what is the maximum latency acceptable to a user?
Usability guru Jakob Nielsen approaches GUI latency through the lens of human perceptual abilities. In so doing, Nielsen establishes three limits to consider when optimizing web and application performance: roughly 0.1 seconds for an interaction to feel instantaneous, roughly 1 second to keep the user's flow of thought uninterrupted, and roughly 10 seconds to keep the user's attention. At the attention-keeping limit of 10 seconds, Nielsen suggests the use of progress indicators as well as the availability of an interruption trigger; in other words, when you can't hide latency, you need to proactively solicit tolerance and provide an out!
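The 10-second guidance can be sketched in code: when a task will exceed the attention-keeping limit, report progress and honor a cancel request. The sketch below is illustrative only; the step count and sleep stand in for real work, and a real GUI would set the cancel event from a button rather than leave it unset.

```python
import threading
import time


def long_task(cancel: threading.Event, steps: int = 20) -> bool:
    """Illustrative long-running job: reports progress and checks
    for cancellation, per the 10-second attention-keeping limit."""
    for i in range(steps):
        if cancel.is_set():
            print("Cancelled by user.")
            return False  # the "out" Nielsen recommends
        time.sleep(0.05)  # stand-in for real work
        print(f"Progress: {100 * (i + 1) // steps}%")
    return True


cancel = threading.Event()
# In a GUI, a Cancel button would call cancel.set() from the UI thread.
completed = long_task(cancel)
print("Done" if completed else "Interrupted")
```

The design point is simply that progress feedback and cancellation are checked inside the work loop itself, so the user is never more than one step away from an update or an exit.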
Are the EDA software application vendors becoming more open to having customers run their software on 3rd party cloud resources?
“In some ways, EDA ISVs like Cadence, Mentor and Synopsys were early adopters of cloud computing as a paradigm,” states UberCloud president and co-founder Wolfgang Gentzsch. “Dating back to industry events like DAC (the Design Automation Conference) in 2012 and 2013, I well recall discussions with cloud executives from each of these companies. Apparently, the combination of their EDA software and consulting services, plus their customers’ hardware, resulted in some early wins for what we now call private clouds. Unfortunately, however, these EDA apps have not effectively transitioned to the public cloud. It isn’t technical considerations like latency preventing this transition; it’s non-technical ones, such as IP protection and licensing models, I suspect. Even with an established software provider like ANSYS, The UberCloud has not undertaken any experiments with EDA-oriented use cases. Of course, we remain extremely interested in closing this gap.”
This may be a stretch but, has any experimentation been done with your tools and BOINC?
Based on my cursory review, BOINC is ideally suited to processing embarrassingly parallel workloads on opportunistically available resources such as desktops. Direct experience with our own customer base has led to an expressed preference for the combination of Univa Grid Engine (potentially with our Univa Short Jobs add-on) and Univa UniCloud to ensure optimal, secure use of on-demand IT infrastructure on the ground and in the cloud. Because time-to-results is a requirement common to many of our customers across vertical markets, dedicated use of scaled-out cloud infrastructure is precisely what they seek. As noted in the webinar, a recent case study involving the Broad Institute offers a view into the kinds of outcomes achievable today.