Scientists and engineers use High Performance Computing (HPC) to solve complex problems. Success frequently requires large-scale compute, high throughput, and fast networking. For some time, the cloud simply wasn't an option for these workloads: they were too big to move, the latency was too high, or it was simply too costly. But now all of that is changing.
Cloud providers are looking to bring HPC workloads onto their infrastructures, as was obvious at this year's SC17 in Denver. Both AWS and Google talked scale and speed to attendees, promoting the ability to scale parallel tasks beyond what is realistic in traditional on-premises infrastructure. But getting the workloads to those cloud services is another story. To learn how to move file-based applications to the cloud non-disruptively, attendees approached Avere.
In the video below, insideHPC editor Rich Brueckner speaks with Bernie Behn, principal engineer at Avere Systems, about how to move high-performance workloads from network-attached storage (NAS) environments to the cloud.