Scientists and engineers use High Performance Computing (HPC) to solve complex problems. Success frequently requires big compute, high throughput, and fast networking. For some time, the cloud simply wasn't an option for these workloads: they were too big to move, the latency was too high, or it was simply too costly. But now all of that is changing.
Cloud providers are working to bring HPC workloads into their infrastructures, as was obvious at this year's SC17 in Denver. Both AWS and Google talked scale and speed to attendees, promoting the ability to scale parallel tasks beyond what is realistic in traditional local infrastructure. But getting the workloads to those cloud services is another story, and attendees turned to Avere to learn how to move file-based applications to the cloud non-disruptively.
In the video below, insideHPC editor Rich Brueckner spoke with Bernie Behn, principal engineer at Avere Systems, about how to move high-performance workloads from network-attached storage (NAS) environments into the cloud.
The Internet of Things (IoT) is a term commonly used to describe everyday devices, such as lights, refrigerators, and even cars, that send and receive data via the Internet. This interconnectivity provides us with more information than ever before. For example, think about how much data you collect every day just walking around with your Fitbit. From counting steps to measuring heart rate, your Fitbit collects all of this data so that you can later open the app to see your weekly progress, trends, and more. As you can imagine, this also means massive data growth, requiring more and more storage resources.
Evaluating Your Options to Boost Alpha Throughput via Cloud Compute
In our last related post, we discussed what alpha throughput is and how technology affects it. By increasing throughput, firms can raise their alpha potential and gain a competitive advantage.
Industry 4.0 might not be a household name today, but it is already having an impact on manufacturing. The term describes manufacturing as a confluence of the Internet of Things (IoT), cloud computing, and big data analytics, managed by artificial intelligence (AI). Beyond quality control and production monitoring, it can automate simple decisions, making them as autonomous as possible.
Adopting this shift can be both rewarding and challenging. In this blog post, I will look at its upside and at how Avere can help overcome some of its challenges.
In our last post (part one of this two-part series) on using the cloud in DevOps environments, we discussed why the cloud benefits teams, but also reviewed the limitations that must be addressed. In this article, we'll discuss how DevOps environments can overcome those limitations by bridging on-premises resources with the public cloud using Avere FXT high-performance file system technology.
DevOps is a software development framework that pairs development and operations to enable a speedy, agile workflow. The goal is to address critical business needs by combining traditionally separate business units into a single working team, breaking down interdepartmental barriers through collaboration and communication. The public cloud, with its subscription-based pricing model and virtually unlimited compute, network, and storage resources, can complement an existing DevOps framework by allowing agile deployment of additional resources to both development and IT whenever they are needed.
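As a purely illustrative sketch of that agility, the snippet below uses the AWS SDK for Python (boto3) to launch an extra build worker on demand and release it when the job finishes. The AMI ID, instance type, region, and tag are hypothetical placeholders, and other providers offer equivalent APIs.

```python
# Hypothetical sketch: launch an extra CI/build worker on demand,
# then terminate it when the pipeline run is finished. The AMI ID
# and instance type below are placeholders, not recommendations.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def launch_build_worker():
    resp = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder build-agent image
        InstanceType="c5.2xlarge",        # sized for compile-heavy jobs
        MinCount=1,
        MaxCount=1,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "role", "Value": "ci-build-worker"}],
        }],
    )
    return resp["Instances"][0]["InstanceId"]

def release_build_worker(instance_id):
    # Pay-as-you-go: stop paying the moment the job is done.
    ec2.terminate_instances(InstanceIds=[instance_id])
```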
When used together, Avere FXT Edge Series filers with Dell EMC Elastic Cloud Storage (ECS) can deliver high performance and massively scalable storage. The combined solution enables enterprises to scale their performance and storage requirements separately with the flexibility to run their applications on premises and burst onto the public cloud, meeting short-term or unanticipated demand.
While “scale-out” appears in plenty of marketing materials these days, many readers are understandably confused about what it means. It is listed as a key feature because the cloud provides exactly what scale-out describes: a large number of nodes that work together to deliver aggregate performance that no single large node can achieve on its own. The cloud is often the provider of the scale-out infrastructure, not necessarily the software product implementing the scale-out functionality, with a few exceptions, of course.
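To make the idea concrete, here is a toy Python sketch of the scale-out pattern, not any vendor's implementation: work is fanned out across many nodes and the results aggregated, so bandwidth adds up with node count. The node addresses and the fetch_chunk() helper are invented for illustration.

```python
# Toy illustration of scale-out: fan requests out to many nodes
# and aggregate the results, rather than pushing one big node harder.
from concurrent.futures import ThreadPoolExecutor

NODES = [f"10.0.0.{i}" for i in range(1, 9)]  # eight pretend nodes

def fetch_chunk(node: str, chunk_id: int) -> bytes:
    # Stand-in for a real network read; each node serves only its
    # share of the data, so throughput grows with the node count.
    return f"{node}:chunk-{chunk_id}".encode()

def scale_out_read(num_chunks: int) -> list:
    # Spread chunks across nodes round-robin and collect in parallel.
    with ThreadPoolExecutor(max_workers=len(NODES)) as pool:
        futures = [
            pool.submit(fetch_chunk, NODES[i % len(NODES)], i)
            for i in range(num_chunks)
        ]
        return [f.result() for f in futures]

print(f"read {len(scale_out_read(64))} chunks from {len(NODES)} nodes")
```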
Next-generation sequencing (NGS) workloads generate more data than ever before. It isn't just storage infrastructure that needs to expand, though: the computing resources that process this data must grow as well, and those costs can quickly exceed most research organizations' budgets. At that point, NGS compute demand exceeds the on-premises infrastructure's capacity.
Cloud computing offers a great solution to these issues, with pay-as-you-go pricing and access to virtually limitless compute capacity. Still, there are a few concerns about using cloud compute for genomics workloads, including moving the entire data set into the cloud and rewriting applications to access data using object storage protocols instead of the existing NAS.
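To see why that rewriting concern is non-trivial, here is a minimal, hypothetical sketch contrasting the two access patterns; the mount path, bucket, and key are invented placeholders. A pipeline written against POSIX file I/O needs SDK-style calls, and different semantics, to read the same data from object storage.

```python
# Hedged sketch: the same logical read looks very different over
# NAS (plain POSIX I/O) and S3-style object storage (SDK calls).
import boto3

def read_via_nas(path="/mnt/nas/genomes/sample.fastq"):
    # Existing NGS pipelines typically use plain file I/O like this.
    with open(path, "rb") as f:
        return f.read()

def read_via_s3(bucket="genomics-data", key="genomes/sample.fastq"):
    # The object-storage version needs an SDK call and brings
    # different semantics (no in-place writes, whole-object reads).
    s3 = boto3.client("s3")
    resp = s3.get_object(Bucket=bucket, Key=key)
    return resp["Body"].read()
```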