As we wind down the year, we took some time to have a little fun by asking two of our most vocal employees what they predict 2018 will bring. Technical Director Dan Nydick and Director of Cloud Products Scott Jeschonek narrowed down their thoughts and settled on the five trends below as their official new year predictions.
A swath of revolutionary new technologies is transforming the once-static data center into a highly dynamic environment. Driving this innovation are the demands of today's modern applications and workloads, which require new levels of agility and performance. In a recent webinar with Avere, Howard Marks, founder and chief scientist at DeepStorage.net, discussed some of the major 2018 data center trends that IT should be paying close attention to, including orchestration, containers, and hybrid cloud, and why these are ripe for adoption. Let's review his thoughts as we get ready for the new year to begin.
When we launched the Avere 5000 Series last year, the FXT 5600 model offered the highest performance, density, and capacity in the line. Customers quickly adopted it to support cloud-ready workflows that demanded performance.
But one thing is certain: fast today is not fast tomorrow. So we keep developing to let these workloads run both in the cloud and on-premises at optimal performance. Our next step is the introduction of the Avere FXT 5850 Edge filer, which delivers twice the performance, capacity, and network bandwidth of the Avere FXT 5600.
Not understanding the give and take between consistency, availability, and partition tolerance, the heart of the CAP theorem, has caused more than a few headaches as people shift applications to the cloud and try to optimize them. Like those basic rules you learned in kindergarten, this is one of the biggies to keep in the back of your head now that you're all grown up and working in information technology.
While in Las Vegas for AWS re:Invent, we jumped at the opportunity to let our CEO and former Carnegie Mellon professor return to the board to review the CAP theorem. Grab a cup of coffee and watch this interactive discussion on the give and take between these three important characteristics of computing.
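If you'd like a concrete feel for the trade-off before you press play, here is a minimal sketch, assuming a simple quorum-replicated store with N copies of each value. The function names and parameters are illustrative, not anything from the video: reads and writes stay strongly consistent only when their quorums overlap (R + W > N), and the system stays available during a partition only if enough replicas remain reachable.

```python
# Minimal sketch of the CAP trade-off via quorum replication.
# Hypothetical parameters; not any particular product's implementation.

def is_strongly_consistent(n: int, w: int, r: int) -> bool:
    """Read and write quorums overlap, so every read sees the latest write."""
    return r + w > n

def stays_available(w: int, r: int, reachable: int) -> bool:
    """Can we still serve both reads and writes with only `reachable` replicas?"""
    return reachable >= max(w, r)

N = 3  # three replicas of each value
for w, r in [(3, 1), (2, 2), (1, 1)]:
    print(f"W={w}, R={r}: consistent={is_strongly_consistent(N, w, r)}, "
          f"available with 2 of 3 replicas={stays_available(w, r, 2)}")
```

Notice the squeeze: W=3 keeps reads consistent but a single unreachable replica blocks every write, while W=R=1 keeps everything responsive through a partition at the cost of potentially stale reads. You can't max out all three properties at once, which is exactly the point of the theorem.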
Scientists and engineers use High Performance Computing (HPC) to solve complex problems. The requirements for success frequently include big compute, high throughput, and fast networking. For a long time, the cloud simply wasn't an option for these workloads: they were too big to move, the latency was too high, or the cost was prohibitive. But now all of that is changing.
The cloud providers are looking to bring HPC workloads onto their infrastructure, as was obvious at this year's SC17 in Denver. Both AWS and Google were talking scale and speed with attendees, promoting the ability to scale parallel tasks beyond what is realistic in traditional local infrastructure. But getting the workloads to the cloud services is another story. Attendees approached Avere to learn how to move file-based applications to the cloud non-disruptively.
In the video below, insideHPC editor Rich Brueckner speaks with Bernie Behn, principal engineer at Avere Systems, about how to get high-performance workloads from network-attached storage (NAS) environments into the cloud.
The Internet of Things (IoT) is a term commonly used to describe everyday devices, such as lights, refrigerators, and even cars, that send and receive data via the Internet. This interconnectivity provides us with more information than ever before. For example, think about how much data you collect every day just walking around with your Fitbit. From counting steps to measuring heart rate, your Fitbit gathers it all so that you can later open the app to see your weekly progress, trends, and more. As you can imagine, this also means massive data growth, requiring more and more storage resources.
Evaluating Your Options to Boost Alpha Throughput via Cloud Compute
In our last related post, we talked about what alpha throughput is and how technology affects it. By increasing their throughput, firms can increase alpha potential and gain competitive advantage.
Each day, organizations are moving more workloads to the cloud, and these workloads are getting bigger. For those with decades of data, often reaching into the petabytes, transferring large data sets to the cloud isn't so simple. It's not only a matter of moving the data, but also of what happens once the data is in the cloud.
First, traditional methods of data movement can be time-consuming. Second, file-based applications that use NAS protocols in the data center will face object-based protocols in the cloud. Traditionally, bridging that gap would mean rewriting applications, which is also time-consuming and not cost-effective.
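To make the protocol gap concrete, here is a hedged sketch contrasting the two access models. The mount path, bucket, and key are hypothetical examples; the object side uses the standard boto3 S3 client:

```python
# A sketch of why NAS-era code doesn't port directly to object storage.
# The mount path, bucket name, and key below are hypothetical.
import boto3

# File-based access over an NFS/SMB mount: byte-addressable, seek-in-place.
with open("/mnt/nas/results/run42.dat", "rb") as f:
    f.seek(1024)           # random access within the file
    chunk = f.read(4096)   # read 4 KiB starting at that offset

# Object-based access: whole-object GETs over HTTP. A ranged GET is the
# closest approximation to a seek, and there are no in-place rewrites.
s3 = boto3.client("s3")
resp = s3.get_object(
    Bucket="my-results-bucket",
    Key="results/run42.dat",
    Range="bytes=1024-5119",  # same 4 KiB window as the seek+read above
)
chunk = resp["Body"].read()
```

An application built around seeks, partial writes, and file locking has no direct equivalent on the object side, which is why a naive migration forces a rewrite rather than a simple re-mount.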
People in the financial services industry, especially in the hedge fund space, are always looking for new and profitable trades. However, with large quantities of data available to more players than ever before, the challenge of finding an 'edge' is greater than ever. This challenge reaches back from the analysts into the IT department, as the work depends entirely on a firm's ability not only to crunch data in less and less time, but also to access large amounts of it readily.