A swath of revolutionary new technologies is transforming the once-static data center into a highly dynamic environment. Driving this innovation are the demands of today's modern applications and workloads, which require new levels of agility and performance. In a recent webinar with Avere, Howard Marks, founder and chief scientist at DeepStorage.net, discussed some of the major 2018 data center trends that IT should be paying close attention to, including orchestration, containers, and hybrid cloud, and why these are ripe for adoption. Let's review his thoughts as we get ready for the new year to begin.
When we launched the Avere 5000 Series last year, the FXT 5600 model added the most performance, density, and capacity. Customers quickly adopted it to support cloud-ready workflows that demanded performance.
But one thing is certain: fast today is not fast tomorrow. We keep developing so that these workloads can run both in the cloud and on-premises at optimal performance. Our next step is the introduction of the Avere FXT 5850 Edge filer, which delivers double the performance, capacity, and network bandwidth of the Avere FXT 5600.
Scientists and engineers use High Performance Computing (HPC) to solve complex problems. The requirements for success frequently include big compute, high throughput, and fast networking. For some time, the cloud simply hasn't been an option for these workloads: they were too big to move, the latency was too high, or it was simply too costly. But now all of that is changing.
The cloud providers are looking to bring HPC workloads into their infrastructures, as was obvious at this year's SC17 in Denver. Both AWS and Google were talking scale and speed to attendees, promoting the ability to scale parallel tasks beyond what is realistic in traditional local infrastructure. But getting the workloads to the cloud services is another story. To learn how to move file-based applications to the cloud non-disruptively, attendees approached Avere.
In the below video, insideHPC editor Rich Brueckner spoke to Bernie Behn, principal engineer at Avere Systems about how to get high-performance workloads from network-attached storage (NAS) environments onto the cloud.
Caching and tiering are common terms when talking about flash storage. Simply put, caching is when data is stored so that future requests for that data can be served more quickly. The data stored in a cache might be the result of an earlier computation, or a copy of data stored in slower storage media. Tiering, by contrast, finds more permanent locations for data, moving less active data to lower-performance but more cost-effective storage, and more active data to faster tiers.
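The caching idea described above can be sketched with a minimal least-recently-used (LRU) cache. This is an illustrative example only, not how Avere OS implements its cache: items that are read again stay "hot" in the cache, while the least recently used item is evicted when capacity runs out (a miss would then be served from the slower backing storage).

```python
from collections import OrderedDict

class LRUCache:
    """Toy LRU cache: keep recently used items in fast storage, evict the rest."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()  # insertion order doubles as recency order

    def get(self, key):
        if key not in self.items:
            return None  # cache miss: caller would fetch from slower backing storage
        self.items.move_to_end(key)  # mark as most recently used
        return self.items[key]

    def put(self, key, value):
        if key in self.items:
            self.items.move_to_end(key)
        self.items[key] = value
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)  # evict the least recently used item
```

Tiering follows the same intuition in reverse: instead of evicting cold data outright, it is migrated to a cheaper, slower tier where it remains accessible.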
One of the most talked-about subjects is the difference between file system and object interfaces. As cloud grows in popularity, it's important to consider how existing NAS storage can integrate with new object storage, and how to make existing applications work on both types of storage. This quickly becomes a difficult task due to the differences in storage protocols.
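One concrete protocol difference is that a file system has real directories, while an object store is a flat key space where "directories" are only key prefixes. A minimal sketch (with hypothetical key names) shows why an operation that is trivial on NAS, such as renaming a directory, becomes a copy-and-delete of every object under a prefix:

```python
# A flat object store keyed by strings; "folders" are just shared key prefixes.
object_store = {
    "projects/render/frame0001.exr": b"...",
    "projects/render/frame0002.exr": b"...",
    "projects/audit/log.txt": b"...",
}

def list_prefix(store, prefix):
    """Emulate a directory listing with a prefix scan, as S3-style APIs do."""
    return sorted(k for k in store if k.startswith(prefix))

def rename_dir(store, old_prefix, new_prefix):
    """A POSIX file system renames a directory in one metadata operation;
    a flat object store must rewrite every key under the prefix."""
    for key in list_prefix(store, old_prefix):
        store[new_prefix + key[len(old_prefix):]] = store.pop(key)
```

This mismatch is one reason file-based applications cannot simply point at an object store: semantics they rely on (atomic rename, partial in-place writes, directory hierarchy) have to be translated by a gateway or caching layer.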
When used together, Avere FXT Edge Series filers with Dell EMC Elastic Cloud Storage (ECS) can deliver high performance and massively scalable storage. The combined solution enables enterprises to scale their performance and storage requirements separately with the flexibility to run their applications on premises and burst onto the public cloud, meeting short-term or unanticipated demand.
When people talk about moving local file-based data to the cloud, you can almost always hear reluctance in their voices. Fear, uncertainty, and doubt regarding security and performance remain, despite the cloud's growth.
We received an Ask Avere Almost Anything from someone battling these issues. What possible advantage does the cloud bring that tips the scale in its direction? Avere principal engineer Bernie Behn approaches this question from the standpoint of data maintenance and protection. Watch the latest video for his recommendations and learn what he thinks is the biggest reason for moving local storage to the cloud.
Let’s see, what’s after zettabyte? How about a yottabyte, which is 1,000 zettabytes, which is 1,000 exabytes, which is 1,000 petabytes—is that right? Okay, probably you aren’t personally thinking about data at such scale. But on the subject of data growth, industry analysts agree those big numbers are coming. IDC has predicted that in just three years the digital universe will expand to 44 zettabytes and that the world will be using some 30 billion Internet-connected devices.
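The chain of units above is easy to sanity-check, since each decimal (SI) prefix is a factor of 1,000 over the previous one:

```python
# Decimal (SI) storage units: each step up is a factor of 1,000 bytes.
units = ["KB", "MB", "GB", "TB", "PB", "EB", "ZB", "YB"]
scale = {u: 1000 ** (i + 1) for i, u in enumerate(units)}

assert scale["YB"] == 1000 * scale["ZB"]  # 1 yottabyte = 1,000 zettabytes
assert scale["ZB"] == 1000 * scale["EB"]  # 1 zettabyte = 1,000 exabytes
assert scale["EB"] == 1000 * scale["PB"]  # 1 exabyte  = 1,000 petabytes

# IDC's predicted 44 ZB digital universe, expressed in bytes:
digital_universe_bytes = 44 * scale["ZB"]  # 4.4 x 10^22 bytes
```

So yes, the chain in the paragraph checks out: a yottabyte really is a thousand zettabytes, a million exabytes, and a billion petabytes.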
Quick Answer - Ask Avere Anything
Not all file systems are the same, especially when you need scalability. We recently received an Ask Avere Almost Anything question about comparing types of file systems, specifically distributed file systems versus clustered file systems.
We went straight to our engineering team with this one. Jim Zelenka stepped up to the plate and sat down to explain the difference. The biggest advantage of a clustered file system, like the Avere OS, is that it scales easily and efficiently.
The list of things you must do with enterprise security log data is long and includes storing, mirroring, exporting, migrating, archiving, and referencing it. What you definitely don’t want to do is modify or delete it. But that’s exactly what your log-management application may be doing automatically. If that’s the case, you could be losing valuable forensic data, permitting unauthorized modifications, and forever linking your company’s security analytics to your current vendor. What you need is a way to help you more easily do everything you need to do with your log data while protecting against what you don’t want done with it.