A swath of revolutionary new technologies is transforming the once-static data center into a highly dynamic environment. Driving this innovation are the demands of today’s modern applications and workloads, which require new levels of agility and performance. In a recent webinar with Avere, Howard Marks, founder and chief scientist at DeepStorage.net, discussed some of the major 2018 data center trends that IT should be paying close attention to––including orchestration, containers, and hybrid cloud––and why these are ripe for adoption. Let's review his thoughts as we get ready for the new year to begin.
Not understanding the give and take that exists between consistency, availability and partition tolerance has caused more than a few headaches as people try to shift and optimize applications in the cloud. Like those basic rules you learned in kindergarten, this is one of the biggies to keep in the back of your head now that you're all grown up and working in information technology.
While in Las Vegas for AWS re:Invent, we jumped at the opportunity to let our CEO and former Carnegie Mellon professor return to the board to review the CAP theorem. Grab a cup of coffee and watch this interactive discussion on the give and take between these three important characteristics of computing.
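To make the give and take concrete, here is a toy sketch (our own illustration, not an example from the discussion) of the choice CAP forces during a network partition: a replicated store can either refuse writes to stay consistent, or accept them and let replicas diverge. The `TinyStore` class and its "CP"/"AP" modes are hypothetical names for illustration only.

```python
class Replica:
    """One copy of the data."""
    def __init__(self):
        self.data = {}

class TinyStore:
    """A two-replica store that must pick a side when partitioned."""
    def __init__(self, mode):
        self.mode = mode                    # "CP" or "AP"
        self.a, self.b = Replica(), Replica()
        self.partitioned = False

    def write(self, key, value):
        if self.partitioned:
            if self.mode == "CP":
                # Consistency over availability: reject the write.
                raise RuntimeError("partition: write refused")
            # Availability over consistency: only replica A gets the write.
            self.a.data[key] = value
        else:
            self.a.data[key] = self.b.data[key] = value

    def read_b(self, key):
        return self.b.data.get(key)

store = TinyStore("AP")
store.write("x", 1)
store.partitioned = True
store.write("x", 2)            # accepted, but the replicas now diverge
print(store.read_b("x"))       # replica B still serves the stale value: 1
```

An "AP" system stays up and serves stale data; flip the mode to "CP" and the same write is refused until the partition heals. Real systems sit at many points along this spectrum, but the trade-off itself is unavoidable.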
Each day, organizations are moving more workloads to the cloud, and these workloads are getting bigger. For organizations with decades of data, often measured in petabytes, transferring large data sets to the cloud isn't so simple. It's not only a matter of moving the data, but also of what happens once the data is in the cloud.
First, moving data with traditional methods can be time-consuming. Second, file-based applications that currently use NAS protocols in the data center will now face object-based protocols. Traditionally, this would mean rewriting those applications, which is also time-consuming and not cost-effective.
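The protocol mismatch is easy to see side by side. A minimal sketch, using an in-memory buffer as a stand-in for a NAS file and a plain dict as a stand-in for an object store's bucket: a file-based application can seek and rewrite a byte range in place, while an object store typically replaces the whole object on every update, which is exactly why naive ports of file-based applications need rewriting.

```python
import io

# File-style access: in-place partial update via seek/write,
# as a NAS-backed application would do over NFS or SMB.
f = io.BytesIO(b"hello world")
f.seek(6)
f.write(b"avere")
file_result = f.getvalue()          # b"hello avere"

# Object-style access: GET the whole object, modify it in
# application memory, then PUT the whole object back.
bucket = {"greeting": b"hello world"}
obj = bucket["greeting"]            # GET (entire object)
obj = obj[:6] + b"avere"            # modify locally
bucket["greeting"] = obj            # PUT (entire object replaced)

print(file_result == bucket["greeting"])  # same bytes, very different I/O
```

Both paths end with identical data, but the object path re-reads and re-writes everything for a five-byte change. At petabyte scale, that difference is what makes a protocol-translation layer attractive compared with rewriting every application.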
People in the financial services industry, especially in the hedge fund space, are always looking for new and profitable trades. However, with large quantities of data available to more players than ever before, the challenge of finding an 'edge' is greater than ever. This challenge reaches back from the analysts into the IT department, as the work depends entirely on a firm’s ability not only to crunch data in less and less time, but to access large amounts of it readily.
Transferring active data, whether it's to another traditional NAS device or to the cloud, is often considered to be a difficult task. Using traditional methods, like rsync or SCP, to move datasets requires careful consideration of how long the transfer will take, how long the system will need to be offline, and how that will affect your users and their productivity.
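Before scheduling any migration window, it is worth doing the back-of-envelope math. A minimal sketch, with hypothetical numbers: even ignoring the per-file and checksum overhead that tools like rsync add, the raw line rate alone puts a floor under the outage.

```python
def transfer_hours(dataset_tb, link_gbps, efficiency=0.7):
    """Estimated wall-clock hours to push dataset_tb (decimal terabytes)
    over a link_gbps link, sustaining `efficiency` of line rate."""
    bits = dataset_tb * 8 * 10**12              # TB -> bits
    seconds = bits / (link_gbps * 10**9 * efficiency)
    return seconds / 3600

# A hypothetical 500 TB dataset over a 10 Gbps link at 70% efficiency:
print(round(transfer_hours(500, 10), 1))        # ~158.7 hours, well over 6 days
```

Numbers like these are why "just copy it over the weekend" plans fail, and why approaches that keep data accessible during the move matter for active datasets.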
Industry 4.0 might not be a household name today, but it is already having an impact on manufacturing. It is where manufacturing becomes a confluence of the Internet of Things (IoT), cloud computing, and big data analytics managed by artificial intelligence (AI). Beyond quality control and production monitoring, it aims to make simple decisions as autonomously as possible.
Adopting this shift can be both rewarding and challenging. In this blog post, I will look at its upside and how Avere can help overcome some of those challenges.
Data mirroring is a common practice for organizations looking to increase data availability and reduce user disruptions if a primary storage filer goes offline for any reason. As a form of disaster recovery, file system mirroring is used not just to keep a static backup of data, but to maintain a continuously updating "mirror" of the data. Both traditional network-attached storage (NAS) and cloud can be used to mirror a file system. Regardless of what type of storage you use, however, traditional full data replication can be costly and difficult to manage, especially as you start to scale your infrastructure.
Backtesting is more important today than ever before. A hedge fund’s competitive advantage lies in the quality of its investment ideas. And the quality of its investment ideas relies on comprehensive and fast simulations of trade scenarios. So, what do you do when you run out of backtesting capacity?
Caching and tiering are common terms when talking about flash storage. Simply put, caching is when data is stored so future requests for that data can be served more quickly. The data stored in a cache might be the result of an earlier computation, or a copy of data stored in slower storage media. Tiering, by contrast, finds more permanent locations for data, moving less active data to lower-performance but more cost-effective storage and vice versa.
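A minimal sketch of the caching half of that distinction, using a least-recently-used (LRU) policy as one common example: hot items are served from fast memory, and the least-recently-used item is evicted when the cache fills. Note the key difference from tiering in the last line of `get`: the backing store always keeps the original, and the cache only holds a copy.

```python
from collections import OrderedDict

class LRUCache:
    """Tiny read cache in front of a slower, authoritative store."""
    def __init__(self, capacity, backing_store):
        self.capacity = capacity
        self.backing = backing_store          # the slower storage media
        self.cache = OrderedDict()
        self.hits = self.misses = 0

    def get(self, key):
        if key in self.cache:
            self.hits += 1
            self.cache.move_to_end(key)       # mark as most recently used
            return self.cache[key]
        self.misses += 1
        value = self.backing[key]             # slow path: fetch a copy
        self.cache[key] = value
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)    # evict least recently used
        return value

store = {"a": 1, "b": 2, "c": 3}
cache = LRUCache(2, store)
cache.get("a"); cache.get("b"); cache.get("a"); cache.get("c")
print(cache.hits, cache.misses)  # 1 3 -- the repeat read of "a" was a cache hit
```

A tiering engine built on the same shape would instead *move* `"b"` to the cheaper store and delete it from the fast one; real flash storage systems combine both ideas, but the copy-versus-move distinction is the core of it.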
You’ve got a talented quant team that devises sophisticated strategies out of signals they locate in troves of data. They are looking at everything from shipping data to the shadows of Shanghai office buildings as seen from a satellite. And they’ve just found a new signal, potentially a great one.
There’s just one problem. You don’t have the capacity to backtest it quickly enough to trade on it by the time the markets open the next morning, and you lose a potentially golden opportunity for alpha.