As we wind down the year, we took some time to have a little fun by asking two of our most vocal employees what they predict 2018 will bring. Technical Director Dan Nydick and Director of Cloud Products Scott Jeschonek narrowed down their thoughts and settled on the following five trends, captured here as their official new year predictions.
Not understanding the give and take that exists between consistency, availability and partition tolerance has caused more than a few headaches as people try to shift and optimize applications in the cloud. Like those basic rules you learned in kindergarten, this is one of the biggies to keep in the back of your head now that you're all grown up and working in information technology.
While in Las Vegas for AWS re:Invent, we jumped at the opportunity to let our CEO and former Carnegie Mellon professor return to the board to review the CAP theorem. Grab a cup of coffee and watch this interactive discussion on the give and take between these three important characteristics of computing.
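To make the give and take concrete, here is a minimal Python sketch; the class and variable names are our own illustration, not anything from the whiteboard session. When a network partition separates two replicas, a store must either refuse writes to stay consistent or accept them and let the replicas diverge; it cannot do both.

```python
# Minimal CAP sketch: during a partition, pick consistency or availability.

class Replica:
    def __init__(self):
        self.value = None

class TinyStore:
    def __init__(self, prefer_consistency):
        self.replicas = [Replica(), Replica()]
        self.partitioned = False              # True while replicas can't talk
        self.prefer_consistency = prefer_consistency

    def write(self, value):
        if self.partitioned:
            if self.prefer_consistency:
                # CP choice: refuse the write rather than let replicas diverge.
                raise RuntimeError("unavailable during partition")
            # AP choice: stay available, but the replicas may now disagree.
            self.replicas[0].value = value
            return
        for replica in self.replicas:         # healthy path: update everyone
            replica.value = value

store = TinyStore(prefer_consistency=True)
store.write("a")                              # replicated to both nodes
store.partitioned = True
try:
    store.write("b")                          # a CP system gives up availability here
except RuntimeError as err:
    print(err)                                # -> unavailable during partition
```

Flip prefer_consistency to False and the write succeeds during the partition, but a read from the second replica would return stale data. That is the trade-off the theorem says you cannot avoid.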
Evaluating Your Options to Boost Alpha Throughput via Cloud Compute
In our last related post, we talked about what alpha throughput is and how technology affects it. By increasing throughput, firms can test more investment ideas in the same amount of time and gain a competitive advantage.
People in the financial services industry, especially in the hedge fund space, are always looking for new and profitable trades. However, with large quantities of data available to more players than ever before, the challenge of finding an 'edge' is greater than ever. That challenge reaches back from the analysts into the IT department, because the work depends on a firm's ability not only to crunch data in less and less time, but also to access large amounts of it readily.
DevOps environments can overcome cloud limitations by bridging on-premises resources with the public cloud using high-performance file system technology. In our last post (part one of this two-part series) on using the cloud in DevOps environments, we discussed why the cloud benefits these teams, but also reviewed the limitations that must be addressed. In this article, we'll discuss how to do that with Avere FXT technology.
DevOps is a software development framework pairing development and operations together to enable a speedy and agile workflow. The goal is to address critical business needs by combining traditionally separate business units together into a single working team. It’s designed to break down interdepartmental barriers through collaborations and communications. Public cloud, with its subscription based pricing model and unlimited resources (compute, network, and storage) can complement an existing DevOps framework by allowing agile deployment of additional resources to both development and IT whenever it is needed.
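As a rough sketch of what that agile deployment can look like in practice, the snippet below uses the AWS boto3 SDK to scale out a pool of CI workers ahead of a heavy test run. The group name devops-ci-workers and the helper function are hypothetical illustrations, not part of any product described here.

```python
# Hedged sketch: burst extra build/test capacity into AWS on demand,
# then scale back in when the run finishes. Group name is hypothetical.
import boto3

def burst_ci_capacity(extra_workers):
    autoscaling = boto3.client("autoscaling")
    group = autoscaling.describe_auto_scaling_groups(
        AutoScalingGroupNames=["devops-ci-workers"]
    )["AutoScalingGroups"][0]
    # Pay-as-you-go: raise desired capacity for the run; a scale-in policy
    # (or a later call with a lower number) returns the pool to baseline.
    autoscaling.set_desired_capacity(
        AutoScalingGroupName="devops-ci-workers",
        DesiredCapacity=group["DesiredCapacity"] + extra_workers,
    )

# Example usage (requires AWS credentials and an existing group):
# burst_ci_capacity(extra_workers=10)
```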
Backtesting is more important today than ever before. A hedge fund's competitive advantage lies in the quality of its investment ideas, and the quality of those ideas depends on fast, comprehensive simulations of trade scenarios. So, what do you do when you run out of backtesting capacity?
You’ve got a talented quant team that devises sophisticated strategies out of signals they locate in troves of data. They are looking at everything from shipping data to the shadows of Shanghai office buildings as seen from a satellite. And they’ve just found a new signal, potentially a great one.
There's just one problem. You don't have the capacity to backtest it quickly enough to trade on it when the markets open the next morning, and you risk losing a golden opportunity for alpha.
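To see why backtesting capacity maps so directly onto compute, consider this minimal, illustrative Python sketch; the momentum rule, synthetic prices, and parameter names are all invented for the example. Each parameter set replays the same history independently, so the number of scenarios you can test overnight scales with the cores, or cloud nodes, you can throw at it.

```python
# Illustrative only: sweep a toy strategy's lookback parameter in parallel.
from multiprocessing import Pool
import random

random.seed(42)                               # keep the synthetic data stable
PRICES = [100 + random.gauss(0, 1) for _ in range(10_000)]  # stand-in history

def backtest(lookback):
    """Toy momentum rule: hold whenever price is above its trailing mean."""
    pnl = 0.0
    for t in range(lookback, len(PRICES) - 1):
        trailing_mean = sum(PRICES[t - lookback:t]) / lookback
        if PRICES[t] > trailing_mean:
            pnl += PRICES[t + 1] - PRICES[t]  # capture the next move
    return lookback, pnl

if __name__ == "__main__":
    with Pool() as pool:                      # one worker per available core
        results = pool.map(backtest, range(5, 250, 5))
    best = max(results, key=lambda r: r[1])
    print(f"best lookback={best[0]}, pnl={best[1]:.2f}")
```

Because each run is independent, the same map call can be pointed at a cloud batch service instead of a local pool without touching the strategy code, which is exactly where elastic compute pays off.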
What was once a small regional show has grown to be so much more. On Monday of this week, the AWS Summit took place at the Javits Center in New York City. With somewhere between 7,000 and 9,000 attendees, it felt more like a national event than a Summit, drawing attendees not only from the Northeast Corridor but from around the globe.
Next-generation sequencing (NGS) workloads generate and process more data than ever before. It isn't just storage infrastructure that needs to expand, but also the computing resources that run pipelines against this data, and those costs can quickly exceed most research organizations' budgets. At that point, NGS compute demand outstrips the infrastructure's capacity.
Cloud computing offers a compelling answer, with pay-as-you-go pricing and access to virtually limitless compute capacity. Still, there are a few concerns about using cloud compute for genomics workloads, including moving the entire data set into the cloud and rewriting applications to access data through object storage protocols instead of the existing NAS.
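To make that second concern concrete, the hedged sketch below contrasts the POSIX file access most genomics pipelines assume with the equivalent object-storage call via the AWS boto3 SDK; the paths, bucket, and key are hypothetical.

```python
# Illustration of the rewrite gap: file semantics vs. object-store API calls.
import boto3

def read_sample_nas(path):
    # Existing pipelines typically assume a POSIX path on NFS-mounted NAS.
    with open(path, "rb") as f:
        return f.read()

def read_sample_s3(bucket, key):
    # Moving data to object storage means swapping file semantics for API calls.
    s3 = boto3.client("s3")
    return s3.get_object(Bucket=bucket, Key=key)["Body"].read()

# Example usage (paths and bucket/key names are hypothetical):
# data = read_sample_nas("/mnt/ngs/runs/sample01.fastq")
# data = read_sample_s3("genomics-data", "runs/sample01.fastq")
```

Multiply that small change across every tool in a pipeline and the scale of the rewrite concern becomes clear.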