Ask Avere: Improving WAN Performance

Posted by Gretchen Weaver on Tue, Feb 07, 2017 @ 03:00 PM

In many industries, network bandwidth limitations are a big problem. When the pipe isn't big enough for the traffic, users aren't working at expected efficiency. The problem isn't unique to post-production studios, but these companies often have remote artists pounding centralized storage resources all at once to meet tight deadlines.

What is WAN Latency?

Basically, latency is the amount of time it takes for data to travel across a network between client and server. WAN latency applies to wide-area-network connections, the networking used to connect users over a large geographical area. It's typically measured as round-trip time (RTT).
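
To see why RTT, not just raw bandwidth, limits chatty storage protocols, here's a quick back-of-the-envelope sketch in Python. This isn't Avere tooling, and the hostname is a placeholder; it simply estimates RTT from a TCP connect and shows the ceiling that RTT puts on sequential request/response operations:

# Rough RTT estimate via TCP connect time; the hostname is a placeholder.
import socket
import time

def estimate_rtt(host, port=443, samples=5):
    """Return the median connect time in seconds as a rough RTT proxy."""
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            times.append(time.perf_counter() - start)
    return sorted(times)[len(times) // 2]

if __name__ == "__main__":
    rtt = estimate_rtt("storage.example.com")  # placeholder remote storage address
    print(f"Approximate RTT: {rtt * 1000:.1f} ms")
    # A client issuing one synchronous request at a time completes at most
    # 1/RTT operations per second, no matter how wide the pipe is.
    print(f"Sequential ops/sec ceiling: {1 / rtt:.0f}")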

However, when using Avere, you remove traffic that would otherwise travel back and forth over the network connection, making better use of smaller-bandwidth links.

So when we got a question from Julian about how to use Avere FXTs to solve this WAN performance problem without investing in more bandwidth, we turned to our media team for an explanation. Configuring Avere FXT Edge filers to reduce latency keeps users productive no matter where they physically sit in relation to data storage, including in the cloud.

Avere can help reduce WAN latency and optimize your wide-area-network connection by:

  • Reducing network traffic by 50:1
  • Creating a high-performance WAN caching layer between remote locations and your data center or the cloud
  • Automatically placing active data in the fastest storage, keeping it responsive to users
  • Supporting centralized storage initiatives by reducing the need to locate large pools of storage closer to remote locations

Clients use Avere not only to reduce network latency to remote locations, but also to boost connectivity to cloud service provider locations whose distance might otherwise make them unusable.

How Does Avere Overcome Network Bandwidth Limits?

Transcript:

Hi, I'm Danny Seitz. I'm on the Media and Entertainment Team here at Avere Systems. I was posed a question, "Can Avere overcome network bandwidth limitations?"

We are vastly improving performance by caching the working dataset close to compute, and offloading operations from your WAN. So, you don't need as wide of a pipe as you might think.

In this example, you have your local NAS and a remote render farm or compute farm. Basically what's happening is that files are requested by the render farm from the FXT cluster. We'll look for those files in the cache. If they're not present, then we reach across the WAN to get them. So the initial read would feel pretty much like it would without the FXT. But subsequent reads are offloaded from the WAN to this low-latency link between compute and the FXT. It feels very much like local data at that point.
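
To make that read path concrete, here is a minimal sketch of the cache-then-WAN behavior Danny describes. It's an illustration only, not Avere FXT code; fetch_over_wan is a stand-in for the high-latency read back to the core filer:

# Serve reads from the local cache when possible; only cross the WAN on a miss.
class EdgeCache:
    def __init__(self, fetch_over_wan):
        self._fetch = fetch_over_wan   # slow, high-latency WAN path
        self._cache = {}               # fast, local working set

    def read(self, path):
        if path not in self._cache:            # first read: pay the WAN round trip
            self._cache[path] = self._fetch(path)
        return self._cache[path]               # repeat reads: local-speed access

The first read still pays the WAN round trip, which matches the point above that the initial read feels the same as it would without the FXT.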

Also, writes are acknowledged immediately and aggregated so that we perform fewer write operations across the WAN as well.
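
The write side can be sketched the same way, again as a simplification rather than how the FXT actually handles durability: acknowledge writes locally, buffer them, and push them across the WAN in larger, less frequent batches. write_over_wan and the flush threshold are illustrative placeholders:

# Acknowledge writes immediately, then flush them across the WAN in batches.
class WriteBuffer:
    def __init__(self, write_over_wan, flush_bytes=8 * 1024 * 1024):
        self._write = write_over_wan
        self._flush_bytes = flush_bytes
        self._pending = []   # (path, data) tuples awaiting flush
        self._size = 0

    def write(self, path, data):
        self._pending.append((path, data))   # acknowledged to the client now
        self._size += len(data)
        if self._size >= self._flush_bytes:
            self.flush()                     # fewer, larger WAN operations

    def flush(self):
        for path, data in self._pending:
            self._write(path, data)
        self._pending.clear()
        self._size = 0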

Photo Credit: iStockphoto.com

--

Read how Iceland's RVX reduced WAN latency between its offices and colo in this on-demand webinar: Transparent Footprints: Optimizing HPC Workloads with Colocated Infrastructure.


Topics: Enterprise Storage, Data Center Management, Technology Community