Thursday, September 15, 2011

Paper: It's Time for Low Latency - Inventing the 1 Microsecond Datacenter

In It's Time for Low Latency, Stephen Rumble et al. explore the idea that it's time to rearchitect our stack for the modern era of low-latency datacenters instead of high-latency WANs. The implications for program architectures will be revolutionary. Luiz André Barroso, Distinguished Engineer at Google, sees ultra-low latency as a way to make computing resources as fungible as possible, that is, interchangeable and location independent, effectively turning a datacenter into a single computer.

 Abstract from the paper:

The operating systems community has ignored network latency for too long. In the past, speed-of-light delays in wide area networks and unoptimized network hardware have made sub-100µs round-trip times impossible. However, in the next few years datacenters will be deployed with low-latency Ethernet. Without the burden of propagation delays in the datacenter campus and network delays in the Ethernet devices, it will be up to us to finish the job and see this benefit through to applications. We argue that OS researchers must lead the charge in rearchitecting systems to push the boundaries of low latency datacenter communication. 5-10µs remote procedure calls are possible in the short term – two orders of magnitude better than today. In the long term, moving the network interface on to the CPU core will make 1µs times feasible.
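
To make the paper's numbers concrete, here is a minimal sketch of how one might measure application-to-application round-trip time with a blocking UDP ping-pong. The peer address, port, and the echo server on the other end are all assumptions for illustration; on today's kernel stacks such a test typically reports times well above the paper's 5-10µs target.

/* udp_rtt.c - minimal UDP ping-pong client to measure application-level RTT.
 * Assumes a hypothetical server at 10.0.0.2:9000 that echoes each datagram.
 * Build: cc -O2 -o udp_rtt udp_rtt.c
 */
#include <arpa/inet.h>
#include <stdio.h>
#include <sys/socket.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in peer = { .sin_family = AF_INET,
                                .sin_port   = htons(9000) };  /* assumed port */
    inet_pton(AF_INET, "10.0.0.2", &peer.sin_addr);           /* assumed address */

    int s = socket(AF_INET, SOCK_DGRAM, 0);
    connect(s, (struct sockaddr *)&peer, sizeof peer);

    char buf[64] = "ping";
    struct timespec t0, t1;
    const int iters = 100000;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < iters; i++) {
        send(s, buf, sizeof buf, 0);          /* request */
        recv(s, buf, sizeof buf, 0);          /* wait for the echo */
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("mean RTT: %.1f us\n", ns / iters / 1000.0);
    close(s);
    return 0;
}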


Reader Comments (4)

This is an interesting paper, but as a HotOS submission it's primarily a position paper and still a work in progress.

Areas for improvement:

There is no mention of interrupt moderation or direct cache access (DCA). The authors argue for moving the NIC closer to the processor, but what would the comparison with DCA look like in terms of latency? Another comparison point would be AMD's HTX interface, for which network (RDMA) adapters exist.

There is no discussion of network processing offload, i.e., checksum offload. Are checksums even useful in this environment?
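
For context on what that offload saves, this is the per-packet software cost in question: a straightforward, unoptimized RFC 1071 Internet checksum, the computation that checksum offload moves from the CPU onto the NIC.

/* Internet checksum (RFC 1071): the per-packet work that
 * checksum offload moves from the CPU onto the NIC. */
#include <stddef.h>
#include <stdint.h>

uint16_t internet_checksum(const void *data, size_t len)
{
    const uint16_t *p = data;
    uint32_t sum = 0;

    while (len > 1) {               /* sum 16-bit words */
        sum += *p++;
        len -= 2;
    }
    if (len == 1)                   /* pad a trailing odd byte */
        sum += *(const uint8_t *)p;

    while (sum >> 16)               /* fold carries back into the low 16 bits */
        sum = (sum & 0xffff) + (sum >> 16);

    return (uint16_t)~sum;
}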

There is no discussion of the significance of pipelining. Server workloads can frequently pipeline large numbers of requests, and where pipelining is extensive, lower latency will not increase throughput.
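
A back-of-the-envelope illustration of that point, with made-up numbers: once N requests are in flight, throughput is roughly N/RTT until some other resource saturates, so a deep pipeline already hides most of the round-trip latency.

/* Illustration: with N requests in flight, throughput ~ N / RTT
 * (until the server or the wire saturates). All numbers are invented. */
#include <stdio.h>

int main(void)
{
    double rtts_us[] = { 100.0, 10.0, 1.0 };  /* today, short term, long term */
    int depths[] = { 1, 16, 128 };            /* requests in flight */

    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
            printf("RTT %6.1f us, depth %3d -> %10.0f req/s\n",
                   rtts_us[i], depths[j],
                   depths[j] / (rtts_us[i] * 1e-6));
    return 0;
}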

The comparison points for technology improvements focus on CPU speed, memory size, disk capacity, and network bandwidth. The important areas missed are cache and memory performance and bus latency/bandwidth improvements.

Even with lower-latency communication, the receiver will backlog if the service time is not similarly small. Where are the useful RPC functions that are this trivial?
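
The backlog point as toy queueing arithmetic, with invented rates: given arrival rate lambda and service time S, utilization is rho = lambda * S, and once rho reaches 1 the receiver falls behind no matter how low the network latency is.

/* Toy queueing check: utilization rho = lambda * S. If rho >= 1 the
 * receiver backlogs regardless of network latency. Numbers are made up. */
#include <stdio.h>

int main(void)
{
    double lambda = 200000.0;             /* hypothetical: 200k requests/s */
    double s_us[] = { 1.0, 5.0, 20.0 };   /* hypothetical service times */

    for (int i = 0; i < 3; i++) {
        double rho = lambda * s_us[i] * 1e-6;
        printf("service %5.1f us -> utilization %.2f%s\n",
               s_us[i], rho, rho >= 1.0 ? "  (backlog grows)" : "");
    }
    return 0;
}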

September 15, 2011 | Unregistered CommenterA Reader

I think this is a great example of the future being unevenly distributed. Big datacenter companies probably have good reason to think about this. You and I, meanwhile, are struggling to use the full power of the datacenters or just racks that we *have*. Scalable software has come a long way but not to the point that it's the no-brainer choice for Joe Web Developer.

A lot of apps can live with today's latency. Non-interactive batch ops are often latency-proof, even data-placement-sensitive stuff like sorting. Parallel requests can reduce the impact of latency. And half a millisecond for an intra-DC round trip (less within a rack) can be zippy enough when you merely want to send something to the user in 100ms.
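
Randall's budget argument, sketched as arithmetic with illustrative numbers: counting how many dependent (sequential) round trips fit into a 100 ms user-facing response budget at various intra-DC RTTs.

/* How many *dependent* round trips fit in a 100 ms response budget?
 * At 500 us per intra-DC hop you already get ~200, which is why
 * today's latency is "zippy enough" for many apps. */
#include <stdio.h>

int main(void)
{
    const double budget_us = 100000.0;         /* 100 ms response budget */
    double rtt_us[] = { 500.0, 100.0, 10.0 };  /* intra-DC RTT scenarios */

    for (int i = 0; i < 3; i++)
        printf("RTT %6.1f us -> %6.0f sequential round trips per response\n",
               rtt_us[i], budget_us / rtt_us[i]);
    return 0;
}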

Not to say the low-latency stuff discussed isn't exciting. Just that it's the far future, not the present, for many of us.

September 17, 2011 | Unregistered CommenterRandall

Welcome to 2007! Well-designed interconnects for high-performance computer systems have been at 1 microsecond latency, application call to remote application return, for 4-5 years now.
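
For reference, the conventional way to measure that "application call to remote application return" figure on an HPC interconnect is an MPI ping-pong; a minimal version (iteration count arbitrary):

/* Minimal MPI ping-pong latency benchmark.
 * Run with: mpirun -np 2 ./pingpong */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int iters = 100000;
    char byte = 0;
    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();

    for (int i = 0; i < iters; i++) {
        if (rank == 0) {                  /* ping, then wait for the pong */
            MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {           /* echo it back */
            MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    if (rank == 0)
        printf("one-way latency: %.2f us\n",
               (MPI_Wtime() - t0) / iters / 2.0 * 1e6);
    MPI_Finalize();
    return 0;
}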

October 15, 2011 | Unregistered CommenterLarry Stewart

Why not also link to some people who are already doing what is mentioned in the paper?

"The operating system can not be in the loop for normal message exchanges: data must pass directly between the application and the NIC."

Some people already do that:

http://info.iet.unipi.it/~luigi/netmap/
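
For the curious, a rough sketch of what a netmap receive loop looks like using its nm_open/nm_nextpkt convenience wrappers: packets move between the NIC rings and the application without per-packet system calls or in-kernel copies. The interface name is assumed and the API details vary across netmap versions, so treat this as illustrative rather than definitive.

/* Sketch of a netmap receive loop: the NIC rings are mapped into the
 * application, so no per-packet syscall or copy through the kernel stack.
 * API details vary by netmap version; illustrative only. */
#define NETMAP_WITH_LIBS
#include <net/netmap_user.h>
#include <poll.h>
#include <stdio.h>

int main(void)
{
    /* "netmap:eth0" is an assumed interface name */
    struct nm_desc *d = nm_open("netmap:eth0", NULL, 0, NULL);
    if (d == NULL)
        return 1;

    struct pollfd pfd = { .fd = d->fd, .events = POLLIN };
    struct nm_pkthdr h;

    for (int n = 0; n < 100; ) {           /* grab 100 frames, then stop */
        poll(&pfd, 1, -1);                 /* block until frames arrive */
        const u_char *buf;
        while ((buf = nm_nextpkt(d, &h)) != NULL) {  /* walk the mapped rings */
            printf("frame %d: %u bytes\n", n, h.len);
            n++;
        }
    }
    nm_close(d);
    return 0;
}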

November 13, 2011 | Unregistered CommenterLennie
