Latency is Everywhere and it Costs You Sales - How to Crush it
Update 8: The Cost of Latency by James Hamilton. James summarizes some latency info from Steve Souders, Greg Linden, and Marissa Mayer. Speed [is] an undervalued and under-discussed asset on the web.
Update 7: How do you know when you need more memcache servers?. Dathan Pattishall talks about using memcache not to scale, but to reduce latency and reduce I/O spikes, and how to use stats to know when more servers are needed.
Update 6: Stock Traders Find Speed Pays, in Milliseconds. Goldman Sachs is making record profits off a 500 millisecond trading advantage. Yes, latency matters. As an interesting aside, Libet found 500 msecs is about the time it takes the brain to weave together an experience of consciousness from all our sensor inputs.
Update 5: Shopzilla's Site Redo - You Get What You Measure. At the Velocity conference Phil Dixon, from Shopzilla, presented data showing a 5 second speed up resulted in a 25% increase in page views, a 10% increase in revenue, a 50% reduction in hardware, and a 120% increase in traffic from Google. Built a new service-oriented, Java-based stack. Keep it simple. Quality is a design decision. Obsessively measure everything. Used agile and built the site one page at a time to get feedback. Use proxies to incrementally expose users to new pages for A/B testing. Oracle Coherence Grid for caching. 1.5 second page load SLA. 650ms server-side SLA. Make 30 parallel calls on the server. 100 million requests a day. SLAs measure the 95th percentile; averages are not useful. Little things make a big difference.
Update 4: Slow Pages Lose Users. At the Velocity Conference Jake Brutlag (Google Search) and Eric Schurman (Microsoft Bing) presented study data showing delays under half a second impact business metrics and delay costs increase over time and persist. Page weight not key. Progressive rendering helps a lot.
Update 3: Nati Shalom's Take on this article. Lots of good stuff on designing architectures for latency minimization.
Update 2: Why Latency Lags Bandwidth, and What it Means to Computing by David Patterson. Reasons: Moore's Law helps BW more than latency; Distance limits latency; Bandwidth easier to sell; Latency helps BW, but not vice versa; Bandwidth hurts latency; OS overhead hurts latency more than BW. Three ways to cope: Caching, Replication, Prediction. We haven't talked about prediction. Games use prediction, i.e., project where a character will go, but it's not a strategy much used in websites.
Update: Efficient data transfer through zero copy. Copying data kills. This excellent article explains the path data takes through the OS and how to reduce the number of copies to the big zero.
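The zero-copy idea can be sketched in a few lines of Python. This is a toy example with made-up data: `socket.sendfile()` delegates to the kernel's sendfile(2) where available, so file bytes move kernel-side instead of being copied through user space (Python transparently falls back to a plain send loop on platforms without it).

```python
# Toy zero-copy transfer: serve a file over a socket without copying its
# bytes through user space. The file contents here are invented.
import os
import socket
import tempfile

# Create a small file to serve.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"hello, zero copy" * 100)
    path = f.name

sender, receiver = socket.socketpair()  # stand-in for a real client connection
with open(path, "rb") as f:
    sent = sender.sendfile(f)           # kernel-to-kernel copy when supported
sender.close()

data = b""
while chunk := receiver.recv(65536):
    data += chunk
receiver.close()
os.unlink(path)

print(sent, len(data))  # -> 1600 1600
```

The same call works on a real TCP connection; the win grows with file size, since every avoided copy is CPU time and memory bandwidth you get back.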
Latency matters. Amazon found every 100ms of latency cost them 1% in sales. Google found an extra .5 seconds in search page generation time dropped traffic by 20%. A broker could lose $4 million in revenues per millisecond if their electronic trading platform is 5 milliseconds behind the competition.
The Amazon results were reported by Greg Linden in his presentation Make Data Useful. In one of Greg's slides Google VP Marissa Mayer, in reference to the Google results, is quoted as saying "Users really respond to speed." And everyone wants responsive users. Ka-ching! People hate waiting and they're repulsed by seemingly small delays.
The less interactive a site becomes the more likely users are to click away and do something else. Latency is the mother of interactivity. Though it's possible through various UI techniques to make pages subjectively feel faster, slow sites generally lead to higher customer defection rates, which lead to lower conversion rates, which result in lower sales. Yet for some reason latency isn't a topic we talk about much for web apps. We talk a lot about building high-capacity sites, but very little about how to build low-latency sites. We apparently do so at the expense of our immortal bottom line.
I wondered if latency went to zero if sales would be infinite? But alas, as Dan Pritchett says, Latency Exists, Cope!. So we can't hide the "latency problem" by appointing a Latency Czar to conduct a nice little war on latency. Instead, we need to learn how to minimize and manage latency. It turns out a lot of problems are better solved that way.
How do we recover that which is most meaningful--sales--and build low-latency systems?
I'm excited that the topic of latency came up. There are a few good presentations on this topic I've been dying for a chance to reference. And latency is one of those quantifiable qualities that takes real engineering to create. A lot of what we do is bolt together other people's toys. Building high-capacity low-latency systems takes mad skills. Which is fun. And which may also account for why we see latency as a core design skill in real-time and market trading type systems, but not web systems. We certainly want our nuclear power plant plutonium fuel rod lowering hardware to respond to interrupts with sufficient alacrity. While less serious, trading companies are always in a technological arms race to create lower latency systems. He with the fastest system creates a sort of private wire for receiving and acting on information faster than everyone else. Knowing who has the bestest price the firstest is a huge advantage. But if our little shopping cart takes an extra 500 milliseconds to display, the world won't end. Or will it?
Latency Defined
My unsophisticated definition of latency is that it is the elapsed time between A and B where A and B are something you care about. Low-latency and high-latency are relative terms. The latency requirements for a femtosecond laser are far different than for mail delivery via the pony express, yet both systems can be characterized by latency. A system has low latency if its latency is low enough to meet requirements; otherwise it's a high-latency system.
Latency Explained
The best explanation of latency I've ever read is still It's the Latency, Stupid by admitted network wizard Stuart Cheshire. A wonderful and detailed rant explaining latency as it relates to network communication, but the ideas are applicable everywhere.
Stuart's major point: If you have a network link with low bandwidth then it's an easy matter of putting several in parallel to make a combined link with higher bandwidth, but if you have a network link with bad latency then no amount of money can turn any number of them into a link with good latency.
I like the parallel with sharding in this observation. We put shards in parallel to increase capacity, but request latency through the system remains the same. So if we want to increase interactivity we have to address every component in the system that introduces latency and minimize or remove its contribution. There's no "easy" scale-out strategy for fixing latency problems.
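The sharding observation can be put into a toy model (all numbers here are hypothetical): adding shards multiplies capacity, but any single request still travels through exactly one shard, so its latency doesn't improve.

```python
# Toy model: sharding scales throughput linearly, but leaves per-request
# latency untouched. The rates and latencies are invented for illustration.
PER_SHARD_RPS = 100           # requests/sec one shard can absorb
PER_REQUEST_LATENCY_MS = 50   # time one shard takes to serve one request

def route(key, num_shards):
    # Hash routing: each key belongs to exactly one shard.
    return hash(key) % num_shards

def capacity(num_shards):
    # Total throughput scales linearly with shard count...
    return num_shards * PER_SHARD_RPS

def request_latency(num_shards):
    # ...but a request only ever touches its one shard, so latency is flat.
    return PER_REQUEST_LATENCY_MS

for shards in (1, 4, 16):
    print(f"{shards:2d} shards: {capacity(shards)} rps, "
          f"{request_latency(shards)} ms per request")
```

To make the 50 ms number smaller you have to attack the path through a single shard, which is exactly the "address every component" point above.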
Sources of Latency
My parents told me latency was brought by Santa Claus in the dead of night, but that turns out not to be true! So where does latency come from?
Draw out the list of every hop a client request takes and the potential number of latency gremlins is quite impressive.
The Downsides of Latency
Lower sales may be the terminal condition of latency problems, but the differential diagnosis is made of many and varied ailments. As latency increases, work stays queued at all levels of the system, which puts stress everywhere. It's like dementia, the system forgets how to do anything. Some of the problems you may see are: Queues grow; Memory grows; Timeouts cascade; Paging increases; Retries cascade; State machines reset; Locks are held longer; Threads block; Deadlock occurs; Predictability declines; Throughput declines; Messages drop; Quality plummets. For a better list take a look at The Many Flavors of System Latency... along the Critical Path of Peak Performance by Todd Jobson. A great analysis of the subject.
Managing Latency
The general algorithm for managing latency is hardly a revelation, yet it's actually rare for applications to view their work flow in terms of latency. This is part of the Log Everything All the Time mantra. Time stamp every part of your system. Look at mean latency, standard deviation, and outliers. See if you can't make the mean a little nicer, pinch in that standard deviation, and chop off some of those spikes. With latency, variability is the name of the game, but that doesn't mean variability can't be better controlled and managed. Target your latency slimming efforts where they matter most and you'll get the most bang for your buck.
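The time-stamp-everything advice can be sketched in a few lines: collect per-request latencies, then report the mean, standard deviation, and a high percentile, since averages alone hide the spikes. The sample numbers below are made up.

```python
# Summarize a set of recorded request latencies. Averages look fine;
# the 95th percentile tells the real story. Sample data is invented.
import math
import statistics

samples_ms = [12, 15, 14, 13, 200, 16, 12, 14, 13, 15]  # one ugly outlier

def percentile(samples, p):
    # Simple nearest-rank percentile: good enough for a dashboard.
    ranked = sorted(samples)
    return ranked[math.ceil(p / 100 * len(ranked)) - 1]

mean = statistics.mean(samples_ms)
stdev = statistics.stdev(samples_ms)
p95 = percentile(samples_ms, 95)

print(f"mean={mean:.1f}ms stdev={stdev:.1f}ms p95={p95}ms")
```

Here the mean is around 32 ms while the 95th percentile is 200 ms, which is exactly why the Shopzilla SLAs above are measured at the 95th percentile rather than the average.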
Next we will talk about various ideas for what you can do about latency once you've found it.
Dan Pritchett's Lessons for Managing Latency
Dan Pritchett is one of the few who has openly written on architecting for latency. Here are some of Dan's suggestions for structuring systems to manage latency. Clearly each of these principles is a major topic all on its own. For more details please read Dan Pritchett's excellent papers on managing latency: The Challenges of Latency, Architecting for Latency, and Latency Exists, Cope!
GigaSpaces Lessons for Lowering Latency
GigaSpaces is an in-memory grid vendor and as such is on the vanguard of the RAM is the New Disk style of application building. In this approach disk is pushed aside in favor of keeping all data in RAM. Following this line of logic GigaSpaces came up with these low latency architecture principles. The thinking is that the primary source of latency in a system centers around accessing disk. So skip the disk and keep everything in memory. Very logical. As memory is orders of magnitude faster than disk it's hard to argue that latency in such a system wouldn't plummet.
Latency is minimized because objects are kept in memory and work requests are routed directly to the machine containing the already in-memory object. The object implements the request behavior on the same machine. There's no pulling data from disk. There isn't even the hit of accessing a cache server. And since all other object requests are also served from in-memory objects we've minimized the Service Dependency Latency problem as well.
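A minimal sketch of this data-grid idea, with a hypothetical API (not GigaSpaces' actual interface): partition objects across in-memory stores and route each request to the partition that already holds the object, so the behavior runs next to the data with no disk or cache-server hop.

```python
# Hypothetical in-memory grid: hash-routed partitions, with request
# behavior executed in place on the partition that owns the object.
class Partition:
    def __init__(self):
        self.objects = {}  # everything lives in RAM

    def execute(self, key, fn):
        # Run the behavior right where the object lives.
        return fn(self.objects.setdefault(key, {}))

class Grid:
    def __init__(self, num_partitions=4):
        self.partitions = [Partition() for _ in range(num_partitions)]

    def route(self, key):
        # Hash routing picks the one partition that owns this object.
        return self.partitions[hash(key) % len(self.partitions)]

    def execute(self, key, fn):
        return self.route(key).execute(key, fn)

grid = Grid()

def add_item(cart):
    cart.setdefault("items", []).append("book")
    return len(cart["items"])

print(grid.execute("cart:42", add_item))  # -> 1, no disk touched anywhere
```

In a real grid the partitions are separate processes on separate machines and the `execute` call is a network message, but the shape is the same: the request goes to the data rather than the data coming to the request.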
GigaSpaces isn't the only player in this market. You might want to also take a look at: ScaleOut Software, GridGain, Terracotta, GemStone, and Coherence. We'll have more on some of these products later.
Miscellaneous Latency Reduction Ideas
Why don't most users experience high data rates? pinpoints poor network design as one major source of latency: On a single high performance network today, measured latencies are typically ~1.5x - 3x that expected from the speed of light in fiber. This is mostly due to taking longer than line-of-sight paths. Between different networks (via NAPs) latency is usually much worse. Some extra distance is required, based on the availability of fiber routes and interconnects, but much more attention should be given to minimizing latency as we design our network topologies and routing.
Application Server Architecture Matters Again
With the general move over the past few years to a standard shared nothing two-tierish architecture, discussion of application server architectures has become a neglected topic, mainly because there weren't application servers anymore. Web requests came in, data was retrieved from the database, and results were calculated and returned to the user. No application server. The web server became the application server. This was quite a change from previous architectures, which were more application server oriented. Though they weren't called application servers, they were called daemons or even just servers (as in client-server).

Let's say we buy into RAM is the New Disk. This means we'll have many persistent processes filled with many objects spread over many boxes. A stream of requests is directed at each process and those requests must be executed in each process. How should those processes be designed?
Sure, having objects in memory reduces latency, but it's very easy through poor programming practice to lose all of that advantage. And then some. Fortunately we have a ton of literature on how to structure servers. I have a more thorough discussion here in Architecture Discussion. Also take a look at SEDA, an architecture for highly concurrent servers and ACE, an OO network programming toolkit in C++.
A few general suggestions:
Colocate
Locating applications together reduces latency by reducing data hops. The number and location of network hops a message has to travel through is a big part of the end-to-end latency of a system. For example, a round trip message from New York to the London Stock Exchange takes 84 milliseconds, from Frankfurt it takes 18 milliseconds, and from Tokyo it takes 208 milliseconds. If you want to minimize latency then the clear strategy is to colocate your service in the London Stock Exchange. Distance is minimized and you can probably use a faster network too.
Virtualization technology makes it easier than ever to compose separate systems together. Add a cloud infrastructure to that and it becomes almost easy to dramatically lower latencies by colocating applications.
Minimize the Number of Hops
Latency increases with each hop in a system. The fewer hops the less latency. So put those hops on a diet. Some hop reducing ideas are:

Build Your Own Field-Programmable Gate Array (FPGA)
This one may seem a little off the wall, but creating your own custom FPGA may be a killer option for some problems. An FPGA is a semiconductor device containing programmable logic. Typical computer programs are a series of instructions that are loaded and interpreted by a general purpose microprocessor, like the one in your desktop computer. Using an FPGA it's possible to bypass the overhead of a general purpose microprocessor and code your application directly into silicon. For some classes of problems the performance increases can be dramatic.

FPGAs are programmed with your task specific algorithm. Usually something compute intensive like medical imaging, modeling bond yields, cryptography, or matching patterns for deep packet inspection. I/O heavy operations probably won't benefit from FPGAs. Sure, the same algorithm could be run on a standard platform, but the advantage FPGAs have is that even though they may run at relatively low clock rates, FPGAs can perform many calculations in parallel. So perhaps orders-of-magnitude more work is being performed each clock cycle. Also, FPGAs often use content addressable memory, which provides a significant speedup for indexing, searching, and matching operations. We may also see a move to FPGAs because they use less power. Stay lean and green.
In embedded projects FPGAs and ASICs (application-specific integrated circuits) are avoided like the plague. If you can get by with an off-the-shelf microprocessor (Intel, AMD, ARM, PPC, etc.) you do it. It's a time-to-market issue. Standard microprocessors are, well, standard, so that makes them easy to work with. Operating systems will already have board support packages for standard processors, which makes building a system faster and cheaper. Once custom hardware is involved it becomes a lot of work to support the new chip in hardware and software. Creating a software-only solution is much more flexible in a world where constant change rules. Hardware resists change. So does software, but since people think it doesn't we have to act like software is infinitely malleable.
Sometimes hardware is the way to go. If you are building a NIC that has to process packets at line speed the chances are an off-the-shelf processor won't be cost effective and may not be fast enough. Your typical high end graphics card, for example, is a marvel of engineering. Graphics cards are so powerful these days distributed computation projects like Folding@home get a substantial amount of their processing power from graphics cards. Traditional CPUs are creamed by NVIDIA GeForce GPUs which perform protein-folding simulations up to 140 times faster. The downside is GPUs require very specialized programming, so it's easier to write for a standard CPU and be done with it.
That same protein folding power can be available to your own applications. ACTIV Financial, for example, uses a custom FPGA for low latency processing of high speed financial data flows. ACTIV's competitors use a traditional commodity box approach where financial data is processed by a large number of commodity servers. Let's say an application takes 12 servers. Using an FPGA the number of servers can be collapsed down to one because more instructions are performed simultaneously, which means fewer machines are needed. Using the FPGA architecture they process 20 times more messages than they did before and have reduced latency from one millisecond down to less than 100 microseconds.
Part of the performance improvement comes from the high speed main memory and network IO access FPGAs enjoy with the processor. Both Intel and AMD make it relatively easy to connect FPGAs to their chips. Using these mechanisms data moves back and forth between your processing engine and the main processor with minimal latency. In a standard architecture all this communication and manipulation would happen over a network.
FPGAs are programmed using hardware description languages like Verilog and VHDL. You can't get away from the hardware when programming FPGAs, which is a major bridge to cross for us software types. Many moons ago I took a Verilog FPGA programming class. It's not easy, nothing is ever easy, but it is possible. And for the right problem it might even be worth it.
Reader Comments (21)
Great and at long last a discussion of latency within the context of web operations.
I've been trying to write about it based on my conversations with people in the web ops world. One notion I noticed on a first pass - I'll have to re-read the blog to confirm - is that a "watch everything" approach is recommended.
I'm seeing that, in the end, the only latency that initially matters is what real users experience. From there, a heuristic can help to identify where to drill-down to address or mitigate.
Conversely, with the more "bottom-up" approach I've seen, operations gets flooded with a ton of alerts and doesn't know which ones have any real latency impact (server and network are firing off intermittent alerts, but what is the real-user latency?). Or... I've seen scenarios where everything shows "green" at the lower systems level, but you're still getting end-user latency... slowness by a thousand cuts which stays under the radar.
Looking forward to hearing more...my glimmer of insights into this (not nearly as detailed but more in small dribbles as I speak with people)....http://www.realuserwebops.com
Excellent post!
This is one of the best articles I've ever seen assembled.
I work on high performance, cloud-based Drupal sites, and own a business that does this for customers. We've also developed CDN brokering technologies, to be able to distribute load, and worked on performance strategies for the full stack. There are so many layers that have to be thought about, that it's somewhat difficult to build these systems. You end up finding a solution that works, and then driving multiple customers towards a unified solution so they can share the cost of maintaining them, and improving them as you learn.
When dealing with latency management, it's hard to deliver a "solution" as much as a "process", which is pretty common for complex engineering systems.
Can't thank you enough for posting the article. I'd love to talk to you more about what you're doing.
Best,
Jonathan Lambert
An excellent, thorough article. Thank you very much.
Latency is very important in video games, here's an article on how it's measured:
http://cowboyprogramming.com/2008/05/30/measuring-responsiveness-in-video-games/
I had a similar requirement: we are building a high-volume reservation system for hotels and we have one business process in there (luckily read-only) which is enormously complex. Our goal was also to minimize latency (measured as milliseconds between entry into the infrastructure and exit from the infrastructure) at an acceptable cost point.
As we developed this project as a complete greenfield, we could do whatever we believed was best. Based on our analysis we recognized that in a classical app server/SQL architecture, most time was actually spent within the DB between the business logic and table access (yes, pretty much everything was already done in stored procedures); however, even with the best optimizations we could only achieve 300 to 600 milliseconds.
Given the knowledge that most time is spent within the stored procedure, and there virtually all in data access, we designed a solution, which would optimize that code path.
The result can be seen here: http://hetras.com/2008/07/29/a-great-week-for-hetras-quote-is-going-into-real-implementation-phase/
Based on our first numbers coming in, we achieve a factor of 100 to 500 improvement in response time (!). Example at hand: taking a user's username/password, checking whether it authenticates, and checking which permissions this user has (all data stored in a normalized table): MS SQL Server: ~5 milliseconds, our architecture: 7 to 10 micro(!)seconds.
Long story short, there are a lot of ways to improve latency, and even for extreme requirements (as we have), there are solutions. A lot of work, yes, but possible.
Funnily, we didn't employ any of your ideas to optimize latency yet (though of course some we could apply in addition) - I basically want to show that there are even more ways to achieve an outstanding result (end user 'latency' will go from around 1+ second to around 200 milliseconds. That's what I call a nice win.)
Cheers, Michael
@Jonathan I'm glad you found the article useful.
@Anonymous Very cool and clever article on measuring video game latency. Thanks.
@Michael Excellent job on getting your latency down to 20 msecs. But when I read your article I did notice a little partitioning, replication, loose coupling, and caching in your Quote servers :-)
Excellent article.
There is one other form of latency which is touched, but not expanded, upon. That is how the application architecture affects both development and business latency. If the application architecture is tightly coupled, it makes it very difficult to add new features, resolve defects, etc.
This increases development time, and also increases the amount of time it takes for the business to respond to customer requests and competitive threats. And this is what really costs sales.
> but not expanded, upon.
There's a lot more to say on each topic and and application architectures is certainly one of them. I think Dan Pritchett does a good job expanding on large scale application architectures in his articles on latency. In my Architecture Discussion link there's a lot of detail on different process architectures. And I'm open to suggestions! :-)
Todd,
I am in awe. This is an excellent article, well-researched, well-linked, well-documented, and well-written.
I'll follow up in a post as well, but just had to drop a note to say thank you for such a great post.
Lori
Latency has always been a matter of concern for highly scalable web applications and in particular ecommerce applications, which require a highly responsive server platform to manage multi-step queries. I recently used many versions of shopping carts for this reason. I particularly like exploring the different additional ecommerce modules of any PHP Shopping Cart (http://www.justskins.com/ecommerce/shopping-cart-softwares-reviewed/513) available in the open source category, since they are the ones using the most up-to-date programming techniques to manage latency.
Thanks for a great article and the latency "resource center"!
I particularly appreciate and agree with your statement "So if we want to increase interactivity we have to address every component in the system that introduces latency and minimize or remove its contribution."
We have found that not only does latency have a direct cost associated with it, but that it is magnified when loads fluctuate and system scalability is required. When you need to continuously grow and shrink your environment, the cost of latency and the resulting bottlenecks grows exponentially. And indeed, "there's no "easy" scale-out strategy for fixing latency problems", but it is possible :). In fact, a recent benchmark we did on an industry-standard web-based application running on the Amazon EC2/EBS cloud proves just that: http://blog.gigaspaces.com/2008/11/07/scaling-the-web-layer-%e2%80%93-the-web-container-benchmark/ - check it out and let me know what you think.
Cheers,
Alit
GigaSpaces Technologies
Todd - great article indeed.
Took me some time but I compiled my personal take on this great article.
Here is a snippet from this take:
Sources for latency – is it the network or the software?
When discussing latency most people fall into one of two main camps: the "networking" camp and the "software architecture" camp. The former tends to think that the impact of software on latency is negligible, especially when it comes to Web applications.
Marc Abrams says "The bulk of this time is the round trip delay, and only a tiny portion is delay at the server. This implies that the bottleneck in accessing pages over the Internet is due to the Internet itself, and not the server speed."
The "software architecture" camp tends to believe that network latency is a given and there is little we can do about it. The bulk of latency that we can control lies within the software/application architecture. Dan Pritchett's Lessons for Managing Latency provides guidelines for an application architecture that addresses latency requirements using loosely-coupled components, asynchronous interfaces, horizontal scale from the start, active/active architecture and by avoiding ACID and pessimistic transactions.
The full analysis is provided here: http://natishalom.typepad.com/nati_shaloms_blog/2008/12/latency-is-everywhere-and-it-costs-you-sales-how-to-crush-it-my-personal-take-away.html
Since I have a fragile ego I mainly paid attention to the parts where you brutally attack me :-) Let me protect my tattered honor by saying there is a section on garbage collection in the document. And the common fallacy happens to be true. You shard to make requests independent and thus bound average latencies within a small deviation. There wouldn't be much of a point otherwise. If you block on a shared resource then that's not really sharding.
Nice article though. Thanks.
A few important issues seem to be missing from this otherwise comprehensive article.
- Latency vs. throughput
These two attributes of a system are often interchangeable and can be achieved one at the expense of the other. This is not something unique to web applications or even to computer systems in general, but rather a fundamental engineering principle.
One can take the common pipeline architecture of a CPU as an example: the more stages you add to the pipe, the more throughput you get because the time to finish the last stage gets shorter, but the overall latency of every instruction gets longer because it has to pass through more stages.
The basic architecture of a server is another example. The way to achieve the shortest latency in a server is to dedicate a thread to every request so that there is a minimal number of context switches; the problem with that approach is that it has limited throughput due to the high price of managing a large number of threads. A better approach is to use a SEDA model, which adopts the pipeline metaphor and allows for better throughput at the expense of latency due to context switches from one stage to the other.
Having said that, it is possible to improve both throughput and latency by scanning your code for hotspots and reducing or even eliminating them, but eventually it's the architecture of the code that determines whether it is throughput or latency oriented, and a good architect will choose the required properties in advance.
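The SEDA tradeoff can be sketched in miniature: each stage gets its own queue and worker thread, so throughput comes from pipelining while every hand-off between stages adds a little latency. The stage logic below is invented for illustration.

```python
# Minimal SEDA-style pipeline: parse -> process -> respond, each stage a
# queue plus a worker thread. Stage functions are made up for the example.
import queue
import threading

def make_stage(work, out_q):
    in_q = queue.Queue()
    def run():
        while True:
            item = in_q.get()
            if item is None:          # shutdown sentinel, passed downstream
                out_q.put(None)
                return
            out_q.put(work(item))
    threading.Thread(target=run, daemon=True).start()
    return in_q

results = queue.Queue()
# Wire stages back to front so each one knows its output queue.
respond = make_stage(lambda x: f"response:{x}", results)
process = make_stage(lambda x: x * 2, respond)
parse = make_stage(int, process)

for raw in ["1", "2", "3"]:
    parse.put(raw)
parse.put(None)

out = []
while (item := results.get()) is not None:
    out.append(item)
print(out)  # -> ['response:2', 'response:4', 'response:6']
```

While request 1 is in the respond stage, requests 2 and 3 are already moving through earlier stages: that overlap is the throughput win, and the queue hand-offs are the latency cost.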
- Latency vs. perceived latency
Ask any GUI expert and they will tell you that what's really important in MMI (man-machine interface) is the perceived latency, which is somewhat different from the nominal latency measured by a machine. There are a few reasons for that: we (as humans) can't tell the difference between 1 ms and 100 ms, but we can definitely tell the difference between 100 ms and 500 ms, not to mention 1 s. So there is no point in putting much effort into reducing the latency from 10 ms to 5 ms if the end user is a human.
What we (as humans) are looking for is responsiveness, so sometimes it is better to improve responsiveness by breaking down a long task so that every part seems very fast, instead of reducing the overall latency of an operation that will still look long in the eyes of the user.
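The break-the-task-down point can be sketched as follows: total latency is unchanged, but perceived latency drops when results are streamed as they become ready. The page "sections" here are invented for illustration.

```python
# Batch vs. progressive delivery of the same work: the generator makes the
# first result available before the rest of the job has run at all.
def render_all(items):
    # One long wait, then everything arrives at once.
    return [item.upper() for item in items]

def render_progressively(items):
    # Same total work, but the first result is visible almost immediately.
    for item in items:
        yield item.upper()

items = ["header", "results", "sidebar", "footer"]
first = next(render_progressively(items))
print(first)  # -> HEADER, shown while the rest is still being produced
```

This is the same trick as progressive page rendering in the Velocity study mentioned near the top: the user sees the header quickly, so the page feels fast even though the total render time is unchanged.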
my few cents,
Asy.
Todd, you're right, the article actually touches on GC hiccups; somehow I missed it:
"Use Small Memory Chunks When Using Java. GC in Java kills latency. One way to minimize the impact of garbage collection on latency is to use more VMs and less memory in each VM instead of VM with a lot of memory. This prevents a large GC run and makes latency more predictable."
There is another factor that seems to get lost in these discussions. That is the efficiency of data messaging integrations among applications, platforms and networks. Transforming messages eats time. If the message structure is subject to change, the flexibility of the integration code to adapt to change is crucial. If the data architecture is virtualized and metadata-based, the metadata archives need to be "live" (accurately reflecting format and usage now) in order to rapidly revamp the interfaces to manage messaging changes along the workflow.
The fastest integrations are historically hand-coded. More recently, the automation of integration to eliminate hand-coding has tended to be proprietarily associated with hardware/software accelerators or middleware components. So the code is good for one platform, but not good for another. And to make an infrastructure change, the whole cycle of recoding has to begin again.
There are other solutions: open automation tools for integration. There is one, as far as I know, that invokes translation code at run time at speeds adequate for microsecond latency environments. That is my company's IDE, Volante Designer.
Here is a presentation we (GemStone Systems) recently did with Push Technology talking about how their product 'Diffusion' and 'GemFire' can reduce latency throughout an architecture and thereby lower costs. I thought that it was fitting to go along with Todd's great article.
http://www.slideshare.net/gemstonesystems/true-cost-of-latency
Hi Todd
Really great article. Latency is a difficult concept to communicate, especially to non-technical website business owners - however it's hugely important to monitor.
One of our challenges is finding ways to simply communicate what latency is and how it can affect a business - you have done that. We presented at Velocity this month and launched our Latency Simulator there - it's an open-source Internet Explorer add-on that simulates the effects of network latency; hopefully some of your readers will find it useful.
Direct download:
http://www.aptimize.com/downloads/file/latency-simulator.zip
Cheers, Nicola
Very informative post, Thanks.
I liked the talk about putting application logic in hardware using FPGAs and ASICs. A while back I read an article about reconfigurable FPGAs. These can be reprogrammed on the fly. Just as we run software programs on CPUs nowadays, one day we will be able to run "hardware programs" on these chips.