Friday, June 4, 2010

Strategy: Cache Larger Chunks - Cache Hit Rate is a Bad Indicator

Isn't the secret to fast, scalable websites to cache everything? Caching, if not the secret sauce of many a website, is at least a popular condiment. But not so fast, says Peter Zaitsev in Beyond great cache hit ratio. The point Peter makes is that websites like Amazon and Facebook can make literally hundreds of calls to satisfy a single user request. Even with an excellent cache hit ratio, pages can still be slow because making and processing all those requests takes time. The solution is to eliminate requests altogether. You do this by caching larger blocks, so you have to make fewer requests.
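A minimal sketch of the idea, using a plain dict as a stand-in for a cache like memcached (the key names and `render_item` helper are illustrative, not from the post): caching one composed block replaces N per-item cache round trips with a single lookup.

```python
import json

cache = {}  # stand-in for memcached/redis; in production each access is a network round trip


def render_item(item_id):
    # Hypothetical per-item rendering work we want to avoid repeating.
    return {"id": item_id, "html": f"<li>item {item_id}</li>"}


def page_fragment_fine_grained(item_ids):
    # One cache lookup per item: N round trips even at a 100% hit rate.
    parts = []
    for item_id in item_ids:
        key = f"item:{item_id}"
        if key not in cache:
            cache[key] = json.dumps(render_item(item_id))
        parts.append(json.loads(cache[key])["html"])
    return "".join(parts)


def page_fragment_chunked(item_ids):
    # Cache the whole rendered block under one key: a single round trip.
    key = "items:" + ",".join(map(str, item_ids))
    if key not in cache:
        cache[key] = "".join(render_item(i)["html"] for i in item_ids)
    return cache[key]
```

The trade-off, as the post notes, is invalidation: one stale item invalidates the whole chunk, so the chunk boundaries should follow how the data actually changes.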

The post has a lot of good advice worth reading: 1) Make non-cacheable blocks as small as possible, 2) Maximize the number of uses of each cached item, 3) Control invalidation, 4) Use multi-get.
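The multi-get point deserves a sketch of its own. Clients such as memcached and Redis support batched fetches (`get_multi`, `MGET`), so the keys you still need individually can at least travel in one round trip. This is a hedged illustration with a dict standing in for the cache; the key names are made up:

```python
# Stand-in cache; imagine each individual get costing one network round trip.
cache = {f"user:{i}": {"id": i, "name": f"user{i}"} for i in range(5)}


def get_multi(keys):
    # Models a batched fetch (memcached get_multi / Redis MGET):
    # one round trip returns every key that was found.
    return {k: cache[k] for k in keys if k in cache}


found = get_multi(["user:1", "user:3", "user:99"])
# Missing keys ("user:99") are simply absent from the result;
# the caller fetches those from the database and backfills the cache.
```

Batching does not reduce the number of cache *lookups*, only the number of round trips, which is why the post still ranks caching larger chunks first.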

 

References (1)


Reader Comments (4)

so where is the link?

June 6, 2010 | Unregistered Commenteranon

The article title is linked.

June 6, 2010 | Registered CommenterHighScalability Team

linkage: But not so fast, says Peter Zaitsev in Beyond great cache hit ratio.

http://www.mysqlperformanceblog.com/2010/05/19/beyond-great-cache-hit-ratio/

June 6, 2010 | Unregistered Commentereverly

there's no secret: caching (in the business-logic sense) must be used at a higher abstraction level, with consolidated data.

June 9, 2010 | Unregistered Commenterfulvius
