Sunday, August 17, 2008

Strategy: Drop Memcached, Add More MySQL Servers

Update 2: Michael Galpin, in Cache Money and Cache Discussions, likes memcached for its expiry policy, complex graph data, and process data, but says MySQL has many advantages: SQL, uniform data access, write-through, read-through, replication, management, cold starts, and LRU eviction.
Update: Dormando asks Should you use memcached? Should you just shard MySQL more? The idea behind caching is the most important part, as it transports you beyond a simple CRUD worldview. Plan for caching and sharding by properly abstracting your data access methods. Brace for change. Be ready to shard, be ready to cache. React and change based on what you push out that actually turns out to be popular, rather than over-planning and wasting valuable time.
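To make that "abstract your data access" advice concrete, here is a minimal sketch in Python. It assumes a DB-API-style MySQL connection and an optional memcached-style client with get/set; the class, table, and key names are hypothetical, not anything Fotolog or Dormando actually describe.

    # A sketch of "abstract your data access": callers only ever use UserStore,
    # so a cache client or a shard router can be slotted in later without
    # touching application code. The cache is duck-typed: anything with
    # get/set (e.g. python-memcached) works, and None means "no caching yet".

    class UserStore:
        def __init__(self, db, cache=None):
            self.db = db        # a DB-API 2.0 connection (e.g. MySQLdb)
            self.cache = cache  # optional cache client

        def get_user(self, user_id):
            key = "user:%d" % user_id
            if self.cache is not None:
                row = self.cache.get(key)
                if row is not None:
                    return row
            cur = self.db.cursor()
            cur.execute("SELECT id, name FROM users WHERE id = %s", (user_id,))
            row = cur.fetchone()
            if row is not None and self.cache is not None:
                self.cache.set(key, row, 300)  # cache for 5 minutes
            return row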


Feedster's François Schiettecatte wonders if Fotolog's 21 memcached servers wouldn't be better used to further shard the data by adding more MySQL servers. He mentions Feedster was able to drop memcached once they partitioned their data across more servers. The algorithm: partition until all data resides in memory, at which point you may not need an additional memcached layer.
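Here is a hypothetical shard-routing sketch of that "partition until it fits in memory" idea. The hostnames, shard count, and key scheme are invented for illustration; they are not Fotolog's or Feedster's actual layout.

    import zlib

    # Each shard is a MySQL server sized so its slice of the data fits in memory.
    SHARDS = [
        {"host": "db1.example.com", "db": "app_shard_0"},
        {"host": "db2.example.com", "db": "app_shard_1"},
        {"host": "db3.example.com", "db": "app_shard_2"},
    ]

    def shard_for(user_id):
        """Deterministically map a user id to one MySQL server."""
        # crc32 is stable across processes and restarts, unlike Python's hash().
        return SHARDS[zlib.crc32(str(user_id).encode()) % len(SHARDS)]

    # Every query for user 42 always goes to the same server:
    print(shard_for(42))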

Parvesh Garg goes a step further and asks why people think they should be using MySQL at all.

Related Articles

 

  • The Death of Read Replication by Brian Aker. Caching layers have replaced read replication. Cache can't fix a broken database layer. Partition the data that feeds the cache tier: "Keep your front end working through the cache. Keep all of your data generation behind it."
  • Read replication with MySQL by François Schiettecatte. Read replication is dead and it should be used only for backup purposes. Take the memory used for caching and give it to your database servers.
  • Replication++, Replication 2.0, Replication.Next by Ronald Bradford. What should read replication be used for?
  • Replication, caching, and partitioning by Greg Linden. Caching is overdone because it adds complexity, adds latency on a cache miss, and uses cluster resources inefficiently. Hitting disk is the problem. Shard more and get your data in memory.
Reader Comments (8)

    The article in its own right is good, not because it says to drop memcached, but because it implies that you should evaluate your system and what would work best for it, rather than just slapping in whatever you read some other site did and hoping for the best.

    That said, I still think that 9 times out of 10, memcached will improve your setup, and it's a hell of a lot easier than just "sharding your data". That's not easy.

    December 31, 1999 | Unregistered Commenter UltimateBrent

    So you are saying cache because it's easier than sharding. Interesting point. What if you already need to shard for scalability? Would you say to still use caching? Or would you use aggressive caching as a means of preventing sharding?

    December 31, 1999 | Unregistered Commenter Todd Hoff

    I'll cache aggressively to avoid having to balkanize the data I'm responsible for, as long as that's possible. Even when partitioning it out is the only choice, though, caching still has a lot of value. And, in general at least, I think caching is cheaper than sharding.

    That's an interesting article you pointed to. Somebody had an interesting "other side of the coin" argument about analysis paralysis ... a balance, like most things in life.

    December 31, 1999 | Unregistered Commenter Forrest

    While I wish we had more partitioning in place, we use memcached since the authorization and handshake portions of the MySQL requests make them take longer than our memcached requests. So in the interest of page performance, if we can cache it then we do, since it's faster than MySQL even with in-memory InnoDB tables. The added overhead of dealing with caching is an acceptable loss since it is generally faster. :)

    December 31, 1999 | Unregistered Commenter Jon
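    The pattern Jon describes above is essentially read-through caching. A minimal sketch, assuming the python-memcached client and a DB-API MySQL connection; the key format, 60-second expiry, and photos table are made up for illustration:

        import memcache

        mc = memcache.Client(["127.0.0.1:11211"])

        def get_photo(db, photo_id):
            key = "photo:%d" % photo_id
            row = mc.get(key)
            if row is not None:
                return row                    # cache hit: no MySQL handshake at all
            cur = db.cursor()
            cur.execute("SELECT id, title FROM photos WHERE id = %s", (photo_id,))
            row = cur.fetchone()
            if row is not None:
                mc.set(key, row, 60)          # short expiry keeps staleness bounded
            return row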

    Cache is ALWAYS good because it avoids unnecessary read ops (read: unnecessary I/O ops = very, very slow) on the database.

    Memcached is even better than the normal MySQL cache, for several reasons (it consolidates all caches, works across multiple servers, etc.).

    However, there is only so much performance that cache can bring to the system.

    And cache doesn't help at all with write ops.

    For pure, linear scalability, it's sharding. However, it's hard, since it must be done at the application level.
    There may be an answer to this though; I just found it. It looks very interesting: sharding's scalability, but with no application rewrite.

    I'll let you know once I've verified this some more.

    December 31, 1999 | Unregistered Commenter sufehmi

    A correction for my previous comment:

    Was: unnecessary I/O ops

    Should be: unnecessary disk ops

    Thanks.

    December 31, 1999 | Unregistered Commenter sufehmi

    Forget about the memcached servers and use S3 for caching. Even easier. http://code.google.com/p/bigcache/

    December 31, 1999 | Unregistered Commenter Travis Reeder

    Wow, this is the article I wanted. I was looking for an article about MySQL. Thanks, good info.

    December 31, 1999 | Unregistered Commenter dody farial
