Strategy: Drop Memcached, Add More MySQL Servers

Update 2: In Cache Money and Cache Discussions, Michael Galpin likes memcached for its expiry policy and for handling complex graph data and process data, but says MySQL has many advantages: SQL, uniform data access, write-through, read-through, replication, management, cold starts, and LRU eviction.
Update: Dormando asks: Should you use memcached? Should you just shard MySQL more? The idea of caching is the most important part, as it transports you beyond a simple CRUD worldview. Plan for caching and sharding by properly abstracting your data access methods. Brace for change: be ready to shard, be ready to cache. React and adapt to what you push out that actually proves popular, rather than over-planning and wasting valuable time.
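Dormando's advice to "properly abstract data access" can be sketched as a cache-aside repository. This is a minimal illustration with hypothetical names; a plain dict stands in for a memcached client and another for MySQL, so only this class changes if you later swap in real memcached or a sharded lookup:

```python
class UserRepository:
    """Hypothetical data-access layer that hides caching from callers.

    Call sites stay simple CRUD; caching and (later) sharding live here.
    """

    def __init__(self, db, cache=None):
        self.db = db                                  # dict standing in for MySQL
        self.cache = cache if cache is not None else {}  # dict standing in for memcached

    def get_user(self, user_id):
        # Cache-aside read: try the cache, fall back to the database,
        # then populate the cache for the next reader.
        key = f"user:{user_id}"
        if key in self.cache:
            return self.cache[key]
        row = self.db.get(user_id)
        if row is not None:
            self.cache[key] = row
        return row

    def save_user(self, user_id, row):
        # Write to the database, then invalidate so the next read refills.
        self.db[user_id] = row
        self.cache.pop(f"user:{user_id}", None)
```

Because callers only see `get_user`/`save_user`, the storage strategy can change behind this interface without touching application code, which is exactly the "brace for change" point above.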
Feedster's François Schiettecatte wonders if Fotolog's 21 memcached servers wouldn't be better used to further shard data by adding more MySQL servers. He mentions Feedster was able to drop memcached once they partitioned their data across more servers. The algorithm: partition until all data resides in memory, at which point you may not need an additional memcached layer.
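Schiettecatte's "partition until all data resides in memory" rule reduces to back-of-the-envelope arithmetic. A small sketch with made-up numbers (real capacity planning would also cover growth, replication overhead, and hot-spot skew):

```python
import math

def shards_needed(dataset_gb, usable_ram_per_server_gb):
    """Minimum number of MySQL servers so the working set fits in RAM.

    Hypothetical sizing helper, not a substitute for real planning.
    """
    return math.ceil(dataset_gb / usable_ram_per_server_gb)

# Example: a 300 GB dataset on servers with 16 GB usable for the
# InnoDB buffer pool needs at least 19 shards before everything
# fits in memory.
print(shards_needed(300, 16))  # → 19
```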
Parvesh Garg goes a step further and asks why people think they should be using MySQL at all?
Reader Comments (8)
The article, in its own right, is good, not because it says to drop memcached, but because it implies that you should evaluate your system and what would work best for it, rather than just slapping in whatever you read some other site did and hoping for the best.
That said, I still think that 9 times out of 10, memcached will improve your setup, and it's a hell of a lot easier than "just sharding your data." Sharding is not easy.
So you are saying to cache because it's easier than sharding. Interesting point. What if you already need to shard for scalability? Would you say to still use caching? Or would you use aggressive caching as a means of preventing sharding?
I'll cache aggressively to avoid having to balkanize data I'm responsible for, as long as that's possible. Even when partitioning the data is the only choice, though, caching still has a lot of value. And I think, in general at least, caching is cheaper than sharding.
That's an interesting article you pointed to. Somebody had an interesting "other side of the coin" argument about analysis paralysis ... it's a balance, like most things in life.
While I wish we had more partitioning in place, we use memcached because the authorization and handshake portions of a MySQL request make it take longer than our memcached requests. So in the interest of page performance, if we can cache it then we do, since it's faster than MySQL even with in-memory InnoDB tables. The added overhead of dealing with caching is an acceptable loss since it generally is faster. :)
Cache is ALWAYS good because it avoids unnecessary read ops (read: unnecessary I/O ops = very, very slow) on the database.
Memcached is even better than the normal MySQL cache, for several reasons (it consolidates all caches, works across multiple servers, etc.).
However, there is only so much performance that cache can bring to the system.
And cache doesn't help at all with write ops.
For pure, linear scalability, it's sharding. However, it's hard, since it must be done at the application level.
There may be an answer to this, though; I just found it. It looks very interesting: sharding's scalability, but with no application rewrite.
I'll let you know once I've verified this some more.
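"Sharding at the application level," as the comment above puts it, usually means the application routes each key to a server in code. A minimal modulo-hash sketch (hypothetical helper; md5 is used only for stable, even key distribution, not security, since Python's built-in `hash()` is salted per process):

```python
import hashlib

def shard_for(key, num_shards):
    """Map a key to a shard index with a stable hash.

    Note: modulo placement reshuffles most keys whenever num_shards
    changes, which is part of why resharding is painful in practice.
    """
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_shards

# Every query path must first pick a connection, e.g.:
# conn = connections[shard_for("user:1234", len(connections))]
```

This routing logic leaking into every query path is exactly the application-level burden the comment refers to.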
A correction for my previous comment:
WAS: unnecessary I/O ops
Should be: unnecessary disk ops
Thanks.
Forget about the memcached servers and use S3 for caching. Even easier. http://code.google.com/p/bigcache/