Scaling Twitter: Making Twitter 10000 Percent Faster
Update 6: Some interesting changes from Twitter's Evan Weaver: everything in RAM now, database is a backup; peaks at 300 tweets/second; every tweet is followed by an average of 126 people; vector cache of tweet IDs; row cache; fragment cache; page cache; keep separate caches; GC makes Ruby optimization resistant so went with Scala; Thrift and HTTP are used internally; 100s of internal requests for every external request; rewrote MQ but kept the interface the same; 3 queues are used to load balance requests; extensive A/B testing for backwards compatibility; switched to a C memcached client for speed; optimize the critical path; it's faster to get cached results from network memory than to recompute them locally.
Update 5: Twitter on Scala. A Conversation with Steve Jenson, Alex Payne, and Robey Pointer by Bill Venners. A fascinating discussion of why Twitter moved to the JVM for their server infrastructure (long-lived processes) and why they moved to Scala to program against it (high level language, static typing, functional). Ruby is used on the front-end but wasn't performant or reliable enough for the back-end.
Update 4: Improving Running Components at Twitter by Evan Weaver. Tells how Twitter changed their infrastructure to go from handling 3 requests a second to 139 requests a second. They moved to a messaging model, asynchronous processing, 3 levels of cache, and moved their middleware to a mixture of C and Scala/JVM.
Update 3: Upgrading Twitter without service disruptions by Gojko Adzic. Lots of good updates on the new Twitter architecture.
Update 2: a commenter in Twitter Fails Macworld Keynote Test said this entry needs to be updated. LOL. My uneducated guess is it's not a language or architecture problem, but more a problem of not being able to add hardware fast enough into their data center. The predictability of this problem is debatable, but once you have it, it's hard to fix.
Update: Twitter releases Starling - light-weight persistent queue server that speaks the MemCache protocol. It was built to drive Twitter's backend, and is in production across Twitter's cluster.
Twitter started as a side project and blew up fast, going from 0 to millions of page views within a few terrifying months. Early design decisions that worked well in the small melted under the crush of new users chirping tweets to all their friends. Web darling Ruby on Rails was fingered early for the scaling problems, but Blaine Cook, Twitter's lead architect, held Ruby blameless:
For us, it’s really about scaling horizontally - to that end, Rails and Ruby haven’t been stumbling blocks, compared to any other language or framework. The performance boosts associated with a “faster” language would give us a 10-20% improvement, but thanks to architectural changes that Ruby and Rails happily accommodated, Twitter is 10000% faster than it was in January.
If Ruby on Rails wasn't to blame, how did Twitter learn to scale ever higher and higher?
Update: added slides Small Talk on Getting Big. Scaling a Rails App & all that Jazz
Site: http://twitter.com
Information Sources
The Platform
The Stats
The Architecture
- Memoize expensive results into memcache. For example, if getting a count is slow, you can memoize the count into memcache and serve it in a millisecond (see the count-memoization sketch after this list).
- Getting your friends status is complicated. There are security and other issues. So rather than doing a query, a friend's status is updated in cache instead. It never touches the database. This gives a predictable response time frame (upper bound 20 msecs).
- ActiveRecord objects are huge, which is why they aren't cached. Instead they want to store the critical attributes in a hash and lazy load the other attributes on access (see the attribute-hash sketch after this list).
- 90% of requests are API requests. So they don't do any page or fragment caching on the front-end. The pages are so time sensitive it doesn't do any good. But they do cache API requests.
- Use messaging a lot. Producers produce messages, which are queued and then distributed to consumers. Twitter's main functionality is to act as a messaging bridge between different formats (SMS, web, IM, etc.).
- Send a message to invalidate a friend's cache in the background instead of invalidating each one individually and synchronously.
- Started with DRb, which stands for distributed Ruby, a library that allows you to send and receive messages from remote Ruby objects via TCP/IP. But it was a little flaky and a single point of failure.
- Moved to Rinda, which is a shared queue that uses a tuplespace model, along the lines of Linda. But the queues aren't persistent, so messages are lost on failure.
- Tried Erlang. Problem: how do you get a broken server running on a Sunday with 20,000 users waiting? The developer didn't know. Not a lot of documentation. So it violated the "use what you know" rule.
- Moved to Starling, a distributed queue written in Ruby that speaks the memcache protocol (see the queue sketch after this list).
- The distributed queues were made to survive system crashes by writing messages to disk. Other big websites take this simple approach as well.
- To deploy, they do a review and push out new mongrel servers. There's no graceful way to do it yet.
- Users get an internal server error if they hit a mongrel server while it's being replaced.
- All servers are killed at once. A rolling blackout isn't used because the message queue state is in the mongrels and a rolling approach would cause all the queues in the remaining mongrels to fill up.
- A lot of downtime comes from people who crawl the site and add everyone as a friend, 9,000 friends in 24 hours. It would take down the site.
- Build tools to detect these problems so you can pinpoint when and where they are happening.
- Be ruthless. Delete them as users.
- Plan to partition in the future. Currently they don't. These changes have been enough so far.
- The partition scheme will be based on time, not users, because most requests are very temporally local.
- Partitioning will be difficult because of automatic memoization. They can't guarantee read-only operations will really be read-only. May write to a read-only slave, which is really bad.
- Their API is the most important thing Twitter has done.
- Keeping the service simple allowed developers to build on top of their infrastructure and come up with ideas that are way better than Twitter could come up with. For example, Twitterrific, which is a beautiful way to use Twitter that a small team with different priorities could create.
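To make the count-memoization bullet above concrete, here is a minimal sketch in the memcache-client style of that era. The server address, key name, TTL, and the Follower model are illustrative assumptions, not Twitter's actual code.

```ruby
require 'memcache'   # memcache-client gem, assumed here for illustration

CACHE = MemCache.new('localhost:11211')

# Memoize an expensive COUNT so repeat requests are answered from memcached
# in about a millisecond instead of re-running the slow query.
def follower_count(user_id)
  key = "user:#{user_id}:follower_count"
  count = CACHE.get(key)
  if count.nil?
    # Follower is a hypothetical ActiveRecord model; the real query may differ.
    count = Follower.count(:conditions => ['followed_id = ?', user_id])
    CACHE.set(key, count, 300)   # cache the result for 5 minutes
  end
  count
end
```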
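The bullet about keeping only the critical ActiveRecord attributes in a hash might look roughly like this; the CachedUser class, the attribute names, and the fallback to User.find are hypothetical, shown only to illustrate the hash-plus-lazy-load idea.

```ruby
# Cache only a small hash of the critical attributes (cheap to marshal),
# and lazy load the full ActiveRecord object when anything else is asked for.
class CachedUser
  def initialize(attrs)
    @attrs = attrs   # e.g. { :id => 42, :screen_name => 'bob', :avatar_url => '...' }
  end

  def method_missing(name, *args, &block)
    return @attrs[name] if @attrs.has_key?(name)
    @record ||= User.find(@attrs[:id])   # hypothetical heavy model, loaded on demand
    @record.send(name, *args, &block)
  end
end
```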
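And since Starling speaks the memcache protocol (see the Starling update near the top), the producer/consumer messaging described above can be driven with an ordinary memcache client. The port, queue name, and job payload below are assumptions for illustration, not Twitter's production setup.

```ruby
require 'memcache'   # Starling speaks the memcache protocol, so memcache-client works as the client

starling = MemCache.new('localhost:22122')   # 22122 is Starling's default port

# Producer (inside the web request): enqueue the cache invalidation instead of
# expiring every friend's cache synchronously.
starling.set('cache_invalidations', { :user_id => 42 })

# Consumer (a separate worker process): pop jobs and do the slow work offline.
loop do
  job = starling.get('cache_invalidations')   # get pops the next message; nil when empty
  if job.nil?
    sleep 0.1
    next
  end
  # ... expire each follower's cached timeline for job[:user_id] here ...
end
```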
Lessons Learned
- Index everything. Rails won't do this for you.
- Use EXPLAIN to see how your queries are running. Indexes may not be being used as you expect.
- Denormalize a lot. It single-handedly saved them. For example, they store all of a user's friend IDs together, which prevents a lot of costly joins (see the sketch after this list).
- Avoid complex joins.
- Avoid scanning large sets of data.
- You want to know when you deploy an application that it will render correctly.
- They have a full test suite now. So when the caching broke they were able to find the problem before going live.
- Scale changes what can be stupid.
- Trying to load 3,000 friends at once into memory can bring a server down, but when there were only 4 friends it worked great.
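As a rough illustration of the denormalization lesson above, the friend IDs could be kept together on the user row so that building a timeline becomes a single IN (...) lookup instead of a join against a huge friendships table. The models, the serialized column, and the Rails 1.x/2.x finder style are assumptions, not Twitter's actual schema.

```ruby
# Denormalized: each user row carries its own serialized list of friend IDs,
# so the timeline query never joins against a friendships table.
class User < ActiveRecord::Base
  serialize :friend_ids, Array   # e.g. a TEXT column holding [1, 7, 42, ...]

  def recent_friend_statuses(limit = 20)
    Status.find(:all,
                :conditions => ['user_id IN (?)', friend_ids],
                :order      => 'created_at DESC',
                :limit      => limit)
  end
end
```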
Reader Comments (76)
Loved your article, it echoes a lot of themes I've been talking about for awhile on my blog, so I wrote about the Twitter case based on your article here:
http://smoothspan.wordpress.com/2007/09/14/twitter-scaling-story-mirrors-the-multicore-language-timetable/
I wonder what the RoR haters will make up now to say that ruby doesn't scale.
They loved jumping on the ruby hate bandwagon when twitter was going through its difficulties. Little Bo Peep has been quite silent since.
Caching was the answer? Shock. Gasp. Awe. Just like PHP?!? Crazy!
I think you're referring to Starfish (http://rufy.com/starfish/doc/), not Starling.
Great article!
No, it's not Starfish. In the video of his presentation, he mentions "so I wrote Starling..."
great article (and site) Todd. thanks for pulling all this information together. It's a great resource
ps. @Dave: Blaine referred to his 'starling' messaging framework at the SJ Ruby Conference earlier in the year.
So, let's be clear, the biased source in defense mode says themselves they could have been 20% faster just by selecting a different language (note that it doesn't exactly say what the performance hit of the Rails framework itself is, so let's just go with 20% improvement by changing languages and ignore potential problems in (1) their coding decisions and (2) their chosen framework).... Wow, sign me up for an easy 20% improvement!
Yeah, yeah, I know, I'll hear the usual tripe about how amazingly fast Ruby is to develop with. Visual Basic is pretty easy too, as is PHP, but I don't use those either.
Sounds like Ruby on Rails _was_ to blame as the 10000 percent improvement was reached by essentially removing the "on rails" part of the equation by extensive caching. This seems to be the real weakness of RoR; Ruby in itself seems OK performance-wise, slower than PHP for example but not catastrophically so. PHP is slower than Java but scales nicely anyway. The database abstraction in "on rails" is a real performance killer though and all the high traffic sites that use RoR successfully (twitter, penny arcade, ...) seems to have taken steps to avoid using the database abstraction on live page views by extensive caching.
Of course, caching is a necessary tool for scaling regardless of the platform but with a less inefficient abstraction layer than the one in RoR it is possible to grow more before you have to recode stuff for caching.
Excellent article.
I agree with one of the other commenters that it's surprising they have this running from a single MySQL server. Wow. The fact that twitter tends to be very write-heavy, and MySQL isn't exactly perfect for multimaster replication architectures probably has a lot to do with that. I wonder what they are planning to do for future growth? Obviously this will not continue to work as-is..
--
Dustin Puryear
Author, "Best Practices for Managing Linux and UNIX Servers"
http://www.puryear-it.com/pubs/linux-unix-best-practices
I like the comment where the speed of the language isn't anywhere near as important as the scalability of the language.
Moore's Law of computer speed will eventually come to an end. Parallelism will take over and any language that can thrive in that regard will work.
Twitter is proof. 0-millions in months??
And exactly how long was Twitter down when they were having their scaling problems? Weeks? I don't think so.
It scaled...and is scaling.
cbmeeks
http://cbmeeks.blogspot.com/
This was a very interesting read. I wonder if/when the Twitter people will upgrade to the new Rails 2.0 and, if so, how that will affect performance?
Thanks! A lot of these links are helpful and will be useful to me in the future!
"Of course, caching is a necessary tool for scaling regardless of the platform but with a less inefficient abstraction layer than the one in RoR it is possible to grow more before you have to recode stuff for caching."
Most of this post went to great pains to show that the 20% or so language inefficiency from Twitter's choice of Ruby was easily made up for by the architecture it enabled. But the commenter's point is valid that the Rails part of their Ruby architecture made it harder for them to scale without a code rewrite. But who cares? Ruby on Rails still seems to encourage smart, DIY programming, and as the analysis in the post pointed out, Twitter proved this by writing their own queueing system, Starling, in under 200 lines of Ruby that handles all their pub/sub needs.
The difficulty is that the carriers that allow their customers to recharge prepaid cards take our money to do so; in effect, Twitter (and any other service that offers free delivery of SMS messages) becomes a source of free money. It's inherently unsustainable.
More generally, the point of this slide was that it's not a good scaling practice to allow "abusive" users to undermine continued access to "legitimate" users (and that the definition of both of those terms is subject to your own particular situation).
There's always room for creativity - until we're able to deal directly with Italian carriers to ensure that we don't act as a prepaid card refill service, Italian users are able to send messages via SMS, and are able to receive messages via AIM or the Mobile Web (and soon Email as well).
The only thing that hasn't become clear to me is how they actually handle their external API calls.
Yes, it's said that the API generates lots of traffic, but the exact process behind all that activity isn't so obvious...