Monday
Feb 23 2009

Database Sharding at Netlog, with MySQL and PHP

Jurriaan Persyn is a Lead Web Developer at Netlog, a social portal site that gets 50 million unique visitors and 5+ billion page views per month. In this paper Jurriaan goes into a lot of excellent nuts and bolts details about how they used sharding to scale their system. If you are pondering sharding as a solution to your scaling problems you'll want to read this paper. As the paper is quite well organized there's no reason to write a summary, but I especially liked this part from the conclusion:

If you can do with simpler solutions (better hardware, more hardware, server tweaking and tuning, vertical partitioning, sql query optimization, ...) that require less development cost, why invest lots of effort in sharding? On the other hand, when your visitor statistics really start blowing through the roof, it is a good direction to go. After all, it worked for us.


Sunday
Feb 22 2009

Building and Scaling a Startup on Rails: 12 Things We Learned the Hard Way

Garry Tan, cofounder of Posterous, lists 12 lessons for scaling that apply to more than just Rails.

  • Use cloud storage for static files.
  • Use HTTP Cache Control to tell the browser what it can cache.
  • Use Sphinx for text search.
  • Use InnoDB for more crash resistant and faster writes.
  • Don't use textbook Rails ActiveRecord objects. Use New Relic to find exactly what is slow in your system.
  • Use memcache later so you find your database bottlenecks now.
  • Use mongrel proctitle to find your slow queries. You are only as fast as your slowest queries.
  • Use asynchronous job queuing to do work in parallel.
  • Use monitoring so you'll know when your site went down and why.
  • Learn by reading the source code, fixing problems, and submitting them back to the community.
  • Use new plugins. Old plugins can't be trusted.
  • Use new information. Old information can't be trusted.


    Saturday
    Feb 21 2009

    Google AppEngine - A Second Look

    Update 6: Back to the Future for Data Storage. We are in the middle of a renaissance in data storage, with the application of many new ideas and techniques; there's huge potential for breaking out of thinking about data storage in just one way.
    Update 5: Building Scalable Web Applications with Google App Engine by Brett Slatkin.

    Update 4: Why Google App Engine is broken and what Google must do to fix it by Aral Balkan. We don't care that it can scale. We care that it does scale. And that it scales when you need it the most. Issues: the 1MB limit on data structures; the short-term high CPU quota; quotas in general; Admin? What's that?
    Update 3: BigTable Blues. Catherine Devlin couldn't port an application to GAE because it can't do basic filtering and can't search 5,000 records without timing out: "Querying from 5000 records - too much for the mighty BigTable, apparently." Followup: not the future database. "90% of the work of this project has been trying to figure out workarounds and kludges for its bizarre limitations."
    Update 2: Having doubts about AppEngine. An excellent and surprisingly civil debate on whether GAE is a viable delivery platform for real applications. Concerns swirl over poor performance, lack of a roadmap, perpetual beta status, poor support, and a quota system that amounts to a torture chamber model of scalability. GAE is obviously part of Google's grand plan (browser, Gears, Android, etc.) to emasculate Microsoft, so the future looks bright, but is GAE a good choice now?
    Update: Here are a few experience reports from developers using GAE. Diwaker Gupta likes how easy it is to get started and the good documentation, but doesn't like all the limits and the poor performance. James here and here also likes the ease of use, but finds the data model takes some getting used to and is concerned the API limits won't scale for a real site. He doesn't like how external connections are handled and wants a database where the schema is easier to manage. These posts mirror some of my own concerns. GAE is scalable for Google, but it may not be scalable for my application.

    It's been a few days now since GAE (Google App Engine) was released and we had our First Look. It's high time for a retrospective. Too soon? Hey, this is Internet time baby. So how is GAE doing? I did get an invite, so hopefully I'll have a more experience-grounded take a little later. I don't know Python, and being the more methodical type it may take me a while. To perform our retrospective we'll take a look at the three sources of information available to us: actual applications in the App Gallery, blogspew, and developer issues in the forum.

    The result: a cautious thumbs up. The biggest issue so far seems to be the change in mindset needed by developers to use GAE. BigTable is not MySQL. The runtime environment is not a VM. A service based approach is not the same as using libraries. A scalable architecture is not the same as one based on optimizing speed. A different approach is needed, but as of yet Google doesn't give you all the tools you need to fully embrace the red pill vision.

    I think this quote by Brandon Smith in a thread on how to best implement sessions in GAE nicely sums up the new perspective:

    Consider the lack of your daddy's sessions a feature. It's what will make your app scale on Google's infrastructure.

    In other words: when in Rome. But how do we know what the Romans do when the Romans do what they do?

    Brett Morgan expands our cultural education in a thread on slow GAE database performance when he talks about why MySQL thinking won't work on BigTable:

    It might almost look like a sql db when you squint, but it's optimized for a totally different goal. If you think that each different entity you retrieve could be retrieving a different disk block from a different machine in the cluster, then suddenly things start to make sense. avg() over a column in a sql server makes sense, because the disk accesses are pulling blocks in a row from the same disk (hopefully), or even better, all from the same ram on the one computer. With DataStore, which is built on top of BigTable, which is built on top of GFS, there ain't no such promise. Each entity in DataStore is quite possibly a different file in gfs.

    So if you build things such that web requests are only ever pulling a single entity from DataStore - by always precomputing everything - then your app will fly on all the read requests. In fact, if that single entity gets hot - is highly utilized across the cluster - then it will be replicated across the cluster.

    Yes, this means that everything that we think we know about building web applications is suddenly wrong. But this is actually a good thing. Having been on the wrong side of trying to scale up web app code, I can honestly say it is better to push the requirements of scaling into the face of us developers so that we do the right thing from the beginning. It's easier to solve the issues at the start, than try and retrofit hacks at the end of the development cycle.


    A truly excellent explanation of the differences between MySQL thinking and GAE thinking.

    Now, if you can't use MySQL's avg feature, how can an average be calculated using BigTable? Brett advises:

    Instead of calculating the results at query time, calculate them when you are adding the records. This means that displaying the results is just a lookup, and that the calculation costs are amortized over each record addition.
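
    As a rough illustration of what that write-time aggregation can look like, here is a minimal sketch against the old google.appengine.ext.db API; the model and property names are made up for this example rather than taken from the thread.

```python
# Minimal sketch: keep a running total and count at write time so the
# "average" read is a single get. Model/property names are illustrative.
from google.appengine.ext import db

class RatingStats(db.Model):
    total = db.IntegerProperty(default=0)
    count = db.IntegerProperty(default=0)

def add_rating(stats_key, value):
    def txn():
        stats = db.get(stats_key)
        stats.total += value
        stats.count += 1
        stats.put()
        return stats
    # Transactional so concurrent additions don't lose updates.
    return db.run_in_transaction(txn)

def average(stats_key):
    stats = db.get(stats_key)
    # Displaying the result is now a lookup, not a table scan.
    return float(stats.total) / stats.count if stats.count else 0.0
```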

    Clearly this is more work for the programmer and at first blush doesn't seem worth the effort, especially when you are used to the convenience of MySQL. That's why in the same thread Barry Hunter insightfully comments that GAE may not be for everyone:

    This might be a very naive observation, but I perhaps wonder then if GAE is the tool for you.

    As I see it the App Engine is for applications that are meant to scale, scale and really scale. Sounds like an application with a few hundred hits daily could easily run on traditional hosting platforms.

    It's a completely different mindset.
    ...
    Again maybe I am missing something, but the DataStore isn't designed to be super fast at the small scale, but rather to handle large amounts of data, and be distributed (and because it's distributed it can appear very fast at large scale).

    So you break down your database access into very simple processes. Assume your database access is VERY slow, and rethink how you do things. (Of course the piece in the puzzle 'we' are missing is MapReduce! - the 'processing' part of the BigTable mindset)


    Before developers can take full advantage of GAE these types of lessons need to be extracted and popularized with the same ferocity with which the multi-tier RDBMS framework has been marketed. It will be a long, difficult transition.

    Interestingly, many lessons from AWS are not transferable to GAE. AWS has a VM model whereas GAE has an application centric model. They are inverses of each other.

    In AWS you have a bag of lowish level components out of which you architect your application. You can write all the fine low level implementation bits you desire. A service layer is then put in front of everything to hide the components. In GAE you have a high level application component and you build out your application using services. You can't build any low level components in GAE. In AWS the goal is to drive load to the CPU because CPU and bandwidth are plentiful. In GAE you get very limited CPU, certainly none to burn on useless activities like summing up an average over a whole slice of data the way you might with SimpleDB. And in GAE the amount of data returnable from the database is small, so your architecture needs to be very smart about how data is stored and accessed.

    Very different approaches that lead to very different applications.

    Applications

    The number of applications has exploded. I am always amazed at how enthusiastic and productive people can be when they are actually interested in what they are doing. It happens so rarely. True, most applications aren't even up to Facebook standards yet, but it's early days. What's impressive is how fast they were created and deployed. That speaks volumes about the efficacy of the application centric development model. Will it be as effective delivering "real" apps? That's a question I'm not sure about.

    So far application performance is acceptable. Certainly nothing spectacular. What can you do about it? Nada.

    I like the sketch application because people immediately and quite predictably drew lewd depictions of various body parts. I also like this early incarnation of a forum app. A forum is one of the ideas I thought might work well on AppEngine because the scalable storage problem is solved. I do wonder how the performance would be with a fine-tuned caching layer, though. Vorby is a movie quote site showing a more realistic level of complexity. It has tabs, long lists of text, some graphical elements, some more complex screens, and ratings. It shows you can make applications you wouldn't mind people using.

    An option I'd like to see in the App Gallery is a view source link. Developers could indicate when adding an application whether others can view their application source. Then when browsing the gallery we could all learn by looking at real working code. This is how HTML spread so quickly. Anyone could view the source for any page, copy and paste, and be on their way! With an application centric model the view source viral spread approach would also work.

    Blogspew

    As expected there's lots of blog activity on GAE:

  • As for all those people complaining their favorite language isn't available: take a chill pill. Urubatan asks When will programmers learn that a language is just a tool? I mostly agree with this take, but I also agree with a commenter who observed that it's a lot harder for a team of developers to turn on a dime and adopt a whole new everything.
  • Garrick Van Buren says Free & Open Is Its Own Lock-in. The idea being it's worth paying for something you know works, that allows you to experiment, and whose zeitgeist you are aligned with. Leaving that for "free" isn't a good deal.
  • evan_tech: google app engine limitations. Don't focus on minor problems. The big problems are: all code runs only in response to HTTP fetches; no long connections means no "comet" (server-push messaging); playing around with your data is hard as there's no way to perform operations on your data except by uploading code to the server; table scans are so slow you hit your CPU limit, so you can't even warm a cache; bulk operations are hard; and no arbitrary queries.
  • RedMonk Clouds Rolling In: The Google App Engine Q&A covers a lot of GAE territory. It lists some of the cons: Python only, no database export, lock-in, and no cron. "...all of the current offerings have limitations that throttle their usage. Many of which are related to the lack of open standards. Apart from the mostly standard Python implementation, App Engine is decidedly non-standard."
  • Alex Bosworth pits AWS vs Google App Engine in a death match. Alex thinks: To be succinct, based on where the Google App Engine is today, I would say AWS still has a strong lead in application hosting, and I would not currently consider writing an application for Google's current platform. Cons: lock-in, the page-view limitation is quite low, no memcache, no long running pages or cron jobs, storage size limitation, one language, no requests unless they are through Google's API. Pros: it's free, looks pretty rocking, integrates with Google accounts.
  • Joyent is countering by offering free infrastructure for high volume Python applications. Joyent only asks "that you provide Joyent unlimited access to your customer information and clickstream data." Your data has a lot of value. Google is also very aware of that. More in my Why Does Google Do What Google Does? post. Joyent's building-blocks approach is very different from Google's application centric approach, though. We'll see which matters more: the model or the facilities?
  • Niall Kennedy in Google App Engine for developers does a great job contrasting the complexity of your normal website setup with an application approach. Normally you: purchase dedicated servers or virtualized slices, capacity plan, configure a web server, install Python and Apache, set up MySQL in a scalable fault tolerant configuration, insert a caching layer, add a monitoring layer, add static file serving and bulk file serving, make it all work together, and spend your life keeping it working and responding to failures. A nicely drawn contrast to upload and go.
  • TechCrunch's AppEngine test application couldn't handle a TechCrunch level of load, which is a little concerning. This means usage limits are set a bit low and with no pricing model to work from it's reasonable to be concerned about the cost. Nobody wants a cell phone overage nightmare for their website costs.
  • Groovy: Google Datastore and the shift from a RDBMS. An excellent comparison of how BigTable differs from a RDBMS. The conclusion: The end result of this, is that the standard way a developer writes out the table schema for a RDBMS should be dumped almost entirely when considering an app using Google Datastore.
  • Service Level Automation in the Datacenter: What Google App Engine is NOT. It's a web play only, it's not a cloud in the sense of datacenter infrastructure IT can move to. You can't implement: Portal Services, SOA architectures, Business Process Automation, Enterprise integration, HPC, and Server and desktop virtualization.

    A lot has been made of the risk of lock-in. I don't really agree with this as everything is based around services, which you can port to another infrastructure. What's more the problem is developers will be acquiring a sort of learned helplessness. It's not that developers can't port to another environment, they simply won't know how to anymore because they will have never had to do it themselves. Their system design and infrastructure muscles will have atrophied so much from disuse that they'll no longer be able to walk without the aid of their Google crutches. More in another post.

    Developer Forum

    The best way to figure out how a system is doing is to read the developer support forum. What problems and successes are real developers experiencing trying to get real work done? The forum is hoppin'. As of this writing over 1300 developers have registered and nearly 400 topics are active. What are developers talking about?

  • Please support my favorite language: PHP, Ruby, etc. Hey, they had to start somewhere and Python is as good as anything else. A language is just a tool you know :-)
  • The usual this doesn't work in my environment type of questions. Far fewer than I would expect though.
  • The switch away from RDBMS thinking isn't coming naturally. There are a lot of questions about how to access BigTable like MySQL, and that won't work. There are no joins in GQL, so how do you do normal things like get all the comments for a blog post? (A sketch follows this list.)
  • Lots of how-do-I-use-this-or-that-API questions. The lack of certain commonly used APIs, like XML parsers, is being encountered.
  • Concern there's no database export. You can bulk upload data, but you have to write your own program to get it out again.
  • People are hitting limits like the 1MB upload limit on all requests. The 1000-result limit on datastore queries is mentioned a lot. This is very different from the AWS model, which advocates moving work to your CPU, so it makes sense to return large sets of data. Google limits your CPU usage and the amount of data you can return, so you have to be smart about how you store and query data.
  • The pure service model has profound limitations for certain application types. An issue of how to do image processing came up. Usually a compiled class is used because pure Python is slow. But you can't load these types of classes in AppEngine. And you can't parallelize the work by farming it out to other CPUs. You are stuck. Here's where a .NET-style managed object model might help.
  • Surprisingly, fulltext search is not supported.
  • Sessions are another how-do-I-do-it-on-GAE question. People are used to frameworks handling session storage.
  • One user was surprised at how slow database access was with BigTable. It takes GAE almost 3 seconds to save 50 dummy records (consisting of just 2 text fields). A nice thread about how best to use BigTable developed. BigTable is meant to scale and you have to do things differently than you do in a MySQL world.
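
    To make the no-joins point concrete, here is a minimal sketch of the usual answer to "get all the comments for a blog post" with the old google.appengine.ext.db API (model names are made up): store a reference to the post on each comment and filter on it, no join required.

```python
# Denormalized "join": each Comment carries the key of its BlogPost,
# so fetching a post's comments is a single-kind query. Names are illustrative.
from google.appengine.ext import db

class BlogPost(db.Model):
    title = db.StringProperty()

class Comment(db.Model):
    post = db.ReferenceProperty(BlogPost, collection_name='post_comments')
    body = db.TextProperty()
    created = db.DateTimeProperty(auto_now_add=True)

def comments_for(post):
    # One query against one kind; GQL equivalent:
    #   SELECT * FROM Comment WHERE post = :1 ORDER BY created
    return Comment.all().filter('post =', post).order('created').fetch(100)
```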

    Many "how do I" questions come up because of the requirement for service level interfaces. For example, something as simple as a hostname to IP mapping can't be done because you don't have socket level access. Someone, somewhere must make a service out of it. Make an external service is a common response to problems. You must make a service external to the GAE environment to get things to work which means you have to develop in multiple environments. This sort of sucks. To get cron functionality do I really need to create an external service outside of GAE?

    The outcome of all this is probably an accelerated servicification of everything. What were once simple library calls must now be exposed with service level interfaces. It's not that I think HTTP is too heavy, but as a development model it is extremely painful. You are constantly hitting roadblocks instead of getting stuff done.
    Thursday
    Feb 19 2009

    GIS Application Hosting

    Share your experience hosting a highly scalable/reliable GIS-based application involving a map server, a spatially enabled database, J2EE, routing applications, etc.


    Thursday
    Feb 19 2009

    Heavy upload server scalability

    Hi, we are running a backup solution that uploads every night the files our clients worked on during the day (Carbonite-like). We currently have about 10GB of data per night, via HTTP PUT requests (1 per file), and the files are written as-is on a NAS. Our architecture is basically composed of a load balancer (hardware, sticky sessions), 5 servers (Tomcat under RHEL 4/5) and a NAS (NFS 3). Since our number of clients is rising (as is our system load), how would you recommend we scale our infrastructure, both hardware and software? Should we go towards NAS sharding, more servers, NIO on Tomcat...? Thanks for your inputs!


    Wednesday
    Feb 18 2009

    Numbers Everyone Should Know

    Google AppEngine Numbers

    This group of numbers is from Brett Slatkin in Building Scalable Web Apps with Google App Engine.

    Writes are expensive!

  • Datastore is transactional: writes require disk access
  • Disk access means disk seeks
  • Rule of thumb: 10ms for a disk seek
  • Simple math: 1s / 10ms = 100 seeks/sec maximum
  • Depends on:
    * The size and shape of your data
    * Doing work in batches (batch puts and gets)
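
    To make the batching point concrete, here is a small sketch with the old google.appengine.ext.db API (the LogLine kind is made up): one batch call amortizes the per-write overhead across many entities.

```python
# Batch puts and gets: one datastore round trip instead of N.
from google.appengine.ext import db

class LogLine(db.Model):          # illustrative kind
    message = db.StringProperty()

def save_lines(messages):
    entities = [LogLine(message=m) for m in messages]
    return db.put(entities)       # single batch write, returns the keys

def load_lines(keys):
    return db.get(keys)           # single batch read
```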

    Reads are cheap!

  • Reads do not need to be transactional, just consistent
  • Data is read from disk once, then it's easily cached
  • All subsequent reads come straight from memory
  • Rule of thumb: 250usec for 1MB of data from memory
  • Simple math: 1s / 250usec = 4GB/sec maximum
    * For a 1MB entity, that's 4000 fetches/sec

    Numbers Miscellaneous

    This group of numbers is from a presentation Jeff Dean gave at an Engineering All-Hands Meeting at Google.

  • L1 cache reference 0.5 ns
  • Branch mispredict 5 ns
  • L2 cache reference 7 ns
  • Mutex lock/unlock 100 ns
  • Main memory reference 100 ns
  • Compress 1K bytes with Zippy 10,000 ns
  • Send 2K bytes over 1 Gbps network 20,000 ns
  • Read 1 MB sequentially from memory 250,000 ns
  • Round trip within same datacenter 500,000 ns
  • Disk seek 10,000,000 ns
  • Read 1 MB sequentially from network 10,000,000 ns
  • Read 1 MB sequentially from disk 30,000,000 ns
  • Send packet CA->Netherlands->CA 150,000,000 ns

    The Lessons

  • Writes are 40 times more expensive than reads.
  • Global shared data is expensive. This is a fundamental limitation of distributed systems. The lock contention in shared heavily written objects kills performance as transactions become serialized and slow.
  • Architect for scaling writes.
  • Optimize for low write contention.
  • Optimize wide. Make writes as parallel as you can.

    The Techniques

    Keep in mind these are from a Google AppEngine perspective, but the ideas are generally applicable.

    Sharded Counters

    We always seem to want to keep count of things. But BigTable doesn't keep a count of entities because it's a key-value store. It's very good at getting data by keys, it's not interested in how many you have. So the job of keeping counts is shifted to you.

    The naive counter implementation is to lock-read-increment-write. This is fine if the number of writes is low. But if there are frequent updates there's high contention. Given that the number of writes that can be made per second is so limited, a high write load serializes and slows down the whole process.

    The solution is to shard counters. This means:
  • Create N counters in parallel.
  • For each item counted, pick a shard at random and increment it transactionally.
  • To get the real current count, sum up all the sharded counters.
  • Contention is reduced to roughly 1/N of what it was. Writes have been optimized because they have been spread over the different shards. A bottleneck around shared state has been removed.

    This approach seems counter-intuitive because we are used to a counter being a single incrementable variable. Reads are cheap so we replace having a single easily read counter with having to make multiple reads to recover the actual count. Frequently updated shared variables are expensive so we shard and parallelize those writes.
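
    Here is a minimal sharded-counter sketch along the lines just described, using the old google.appengine.ext.db API; the shard count and kind name are illustrative rather than an official recipe.

```python
# Spread increments across N shards to cut write contention; sum shards to read.
import random
from google.appengine.ext import db

N_SHARDS = 20  # illustrative

class CounterShard(db.Model):
    name = db.StringProperty(required=True)    # which logical counter
    count = db.IntegerProperty(default=0)

def increment(name):
    index = random.randint(0, N_SHARDS - 1)
    key_name = "%s-%d" % (name, index)
    def txn():
        shard = CounterShard.get_by_key_name(key_name)
        if shard is None:
            shard = CounterShard(key_name=key_name, name=name)
        shard.count += 1
        shard.put()
    db.run_in_transaction(txn)

def get_count(name):
    # Reads are cheap: sum all shards (and ideally cache the result).
    return sum(shard.count for shard in CounterShard.all().filter('name =', name))
```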

    With a centralized database letting the database be the source of sequence numbers is doable. But to scale writes you need to partition and once you partition it becomes difficult to keep any shared state like counters. You might argue that so common a feature should be provided by GAE and I would agree 100 percent, but it's the ideas that count (pun intended).
    Paging Through Comments

    How can comments be stored such that they can be paged through in roughly the order they were entered?

    Under a high write load situation this is a surprisingly hard question to answer. Obviously what you want is just a counter. As a comment is made you get a sequence number and that's the order comments are displayed. But as we saw in the last section shared state like a single counter won't scale in high write environments.

    A sharded counter won't work in this situation either because summing the sharded counters isn't transactional. There's no way to guarantee each comment will get back the sequence number it was allocated, so we could have duplicates.

    Searches in BigTable return data in alphabetical order. So what is needed for a key is something unique and alphabetical so when searching through comments you can go forward and backward using only keys.

    A lot of paging algorithms use counts. Give me records 1-20, 21-30, etc. SQL makes this easy, but it doesn't work for BigTable. BigTable knows how to get things by keys so you must make keys that return data in the proper order.

    In the grand old tradition of making unique keys we just keep appending stuff until it becomes unique. The suggested key for GAE is: time stamp + user ID + user comment ID.

    Ordering by date is obvious. The good thing is that getting a time stamp is a local decision; it doesn't rely on writes and is scalable. The problem is that timestamps are not unique, especially with a lot of users.

    So we can add the user name to the key to distinguish it from all other comments made at the same time. We already have the user name so this too is a cheap call.

    Theoretically even time stamps for a single user aren't sufficient. What we need then is a sequence number for each user's comments.

    And this is where the GAE solution turns into something totally unexpected. Our goal is to remove write contention so we want to parallelize writes. And we have a lot of available storage, so we don't have to worry about that.

    With these forces in mind, the idea is to create a counter per user. When a user adds a comment it's added to a user's comment list and a sequence number is allocated. Comments are added in a transactional context on a per user basis using Entity Groups. So each comment add is guaranteed to be unique because updates in an Entity Group are serialized.

    The resulting key is guaranteed unique and sorts properly in alphabetical order. When paging, a query is made across entity groups using the ID index. The results will be in the correct order. Paging is a matter of getting the previous and next keys in the query for the current page. These keys can then be used to move through the index.
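
    A sketch of how such a key scheme might look in code; the property names and padding widths are illustrative, not the exact GAE recipe. The ID packs timestamp + user ID + per-user sequence number, zero-padded so plain string ordering matches chronological ordering, and paging just compares IDs.

```python
# Page through comments by a composite, lexicographically sortable ID.
import time
from google.appengine.ext import db

class Comment(db.Model):
    sort_id = db.StringProperty(required=True)   # "timestamp|user|per-user seq"
    body = db.TextProperty()

def make_sort_id(user_id, user_comment_seq):
    # Zero-padding keeps string order identical to numeric/chronological order.
    return "%015d|%s|%010d" % (int(time.time()), user_id, user_comment_seq)

def next_page(after_sort_id, page_size=20):
    q = Comment.all().filter('sort_id >', after_sort_id).order('sort_id')
    return q.fetch(page_size)

def prev_page(before_sort_id, page_size=20):
    q = Comment.all().filter('sort_id <', before_sort_id).order('-sort_id')
    return list(reversed(q.fetch(page_size)))
```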

    I certainly would have never thought of this approach. The idea of keeping per user comment indexes is out there. But it cleverly follows the rules of scaling in a distributed system. Writes and reads are done in parallel and that's the goal. Write contention is removed.


    Monday
    Feb 16 2009

    Handle 1 Billion Events Per Day Using a Memory Grid

    Moshe Kaplan of RockeTier shows the life cycle of an affiliate marketing system that starts off as a cub handling one million events per day and ends up a lion handling 200 million to even one billion events per day. The resulting system uses ten commodity servers at a cost of $35,000. Mr. Kaplan's paper is especially interesting because it documents a system architecture evolution we may see a lot more of in the future: database centric --> cache centric --> memory grid. As scaling and performance requirements for complicated operations increase, leaving the entire system in memory starts to make a great deal of sense. Why use cache at all? Why shouldn't your system be all in memory from the start?

    General Approach to Evolving the System to Scale

  • Analyze the system architecture and the main business processes. Detect the main hardware bottlenecks and the related business process causing them. Focus efforts on points of greatest return.
  • Rate the bottlenecks by importance and provide immediate and practical recommendations to improve performance.
  • Implement the recommendations to provide immediate relief to problems. Risk is reduced by avoiding a full rewrite and spending a fortune on more resources.
  • Plan a road map for meeting next generation solutions.
  • Scale up and scale out when redesign is necessary.

    One Million Event Per Day System

  • The events are common advertising system operations like: ad impressions, clicks, and sales.
  • Typical two tier system. Impressions and banner sales are written directly to the database.
  • The immediate goal was to process 2.5 million events per day so something needed to be done.

    2.5 Million Event Per Day System

  • PerfMon was used to check web server and DB performance counters. CPU usage was at 100% at peak.
  • Immediate fixes included: tuning SQL queries, implementing stored procedures, using a PHP compiler, removing include files and fixing other programming errors.
  • The changes successfully doubled the performance of the system within 3 months. The next goal was to handle 20 million events per day.

    20 Million Event Per Day System

  • To make this scaling leap a rethinking of how the system worked was in order.
  • The main load of the system was validating inputs in order to prevent forgery.
  • A cache was maintained in the application servers to cut unnecessary database access. The result was 50% reduction in CPU utilization.
  • An in-memory database was used to accumulate transactions over time (impression counting, clicks, sales recording).
  • A periodic process was used to write transactions from the in-memory database to the database server.
  • This architecture could handle 20 million events using existing hardware.
  • Business projections required a system that could handle 200 million events.

    200 Million Event Per Day System

  • The next architectural evolution was to a scale out grid product. It's not mentioned in the paper but I think GigaSpaces was used.
  • A Layer 7 load balancer is used to route requests to sharded application servers. Each app server supports a different set of banners.
  • Data is still stored in the database as the data is used for statistics, reports, billing, fraud detection and so on.
  • Latency was slashed because logic was separated out of the HTTP request/response loop into a separate process and database persistence is done offline. At this point the architecture supports near-linear scaling and it's projected that it can easily scale to a billion events per day.

    Related Articles

  • GridGain: One Compute Grid, Many Data Grids


    Saturday
    Feb 14 2009

    Scaling Digg and Other Web Applications

    Joe Stump, Lead Architect at Digg, gave this presentation at the Web 2.0 Expo. I couldn't find the actual presentation, but fortunately Kris Jordan took some great notes. That's how key moments in history are accidentally captured forever. Joe was also kind enough to respond to my email questions with a phone call. In this first part of the post Joe shares some timeless wisdom that you may or may not have read before. I of course take some pains to extract all the wit from the original presentation in favor of simple rules. What really struck me however was how Joe thought MemcacheDB will be the biggest new kid on the block in scaling. MemcacheDB has been around for a little while and I've never thought of it in that way. We'll learn why Joe is so excited by MemcacheDB at the end of the post.

    Impressive Stats

  • 80th-100th largest site in the world
  • 26 million uniques a month
  • 30 million users.
  • Uniques are only half that traffic. Traffic = unique web visitors + APIs + Digg buttons.
  • 2 billion requests a month
  • 13,000 requests a second, peak at 27,000 requests a second.
  • 3 Sys Admins, 2 DBAs, 1 Network Admin, 15 coders, QA team
  • Lots of servers.

    Scaling Strategies

  • Scaling is specialization. When off the shelf solutions no longer work at a certain scale you have to create systems that work for your particular needs.
  • Lesson of web 2.0: people love making crap and sharing it with the world.
  • Web 2.0 sucks for scalability. Web 1.0 was flat with a lot of static files. Additional load is handled by adding more hardware. Web 2.0 is heavily interactive. Content can be created at a crushing rate.
  • Languages don't scale. 100% of the time bottlenecks are in IO. Bottlenecks aren't in the language when you are handling so many simultaneous requests. Making PHP 300% faster won't matter. Don't optimize PHP by using single quotes instead of double quotes when the database is pegged.
  • Don’t share state. Decentralize. Partitioning is required to process a high number of requests in parallel.
  • Scale out instead of up. Expect failures. Just add boxes to scale and avoid the fail.
  • Database-driven sites need to be partitioned to scale both horizontally and vertically. Horizontal partitioning means storing a subset of rows on different machines. It is used when there's more data than will fit on one machine. Vertical partitioning means putting some columns in one table and some columns in another table. This allows you to add data to the system without downtime.
  • Data are separated into separate clusters: User Actions, Users, Comments, Items, etc.
  • Build a data access layer so partitioning is hidden behind an API.
  • With partitioning comes the CAP Theorem: you can only pick two of the following three: Strong Consistency, High Availability, Partition Tolerance.
  • Partitioned solutions require denormalization, which has become a big problem at Digg. Denormalization means data is copied to multiple objects and must be kept synchronized.
  • MySQL replication is used to scale out reads.
  • Use an asynchronous queuing architecture for near-term processing. - This approach pushes chunks of processing to another service and lets that service schedule the processing on a grid of processors. - It's faster and more responsive than cron and only slightly less responsive than real-time. - For example, issuing 5 synchronous database requests slows you down. Do them in parallel. - Digg uses Gearman. An example use is to get a permalink. Three operations are done in parallel: get the currently logged-in user, get the permalink, and grab the comments. All three are then combined to return a combined single answer to the client. It's also used for site crawling and logging. It's a different way of thinking. - See Flickr - Do the Essential Work Up-front and Queue the Rest and The Canonical Cloud Architecture for more information.
  • Bottlenecks are in IO so you have to tune the database. When the database is bigger than RAM the disk is hit all the time, which kills performance. As the database gets larger the table can't be scanned anymore. So you have to: - denormalize - avoid joins - avoid large scans across databases by partitioning - cache - add read slaves - don't use NFS
  • Run numbers before you try and fix a problem to make sure things actually will work.
  • Files like icons and photos are handled by MogileFS, a distributed file system. DFSs support high request rates because files are distributed and replicated around a network.
  • Cache forever and explicitly expire.
  • Cache fairly static content in a file based cache.
  • Cache changeable items in memcached
  • Cache rarely changed items in APC. APC is a local cache. It's not distributed so no other program has access to the values.
  • For caching use the Chain of Responsibility pattern. Cache in MySQL, memcached, APC, and PHP globals. First check PHP globals as the fastest cache. If not present check APC, memcached and on up the chain (see the sketch after this list).
  • Digg's recommendation engine is a custom graph database that is eventually consistent. Eventually consistent means that writes to one partition will eventually make it to all the other partitions. After a write reads made one after another don't have to return the same value as they could be handled by different partitions. This is a more relaxed constraint than strict consistency which means changes must be visible at all partitions simultaneously. Reads made one after another would always return the same value.
  • Assume 1 million people a day will bang on any new feature, so make it scalable from the start. Example: the About page on Digg did a live query against the master database to show all employees. It was just a quick hack to get it out. Then a spider went crazy and took the site down.
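
    As promised above, here is a sketch of that Chain of Responsibility lookup, written in Python rather than Digg's PHP: check the cheapest layer first, fall back to slower ones, and warm the faster layers on the way back. The layer objects are illustrative stand-ins for PHP globals, APC, and memcached.

```python
# Chain of Responsibility cache: fastest layer first, warm misses on the way back.
class CacheLayer(object):
    def __init__(self, name, store):
        self.name = name
        self.store = store              # anything dict-like for this sketch

    def get(self, key):
        return self.store.get(key)

    def set(self, key, value):
        self.store[key] = value

def chained_get(key, layers, load_from_db):
    missed = []
    for layer in layers:                # e.g. [request_globals, apc, memcached]
        value = layer.get(key)
        if value is not None:
            break
        missed.append(layer)
    else:
        value = load_from_db(key)       # last resort: hit MySQL
    for layer in missed:                # populate the faster layers for next time
        layer.set(key, value)
    return value
```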

    Miscellaneous

  • Digg buttons were a major key to generating traffic.
  • Uses Debian Linux, Apache, PHP, MySQL.
  • Pick a language you enjoy developing in, pick a coding standard, add inline documentation that's extractable, use a code repository, and a bug tracker. Likes PHP, Trac, and SVN.
  • You are only as good as your people. You have to trust that the person next to you is doing their job. To cultivate trust, empower people to make decisions. Trust that people have it handled and they'll take care of it. It cuts down on meetings because you know people will do the job right.
  • Completely a Mac shop.
  • Almost all developers are local. Some people are remote to offer 24 hour support.
  • Joe's approach is pragmatic. He doesn't have a language fetish. People went from PHP, to Python/Ruby, to Erlang. Uses vim. Develops from the command line. Has no idea how people constantly change tool sets all the time. It's not very productive.
  • Services (SOA) decoupling is a big win. Digg uses REST. Internal services return a vanilla structure that's mapped to JSON, XML, etc. Version in URL because it costs you nothing, for example: /1.0/service/id/xml. Version both internal and external services.
  • People don't understand how many moving parts are in a website. Something is going to happen and it will go down.

    MemcacheDB: Evolutionary Step for Code, Revolutionary Step for Performance

    Imagine Kevin Rose, the founder of Digg, who at the time of this presentation had 40,000 followers. If Kevin diggs just once a day that's 40,000 writes. As the most active diggers are the most followed, it becomes a huge performance bottleneck. Two problems appear. You can't update 40,000 follower accounts at once. Fortunately the queuing system we talked about earlier takes care of that. The second problem is the huge number of writes that happen. Digg has a write problem. If the average user has 100 followers that's 300 million diggs a day. That's 3,000 writes per second, 7GB of storage per day, and 5TB of data spread across 50 to 60 servers. With such a heavy write load MySQL wasn't going to work for Digg. That's where MemcacheDB comes in.

    In initial tests on a laptop MemcacheDB was able to handle 15,000 writes a second. MemcacheDB's own benchmark shows it capable of 23,000 writes/second and 64,000 reads/second. At those write rates it's easy to see why Joe was so excited about MemcacheDB's ability to handle their digg deluge.

    What is MemcacheDB? It's a distributed key-value storage system designed for persistence. It is NOT a cache solution, but a persistent storage engine for fast and reliable key-value based object storage and retrieval. It conforms to the memcache protocol (not completely), so any memcached client can connect to it. MemcacheDB uses Berkeley DB as a storage backend, so lots of features including transactions and replication are supported. Before you get too excited keep in mind this is a key-value store. You read and write records by a single key. There aren't multiple indexes and there's no SQL. That's why it can be so fast.

    Digg uses MemcacheDB to scale out the huge number of writes that happen when data is denormalized. Remember it's a key-value store. The value is usually a complete application level object merged together from a possibly large number of normalized tables. Denormalizing introduces redundancies because you are keeping copies of data in multiple records instead of just one copy in a nicely normalized table. So denormalization means a lot more writes as data must be copied to all the records that contain a copy. To keep up they needed a database capable of handling their write load. MemcacheDB has the performance, especially when you layer memcached's normal partitioning scheme on top (a connection sketch follows the list below). I asked Joe why he didn't turn to one of the in-memory data grid solutions? Some of the reasons were:
  • This data is generated from many different databases and takes a long time to generate. So they want it in a persistent store.
  • MemcacheDB uses the memcache protocol. Digg already uses memcache so it's a no-brainer to start using MemcacheDB. It's easy to use and easy to setup.
  • Operations is happy with deploying it into the datacenter as it's not a new setup.
  • They already have memcached high availability and failover code so that stuff already works.
  • Using a new system would require more ramp-up time.
  • If there are any problems with the code you can take a look. It's all open source.
  • Not sure those other products are stable enough.

    So it's an evolutionary step for code and a revolutionary step for performance. Digg is looking at using MemcacheDB across the board.
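
    Because MemcacheDB speaks the memcache protocol, hooking it up really is the no-brainer described above: an ordinary memcached client pointed at the MemcacheDB node just works. A hedged sketch with the python-memcached client; the host/port, key names, and object shape are made up, not Digg's actual code.

```python
# Write denormalized objects to MemcacheDB through a plain memcached client.
import memcache

# Point the client at the MemcacheDB node(s) instead of a memcached pool.
store = memcache.Client(['10.0.0.5:21201'])

def record_digg(user_id, item_id, follower_ids):
    # Denormalized: one pre-merged "friends' diggs" list per follower,
    # so reads are a single get by key.
    for follower_id in follower_ids:
        key = 'friend_diggs:%s' % follower_id
        diggs = store.get(key) or []
        diggs.insert(0, {'user': user_id, 'item': item_id})
        store.set(key, diggs[:100])   # keep each list bounded
```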

    Related Articles

  • Scaling Digg and Other Web Applications by Kris Jordan.
  • MemcacheDB
  • Joe Stump's Blog
  • Memcached Related Tags on HighScalability
  • Caching Related Tags on HighScalability
  • BigTable
  • SimpleDB
  • Anti-RDBMS: A list of distributed key-value stores
  • An Unorthodox Approach to Database Design : The Coming of the Shard
  • Flickr Architecture
  • Episode 4: Scaling Large Web Sites with Joe Stump, Lead Architect at DIGG


    Thursday
    Feb 12 2009

    MySpace Architecture

    Update: Presentation: Behind the Scenes at MySpace.com. Dan Farino, Chief Systems Architect at MySpace, shares details of some of MySpace's cool internal operations tools. MySpace.com is one of the fastest growing sites on the Internet, with 65 million subscribers and 260,000 new users registering each day. Often criticized for poor performance, MySpace has had to tackle scalability issues few other sites have faced. How did they do it? Site: http://myspace.com

    Information Sources

  • Presentation: Behind the Scenes at MySpace.com
  • Inside MySpace.com

    Platform

  • ASP.NET 2.0
  • Windows
  • IIS
  • SQL Server

    What's Inside?

  • 300 million users.
  • Pushes 100 gigabits/second to the internet. 10Gb/sec is HTML content.
  • 4,500+ web servers running Windows 2003/IIS 6.0/ASP.NET.
  • 1,200+ cache servers running 64-bit Windows 2003. 16GB of objects cached in RAM.
  • 500+ database servers running 64-bit Windows and SQL Server 2005.
  • MySpace processes 1.5 billion page views per day and handles 2.3 million concurrent users during the day.
  • Membership Milestones: - 500,000 Users: A Simple Architecture Stumbles - 1 Million Users:Vertical Partitioning Solves Scalability Woes - 3 Million Users: Scale-Out Wins Over Scale-Up - 9 Million Users: Site Migrates to ASP.NET, Adds Virtual Storage - 26 Million Users: MySpace Embraces 64-Bit Technology
  • 500,000 accounts was too much load for two web servers and a single database.
  • At 1-2 Million Accounts - They used a database architecture built around the concept of vertical partitioning, with separate databases for parts of the website that served different functions such as the log-in screen, user profiles and blogs. - The vertical partitioning scheme helped divide up the workload for database reads and writes alike, and when users demanded a new feature, MySpace would put a new database online to support it. - MySpace switched from using storage devices directly attached to its database servers to a storage area network (SAN), in which a pool of disk storage devices are tied together by a high-speed, specialized network, and the databases connect to the SAN. The change to a SAN boosted performance, uptime and reliability.
  • At 3 Million Accounts - the vertical partitioning solution didn't last because they replicated some horizontal information like user accounts across all vertical slices. With so many replications one would fail and slow down the system. - individual applications like blogs on sub-sections of the Web site would grow too large for a single database server - Reorganized all the core data to be logically organized into one database - split its user base into chunks of 1 million accounts and put all the data keyed to those accounts in a separate instance of SQL Server
  • 9 Million–17 Million Accounts - Moved to ASP.NET which used less resources than their previous architecture. 150 servers running the new code were able to do the same work that had previously required 246. - Saw storage bottlenecks again. Implementing a SAN had solved some early performance problems, but now the Web site's demands were starting to periodically overwhelm the SAN's I/O capacity—the speed with which it could read and write data to and from disk storage. - Hit limits with the 1 million-accounts-per-database division approach as these limits were exceeded. - Moved to a virtualized storage architecture where the entire SAN is treated as one big pool of storage capacity, without requiring that specific disks be dedicated to serving specific applications. MySpace now standardized on equipment from a relatively new SAN vendor, 3PARdata
  • Added a caching tier—a layer of servers placed between the Web servers and the database servers whose sole job was to capture copies of frequently accessed data objects in memory and serve them to the Web application without the need for a database lookup.
  • 26 Million Accounts - Moved to 64-bit SQL server to work around their memory bottleneck issues. Their standard database server configuration uses 64 GB of RAM.
  • Horizontally federated database. Databases are partitioned by purpose: profile, email databases, etc. Partitioning is based on user ranges. 1 million users live in each database. So you have Profile1, Profile2, all the way up to Profile300, as they have 300 million users.
  • Doesn't use ASP cache because they don't have a high enough hit rate on the front-end. The middle tier cache does have a high hit rate.
  • Failure isolation. Segment requests into web servers by database. Allow only 7 threads per database. So if the database is slow only those threads will slow down and the traffic in the other threads will flow.
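
    An illustrative sketch of those last two ideas, written in Python for brevity rather than MySpace's actual ASP.NET code: route a user to a ProfileN database by 1-million-user ranges, and cap concurrent work per database so one slow shard can't drag the whole farm down.

```python
# Range-based shard routing plus a per-database concurrency cap (failure isolation).
import threading

USERS_PER_SHARD = 1000000
MAX_THREADS_PER_DB = 7                 # the per-database limit from the text

_gates = {}                            # shard name -> semaphore

def shard_for(user_id):
    return "Profile%d" % (user_id // USERS_PER_SHARD + 1)

def with_db(user_id, work):
    name = shard_for(user_id)
    gate = _gates.setdefault(name, threading.BoundedSemaphore(MAX_THREADS_PER_DB))
    if not gate.acquire(False):        # don't block: shed load if the shard is saturated
        raise RuntimeError("%s is saturated, shedding request" % name)
    try:
        return work(name)              # e.g. run the query against that database
    finally:
        gate.release()
```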

    Operations

  • PerfCollector. Centralized collection of performance data via UDP. More reliable than Windows and allows any client to connect and see stats.
  • Web Based Stack Dump Tool. Can right-click on a problem server and get a stack dump of the .NET managed threads. Used to have to RDC into the system, attach a debugger, and get an answer a half hour later. Slow, nonscalable, and tedious. It's not just a stack dump; it gives a lot of context about what the thread is doing. Troubleshooting is easier because you can see that 90 threads are blocked on a database, so the database may be down.
  • Web Based Heap Dump Tool. Dumps all memory allocations. Very useful for developers. Saves hours of doing it by hand.
  • Profiler. Traces a request from start to finish and produces a report. See URL, methods, status, everything that will help you identify a slow request. Looks at lock contention, whether a lot of exceptions are being thrown, anything that might be interesting. Very lightweight. It's running on one box in every VIP (group of 100 servers) in production. Samples 1 thread every 10 seconds. Always tracing in the background.
  • Powershell. Microsoft's new shell that runs in process and passes objects between commands versus parsing text output. MySpace develops a lot of commandlets to support operations.
  • Developed their own asynchronous communication technology to get around Windows networking problems and treat servers as a group. Can ship a .cs file, compile it, run it, and ship the response back.
  • Codespew. Pushes code updates on their communication technology. Used to do 5 code pushes a day, now down to 1 a week.

    Lessons Learned

  • You can build big websites using Microsoft tech.
  • A cache should have been used from the beginning.
  • The cache is a better place to store transitory data that doesn't need to be recorded in a database, such as temporary files created to track a particular user's session on the Web site.
  • Built-in OS features to detect denial of service attacks can cause inexplicable failures.
  • Distribute your data to geographically diverse data centers to handle power failures.
  • Consider using virtualized storage/clustered file systems from the start. It allows you to massively parallelize IO access while being able to add disk as needed without any reorganization needed.
  • Develop tools that work in a production environment. Can't simulate everything in test environment. The scale and variety of uses APIs are put to can't be simulated in QA during testing. Legitimate users and hackers will run into corner cases that weren't hit in testing, though QA will find most of the problems.
  • Throw hardware at problems. Easier than changing their backend software to a new way of doing things. The example is they add a new database server for every million users. It might be more efficient to change their approach to more efficiently use the database hardware, but it's easier just to add servers. For now.


    Monday
    Feb 9 2009

    Paper: Consensus Protocols: Two-Phase Commit  

    Henry Robinson has created an excellent series of articles on consensus protocols. Henry starts with a very useful discussion of what all this talk about consensus really means: the consensus problem is the problem of getting a set of nodes in a distributed system to agree on something - it might be a value, a course of action or a decision. Achieving consensus allows a distributed system to act as a single entity, with every individual node aware of and in agreement with the actions of the whole of the network.

    In this article Henry tackles Two-Phase Commit, the protocol most databases use to arrive at a consensus for database writes. The article is very well written, with lots of pretty and informative pictures. He did a really good job. In conclusion we learn 2PC is very efficient: a minimal number of messages are exchanged and latency is low. The problem is that when a coordinator fails, availability is dramatically reduced. This is why 2PC isn't generally used on highly distributed systems. To solve that problem we have to move on to different algorithms, and that is the subject of other articles.
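
    For readers who want the shape of the protocol without the pictures, here is a bare-bones sketch of the two-phase commit flow Henry describes; the participant objects and their prepare/commit/abort methods are hypothetical. The weakness is implicit in the structure: if the coordinator dies between the two phases, prepared participants never hear a decision and block.

```python
# Coordinator side of two-phase commit: vote first, then commit only on unanimity.
def two_phase_commit(participants, txn):
    # Phase 1: prepare (voting). Each participant durably records its vote.
    votes = []
    for p in participants:
        try:
            votes.append(p.prepare(txn))
        except Exception:
            votes.append(False)        # an unreachable participant counts as "no"
    # Phase 2: decision. All yes -> commit everywhere, otherwise abort everywhere.
    if all(votes):
        for p in participants:
            p.commit(txn)
        return True
    for p in participants:
        p.abort(txn)
    return False
```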
