Friday
Mar 05, 2010

Strategy: Planning for a Power Outage Google Style

We can all learn from problems. The Google App Engine team has created a teachable moment with its remarkably honest and forthcoming post-mortem for the February 24th, 2010 outage, chronicling in elaborate detail a power failure that took down Google App Engine for a few hours.

The world is ending! The cloud is unreliable! Jump ship! Not. This is not the tale of a beautiful, powerful, unsinkable ship going down on its maiden voyage. Stuff happens, no matter how well you prepare. If you think private datacenters don't go down, well, then I have some rearrangeable deck chairs to sell you. The goal is to keep improving and to keep shrinking those failure windows. From that perspective there is a lot to learn from the problems the Google App Engine team encountered and how they plan to fix them.

Please read the article for all the juicy details, but here's what struck me as key:

Click to read more ...

Thursday
Mar 04, 2010

How MySpace Tested Their Live Site with 1 Million Concurrent Users

This is a guest post by Dan Bartow, VP of SOASTA, talking about how they pelted MySpace with 1 million concurrent users using 800 EC2 instances. I thought this was an interesting story because: that's a lot of users, it takes big cojones to test your live site like that, and not everything worked out quite as expected. I'd like to thank Dan for taking the time to write and share this article.

In December of 2009 MySpace launched a new wave of streaming music video offerings in New Zealand, building on the previous success of MySpace music. These new features included the ability to watch music videos, search for artists' videos, create lists of favorites, and more. The anticipated load increase from a feature like this on a popular site like MySpace is huge, and they wanted to test these features before making them live.

If you manage the infrastructure that sits behind a high traffic application you don’t want any surprises.  You want to understand your breaking points, define your capacity thresholds, and know how to react when those thresholds are exceeded.  Testing the production infrastructure with actual anticipated load levels is the only way to understand how things will behave when peak traffic arrives. 

For MySpace, the goal was to test an additional 1 million concurrent users on their live site stressing the new video features.  The key word here is ‘concurrent’.  Not over the course of an hour or day… 1 million users concurrently active on the site. It should be noted that 1 million virtual users are only a portion of what MySpace typically has on the site during its peaks.  They wanted to supplement the live traffic with test traffic to get an idea of the overall performance impact of the new launch on the entire infrastructure.  This requires a massive amount of load generation capability, which is where cloud computing comes into play. To do this testing, MySpace worked with SOASTA to use the cloud as a load generation platform. 
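
To give a feel for the mechanics (this is my own sketch, not SOASTA's actual tooling), each cloud instance in a test like this runs an agent that keeps a fixed number of virtual users active against the target site. The URL, per-instance user count, and duration below are invented placeholders.

```python
# Toy load-generation agent: one of many cloud instances, each holding a
# fixed number of concurrent "virtual users" open against the target site.
# The URL, user count, and duration are illustrative placeholders.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET_URL = "http://test.example.com/videos/search?q=artist"  # hypothetical endpoint
CONCURRENT_USERS = 1250   # e.g. 800 instances x 1,250 users = 1M concurrent
DURATION_SECS = 600

def virtual_user(user_id):
    """Issue requests back to back until the test window closes."""
    deadline = time.time() + DURATION_SECS
    ok, errors = 0, 0
    while time.time() < deadline:
        try:
            with urllib.request.urlopen(TARGET_URL, timeout=10) as resp:
                resp.read()
            ok += 1
        except Exception:
            errors += 1
    return ok, errors

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        results = list(pool.map(virtual_user, range(CONCURRENT_USERS)))
    print(f"requests={sum(r[0] for r in results)} errors={sum(r[1] for r in results)}")
```

A real test also has to aggregate results from every instance in near real time, which is where most of the engineering effort goes.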

Here are the details of the load that was generated during testing.

Click to read more ...

Wednesday
Mar 03, 2010

Hot Scalability Links for March 3, 2010

  • Getting Real about NoSQL and the SQL-Isn't-Scalable Lie by Dennis Forbes. Buoyed by Canada's Olympic success, Dennis is going for the gold in that least real of sports, the NoSQL vs SQL pursuit.
  • Design Patterns for Distributed Non-Relational Databases by Todd Lipcon. Great coverage of consistent hashing, consistency models, data models, storage layouts, log-structured merge trees, and gossip protocols. (A small consistent-hashing sketch follows this list.)
  • Brewer's CAP Conjecture is False. Jim Starkey makes the case that CAP is crap.
  • Kaazing Pushes Web Sockets to Make Browsers Real Time. Bi-directional communication comes to the web, but shouldn't sockets be able to accept connections too?
  • 4 Months with Cassandra, a love story. Cloudkick likes its linear scalability, massive write performance, and low operational costs: "We'll likely keep moving more data into Cassandra as we need to, but for some data the ability to write arbitrary SQL queries is still very useful."
  • Click to read more ...
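
To make the first of Todd's topics concrete, here is a minimal consistent-hashing ring, assuming nothing beyond the general technique: each node is hashed to many points on a ring (virtual nodes smooth out the balance), and a key is owned by the first node clockwise from its hash, so adding or removing a node only remaps a small slice of keys.

```python
# Minimal consistent-hash ring with virtual nodes (illustrative only).
import bisect
import hashlib

class ConsistentHashRing:
    def __init__(self, nodes=(), replicas=100):
        self.replicas = replicas          # virtual nodes per physical node
        self._ring = []                   # sorted list of (hash, node)
        for node in nodes:
            self.add_node(node)

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add_node(self, node):
        for i in range(self.replicas):
            bisect.insort(self._ring, (self._hash(f"{node}:{i}"), node))

    def remove_node(self, node):
        self._ring = [(h, n) for h, n in self._ring if n != node]

    def get_node(self, key):
        """Return the first node clockwise from the key's position on the ring."""
        if not self._ring:
            return None
        idx = bisect.bisect(self._ring, (self._hash(key), ""))
        if idx == len(self._ring):
            idx = 0                       # wrap around the ring
        return self._ring[idx][1]

ring = ConsistentHashRing(["node-a", "node-b", "node-c"])
print(ring.get_node("user:12345"))        # stays mostly stable as nodes come and go
```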

    Tuesday
    Mar 02, 2010

    Using the Ambient Cloud as an Application Runtime

    This is an excerpt from my article Building Super Scalable Systems: Blade Runner Meets Autonomic Computing in the Ambient Cloud.

    The future looks many, big, complex, and adaptive:

    1. Many clouds.
    2. Many servers.
    3. Many operating systems.
    4. Many languages.
    5. Many storage services.
    6. Many database services.
    7. Many software services.
    8. Many adjunct human networks (like Mechanical Turk).
    9. Many fast interconnects.
    10. Many CDNs.
    11. Many cache memory pools.
    12. Many application profiles (simple request-response, live streaming, computationally complex, sensor driven, memory intensive, storage intensive, monolithic, decomposable, etc).
    13. Many legal jurisdictions. If you don't want to perform a function on Patriot Act "protected" systems, then move that function elsewhere.
    14. Many SLAs.
    15. Many data-driven pricing policies that, like airline pricing algorithms, will price "seats" to maximize profit using multi-variate, time-sensitive pricing models.
    16. Many competitive products. The need to defend your territory never seems to go away. Though what will map to scent-marking I'm not sure.
    17. Many and evolving resource gradients.
    18. Big concurrency. Everyone and everything is a potential source of real-time data that needs to be processed in parallel to be processed at all within tolerable latencies.
    19. Big redundancy. Redundant nodes in an unpredictable world provide cover for component failures, with spare workers ready to take over when another fails.
    20. Big crushing transient traffic spikes as new mega worldwide social networks rapidly shift their collective attention from new shiny thing to new shiny thing.
    21. Big increases in application complexity to keep streams synchronized across networks. Event handling will go off the charts as networks grow larger and denser and intelligent behaviour attaches to billions of events generated per second.
    22. Big data. Sources and amounts of historical and real-time data are increasing at increasing rates.

    This challenging, energetic, ever-changing world looks very different from today's. It's as if Bambi were dropped into the middle of a Velociraptor pack.

    Click to read more ...

    Friday
    Feb 26, 2010

    MySQL and Memcached: End of an Era?

    If you look at the early days of this blog, when web scalability was still in its heady bloom of youth, many of the articles had to do with leveraging MySQL and memcached. Exciting times. Shard MySQL to handle high write loads, cache objects in memcached to handle high read loads, and then write a lot of glue code to make it all work together. That was state of the art, that was how it was done. The architectures of many major sites still follow this pattern today, largely because, with enough elbow grease, it works.
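
    For anyone who missed that era, the glue code was roughly this shape: pick a shard from the key, check memcached on reads, fall through to MySQL on a miss, and invalidate on writes. Here's a hedged sketch using the python-memcached and PyMySQL client libraries; the shard layout, table, and TTL are invented for illustration.

```python
# Cache-aside reads over sharded MySQL, fronted by memcached (illustrative sketch).
# The shard layout, table schema, and TTL are made up for the example.
import json
import memcache   # python-memcached client
import pymysql

CACHE = memcache.Client(["127.0.0.1:11211"])
SHARDS = [  # one connection config per MySQL shard
    {"host": "db0.internal", "user": "app", "password": "secret", "database": "site"},
    {"host": "db1.internal", "user": "app", "password": "secret", "database": "site"},
]
CACHE_TTL = 300  # seconds

def _shard_for(user_id):
    return pymysql.connect(**SHARDS[user_id % len(SHARDS)])

def get_profile(user_id):
    key = f"profile:{user_id}"
    cached = CACHE.get(key)
    if cached is not None:
        return json.loads(cached)               # cache hit
    conn = _shard_for(user_id)                  # cache miss: go to the shard
    try:
        with conn.cursor(pymysql.cursors.DictCursor) as cur:
            cur.execute("SELECT id, name, bio FROM profiles WHERE id = %s", (user_id,))
            row = cur.fetchone()
    finally:
        conn.close()
    if row is not None:
        CACHE.set(key, json.dumps(row), time=CACHE_TTL)
    return row

def update_bio(user_id, bio):
    conn = _shard_for(user_id)
    try:
        with conn.cursor() as cur:
            cur.execute("UPDATE profiles SET bio = %s WHERE id = %s", (bio, user_id))
        conn.commit()
    finally:
        conn.close()
    CACHE.delete(f"profile:{user_id}")          # invalidate so the next read refills
```

    The pain was never any single function like this; it was keeping thousands of variations of it consistent as the data model and shard map evolved.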

    This was a pre-cloud, relational database dominated world, built from parts scrounged from the remnants of enterprises and datacenters past. Twitter and Digg started in this era, but are evolving into something different, as scaling pressures increase and new purpose built technologies pop into being.

    With a little perspective, it's clear the MySQL+memcached era is passing. It will stick around for a while. Old technologies seldom fade away completely. Some still ride horses. Some still use CDs. And the Internet will not completely replace that archaic electro-magnetic broadcast technology called TV, but the majority will move on into a new era.

    Click to read more ...

    Thursday
    Feb 25, 2010

    Paper: High Performance Scalable Data Stores 

    The world of scalable databases is not a simple one. They come in every race, creed, and color. Rick Cattell has brought some harmony to that world by publishing High Performance Scalable Data Stores, a nicely detailed, one-stop-shop paper comparing scalable databases solely on the content of their character. Ironically, the first step in that evaluation is dividing the world into four groups:

    • Key-value stores: Redis, Scalaris, Voldemort, and Riak.
    • Document stores: CouchDB, MongoDB, and SimpleDB.
    • Record stores: BigTable, HBase, HyperTable, and Cassandra.
    • Scalable RDBMSs: MySQL Cluster, ScaleDB, Drizzle, and VoltDB.

    The paper describes each system and then compares them on the dimensions of Concurrency Control, Data Storage, Replication, Transaction Model, General Comments, Maturity, K-hits, License, and Language.

    And the winner is: there are no winners. Yet. Rick concludes by pointing to a great convergence:

    I believe that a few of these systems will gain critical mass and key players, and will pull away from the others by next year.  At that point, open source contributors will likely migrate to those players.

    From the paper:

     

    Click to read more ...

    Wednesday
    Feb 24, 2010

    Hot Scalability Links for February 24, 2010

  • Cassandra @ Twitter: An Interview with Ryan King. Great interview by Alex Popescu on Twitter's thought process for switching to Cassandra. Twitter chose Cassandra because it had more big system features out of the box. Is that Cassandra FTW?
  • I Had Downtime Today. Here’s What I’m Doing About It by Patrick McKenzie. Awesome deep dive into what went wrong with Bingo Card Creator. Sh*t happens. How do you design a process to help prevent it from happening, and how do you deal with problems with integrity when they do?
  • High Availability Principle : Request Queueing by Ashish Soni. Queue requests to ride out traffic spikes: 1) request queuing allows your system to operate at optimal throughput, 2) your users experience only linear degradation rather than exponential degradation, and 3) your system experiences NO degradation. (A toy queueing sketch follows this list.)
  • pfffft twatter tweeter by Knowbuddy. The reason you should care [about NoSQL] is because now you have more options--you're not stuck trying to wedge your system into a relational model if you don't want to. And isn't /. all about freedom of choice?
  • Wordpress, Varnish and Edge Side Includes. Using Varnish to go from .63 requests per second to 537.44 requests per second.
  • Facebook’s Petabyte Scale Data Warehouse using Hive and Hadoop by Ashish Thusoo and Namit Jain. How does Facebook deal with 12 TB of compressed new data every day? They get a bad case of the Hives.
  • Click to read more ...
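
    A toy version of the request-queueing principle mentioned above (my own sketch, not Ashish's code): accept requests into a bounded queue and drain it with a fixed-size worker pool, so the backend runs at its sustainable rate while a spike only lengthens the queue, or is shed explicitly when the queue is full.

```python
# Toy request-queueing front: a bounded queue absorbs spikes while a
# fixed worker pool drains it at the backend's sustainable rate.
import queue
import threading
import time

REQUEST_QUEUE = queue.Queue(maxsize=1000)   # bound keeps memory in check
WORKERS = 8                                 # sized to backend capacity

def handle(request):
    time.sleep(0.05)                        # stand-in for real backend work
    return f"done: {request}"

def worker():
    while True:
        request = REQUEST_QUEUE.get()
        try:
            handle(request)
        finally:
            REQUEST_QUEUE.task_done()

def accept(request):
    """Called by the front end; rejects instead of melting down when full."""
    try:
        REQUEST_QUEUE.put_nowait(request)
        return True
    except queue.Full:
        return False                        # shed load explicitly

for _ in range(WORKERS):
    threading.Thread(target=worker, daemon=True).start()

accepted = sum(accept(f"req-{i}") for i in range(5000))  # simulated spike
REQUEST_QUEUE.join()
print(f"accepted {accepted} of 5000 during the spike")
```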

    Tuesday
    Feb 23, 2010

    Sponsored Post: Job Openings - Squarespace

    There was a bit of drama earlier when I posted a free job opening for Zynga. It caused unfortunate and just plain wrong accusations. It also caused a number of requests for more free job posts, which I should have anticipated, but obviously I can't let this blog become cluttered with that kind of stuff. Earlier I tried a job board type service, but that never really worked out. So what to do? Someone suggested a sponsored post approach, and I think that's a good compromise. It minimizes the noise, lets people know about work, and brings in a little revenue. It works like an advertisement. If you are interested please let me know. When we have any job openings there will be a sponsored post like this one, which you can easily ignore or pay attention to, depending on your situation.

    Squarespace Looking for Full-time Scaling Expert

    Interested in helping a cutting-edge, high-growth startup scale? Squarespace, which was profiled here last year in Squarespace Architecture - A Grid Handles Hundreds of Millions of Requests a Month and also hosts this blog, is currently in the market for a crack scalability engineer to help build out its cloud infrastructure. Squarespace is very excited about finding a full-time scaling expert.

    Interested applicants should go to http://www.squarespace.com/jobs-software-engineer for more information.

    Tuesday
    Feb 23, 2010

    When to migrate your database?

    Why migrate your database? Efficiency and availability problems are harming your business: reports are out of date, your batch processing window is nearing its limits, and outages (unplanned and planned) frequently halt work. Database consolidation removes the costs that result from a heterogeneous database environment (DBA time, database vendor pricing, database versions, hardware, OSs, patches, upgrades, etc.). OK, so the driving forces for migration are clear. What now?

    Read more on BigDataMatters.com

    Friday
    Feb 19, 2010

    Twitter’s Plan to Analyze 100 Billion Tweets

    If Twitter is the “nervous system of the web,” as some people think, then what is the brain that makes sense of all those signals (tweets) from the nervous system? That brain is the Twitter Analytics System, and Kevin Weil, as Analytics Lead at Twitter, is the homunculus within, in charge of figuring out what those 100 billion-plus tweets (approximately the number of neurons in the human brain) mean.

    Twitter has only 10% of the expected 100 billion tweets now, but a good brain always plans ahead. Kevin gave a talk, Hadoop and Protocol Buffers at Twitter, at the Hadoop Meetup, explaining how Twitter plans to use all that data to answer key business questions.
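
    As a stand-in for the flavor of those jobs (the real pipeline stores protocol-buffer-encoded records and runs much richer analyses), here is a hypothetical Hadoop Streaming job that counts tweets per day; the tab-separated input schema is invented for the example.

```python
# Hypothetical Hadoop Streaming job counting tweets per day.
# The tab-separated input schema is invented; Twitter's real pipeline
# stores protobuf-encoded records, per Kevin's talk.
import sys

def mapper():
    for line in sys.stdin:
        fields = line.rstrip("\n").split("\t")
        if len(fields) < 3:
            continue                      # skip malformed records
        created_date = fields[2][:10]     # assume an ISO timestamp in column 3
        print(f"{created_date}\t1")

def reducer():
    # Hadoop Streaming delivers the mapper output sorted by key,
    # so identical days arrive contiguously and can be summed in one pass.
    current_day, count = None, 0
    for line in sys.stdin:
        day, n = line.rstrip("\n").split("\t")
        if day != current_day:
            if current_day is not None:
                print(f"{current_day}\t{count}")
            current_day, count = day, 0
        count += int(n)
    if current_day is not None:
        print(f"{current_day}\t{count}")

if __name__ == "__main__":
    # e.g. run with: -mapper "count.py map" -reducer "count.py reduce"
    mapper() if sys.argv[1] == "map" else reducer()
```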

    What type of questions is Twitter interested in answering? Questions that help them better understand Twitter. Questions like:

    Click to read more ...