
Friday
Feb 19, 2010

Twitter’s Plan to Analyze 100 Billion Tweets

If Twitter is the “nervous system of the web,” as some people think, then what is the brain that makes sense of all those signals (tweets) from the nervous system? That brain is the Twitter Analytics System, and Kevin Weil, as Analytics Lead at Twitter, is the homunculus within, charged with figuring out what those more than 100 billion tweets (approximately the number of neurons in the human brain) mean.

Twitter has only 10% of the expected 100 billion tweets now, but a good brain always plans ahead. Kevin gave a talk, Hadoop and Protocol Buffers at Twitter, at the Hadoop Meetup, explaining how Twitter plans to use all that data to answer key business questions.

What type of questions is Twitter interested in answering? Questions that help them better understand Twitter.


Monday
Feb 8, 2010

How FarmVille Scales to Harvest 75 Million Players a Month

Several readers had follow-up questions in response to this article. Luke's responses can be found in How FarmVille Scales - The Follow-up.

If real farming were as comforting as it is in Zynga's mega-hit FarmVille then my family would probably never have left those harsh North Dakota winters. None of the scary bedtime stories my Grandma used to tell about farming are true in FarmVille. Farmers make money, plants grow, and animals never visit the red barn. I guess it's just that keep-your-shoes-clean back-to-the-land charm that has helped make FarmVille the "largest game in the world" in such an astonishingly short time.

How did FarmVille scale a web application to handle 75 million players a month? Fortunately FarmVille's Luke Rajlich has agreed to let us in on a few of their challenges and secrets. Here's what Luke has to say...


Friday
Jan 22, 2010

How BuddyPoke Scales on Facebook Using Google App Engine

How do you scale a viral Facebook app that has skyrocketed to a mind boggling 65 million installs (the population of France)? That's the fortunate problem BuddyPoke co-founder Dave Westwood has and he talked about his solution at Wednesday's Facebook Meetup. Slides for the complete talk are here. For those not quite sure what BuddyPoke is, it's a social network application that lets users show their mood, hug, kiss, and poke their friends through on-line avatars.

In many ways BuddyPoke is the quintessentially modern web application. It thrives off the energy of social network driven ecosystems. Game play mechanics, viral loops, and creative monetization strategies are all part of its everyday conceptualization. It mashes together different technologies, not in a dark Frankensteining sort of way, but in a smart way that gets the most bang for the buck. Part of it runs on Facebook servers (free). Part of it runs on Flash in a browser (free). Part of it runs on a storage cloud (higher cost). And part of it runs on a Platform as a Service environment (that's GAE) (low cost). It also integrates tightly with other services like PayPal (a slice). Real $$$ are made selling virtual goods like gold coins redeemable in pokes. Users can also have their avatars made into dolls, t-shirts, and a whole army of other Zazzle powered gifts.


Tuesday
Nov 17, 2009

10 eBay Secrets for Planet Wide Scaling

You don't even have to make a bid: Randy Shoup, an eBay Distinguished Architect, gives this presentation on how eBay scales, for free. Randy has done a fabulous job, in this presentation and in the other talks listed at the end of this post, of getting at the heart of the principles behind scalability. It's more about ideas of how things work and fit together than about focusing on a particular technology stack.

Impressive Stats

In case you weren't sure, eBay is big, with lots of: users, data, features, and change...

  • Over 89 million active users worldwide
  • 190 million items for sale in 50,000 categories
  • Over 8 billion URL requests per day
  • Hundreds of new features per quarter
  • Roughly 10% of items are listed or ended every day
  • In 39 countries and 10 languages
  • 24x7x365
  • 70 billion read / write operations / day
  • Processes 50TB of new, incremental data per day
  • Analyzes 50PB of data per day

10 Lessons


Tuesday
Oct 13, 2009

Why are Facebook, Digg, and Twitter so hard to scale?

Real-time social graphs (connectivity between people, places, and things). That's why scaling Facebook is hard says Jeff Rothschild, Vice President of Technology at Facebook. Social networking sites like Facebook, Digg, and Twitter are simply harder than traditional websites to scale. Why is that? Why would social networking sites be any more difficult to scale than traditional web sites? Let's find out.

Traditional websites are easier to scale than social networking sites for two reasons:


Tuesday
Sep 22, 2009

How Ravelry Scales to 10 Million Requests Using Rails

Tim Bray has a wonderful interview with Casey Forbes, creator of Ravelry, a Ruby on Rails site supporting a 400,000+ strong community of dedicated knitters and crocheters.

Casey and his small team have done great things with Ravelry. It is a very focused site that provides a lot of value for users. And users absolutely adore the site. That's obvious from their enthusiastic comments and rocket fast adoption of Ravelry.

Ten years ago a site like Ravelry would have been a multi-million dollar operation. Today Casey is the sole engineer for Ravelry and it takes only a few people to run it. He was able to code it in 4 months working nights and weekends. Take a look below at all the technologies used to make Ravelry and you'll see how it is constructed almost completely from free, off-the-shelf software that Casey has stitched together into a complete system. There's an amazing amount of leverage in today's ecosystem when you combine all the quality tools, languages, storage, bandwidth and hosting options.

Now Casey and several employees make a living from Ravelry. Isn't that the dream of any small business? How might you go about doing the same thing?

Site: http://www.ravelry.com

Statistics

  • 10 million requests a day hit Rails (AJAX + RSS + API)
  • 3.6 million pageviews per day
  • 430,000 registered users. 70,000 active each day. 900 new sign ups per day.
  • 2.3 million knitting/crochet projects, 50,000 new forum posts each day, 19 million forum posts, 13 million private messages, 8 million photos (the majority are hosted by Flickr).
  • Started on a small VPS and demand exploded from the start.
  • Monetization: advertisers + merchandise store + pattern sales

    Platform

  • Ruby on Rails (1.8.6, Ruby GC patches)
  • Percona build of MySQL
  • Gentoo Linux
  • Servers: Silicon Mechanics (owned, not leased)
  • Hosting: Colocation with Hosted Solutions
  • Bandwidth: Cogent (very cheap)
  • Capistrano for deployment.
  • Nginx is much faster and less memory hungry than Apache.
  • Xen for virtualization
  • HAproxy for load balancing.
  • Munin for monitoring.
  • Tokyo Cabinet/Tyrant for large object caching
  • Nagios for alerts
  • HopToad for exception notifications.
  • NewRelic for tuning
  • Syslog-ng for log aggregation
  • S3 for storage
  • Cloudfront as a CDN
  • Sphinx for the search engine
  • Memcached for small object caching

    Architecture

  • 7 Servers (Gentoo Linux). Virtualization (Xen) creates 13 virtual servers.
  •  Front end uses Nginx and HAproxy. The request flow: nginx -> haproxy -> (load balanced) -> apache + mod_passenger. Nginx is first so it can provide functions like serving static files and redirects before passing a request to HAproxy for load balancing. Apache is probably used because it is more configurable than Nginx.
  •  One small backup server.
  • One small utility server for non-critical processes and staging.
  • Two servers with 32 GB of RAM each for the master database, slave database, and Sphinx search engine.
  •  3 application servers running 6 Apache Passenger and Ruby instances, each capped at a pool size of 20. 6 quad core processors and 40 GB of RAM total. There's RAM to spare.
  • 5 terabytes of storage on Amazon S3. Cloudfront is used as a CDN.
  • Tokyo Cabinet/Tyrant is used instead of memcached in some places for caching larger objects, specifically markdown text that has been converted to HTML (see the sketch after this list).
  • HAproxy and Capistrano are used for rolling deploys of new versions of the site without affecting performance/traffic.
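
    That markdown caching pattern is easy to picture in code. Tokyo Tyrant can speak a memcached-compatible protocol, so a standard memcache client can talk to it; the host/port, key scheme, and function name below are assumptions for this sketch (Ravelry's real code is Ruby/Rails), not their actual implementation.

        # Sketch: cache markdown rendered to HTML in Tokyo Tyrant.
        # Assumes Tokyo Tyrant is listening with its memcached-compatible
        # protocol on localhost:1978; the key naming is invented for the example.
        import hashlib

        import markdown              # pip install markdown
        import memcache              # pip install python-memcached

        tyrant = memcache.Client(['127.0.0.1:1978'])

        def render_post(markdown_text):
            """Return cached HTML for a post, rendering it only on a cache miss."""
            key = 'post_html:' + hashlib.md5(markdown_text.encode('utf-8')).hexdigest()
            html = tyrant.get(key)
            if html is None:
                html = markdown.markdown(markdown_text)   # the expensive step
                tyrant.set(key, html)                     # large objects go to Tyrant;
                                                          # small ones would go to memcached
            return html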

    Lessons Learned

  • Let your users create the site for you. Iterate and evolve. Start with something that works, get people in it, and build it together. Have a slow beta. Invite new people on slowly. Talk to the users about what they want every single day. Let your users help build your site. The result will be more reassuring, comforting, intuitive, and effective.
  • Let your users fund you. Ravelry was funded in part from users who donated $71K. That's a gift. Not stock. Don't give up equity in your company. It took 6 months of working full time and bandwidth/server costs before they started making a profit and this money helped bridge that gap. The key is having a product users feel passionate about and being the kind of people users feel good about supporting. That requires love and authenticity.
  • Become the farmer's market of your niche. Find an underserved niche. Be anti-mass market. You don't always have to create something for the millions. The millions will likely yawn. Create something and do a good job for a smaller passionate group and that passion will transfer over to you.
  • Success is not about scale, it’s about sustainable execution. This lovely quote is from Jeff Putz.
  • The database is always the problem. Nearly all of the scaling/tuning/performance related work is database related. For example, MySQL schema changes on large tables are painful if you don’t want any downtime. One of the arguments for schemaless databases.
  • Keep it fun. Casey switched to Ruby on Rails because he was looking to make programming fun again. That reenchantment helped make the site possible.
  • Invent new things that delight your users. Go for magic. Users like that. This is one of Costco's principles too. This link, for example, describes some very innovative approaches to forum management.
  • Ruby rocks. It's a fun language and allowed them to develop quickly and release the site twice a day during beta.
  • Capture more profit using low margin services. Ravelry has their own merchandise store, wholesale accounts, printers, and fulfillment company. This allows them to keep all their costs lower so their profits aren't going to third-party services like CafePress.
  • Going from one server to many servers is the hardest transition to make. Everything changes and becomes more difficult. Have this transition in mind when you are planning your architecture.
  • You can do a lot with a little in today's ecosystem. It doesn't take many people or much money anymore to build a complex site like Ravelry. Take a look at all the different programs Ravelry uses to build their site and how few people are needed to run the site.

    Some people complain that there aren't a lot of nitty gritty details about how Ravelry works. I think it should be illuminating that a site of this size doesn't need to have a lavish description of arcane scaling strategies. It can now be built from off the shelf parts smartly put together. And that's pretty cool.

    Related Articles

  • Ravelry gets funding from its own community.
  • Apache/Passenger vs Nginx/Mongrel by Matt Darby
  • The Ravelry Blog (note the number of comments on posts).
  • Podcast - Episode 4: Y Ravelry (featuring Jess & Casey)
  • Beta testing and beyond
  • Hacker News Thread - I included the reasoning from a user named Brett for why the HTTP request path is "Nginx out front passing requests to HAProxy and THEN to Apache + mod_rails."
Monday
Aug 31, 2009

    Squarespace Architecture - A Grid Handles Hundreds of Millions of Requests a Month 

    I first heard an enthusiastic endorsement of Squarespace streaming from the ubiquitous Leo Laporte on one of his many Twit Live shows. Squarespace, as a fully hosted, completely managed environment for creating and maintaining a website, blog or portfolio, was of interest to me because they promise scalability and this site doesn't have enough of that. But sadly, since they don't offer a link-preserving Drupal import, our relationship was not meant to be.

    When a fine reader of High Scalability, Brian Egge, (and all my readers are thrifty, brave, and strong) asked me how Squarespace scaled I said I didn't know, but I would try and find out. I emailed Squarespace a few questions and founder Anthony Casalena and Director of Technical Operations Rolando Berrios were kind enough to reply in some detail. The questions were both from Brian and myself. Answers can be found below.

    Two things struck me most about Squarespace's approach:

  • They based their system on a memory grid, in this case Oracle Coherence. I'm not aware of too many customer facing systems that have moved to a grid as the backbone of their scalability strategy. It's good to see a successful system visible out in the wild.
  • They use a sort of Private Cloud internally. Everything is highly automated and easy to expand. They scale by adding additional resources like CPUs and disks and the system just adapts without a lot of human fussing involved. Now that's scaling with gas.

    Learn more about how Squarespace has learned to scale to tens of thousands of customers, hundreds of thousands of signups, and hundreds of millions of hits per month.

    Site: http://www.squarespace.com

    The Stats

  • Tens of thousands of customers.
  • Hundreds of thousands of signups.
  • Serves hundreds of millions of hits per month.

    Platform

  • Java - well supported and an advanced language to work in, and the components out there (Apache Foundation, etc.) are second to none.
  • Tomcat - the stability of the server is extremely impressive.
  • Grid - Oracle Coherence for the re-balancing and caching layers.
  • Storage - Isilon Cluster. This allows them to treat their storage like another "grid" as the storage pool is easily scaled by adding more diskspace.
  • Monetization Strategy - charge money. No free customers. Pricing starts at $8/month.
  • Uptime - 99.98%
  • Hosting - Peer1, they do not yet operate in multiple datacenters.
  • Competitors - TypePad and WordPress
  • Hardware - they don't use "commodity nodes" or low cost hardware units. These end up costing more in the long run as datacenter power is extremely expensive.
  • Cacti - a cacti instance is used to graph statistical data which helps see trends over time, predict when a hardware upgrade is necessary, and troubleshoot any problems that do show up.

    Lessons Learned

  • Cache as much as you can and load balance requests intelligently across a cluster.
  • Use an infrastructure that scales automatically merely by adding more resources (CPU, disk).
  • Build a scalable design up front. Make scaling easy by designing the application and infrastructure with scaling in mind.
  • Build a hands-off capable maintenance system. Automate processes. Make them as simple as possible. Monitor programatically so people don't have to.
  • Release code early and often. Running on the latest code means problems can be detected quickly, when the problems are small.
  • Keep things simple. Apply simplicity to every part of your infrastructure, including both your software and those of your outside vendors. Examples of this are: Grid for the application infrastructure, Isilon cluster for storage, automation, creating their own tools.
  • Use as few technologies as possible by selecting or building simple, powerful and robust tools.
  • Don't be afraid to implement your own code to ensure simplicity. Build or buy is a huge balancing act.
  • Don't be afraid to spend money on technology that helps you get where you need to go. It can save you months and months of headaches that would have prevented you from working on core functionality.

    Interview Questions and Responses

    1. They say they run on a grid. I'd be interested to know if they built their own grid?

    Partially. We rely on Oracle's Coherence product for the re-balancing
    and caching layers of our system -- which we consider a real workhorse
    for the "grid" aspects of the system. Each node in our infrastructure
    can handle a hit for any single site on the system. This means that in order to increase capacity, we just increase node count. No site is handled by a single node.

    2. How much traffic can they really handle?

    We've had several customer sites on the front page of Digg on multiple
    occasions, and didn't notice any performance degradation for any of our
    sites. In fact, we didn't even realize the surge happened until we reviewed our traffic reports a few hours later. For 99% of sites out there, Squarespace is going to be sufficient. Even larger sites with millions of inbound hits per day are servable, as the bulk of the traffic serving on those sites is in the media being served.

    3. How do they scale up, and allow for certain sites to become quite busy?

    We've tried to make scaling easy, and the application and infrastructure
    have been designed with scaling in mind. Because of this, we're luckily not
    in a situation where we need to keep getting bigger and beefier hardware to handle more and more traffic -- we try to scale out by supplementing the
    grid. Since we try to cache as much as we can and every server
    participates in handling requests for every site, it's generally just a
    matter of adding another node to the environment.

    We try to apply this simplicity to every part of our infrastructure, both
    with our own software and when deciding on purchases from outside vendors. For instance, we just increased the amount of available storage another few terabytes by adding another node to our Isilon cluster.

    4. Are there any stats you can share about how many customers, how many users, how many requests served, how many servers, how much disk, how fast, how reliable?

    We, unfortunately, can't share these numbers as we're a private company
    -- but we can say we have tens of thousands of customers, hundreds
    of thousands of signups, and serve hundreds of millions of hits per
    month. The server types and disk configurations (RAID, etc) are a bit
    irrelevant, as the clustering we implement provides redundancy -- not
    anything implemented into a particular single machine. Nothing in
    hardware is too particular to our setup. I will say we don't purchase
    "commodity nodes" or other low cost hardware units, as we find these
    end up costing more in the long run as datacenter power is extremely
    expensive.

    5. What technology stack are you using and why did you make the choices you made?

    We currently use Java along with Tomcat as our web server. After
    trying a few other solutions, we really appreciated the ability to use
    as few technologies as possible, and have those always remain things
    that are understandable for us. Java is an incredibly well supported
    and advanced language to work in, and the components out there (Apache
    Foundation, etc.) are second to none. As for Tomcat, the stability of
    the server is extremely impressive. We've implemented our own
    controller mechanisms on top of Tomcat (instead of going with some
    other library) in order to ensure extreme simplicity.

    6. How are you handling...

    Multi-tenancy?

    As mentioned above, every web node handles traffic for all sites, so a
    customer doesn't have to worry about an underpowered server unable to handle their traffic, or a node going down.

    Backups?

    Backups are obviously important to us, and we have several copies of user
    and server data stored in multiple locations. We gather backups with a
    combination of various home-grown scripts customized for our environment.

    Failover? Monitoring?

    Since this company originally was solely maintained by Anthony when he
    first started it, things needed to be as simple and automated as possible.
    This includes failover and monitoring. Our monitoring systems check every
    aspect of our environment we can think of several times a minute, and can
    restart obviously dead services, or alert us if it's something an
    actual person needs to handle.

    Additionally, we've set up a cacti instance to graph as much statistical
    data as we can pull out of our servers, so we can see trends over time.
    This allows us to easily predict when a hardware upgrade is necessary. It also helps us troubleshoot any problems that do show up.

    Operations? Releases? Upgrades? Add new hardware?

    With our customer base constantly growing, it's getting tough to manage our systems and still keep our workload under control. There are some projects on the road map to move to a much more hands-off maintenance of our environment, including automatic code deployments and system software upgrades. Most operations can be done without taking the grid offline.

    Multiple data centers?

    We do not have multiple data centers, but have some plans in the works to
    roll one out within the next year.

    Development?

    This is a really broad question, so it's a bit hard to succinctly
    answer. One thing (amongst many) that has consistently served us very
    well is trying to ensure our development environment is always
    releasable into production. By ensuring we're always out there with
    our latest code, we can usually detect problems very rapidly, and
    as a result, those problems are generally extremely small. Everyone on our development team tends to be responsible for wide, sweeping aspects of the system -- which gives them a lot of flexibility to determine how
    their components should work as a whole. It's incredibly important
    that everything fits seamlessly together in the end, so we spend a lot
    of time iterating on things that other groups might consider finished.

    Support?

    Support is something we take extremely seriously. As we've grown from
    the ground up without an external investor, most of our team members
    are versed in support, and understand how critical this component is.
    Our support staff is completely hired from our community, and is
    incredibly passionate about their jobs. We try and get every single
    customer support inquiry answered within 15 minutes or less, and have all sorts of metrics related to our goals here.

    7. What have you done that's really cool that you think other people could learn from?

    We spend a lot of time internally writing scripts and other
    applications that simply run our business. For instance, our
    persistence layer configuration files are generated by applications
    we've written that read our database model directly from the database.
    We develop a lot of these programs, and a lot of "standard naming"--this, again, means that we can move very rapidly as we have less monotonous tasks and searching to think about.
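
    Squarespace's actual tooling is presumably Java and reads their production schema, so treat the following only as an illustration of the "generate persistence config from the live database" idea; the sqlite demo tables and the output format are invented for this sketch.

        # Sketch: generate persistence-mapping stubs straight from the database
        # schema so the config can never drift from the real tables.
        import sqlite3

        conn = sqlite3.connect(':memory:')
        conn.executescript("""
            CREATE TABLE site   (id INTEGER PRIMARY KEY, owner_id INTEGER, title TEXT);
            CREATE TABLE member (id INTEGER PRIMARY KEY, email TEXT, created_at TEXT);
        """)

        def mapping_stub(conn, table):
            # Emit an XML-ish mapping entry for one table, columns included.
            cols = [row[1] for row in conn.execute('PRAGMA table_info(%s)' % table)]
            fields = '\n'.join('    <field column="%s"/>' % c for c in cols)
            return '<entity table="%s">\n%s\n</entity>' % (table, fields)

        tables = [row[0] for row in
                  conn.execute("SELECT name FROM sqlite_master WHERE type='table'")]
        for table in tables:
            print(mapping_stub(conn, table))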

    While this sort of thing is appropriate for small tasks, for the big
    ones, we also aren't afraid to spend money on well developed
    technology. Some of our choices for load balancing and storage are
    very costly, but end up saving us months and months of time in the
    long haul, as we've avoided having to "put out fires" generated by
    untested home grown solutions. It's a huge balancing act.

    The End

    Often the best way to judge a product is to peruse the developer forums. It's these people who know what's really happening. And when I look I see an almost complete absence of threads about performance, scalability, or reliability problems. Take a look at other CMSs and you'll see a completely different tenor of questions. That says something good about the strength of their scalability strategy.

    I'd really like to thank Squarespace for taking the time and making the effort to share what they've learned with the larger community. It's an effort we all benefit from. If you would also like to share your knowledge and wisdom with the world please get in touch and let's get started!

    Related Articles

  • Implementation Focus: Squarespace
  • Are Cloud Based Memory Architectures the Next Big Thing?
  • Up and running on Squarespace by Peter Efland
  • Kevin Rose Comes to Squarespace by D. Atkinson
  • Squarespace Vs Wordpress a thread in their developer forum.
Monday
Aug 24, 2009

    How Google Serves Data from Multiple Datacenters

    Update: Streamy Explains CAP and HBase's Approach to CAP. We plan to employ inter-cluster replication, with each cluster located in a single DC. Remote replication will introduce some eventual consistency into the system, but each cluster will continue to be strongly consistent.

    Ryan Barrett, Google App Engine datastore lead, gave this talk Transactions Across Datacenters (and Other Weekend Projects) at the Google I/O 2009 conference.

    While the talk doesn't necessarily break new technical ground, Ryan does an excellent job explaining and evaluating the different options you have when architecting a system to work across multiple datacenters. This is called multihoming, operating from multiple datacenters simultaneously.

    As multihoming is one of the most challenging tasks in all computing, Ryan's clear and thoughtful style comfortably leads you through the various options. On the trip you learn:

  • The different multi-homing options are: Backups, Master-Slave, Multi-Master, 2PC, and Paxos. You'll also learn how they each fare on support for consistency, transactions, latency, throughput, data loss, and failover.
  • Google App Engine uses master/slave replication between datacenters. They chose this approach in order to provide:
    - lowish latency writes
    - datacenter failure survival
    - strong consistency guarantees.
  • No solution is all win, so a compromise must be made depending on what you think is important. A major Google App Engine goal was to provide a strong consistency model for programmers. They also wanted to be able to survive datacenter failures. And they wanted write performance that wasn't too far behind a typical relational database. These priorities guided their architectural choices.
  • In the future they hope to offer optional models so you can select Paxos, 2PC, etc for your particular problem requirements (Yahoo's PNUTS does something like this).

    There's still a lot more to learn. Here's my gloss on the talk:

    Consistency - What happens when you read after a write?

    Read/write data is one of the hardest kinds of data to run across datacenters. Users expect a certain level of reliability and consistency.

  • Weak - it might be there, might not. Best effort. Like memcached. It's OK to drop for some applications like VoIP, live video, and multiplayer games. You care more about where things are now, not where they were. For data this is not good.
  • Eventual - You eventually see the stuff you wrote, just not right away. Email is a good example. You send it but it doesn't arrive right away, but it gets there, eventually. DNS change propagation, SMTP, Amazon S3, SimpleDB, search engine indexing are all of this type. There's a delay after a write when a read won't see what was written, but the writes eventually push through. Still not ideal for data.
  • Strong - The ideal solution for a structured data system. You get what you put in. Simplest to program against and think about. Any read after a write will return what was written. AppEngine, file systems, Microsoft Azure, and RDBMSes work this way.
  • Once we move data across datacenters what consistency guarantees do we have? We can give up some guarantees, but we should know what we are getting.

    Transactions - Extended form of consistency across multiple operations.

  • Transaction Properties: Correctness, consistency, enforced invariants, ACID.
  • Example: bank transaction. Transfer money from A to B. Subtract money from A and add it to B. These happen at different times. What happens if another transfer happens for A in between? What happens if there's a failure? What happens if a program reads from A or B? You want guarantees. On a crash will money added to B still be added to B? Will money taken from A still be taken from A? You don't want to lose or create money (see the sketch after this list).
  • When you start operating across datacenters it's even harder to enforce transactions because more things can go wrong and operations have high latency.
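
    To make the bank-transfer example concrete, here is a minimal sketch using sqlite3 transactions. The table and amounts are invented; the point is the property you want from any datastore, single datacenter or not: either both updates commit or neither does.

        # Sketch: the classic A-to-B transfer. The two UPDATEs must be atomic;
        # a crash or error in between must not create or destroy money.
        import sqlite3

        conn = sqlite3.connect(':memory:')
        conn.execute('CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)')
        conn.executemany('INSERT INTO accounts VALUES (?, ?)', [('A', 100), ('B', 50)])
        conn.commit()

        def transfer(conn, src, dst, amount):
            try:
                with conn:  # one transaction: commits on success, rolls back on error
                    conn.execute('UPDATE accounts SET balance = balance - ? WHERE name = ?',
                                 (amount, src))
                    conn.execute('UPDATE accounts SET balance = balance + ? WHERE name = ?',
                                 (amount, dst))
            except sqlite3.Error:
                pass  # rolled back: neither account changed

        transfer(conn, 'A', 'B', 30)
        print(dict(conn.execute('SELECT name, balance FROM accounts')))  # {'A': 70, 'B': 80}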

    Why Operate in Multiple Datacenters?

  • Sh*t happens - datacenters fail for any number of reasons.
  • Performance - geolocality allows operations to be moved closer to the user. The speed of light limits how fast data can be transferred and becomes significant when operating across the world. Going through multiple router hops also slows traffic. So closer is better and you can only be closer if your data is near the user, which requires operating in multiple datacenters. CDNs do this for you, especially for more static data. They put data everywhere.

    Why Not Operate in Multiple Datacenters?

  • Operating in a single datacenter is easy: Low cost bandwidth. Low latency. High bandwidth. Easy operations. Easier code.
  • Operating in multiple datacenters is hard: high cost bandwidth, high latency, low bandwidth, difficult operations, harder code.
  • It's especially hard if you have a read/write structured data system where you accept writes from more than one location. You have consistency problems. Maintaining consistency in the face of the distances and failures is non-trivial.

    Your Different Architecture Options

  • Single Datacenter. Don't bother operating in multiple datacenters. This is the easiest option and is what most people do. But datacenters fail, you could lose data, and your site could go down.
  • Bunkerize. Create a Maginot Line for the Ultimate Datacenter. Make sure your datacenter doesn't ever go down. SimpleDB and Azure use this strategy.
  • Single Master. Pick a master datacenter that writes go to and other sites replicate to. The replica sites offer read-only services.
    - Better, but not great.
    - Data are usually replicated asynchronously so there's a window of vulnerability for loss.
    - Data in your other datacenters may not be consistent on failure.
    - Popular with financial institutions.
    - You get geolocation to serve reads. Consistency depends on the technique. Writes are still limited to one datacenter.
  • Multi-Master. True multihoming. The Holy Grail. All datacenters are serving reads and writes. All data is consistent. Transactions just work. This is really hard.
    - So some choose to do it with just two datacenters. NASDAQ has two datacenters close together (low latency) and performs a two-phase commit on every transaction, but they have very strict latency requirements.
    - Using more than two datacenters is fundamentally harder. You pay for it with queuing delays, routing delays, speed of light. You have to talk between datacenters. Just fundamentally slower with a smaller pipe. You may pay for it with capacity and throughput, but you'll definitely pay in latency.

    How Do You Actually Do This?

    What are the techniques and tradeoffs of different approaches? Here's the evaluation matrix:

                  Backups    M/S         MM          2PC         Paxos
    Consistency   Weak       Eventual    Eventual    Strong      Strong
    Transactions  No         Full        Local       Full        Full
    Latency       Low        Low         Low         High        High
    Throughput    High       High        High        Low         Medium
    Data loss     Lots       Some        Some        None        None
    Failover      Down       Read-only   Read/Write  Read/Write  Read/Write


    - M/S = master/slave, MM = multi-master, 2PC = two-phase commit
    - What kind of consistency, transactions, latency, and throughput do we get for a particular approach? Will we lose data on failure? How much will we lose? When we fail over for maintenance or we want to move things, say decommissioning a datacenter, how well do we do that, how well do the techniques support it?

  • Backups - Make a copy of your data that's secret and safe. Generally weak consistency. Usually no transactions. Used for the first internal datastore launch. Not good enough for a production system. Lose data since last backup. You are down while restoring a backup to another datacenter.

  • Master/Slave Replication - Writes to a master are also written to one or more slaves.
    - Replication is asynchronous so good for latency and throughput.
    - Weak/eventual consistency unless you are very careful.
    - You have multiple copies in the datacenters, so you'll lose a little data on failure, but not much. Failover can go read-only until the master has been moved to another datacenter.
    - Datastore currently uses this mechanism. Truly multihoming adds latency because you have to add the extra hop between datacenters. App Engine is already slow on writes so this extra hit would be painful. M/S gives you most of the benefits of better forms while still offering lower latency writes.

  • Multi-Master Replication - support writes from multiple datacenters simultaneously.
    - You figure out how to merge all the writes later when there's a conflict. It's like asynchronous replication, but you are serving writes from multiple locations.
    - Best you can do is eventual consistency. Writes don't immediately go everywhere. This is a paradigm shift. We've assumed a strongly consistent system; backups and M/S don't change that, they are just techniques to help us multihome. Here it literally changes how the system runs because the multiple writes must be merged.
    - To do the merging you must find a way to serialize, to impose an ordering on all your writes. There is no global clock. Things happen in parallel. You can't ever know what happens first. So you make it up using timestamps, local timestamps + skew, local version numbers, or a distributed consensus protocol. This is the magic and there are a number of ways to do it.
    - There's no way to do a global transaction. With multiple simultaneous writes you can't guarantee transactions. So you have to figure out what to do afterward.
    - AppEngine wants strong consistency to make building applications easier, so they didn't consider this option.
    - Failover is easy because each datacenter can handle writes.

  • Two Phase Commit (2PC) - protocol for setting up transactions between distributed systems.
    - Semi-distributed because there's always a master coordinator for a given 2PC transaction. Because there are so few datacenters you tend to go through the same set of master coordinators.
    - It's synchronous. All transactions are serialized through that master which kills your throughput and increases latency.
    - Never seriously considered this option because write throughput is very important to them. No single point of failure or serialization point would work for them. Latency is high because of the extra coordination. Writes can be in the 200msec area.
    - This option does work though. You write to all datacenters or nothing. You get strong consistency and transactions (see the sketch after this list).
    - Need N+1 datacenters. If you take one down then you still have N to handle your load.

  • Paxos - A consensus protocol where a group of independent nodes reach a majority consensus on a decision.
    - Protocol: there's a propose step and then an agree step. You only need a majority of nodes to agree to say something is persisted for it to be considered persisted.
    - Unlike 2PC it is fully distributed. There's no single master coordinator.
    - Multiple transactions can be run in parallel. There's less serialization.
    - Writes are high latency because of the two extra coordination round trips required in the protocol.
    - Wanted to do this, but they didn't want to pay the 150msec latency hit to writes, especially when competing against 5msec writes for RDBMSes.
    - They tried using physically close datacenters but the built-in multi-datacenter overhead (routers, etc) was too high. Even in the same datacenter was too slow.
    - Paxos is still used a ton within Google. Especially for lock servers. For coordinating anything they do across datacenters. Especially when state is moved between datacenters. If your app is serving data in one datacenter and it should be moved to another that coordination is done through Paxos. It's used also in managing memcache and offline processing.
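
    To make the 2PC option concrete, here is a toy sketch of the protocol's control flow: a coordinator asks every datacenter to prepare, and commits only if all of them vote yes. Real implementations add durable logs, timeouts, and recovery; the class and method names here are invented for illustration, not Google's code.

        # Sketch: two-phase commit control flow. Everything is in-process and
        # lossless here, which is exactly what makes the real thing hard.
        class Datacenter(object):
            def __init__(self, name):
                self.name, self.data, self.staged = name, {}, None

            def prepare(self, write):   # phase 1: vote yes and hold the write
                self.staged = write
                return True             # a real participant could vote no or time out

            def commit(self):           # phase 2a: make the staged write durable
                self.data.update(self.staged)
                self.staged = None

            def abort(self):            # phase 2b: throw the staged write away
                self.staged = None

        def two_phase_commit(write, datacenters):
            votes = [dc.prepare(write) for dc in datacenters]
            if all(votes):
                for dc in datacenters:
                    dc.commit()
                return True             # strong consistency: all or nothing
            for dc in datacenters:
                dc.abort()
            return False

        dcs = [Datacenter('us-east'), Datacenter('us-west'), Datacenter('eu')]
        print(two_phase_commit({'user:42': 'paid'}, dcs), [dc.data for dc in dcs])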

    Miscellaneous

  • Entity Groups are the unit of consistency in AppEngine. Operations are serialized on Entity Groups. The log for each commit to an entity group is replicated. This maintains consistency and provides transactions. Entity Groups are essentially shards. Sharding enables scaling because it allows you to handle a lot of writes. Datastore shards in entity group size chunks. BuddyPoke has 40 million users, each of which has an entity group. That's 40 million different shards (see the sketch after this list).
  • Eating your own dog food is a strategy used a lot at Google. Iterate and make people use new features internally. They are using a ton of stuff that's very early. You can iterate many times, which improves it before you are ready to launch.
  • They see relational databases in the datacenter as their competition as much as Azure and SimpleDB. Inserts into an RDBMS are in the low milliseconds. Writes into AppEngine are 30-40 msecs. Reads are fast. They like this trade-off because on the web reads vastly outnumber writes.
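
    The entity-group idea in miniature: writes that touch the same group are serialized, one at a time and in order, while writes to different groups proceed in parallel, which is what lets the datastore scale writes. The per-group locks below are only an in-process stand-in for what the real system does with a replicated commit log per group.

        # Sketch: serialize operations per entity group, run groups in parallel.
        import threading
        from collections import defaultdict

        group_locks = defaultdict(threading.Lock)   # one lock per entity group
        datastore = defaultdict(dict)               # group -> {key: value}

        def write(group, key, value):
            with group_locks[group]:                # writes within a group are ordered;
                datastore[group][key] = value       # different groups don't block each other

        threads = [threading.Thread(target=write, args=('user:%d' % u, 'mood', 'poked'))
                   for u in range(100)]             # 100 groups, all writable in parallel
        for t in threads: t.start()
        for t in threads: t.join()
        print(len(datastore))                       # 100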

    Discussion

    A few things I wondered through the talk. Did they ever consider a distributed MVCC approach? That might be interesting and wasn't addressed as an option. Clearly at Google scale an in-memory data grid isn't yet appropriate.

    A preference for the strong consistency model was repeatedly specified as a major design goal because this makes the job of the programmer easier. A counter to this is that the programming model for Google App Engine is already very difficult. The limits and the lack of traditional relational database programming features put a lot of responsibility back on the programmer to write a scalable app. I wonder if giving up strong consistency would have been such a big deal in comparison?

    I really appreciated the evaluation matrix and the discussion of why Google App Engine made the choices they did. Since writes are already slow on Google App Engine they didn't have a lot of headroom to absorb more layers of coordination. These are the kinds of things developers talk about in design meetings, but they usually don't make it outside the cubicle or conference room walls. I can just hear Ryan, with voice raised, saying "Why can't we have it all!" But it never seems we can have everything. Many thanks to Ryan for sharing.

    Related Articles

  • Slides for the Talk
  • ZooKeeper - A Reliable, Scalable Distributed Coordination System
  • Yahoo!'s PNUTS Database: Too Hot, Too Cold or Just Right?
  • Paper: Consensus Protocols: Paxos by Henry Robinson
  • Paper: Consensus Protocols: Two-Phase Commit by Henry Robinson
  • Paper: Dynamo: Amazon’s Highly Available Key-value Store
  • Are Cloud Based Memory Architectures the Next Big Thing?
Sunday
Aug 16, 2009

    ThePort Network Architecture

    ThePort Network's Director of Engineering, TJ Muehleman, was kind enough to share some of the architectural details for their white label social media system. It currently runs about 50 social networks varying in size from less than 1,000 members to more than 300,000 members, all on a Microsoft stack. In addition to their social networking platform, they offer Javascript APIs and web service APIs (both REST and SOAP) which account for a significant percentage of overall system usage.

    ThePort is an excellent example of a real world in-the-trenches product offering real value to customers. One of the most interesting problems they have to solve is multi-tenancy. How do you provide good performance, complete customization, support, develop new features, and provide individual search indexes for each customer? It's not an easy problem to solve.

    How did they solve their problems and build a successful system? 

    Site: http://theport.com

    Platform

  • Microsoft.NET 3.5
  • C# / VB.NET
  • SQL Server 2005
  • Visual Studio 2008 Pro Edition
  • Prototype
  • Subversion
  • TortoiseSVN
  • Trac (for internal defect tracking. Will possibly move all internal and external issue tracking to it)
  • Beyond Compare 3
  • Web Tier
    * 6 x Dell blade servers running windows 2008 / IIS 7
  • Data Tier
    * 1 r/w SQL Cluster – dell 6850s (6 single core processors, 32 GB RAM)
    * 2 read-only dell 2950 (2 quad core processors, 16 GB RAM)
    * 1 distribution server – dell 2950 (2 quad core processors, 16 GB RAM)
  • We also use SQL Server Service Broker as a queuing system for some of our saves. It's an alternative to MSMQ that uses the DB for persistence in case of failure. We will most likely be moving to MSMQ in the near future to remove us from SQL dependence.
  • Caching
    * 2 Dell blade servers 8 GB RAM each to total 16 GB of available RAM
    * Running SharedCache (Basically an open source .NET port of MemCacheD. We initially looked at MemCacheD but our internal benchmarking indicated SharedCache had better performance – at least w/in a Microsoft environment. We may still investigate Microsoft's Velocity cache platform when it goes live)
  • Search
    * 2 Dell 2950s with 725 GB Storage
    * Running Lucene + SOLR
    * We chose Lucene over Lucene.NET because Lucene.NET's wildcard search was a little buggy in our initial beta testing. SQL Full Text wasn't a viable option because there was no clear and easy way to split indexes between customers. SOLR cores make this part easy. Above and beyond that, Lucene is lightning fast and is available with features we couldn't turn down (proximity search, searching w/in documents, and built-in RESTful APIs to name a few)

    How do you handle multi-tenancy?

    A multi-tenant platform has two primary hurdles to overcome:

    1. Preventing a single, large customer from overwhelming the system?

    The primary bottleneck for this is in the data layer. Our current DB architecture has helped mitigate this problem. The read-only servers help offset most of this by absorbing the bulk of the data calls. We did have to beef up the distribution server because latency between the r/w server and the read only servers had crept too high. Getting a new machine (2 quad cores with 16 GB of RAM) helped reduce the latency to less than a second.

    However robust the cluster is, we've concluded that we will eventually have to move to a sharded architecture with MySQL. MS SQL licensing fees makes both continuing to enhance the cluster and scaling out to multiple machines prohibitive. Additionally, sharding allows us to scale either by customer (because some may be more active than others) or by functional area (photos, comments, etc).

    2. Allowing clients to have total control over the look, feel, and user experience of their sites.


    Allowing CSS control isn't enough; we needed a templating system that allows total control over the site. We looked at using .NET master pages and user controls to accomplish this. But that assumes a level of knowledge in .NET for outside developers. We built a proprietary templating system that unfortunately became too limiting and would one day lead to a drag on performance.

    So we settled on using XML / XSLT. All of our business / entity objects are serializable to XML. This made XSLT a natural choice from the templating angle. We've seen a considerable boost in performance from this upgrade and an even greater increase in flexibility in terms of what our designers can do. Once the learning curve is overcome, the web designers love the amount of control they get.

    What did you do that was especially cool that people could learn from?

    XSLT as Custom Templating System

    Building a templating system in XSLT that actually allows the template author to make a web service call to our internal web service layer (or external web services) straight from the templating system. This allowed the development team to build a flexible, powerful system that allows a web designer to embed real-time calls into a given template. We accomplish this using XSLT Extension Objects. What we've found in our internal testing is that these extension objects scale way better than our previous templating system (a homegrown proprietary system). We've used ANTS profiler to compare the two and the difference is in orders of magnitude.

    Obviously we have to cache the hell out of this or the performance of the pages the calls are embedded in would suffer. For now, we make the internal web services calls via HTTP, but we will soon be moving this to a TCP call to take advantage of the better connection pooling offered by TCP. We're most likely to use WCF because of its native support of TCP bindings. However, we haven't yet benchmarked that so it's possible it could change.

    Not Using the Database to Build Collections

    Another cool thing we've done is to move strongly away from using the database to retrieve collections of 'things'. For instance, if we needed a collection of comments, previously we'd hit the database for the 5, 10, 100, etc comments we wanted, do the sorting / filtering in the DB, return a single dataset, cache that, and then display.

    However, this is a database intensive operation, especially if you're going to join against user data (which you inevitably will). What we've started doing recently is caching the recent comment objects, and using our cache provider's MultiGet ability to simultaneously retrieve all comments at the same time. We then sort / filter in memory in the application tier, discard whatever comments we don't need, and then display. We found that doing it this way, we save lots of hits to our database and, in fact, saw a considerable performance gain from it.

    Our tests (on a developer laptop) fetched 10,000 objects from cache in about 1 second, then sorted them by date time in about .015 second.
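
    ThePort's implementation is C#/.NET against SharedCache; purely to illustrate the "multi-get from cache, then sort and filter in the application tier" pattern, here is a sketch in Python against a memcached-style client. The key scheme and field names are invented for the example.

        # Sketch: build a "recent comments" collection from cache instead of the DB.
        # Assumes comments were cached at write time under comment:<id> and that a
        # small id list is kept under comments:recent_ids (both names invented here).
        import json
        import memcache   # pip install python-memcached; SharedCache is the .NET analogue

        cache = memcache.Client(['127.0.0.1:11211'])

        def recent_comments(limit=10):
            ids = cache.get('comments:recent_ids') or []
            # one round trip for the whole batch instead of N database hits
            cached = cache.get_multi(['comment:%d' % i for i in ids])
            comments = [json.loads(blob) for blob in cached.values()]
            # sort / filter in the application tier, not the database
            comments = [c for c in comments if not c.get('deleted')]
            comments.sort(key=lambda c: c['created_at'], reverse=True)
            return comments[:limit]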

    What prompted you to move to a SOA architecture?

    To better compartmentalize our code.

    Given the growth of our templating system mentioned above, we realized it was best to truly separate the tiers into discrete areas. Since our application is easily accessed via a set of REST APIs and our own internal skinning system (and who knows what in the future), dividing the application like this gives us a lot of leeway in being able to swap out components. Additionally, we're doing more and more queuing which lines up nicely with SOA.

    Performance

    Since modern web apps deal with complex data, breaking the work into more discrete operations handled by offline processes on their own infrastructure makes a lot of sense from a performance point of view.

    How do you handle consistency between the database and the search engine?

    We have a multi-threaded windows service that scans our database once every 5 minutes looking for new data. The service then adds the new items to the Lucene index. We keep audit columns on all our database tables so capturing new data is pretty simple. Once a night, we purge the Lucene index and run a full rescan of the database. We think this system will work for the near to mid term but long term, we'll take advantage of a queuing system to keep the index in sync.
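
    As a sketch of that polling indexer (their real service is a multi-threaded Windows service against SQL Server and Lucene; the sqlite table, column names, Solr URL, and core name below are stand-ins): every few minutes, pull the rows whose audit column is newer than the last run and post them to Solr's XML update handler.

        # Sketch: poll the database for rows changed since the last run and push
        # them into a Solr core. Assumes updated_at is stored as a unix timestamp.
        import sqlite3
        import time
        from xml.sax.saxutils import escape

        import requests   # pip install requests

        SOLR_UPDATE = 'http://localhost:8983/solr/customer_core/update'  # assumed URL

        def index_new_rows(conn, last_run):
            rows = conn.execute(
                'SELECT id, body, updated_at FROM comments WHERE updated_at > ?',
                (last_run,)).fetchall()
            if not rows:
                return
            docs = ''.join(
                '<doc><field name="id">%d</field><field name="body">%s</field></doc>'
                % (row_id, escape(body)) for row_id, body, _ in rows)
            requests.post(SOLR_UPDATE, data='<add>%s</add>' % docs,
                          headers={'Content-Type': 'text/xml'})
            requests.post(SOLR_UPDATE, data='<commit/>',
                          headers={'Content-Type': 'text/xml'})

        conn = sqlite3.connect(':memory:')   # stand-in for the real SQL Server database
        conn.execute('CREATE TABLE comments (id INTEGER, body TEXT, updated_at REAL)')
        conn.execute('INSERT INTO comments VALUES (1, ?, ?)', ('hello world', time.time()))

        index_new_rows(conn, last_run=0)     # the real service loops every 5 minutes
                                             # (time.sleep(300)) inside a Windows service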

    How do you handle your release, support, bug fixing, development, etc.?

    We have a decent sized dev team. 1 platform architect responsible for overall system architecture (selecting which systems to use, tuning them), 1 lead software architect, and 3 senior – mid level developers. Since we're a start-up in a fast evolving market (social media) we find that we're constantly having to adjust to market demands and the latest in social functionality. So we have a 2 month build cycle which is pretty aggressive.

    In terms of actual development, we've found the following to be keys to success:
    1. Daily stand-ups: it's absolutely necessary for everyone on the team to know what the others are doing. With a code base as large as ours, it's very likely I'm writing a function someone has already written or solving a problem someone has solved previously. Daily stand-ups help with that.
    2. Iterate: Build the core functionality, get it into QA and / or beta, beat the bugs out of it, move to the next piece. We've found this to be easier said than done. Market pressures sometimes dictate you roll with something more feature rich than you'd like. Sticking to an iterative cycle creates better code and more market ready products.
    3. Beta test: This goes hand-in-hand w/ #2 above. Get something done and get it in the hands of actual users. This is the best way to find where your app falls down

    With regard to support / bug fixing, we're moving to a forums based support model for many of our customers. We've found the same problems, especially in an app as configurable as ours, occur over and over. Getting those answers into an open, searchable format should hopefully cut down on confusion and get developers talking directly to developers.

    Internally we use Trac for bug tracking and devote roughly 20% of our week maintaining, supporting, and fixing issues. That may seem like a lot but given how configurable our system is, we're essentially running 50 heavily data driven websites.

    WCF sounds like a buggy underperforming mess. How is it working?

    So far we have no complaints with WCF. I think baking it directly into .NET 3.5 helped iron a lot of the big kinks out. It does come with its quirks, no doubt. We built our REST libraries on top of it and found that posting XML is not exactly the easiest thing in the world. But it was more than made up for with the ease of deploying all our GET operations with REST. Our next step will be to set up TCP and MSMQ bindings with WCF to handle our internal service requests and queuing, respectively. Since WCF exposes all of these bindings natively, we think we will see a lot of effective code re-use out of this.

    I'd like to thank TJ for taking the time and making the effort to write up their architecture for people to learn from. I'm sure it will help others when they are trying to build their own systems.

    You too can share the architecture for your amazing system. Come on, you've learned a lot from others, it's time to return the favor and give back. It's not that hard, really. If interested please contact me and we can get started.
Friday
Aug 7, 2009

    The Canonical Cloud Architecture 

    Update 2: Elastic Load Balancer and EC2 instance bandwidth. It turns out we are limited by bandwidth and not by CPU. Solution: use DNS Round Robin for two to three HighCPU medium instances.
    Update: The Skinny Straw: Cloud Computing's Bottleneck and How to Address It. For cloud computing, bandwidth to and from the cloud provider is a bottleneck. Solution: Evaluate application architecture and consider application partitioning.

    I'm writing this post as a sort of penance. My sin was getting involved in another multi-threaded mess of a program that was rife with strange pauses and unexpected errors. I really should have known better. But when APIs choose to make callbacks from some mystery thread pool it's hard to keep things straight. I eventually sobered up and posted all events to a queue so I could make sure the program would work correctly. Doh. I may never know why the .Net console output stopped working, but I'll live with it.

    And that reminded me that I've been meaning to write a post on the standard Cloud Architecture. I've tried to hit all the common architectures at one time or another, but there have been some excellent sources lately on structuring programs in a cloud that people may "know" in the same way I knew what not to do, but when the code hits the editor those thoughts may have hidden like a kid next to a broken cookie jar.

    The easiest way to create a scalable service is to compose the service from other scalable services. This is how Google AppEngine works and is largely how AWS works as well (EC2, S3, SQS, SimpleDB, etc), though AWS also functions as a blank canvas on which you can draw your own designs.

    The canonical cloud architecture that has evolved revolves around dynamically scalable CPUs consuming asynchronous, persistently queued events. We talked about this idea already in Flickr - Do the Essential Work Up-front and Queue the Rest. The cloud is just another way of implementing the same idea.

    Amazon suggests a few applications of the Cloud Architecture as:

  • Processing Pipelines
    - Document processing pipelines – convert hundreds of thousands of documents from Microsoft Word to PDF, OCR millions of pages/images into raw searchable text
    - Image processing pipelines – create thumbnails or low resolution variants of an image, resize millions of images
    - Video transcoding pipelines – transcode AVI to MPEG movies
    - Indexing – create an index of web crawl data
    - Data mining – perform search over millions of records
  • Batch Processing Systems
    - Back-office applications (in financial, insurance or retail sectors)
    - Log analysis – analyze and generate daily/weekly reports
    - Nightly builds – perform nightly automated builds of source code repository every night in parallel
    - Automated Unit Testing and Deployment Testing – Test and deploy and perform automated unit testing (functional, load, quality) on different deployment configurations every night
  • Websites
    - Websites that “sleep” at night and auto-scale during the day
    - Instant Websites – websites for conferences or events (Super Bowl, sports tournaments)
    - Promotion websites
    - “Seasonal Websites” - websites that only run during the tax season or the holiday season (“Black Friday” or Christmas)

    A good list, but after having worked on a seasonal tax website, I can say AWS is a horrible match. AWS only works on the instance level, so you need a whole instance turned on all the time even when there's no demand. This is a complete waste of money. An AWS model truly based on use, combined with an SLA driven dashboard, would be very convenient. But on to cases where AWS is a good fit.

    SmugMug's Cloud Architecture

    AWS pioneer Don MacAskill of SmugMug details how they process high-resolution photos and high-definition video using a cloud hosted queuing architecture in SkyNet Lives! (aka EC2 @ SmugMug).

    SkyNet, as you might expect, operates completely without human minders and automatically scales up and down in relation to the work load. Their system has several components:
  • Work Initiators - Work comes in from your website and/or other software subsystems and is queued up for processing in the Queue Service. Work doesn't have to be large requests either. Work can be small independent parts of an overall pipeline. Don't keep state in the Workers. Bundle what you need done into a work request and shoot it back into the Queuing Service for processing.
  • Provisioning Service - This is Amazon's infrastructure that allows instances to be automatically scaled up and down in relation to the work load. This will be the major difference between your VPS or typical datacenter setup. There's an API for starting and stopping AMIs and
    mechanisms for automatically configuring and running VMs.
  • Workers - These are the guys that continually pull work off queues and do something interesting with it. For SmugMug the results are stored on S3 but the results could be put in your own database, SimpleDB or whatever.
  • Queuing Service - This is where work is queued for consumption by the workers. SmugMug built their own queuing service, but you could just as easily use Amazon's own SQS. Creating a scalable, distributed, performant, highly available queue service is not easy, so you may want to take a look at a number of different queue product suggestions in Flickr - Do the Essential Work Up-front and Queue the Rest.
  • Controller - This component monitors many variables related to the work flow and decides how many EC2 instances are necessary based on optimizing a small set of goals. Instances are added and removed as needed.

    Don shares a lot of practical detailia on how to efficiently use AWS, how their queue service works, and how their controller manages to balance minimizing cost while still being responsive to users. Achieving fairness and balance in a queue system can be difficult, but SmugMug appears to have done a good job of that.

    What rocks about queuing architectures is that they are just so damn robust. Work is safe in the queues. A random reboot won't cause a loss. If one component is producing events too fast the queue will buffer up events until they can be processed. New components can be cleanly added and removed from the system at any time. Timing isn't critical. Work is processed when someone gets around to it. Timeouts and retries are unnecessary. Programs are simple loops that block on the queue, do something, persist results, and feed back more parallelizable work requests back into the queue. Very hard to screw up. Compare and contrast to complex multi-threaded system with shared-state.
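
    Here is that loop in its simplest form, with Python's standard-library queue standing in for SQS or a home-grown queue service; resize_photo is an invented stand-in for the real work.

        # Sketch: the canonical cloud worker. Block on the queue, do the work,
        # persist the result, and feed any follow-on work back into the queue.
        import queue
        import threading

        work_queue = queue.Queue()          # stand-in for SQS / SmugMug's queue service

        def resize_photo(photo_id, size):   # invented work item for the example
            return 'thumb-%s-%s' % (photo_id, size)

        def worker():
            while True:
                job = work_queue.get()      # blocks until there is work
                result = resize_photo(**job)
                print('stored', result)     # a real worker would write to S3 or a DB
                work_queue.task_done()      # timing isn't critical; retries are cheap

        for _ in range(4):                  # scale by adding workers (or instances)
            threading.Thread(target=worker, daemon=True).start()

        for photo_id in range(3):
            work_queue.put({'photo_id': photo_id, 'size': 'small'})
        work_queue.join()                   # wait until everything queued is processed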

    Building GrepTheWeb in the Cloud

    Amazon has published a great couple of articles on building a canonical Cloud Architecture: Building GrepTheWeb in the Cloud, Part 1: Cloud Architectures and Building GrepTheWeb in the Cloud, Part 2: Best Practices.

    These are really tight and well written articles so I'll just hit certain high points. The example used is an application called GrepTheWeb. GrepTheWeb searches using a regular expression across millions of web documents. So it's a grep for the web, ah got it now. The idea is to take an unpredictable but possibly large number of search requests, apply the search expression to hundreds of terabytes of documents, and return the results in a reasonable period of time.

    How exactly would you do such a thing? Here's how you do it in the cloud:
  • Amazon S3 for retrieving input datasets and for storing the output dataset
  • Amazon SQS for durably buffering requests acting as a “glue” between controllers
  • Amazon SimpleDB for storing intermediate status, log, and for user data about tasks
  • Amazon EC2 for running a large distributed processing Hadoop cluster on-demand
  • Hadoop for distributed processing, automatic parallelization, and job scheduling

    Clearly these are all (except for Hadoop) built on Amazon services, but the general ideas apply anywhere. For storing large amounts of data and accessing it efficiently in parallel you need a distributed file system like S3. To coordinate and dispatch work you need a queuing service like SQS. For keeping intermediate state you need a scalable database store like SimpleDB, though you could also imagine using S3. For dynamically scaling processing nodes something like EC2 is necessary. And for actually carrying out the document search a framework like Hadoop provides a lot of features, though you can imagine using other compute grid products.
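
    The search step itself maps nicely onto Hadoop. As a sketch, here is a Hadoop-streaming-style mapper that applies the user's regular expression to each input line and emits the ids of matching documents; the doc-id-tab-text input framing is an assumption for the example, not GrepTheWeb's actual format. Hadoop streaming runs exactly this kind of stdin-to-stdout script in parallel across the cluster.

        #!/usr/bin/env python
        # Sketch: the "map" half of a distributed grep, Hadoop-streaming style.
        # Input lines are assumed to look like "<doc_id>\t<document text>"; the
        # regex would really come from the user's search request.
        import re
        import sys

        PATTERN = re.compile(sys.argv[1] if len(sys.argv) > 1 else r'high\s*scalability',
                             re.IGNORECASE)

        for line in sys.stdin:
            doc_id, _, text = line.rstrip('\n').partition('\t')
            if PATTERN.search(text):
                # emit "doc_id<TAB>1"; the reduce step just collects matching ids
                sys.stdout.write('%s\t1\n' % doc_id)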

    Here's their fabulous picture of what the system looks like:


    All the parts and linkages are described in the paper. What's important to note is that even though there are a lot of independently moving parts, all the boundaries are clear and well described. In your typical program few will have any idea how it works. Using Cloud Architecture principles it's possible to create a system that both scales and is easy to understand and explain.

    The paper makes several key architectural recommendations:
  • Use Scalable Ingredients - Ensure that your application is scalable by designing each component to be scalable on its own. If every component implements a service interface, responsible for its own scalability in all appropriate dimensions, then the overall system will have a scalable base.
  • Have Loosely Coupled Systems - For better manageability and high-availability, make sure that your components are loosely coupled. The key is to build components without having tight dependencies between each other, so that if one component were to die (fail), sleep (not respond) or remain busy (slow to respond) for some reason, the other components in the system are built so as to continue to work as if no failure is happening.
  • Think Parallel - Implement parallelization for better use of the infrastructure and for performance. Distributing the tasks on multiple machines, multithreading your requests and effective aggregation of results obtained in parallel are some of the techniques that help exploit the infrastructure.
  • Use Designs that Are Resilient to Reboot and Re-Launch - After designing the basic functionality, ask the question “What if this fails?” Use techniques and approaches that will ensure resilience. If any component fails (and failures happen all the time), the system should automatically alert, failover, and re-sync back to the “last known state” as if nothing had failed.
  • Utilize On-Demand Requisition and Relinquishment - Don’t forget the cost factor. The key to building a cost-effective application is using on-demand resources in your design. It’s wasteful to pay for infrastructure that is sitting idle.

    All good stuff which is why I like this paper so much. There's a big conceptual shift here, especially if you are used to relatively simple client-server and N-tier systems. It's like simulating in your mind how to keep an army of ants all working independently while still communicating, coordinating, and making progress on a goal. We implemented similar architectures in datacenters long before the cloud, it was just a lot harder as everything was roll your own. The cloud makes all the necessary components standard, featureful, and relatively inexpensive. This opens any application to completely different ways of structuring their backends than they did in the past.

    Related Articles

  • SkyNet Lives! (aka EC2 @ SmugMug).
  • Flickr - Do the Essential Work Up-front and Queue the Rest
  • Hadoop
  • GridGain: One Compute Grid, Many Data Grids
  • Building GrepTheWeb in the Cloud, Part 1: Cloud Architectures
  • Building GrepTheWeb in the Cloud, Part 2: Best Practices.