Entries in Problems (4)

Wednesday
Aug 28, 2013

Sean Hull's 20 Biggest Bottlenecks that Reduce and Slow Down Scalability

This article is a lightly edited version of 20 Obstacles to Scalability by Sean Hull (with permission) from the always excellent and thought-provoking ACM Queue.

1. TWO-PHASE COMMIT

Normally when data is changed in a database, it is written both to memory and to disk. When a commit happens, a relational database makes a commitment to freeze the data somewhere on real storage media. Remember, memory doesn't survive a crash or reboot. Even if the data is cached in memory, the database still has to write it to disk. MySQL binary logs or Oracle redo logs fit the bill.

With a MySQL cluster or distributed file system such as DRBD (Distributed Replicated Block Device) or Amazon Multi-AZ (Multi-Availability Zone), a commit occurs not only locally, but also at the remote end. A two-phase commit means waiting for an acknowledgment from the far end. Because of network and other latency, those commits can be slowed down by milliseconds, as though all the cars on a highway were slowed down by heavy loads. For those considering using Multi-AZ or read replicas, the Amazon RDS (Relational Database Service) use-case comparison at http://www.iheavy.com/2012/06/14/rds-or-mysql-ten-use-cases/ will be helpful.

Synchronous replication has these issues as well; hence, MySQL's solution is semi-synchronous replication, which relaxes some of the guarantees of a true two-phase commit.
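To make the latency cost concrete, here is a minimal sketch of a two-phase-commit coordinator in Python. The participant classes, latencies, and calls are invented for illustration; this is not MySQL's or DRBD's actual protocol, just the shape of the wait-for-the-far-end problem.

```python
# Minimal sketch of a two-phase-commit coordinator (illustrative only;
# participant interfaces and latencies are hypothetical, not MySQL's or DRBD's).
import time


class Participant:
    """Stand-in for a local or remote replica with some network latency."""

    def __init__(self, name, latency_ms):
        self.name = name
        self.latency_ms = latency_ms

    def prepare(self, txn_id):
        time.sleep(self.latency_ms / 1000.0)  # round trip to the far end
        return True  # vote to commit

    def commit(self, txn_id):
        time.sleep(self.latency_ms / 1000.0)
        return True


def two_phase_commit(participants, txn_id):
    # Phase 1: every participant must acknowledge the prepare...
    if not all(p.prepare(txn_id) for p in participants):
        return False  # any "no" vote aborts the transaction
    # Phase 2: ...before the commit is made durable everywhere.
    for p in participants:
        p.commit(txn_id)
    return True


if __name__ == "__main__":
    local = Participant("local-disk", latency_ms=1)
    remote = Participant("multi-az-replica", latency_ms=20)
    start = time.time()
    two_phase_commit([local, remote], txn_id=42)
    # Total time is dominated by the slowest (remote) acknowledgment.
    print("commit took %.0f ms" % ((time.time() - start) * 1000))
```

The point of the sketch is simply that every commit pays the round trip to the slowest acknowledger, which is why those extra milliseconds show up on every write.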

2. INSUFFICIENT CACHING

...

Click to read more ...

Wednesday
Jun 29, 2011

Second Hand Seizure : A New Cause of Site Death

Like a digital SWAT team imploding the wrong door on a raid, the FBI seized multiple racks of computers from DigitalOne. Those racks hosted websites for many clients who just happened to share them with whomever was being investigated. Downed sites include Instapaper, Curbed Network, and Pinboard. With the density of servers these days, many thousands of sites could easily have been affected.

Sites like Pinboard were victims by association; they did not inhale. This is an association sites have no control over. On a shared hosting service, you have no control over your fellow VM mates. In a cloud or a managed service, you have no control over which racks your servers are in. So, like secondhand smoke, you get the disease by random association. There's something inherently unfair about that.

A comment by illumin8 shows just how Darth insidious this process can be:

Click to read more ...

Friday
Mar 14, 2008

Problem: Mobbing the Least Used Resource Error

A thoughtful reader recently suggested creating a series of posts based on real-life problems people have experienced and the solutions they've created to slay the little beasties. It's a great idea. Often we learn best from great trials and tribulations. I'll start off the new "Problem Report" feature with a diabolical little problem I dubbed the "Mobbing the Least Used Resource Error." Please post your own. And if you know someone with an interesting problem report, please tag them too. It could be a lot of fun. Of course, feel free to scrub your posts of all embarrassing details, but be sure to keep the heroic parts in :-)

The Problem

There's an unexpected and frequently fatal type of error that can happen when new resources are added to a horizontally scaled architecture. Because the new resource has the least of something, load or connections or whatever, a load balancer configured with a least-used metric will instantaneously direct all new traffic to that new resource. And bam! Your system dies. All the traffic that was meant to be spread across your entire cluster is now directed like a laser beam at one small part of it.

I love this problem because it's such a Heisenberg. Everyone is screaming for more storage space, so you bring up a new filer. All new data streams flow to the new filer, and it crumbles and crawls because it can't handle the load for the entire system. It's the very act of turning up more storage that brings your system down. How "cruel world, the universe hates me" is that? Or say you add database slaves to handle load. Your load balancer redirects traffic to the new slaves, but the slaves are trying to sync, yet they can't sync because they are getting hammered by the new traffic. Down goes Frazier.

This is the dark side of partitioning. You partition data to get high performance via parallelization. For example, you hash on the user name to a cluster dedicated to handling those users. Unless your system is very flexible, you can't scale anymore by adding resources because you can't repartition the data. All users are handled by their cluster. If you want a different organization, you would have to redistribute data across all the clusters. Most systems can't handle that, and you end up not being able to scale out as easily as you hoped.
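Here is a minimal, hypothetical simulation of that failure mode: a "least connections" policy aims every new arrival at a freshly added, empty server, while round-robin spreads the same arrivals evenly. The server names, connection counts, and request volume are all made up for illustration; this is not any particular load balancer.

```python
# Illustrative sketch: how a "least connections" policy mobs a new server.
# Names and numbers are invented; this models long-lived connections that
# barely drain, as with filers or syncing database slaves.
from collections import Counter


def least_connections(servers, connections):
    return min(servers, key=lambda s: connections[s])


def round_robin(servers, i):
    return servers[i % len(servers)]


servers = ["old-1", "old-2", "old-3"]
connections = Counter({s: 1000 for s in servers})  # warm servers, already busy
servers.append("new-4")
connections["new-4"] = 0  # the freshly added, empty resource

mobbed = Counter()
spread = Counter()
for i in range(300):  # 300 new requests arrive after the new server joins
    target = least_connections(servers, connections)
    connections[target] += 1  # connections pile up instead of draining
    mobbed[target] += 1
    spread[round_robin(servers, i)] += 1

print("least connections:", dict(mobbed))  # nearly everything hits new-4
print("round robin:      ", dict(spread))  # spread evenly across all four
```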

The Solution

The solution depends of course on the resource in question. But knowing a potential problem is present gives you the heads-up you need to avoid destruction.
  • For filers, migrate storage from existing filers to the new filers so storage is evened out. Then new storage will be allocated evenly across all the filers.
  • For services, have a life-cycle state machine indicating when a service is up and ready for work. Simply being alive doesn't mean it's ready.
  • Use consistent hashing to assign resources to a pool of servers in a scalable fashion (see the sketch after this list).
  • For servers, use random or round-robin balancing when the load balancer can receive incorrect feedback from pool servers. The Thundering Herd Problem is supposedly the same problem described here, but it doesn't seem the same to me.
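As a rough illustration of the consistent-hashing item above, here's a minimal hash ring sketch in Python. The node and key names are invented, and a production ring would use a better hash plus replication and failure handling; the point is only that adding a node picks up a fair slice of keys instead of attracting everything.

```python
# Minimal consistent-hash ring sketch (illustrative; real rings also handle
# replication, node failure, and rebalancing).
import bisect
import hashlib


def _hash(key):
    return int(hashlib.md5(key.encode()).hexdigest(), 16)


class HashRing:
    def __init__(self, nodes=(), vnodes=100):
        self.vnodes = vnodes      # virtual nodes smooth out the distribution
        self._ring = []           # sorted list of (hash, node) points
        for node in nodes:
            self.add(node)

    def add(self, node):
        for i in range(self.vnodes):
            bisect.insort(self._ring, (_hash("%s#%d" % (node, i)), node))

    def get(self, key):
        # Walk clockwise from the key's position to the next node point.
        idx = bisect.bisect(self._ring, (_hash(key),)) % len(self._ring)
        return self._ring[idx][1]


ring = HashRing(["filer-1", "filer-2", "filer-3"])
before = {k: ring.get(k) for k in ("user:%d" % i for i in range(1000))}
ring.add("filer-4")  # add capacity
after = {k: ring.get(k) for k in before}
moved = sum(1 for k in before if before[k] != after[k])
print("keys remapped after adding a node: %d of 1000" % moved)  # roughly 1/4
```

Note that the new node only takes over roughly its fair share of keys, rather than becoming the target of all new traffic the way a least-used metric would make it.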

Click to read more ...

Wednesday
Aug 29, 2007

Skype Failed the Boot Scalability Test: Is P2P fundamentally flawed?

Skype's 220 million users lost service for a stunning two days. The primary cause of Skype's nightmare (can you imagine the beeper storm that went off?) was a massive global roll-out of a Windows patch that triggered the simultaneous reboot of millions of machines across the globe. The secondary cause was a bug in Skype's software that prevented "self-healing" in the face of such attacks. The flood of log-in requests and a lack of "peer-to-peer resources" melted their system.

Whose fault is it? Is Skype to blame? Is Microsoft to blame? Or is the peer-to-peer model itself fundamentally flawed in some way? Let's be real, how could Skype possibly test booting 220 million servers over a random configuration of resources? Answer: they can't. Yes, it's Skype's responsibility, but they are in a bit of a pickle on this one. The boot scenario is one of the most basic and one of the most difficult scalability scenarios to plan for and test. You can't simulate the viciousness of real-life conditions in a lab because only real life has the variety of configurations and the massive resources needed to simulate itself. It's like simulating the universe. How do you simulate the universe if the computational matrix you need is the universe itself? You can't. You end up building smaller models, and those models sometimes fail.

I worked at a set-top company for a while, and our big boot scenario was the restart of entire neighborhoods after a power failure. To make an easy upgrade path, each set-top downloaded its image from the head-end on boot; only a boot image was in EEPROM. This is a very stressful scenario for the system. How do you test it? How do you test thousands of booting set-tops when they don't even exist yet? How do you test the network characteristics of a cable system in the lab? How do you design a system not to croak under the load?

Cleverness. One part of the solution was really cool. The boot images were continually broadcast over the network so each set-top could pick up blocks of the boot image. The image would be stitched together from blocks rather than having thousands of boxes individually download images, which would never work. This massively reduced the traffic over the network. Clever tricks like this can get you a long way.

Work. Great pools of workstations were used to simulate set-tops, and software was written to insert drops and simulate asymmetric network communications. But how could we ever simulate 220 million different users? Back then, no way. Maybe now you could use grid services like Amazon's EC2.

Help from your friends. Microsoft is not being a good neighbor. They should roll out updates at a much more gradual rate so these problems don't happen. Booting loads networks, taxes CPUs, fills queues, drops connections, stresses services, increases process switching, drops packets, encourages deadlock, and steals RAM, file descriptors, and other resources. So it would be nice if MS was smarter about their updates. But since you can't rely on such consideration, you always have to handle the load. I assume they used exponential backoff algorithms to limit login attempts, but with so many people this probably didn't matter. Perhaps they could insert a random wait to smooth out login traffic. But again, with so many people it probably wouldn't matter. Perhaps they could stop automatic logins on boot? That would solve the problem at the expense of user convenience. No go.
Perhaps their servers could be tuned to accept connections at a fast rate yet throttle how quickly they respond to the rest of the login process? Not good enough, I suppose. So how did Skype fix their problem? They explain it here: "The parameters of the P2P network have been tuned to be smarter about how similar situations should be handled. Once we found the algorithmic fix to ensure continued operation in the face of high numbers of client reboots, the efforts focused squarely on stabilizing the P2P core. The fix means that we've tuned Skype's P2P core so that it can cope with simultaneous P2P network load and core size changes similar to those that occurred on August 16."

Whenever I see the word "tune" I get the premonition shivers. Tuning means you are just one unexpected problem away from being out of tune and your perfectly functioning symphony sounding like a band of percussion-happy monkeys. Tuned things break under change. Tweak the cosmological constant just a little and wham, there's no human life. It needs to work by design. Or it needs to be self-adaptive and not finessed by human hands for each new disaster scenario.

And this is where we get into the nature of P2P. Would the same problem have happened in a centralized architecture with resources spread strategically throughout the globe and automatic load balancing between different data centers? In a centralized model, would it have been easier to bring more resources online to handle the load? Would the outage have been easier to diagnose and lasted a much shorter amount of time? There are of course no definitive answers to these questions. But many of the web's most successful systems, like YouTube, Amazon, eBay, Google, GoogleTalk, and Flickr, use a centralized model. They handle millions of users and massive amounts of content and have pretty good reliability records. Does P2P bring enough to the architecture that you should build a system around it? That to me is the interesting question that arises out of this incident.
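The post speculates earlier about exponential backoff and randomized waits for login retries. A minimal sketch of that idea is below, assuming a hypothetical try_login() call; all constants are invented and nothing here reflects Skype's actual client.

```python
# Illustrative sketch of exponential backoff with random jitter for login
# retries. try_login() and all constants are hypothetical, not Skype's client.
import random
import time


def try_login():
    """Stand-in for a real login attempt; returns True on success."""
    return random.random() < 0.2  # pretend the service accepts 20% of attempts


def login_with_backoff(max_attempts=8, base_delay=1.0, max_delay=300.0):
    for attempt in range(max_attempts):
        if try_login():
            return True
        # Exponential backoff: 1s, 2s, 4s, ... capped at max_delay.
        delay = min(max_delay, base_delay * (2 ** attempt))
        # Full jitter: spread clients out so a reboot storm doesn't produce
        # synchronized waves of retries hitting the service all at once.
        time.sleep(random.uniform(0, delay))
    return False


if __name__ == "__main__":
    print("logged in:", login_with_backoff())
```

As the post notes, with 220 million clients rebooting at once even well-behaved backoff mostly reshapes the login curve rather than flattening it, which is why the deeper architectural questions above still matter.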

Related Articles

  • Vanilla Skype Part 2. A detailed explanation of Skype's supernode architecture and the weakness of using your end users as your redundancy strategy.

Click to read more ...