Wednesday, November 13, 2013

Google: Multiplex Multiple Workloads on Computers to Increase Machine Utilization and Save Money

Jeff Dean gave a talk at SFBay ACM, and at about 3 minutes in he goes over how Google runs jobs on its machines, which is different from how most shops distribute workloads.

It’s common for machines to be dedicated to one service: run a database, run a cache, run this, or run that. The logic is:

  • Better control over responsiveness, as you generally know the traffic load a machine will experience and you can overprovision a box to be safe.

  • Easier to manage, load balance, configure, upgrade, create, and make highly available. Since you know what a machine does, another machine can be provisioned to do the same work.

The problem is that monocropping hardware, though conceptually clean for humans and safe for applications, is hugely wasteful. Machines are woefully underutilized, even in a virtualized world.

What Google does instead is use a shared environment in the datacenter where all kinds of stuff runs on each computer. Batch and interactive computations run together on the same machine. Each machine has (or may have): Linux, a file system chunkserver, a scheduling system, other system services, random applications, higher level system services like a Bigtable tablet server, a CPU intensive job, one or more MapReduce jobs, and more random applications.
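To make the idea concrete, here is a toy sketch of packing mixed workloads onto shared machines. This is not Google's actual scheduler (their real system is far more sophisticated); the job shapes, machine sizes, and first-fit policy are made up purely for illustration:

```python
# Toy illustration of multiplexing mixed workloads onto shared machines.
# NOT Google's scheduler: just a first-fit sketch showing how jobs with
# different CPU/RAM shapes can share the same boxes.

machines = [{"cpu": 16.0, "ram": 64.0, "jobs": []} for _ in range(3)]

jobs = [  # (name, CPU cores, RAM in GB) -- made-up numbers
    ("bigtable-tablet-server", 4.0, 16.0),
    ("fs-chunkserver",         2.0,  8.0),
    ("mapreduce-worker",       6.0, 12.0),
    ("cpu-intensive-batch",    8.0,  4.0),
    ("random-app",             1.0,  2.0),
]

def first_fit(job, machines):
    """Place a job on the first machine with enough spare capacity."""
    name, cpu, ram = job
    for m in machines:
        if m["cpu"] >= cpu and m["ram"] >= ram:
            m["cpu"] -= cpu
            m["ram"] -= ram
            m["jobs"].append(name)
            return True
    return False  # a real system would queue or preempt here

for job in jobs:
    first_fit(job, machines)

for i, m in enumerate(machines):
    print(f"machine {i}: {m['jobs']} (spare: {m['cpu']} cores, {m['ram']} GB)")
```

Running it shows most jobs stacked onto the first machine, with the third left idle: the utilization win comes from consolidating work instead of spreading one job per box.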

The benefit of giving a machine lots to do is greatly increased utilization. Machines these days are like race horses pulling a plow: they need to be set free to run with abandon. Machine monocropping, like big agriculture monocropping, is a relic of a past era. So we see here a parallel in the computer world with the permaculture revolution taking over our food production system. Doing one thing and one thing only leads to inefficiency.

The downside of running multiple workloads on a machine is increased variability: constant change, which to humans looks like chaos, and we humans do not like chaos. You have jobs running in the foreground and background, constantly changing and bursting in CPU, memory, disk, and network usage, so nobody has guarantees, which means unpredictable performance. For interactive jobs, those where a user wants a response fast, it's especially troublesome.

With large fanout systems, meaning architectures where hundreds or thousands of different services are contacted to serve a single request, the result can be high variability in latency, as the quick calculation below shows.
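A back-of-the-envelope calculation shows why fanout makes the tail dominate. Assume, purely for illustration, that each backend responds slowly (say, above its 99th-percentile latency) 1% of the time; a request that waits on all its backends is slow whenever any one of them is:

```python
# Tail latency amplification under fanout: a request that must wait for
# all N backends is slow whenever ANY single backend is slow.
p_slow = 0.01  # assumed per-backend probability of a slow response

for n in (1, 10, 100):
    p_request_slow = 1 - (1 - p_slow) ** n
    print(f"fanout {n:>3}: {p_request_slow:.0%} of requests hit the tail")
```

With 100-way fanout, about 63% of requests see at least one slow backend, even though each individual backend is slow only 1% of the time.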

Google wants high machine utilization for its better power efficiency and for the money it saves on expensive servers. So Google needed to solve all the problems generated by higher machine utilization, which, as you might imagine, involves lots of cool solutions. More on that in Google On Latency Tolerant Systems: Making A Predictable Whole Out Of Unpredictable Parts.
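One of the techniques covered in that follow-up, and debated in the comments below, is backup requests: send the same request to more than one replica, take the first answer, and cancel the rest. Here is a minimal sketch; fetch() and its simulated latencies are hypothetical stand-ins for a real RPC:

```python
# Minimal sketch of backup (hedged) requests: query two replicas, take
# whichever answers first. fetch() is a hypothetical stand-in for an RPC.
import concurrent.futures
import random
import time

def fetch(replica, request):
    # Simulated replica call: usually fast, occasionally a latency spike.
    time.sleep(random.choice([0.01, 0.01, 0.01, 0.5]))
    return f"{replica} answered {request}"

def hedged_fetch(replicas, request):
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=len(replicas))
    futures = [pool.submit(fetch, r, request) for r in replicas]
    done, pending = concurrent.futures.wait(
        futures, return_when=concurrent.futures.FIRST_COMPLETED)
    for f in pending:
        # Best effort only: a queued call is dropped, but a running one
        # can't be interrupted here. A real system sends an explicit
        # cancellation message, the per-request overhead raised in the
        # comments below.
        f.cancel()
    pool.shutdown(wait=False)  # don't block on the slow replica
    return done.pop().result()

print(hedged_fetch(["replica-a", "replica-b"], "get /row/42"))
```

The latency win is that the request completes as fast as the fastest replica; the cost is the extra request and cancellation traffic, which is exactly the trade-off discussed in the comments.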

 


Reader Comments (5)

Not exactly a new idea; this is normal practice in large data centres for large, well-established organisations utilising mainframes and similar technologies. It's more complicated now due to the rise of systems integration, SOA, etc., but conceptually it's a very old idea.

November 14, 2013 | Unregistered CommenterDavid Murphy

I don't know if I would agree with this approach for application-level server bundling. It's great that they built something like this, yet I think this technology would be best used to pack VMs on a server and expose each VM as a server to their developers to run their applications on. Then Google could offer an AWS competitor that doesn't tie the app developer into Google App Engine. People really like having control over what OS, FS, and technology they use for their stack, IMHO. Good tech, just applied specifically to the Google way of doing things (which is rarely public enough to convince outsiders it's good).

November 14, 2013 | Unregistered CommenterDathan Vance Pattishall

Am I wrong, or could plan9 or inferno be nice solutions for implementing a similar workload balancing pattern?

November 14, 2013 | Unregistered Commenterpauldub

If every request needs to be cancelled on the backup server, that is 2x the requests flowing on the network, plus extra work and state to maintain on the server where cancellation happens. This cost grows in direct proportion to the number of requests. Unless your requests really rely on 100 servers, this model is flawed where request processing depends on only 2-4 servers.

November 16, 2013 | Unregistered Commentermaninder

Maninder, that depends on the topology of your datacenter network. Google designs with high cross-sectional bandwidth in mind, so the messaging traffic is designed in. http://highscalability.com/blog/2012/7/2/c-is-for-compute-google-compute-engine-gce.html

November 18, 2013 | Registered CommenterHighScalability Team
