Sunday, January 4, 2009

Paper: MapReduce: Simplified Data Processing on Large Clusters

Update: MapReduce and PageRank Notes from Remzi Arpaci-Dusseau's Fall 2008 class. The notes collect interesting facts about MapReduce and PageRank. For example, they trace how searches for the term "flu" have been handled across multiple generations of technology.

With Google entering the cloud space with Google AppEngine and Hadoop maturing as a product, the MapReduce approach to scaling may finally become standard programming practice. This is the best paper on the subject and an excellent primer on a content-addressable memory future.

Some interesting stats from the paper: Google executes 100,000 MapReduce jobs each day; more than 20 petabytes of data are processed per day; more than 10,000 distinct MapReduce programs have been implemented; the machines are dual-processor with gigabit Ethernet and 4-8 GB of memory.
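Taken together, those numbers imply an average of roughly 200 GB of input per job. A quick back-of-the-envelope check in Python (assuming decimal units, 1 PB = 10^6 GB):

    petabytes_per_day = 20
    jobs_per_day = 100_000
    gb_per_job = petabytes_per_day * 1_000_000 / jobs_per_day
    print(gb_per_job)  # 200.0 GB of data processed per job, on average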

One common criticism from ex-Googlers is that it takes months to get up to speed and become productive in the Google environment. Hopefully a way will be found to lower the learning curve and make programmers productive faster.

From the abstract:

MapReduce is a programming model and an associated implementation for processing
and generating large datasets that is amenable to a broad variety of real-world tasks.
Users specify the computation in terms of a map and a reduce function, and the underlying
runtime system automatically parallelizes the computation across large-scale clusters of
machines, handles machine failures, and schedules inter-machine communication to make efficient
use of the network and disks. Programmers find the system easy to use: more than ten
thousand distinct MapReduce programs have been implemented internally at Google over the
past four years, and an average of one hundred thousand MapReduce jobs are executed on
Google’s clusters every day, processing a total of more than twenty petabytes of data per day.
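
To make the programming model concrete, here is a minimal sketch in Python of the paper's canonical word-count example (the paper presents it in C++-style pseudocode). The run_mapreduce driver is a hypothetical single-process stand-in for the distributed runtime: it reproduces only the group-by-key shuffle, not the parallelization, fault tolerance, or network scheduling the abstract describes.

    from collections import defaultdict

    def word_count_map(doc_name, contents):
        # Map: emit an intermediate (word, 1) pair for every word in the document.
        for word in contents.split():
            yield (word, 1)

    def word_count_reduce(word, counts):
        # Reduce: sum all counts emitted for the same word.
        yield (word, sum(counts))

    def run_mapreduce(inputs, map_fn, reduce_fn):
        # Hypothetical toy driver: the real runtime partitions intermediate
        # keys across many reduce workers and reruns tasks on machine failure.
        intermediate = defaultdict(list)
        for key, value in inputs:
            for k, v in map_fn(key, value):
                intermediate[k].append(v)
        results = []
        for k in sorted(intermediate):  # reduce workers see keys in sorted order
            results.extend(reduce_fn(k, intermediate[k]))
        return results

    docs = [("doc1", "the quick brown fox"), ("doc2", "the lazy dog")]
    print(run_mapreduce(docs, word_count_map, word_count_reduce))
    # [('brown', 1), ('dog', 1), ('fox', 1), ('lazy', 1), ('quick', 1), ('the', 2)]

The appeal of the model is visible even in this toy: the user writes only the two pure functions, and everything between them, however it is implemented, can be scaled out without changing user code.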


Thanks to Kevin Burton for linking to the complete article.

Related Articles

  • MapReducing 20 petabytes per day by Greg Linden
  • 2004 Version of the Article by Jeffrey Dean and Sanjay Ghemawat

Reader Comments

The intro CS course at UC Berkeley teaches map/reduce and has built out infrastructure so students can do an assignment that requires it. Things will get interesting when these kids get to companies that don't have map/reduce and start demanding it.

Unregistered Commenter jedberg


