Friday, September 21, 2012

Stuff The Internet Says On Scalability For September 21, 2012

It's HighScalability Time:

  • @5h15h: Walmart took 40 years to get their data warehouse at 400 terabytes. Facebook probably generates that every 4 days
  • Should your database fail over automatically or wait for the guiding hands of a helpful human? Jeremy Zawodny, in Handling Database Failover at Craigslist, says Craigslist and Yahoo! handle failovers manually. Detecting that a failure has actually happened is so error prone that it's better to keep a human circuit breaker in the loop. Others think this could be an SLA buster, since write requests can't be processed while the decision is being made. The main issue: knowing that anything is true in a distributed system is hard. A minimal human-in-the-loop sketch appears after this list.
  • Review of a paper about scalable things, MPI, and granularity. If you like to read informed critiques that begin with phrases like "this is simply not true" or "utter garbage" then you might find this post by Sébastien Boisvert to be entertaining.
  • The Big Switch: How We Rebuilt Wanelo from Scratch and Lived to Tell About It. Complete rewrites can work...sometimes. In two months they switched from Java/XML, MySQL, and cowboy coding to RoR, PostgreSQL, and TDD. Good description of the handoff process. Sounds like a huge shot of redesign helped fix previous mistakes, which is the holy grail of rewrites. It's a story of hope...but don't all stories of temptation begin with hope too?
  • This vision will take advances in security, but it's one I obviously agree with; I even came up with the same name - The Resource-as-a-Service (RaaS) Cloud: Over the next few years, a new model of buying and selling cloud computing resources will evolve. Instead of providers exclusively selling server-equivalent virtual machines for relatively long periods of time (as done in today’s IaaS clouds), providers will increasingly sell individual resources (such as CPU, memory, and I/O resources) for a few seconds at a time.
  • Software Needs to be 100x Better says James Larus from Microsoft: We are not even trying to make efficient systems today, throwing away billions of clock cycles on plain pure overhead. Example: IBM investigated the conversion of a SOAP (text) date to a Java date object in the IBM Trade benchmark: 268 function calls and 70 object allocations. There is great modularity and nothing obviously wrong in the code. About 20% of memory is used to hold actual data; the rest is hash tables, object management overhead, etc.
  • Distributed Algorithms in NoSQL Databases.  Ilya Katsov with an epic post providing a "systematic description of techniques related to distributed operations in NoSQL databases" by looking at data consistency, data placement, and system coordination. Lots of cool diagrams along with both deep and broad coverage. Well worth reading.
  • How to Launch a 65Gbps DDoS, and How to Stop One. Fascinating story of handling big time DDoS attacks. That they talk so matter-of-factly about huge data flows shows how dangerous a world it is out there. Fun fact: to launch a 65Gbps attack, you'd need a botnet with at least 65,000 compromised machines, each capable of sending 1Mbps of upstream data. But to run a discount attack you can use DNS reflection to amplify the traffic. Defeating attacks uses a mix of strategies, including the stunning use of Anycast to dilute traffic across a cluster of datacenters. Attackers think they are generating a huge stream of pain, but when it's fanned out across enough datacenters there's no pain at all.
  • Webcrawling at Scale with Nokogiri and IronWorker (Part 2): The core part of performing any type of work at scale is to break the work into discrete tasks and workload units. A rough batching sketch appears after this list.
  • Nice look at the ideas of elasticity, scalability, and ACID in SQL SERVER – Core Concepts – Elasticity, Scalability and ACID Properties – Exploring NuoDB an Elastically Scalable Database System.
  • Disks from the Perspective of a File System. As Dr. House says, every disk lies. It's the job of the filesystem to heal the data anyway.
  • Hot Cloud '12 videos are now available.
  • Stephen Whitmore in Stripe CTF Post Mortem with a fun and detailed look at specific attack vectors: SQL injection, dangerous PHP functions, file uploads, advanced SQL injection, and a few more. Phasers on!
  • Doing redundant work to speed up distributed queries. Peter Bailis asks when the cost of redundant read requests to reduce tail latencies is worth paying. Different databases have different answers: Cassandra now sends reads to the minimum number of replicas 90% of the time and to all replicas 10% of the time, primarily for consistency purposes. LinkedIn’s Voldemort also uses a send-to-minimum strategy. Basho Riak chooses the “true” Dynamo-style send-to-all read policy. A toy version of the send-to-minimum policy is sketched after this list.
  • Why I Migrated Away From MongoDB. Usual hyperbole in either direction, but the discussion should help you figure things out for yourself.
  • Grid Computing with Fault-Tolerant Actors and ZooKeeper (at eBay): We developed two primary patterns to make actors highly reliable. The first is to model critical actor state and mailboxes in ZooKeeper. The second is to use the ZooKeeper watch mechanism for actor supervision and recovery. < Excellent source of information if you are looking to add higher reliability to your system. A minimal ZooKeeper supervision sketch appears after this list.
  • Data Modeling - JSON vs Composite columns. Reasons to shift away from using JSON with Cassandra. Opaque data has its downsides. Also, Is Cassandra right for me? is a good Google Groups discussion.
  • Direct GPU/FPGA Communication Via PCI Express: This paper presents a mechanism for direct GPU-FPGA communication and characterizes its performance in a full hardware implementation.
  • RAMCube: Exploiting Network Proximity for RAM-Based Key-Value Store: This paper presents RAMCube, a DCN-oriented design for RAM-based key-value store based on the BCube network [9]. RAMCube exploits the properties of BCube to restrict all failure detection and recovery traffic within one-hop neighborhood, and leverages BCube’s multiple paths to handle switch failures. Prototype implementation demonstrates that RAMCube is promising to achieve high performance I/O and fast failure recovery in large-scale DCNs.
  • Big Data and the Big Apple: Understanding New York City using Millions of Check-ins. Foursquare sees check-in data as a microscope for cities, letting them inspect cities at a higher resolution than ever before. Look into the microscope and see New York City through a veil of Big Data. Through check-in data you can deduce the outline of buildings and the real boundaries of neighborhoods. You can see how much ice cream consumption goes up as it gets hotter and that museums get busier as it gets colder. When a new place opens you can track whether patrons are arriving from a new area of the social graph or whether it's growing virally within an existing social graph. Watch as a coffee shop's attendance spreads through a social graph. We've learned social exposure could be more important than location exposure. With tracking data you can gauge expertise and loyalty by looking at location, time of day, check-in history, friends' preferences, venue similarities, and aggregate historical data.
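
A few sketches for the items above. First, the human-in-the-loop failover idea from the Craigslist item: the monitor only pages an operator rather than promoting a replica on its own. The host name, probe, threshold, and paging hook here are hypothetical illustrations, not Craigslist's actual tooling.

```python
import socket
import time

PRIMARY = ("db-primary.example.com", 3306)   # hypothetical primary address
FAILURE_THRESHOLD = 3                        # consecutive failures before paging (arbitrary)

def primary_is_healthy(addr, timeout=2.0):
    """Crude health probe: can we open a TCP connection to the primary?"""
    try:
        with socket.create_connection(addr, timeout=timeout):
            return True
    except OSError:
        return False

def page_operator(message):
    """Hypothetical alerting hook (email, pager, etc.)."""
    print("PAGE:", message)

def monitor_primary():
    failures = 0
    while True:
        if primary_is_healthy(PRIMARY):
            failures = 0
        else:
            failures += 1
            if failures >= FAILURE_THRESHOLD:
                # Deliberately do NOT promote a replica here: a "down" primary
                # may only be partitioned away from the monitor, so the final
                # failover decision is left to a human.
                page_operator("primary unreachable after %d checks; "
                              "manual failover decision needed" % failures)
                failures = 0
        time.sleep(5)
```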
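
For the IronWorker item, a rough illustration of breaking a crawl into discrete workload units. The `enqueue_task` function is a hypothetical stand-in for whatever queue/worker API you use (the article uses IronWorker); it is not the IronWorker client API, and the batch size is arbitrary.

```python
BATCH_SIZE = 50  # URLs per worker task (arbitrary)

def chunk(items, size):
    """Yield successive fixed-size batches from a list."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def enqueue_task(payload):
    """Hypothetical: hand one workload unit to the worker service."""
    print("queued batch of", len(payload["urls"]), "urls")

def schedule_crawl(urls):
    # Each batch becomes an independent task, so the crawl scales out by
    # simply running more workers in parallel.
    for batch in chunk(urls, BATCH_SIZE):
        enqueue_task({"urls": batch})

if __name__ == "__main__":
    schedule_crawl(["https://example.com/page/%d" % i for i in range(230)])
```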
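
For Peter Bailis's redundant-reads question, a toy version of the send-to-minimum-most-of-the-time policy described for Cassandra. The replica names, the thread-pool fan-out, and the constants are illustrative only, not Cassandra's implementation.

```python
import random
from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait

REPLICAS = ["replica-a", "replica-b", "replica-c"]   # hypothetical replica set
SEND_TO_ALL_CHANCE = 0.10                            # ~10% of reads fan out to all replicas

def read_from(replica, key):
    """Hypothetical single-replica read."""
    return ("value-for-%s" % key, replica)

def read(key, required=1):
    # Most of the time query only the minimum number of replicas; occasionally
    # fan out to all of them (redundant work that also helps detect and repair
    # inconsistencies, which is the consistency motivation the post mentions).
    if random.random() < SEND_TO_ALL_CHANCE:
        targets = REPLICAS
    else:
        targets = REPLICAS[:required]
    pool = ThreadPoolExecutor(max_workers=len(targets))
    futures = [pool.submit(read_from, r, key) for r in targets]
    done, _ = wait(futures, return_when=FIRST_COMPLETED)
    pool.shutdown(wait=False)          # don't block on straggling replicas
    return next(iter(done)).result()   # first answer wins
```

The tradeoff is the one the post explores: a little extra read load buys lower tail latency and a chance to catch inconsistencies.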
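
And for the eBay actor post, a minimal sketch of the two ZooKeeper patterns using the kazoo client, assuming an ensemble reachable at 127.0.0.1:2181. The paths, actor names, and recovery hook are hypothetical, not eBay's code.

```python
from kazoo.client import KazooClient

ZK_HOSTS = "127.0.0.1:2181"     # assumed local ZooKeeper ensemble
ACTORS_PATH = "/demo/actors"    # hypothetical path for actor registration

zk = KazooClient(hosts=ZK_HOSTS)
zk.start()
zk.ensure_path(ACTORS_PATH)

def register_actor(name, state=b""):
    # Pattern 1: keep critical actor state in ZooKeeper. The ephemeral node
    # also disappears automatically if the actor's session dies, which is what
    # lets the supervisor notice the failure.
    zk.create("%s/%s" % (ACTORS_PATH, name), state, ephemeral=True)

def recover_actor(name):
    """Hypothetical recovery hook: restart the actor, replay its mailbox, etc."""
    print("recovering actor", name)

known = set()

# Pattern 2: use the watch mechanism for supervision and recovery.
@zk.ChildrenWatch(ACTORS_PATH)
def supervise(children):
    # Called by kazoo whenever the set of registered actors changes.
    global known
    current = set(children)
    for missing in known - current:
        recover_actor(missing)
    known = current
```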

This week's selection:
