@mwinkle : Listening to NASA big data challenges at #hadoopSummit, the square kilometer array project will produce 700tb per second. TB. Per second.
@imrantech : #hadoopsummit @twitter - 400M tweets, 80-100TB per day
@r39132 : At Netflix talk at #hadoopsummit : 2 B hours streamed in Q4 2011, 75% of the 30M daily movie starts are sourced from recommendations
@nattybnatkins : Run job. Identify bottleneck. Address bottleneck. Repeat. Sage wisdom from @tlipcon on optimizing MR jobs #HadoopSummit
@chiradeep : mainframe cost of operation - $5k per MIP per year #hadoopsummit
@MCanalytics : #hadoopsummit Yahoo metrics - 140pb on 42k nodes with 500 users on 360k Hadoop jobs for 100b events/day Holy smokes!
@M_Wein : Domain expertise is the wave of the future: it's more about "Hadoop and Healthcare" than "Using Bayesian counters with Hadoop" #hadoopsummit
@wlodekpk : Staggering amount of science goes towards recommendation of your next move. Not sure if this is depressing or not #hadoopsummit
@wattsteve : A lot of architectures I'm seeing at #HadoopSummit focus on reducing data size so it can be loaded into an OLAP repo 4 low latency analysis
Twitter was an unexpected pleasure at the Hadoop Summit, with many interesting, high-quality talks. In Twitter at the Hadoop Summit, Dmitriy Ryaboy gives an overview of the talks and what was covered. I found Large-scale machine learning at Twitter especially well done, not so much for the use of Pig as for the process of mapping learning onto non-iterative aggregators.
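The non-iterative trick is worth pausing on: an online learner like stochastic gradient descent can be folded over the data one example at a time, so it behaves like any other single-pass aggregate rather than an iterative job. A minimal sketch of that idea (the class and method names are mine, not code from the talk):

```java
import java.util.HashMap;
import java.util.Map;

// A logistic-regression learner written as a single-pass aggregator:
// it consumes each example exactly once, like SUM or COUNT would,
// so it fits a non-iterative dataflow. Illustrative sketch only.
public class OnlineLogisticRegression {
    private final Map<String, Double> weights = new HashMap<>();
    private final double learningRate;

    public OnlineLogisticRegression(double learningRate) {
        this.learningRate = learningRate;
    }

    // Fold one (features, label) example into the model; label is 0 or 1.
    public void accumulate(Map<String, Double> features, int label) {
        double p = predict(features);
        double error = label - p;  // gradient of the log-loss
        for (Map.Entry<String, Double> f : features.entrySet()) {
            double w = weights.getOrDefault(f.getKey(), 0.0);
            weights.put(f.getKey(), w + learningRate * error * f.getValue());
        }
    }

    public double predict(Map<String, Double> features) {
        double dot = 0.0;
        for (Map.Entry<String, Double> f : features.entrySet()) {
            dot += weights.getOrDefault(f.getKey(), 0.0) * f.getValue();
        }
        return 1.0 / (1.0 + Math.exp(-dot));  // sigmoid
    }
}
```

Because accumulate() touches each example exactly once, the model has the same dataflow shape as any ordinary aggregate function.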
SPDY makes HTTP better, but for most websites, HTTP is not the bottleneck. Through performance tests, Guy Podjarny found something unexpected: on average, SPDY is only about 4.5% faster than plain HTTPS and is in fact about 3.4% slower than unencrypted HTTP. The reasons: web pages use many different domains, and SPDY works per domain; and web pages have other bottlenecks, which SPDY does not address.
Under the Hood: Hadoop Distributed Filesystem reliability with Namenode and Avatarnode. Andrew Ryan explains in great detail how Facebook uses HDFS to store 100PB of data. HDFS was involved in 41% of Data Warehouse failures, but only 10% of those were caused by the single-point-of-failure Namenode. A highly available Namenode would minimize these incidents and "allows us to perform scheduled maintenance on hardware and software. In fact, we estimate that it would eliminate 50% of our planned downtime, time in which the cluster would be unavailable." The rest of the article explains how Zookeeper is used to build an HA Namenode.
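The article covers Facebook's specific design; the generic version of the ZooKeeper trick is leader election over an ephemeral znode: the active Namenode holds the znode, and when its session dies ZooKeeper deletes it, letting a standby claim the role. A rough sketch of that pattern using the standard ZooKeeper client API (the path, timeout, and class are illustrative, not Facebook's code):

```java
import org.apache.zookeeper.*;

// Sketch of ZooKeeper-based failover: the active node holds an
// ephemeral znode; if its session dies, ZooKeeper deletes the znode
// and a standby can claim it. Illustrative only -- not Facebook's
// actual Avatarnode code.
public class NamenodeFailoverSketch implements Watcher {
    private static final String LOCK_PATH = "/namenode-active";
    private final ZooKeeper zk;

    public NamenodeFailoverSketch(String connectString) throws Exception {
        this.zk = new ZooKeeper(connectString, 5000, this);
    }

    // Returns true if this node became the active Namenode.
    public boolean tryBecomeActive(String myId) throws Exception {
        try {
            // Ephemeral: the znode disappears if our session dies,
            // which is what lets a standby detect failure and take over.
            zk.create(LOCK_PATH, myId.getBytes(),
                      ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
            return true;                  // we are active
        } catch (KeeperException.NodeExistsException e) {
            zk.exists(LOCK_PATH, true);   // watch for the active node dying
            return false;                 // we are standby
        }
    }

    @Override
    public void process(WatchedEvent event) {
        // Fires when the active node's znode goes away; a real
        // implementation would retry tryBecomeActive() here.
    }
}
```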
Dmitry Vyukov clearly defines, as much as possible, the terms "lock-free" and "wait-free", saying they are about forward-progress guarantees, not performance. Wait-freedom: each thread moves forward regardless of external factors, like contention from or blocking by other threads. Lock-freedom: the system as a whole moves forward, regardless of anything. Also see An Introduction to Lock-Free Programming.
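The difference is easiest to see in code. The classic Treiber stack below (a standard textbook example, not Vyukov's code) is lock-free but not wait-free: some thread's compare-and-swap always succeeds, so the system as a whole makes progress, yet any individual thread can lose the race indefinitely.

```java
import java.util.concurrent.atomic.AtomicReference;

// Treiber stack: lock-free but not wait-free. Some thread's
// compareAndSet always succeeds, so the system makes progress, but a
// single thread can keep losing the CAS race under contention.
public class LockFreeStack<T> {
    private static class Node<T> {
        final T value;
        Node<T> next;
        Node(T value) { this.value = value; }
    }

    private final AtomicReference<Node<T>> head = new AtomicReference<>();

    public void push(T value) {
        Node<T> node = new Node<>(value);
        do {
            node.next = head.get();
        } while (!head.compareAndSet(node.next, node));  // retry on contention
    }

    public T pop() {
        Node<T> current;
        do {
            current = head.get();
            if (current == null) return null;            // empty stack
        } while (!head.compareAndSet(current, current.next));
        return current.value;
    }
}
```

By contrast, a counter built on a hardware fetch-and-add instruction is wait-free: every increment completes in a bounded number of steps no matter what other threads do.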
Be generous in what you accept. Heroku had some downtime because a process died on an unexpected message type. Dying like that is fine for testing, but in production, log it and move on.
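The pattern itself is a one-liner: fail fast on unknown input in tests, log and keep serving in production. A sketch with hypothetical message types (this is not Heroku's code):

```java
import java.util.logging.Logger;

// "Be generous in what you accept": an unknown message type is logged
// and skipped rather than killing the process. Message names here are
// made up for illustration.
public class MessageHandler {
    private static final Logger log =
            Logger.getLogger(MessageHandler.class.getName());

    public void handle(String type, byte[] payload) {
        switch (type) {
            case "JOB_STARTED":  onJobStarted(payload);  break;
            case "JOB_FINISHED": onJobFinished(payload); break;
            default:
                // In a test build this could be an assertion failure;
                // in production, record it and move on.
                log.warning("Ignoring unexpected message type: " + type);
        }
    }

    private void onJobStarted(byte[] payload)  { /* ... */ }
    private void onJobFinished(byte[] payload) { /* ... */ }
}
```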
Amazon's recent downtime was triggered by a network configuration change that caused problems in EBS. During an upgrade, traffic on a high-capacity network was shifted to a low-capacity network, which caused EBS nodes to think they were isolated and set off a re-mirroring storm. Some customers lost data, but those operating in multiple AZs were protected. Under-spec'ing your backup equipment rarely works out well.
A C10K Websocket Test found Erlang was the winner, with 0 timeouts and the lowest latency, implying all messages were being processed quickly.
The architecture of the web workhorse nginx is described in The Architecture of Open Source Applications: "Because nginx does not fork a process or thread per connection, memory usage is very conservative and extremely efficient in the vast majority of cases. nginx conserves CPU cycles as well because there's no ongoing create-destroy pattern for processes or threads."
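nginx itself is written in C, but the event-loop shape the chapter describes is easy to sketch with Java NIO: one thread multiplexes every connection through a selector, so there is no per-connection process or thread to create and destroy. A minimal echo server in that style (a sketch of the pattern, not nginx's code):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.*;

// One thread, many connections: per-connection cost is roughly one
// buffer plus a selector registration, with no create-destroy churn.
public class EventLoopEchoServer {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(8080));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        while (true) {
            selector.select();                       // block until any socket is ready
            for (SelectionKey key : selector.selectedKeys()) {
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ,
                                    ByteBuffer.allocate(4096));
                } else if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    ByteBuffer buf = (ByteBuffer) key.attachment();
                    if (client.read(buf) == -1) {    // peer closed
                        client.close();
                        continue;
                    }
                    buf.flip();
                    client.write(buf);               // echo back what we read
                    buf.compact();
                }
            }
            selector.selectedKeys().clear();
        }
    }
}
```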
A case for MariaDB’s Hash Joins. Really interesting description of what hash joins are and what they are good for: "Hash joins work best when you are joining very big tables with no WHERE clause, or a WHERE clause on a non-indexed column."
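The mechanics fit in a short sketch: build an in-memory hash table on the smaller table's join key, then stream the larger table past it, so neither side needs an index. A toy version of the idea (not MariaDB's implementation):

```java
import java.util.*;

// Classic in-memory hash join: build a hash table on the smaller
// relation's join key, then probe it while scanning the larger one.
// No index on either side is required, which is exactly the case
// (big tables, non-indexed join column) where hash joins shine.
public class HashJoin {
    public static List<String[]> join(List<String[]> small, int smallKey,
                                      List<String[]> big, int bigKey) {
        // Build phase: key -> rows from the smaller table
        Map<String, List<String[]>> table = new HashMap<>();
        for (String[] row : small) {
            table.computeIfAbsent(row[smallKey], k -> new ArrayList<>()).add(row);
        }
        // Probe phase: a single scan of the bigger table
        List<String[]> result = new ArrayList<>();
        for (String[] row : big) {
            for (String[] match : table.getOrDefault(row[bigKey],
                                                     Collections.emptyList())) {
                String[] joined = new String[match.length + row.length];
                System.arraycopy(match, 0, joined, 0, match.length);
                System.arraycopy(row, 0, joined, match.length, row.length);
                result.add(joined);
            }
        }
        return result;
    }
}
```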
Ben Stopford with a database Story about George. Told in parable form, this isn't your typical tech article, but we can hope the conclusion will not remain just a story.
Five short links by Pete Warden. Awesome article on TTYs. Back in the deep dark past I remember poring over the TTY driver to figure out what was going on. Here it's all spelled out.
As Rama says, another compelling compendium of Quick links with Greg Linden. May all our classrooms be flipped.
This week's selection: