Tuesday, November 9, 2010

The Tera-Scale Effect 

In the past year, Intel has issued a series of powerful chips under the new Nehalem microarchitecture, with large numbers of cores and extensive memory capacity. This new class of chips is part of a broader Intel initiative referred to as Tera-Scale Computing. Cisco has released its Unified Computing System (UCS), equipped with a unique extended memory and a high-speed network within the box, specifically geared to take advantage of this type of CPU architecture.

This new class of hardware has the potential to revolutionize the IT landscape as we know it.

In this post, I want to focus primarily on the potential implications for application architecture, and more specifically for the application platform landscape.

Reader Comments (4)

Huh. I'll toss out something speculative: in the long run, commodity computers will start to look more like Cisco's big new chassis. The theory is that scaling up with a large single coherent memory space gets nasty beyond a certain number of cores, so you have to make the network nature of memory writes explicit. Maybe the trick will be message passing, or NUMA, or dynamic "ownership" of pages of memory by particular cores, or special processor instructions to relax coherence requirements, or maybe it'll be *just* like Cisco's approach, with multiple logical machines and lots of interconnect among them. Or maybe I'm wrong, and chip makers will figure out approaches that require nothing more than rewriting software to reduce memory contention (the kind of thing in the "Scaling Linux to Many Cores" paper) -- anyway, it's something to think about.
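One way to picture the "reduce memory contention" idea mentioned above is a minimal C sketch (the thread count and cache-line size below are arbitrary assumptions, not anything from Intel's or Cisco's designs): give each thread its own cache-line-aligned counter and aggregate once at the end, so the cores stop bouncing a single shared line between their caches.

    /* Sketch: per-thread, cache-line-padded counters instead of one shared
       counter, so increments stay in each core's own cache line. */
    #include <pthread.h>
    #include <stdio.h>

    #define NTHREADS   4          /* arbitrary for the example */
    #define ITERS      10000000L
    #define CACHE_LINE 64         /* assumed line size */

    struct padded_counter {
        unsigned long value;
        char pad[CACHE_LINE - sizeof(unsigned long)];
    } __attribute__((aligned(CACHE_LINE)));

    static struct padded_counter counters[NTHREADS];

    static void *worker(void *arg) {
        struct padded_counter *c = arg;
        for (long i = 0; i < ITERS; i++)
            c->value++;            /* private cache line: no coherence traffic */
        return NULL;
    }

    int main(void) {
        pthread_t t[NTHREADS];
        unsigned long total = 0;

        for (int i = 0; i < NTHREADS; i++)
            pthread_create(&t[i], NULL, worker, &counters[i]);
        for (int i = 0; i < NTHREADS; i++)
            pthread_join(t[i], NULL);

        for (int i = 0; i < NTHREADS; i++)
            total += counters[i].value;   /* aggregate once, at the end */
        printf("total = %lu\n", total);
        return 0;
    }

Build with cc -O2 -pthread. The contended version of this is simply NTHREADS threads incrementing one shared counter; the padding and per-thread split are what keep the cache lines private.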

November 9, 2010 | Unregistered Commenter Randall

- I worry about the storage:CPU ratio in the Intel and Cisco stories. UCS looks OK for SAN + 10 GbE + VMs, but for large static apps some of that is overkill. 2x1 Gbps to separate top-of-rack switches gives you redundancy as well as good bandwidth at a lower cost, and Hadoop-style filesystems and app execution are a good fit for your code. If all the hardware is trying to do is reduce hardware costs by running VM images, someone is missing the point, and, unless those VM images are themselves fully automated, missing out on a lot of cost-saving opportunities.

- Also, anyone who thinks network costs have gone away isn't running a large enough datacentre. If every host has 10 Gbps, your backbone overloads, no router setup likes VMs moving around rapidly, and routing table updates become a bottleneck all of their own. Then there's the ingress/egress traffic. If you look at the big Hadoop clusters, they even have to move to caching DNS servers on every host, as even DNS lookups from the workers start to overload the upstream DNS servers.

To put it differently: if DNS traffic in your datacentre isn't overloading part of your network infrastructure, you aren't operating at scale yet :)
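A minimal sketch of the per-host DNS caching idea, assuming dnsmasq as the local cache (the upstream server address is a placeholder, not anything from the comment): point each worker's resolver at 127.0.0.1 so only cache misses travel upstream.

    # /etc/dnsmasq.conf on every worker: local caching resolver
    # (10.0.0.2 is a placeholder for the site's real upstream DNS server)
    listen-address=127.0.0.1
    # ignore /etc/resolv.conf when choosing upstream servers
    no-resolv
    server=10.0.0.2
    # keep hot records cached on the host
    cache-size=10000

    # /etc/resolv.conf on the same worker
    nameserver 127.0.0.1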

November 9, 2010 | Unregistered Commenter SteveL

How much is Intel paying you for this post?

November 9, 2010 | Unregistered Commenter ahm

ahm, 10% of all of Intel's revenues. Cool, huh?

November 9, 2010 | Registered Commenter HighScalability Team
