Another approach to replication

File replication based on erasure codes can cut total replica storage by a factor of two or more compared to keeping full copies.
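As a back-of-the-envelope illustration (not tied to any particular product), the simplest erasure code is a single XOR parity block: store k data blocks plus one parity block, and any one lost block can be rebuilt, at a storage cost of (k+1)/k instead of the 2x of mirroring or 3x of triple replication:

```python
def xor_parity(blocks):
    """Compute one parity block over k equal-sized data blocks."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

def rebuild(surviving_blocks, parity):
    """Recover a single lost block: XOR the survivors with the parity."""
    return xor_parity(list(surviving_blocks) + [parity])

data = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]   # k = 4 data blocks
parity = xor_parity(data)

# Lose one block; rebuild it from the other three plus the parity block.
lost = data[1]
recovered = rebuild([data[0], data[2], data[3]], parity)

# Storage overhead here: 5 blocks stored for 4 blocks of data = 1.25x,
# versus 2x for mirroring or 3x for triple replication.
```

Production systems use Reed-Solomon codes that tolerate multiple failures, but the overhead arithmetic works the same way.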
I've been using GWT for an application and I get the same feeling using it that I first got using HTML. I've always sucked at building UIs. Starting with programming HP terminals, moving on to the Apple Lisa, then X Windows, and Microsoft Windows, I just never had IT, whatever IT is. On the Beauty and the Geek scale my interfaces are definitely horn-rimmed and pocket-protector friendly. HTML helped free me from all that to just build stuff that worked, but didn't have to look all that great. Expectations were pretty low and I eagerly fulfilled them. With Ajax expectations have risen again and I find myself once more easily identifiable as a styleless geek. Using GWT I have some hope I can suck a little less. In working with GWT I was so focused on its tasty, easily digestible Ajaxy goodness that I didn't stop to think about the topic of this site: scalability. When I finally brought my distracted mind around to consider the scalability of the single-page web site I was building, I became a bit concerned. Many of the strategies typically used to achieve scalability don't seem to apply in single-page land. Here are the issues I see. Maybe you can tell me where I am off in my analysis?
Complex applications coordinating work across a lot of machines often need a high-performance, fault-tolerant messaging layer. Though a blast to write, it's probably a better use of your time to use an off-the-shelf solution. And that's where Spread comes in. Flickr, for example, uses Spread to create real-time event feeds from their web server logs. What exactly is Spread? From the Spread website:
Spread is an open source toolkit that provides a high performance messaging service that is resilient to faults across local and wide area networks. Spread functions as a unified message bus for distributed applications, and provides highly tuned application-level multicast, group communication, and point to point support. Spread services range from reliable messaging to fully ordered messages with delivery guarantees. Spread can be used in many distributed applications that require high reliability, high performance, and robust communication among various subsets of members. The toolkit is designed to encapsulate the challenging aspects of asynchronous networks and enable the construction of reliable and scalable distributed applications.

Some of the services and benefits provided by Spread:

- Reliable and scalable messaging and group communication.
- A very powerful but simple API simplifies the construction of distributed architectures.
- Easy to use, deploy and maintain.
- Highly scalable from one local area network to complex wide area networks.
- Supports thousands of groups with different sets of members.
- Enables message reliability in the presence of machine failures, process crashes and recoveries, and network partitions and merges.
- Provides a range of reliability, ordering and stability guarantees for messages.
- Emphasis on robustness and high performance.
- Completely distributed algorithms with no central point of failure.

In Building Scalable Web Sites Cal Henderson describes how Flickr uses Spread to create a log of real-time events, like photos uploaded and discussions started, as they happen. Spread is connected to their web servers. As photos are uploaded, these web server events are messaged in real time to agents consuming the feed. The advantage of this architecture is that it sheds load away from the database. Otherwise the database would have to be continuously polled for new events by each agent.
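The Flickr pattern described above, pushing events to subscribers instead of having each agent poll the database, can be sketched without Spread itself. Here's a minimal in-process publish/subscribe bus in Python (all names hypothetical; Spread's real API and guarantees are far richer):

```python
from collections import defaultdict

class EventBus:
    """Toy stand-in for a Spread-style group message bus (illustration only)."""

    def __init__(self):
        self._groups = defaultdict(list)  # group name -> subscriber callbacks

    def join(self, group, callback):
        # A subscriber joins a named group, analogous to joining a Spread group.
        self._groups[group].append(callback)

    def multicast(self, group, message):
        # The publisher pushes one message to every member of the group,
        # so no subscriber ever has to poll the database for new events.
        for callback in self._groups[group]:
            callback(message)

# Usage: web servers publish upload events; agents consume the feed.
bus = EventBus()
feed = []
bus.join("photo-events", feed.append)
bus.multicast("photo-events", {"event": "upload", "photo_id": 42})
```

The load-shedding win is in `multicast`: events flow outward once, instead of every agent issuing its own "anything new?" query against the database.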
Colin Charles has a cool picture showing Flickr's message telling him they'll need about 15 minutes to move his 11,500 images to another shard. One, that's a lot of pictures! Two, it just goes to show you don't have to make this stuff complicated. Sure, it might be nice if their infrastructure could auto-balance shards with no downtime and no loss of performance, but do you really need to go to all the extra complexity? The manual system works, and though Colin would probably have liked his service to stay up, I am sure his day will still be a pleasant one.
How do you keep a crescendo of data in sync between data centers over a slow WAN? That's the question Alberto posted a few weeks ago. Normally I'm not into all-boy bands, but I was frustrated there wasn't a really good answer for his problem. It occurred to me later that a WAN accelerator might help turn his slow WAN link into more of a LAN, so the overhead of copying files across the WAN wouldn't be so limiting. Many might not consider a WAN accelerator in this situation, but since my friend Damon Ennis works at the WAN accelerator vendor Silver Peak, I thought I would ask him if their product would help. Not surprisingly, his answer is yes! Potentially a lot, depending on the nature of your data. Here's a no-BS overview of their product:
A superb explanation by Theo Schlossnagle of how to deploy a high-availability load-balanced system using mod_backhand and Wackamole. The idea is you don't need to buy expensive redundant hardware load balancers; you can use the hosts you already have to the same effect. The discussion of peer-based HA solutions versus a single front-end HA device is well worth the read. Another interesting perspective in the document is to view load balancing as a resource allocation problem. There's also a nice discussion of the negative effect of keep-alives on performance.
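The resource-allocation view of load balancing can be sketched in a few lines. Here's a hypothetical picker that routes each request to the backend reporting the most free capacity (the names and the load metric are assumptions for illustration, not mod_backhand's actual algorithm):

```python
def pick_backend(backends):
    """Choose the host with the most available capacity.

    `backends` maps host name -> current load (0.0 idle .. 1.0 saturated).
    This frames balancing as a resource-allocation decision rather than
    blind round-robin: the least-loaded peer wins each request.
    """
    return min(backends, key=backends.get)

hosts = {"web1": 0.72, "web2": 0.31, "web3": 0.55}
print(pick_backend(hosts))  # web2 has the most headroom
```

mod_backhand's real candidacy functions weigh multiple resources (CPU, memory, connection counts) shared among peers, but the decision shape is the same.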
Pownce is a new social messaging application competing micromessage to micromessage with the likes of Twitter and Jaiku. Still in closed beta, Pownce has generously shared some of what they've learned so far. Like going to a barrel tasting of a young wine and then tasting the same wine after some aging, I think what will be really interesting is to follow Pownce and compare the Pownce of today with the Pownce of tomorrow, after a few years spent in the barrel. What lessons lie in wait for Pownce as they grow? Site: http://www.pownce.com/
The article describes the basic architecture of a connection-oriented, NIO-based Java server. It takes a look at a preferred threading model and Java non-blocking I/O, and discusses the basic components of such a server.
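The core idea behind an NIO-style server is readiness notification: one thread multiplexes many connections instead of dedicating a thread to each. A minimal sketch of that event loop (shown here in Python's `selectors` module, which mirrors Java NIO's `Selector`; all names are my own):

```python
import selectors
import socket

def serve_once(server_sock, selector, max_iterations=10):
    """Run a few iterations of a single-threaded non-blocking event loop.

    One thread asks the selector which sockets are ready, then handles
    accepts and reads without ever blocking on any single connection.
    """
    for _ in range(max_iterations):
        for key, _mask in selector.select(timeout=0.1):
            if key.fileobj is server_sock:
                conn, _addr = server_sock.accept()        # new client
                conn.setblocking(False)
                selector.register(conn, selectors.EVENT_READ)
            else:
                data = key.fileobj.recv(4096)
                if data:
                    key.fileobj.sendall(b"echo:" + data)  # echo it back
                else:
                    selector.unregister(key.fileobj)      # client closed
                    key.fileobj.close()

# Usage: one loop, many sockets, no thread per connection.
sel = selectors.DefaultSelector()
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen()
server.setblocking(False)
sel.register(server, selectors.EVENT_READ)

client = socket.create_connection(server.getsockname())
client.sendall(b"hi")
serve_once(server, sel)
reply = client.recv(4096)
```

The Java version swaps in `Selector.select()` and `SocketChannel`, typically handing decoded requests off to a small worker thread pool, which is the threading model the article discusses.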
Wackamole is an application that helps make a cluster highly available. It manages a set of virtual IPs that should be available to the outside world at all times. Wackamole ensures that exactly one machine within the cluster is listening on each virtual IP address it manages; if it discovers that particular machines within the cluster are not alive, it almost immediately arranges for other machines to acquire those public IPs. At no time will more than one machine listen on any virtual IP. Wackamole also works toward a balanced distribution of IPs across the machines in the cluster it manages. There is no other software quite like Wackamole: it is unique in that it operates in a completely peer-to-peer mode within the cluster, whereas other products that provide the same high-availability guarantees use a "VIP" method. Wackamole runs as root and uses the membership notifications provided by the Spread toolkit to generate a consistent state that is agreed upon among all of the connected Wackamole instances. Wackamole is released under the CNDS Open Source License. Note: This post has been adapted from the linked-to web site.
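The invariant described above, every virtual IP owned by exactly one live node and spread evenly, can be sketched as a deterministic assignment over the agreed membership list. This is a simplification of what Wackamole actually does; the function and names are hypothetical:

```python
def assign_vips(live_nodes, vips):
    """Deterministically map each virtual IP to exactly one live node.

    Every peer computes the same answer from the same agreed membership
    list (in Wackamole, that agreement comes from Spread's membership
    notifications), so no central coordinator is needed. Round-robin
    over the sorted node list keeps the distribution balanced.
    """
    nodes = sorted(live_nodes)
    return {vip: nodes[i % len(nodes)] for i, vip in enumerate(sorted(vips))}

vips = ["10.0.0.1", "10.0.0.2", "10.0.0.3", "10.0.0.4"]
before = assign_vips({"nodeA", "nodeB"}, vips)
after = assign_vips({"nodeB"}, vips)   # nodeA died: nodeB takes over its VIPs
```

Because the mapping is a pure function of the membership view, a node failure just shrinks the input and every survivor immediately agrees on the new owner of each orphaned IP.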
New update: Parascale’s CTO on what’s different about Parascale. Let's say you have gigglebytes of data to store and you aren't sure you want to use a CDN. Amazon's S3 doesn't excite you. And you aren't quite ready to join the grid nation. You want to keep it all in house. Wouldn't it be nice to have something like the Google File System you could use to create a unified file system out of all the disks sitting on all your nodes? According to Robin Harris, a.k.a. StorageMojo (a great blog BTW), you can now have your own GFS: Parascale launches Google-like storage software. Parascale calls their software a Virtual Storage Network (VSN). It "aggregates disks across commodity Linux x86 servers to deliver petabyte-scale file storage. With features such as automated, transparent file replication and file migration, Parascale eliminates storage hotspots and delivers massive read/write bandwidth." Why should you care? I don't know about you, but the "storage problem" is one of the most frustrating parts of building websites. There's never a good answer that is affordable. Should you build a SAN or a NAS? How do you make it redundant? How do you make it perform? How do you back it up? How do you grow it without a defense-appropriations-sized budget? Should you use RAID? Which level and where for what reason? Should you use SCSI, iSCSI, SAS, SATA, or alpha beta? Which vendor should you use? There are so many conflicting opinions about everything. It's all a confusing mess to me. So I like the simplicity of buying commodity nodes with just a bunch of disks attached. But the question has always been how do you turn all those disks into a unified storage system without writing a ton of software on top? Harris says this is what Parascale has done for you:
VSN, like GFS, builds availability and scalability around low-cost servers and disks. NAS appliances rely on costly low-volume boxes that are closed and don't scale. GFS has been deployed in production clusters of over 5,000 servers, proving the scalability of the architecture. Fast, reliable, low-cost and massively scalable storage powers the growth of new applications like Web 2.0, video-on-demand, and hi-resolution image archiving. Parascale is the first of a new generation of software-only storage solutions.

They make a big deal out of it being a software-only system. Harris says why this is a good thing:
I like software-based systems because hardware is a commodity. When you create custom hardware you also create low-volume, high-cost components whose economics go from bad to worse. If you *need* to do it, then go for it. But data is getting cooler and the requirement for specialized high-performance hardware is shrinking relative to the market.

Other systems use an appliance model. Appliances can add a lot of value, but they are also a way of monetizing you. A software system on commodity hardware has the potential to give good value. Will it? I didn't see pricing so it's hard to tell. Even odder is their pricing model: you lease the software per year, per disk spindle. Do you have any idea how much this will cost? Neither do I. It sounds like it could be horribly expensive or really reasonable. We'll have to see. Another thing that bothers me is that you can't run a database on top of their file system. This means I need an entire separate storage system for my database. You can run a database on a NAS or SAN, so this is a definite disadvantage. Anyway, it's just another interesting option to consider when architecting your website.