Friday
Jun062008

Economies of Non-Scale

Scalability forces us to think differently. What worked on a small scale doesn't always work on a large scale -- and costs are no different. If 90% of our application is free of contention, and only 10% is spent on a shared resource, we will need to grow our compute resources by a factor of 100 to scale by a factor of 10! Another important thing to note is that 10x, in this case, is the limit of our ability to scale, even if more resources are added.

1. The cost of non-linearly scalable applications grows exponentially with the demand for more scale.
2. Non-linearly scalable applications have an absolute limit of scalability. According to Amdahl's Law, with 10% contention the maximum scaling limit is 10. With 40% contention, the maximum scaling limit is 2.5 -- no matter how many hardware resources we throw at the problem.

This post discusses in further detail how to measure the true cost of non-linearly scalable systems and suggests a model for reducing that cost significantly.
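The limits quoted above fall straight out of Amdahl's Law. A minimal sketch (function names are illustrative, not from the post):

```python
def amdahl_speedup(serial_fraction, n):
    """Speedup with n processors when serial_fraction of the work
    is spent on the contended (serial) resource -- Amdahl's Law."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n)

def amdahl_limit(serial_fraction):
    """Asymptotic speedup limit as n grows without bound."""
    return 1.0 / serial_fraction

# 10% contention caps scaling at 10x; 40% caps it at 2.5x,
# no matter how much hardware is added.
print(amdahl_limit(0.1))   # 10x ceiling
print(amdahl_limit(0.4))   # 2.5x ceiling
print(amdahl_speedup(0.1, 100))  # even 100 machines stay below the 10x ceiling
```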

Click to read more ...

Wednesday
Jun042008

LinkedIn Architecture

LinkedIn is the largest professional networking site in the world. LinkedIn employees presented two sessions about their server architecture at JavaOne 2008. This post contains a summary of these presentations. Key topics include:

  • Up-to-date statistics about the LinkedIn user base and activity level
  • The evolution of the LinkedIn architecture, from 2003 to 2008
  • "The Cloud", the specialized server that maintains the LinkedIn network graph
  • Their communication architecture

Click to read more ...

Monday
Jun022008

Total Cost of Ownership for different web development frameworks

I would like to compile a comparison matrix on the total cost of ownership for .NET, Java, LAMP & Rails. Where should I start? Has anyone seen or know of a recent study on this subject?

Click to read more ...

Saturday
May312008

memcached and Storage of Friend list

My first post, please be gentle. I know it is long. You are all like doctors - the more info, the better the diagnosis.

What is the best way to store a list of all of your friends in the memcached cache (a simple boolean saying "yes this user is your friend", or "no")? Think Robert Scoble (26,000+ "friends") on Twitter.com. He views a list of ALL existing users, and in this list, his friends are highlighted. I came up with 4 possible methods:

  • Store in memcache as an array: a list of all the "yes" friend IDs.
  • Store your friend IDs as individual elements.
  • Store as a hash of arrays based on the last 3 digits of the friend's ID - so up to 1000 arrays per user.
  • Store a comma-delimited string of IDs as one element.

I'm using the second one because I think it is faster to update. The single array or hash of arrays feels like too much overhead calculating and updating - and even just loading - to check for existence of a friend. The key is FRIEND[small ID#]_[big ID#]. The value is 1. This way there are no dupes (I add you as a friend, it always adds me as your friend; I remove you, you remove me). Store with it 2 additional flags: one denotes the start of entries, one denotes the end of entries. As friends are added, the end flag's position relative to new friends will become meaningless, but that is OK (I think). To see if someone is your friend, the system checks if both start and end flags exist. If both exist, it can check for existence of the friend ID - if it exists, then friend. The start flag is required: if the start flag is pushed out of cache, we must assume some friends were also pushed out. Currently, the system loads from the DB in a background daemon after you log in (if the two flags are not already set). Until the two flags are set, it does DB lookups. There is no timeout on the data in cache. Adding/removing friends to your account adds/removes to/from memcache - so, theoretically, it might never have to pre-load anything.

The downside of my method is that if the elements span multiple servers and one dies, you lose some of your friends (that's the upside of using arrays). I don't know how to resolve the case where the lost box didn't contain either of the flags - in that case, the user's info will NEVER get refreshed. This is my concern. Any ideas? Thanks so much!!!
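For readers following along, here is a rough sketch of the per-pair key scheme described in the question, using a plain dict to stand in for the memcached client (the key format and sentinel names are illustrative, not necessarily the poster's exact ones):

```python
# Dict stands in for memcached; keys could be evicted at any time.
cache = {}

def friend_key(a, b):
    """Order the two user IDs so (a, b) and (b, a) map to one key."""
    lo, hi = sorted((a, b))
    return f"FRIEND_{lo}_{hi}"

def add_friend(a, b):
    cache[friend_key(a, b)] = 1   # value is just a presence marker

def remove_friend(a, b):
    cache.pop(friend_key(a, b), None)

def is_friend(user, other):
    """True/False only if both sentinel flags for `user` survive in
    cache; None means 'unknown - fall back to the database'."""
    if f"START_{user}" in cache and f"END_{user}" in cache:
        return friend_key(user, other) in cache
    return None
```

The sentinel flags are the fragile part the poster worries about: if an eviction (or a dead server) takes a friend entry but leaves both flags, `is_friend` confidently returns a wrong `False`.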

Click to read more ...

Saturday
May312008

Biggest Under Reported Story: Google's BigTable Costs 10 Times Less than Amazon's SimpleDB

Why isn't Google's aggressive new database pricing strategy getting more pub? That's what Bill Katz, instigator of the GAE Meetup and prize-winning science fiction author, is wondering:

It's surprising that the blogosphere hasn't picked up the biggest difference in pricing: 
Google's datastore is less than a tenth of the price of Amazon's SimpleDB while offering a better API.
If money matters to you then the burn rate under GAE could be convincingly lower. Let's compare the numbers:

GAE pricing:

  • $0.10 - $0.12 per CPU core-hour
  • $0.15 - $0.18 per GB-month of storage
  • $0.11 - $0.13 per GB outgoing bandwidth
  • $0.09 - $0.11 per GB incoming bandwidth

SimpleDB pricing:

  • $0.14 per Amazon SimpleDB Machine Hour consumed
  • Structured Data Storage - $1.50 per GB-month
  • $0.100 per GB - all data transfer in
  • $0.170 per GB - first 10 TB / month data transfer out (more on the site)

Clearly Google priced their services to be competitive with Amazon. We may see a response from Amazon in the near future, but the database storage cost for GAE is dramatically cheaper at $0.15 - $0.18 per GB-month vs $1.50 per GB-month. Interestingly, Google's price is the same as Amazon's S3 (file storage) pricing. Google seems to think of database storage as more like file storage. That makes a certain amount of sense because BigTable is a layer on the Google File System, so file system pricing may be the more appropriate price reference point. On SimpleDB a 1 TB database costs $1,500/month, while on BigTable it costs in the $180/month range. As you grow into ever larger data sets the difference becomes even more compelling.

If you are a startup, your need for funding just dropped another notch. It's hard to self-finance many thousands of dollars a month, but hundreds of dollars is an easy nut to make. Still, Amazon's advantage is they support application clusters that can access the data for free within AWS. GAE excels at providing a scalable two-tier architecture for displaying web pages. Doing anything else with your data has to be done outside GAE, which kicks up your bandwidth costs considerably. How much obviously depends on your application. But if your web site is of the more vanilla variety the cost savings could be game changing.
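The $1,500 vs ~$180 comparison is simple to reproduce from the quoted rates (treating 1 TB as 1,000 GB, as the post does):

```python
# Storage rates quoted above, in $ per GB-month.
SIMPLEDB_STORAGE = 1.50
GAE_STORAGE_HIGH = 0.18   # top of the quoted $0.15 - $0.18 range

def monthly_storage_cost(gb, rate_per_gb_month):
    """Back-of-the-envelope monthly storage bill."""
    return gb * rate_per_gb_month

tb = 1000  # GB, rounding as the post does
print(monthly_storage_cost(tb, SIMPLEDB_STORAGE))   # SimpleDB: $1500/month
print(monthly_storage_cost(tb, GAE_STORAGE_HIGH))   # BigTable: $180/month
```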

Click to read more ...

Friday
May302008

Is "Scaling Engineer" a new job title?

Justin.tv is looking to hire a Scaling Engineer to help scale their video cluster, IRC server, web app, monitoring and search services. I've never seen this job title before, and a quick search showed only a few previous instances of it being used. Has anyone else seen Scaling Engineer as a job title before? It's a great idea. Scaling is certainly a worthy specialty of its own. Why, there's a difficult lingo, obscure tools, endlessly subtle concepts, a massive body of knowledge to master, and many competing religious factions. All a good start. Next I see a chain of Scalability Universities. Maybe use all those Starbucks that are closing down. Contact me for franchise opportunities :-)

Click to read more ...

Thursday
May292008

Amazon Improves Diagonal Scaling Support with High-CPU Instances

Now you can buy more cores on EC2 without adding more machines:

  • The High-CPU Medium Instance is billed at $0.20 (20 cents) per hour. It features 1.7 GB of memory, 5 EC2 Compute Units (2 virtual cores with 2.5 EC2 Compute Units each), and 350 GB of instance storage, all on a 32-bit platform.
  • The High-CPU Extra Large Instance is billed at $0.80 (80 cents) per hour. It features 7 GB of memory, 20 EC2 Compute Units (8 virtual cores with 2.5 EC2 Compute Units each), and 1,690 GB of instance storage, all on a 64-bit platform.

Diagonal Scaling is making a site faster by removing machines. More on this intriguing idea in Diagonal Scaling - Don't Forget to Scale Out AND Up.

Click to read more ...

Wednesday
May282008

Job queue and search engine
Hi, I want to implement a search engine with Lucene. To be scalable, I would like to execute search jobs asynchronously (with a job queuing system). But I don't know if it is a good design... Why? Search results can be large! (e.g. 100+ pages with 25 documents per page.) With an asynchronous system, I need to store the results for each search job. I can set a short expiration time (~5 min) for each search result, but it's still large. What do you think about it? Which design would you use for that? Thanks, Mat
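One common way to shrink the stored result set, sketched here under the question's own numbers (~5 min TTL, 25 documents per page): cache only the matching document IDs per job and page through them on demand, re-hydrating documents from the index. All names below are hypothetical, using an in-memory dict in place of a real job-queue/cache backend:

```python
import time
import uuid

RESULT_TTL = 300   # seconds (~5 min, as suggested above)
PAGE_SIZE = 25     # documents per page, per the question

# job_id -> (expires_at, list of matching doc IDs)
_results = {}

def submit_search(matching_doc_ids):
    """Store only the IDs a completed search job produced."""
    job_id = uuid.uuid4().hex
    _results[job_id] = (time.time() + RESULT_TTL, matching_doc_ids)
    return job_id

def get_page(job_id, page):
    """Return one page of doc IDs, or None if the job expired
    (caller should re-run the search)."""
    entry = _results.get(job_id)
    if entry is None or entry[0] < time.time():
        _results.pop(job_id, None)
        return None
    _, ids = entry
    return ids[page * PAGE_SIZE:(page + 1) * PAGE_SIZE]
```

Storing IDs instead of full documents keeps even a 100-page result set to a few kilobytes per job, which makes the short expiration window much less painful.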

Click to read more ...

Wednesday
May282008

Webinar: Designing and Implementing Scalable Applications with Memcached and MySQL

The following technical Webinar could be of interest to the community.

WHO:

  • Farhan "Frank" Mashraqi, Director of Business Operations and Technical Strategy, Fotolog Inc
  • Monty Taylor, Senior Consultant, Sun Microsystems
  • Jimmy Guerrero, Sr Product Marketing Manager, Sun Microsystems - Database Group

WHAT:

  • Designing and Implementing Scalable Applications with Memcached and MySQL web presentation.

WHEN:

  • Thursday, May 29, 2008, 10:00 am PST, 1:00 pm EST, 18:00 GMT
  • The presentation will be approximately 45 minutes long followed by Q&A.

Check out the details here!

Click to read more ...

Tuesday
May272008

Scalable virus scanning for web-applications

Hi, we're looking for a highly scalable way of scanning documents being uploaded to and downloaded from our web application. I believe services like Gmail and Hotmail are using bespoke solutions from companies like Trend, but are there some quality "off the shelf" products out there that can easily be scaled out and have a "loose" API (HTTP-based) for application integration? Once again, thanks for any input.

Click to read more ...