Saturday
Dec 22, 2007

This was a porn/spam post

It seems anonymous users can edit old posts without any authentication. This post was loaded with spam/porn links. Now it is not. /anonymous


Friday
Dec 21, 2007

Strategy: Limit Result Sets

Release It! author Michael Nygard tells a tale of two web sites, both brought low by unexpectedly huge unbounded result sets that slowed their sites to the speed of a Christmas checkout line. I've committed this error more than a few times. During testing the result sets are often small, so you don't see problems. Or when a product is new you don't have a lot of data, so everything is fine until some magic line is crossed and you get that dreaded 2AM fix-it call.

My most embarrassing bug of this type caused a rather spectacular failure at a customer site: the variance in response times went out of spec and kicked in penalty clauses. What happened was the customer had a larger network than we could even test (customers always get the good stuff). I took a lock and went to get all the data. Because the result set was so much larger on their system, I held the lock for many more milliseconds than I should have. Unknown to me, a chunk of code on the critical path was also in the lock path, and all hell broke loose. I had to change the logic to process the result set in fixed size deterministic chunks, releasing locks as I went (a sketch of this kind of chunked processing follows the list below). I even had to measure CPU usage and back off after a certain amount of CPU was used. But all was well again. I then hunted down every other place I had made the same mistake. And there were a few. To solve this problem in general I developed an architecture supporting scheduling work by CPU usage.

A common theme in many of the profiles on this site is protecting your system from requests that can bring down the system. Mailinator has a lot of resource exhaustion problems and does a good job solving them. Ebay has an interesting strategy of doing as little work as possible in the database, which leads them to do joins in application space. That is exactly the opposite of this strategy's conclusion, but I think it may be going too far. With proper indexes, performing selects in the database to minimize the result sets would seem to be a win, as databases are good at this sort of thing. Yah, relational databases suck at doing top 10 type logic, so calculate that on the fly and cache it. How can you bound result sets?

  • Michael's strategy: make your apps paranoid about their data. If your app processes one record at a time, then looping through an entire result set might be OK, as long as you're not making a user wait while you do. But if your app turns rows into objects, then it had better be very selective about its SELECTs. The relationships might not be what you expect. The data producer might have changed in a surprising way, particularly if it's not under your control. Purging routines might not be in place, or might have broken. Definitely don't trust some other application or batch job to load your data in a safe way.
  • A diagnostic tactic is to benchmark all page response times and look for pages that take too long. This should be automated, emailing you or raising alerts in your dashboard. And because you logged everything, you can figure out why the response was slow and then take measures to fix it.
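As a concrete illustration of the chunked processing described above, here is a minimal sketch using keyset pagination so every query is bounded by a LIMIT and work is committed between batches. The events table, payload column, and handle() function are hypothetical placeholders; the pattern works with any DB-API style connection.

    import sqlite3  # stands in for any DB-API connection using ? placeholders

    BATCH_SIZE = 500  # hard upper bound on any single result set

    def handle(payload):
        pass  # hypothetical per-record work

    def process_in_chunks(conn):
        last_id = 0
        while True:
            # Keyset pagination: each query returns at most BATCH_SIZE rows,
            # so no single call can produce an unbounded result set.
            rows = conn.execute(
                "SELECT id, payload FROM events WHERE id > ? ORDER BY id LIMIT ?",
                (last_id, BATCH_SIZE),
            ).fetchall()
            if not rows:
                break
            for row_id, payload in rows:
                handle(payload)
            last_id = rows[-1][0]
            conn.commit()  # release locks and let other work run between batches

Paging by the last seen id rather than an OFFSET keeps each batch cheap even deep into the table, and committing per batch keeps lock hold times short and predictable.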


    Wednesday
    Dec 19, 2007

    How can I learn to scale my project?

    This is a question asked on the ycombinator list and there are some good responses. I gave a quick response, but I particularly like neilk's knock-it-out-of-the-park insightful answer:

  • Read Cal Henderson's book. (I'd add in Theo's book and Release It! too)
  • The center of your design should be the data store, not a process. You transition the data store from state to state, securely and reliably, in small increments.
  • Avoid globals and session state. The more "pure" your function is, the easier it will be to cache or partition.
  • Don't make your data store too smart. Calculations and renderings should happen in a separate, asynchronous process.
  • The data store should be able to handle lots of concurrent connections. Minimize locking. (Read about optimistic locking; a small sketch appears below.)
  • Protect your algorithm from the implementation of the data store, with a helper class or module or whatever. But don't (DO NOT) try to build a framework for any conceivable query. Just the ones your algorithm needs.

    Viewing an application as a series of state transitions instead of a blizzard of actions and events is a way underappreciated design perspective. This is one of the key design approaches for making robust embedded systems. A great paper talking about this sort of stuff is Mission Planning and Execution Within the Mission Data System - an effort to make engineering flight software more straightforward and less prone to error through the explicit modeling of spacecraft state. Another interesting paper is CLEaR: Closed Loop Execution and Recovery High-Level Onboard Autonomy for Rover Operations. In general I call these Fact Based Architectures. I'm really glad neilk brought it up.
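    As mentioned in the locking point above, here is a small sketch of optimistic locking: each row carries a version number and an update only succeeds if the version is unchanged since the read, so no long-held lock is needed. The accounts table and column names are hypothetical, and conn is any DB-API style connection.

        import sqlite3  # example DB-API module; any connection with ? placeholders works

        def update_balance(conn, account_id, new_balance):
            # Read the current version along with the data.
            row = conn.execute(
                "SELECT balance, version FROM accounts WHERE id = ?",
                (account_id,),
            ).fetchone()
            if row is None:
                raise KeyError(account_id)
            _, version = row

            # Write only if nobody else bumped the version in the meantime.
            cursor = conn.execute(
                "UPDATE accounts SET balance = ?, version = version + 1 "
                "WHERE id = ? AND version = ?",
                (new_balance, account_id, version),
            )
            conn.commit()
            if cursor.rowcount == 0:
                # Lost the race: re-read and retry, or report a conflict.
                raise RuntimeError("concurrent update detected, retry")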


    Friday
    Dec 14, 2007

    The Current Pros and Cons List for SimpleDB

    Not surprisingly, opinions on SimpleDB vary from "it sucks, don't take my database" to "it will change the world, who needs a database anyway?" From a quick survey of the blogosphere, here's where SimpleDB stands at the moment:

    SimpleDB Cons

  • No SLA. We don't know how reliable it will be, how fast it will be, or how consistent the performance will be.
  • Consistency constraints are relaxed. Reading data immediately after a write may not reflect the latest updates. To programmers used to transactions, this may be surprising, but many people think this is one of the tradeoffs that needs to be made to scale.
  • Database is a core competency. If you don't control your database you can't outcompete your competition.
  • When your database is out of your control you can't guarantee it will work properly. You can't create the proper indexes and other optimizations.
  • No join or IN operator. You'll need to do multiple client side calls to simulate joins, which will be slow.
  • No stored procedures, referential integrity, and other relational goodies. This is not a professional product.
  • Attribute size limited to 1024 bytes. It's not designed for content serving.
  • Latency from outside Amazon will be high.
  • Setting up and maintaining a database is cheap and easy these days, so why bother? It costs too much compared to running your own servers.
  • What happens when you need to super scale to very large datasets?
  • No API support from common languages like PHP, Ruby, etc.
  • All your existing code and infrastructure needs to be rewritten.
  • Not geographically distributed with nearest datacenter routing.
  • Queries are lexicographical, so you’ll need to store data so that it sorts correctly as strings. This means zero-padding your integers, adding positive offsets to negative integer sets, and converting dates into something like ISO 8601 (see the encoding sketch after this list).
  • Attribute values are typeless which could lead to a lot of typing related errors and inefficient queries.
  • The 10 GB maximum per domain is too limiting.
  • It's not Dynamo. Amazon is keeping the really good stuff to themselves.
  • Text searching is not supported. You'll need to construct your own fast search indexes.
  • Queries are limited to 5 seconds running time. It's only for getting and setting, nothing more SQLish.
  • No cloned APIs for unit testing. Need to be able to develop locally against other data stores.
  • Your data is under Amazon's control, so there could be security and privacy problems.
  • The XML based protocol unnecessarily increases overhead, latency, and cost.
  • Lock-in. If you decide to leave Amazon’s cloud, how do you move all your data and get a similar system up and working outside the cloud?
  • Open cash register. Since SDB charges per use, a malicious user can simply set up a loop to query your site, which costs you an unbounded amount of money.
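    Since comparisons are lexicographical (see the query point in the list above), numbers and dates have to be encoded so that string order matches natural order. A small sketch of the usual tricks; the padding width and offset below are arbitrary choices for illustration:

        from datetime import datetime, timezone

        PAD = 10                  # zero-padding width (hypothetical choice)
        OFFSET = 10 ** PAD // 2   # shift so negative integers sort correctly

        def encode_int(n: int) -> str:
            # Offset then zero-pad so lexicographic order matches numeric order.
            return str(n + OFFSET).zfill(PAD + 1)

        def encode_date(dt: datetime) -> str:
            # ISO 8601 in UTC sorts chronologically as a plain string.
            return dt.astimezone(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")

        assert encode_int(-5) < encode_int(3) < encode_int(42)
        assert encode_date(datetime(2007, 12, 14, tzinfo=timezone.utc)) \
             < encode_date(datetime(2007, 12, 21, tzinfo=timezone.utc))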

    SimpleDB Pros

  • SimpleDB is not a relational database. Relational databases are too complex and don't scale well. Keeping data access simple is a selling point, not a weakness.
  • Low setup costs and pay-as-you-go expansion make it perfect for startups. The price is reasonable given the functionality and the hands off admin.
  • Setting up and maintaining a highly available clustered database that is constantly growing is extremely difficult. Building your application on a building block that does all this for you adds a lot of value.
  • Setting up a database inside EC2 is a pain. This makes getting basic database functionality trivial. No need to worry about scaling, capacity planning, or partitioning.
  • It has a decent query language, which is unusual for this type of data store.
  • Data are stored across multiple nodes which supports parallel query execution.
  • It's built on Erlang and that's cool.
  • You don't need to seek funding to hire a database team and buy hardware.

    Depending on how you weight each factor, SimpleDB could be way behind or way ahead of other options. What's interesting is to see what people think is important. For many people the only real database is relational, and if it doesn't have transactions, joins, etc. it's not real. Databases, like beauty, seem to be in the eye of the beholder.


    Thursday
    Dec 13, 2007

    Amazon SimpleDB - Scalable Cloud Database

    Amazon has announced the limited beta of Amazon SimpleDB - a simple web services interface to create and store multiple data sets, query your data easily, and return the results. Together with the Simple Storage Service (S3), Elastic Compute Cloud (EC2) and other web services Amazon offers a complete utility computing platform. SimpleDB was the missing piece of AWS - the scalable structured database. Check out my blog entry: http://innowave.blogspot.com/2007/12/amazon-simpledb-scalable-cloud-database.html I was waiting for this one :-) Geekr


    Thursday
    Dec 13, 2007

    un-article: the setup behind microsoft.com

    On the blogs.technet.com article on microsoft.com's infrastructure: the article reads like a blatant ad for its own products and is light on the technical side. The juicy bits are here, so you know what the fuss is about:

    • Citrix NetScaler (= load balancer with various optimizations)
    • W2K8 + IIS7 and antivirus software on the webservers
    • 650 GB/day of IIS log files
    • 8-9 Gbit/s of traffic (unknown if CDNs are included)
    • Simple network filtering: stateless access lists blocking unwanted ports on the routers/switches (hence the debated "no firewalls" claim).
    Note that this information may not reflect present reality very well; the spokesman appears to be reciting others' words.


    Thursday
    Dec 13, 2007

    Is premature scalation a real disease?

    Update 3: InfoQ's Big Architecture Up Front - A Case of Premature Scalaculation? twines several different threads on the topic together into a fine noose. Update 2: Kevin says the biggest problem he sees with startups is that they need to scale their backend (no, the other one). Update: My bad. It's hard to sell scalability so just forget it.

    The premise of Startups and The Problem Of Premature Scalaculation and Don’t scale: 99.999% uptime is for Wal-Mart is that you shouldn't spend precious limited resources worrying about scaling before you've first implemented the functionality that will make you successful enough to have scaling problems in the first place. It's kind of an embodied life force model of system creation. Energy is scarce, so any parasites siphoning off energy must be hunted down and destroyed so the body has its best chance of survival. Is this really how it works? If I ever believed this I certainly don't believe it anymore.

    The world has changed, even since 2005. Thanks to many books and papers on how to scale, the knowledge of scaling isn't the scarce precious resource it once was. It's no longer knowledge tightly held by a cabal of experts until Nicolas Cage flies in and pries it out of their grasping desiccated fingers. Now any journeyman computerista can do a reasonable job at designing a scalable system. Not only has knowledge dissemination improved, but so have our tools. Drastically. At one time building a scalable system up front would have required buying and configuring a truck load of servers, building out a data center, configuring a spider's web of networks, and bootstrapping an equally nasty storage network. All extremely complicated and disaster prone. Now you can use services like Amazon's EC2/S3, 3tera's grid OS, or Joyent to cut significant parts of all that complexity out of the system.

    While most of us toil away in anonymity and scaling problems are just a fond dream, when the webosphere does find you it does so with a crush. With a little thinking ahead Blue Origin was able to handle 3.5 million requests and 758 GB in bandwidth in a single day using S3. Did that effort prevent other features from getting implemented? I seriously doubt it. Usually doing the right thing isn't harder if you know what the right thing to do is. And what if Blue Origin hadn't been able to scale? Could they have recovered from the lost opportunity of grabbing the iron while it's hot, when potential customers are interested? Ask Friendster.

    What do you think? Has most of the risk associated with up front scalability design been squeezed out? Is premature scalation still something to be avoided? Or have times changed, and does doing the simplest thing that could possibly work now include worrying about scaling up front?


    Wednesday
    Dec 12, 2007

    Oracle Can Do Read/Write Splitting Too

    People sometimes wonder why Oracle isn't mentioned on this site more. Maybe it will be now, as Michael Nygard reports Oracle 11g can do read/write splitting with their Active Data Guard product. Average replication latency was 1 second and it's accomplished with standard Oracle JDBC drivers. They report a 250% increase in transactions per second for the read-write service and a 110% improvement in tps for the read-only service. The new setup also changes the hardware architecture. They now recommend using a primary and multiple standby servers, a single controller per server, and a single set of disks in RAID 1. Previously the recommendation was to have a primary and secondary server with two controllers per server and a set of mirrored disks per controller. The changes increase performance, availability, and hardware utilization. They also have a useful looking best practices document for High Availability called Maximum Availability Architecture (MAA).
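    As a rough illustration of what read/write splitting looks like from the application side (a generic sketch, not the Active Data Guard mechanism itself), writes go to the primary while reads are spread across standbys, with an escape hatch for reads that cannot tolerate the roughly one second of replication lag reported above:

        import random

        class SplitRouter:
            """Route writes to the primary and reads to a standby (sketch)."""

            def __init__(self, primary, standbys):
                self.primary = primary     # DB-API connection to the read-write node
                self.standbys = standbys   # DB-API connections to read-only replicas

            def execute_write(self, sql, params=()):
                cur = self.primary.execute(sql, params)
                self.primary.commit()
                return cur

            def execute_read(self, sql, params=(), must_be_fresh=False):
                # Reads that cannot tolerate ~1s of replication lag use the primary.
                conn = self.primary if must_be_fresh else random.choice(self.standbys)
                return conn.execute(sql, params).fetchall()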


    Wednesday
    Dec 12, 2007

    Report from OpenSocial Meetup at Google

    Update: Facebook pulls a Microsoft and embraces and extends by opening their platform to other social sites like Bebo. Very smart and unexpected. More info at Facebook to let other sites access platform code.

    This month's regular Facebook Meetup was held at Google and the topic of the day was OpenSocial. For those of you with real lives, OpenSocial "provides a common set of APIs for social applications across multiple websites." Over 200 excited people, hoping to do very exciting things and dreaming of making an exciting pile of money, watched an OpenSocial presentation put on by a couple of appropriately knowledgeable evangelists. I could feel my social graph being more successfully monetized with each passing minute. Normally the meetings are much smaller, but Google puts on a very nice spread, so I think people may have shown up to dine :-) Or they could have shown up to learn why and how they should code to the new uber social API. By the looks of the full plates and the sounds of energetic chatter, it was likely a bit of both. The crowd seemed skeptical, yet interested. The Facebook world is somewhat self satisfied and that comfy world was being disturbed. It might get ugly, I thought, but unfortunately it stayed quite civil and informative. With my bread I had hoped for a bit of circus.

    My take on OpenSocial: code a social application once, run it anywhere. Code your social app using Google's gadget model and the social API and it will run on any conforming social network container. It's kind of like a concurrency model based on mobile threads instead of the more traditional message passing model. So your friend's profile app will work just as well on Ning as on Orkut. Interestingly, there's a layer of indirection: the social network container has to locally interpret what things like friends are. So your friends in SalesForce could mean people you've emailed once, and friends in Ning could mean people you've marked as friends. There's a fairly minimal API of verbs and nouns at this point, but that will undoubtedly grow. They are taking a "do the simplest thing" approach. Or they could have simply needed to get something out to compete with Facebook. Important features like a security model, authentication model, sharing model, and advanced data types are TBD. Lots of tricky things still have to be specified. How do you establish identity across services? Who can share what information? How do apps deal with different terms of service, and how do they deal with different social network models? OpenSocial is a group of companies, so you hear a lot of things like "we'll have to meet and decide that. Joe has a lot of good ideas on how that might be done." The same sort of stuff you hear with all the complex Java standards that everyone hates. Maybe some group will Spring into action and fix some of the problems that develop.

    What I don't quite understand is how social networks will distinguish themselves from each other with a common API. Using the standard, your app will run anywhere, so why should I choose a particular social graph provider? Services will have to differentiate by adding nonstandard features, which leads to a horribly complex mess of a system. They were already talking about using reflection so you could discover what capabilities a container had. Oh boy. Sounds like a hard road for developers.

    From a scalability POV you must still host your own applications, so that's no different from Facebook. If you get a million users overnight you have to figure out how to make them scale. On the bright side there was a properties-like data store you could use to store data in. The amount of data, types, query model, transaction model, locking model, SLAs, etc. seemed open, but not having to manage state is a big win. From a scalable development POV, I can't help but think the drive toward differentiation will require special coding for each target container and you'll have to pick just a few containers to develop for (think browser wars times 100), but we'll see.


    Tuesday
    Dec 11, 2007

    Hosting and CDN for startup video sharing site

    This question is for all the gurus here. Please help this novice. I am starting a video sharing site like YouTube in India. I want to offer the best quality possible, at minimum cost. Nothing new about it, right? :). I have done some research on the dedicated hosting services and CDN services available and I have some basic knowledge on these. Following are my requirements:

    1) My budget is $500 to $1000 per month for hosting (including CDN if and as applicable).
    2) I will need around 500GB of storage and 1TB per month of bandwidth in the first 2-3 months, and then about 10TB of storage and 5TB per month of bandwidth. And more, depending on how big it gets (I can afford more when it gets big).
    3) 90% of my viewers are in India. The other 10% are in the US and UK.

    Based on the above, could you please answer the following questions?

    1) Can I go with just a good dedicated server to start with and get a CDN service later on when the site gets big? Or do you think it's wise to start with a CDN service?
    2) Should I look for a server closer to India? They are pretty expensive in Asia. Should I look for one in Western Europe or at least the Western US? How big a difference does it make?
    3) Could you suggest the best dedicated hosting and CDN service based on my requirements?
    4) I can get unmetered bandwidth on a 100Mbps pipe for my budget. Do you think that will be fine to start with?
    5) Anything else I am missing? Also, could you please give any tips on how to minimize the bandwidth (buffering, lower bitrate etc.)?

    Thanks a lot for your suggestions!
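    For back-of-envelope planning, the transfer numbers above translate into a rough monthly view count at a given bitrate. The bitrate and average video length below are assumptions purely for illustration:

        # Rough capacity check for the stated 5 TB/month transfer budget.
        BITRATE_KBPS = 400          # assumed average encoding bitrate
        AVG_LENGTH_SEC = 240        # assumed average video length (4 minutes)
        MONTHLY_TRANSFER_TB = 5     # from the requirements above

        bytes_per_view = BITRATE_KBPS * 1000 / 8 * AVG_LENGTH_SEC   # ~12 MB per full view
        monthly_bytes = MONTHLY_TRANSFER_TB * 1000**4
        print(f"~{monthly_bytes / bytes_per_view:,.0f} full views per month")   # roughly 417,000

        # A fully used 100 Mbps pipe supports about this many concurrent streams:
        print(f"~{100_000 / BITRATE_KBPS:.0f} concurrent streams at {BITRATE_KBPS} kbps")

    Dropping the bitrate or the average clip length scales these numbers directly, which is why lower bitrate encoding is the usual first lever for stretching a bandwidth budget.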
