Thursday
Sep 10, 2009
Building Scalable Databases: Denormalization, the NoSQL Movement and Digg

Database normalization is a technique for designing relational database schemas that ensures the data is optimal for ad-hoc querying and that modifications such as insertion or deletion of data do not lead to data inconsistency. Database denormalization is the process of optimizing your database for reads by creating redundant data. A consequence of denormalization is that insertions or deletions can cause data inconsistency if they are not uniformly applied to all redundant copies of the data within the database.
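The trade-off can be shown concretely. Below is a minimal sketch using SQLite; the users/posts schema and column names are illustrative assumptions, not taken from the article:

```python
import sqlite3

# Hypothetical schema to illustrate normalization vs. denormalization.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Normalized: the author's name is stored in exactly one place.
cur.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, user_id INTEGER, title TEXT)")
cur.execute("INSERT INTO users VALUES (1, 'alice')")
cur.execute("INSERT INTO posts VALUES (10, 1, 'Hello')")

# Reading a post together with its author requires a join.
joined = cur.execute(
    "SELECT p.title, u.name FROM posts p JOIN users u ON p.user_id = u.id"
).fetchone()

# Denormalized: copy the name into posts so reads need no join.
cur.execute("ALTER TABLE posts ADD COLUMN author_name TEXT")
cur.execute(
    "UPDATE posts SET author_name ="
    " (SELECT name FROM users WHERE users.id = posts.user_id)"
)
denorm = cur.execute("SELECT title, author_name FROM posts").fetchone()

# The inconsistency risk: update only the users table, and the
# redundant copy in posts silently goes stale.
cur.execute("UPDATE users SET name = 'alicia' WHERE id = 1")
stale = cur.execute("SELECT author_name FROM posts WHERE id = 10").fetchone()

print(joined)  # ('Hello', 'alice')
print(denorm)  # ('Hello', 'alice')
print(stale)   # ('alice',) -- stale redundant copy
```

The final query shows exactly the failure mode described above: once the name exists in two places, an update applied to only one of them leaves the database inconsistent.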
Read more on Carnage4life blog...
Reader Comments (2)
NoSQL – the new wave against RDBMS
http://bigdatamatters.com/bigdatamatters/2009/07/nosql-vs-rdbms.html
We use an RDBMS for most things, but we have the databases broken out by application and then have a central tool for updating the sub-systems. I imagine we're only going to continue in this direction for a while and just add more caching to the front of each section. So far that has often helped us scale the best.
Personally I'm looking forward to doing more caching/tuning, not to mention process tiering, in order to make things even faster. I'm also hoping to find ways to get even better performance from our RDBMS. We already tune them pretty well.
---------------------------------------------------------------
http://blog.pe-ell.net http://wetnun.net
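The "caching in front of each section" approach the commenter describes is essentially the cache-aside pattern. A minimal sketch, assuming a plain dict as the cache and another dict standing in for the RDBMS (both are illustrative stand-ins, not the commenter's actual stack):

```python
# Cache-aside sketch: serve reads from a cache when possible, falling
# back to the database and populating the cache on a miss.
cache = {}
database = {"user:1": "alice"}  # stand-in for an RDBMS lookup

def get(key):
    # Hit: skip the database entirely.
    if key in cache:
        return cache[key]
    # Miss: read from the database, then cache for later reads.
    value = database.get(key)
    cache[key] = value
    return value

print(get("user:1"))  # first call hits the database
print(get("user:1"))  # second call is served from the cache
```

Note that this has the same inconsistency hazard as denormalization: a write that updates the database but not the cache leaves readers seeing stale data until the cached entry is invalidated or expires.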