Entries in scalable (8)

Wednesday
Jul 9, 2014

Using SSD as a Foundation for New Generations of Flash Databases - Nati Shalom

“You just can't have it all” is a phrase that most of us are accustomed to hearing, and one that many still believe to be true when discussing the speed, scale and cost of processing data. Reaching high-speed data processing requires using more memory resources, which increases cost, because memory, on average, tends to be more expensive than commodity disk drives. The idea that data systems cannot reliably provide you with both memory and fast access, not to mention at the right cost, has long been debated, though the notion of such limitations was cemented by computer scientist Eric Brewer, who introduced us to the CAP theorem.

The CAP Theorem and Limitations for Distributed Computer Systems

Click to read more ...

Thursday
Sep 10, 2009

Building Scalable Databases: Denormalization, the NoSQL Movement and Digg

Database normalization is a technique for designing relational database schemas that ensures the data is optimal for ad-hoc querying and that modifications such as deletion or insertion of data do not lead to data inconsistency. Database denormalization is the process of optimizing your database for reads by creating redundant data. A consequence of denormalization is that insertions or deletions can cause data inconsistency if they are not uniformly applied to all redundant copies of the data within the database.
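
Below is a minimal sketch, in Python and with hypothetical user/comment data, of the trade-off described above: a username is copied onto every comment row so reads avoid a join, and the write path must therefore touch every redundant copy or the copies drift apart.

```python
# Denormalization sketch: the username is duplicated onto every comment row
# so the read path needs no join back to the users table; the cost is that
# renaming a user must update every redundant copy or the data goes stale.

users = {1: {"name": "alice"}}
comments = [
    {"id": 10, "user_id": 1, "user_name": "alice", "text": "first!"},
    {"id": 11, "user_id": 1, "user_name": "alice", "text": "nice post"},
]

def read_comment(comment_id):
    """Read path: one lookup, no join required."""
    return next(c for c in comments if c["id"] == comment_id)

def rename_user(user_id, new_name):
    """Write path: must update the source row AND every redundant copy."""
    users[user_id]["name"] = new_name
    for c in comments:
        if c["user_id"] == user_id:
            c["user_name"] = new_name  # skip this loop and reads go inconsistent

rename_user(1, "alice_2024")
assert read_comment(10)["user_name"] == "alice_2024"
```

Miss one copy on the write path and the database answers queries inconsistently, which is exactly the risk denormalization accepts in exchange for cheaper reads.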

Read more on Carnage4life blog...

Thursday
Jul 16, 2009

Scalable Web Architectures and Application State

In this article we follow a hypothetical programmer, Damian, on his quest to make his web application scalable.

Read the full article on Bytepawn

Wednesday
Jun 10, 2009

Hive - A Petabyte Scale Data Warehouse using Hadoop

This post about using Hive and Hadoop for analytics comes straight from Facebook engineers.

Scalable analysis on large data sets has been core to the functions of a number of teams at Facebook - both engineering and non-engineering. Apart from ad hoc analysis and business intelligence applications used by analysts across the company, a number of Facebook products are also based on analytics.

These products range from simple reporting applications like Insights for the Facebook Ad Network, to more advanced kinds such as Facebook's Lexicon product.

As a result, a flexible infrastructure that caters to the needs of these diverse applications and users, and that also scales up in a cost-effective manner with the ever-increasing amounts of data being generated on Facebook, is critical. Hive and Hadoop are the technologies that we have used to address these requirements at Facebook.

Read the rest of the article on Engineering @ Facebook's Notes page

Monday
Apr 13, 2009

High Performance Web Pages – Real World Examples: Netflix Case Study

This case study describes how Netflix deals with high load on its movie rental website.
It was written by Bill Scott in the fall of 2008.

Read or download the PDF file here

Click to read more ...

Sunday
Dec 28, 2008

How to Organize a Database Table’s Keys for Scalability

The key (no pun intended) to understanding how to organize your dataset is to think of each shard not as an individual database, but as part of one large, singular database. Just as in a normal single-server database setup, where you have a unique key for each row within a table, each row key within each individual shard must be unique across the whole dataset partitioned over all shards. There are a few different ways to accomplish uniqueness of row keys across a shard cluster. Each has its pros and cons, and the one chosen should be specific to the problems you're trying to solve.
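
As one illustration (an assumed scheme, not necessarily the one the article describes), here is a short Python sketch of the interleaved-sequence approach to cluster-wide unique keys: each shard starts its counter at its own shard ID and increments by the total number of shards, so no two shards can ever generate the same row key.

```python
# Interleaved auto-increment sketch: shard 0 issues keys 0, 4, 8, ...,
# shard 1 issues 1, 5, 9, ..., and so on, so every row key is unique
# across the whole cluster without any coordination between shards.

NUM_SHARDS = 4

class Shard:
    def __init__(self, shard_id):
        self.shard_id = shard_id
        self._next = shard_id          # start at the shard's own offset
        self.rows = {}

    def insert(self, row):
        key = self._next
        self._next += NUM_SHARDS       # step by the total shard count
        self.rows[key] = row
        return key

shards = [Shard(i) for i in range(NUM_SHARDS)]
keys = [shards[i % NUM_SHARDS].insert({"n": i}) for i in range(8)]
assert len(set(keys)) == len(keys)     # unique across all shards
```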

Click to read more ...

Sunday
Dec 21, 2008

The I.H.S.D.F. Theorem: A Proposed Theorem for the Trade-offs in Horizontally Scalable Systems

Successful software design is all about trade-offs. In the typical (if there is such a thing) distributed system, recognizing the importance of trade-offs within the design of your architecture is integral to the success of your system. Despite this reality, I see, time and time again, developers choosing a particular solution based on an ill-placed belief that their solution is a "silver bullet" that conquers all, despite the inevitable occurrence of changing requirements. Regardless of the reasons behind this phenomenon, I'd like to outline a few of the methods I use to ensure that I'm making good scalable decisions without losing sight of the trade-offs that accompany them. I'd also like to compile (pun intended) the issues at hand by formulating a simple theorem that we can use to describe this oft-occurring situation.

Click to read more ...

Tuesday
Jan 29, 2008

When things aren't scalable

OK, I know this site is for scalable web site design. But as there aren't any sites I can find that cover graceful failure under "slashdotted"-like pressure, I'll ask here. Does anyone have a sensible way, once you have a web application that either won't scale or can't scale, to give some users a good, consistent experience and bounce other users to a busy-site page?

I have seen sites do this to varying degrees, some of which work better than others, but no explanations beyond simply bouncing requests to a "we're busy" page server when you have more than a given number of connections. This is obviously useless, as a web page likely requires multiple connections (ignoring keep-alive, pipelining, etc.) to render completely. The normal problem is users getting a page but not the "furniture" for that page, like images or CSS. Other problems are having to wait ages to get the busy page, or the site being slow even if you do "get in". And some sites let a user "in" and then, as they browse around, suddenly bounce them out to the busy page.

Obviously, not being the developer for the sites I deal with (I am an infrastructure bod), I can't solve the problem where it should have been pre-emptively solved. That is to say, I can't write the code to be scalable, or re-write the code to do some simple session filtering or the like (and not being a developer, I get dirty looks when I point developers at information like your site ... I can hear them thinking "how dare you suggest I don't know how to code a web site, you lowly infrastructure cretin"). Before developers lynch me online, I should point out that sometimes the cause of not being able to scale a site is that I can't get new hardware in quickly enough, but then who knows when you will get slashdotted, right? So my question applies even when a developer of genius-level brilliance has built an unsurpassably scalable web site for me to run the infrastructure for.

My best guess so far is using something like HAProxy to load balance sessions, and then use its more advanced total-session-count and cookie-issuing abilities to track users and bounce some at a given "heavy load" point. This isn't ideal, as the heavy-load point would have to be based on connection counts rather than server load or server response times, but it's the best I can come up with so far.

Also, having mentioned brilliant developers writing great sites not always making my question redundant, could I ask: do people normally think about coping with overload when designing scalable solutions? Surely they should, but I don't see much talk about it. Couldn't a simple Java filter, or the equivalent for other platforms, be built into applications? It'd be nice to have a site that not only scales, but "is nice" while waiting for the infrastructure it runs on to be scaled, which could be several days when you have to purchase new hardware.
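
For what the question is asking, here is a rough sketch of such an in-application overload filter, written as Python WSGI middleware as a stand-in for the "simple Java filter" idea. The connection limit, the busy page, and the counting strategy are assumptions; a real deployment might key admission off server load or response times instead of a bare in-flight request counter.

```python
import threading

MAX_IN_FLIGHT = 100  # assumed capacity: tune to what the backend can actually serve

class OverloadFilter:
    """Bounce requests to a cheap busy page once too many are in flight."""

    def __init__(self, app, limit=MAX_IN_FLIGHT):
        self.app = app
        self.limit = limit
        self.in_flight = 0
        self.lock = threading.Lock()

    def __call__(self, environ, start_response):
        with self.lock:
            if self.in_flight >= self.limit:
                # Over the threshold: answer immediately with a static busy page
                # instead of letting the request queue up and time out.
                start_response("503 Service Unavailable",
                               [("Content-Type", "text/html"),
                                ("Retry-After", "30")])
                return [b"<html><body>We're busy - please try again shortly.</body></html>"]
            self.in_flight += 1
        try:
            # Under the threshold: hand the request to the real application.
            return self.app(environ, start_response)
        finally:
            with self.lock:
                self.in_flight -= 1
```

Wrapping the application this way at least answers overload with a fast, consistent busy page rather than a timeout, though, as noted above, a connection count is a crude proxy for actual server health.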

Click to read more ...