Google and Netflix Strategy: Use Partial Responses to Reduce Request Sizes
Wednesday, March 9, 2011 at 11:31AM
HighScalability Team in Strategy, netflix

This strategy reduces the amount of protocol data in each packet by sending only the attributes that are actually needed. Google calls this Partial Response and Partial Update.

Netflix posted about adopting this strategy in their recent Netflix API redesign. We've seen previously how Netflix improved performance by creating less chatty protocols.

As a consequence of those less chatty protocols, packet sizes rise as more data is stuffed into each packet to reduce the number of round trips. But we don't like large packets either (memory usage and packet-processing overhead), so we have to think of creative ways to shrink them back down.

The change Netflix is making is to conceptualize their API as a database. What does this mean?

A partial response is like a SQL select statement where you specify only the fields you want back. Only the attributes of interest are requested. Previously all fields for objects were returned, even if the client didn't need them. So the goal is to reduce payload sizes by being more selective about what data is returned.

An example Google uses is:

GET /myFeed?fields=id,entry(author)

The fields parameter selects the attributes to return out of a much larger feeds resource.
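As a rough sketch of what that request might look like from a client, here it is in Python using the requests library; the endpoint URL is invented for illustration, and the fields parameter is simply carried on the query string:

    import requests

    # Ask for just the feed id and each entry's author; every other
    # attribute is left out of the response. The URL is hypothetical.
    resp = requests.get(
        "https://api.example.com/myFeed",
        params={"fields": "id,entry(author)"},
    )
    print(resp.json())  # e.g. {"id": "...", "entry": [{"author": {...}}, ...]}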

A partial update works similarly: you send only the fields you want to update.
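A minimal sketch of the update side, assuming a JSON resource that accepts PATCH (the endpoint and field names are made up for illustration):

    import requests

    # Send only the field being changed, not the whole user object.
    requests.patch(
        "https://api.example.com/users/42",
        json={"preferences": {"maturity_level": "PG-13"}},
    )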

Synchronization Issues?

Clearly this will reduce the number of attributes in flight, but are there any problems with this strategy? A major problem I've experienced when using partial data is synchronization.

Thinking of an API as a database moves the problem into the general class of keeping two distributed models in sync while both sides are changing and the connection between them is unreliable. Now each model receives only partial data; if any data is lost or retransmitted between requests, the resources drift out of sync, which can lead to integrity problems.

A user account model on the client side, for example, could ask for the user's preferences just once. Those preferences could then change via another client or some backend system, and the first client would never pick up the changes; during that interval it could be making a lot of bad choices. Then the user could change those preferences on the client and overwrite any updates that have happened since the last request. If you are dealing with alarms and alerts this all gets a lot worse. With a state synchronization model, where the entire resource is returned, the window for these errors is much smaller because updated preferences are always being returned.
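To make that failure mode concrete, here is a hypothetical sequence (URLs and field names invented) showing how a stale partial copy silently clobbers someone else's update:

    import requests

    base = "https://api.example.com/users/42"

    # Client A fetches only the preferences, once, and caches them.
    prefs = requests.get(base, params={"fields": "preferences"}).json()["preferences"]

    # ... meanwhile another client or a backend job updates the preferences ...

    # Client A edits its stale local copy and writes it back, overwriting
    # whatever changed on the server since its one and only read.
    prefs["subtitles"] = "on"
    requests.patch(base, json={"preferences": prefs})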

This kind of problem brings up an entire rat's nest of workarounds, like doing a complete resource sync before writes, but the system gets a lot more complicated. Another direction to go is to drop responses entirely. In a database sync model there is really no need for direct responses anymore. What's needed is for all aggregated changes to sync back to clients, so each client can bring its client-side model back in sync with the Netflix model.
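The "complete resource sync before writes" workaround might look something like the following naive sketch (again with invented names); it narrows the race window but doesn't eliminate it:

    import requests

    base = "https://api.example.com/users/42"

    # Pull the full, current resource right before writing...
    fresh = requests.get(base).json()
    # ...merge the local change into that fresh copy...
    fresh["preferences"]["subtitles"] = "on"
    # ...and write the whole thing back.
    requests.put(base, json=fresh)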

Just some things to think about if you are considering this approach.
