One of the challenges in processing data is that data often arrives much faster than we can process it. This problem becomes even more pronounced in the context of Big Data, where the volume of data keeps growing and, with it, the demand for more insights and therefore for more complex processing.
Batch Processing to the Rescue
Hadoop was designed to deal with this challenge in the following ways:
1. Use a distributed file system: This enables us to spread the load and grow our system as needed.
2. Optimize for write speed: The Hadoop architecture was designed so that writes are first logged and only processed later, which keeps write speeds fairly fast even when processing lags behind.
3. Use batch processing (Map/Reduce) to balance the rate of the incoming data feeds with the rate at which we can process them (a minimal example is sketched below).
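To make the batch model concrete, here is a minimal Map/Reduce job in the style of the classic Hadoop word-count example. The input and output paths are placeholders; nothing runs until the whole batch is submitted, which is exactly the property the rest of this post examines:

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Map phase: runs in parallel over blocks of the distributed file
  // system, emitting (word, 1) for every token it sees.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reduce phase: after the shuffle, sums the counts for each word.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));   // logged input, e.g. an HDFS directory
    FileOutputFormat.setOutputPath(job, new Path(args[1])); // results appear only when the batch completes
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```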
Batch Processing Challenges
The challenge with batch processing is that it assumes the feeds come in bursts. If our data feeds arrive on a continuous basis, the assumptions behind batch processing, and the architecture built on them, start to break down.
If we increase the batch window, the result is higher latency between the time the data arrives and the time it actually shows up in our reports and insights. Moreover, the number of hours in a day is finite; in many systems the batch runs on a daily basis, on the assumption that most of the processing can be done during off-peak hours. But as the volume gets bigger, the time it takes to process the data gets longer, until it reaches the limit of the hours in a day, and from then on we face a continuously growing backlog. In addition, if we experience a failure during processing, we might not have enough time to re-process.
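The arithmetic behind that backlog is easy to sketch. Every number below is invented purely for illustration: once the daily volume outgrows what a fixed nightly window can drain, the leftover compounds day after day:

```java
// Illustrative only: all rates, window sizes and growth figures are made up.
public class BatchBacklog {
  public static void main(String[] args) {
    double windowHours = 8.0;        // assumed off-peak batch window per day
    double processRatePerHour = 100; // assumed GB/hour the cluster can process
    double dailyVolume = 700;        // assumed GB arriving on day 1
    double growthPerDay = 1.05;      // assumed 5% daily growth in volume
    double backlog = 0;

    for (int day = 1; day <= 10; day++) {
      double capacity = windowHours * processRatePerHour; // GB we can clear tonight
      backlog = Math.max(0, backlog + dailyVolume - capacity);
      System.out.printf("day %2d: arrived %.0f GB, capacity %.0f GB, backlog %.0f GB%n",
          day, dailyVolume, capacity, backlog);
      dailyVolume *= growthPerDay;
    }
  }
}
```

With these made-up numbers the window absorbs the load for the first few days; from day 4 onward the backlog never returns to zero and grows every day.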
Making Hadoop Run Faster
We can make our Hadoop system run faster by pre-processing some of the work before it gets into our Hadoop system. We can also move the types of workload for which batch processing isn't a good fit out of the Hadoop Map/Reduce system and use Stream Processing, as Google did.
Speed Things Up Through Stream-Based Processing
The concept of stream-based processing is fairly simple. Instead of logging the data first and then processing it, we can process it as it comes in.
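As a minimal sketch of the idea in plain Java (no particular streaming framework is assumed, and an "event" here is simply a line of input): instead of appending events to a log for a later batch job, we update the aggregate the moment each event arrives, so the result is always current:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.HashMap;
import java.util.Map;

// Minimal stream-based processing: update the aggregate as each event
// arrives instead of logging it first and counting it later in a batch.
public class StreamingCounter {
  public static void main(String[] args) throws IOException {
    Map<String, Long> counts = new HashMap<>();
    BufferedReader in = new BufferedReader(new InputStreamReader(System.in));

    String event; // one line of input stands in for one incoming event
    while ((event = in.readLine()) != null) {
      // Process-as-it-arrives: one cheap update per event...
      counts.merge(event, 1L, Long::sum);
      // ...and the running result is available immediately.
      System.out.printf("%s -> %d (running total)%n", event, counts.get(event));
    }
  }
}
```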
A good analogy to explain the difference is a manufacturing pipeline. Think about a car manufacturing pipeline: compare a process in which all the raw parts are shipped to the factory and every car is assembled there piece by piece, with one in which each component is pre-packaged at the parts manufacturer and only the pre-packaged components are sent to the assembly line. Which method is faster?
Data processing is just like any other pipeline. Putting stream-based processing at the front is analogous to pre-packaging our parts before they get to the assembly line, which in our case is the Hadoop batch processing system.
As in manufacturing, even if we pre-package the parts at the manufacturer we still need an assembly line to put all the parts together. In the same way, stream-based processing is not meant to replace our Hadoop system, but rather to reduce the amount of work that the system needs to deal with, and to make the work that does go into the Hadoop process easier, and thus faster, to process.
In-memory stream processing can make a good stream processing system; as Curt Monash points out in his research, traditional databases will eventually end up in RAM. An example of how this can work in the context of real-time analytics for Big Data is provided in this case study, where we demonstrate the processing of Twitter feeds using stream-based processing that then feeds a Big Data database serving the historical aggregated view, as described in the diagram below.
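As a rough sketch of the shape of such a pipeline, hashtag counts are aggregated in memory as tweets arrive and the aggregates are periodically flushed to the store that serves the historical view. The tweet source (a queue), the HistoryStore interface, and the flush-every-N policy below are all hypothetical stand-ins, not a real Twitter or database API:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical sketch of a stream-then-store pipeline: aggregate in memory
// as tweets arrive, periodically snapshot the aggregates into a Big Data
// store that serves the historical view.
public class TweetAggregator {

  /** Stand-in for whatever Big Data database holds the historical view. */
  interface HistoryStore {
    void writeAggregates(Map<String, Long> hashtagCounts);
  }

  private final Map<String, Long> counts = new HashMap<>();
  private final HistoryStore store;
  private final int flushEvery; // flush to the store every N tweets (illustrative policy)
  private int seen = 0;

  TweetAggregator(HistoryStore store, int flushEvery) {
    this.store = store;
    this.flushEvery = flushEvery;
  }

  // Called for every tweet as it arrives: update the in-memory aggregates
  // immediately instead of logging the raw tweet for a later batch job.
  void onTweet(String text) {
    for (String token : text.split("\\s+")) {
      if (token.startsWith("#")) {
        counts.merge(token.toLowerCase(), 1L, Long::sum);
      }
    }
    if (++seen % flushEvery == 0) {
      store.writeAggregates(new HashMap<>(counts)); // snapshot for the historical view
    }
  }

  public static void main(String[] args) throws InterruptedException {
    // A queue stands in for the live Twitter feed in this sketch.
    BlockingQueue<String> feed = new LinkedBlockingQueue<>();
    feed.put("#bigdata is moving to #streaming");
    feed.put("batch vs #streaming processing");

    TweetAggregator agg = new TweetAggregator(
        snapshot -> System.out.println("flush to store: " + snapshot), 2);
    while (!feed.isEmpty()) {
      agg.onTweet(feed.take());
    }
  }
}
```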
You can read the full details on how this can be done here.