Monday, April 4, 2016

How to Remove Duplicates in a Large Dataset Reducing Memory Requirements by 99%

This is a guest repost by Suresh Kondamudi from CleverTap.

Dealing with large datasets is often daunting. With limited computing resources, particularly memory, it can be challenging to perform even basic tasks like counting distinct elements, checking membership, filtering out duplicate elements, finding the minimum, maximum, or top-n elements, or performing set operations like union, intersection, and similarity.

Probabilistic Data Structures to the Rescue

Probabilistic data structures can come in pretty handy in these cases, in that they dramatically reduce memory requirements while still providing acceptable accuracy. Moreover, you get time efficiencies, as lookups (and adds) rely on multiple independent hash functions, which can be parallelized. We use structures like Bloom filters, MinHash, Count-min sketch, and HyperLogLog extensively to solve a variety of problems. One fairly straightforward example is presented below.

The Problem

We at CleverTap manage mobile push notifications for our customers, and one of the things we need to guard against is sending multiple notifications to the same user for the same campaign. Push notifications are routed to individual devices/users based on push notification tokens generated by the mobile platforms. Because of their size (anywhere from 32 bytes to 4 KB), it's non-performant for us to index push tokens or use them as the primary user key.

On certain mobile platforms, when a user uninstalls and subsequently re-installs the same app, we lose our primary user key and create a new user profile for that device. Typically, in that case, the mobile platform will generate a new push notification token for that user on the reinstall. However, that is not always guaranteed. So, in a small number of cases we can end up with multiple user records in our system having the same push notification token.

As a result, to prevent sending multiple notifications to the same user for the same campaign, we need to filter out a relatively small number of duplicate push tokens from a total dataset that runs from hundreds of millions to billions of records. To give you a sense of proportion, the memory required to hold just 100 million push tokens is 100M * 256 bytes ≈ 25 GB!

The Solution – Bloom filter

The idea is very simple.

  • Allocate a bit array of size m
  • Choose k independent hash functions h_i(x) whose range is [0 .. m-1]
  • For each data element, compute the k hashes and turn on the corresponding bits
  • For a membership query q, apply the k hashes and check whether all the corresponding bits are 'on'

Note that bits might be turned 'on' by hash collisions, leading to false positives, i.e., a non-existent element may be reported as present; the goal is to minimise this.
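To make the steps concrete, here is a minimal sketch of such a filter in Python (illustrative only, not CleverTap's production code; the class name and the double-hashing construction via hashlib are our assumptions):

```python
import hashlib

# Minimal Bloom filter sketch (illustrative only, not production code).
class BloomFilter:
    def __init__(self, m, k):
        self.m = m                           # number of bits in the array
        self.k = k                           # number of hash functions
        self.bits = bytearray((m + 7) // 8)  # bit array packed into bytes

    def _positions(self, item):
        # Derive k bit positions from two 64-bit digests via double hashing;
        # a real implementation would use a fast hash like MurmurHash (below).
        d = hashlib.sha256(item.encode("utf-8")).digest()
        h1 = int.from_bytes(d[:8], "big")
        h2 = int.from_bytes(d[8:16], "big")
        return [(h1 + i * h2) % self.m for i in range(self.k)]

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def might_contain(self, item):
        # False is always correct; True may be a false positive.
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))
```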

On Hash Functions

Hash functions for a Bloom filter should be independent and uniformly distributed. Cryptographic hashes like MD5 or SHA-1 are not good choices for performance reasons. Suitable fast hashes include MurmurHash, FNV hashes, and Jenkins hashes.

We use MurmurHash:

  • It’s fast – 10x faster than MD5
  • Good distribution – passes chi-squared test for uniformity
  • Avalanche effect – sensitive to even the slightest input changes
  • Independent enough
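
As a rough sketch of what this looks like in code (assuming the third-party mmh3 Python bindings for MurmurHash3; the bit_positions helper and the example token are hypothetical):

```python
import mmh3  # pip install mmh3 -- MurmurHash3 bindings

def bit_positions(token, k, m):
    # One MurmurHash call per seed yields k independent-enough hash values,
    # each mapped into the bit-array range [0, m-1].
    return [mmh3.hash(token, seed) % m for seed in range(k)]

# Example: 10 bit positions for one push token; m anticipates the ~171 MB
# sizing derived in the next section.
print(bit_positions("example-push-token", k=10, m=1_437_758_757))
```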

Sizing the Bloom filter

Sizing the bit array involves choosing the optimal number of hash functions to minimise the false-positive probability.

With m bits, k hash functions, and n elements, the false-positive probability, i.e. the probability that all k corresponding bits are 'on' even though the element was never added, is

p = ( 1 - [ 1 - \frac{1}{m}]^{kn} )^k \approx ( 1 - e^{-\frac{kn}{m}})^k

For given m and n, the optimal k that minimises p satisfies

\frac{dp}{dk} = 0 \implies k = \frac{m}{n}\ln(2)

Substituting this optimal k back and solving for m in terms of the target false-positive probability p gives

m = -\frac{n\ln(p)}{(\ln 2)^2}

So, for 100 million push tokens with a 0.001 error probability:

m = -\frac{10^8 \times \ln(0.001)}{(\ln 2)^2} \approx 1.44 \times 10^9 \text{ bits} \approx 171 \text{ MB}
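
A quick back-of-the-envelope check of these numbers (a sketch; the variable names are ours):

```python
from math import log

n = 100_000_000   # push tokens
p = 0.001         # target false-positive probability

m_bits = -n * log(p) / (log(2) ** 2)   # optimal bit-array size, in bits
k = (m_bits / n) * log(2)              # optimal number of hash functions

print(f"m = {m_bits / 8 / 2**20:.0f} MB, k = {k:.0f}")   # m = 171 MB, k = 10
```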

This is a significant improvement over 25 GB. The improvement is real, but it comes at a cost: false positives. The goal is to minimise and control that cost by sizing the data structure properly. If your use case can tolerate false positives, this is a perfect way to beat the size-versus-scale tradeoff. In our case a false positive means "declaring a duplicate when it isn't one," which translates to "not sending a push notification to a user who qualified." At roughly 1 in 1,000 checks (the 0.001 error probability chosen above), this is perfectly acceptable for us.
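
Putting it together, the dedup pass can be sketched roughly like this (hypothetical code reusing the BloomFilter sketch from above; should_send is our name, not part of CleverTap's system):

```python
bf = BloomFilter(m=1_437_758_757, k=10)   # sized for ~100M tokens at p = 0.001

def should_send(push_token):
    # A false positive here means one qualified user is (wrongly) skipped.
    if bf.might_contain(push_token):
        return False   # token (probably) seen before -- treat as duplicate
    bf.add(push_token)
    return True        # first time we've seen this token
```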

 



Reader Comments (4)

Have you looked into higher efficiency alternatives? Cuckoo Filters (https://www.cs.cmu.edu/~dga/papers/cuckoo-conext2014.pdf) and TinySet (http://www.cs.technion.ac.il/users/wwwb/cgi-bin/tr-get.cgi/2015/CS/CS-2015-03.pdf) are reportedly very compact and perform well.

April 4, 2016 | Unregistered Commenter Ben

Hi, this is the best source for removing duplicates: https://blog.clevertap.com/how-to-remove-duplicates-in-large-datasets/

Thanks for the information.

April 8, 2016 | Unregistered Commenter Priya

Thanks for posting this. I have a question. In your scenario I assume you put the primary keys of already-processed user profiles into the Bloom filter. Before processing a new user profile you first check whether its primary key exists in the filter; if yes, you don't process it again, and if not, you process it. But because false positives are possible in a Bloom filter, you may skip processing a few user profiles (for example, for a user profile x the filter gives a false positive that x exists, so you assume it is already processed and skip it). Is that OK in your scenario?

April 13, 2016 | Unregistered Commenter Raghuram Murthy Petla

Thanks for your post! Which Big Data framework would you recommend: Hadoop or Spark? Isn't Hadoop too old to use?

February 25, 2019 | Registered Commenter Jelvix
