
Wednesday
Sep 28, 2016

How Uber Manages a Million Writes Per Second Using Mesos and Cassandra Across Multiple Datacenters 

If you are Uber and you need to store the location data that is sent out every 30 seconds by both driver and rider apps, what do you do? That’s a lot of real-time data that needs to be used in real time.

Uber’s solution is comprehensive. They built their own system that runs Cassandra on top of Mesos. It’s all explained in a good talk by Abhishek Verma, Software Engineer at Uber: Cassandra on Mesos Across Multiple Datacenters at Uber (slides).

Is this something you should do too? That’s an interesting thought that comes to mind when listening to Abhishek’s talk.

Developers have a lot of difficult choices to make these days. Should we go all in on the cloud? Which one? Isn’t it too expensive? Do we worry about lock-in? Or should we try to have it both ways and craft brew a hybrid architecture? Or should we just do it all ourselves for fear of being cloud shamed by our board for not reaching 50 percent gross margins?

Uber decided to build their own. Or rather, they decided to weld together their own system from two very capable open source components. What was needed was a way to make Cassandra and Mesos work together, and that’s what Uber built.

For Uber the decision is not all that hard. They are very well financed and have access to the top talent and resources needed to create, maintain, and update these kinds of complex systems.

Since Uber’s goal is for transportation to have 99.99% availability for everyone, everywhere, it makes sense to want to control your costs as you scale to infinity and beyond.

But as you listen to the talk you realize the staggering effort that goes into making these kinds of systems. Is this really something your average shop can do? No, not really. Keep this in mind if you are one of those cloud deniers who want everyone to build all their own code on top of the barest of bare metals.

Trading money for time is often a good deal. Trading money for skill is often absolutely necessary.

Given Uber’s goal of reliability, where out of 10,000 requests only one can fail, they need to run in multiple datacenters. Since Cassandra is proven to handle huge loads and works across datacenters, it makes sense as the database choice.
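
Cassandra’s multi-datacenter support comes mostly from its replication strategy. As a rough sketch of the idea (illustrative only; the keyspace, datacenter names, and replication factors here are made up, not Uber’s actual schema), a keyspace replicated across two datacenters might be defined like this with the DataStax Python driver:

# Sketch: a keyspace replicated across two datacenters using NetworkTopologyStrategy.
# Names, addresses, and replication factors are hypothetical, not Uber's configuration.
from cassandra.cluster import Cluster

cluster = Cluster(["10.0.0.1", "10.1.0.1"])  # example seed nodes, one per datacenter
session = cluster.connect()

session.execute("""
    CREATE KEYSPACE IF NOT EXISTS locations
    WITH replication = {
        'class': 'NetworkTopologyStrategy',
        'dc_west': 3,
        'dc_east': 3
    }
""")

session.execute("""
    CREATE TABLE IF NOT EXISTS locations.driver_positions (
        driver_id uuid,
        ts        timestamp,
        lat       double,
        lon       double,
        PRIMARY KEY (driver_id, ts)
    ) WITH CLUSTERING ORDER BY (ts DESC)
""")

Writes can then be acknowledged at LOCAL_QUORUM, so the local datacenter answers quickly while replication to the remote datacenter happens asynchronously.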

And if you want to make transportation reliable for everyone, everywhere, you need to use your resources efficiently. That’s the idea behind using a datacenter OS like Mesos. By statistically multiplexing services on the same machines you need 30% fewer machines, which saves money. Mesos was chosen because at the time it was the only product proven to work with cluster sizes of tens of thousands of machines, which was an Uber requirement. Uber does things in the large.

What were some of the more interesting findings?

  • You can run stateful services in containers. Uber found there was hardly any difference, 5-10% overhead, between running Cassandra on bare metal versus running Cassandra in a container managed by Mesos.

  • Performance is good: a mean read latency of 13 ms, a mean write latency of 25 ms, and P99s look good.

  • For their largest clusters they are able to support more than a million writes/sec and ~100k reads/sec.

  • Agility is more important than performance. With this kind of architecture what Uber gets is agility. It’s very easy to create and run workloads across clusters.

Here’s my gloss of the talk:

In the Beginning

Click to read more ...

Tuesday
Sep 13, 2016

The Dollar Shave Club Architecture Unilever Bought for $1 Billion

This is a guest post by Jason Bosco, the Dollar Shave Club’s Director of Engineering, Core Platform & Infrastructure, on the infrastructure of its ecommerce technology.

With more than 3 million members, Dollar Shave Club will do over $200 million in revenue this year. Although most people are familiar with the company’s marketing, this immense growth in just a few years since launch is largely due to its team of 45 engineers.

Dollar Shave Club engineering by the numbers:

Core Stats

Click to read more ...

Tuesday
Aug 30, 2016

The cat-and-mouse story of implementing anti-spam for Mail.Ru Group’s email service and what Tarantool has to do with this

Hey guys!

In this article, I’d like to tell you the story of implementing the anti-spam system for Mail.Ru Group’s email service and share our experience of using the Tarantool database within this project: what tasks Tarantool serves, what limitations and integration issues we faced, what pitfalls we fell into, and how we finally arrived at a revelation.

Let me start with a short look back. We started introducing anti-spam for the email service roughly ten years ago. Our first filtering solution was Kaspersky Anti-Spam together with RBL (Real-time blackhole list, a realtime list of IP addresses that have something to do with spam mailouts). This allowed us to decrease the flow of spam messages, but due to the system’s inertia, we couldn’t suppress spam mailouts quickly enough (i.e. in real time). The other requirement that wasn’t met was speed: users should receive verified email messages with minimal delay, but the integrated solution was not fast enough to catch up with the spammers. Spam senders are very fast at changing their behavior and the look of their spam content when they find out that their messages are not being delivered. So, we couldn’t put up with the system’s inertia and started developing our own spam filter...
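
For readers who haven’t run into RBLs before, the mechanics are simple: reverse the octets of the connecting IP address and look them up as a DNS name under the blocklist’s zone; if the name resolves, the IP is listed. Here is a minimal sketch (the zone name is just a common public example, not a statement about which lists Mail.Ru Group actually queried):

# Minimal DNSBL/RBL lookup sketch. The blocklist zone is illustrative only.
import socket

def is_listed(ip: str, zone: str = "zen.spamhaus.org") -> bool:
    reversed_ip = ".".join(reversed(ip.split(".")))     # 1.2.3.4 -> 4.3.2.1
    try:
        socket.gethostbyname(f"{reversed_ip}.{zone}")   # any A record means "listed"
        return True
    except socket.gaierror:                             # NXDOMAIN -> not listed
        return False

print(is_listed("127.0.0.2"))  # most DNSBLs list this address for testing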

Click to read more ...

Monday
Jul 18, 2016

How Does Google do Planet-Scale Engineering for a Planet-Scale Infrastructure?

 

How does Google keep all its services up and running? They almost never seem to fail. If you've ever wondered, we get a wonderful peek behind the curtain in a talk given at GCP NEXT 2016 by Melissa Binde, Director of Storage SRE at Google: How Google Does Planet-Scale Engineering for Planet-Scale Infrastructure.

Melissa's talk is short, but it's packed with wisdom and delivered in a no-nonsense style that makes you think that if your service is down, Melissa is definitely the kind of person you want on the case.

Oh, just what is SRE? It stands for Site Reliability Engineering, but a definition is more elusive. It's like the kind of answers you get when you ask for a definition of the Tao. It's more a process than a thing, as is made clear by Ben Sloss, Google's 24x7 VP, who defines SRE as:

what happens when a software engineer is tasked with what used to be called operations.

Let that bounce around your head for a while.

Above and beyond all else one thing is clear: SREs are the custodians of production. SREs are the custodians of customer experience, for both google.com and GCP.

Some of the highlights of the talk for me:

  • The Destructive Incentives of Pitting Uptime vs Features. SRE is an attempt to solve the natural tension between developers who want to push features and sysadmins who want to maintain uptime by not pushing features.
  • The Error Budget. This is the idea that failure is expected. It's not a bad thing. Users can't tell if a service is up 100% of the time or 99.99%, so you can have errors. This reduces the tension between dev and ops. As long as the error budget is maintained you can push out new features and the ops side won't be blamed. (A back-of-the-envelope sketch of the arithmetic follows this list.)
  • Goal is to restore service immediately. Troubleshooting comes later. This means you need a lot of logging and tooling to debug after a service has been restored. For some reason this made me flash on a line from an earlier article, also based on a talk from a Google SRE: Backups are useless. It’s the restore you care about.
  • No Boredom Philosophy of Paging. When a page comes in it should be for an interesting and new problem. You don't want SREs being bored handling repetitive problems. That's what bots are for.
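
To make the error budget idea concrete, here is a back-of-the-envelope sketch (my own illustration, not Google's tooling): pick an availability target, and its complement is the budget of failures you are allowed to spend on launches and risky changes.

# Back-of-the-envelope error budget arithmetic (illustrative, not Google tooling).
MINUTES_PER_30_DAYS = 30 * 24 * 60

def error_budget(slo: float, total_requests: int, window_minutes: int = MINUTES_PER_30_DAYS):
    budget = 1.0 - slo  # the fraction of requests/time allowed to fail
    return {
        "allowed_failed_requests": round(total_requests * budget),
        "allowed_downtime_minutes": round(window_minutes * budget, 1),
    }

# A 99.99% SLO over 30 days and one billion requests leaves roughly
# 100,000 requests that may fail, or about 4.3 minutes of full downtime.
print(error_budget(slo=0.9999, total_requests=1_000_000_000))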

Other interesting topics in the talk are: How is SRE structured organizationally? How do you hire devs into a role focused on production and keep them happy? How do we keep the team valued inside of Google? How do we help our teams communicate better and resolve disagreements with data rather than with assertions or power grabs?

Let's get on with it. Here's how Google does Planet-Scale Engineering for a Planet-Scale Infrastructure...

Click to read more ...

Monday
Jun 27, 2016

How Facebook Live Streams to 800,000 Simultaneous Viewers

Fewer companies know how to build world-spanning distributed services than there are countries with nuclear weapons. Facebook is one of those companies, and Facebook Live, Facebook’s new live video streaming product, is one of those services.

Facebook CEO Mark Zuckerberg

The big decision we made was to shift a lot of our video efforts to focus on Live, because it is this emerging new format; not the kind of videos that have been online for the past five or ten years...We’re entering this new golden age of video. I wouldn’t be surprised if you fast-forward five years and most of the content that people see on Facebook and are sharing on a day-to-day basis is video.

If you are in the advertising business what could be better than a supply of advertising-ready content that is never ending, always expanding, and freely generated? It’s the same economics Google exploited when it started slapping ads on an exponentially growing web.

An example of Facebook’s streaming prowess is a 45 minute video of two people exploding a watermelon with rubber bands. It reached a peak of over 800,000 simultaneous viewers who also racked up over 300,000 comments. That’s the kind of viral scale you can generate with a social network of 1.5 billion users.

As a comparison, the 2015 Super Bowl was watched by 114 million viewers, with an average of 2.36 million watching the live stream. On Twitch there was a peak of 840,000 viewers at E3 2015. The September 16th Republican debate peaked at 921,000 simultaneous live streams.

So Facebook is right up there with the state of the art. Keep in mind Facebook would have a large number of other streams going on at the same time as well.

A Wired article quotes Chris Cox, Facebook’s chief product officer, who said Facebook:

  • Has more than a hundred people working on Live. (it started with ~12 and now there are more than 150 engineers on the project)

  • Needs to be able to serve up millions of simultaneous streams without crashing.

  • Needs to be able to support millions of simultaneous viewers on a stream, as well as seamless streams across different devices and service providers around the world.

Cox said that “It turns out it’s a really hard infrastructure problem.”

Wouldn't it be interesting if we had some details about how that infrastructure problem was solved? Woe is we. But wait, we do!

Federico Larumbe from Facebook’s Traffic Team, which works on the caching software powering Facebook’s CDN and the Global Load Balancing system, gave an excellent talk: Scaling Facebook Live, where he shares some details about how Live works.

Here’s my gloss on the talk. It’s impressive.

Origin Story

Click to read more ...

Wednesday
Jun 15, 2016

The Image Optimization Technology that Serves Millions of Requests Per Day

This article will touch upon how Kraken.io built and scaled an image optimization platform which serves millions of requests per day, with the goal of maintaining high performance at all times while keeping costs as low as possible. We present our infrastructure as it is in its current state at the time of writing, and touch upon some of the interesting things we learned in order to get it here.

Let’s make an image optimizer

You want to start saving money on your CDN bills and generally speed up your websites by pushing fewer bytes over the wire to your users’ browsers. Chances are that over 60% of your traffic is images alone.

Using ImageMagick (you did read ImageTragick, right?) you can slash the quality of a JPEG file with a simple command:

$ convert -quality 70 original.jpg optimized.jpg

$ ls -la

-rw-r--r--  1 matylla  staff  5897 May 16 14:24 original.jpg

-rw-r--r--  1 matylla  staff  2995 May 16 14:25 optimized.jpg

Congratulations. You’ve just brought down the size of that JPEG by ~50% by butchering its quality. The image now looks like Minecraft. It can’t look like that; it sells your products and services. Ideally, images on the Web should have outstanding quality and carry no unnecessary bloat in the form of excessively high quality or EXIF metadata.

You now open your favourite image-editing software and start playing with Q levels while saving a JPEG for the Web. It turns out that this particular image you test looks great at Q76. You start saving all your JPEGs with quality set to 76. But hold on a second… Some images look terrible even at Q80 while others would look just fine even at Q60.

OK. You decide to automate it somehow; who wants to manually test the quality of the millions of images you have the “privilege” of maintaining? So you create a script that generates dozens of copies of an input image at different Q levels. Now you need a metric that will tell you which Q level is perfect for a particular image. MSE? SSIM? MS-SSIM? PSNR? You’re so desperate that you even start calculating and comparing perceptual hashes of different versions of your input image.

Some metrics perform better than others. Some work well for specific types of images. Some are blazingly fast while others take a long time to complete. You can get away with reducing the number of passes in which you process each image, but then chances are you will miss your perfect Q level and the image will either be heavier than it needs to be or the quality degradation will be too high.
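
As an illustration of the kind of automation being described (a simplified sketch under my own assumptions, not Kraken.io's actual pipeline), here is a small search over JPEG quality that settles on the lowest Q whose SSIM score against the original stays above a threshold, using Pillow and scikit-image:

# Simplified sketch: pick the lowest JPEG quality whose SSIM vs. the original
# stays above a target. Not Kraken.io's pipeline; the threshold is arbitrary.
import io
import numpy as np
from PIL import Image
from skimage.metrics import structural_similarity as ssim

def encode_jpeg(img, quality):
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    return buf.getvalue()

def pick_quality(path, target_ssim=0.96):
    original = Image.open(path).convert("RGB")
    ref = np.asarray(original)
    lo, hi, best = 40, 95, 95
    while lo <= hi:                         # binary search over the Q range
        q = (lo + hi) // 2
        candidate = np.asarray(Image.open(io.BytesIO(encode_jpeg(original, q))).convert("RGB"))
        if ssim(ref, candidate, channel_axis=2) >= target_ssim:
            best, hi = q, q - 1             # good enough, try lower
        else:
            lo = q + 1                      # too degraded, raise quality
    return best

# best_q = pick_quality("original.jpg"); then re-encode once at that quality.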

And what about product images against white backgrounds? You really want to reduce ringing/haloing artifacts around the subject. What about custom chroma-subsampling settings on a per-image basis? That red dress against a white background looks all washed-out now. You’ve learned that stripping EXIF metadata will bring the file size down a bit, but you’ve also removed the Orientation tag and now your images are all rotated incorrectly.

And that’s only the JPEG format. For your PNGs you’d probably want to re-compress your 7-Zip or Deflate compressed images with something more cutting-edge, like Google’s Zopfli. You spin up your script and watch the fan on your CPU start to melt...

Click to read more ...

Wednesday
Apr 20, 2016

How Twitter Handles 3,000 Images Per Second

Today Twitter is creating and persisting 3,000 images (200 GB) per second. Even better, in 2015 Twitter was able to save $6 million due to improved media storage policies.

It was not always so. Twitter in 2012 was primarily text based. A Hogwarts without all the cool moving pictures hanging on the wall. It’s now 2016 and Twitter has moved into a media-rich future. Twitter has made the transition through the development of a new Media Platform capable of supporting photos with previews, multi-photos, gifs, vines, and inline video.

Henna Kermani, a Software Development Engineer at Twitter, tells the story of the Media Platform in an interesting talk she gave at Mobile @Scale London: 3,000 images per second. The talk focuses primarily on the image pipeline, but she says most of the details apply to the other forms of media as well.

Some of the most interesting lessons from the talk:

  • Doing the simplest thing that can possibly work can really screw you. The simple method of uploading a tweet with an image as an all-or-nothing operation was a form of lock-in. It didn’t scale well, especially on poor networks, which made it difficult for Twitter to add new features.

  • Decouple. By decoupling media upload from tweeting, Twitter was able to independently optimize each pathway and gain a lot of operational flexibility.

  • Move handles not blobs. Don’t move big chunks of data through your system. It eats bandwidth and causes performance problems for every service that has to touch the data. Instead, store the data and refer to it with a handle.

  • Moving to segmented resumable uploads resulted in big decreases in media upload failure rates. (A generic sketch of the pattern appears after this list.)

  • Experiment and research. Twitter found through research that a 20 day TTL (time to live) on image variants (thumbnails, small, large, etc) was a sweet spot, a good balance between storage and computation. Images had a low probability of being accessed after 20 days so they could be deleted, which saves nearly 4TB of data storage per day, almost halves the number of compute servers needed, and saves millions of dollars a year.

  • On demand. Old image variants could be deleted because they could be recreated on the fly rather than precomputed. Performing services on demand increases flexibility, lets you be a lot smarter about how tasks are performed, and gives a central point of control.

  • Progressive JPEG is a real winner as a standard image format. It has great frontend and backend support and performs very well on slower networks.
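
The segmented-upload idea is easy to sketch as client code. This is a generic illustration of the pattern (the endpoint names and parameters are hypothetical, not Twitter's actual media upload API): initialize a session, append fixed-size segments that can each be retried independently, then finalize.

# Generic sketch of a segmented, resumable upload client. Endpoints and
# parameters are hypothetical; the point is that only a failed segment
# needs to be retried, not the whole file.
import os
import requests

CHUNK_SIZE = 512 * 1024  # 512 KB segments
BASE = "https://upload.example.com/media"

def upload(path):
    total = os.path.getsize(path)
    media_id = requests.post(f"{BASE}/init", data={"total_bytes": total}).json()["media_id"]

    with open(path, "rb") as f:
        index = 0
        while chunk := f.read(CHUNK_SIZE):
            for attempt in range(3):        # retry just this segment on failure
                resp = requests.post(
                    f"{BASE}/append",
                    data={"media_id": media_id, "segment_index": index},
                    files={"media": chunk},
                )
                if resp.ok:
                    break
            index += 1

    requests.post(f"{BASE}/finalize", data={"media_id": media_id})
    return media_id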

Lots of good things happened on Twitter’s journey to a media-rich future. Let’s learn how they did it...

The Old Way - Twitter in 2012

Click to read more ...

Monday
Mar 28, 2016

How we implemented the video player in Mail.Ru Cloud

We’ve recently added a video streaming service to Mail.Ru Cloud. Development started with contemplating the new feature as an all-purpose “Swiss Army knife” that would both play files of any format and work on any device with the Cloud available. Video content uploaded to the Cloud mostly falls into one of two categories: “movies/series” and “users’ videos”. The latter are the videos that users shoot with their phones and cameras, and these are the most varied in terms of formats and codecs. For many reasons, it is often a problem to watch these videos on other end-user devices without prior normalization: a required codec is missing, or the file size is too big to download, or whatever.
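
To give a sense of what normalization typically involves (a hedged sketch of one common approach, not necessarily what Mail.Ru Cloud does), a script might transcode an arbitrary upload into an H.264/AAC MP4 that most end-user devices can play, for example by driving ffmpeg from Python:

# Sketch of a common normalization step: transcode to H.264 video + AAC audio
# in an MP4 container. Codec and preset choices here are illustrative.
import subprocess

def normalize(src, dst, height=720):
    subprocess.run(
        [
            "ffmpeg", "-y",
            "-i", src,
            "-c:v", "libx264",            # broadly supported video codec
            "-preset", "fast",
            "-vf", f"scale=-2:{height}",  # cap the height, keep aspect ratio
            "-c:a", "aac",
            "-movflags", "+faststart",    # move metadata up front for streaming playback
            dst,
        ],
        check=True,
    )

# normalize("users_video.3gp", "users_video_720p.mp4")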

In this article, I’ll go into detail to explain how video playback works in Mail.Ru Cloud, and how we made the Cloud player “omnivorous” and ensured support on a maximum number of end-user devices.

Storing and Caching: two approaches

Click to read more ...

Wednesday
Mar 23, 2016

What does Etsy's architecture look like today?

This is a guest post by Christophe Limpalair based on an interview (video) he did with Jon Cowie, Staff Operations Engineer and Breaksmith @ Etsy.

Etsy has been a fascinating platform to watch, and study, as they transitioned from a new platform to a stable and well-established e-commerce engine. That shift required a lot of cultural change, but the end result is striking.

In case you haven't seen it already, there's a post from 2012 that outlines their growth and shift. But what has happened since then? Are they still innovating? How are engineering decisions made, and how does this shape their engineering culture? These are questions we explored with Jon Cowie, a Staff Operations Engineer at Etsy, and the author of Customizing Chef, in a new podcast episode.

What does Etsy's architecture look like nowadays?

Click to read more ...

Wednesday
Mar 16, 2016

Jeff Dean on Large-Scale Deep Learning at Google


If you can’t understand what’s in information then it’s going to be very difficult to organize it.

 

This quote is from Jeff Dean, currently a Wizard, er, Fellow in Google’s Systems Infrastructure Group. It’s taken from his recent talk: Large-Scale Deep Learning for Intelligent Computer Systems.

Since AlphaGo vs Lee Se-dol, the modern version of John Henry’s fatal race against a steam hammer, has captivated the world, as has the generalized fear of an AI apocalypse, it seems like an excellent time to gloss Jeff’s talk. And if you think AlphaGo is good now, just wait until it reaches beta.

Jeff is referring, of course, to Google’s infamous motto: organize the world’s information and make it universally accessible and useful.

Historically we might associate ‘organizing’ with gathering, cleaning, storing, indexing, reporting, and searching data. All the stuff early Google mastered. With that mission accomplished Google has moved on to the next challenge.

Now organizing means understanding.

Some highlights from the talk for me:

  • Real neural networks are composed of hundreds of millions of parameters. The skill that Google has is in how to build and rapidly train these huge models on large interesting datasets, apply them to real problems, and then quickly deploy the models into production across a wide variety of different platforms (phones, sensors, clouds, etc.).

  • The reason neural networks didn’t take off in the 90s was a lack of computational power and a lack of large interesting data sets. You can see how Google’s natural love of algorithms combined with their vast infrastructure and ever enlarging datasets created a perfect storm for AI at Google.

  • A critical difference between Google and other companies is that when they started the Google Brain project in 2011, they didn’t keep their research in the ivory tower of a separate research arm of the company. The project team worked closely with other teams like Android, Gmail, and photos to actually improve those properties and solve hard problems. That’s rare and a good lesson for every company. Apply research by working with your people.

  • This idea is powerful: They’ve learned they can take a whole bunch of subsystems, some of which may be machine learned, and replace them with a much more general end-to-end machine learning piece. Often when you have lots of complicated subsystems there’s usually a lot of complicated code to stitch them all together. It’s nice if you can replace all that with data and very simple algorithms.

  • Machine learning will only get better, faster. A paraphrased quote from Jeff: The machine learning community moves really, really fast. People publish a paper and within a week lots of research groups throughout the world have downloaded the paper, have read it, dissected it, understood it, implemented some extensions to it, and published their own extensions to it on arXiv.org. It’s different from a lot of other parts of computer science where people would submit a paper, and six months later a conference would decide to accept it or not, and then it would come out in the conference proceedings three months later. By then it’s a year. Getting that time down from a year to a week is amazing.

  • Techniques can be combined in magical ways. The Translate Team wrote an app using computer vision that recognizes text in a viewfinder. It translates the text and then superimposes the translated text on the image itself. Another example is writing image captions. It combines image recognition with the Sequence-to-Sequence neural network. You can only imagine how all these modular components will be strung together in the future.

  • Models with impressive functionality are small enough to run on smartphones. For technology to disappear, intelligence must move to the edge. It can’t be dependent on a network umbilical cord connected to a remote cloud brain. Since TensorFlow models can run on a phone, that might just be possible.

  • If you’re not considering how to use deep neural nets to solve your data understanding problems, you almost certainly should be. This line is taken directly from the talk, but its truth is abundantly clear after you watch hard problem after hard problem made tractable using deep neural nets.

Jeff always gives great talks and this one is no exception. It’s straightforward, interesting, in-depth, and relatively easy to understand. If you are trying to get a handle on Deep Learning or just want to see what Google is up to, then it's a must see.

There’s not a lot of fluff in the talk. It’s packed. So I’m not sure how much value this article will add; if you would rather just watch the video, I’ll understand.

As often happens with Google talks there’s this feeling you get that we’ve only been invited into the lobby of Willy Wonka’s Chocolate factory. In front of us is a locked door and we're not invited in. What’s beyond that door must be full of wonders. But even Willy Wonka’s lobby is interesting.

So let’s learn what Jeff has to say about the future…it’s fascinating...

What is Meant by Understanding?

Click to read more ...