
The most frequent questions we get from developers and DevOps engineers are about our architecture and how we achieve such high availability. Some are skeptical that bare metal servers can deliver high availability, while others question how we distribute data worldwide. My favorite question, though, is “How is it possible for a startup to build an infrastructure like this?” It is true that our current architecture is impressive for a young company:
- Our high-end dedicated machines are hosted in 13 worldwide regions with 25 data centers
- Our master-master setup replicates our search engine on at least 3 different machines (see the sketch after this list)
- We process over 6 billion queries per month
- We receive and handle over 20 billion write operations per month
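To make the master-master replication idea concrete, here is a minimal sketch of one common way to acknowledge a write only after a majority of replicas have accepted it. Everything in it is illustrative: the replica names, the `apply_write` stub, and the majority-quorum rule are assumptions for the example, not a description of our actual protocol.

```python
import concurrent.futures

# Illustrative replica set: three machines, any of which can accept writes
# (master-master). These names are hypothetical.
REPLICAS = ["replica-1", "replica-2", "replica-3"]
QUORUM = len(REPLICAS) // 2 + 1  # majority: 2 out of 3

def apply_write(host: str, operation: dict) -> bool:
    # Stand-in for the real network call that applies `operation` on `host`.
    # For the sketch, every replica simply accepts the write.
    return True

def quorum_write(operation: dict) -> bool:
    """Return True once a majority of replicas have accepted the write."""
    acks = 0
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = [pool.submit(apply_write, h, operation) for h in REPLICAS]
        for future in concurrent.futures.as_completed(futures):
            if future.result():
                acks += 1
            if acks >= QUORUM:
                return True  # safe to acknowledge the client
    return False

if __name__ == "__main__":
    print(quorum_write({"action": "addObject", "body": {"title": "hello"}}))
```

The appeal of a majority quorum in a 3-replica setup is that any single machine can fail without blocking writes, while two replicas always share the latest acknowledged state.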
Just like Rome, our infrastructure wasn't built in a day. This series of posts explores the 15 instrumental steps we took when building it. I will even discuss our outages and bugs, so that you can understand how we used them to improve our architecture.
The first blog post of this series focused on our early days in beta, and the second on the first 18 months of the service, including our first outages. In this last post, I will describe how we transformed our "startup" architecture into one able to meet the expectations of large public companies.
Step 11: February 2015
Launch of our Synchronized Worldwide infrastructure