Monday, December 7, 2015

The Serverless Start-up - Down with Servers!


This is a guest post by Marcel Panse and Sander Nagtegaal from Teletext.io.

In our early Peecho days, we wrote an article explaining how to build a really scalable architecture for next to nothing, using Amazon Web Services. Auto-scaling, merciless decoupling and even automated bidding on unused server capacity were the tricks we used back then to operate on a shoestring. Now, it is time to take it one step further.

We would like to introduce Teletext.io, also known as the serverless start-up - again entirely built on AWS, but leveraging only the Amazon API Gateway, Lambda functions, DynamoDB, S3 and CloudFront.

The Virtues of Constraint

We like rules. At our previous start-up Peecho, product owners had to do fifty push-ups as payment for each user story that they wanted to add to an ongoing sprint. Now, at our current company myTomorrows, our developer dance-offs are legendary: during the daily stand-ups, you are only allowed to speak while dancing - leading to the most efficient meetings ever.

This way of thinking goes all the way into our product development. It may seem counter-intuitive at first, but constraints fuel creativity. For example, all our logo design is done with the technical diagramming tool OmniGraffle, so there is no way we could use hideous lens flares and such. Anyway - recently, we launched yet another initiative called Teletext.io. So, we needed a new restriction.

At Teletext.io, we are not allowed to use servers. Not even one.

It was a good choice. We will explain why.

Why Servers are Bad

Over the past years, the Amazon cloud has made us very happy with auto-scaled clusters of EC2 server instances behind a load balancer. If one of your servers goes down, a new one is started automatically. The same happens when an unexpected traffic peak occurs: extra servers are spun up to absorb it.

Although this is pretty cool, there are disadvantages.

  • First of all, you need to keep a minimum number of server instances alive to be able to serve any visitor, even if you don't have any traffic - or revenue. This costs money.
  • Secondly, because cloud instances run installed software on top of an operating system, you not only need to maintain your own code, but also have to keep the server software up to date and operational.
  • Third, you can't scale up or down in a granular way, but only one whole server at a time.

In an architecture based on an API with micro-services, this means a lot of overhead for small tasks. Luckily, there are now options to solve this. Let's first take a look at the problem at hand.

Scratch Your own Itch

Our brand new solution was born out of personal frustration with content management inside custom software. In most start-ups, HTML texts inside buttons, dashboards, help sections or even entire web pages have to be managed and deployed by programmers instead of by the people who actually write them. This is really annoying for developers and editors alike.

For years, we tried to find a distributed content management service that would solve this issue properly. We failed. So, a couple of months ago, we were fed up and decided that we would just have to build something ourselves.

The Plan

We planned to create a really simple, JavaScript-based service that could inject centrally hosted content, using HTML's standard data attributes to label elements. It should be able to handle localization and dynamic insertion of data. Most importantly, it should give content writers a way to change texts whenever they want, using an inline WYSIWYG editor in their own app.
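
To make the idea concrete, here is a minimal sketch of what such injection could look like; the data-tl-id attribute name and the shape of the content dictionary are our illustrations here, not necessarily the exact conventions Teletext.io uses.

// Fill every element labeled with a data attribute from a published content
// dictionary, e.g. { "home.title": "<strong>Welcome!</strong>" }.
// Elements without a matching key keep their hard-coded fallback text.
function applyContent(dictionary) {
  var elements = document.querySelectorAll('[data-tl-id]');
  for (var i = 0; i < elements.length; i++) {
    var key = elements[i].getAttribute('data-tl-id');
    if (dictionary.hasOwnProperty(key)) {
      elements[i].innerHTML = dictionary[key];
    }
  }
}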

Apart from some UX caveats, there are three technical challenges here.

  1. Because live websites rely on it for content, this new commodity service should never go down. Ever.
  2. It should be really, really fast, so your visitors don't even notice the content is loading.
  3. The content should be indexed by Google.

The third issue is not related to architecture per se. The trusty search engine moves in mysterious ways, so all we could do was test the assumption. Spoiler: yes, it works. The solution to the first two problems is in your own hands, though. They can be solved at low cost - with a clever architecture.

Building Blocks

Let's dive into the building blocks of our system, which are based on several of the latest features of Amazon Web Services.

Amazon API Gateway

Amazon API Gateway is a managed AWS service that allows developers to create APIs at any scale, with just a few clicks in the AWS Management Console. The service can trigger other AWS services, including Lambda functions.

AWS Lambda

Instead of running cloud instances, we use AWS Lambda. The name derives from the Greek letter lambda (λ), used in functional programming to denote binding a variable in a function. AWS Lambda lets you run code without maintaining any server instances. Think of an atomic, stateless function with a single task that may run for a limited amount of time (one minute, at the time of writing). Functions can be written in JavaScript (Node.js), Python or Java.

Once you upload your Lambda code, Amazon takes care of everything required to run and scale it with high availability. Lambda invocations run in parallel: if a million requests are made, a million Lambda functions will execute without loss of speed or capacity. According to Amazon, "there are no fundamental limits to scaling a function".

The best thing is that, from a developer's perspective, Lambda functions are not even there when they are not being executed. They only appear when needed. And what isn't up can't come down.
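
For readers who haven't used Lambda yet, a minimal Node.js function of that era looks roughly like the sketch below; the event is whatever the API Gateway mapping passes in, and context.succeed() hands the result back.

// A minimal, stateless Lambda handler: receive an event, do one small task,
// return a result. There is no process to keep alive between invocations.
exports.handler = function (event, context) {
  var name = event.name || 'world';
  context.succeed({ message: 'Hello, ' + name });
};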

DynamoDB

The Lambda functions put their data in a data store. Because our rule says that we are not allowed to use any servers, we can't use the Relational Database Service (RDS). Instead, we use DynamoDB, Amazon's giant managed data store.

Amazon DynamoDB is a NoSQL database service for all applications at any scale. It is fully managed and supports both document and key-value store models. Its scalability has been proven by users like Netflix, Airbnb and IMDb.

Architecture

Our system is decoupled into three parts:

  1. Content management
  2. Content delivery
  3. Our website

This separation of concerns is by design. A problem in either the content management API or our own website should never lead to an outage of our customers' websites. Therefore, the delivery of content has to be entirely autonomous.

Content Management

Teletext.io - editing dynamic content with Amazon API Gateway, DynamoDB and Lambda

The content management part is what editors use to edit HTML and texts. Editors can log in with their Google account, via an Amazon Cognito identity pool connected to AWS IAM, using JavaScript only. Loading draft content, storing edits and publishing the result all go through the Amazon API Gateway, which triggers Lambda functions. Each Lambda function corresponds to a single API call and stores its data in DynamoDB.

As you can see, there are no servers that could crash or get stuck.
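
As an illustration, a draft-saving function could look roughly like the sketch below; the table name and key schema are assumptions made for the example, not Teletext.io's actual data model.

// Hypothetical Lambda behind a "save draft" API call: write the edited
// HTML straight into a DynamoDB table and return.
var AWS = require('aws-sdk');
var db = new AWS.DynamoDB.DocumentClient();

exports.handler = function (event, context) {
  db.put({
    TableName: 'teletext-drafts',          // assumed table name
    Item: {
      projectId: event.projectId,          // assumed partition key
      contentKey: event.contentKey,        // assumed sort key
      html: event.html,
      updatedAt: Date.now()
    }
  }, function (err) {
    if (err) return context.fail(err);
    context.succeed({ saved: true });
  });
};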

Content Delivery

As mentioned, we decided to decouple the delivery of content entirely from the editing features, so your app keeps working even if disaster strikes. When an editor publishes new content to the live version of their app, another Lambda immediately copies the draft content as flat JSON files to S3, Amazon's data store for files. Each JSON file contains metadata and localized (i18n) HTML strings that describe the content.
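
A publish step along these lines could be as small as the sketch below; the bucket name and key layout are ours, for illustration only.

// Hypothetical "publish" Lambda: serialize the approved content to a flat
// JSON file in S3, where CloudFront can serve it without any further logic.
var AWS = require('aws-sdk');
var s3 = new AWS.S3();

exports.handler = function (event, context) {
  s3.putObject({
    Bucket: 'teletext-published',                          // assumed bucket
    Key: event.projectId + '/' + event.locale + '.json',   // assumed layout
    Body: JSON.stringify(event.content),
    ContentType: 'application/json'
  }, function (err) {
    if (err) return context.fail(err);
    context.succeed({ published: true });
  });
};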

Teletext.io - publishing static content with S3 and CloudFront

From there, the Teletext.io script in your app can access these files through the CloudFront CDN, ensuring high availability and performance. We added a clever prefetching algorithm to make sure the most popular files are retrieved and cached in your browser before you need them, so subsequent clicks are served without any noticeable loading of content.
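
We won't reproduce the real prefetching logic here, but the gist is simple: while the visitor reads the current page, request the JSON files they are most likely to need next, so the responses are already sitting in the browser's HTTP cache. A naive version (with made-up URLs) might look like this:

// Naive prefetch: fire requests for likely-next content files so the browser
// cache (and the CloudFront edge) is warm before the visitor clicks.
function prefetch(urls) {
  urls.forEach(function (url) {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', url, true);
    xhr.send();   // we never read the response; it just populates the cache
  });
}

prefetch([
  'https://cdn.example.com/my-project/en/pricing.json',
  'https://cdn.example.com/my-project/en/about.json'
]);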

Since there is no server-side logic involved in delivering published content, it is really fast and practically bulletproof.

Our Website

Teletext.io static single-page app in S3 with proper routing for deep linking

What about our website, though? We chose a simple but effective concept - again, without servers. The website was built with the React framework as a single-page app and deployed to S3 as a single, static file. CloudFront was then configured as the content delivery mechanism on top, ensuring super-fast delivery from many edge locations around the world.

Again, this approach is based on flat file delivery and therefore very robust.

The static app uses HTML5 pushState and React Router for URL handling. Usually, there is one problem with that: if you access a specific URL other than the root, a web server must be able to serve the same routes that the front end renders dynamically. This is currently impossible with S3 alone. We found a trick, though, that we would like to share here.

  1. Configure the app as a static website in S3, with the root pointing to the main file.
  2. Do not add any S3 redirect rules. Don’t even add the custom error page.
  3. Create a CloudFront distribution pointing to the S3 bucket.
  4. Create a custom error response in CloudFront that points to the main file as well. Make sure to enforce a fixed 200 response code.

The result is that all URL paths (except the root) lead to a 404 response in S3, which then triggers CloudFront's cached custom error response. That final response is just your single-page app again. From there, the browser can handle all routing based on the current path.

There is only one disadvantage: you can never return an actual 404 HTTP response code. In return, though, you get an ultra-cheap, ultra-scalable single-page app.
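
For reference, the relevant fragment of the CloudFront distribution configuration (step 4 above) looks roughly like this; /index.html stands in for whatever your single main file is called.

// CustomErrorResponses fragment of a CloudFront DistributionConfig:
// S3's 404 becomes a cached 200 that serves the single-page app itself.
CustomErrorResponses: {
  Quantity: 1,
  Items: [{
    ErrorCode: 404,
    ResponsePagePath: '/index.html',   // your single static app file
    ResponseCode: '200',
    ErrorCachingMinTTL: 300
  }]
}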

Practical Perks

Working with Lambda has an impact on your development process, although support is improving. For example, Lambda functions initially couldn't be versioned, which made testing and deploying risky. Amazon recently rolled out its versioning system, and now we can use mutable aliases. This means it is now possible to have different versions of the same function that can be updated independently - similar to a test environment versus a production environment.
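
A rough sketch of how that works with the Node.js SDK: publish an immutable numbered version of the current code, then repoint a mutable alias such as prod at it, while a test alias keeps tracking $LATEST. The function and alias names below are examples, not our actual setup.

// Freeze the current $LATEST code as a numbered version, then move the
// "prod" alias to it (the alias is assumed to exist already, created once
// with createAlias). Callers invoking the alias pick up the new version.
var AWS = require('aws-sdk');
var lambda = new AWS.Lambda();

lambda.publishVersion({ FunctionName: 'getDraftContent' }, function (err, published) {
  if (err) return console.error(err);
  lambda.updateAlias({
    FunctionName: 'getDraftContent',
    Name: 'prod',
    FunctionVersion: published.Version
  }, function (err, alias) {
    if (err) return console.error(err);
    console.log('prod now serves version ' + alias.FunctionVersion);
  });
});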

The Result

Our freemium service is now open for customers. We eat our own dog food, so of course we use it ourselves, too. In the following GIF you can see the functionality at work on our own website.

Teletext.io demo: inline content management inside custom software

However, the truly scalable nature of the system shows in our monthly AWS bill.

Cost

The cost is entirely defined by actual usage. For simplicity's sake, we will ignore temporary free tiers, as well as the slew of smaller services that we use.

  • With Lambda, you pay only for the compute time you consume - there is no charge when your code is not running. There is a perpetual free tier, too.
  • Amazon API Gateway has no minimum fees or startup costs. You pay only for the API calls you receive and the amount of data transferred out.
  • DynamoDB pricing is pay-for-what-you-use as well, although it is a bit more complicated. Put simply, it's based on storage, reads and writes.
  • Then there are S3 and CloudFront. Basically, you pay for storage and outbound bandwidth.

We just started, so to calculate the costs we made some assumptions. Let's take a fairly big website as our average customer. We estimate that such a client makes 1,000 API calls per month (editing only), which translates to roughly 1 GB of data in, and needs about 10 GB of traffic-related data out. We estimate persistent storage at 500 MB, and we don't expect Lambda execution time to exceed 2 seconds.

For different numbers of customers of this type, our monthly cost would look like this (rounded, in US dollars).

Customers   API Gateway   Lambda   DynamoDB   S3        CloudFront   Total
0           0             0        0          0.25      1.00         1.25
100         9.50          0        3.50       1.50      85.00        99.50
1000        93.50         4.00     3.50       15.00     850.00       966.00
10000       935.00        35.00    3.50       150.00    8050.00      9173.50
100000      9350.00       410.00   3.50       1500.00   76700.00     87963.50

As you can see, the cost is largely determined by the CDN, which gets relatively cheaper at higher traffic volumes. The API and its associated Lambda functions account for a much smaller portion. So if you build a service that is less dependent on CloudFront, you can do even better.

Down with Servers

Drawing on our love of creative constraints, we managed to launch a start-up with no servers, auto-scaling groups, load balancers or operating systems to maintain. This is not only a highly scalable solution with almost infinite peak capacity, but also a very cheap one - especially in the early days, before traction hits.

So... Down with servers!

On HackerNews / On Reddit

Reader Comments (26)

Gentlemen,

Great post. I love it!

Please check out our project on similar effort: https://github.com/MitocGroup/deep-microservices-todo-app (serverless web application) and https://github.com/MitocGroup/deep-framework (serverless web framework). If interested, let's connect (my email address is on Github:)

Best wishes,
Eugene I.

December 7, 2015 | Unregistered CommenterEugene I.

As you're not using most/any of CloudFront, have you thought about switching to something like Cloudflare, where you pay a fixed rate no matter how many clients hit your system? It'd eliminate the CloudFront scaling costs.

December 8, 2015 | Unregistered CommenterGiggaflop

Jesus wept.

December 8, 2015 | Unregistered CommenterMike Torino

Incredible. Can you point to some good tutorials for newbies?

December 8, 2015 | Unregistered CommenterMaskevt

Horrible. Pretty much stepping back to the 80s.

December 8, 2015 | Unregistered CommenterBob Thing

As you can see, there are no servers that could crash or get stuck.

Right, Amazon totally has no servers up and running. Also your Cloudfront costs are pathetic and you have no idea of engineering. F***ing Silicon Valley Hipster Fuck-Up.

December 8, 2015 | Unregistered Commenterthe_dude

I don't think your Dynamo pricing is correct. I can't imagine you would have the storage and throughput for 100K customers for $3.50/month.

Also, the CDN numbers are so high it would probably make sense to not even use it.

December 10, 2015 | Unregistered CommenterBrad

For $76K you could buy 10PB of CDN traffic from tier 1 provider.

December 11, 2015 | Unregistered CommenterKonstantin R.

Thanks for sharing your story Marcel and Sander, interesting approach.

Please adjust the title of your article, it's very misleading. You're not talking about a serverless setup, you're only talking about getting rid of managing your own servers. They call this Platform as a Service (PaaS). This isn't something new, see established solutions like Heroku and Google App Engine, just to mention two of them.

BTW you haven't really solved your bold challenge #1 ("this new commodity service should never go down. Ever"), all you have done is rely on Amazon to keep their services fast and up and running. Do you think they are immune to outages? (Hint: they had issues just months ago...) By not having to maintain the servers yourself, less can go wrong of course: many outages are caused by system updates and/or human errors.

December 11, 2015 | Unregistered CommenterJos de Jong

I've been thinking about adapting this architecture but one thing I'm concerned about is security. Where do you put all the (usual) server side logic for authentication or configuration for database access? Inside each lambda function? Or as a separate lambda function that each lambda function then calls (which seems like it could be redundant)?

December 11, 2015 | Unregistered CommenterEvanZ

I'm working on exactly this pattern for my employer: API Gateway -> Lambda -> DynamoDB. No servers to manage. Sweet. But it's a fairly new pattern so the documentation on some topics can be spotty. Permissions are a bit of a hassle. But I see the light: Compared to racking, stacking, and managing our own servers, this looks much cheaper and more reliable (we have more outages annually than AWS). Scalability problems disappear in this model: I can build an entire microservices-based app on the free AWS tier, and the only thing needed to scale that infrastructure to a global application with millions of users is: my credit card.

December 22, 2015 | Unregistered CommenterWarren Spencer

Wow - lots of ignorance and hate in the comments over a pretty darn good article.

@Jos de Jong - The title isn't very misleading, it is dead on. You just don't understand the meaning of the words nor, based on your other comments, the concept that they have developed (hint: it is much more complex than PaaS). The title is "Serverless Startup" - meaning they, as a startup, did not own or lease any servers. It is 100% accurate. That they purchase compute time, storage space, CDN, etc. that runs on AWS servers is beside the point.

@the_dude - looks like you forgot to take your meds today...

Marcel Panse and Sander Nagtegaal - thanks for the excellent article and inspiration. I am going to work actively using this model in my next project. Best of luck to you!

December 22, 2015 | Unregistered CommenterJoe Groom

Hi EvanZ, glad to hear you're thinking about it, but you've got some reading to do! AWS provides a couple of built-in mechanisms (Cognito, IAM), as well as an integration pattern to call out to your own security provider. My current implementation is using Cognito.

Cheers,

ws

December 24, 2015 | Unregistered CommenterWarren Spencer

This is ridiculously more complicated and just as prone to failure if not more so. Amazon can still fail, and with this many services in use, any failure anywhere in the chain can cause problems.

A small EC2 instance can take care of all this. Use 2 for failover just to be safe. For this app, 99% is just reading data so those 2 small instances will pretty much scale forever. And you get to just program in any basic webapp framework and deploy like normal.

Digital Ocean would be even cheaper and there are tons of CDNs that are cheaper than CloudFront. Also they are definitely not going to be pushing out 50TBs of data ever, that's a ridiculous amount of bandwidth for highly compressible text content. That's like saying they'll serve all the text for the top 100 news sites out there.

Also, their business model sucks, as I don't see who would pay for this. Usually text that needs to be edited (like articles in a CMS) already has editors. How often is the text on a website randomly changed? Even then, how long does it take a developer to just enter the new text and deploy?

Terrible company with crap engineering and a useless blog post.

December 26, 2015 | Unregistered CommenterSpin

I've been using the same pattern to build my site with Angular2.

I've been wanting to create a personal site to use as a blog and dumping ground for sharing projects/designs. Except, I really didn't want to deal with building, maintaining, and paying for a back-end. S3 is cheap as dirt and guarantees 5 9s of uptime (ie better than I could manage alone).

So, I created an Angular2 SPA (Single Page Application), configured S3 redirection, and added path rewriting to the angular router. I'm really hoping I can get the Angular dev team to take notice so they can improve the router to handle the server-less edge case, incl html history rewriting.

Syncing was easy to automate using grunt and an S3 plugin. It's like Jekyll in that Markdown is used as the format for all content; it's different in that there's no compilation step. I created a web component that converts Markdown to HTML directly (and will soon support inline caching of AJAX requests).

It's under development but the dev version can be seen @

Once it's fully functional, I plan to write about the process: building a site with Angular2/ES6; using JSPM; creating an ng2 web component. Until then, if you're interested feel free to check out my handle on GitHub.

December 29, 2015 | Unregistered CommenterEvan Plaice

That's a great idea Giggaflop. Has anyone done this on cloudflare? I'm curious what issues might arise and if it would work. Thanks.

January 30, 2016 | Unregistered CommenterDavid

For those of you looking for a demo web application that tries to create a best practice for the serverless architecture, please check out our SansServer project (https://github.com/bclemenzi/sans-server).

The project utilizes Java-based Lambda functions built and configured automatically in AWS using custom Java annotations at Maven install. There is also a strong focus on supporting multiple developers and environments.

March 23, 2016 | Unregistered CommenterBrendan

I created a new framework to deploy JAVA applications to AWS Lambda+API Gateway infrastructure

https://github.com/lambadaframework/lambadaframework

March 31, 2016 | Unregistered CommenterCagatay Gurturk

Hey Marcel, thanks for educating all of us, especially me, who was dumb in these server aspects and now I know a lot, all thanks to you. Keep writing and sharing such reads :)

April 13, 2016 | Unregistered CommenterMitesh Sanghvi

Great post. Just one question. Where do you store images etc in your project?

September 13, 2016 | Unregistered CommenterNicklas

"our developer dance-offs are legendary: during the daily stand-ups, you are only allowed to speak while dancing - leading to the most efficient meetings ever."

You have got to be kidding me.

September 21, 2016 | Unregistered CommenterAlex Lydiate

Here is a comprehensive step-by-step tutorial on the topic covered in this blog post.

http://serverless-stack.com

The backend tutorial includes chapters on Lambda + API Gateway secured by Cognito. The frontend tutorial includes chapters on a React SPA hosted on S3 + CloudFront + SSL + a Route 53 custom domain. The tutorial details how to build a CRUD serverless API and hook it up to a React SPA entirely on AWS. The API functions are in ES6 and authenticated with a Cognito User Pool. It also shows you how to host your app on S3 and serve it out using CloudFront and Route 53. It is an end-to-end tutorial that shows the serverless architecture in action.

March 9, 2017 | Unregistered CommenterFrank

Interesting post.

I'm also working in a backend team that uses serverless AWS Lambda as our Node.js backend platform. We found that serverless lightens the burden of managing our servers (the ops part). However, we noticed that our development speed has decreased compared to when we were using an Express-based Node.js server, because we are unable to get our service up and running on our own local development machines and debug it there. Debugging problems on AWS Lambda is tedious for us: it requires us to deploy our code to AWS, invoke it, and then see how our logic behaved by looking through a bunch of CloudWatch log streams (our code has a lot of console.log calls to see what's going on). Given that, I'd like to hear your thoughts: how do you keep your team's development productivity up when you can't run and debug serverless code on local machines?

Cheers.

March 15, 2017 | Unregistered CommenterWendy Sanarwanto

Maybe try testing the code from the command line? Here's how I do it:

#!/usr/bin/env node

// Simple command-line harness for invoking the Lambda handler locally.
// Pass an optional question id as the first argument.
var question_id = (process.argv.length >= 3) ? process.argv[2] : "test2";

// The Lambda code under test.
var lambda = require("../index.js");

var AWS = require("aws-sdk");
AWS.config.loadFromPath("./awscfg.json");

// Fake "context" carrying the AWS clients the handler expects.
var context = {
  functionName: "testFunction",
  AWS: AWS,
  DB: new AWS.DynamoDB(),
  DBCLIENT: new AWS.DynamoDB.DocumentClient(),
  SES: new AWS.SES()
};

// Sample payload for the handler.
var decision = {
  questionId: question_id,
  rating: "excellent",
  text: "wow!"
};

var request = {
  method: "acceptProAnswer",
  params: decision,
  id: 1
};

lambda.handler(request, context, function (error, response) {
  console.log("handler:error:" + error + " response:" + JSON.stringify(response));
});

March 16, 2017 | Registered CommenterHighScalability Team

Wonderful article. For the myriad comments like "no more reliable than..." and "not really serverless", I highly recommend an education on this technology. It's a stack of technologies we all use - just more intelligently provisioned. It's almost trivial to architect an H/A solution and eliminate SPOFs. My concern was performance, but after working on Lambda for a few months, I'm quite pleased.

Great to see others moving forward with this technology. It eliminates a huge chunk of the start-up barriers for new tech companies.

October 28, 2017 | Unregistered CommenterJeff Preletz
