Entries in Paper (127)

Thursday
May 15, 2014

Paper: SwiftCloud: Fault-Tolerant Geo-Replication Integrated all the Way to the Client Machine

So how do you knit multiple datacenters and many thousands of phones and other clients into a single cooperating system?

Usually you don't. It's too hard. We see nascent attempts in services like Firebase and Parse. 

SwiftCloud, as described in SwiftCloud: Fault-Tolerant Geo-Replication Integrated all the Way to the Client Machine, goes two steps further by leveraging Conflict-free Replicated Data Types (CRDTs), which means "data can be replicated at multiple sites and be updated independently with the guarantee that all replicas converge to the same value. In a cloud environment, this allows a user to access the data center closer to the user, thus optimizing the latency for all users."
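To make the convergence idea concrete, here's a minimal sketch of a grow-only set CRDT in Python. It's my own toy illustration, not SwiftCloud's code: each replica updates locally without coordination, and merging is just set union, so replicas that exchange state converge no matter the order of delivery.

# Minimal grow-only set (G-Set) CRDT sketch -- illustrative, not SwiftCloud's code.
class GSet:
    def __init__(self):
        self.items = set()

    def add(self, item):          # local update, no coordination needed
        self.items.add(item)

    def merge(self, other):       # commutative, associative, idempotent
        self.items |= other.items

    def value(self):
        return frozenset(self.items)

# Two replicas diverge, then converge after exchanging state in any order.
a, b = GSet(), GSet()
a.add("from-phone")
b.add("from-datacenter")
a.merge(b); b.merge(a)
assert a.value() == b.value()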

While we don't see these kinds of systems just yet, they are a strong candidate for how things will work in the future, efficiently using resources at every level while supporting huge numbers of cooperating users.

Abstract:

Click to read more ...

Thursday
May 1, 2014

Paper: Can Programming Be Liberated From The Von Neumann Style? 

Famous computer scientist John Backus, the B in BNF (Backus-Naur form) and the creator of Fortran, gave a Turing Award lecture titled Can programming be liberated from the von Neumann style?: a functional style and its algebra of programs, which laid out a division in programming that lives on long after its publication in 1977.

It's the now familiar argument for why functional programming is superior:

The assignment statement is the von Neumann bottleneck of programming languages and keeps us thinking in word-at-a-time terms in much the same way the computer's bottleneck does.

...

The second world of conventional programming languages is the world of statements. The primary statement in that world is the assignment statement itself. All the other statements of the language exist in order to make it possible to perform a computation that must be based on this primitive construct: the assignment statement.
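A compact way to see the contrast Backus is drawing (my example, not the lecture's) is summing a list word-at-a-time with assignment versus expressing the same program as a composition of functions:

# Word-at-a-time, assignment-driven style: state is threaded through a variable.
def sum_imperative(xs):
    total = 0
    for x in xs:
        total = total + x   # the assignment statement Backus objects to
    return total

# Function-level style: the program is a composition of operators, no visible state.
from functools import reduce
import operator

def sum_functional(xs):
    return reduce(operator.add, xs, 0)

assert sum_imperative([1, 2, 3]) == sum_functional([1, 2, 3]) == 6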

Here's a response by Dijkstra, A review of the 1977 Turing Award Lecture by John Backus. And here's an interview with Dijkstra.

There's great discussion on a recent Hacker News thread and an older thread, and also on Lambda the Ultimate. David Bolton has a nice summary of the project.

There's nothing I can really add to the discussion, as much smarter people than me have argued this endlessly. Personally, I'm more of a biology-than-math, languages-are-for-communicating-with-people sort of programmer, so the argument from formal systems has never persuaded me greatly. Doing a proof of a bubble sort in school was quite enough for me. Its applicability to real, complex systems has always been in doubt.

The problem of how to best utilize distributed cores is a compelling concern. The assumption that parallelism has to be solved at the language level, though, rather than at the system level as we've done it so far, is not as compelling.

It's a passionate paper and the discussion is equally passionate. While nothing is really solved, if you haven't taken a deep dive into this dialogue across the generations, it's well worth your time to do so.

Thursday
Apr 10, 2014

Paper: Scalable Atomic Visibility with RAMP Transactions - Scale Linearly to 100 Servers

We are not yet at the End of History for database theory as Peter Bailis and the Database Group at UC Berkeley continue to prove with a great companion blog post to their new paper: Scalable Atomic Visibility with RAMP Transactions. I like the approach of pairing a blog post with a paper. A paper almost by definition is formal, but a blog post can help put a paper in context and give it some heart.

From the abstract:

Databases can provide scalability by partitioning data across several servers. However, multi-partition, multi-operation transactional access is often expensive, employing coordination-intensive locking, validation, or scheduling mechanisms. Accordingly, many real-world systems avoid mechanisms that provide useful semantics for multi-partition operations. This leads to incorrect behavior for a large class of applications including secondary indexing, foreign key enforcement, and materialized view maintenance. In this work, we identify a new isolation model—Read Atomic (RA) isolation—that matches the requirements of these use cases by ensuring atomic visibility: either all or none of each transaction’s updates are observed by other transactions. We present algorithms for Read Atomic Multi-Partition (RAMP) transactions that enforce atomic visibility while offering excellent scalability, guaranteed commit despite partial failures (via synchronization independence), and minimized communication between servers (via partition independence). These RAMP transactions correctly mediate atomic visibility of updates and provide readers with snapshot access to database state by using limited multi-versioning and by allowing clients to independently resolve non-atomic reads. We demonstrate that, in contrast with existing algorithms, RAMP transactions incur limited overhead—even under high contention—and scale linearly to 100 servers.
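To give a feel for atomic visibility, here's a rough Python sketch loosely in the spirit of the paper's RAMP-Fast algorithm: writes carry a timestamp plus the set of sibling keys written in the same transaction, and readers use that metadata to repair fractured reads with a targeted second round. The names and data structures are mine, and the real algorithm handles much more (failures, garbage collection, concurrency), so treat this strictly as an illustration.

from collections import defaultdict

class Partition:
    """One partition (here, one per key) keeping multiple versions per key."""
    def __init__(self):
        self.versions = defaultdict(dict)   # key -> {timestamp: (value, sibling_keys)}
        self.latest = {}                    # key -> highest committed timestamp

    def prepare(self, key, ts, value, siblings):
        self.versions[key][ts] = (value, siblings)

    def commit(self, key, ts):
        self.latest[key] = max(self.latest.get(key, -1), ts)

    def get_latest(self, key):
        ts = self.latest.get(key)
        if ts is None:
            return None, None, frozenset()
        value, siblings = self.versions[key][ts]
        return ts, value, siblings

    def get_version(self, key, ts):
        return self.versions[key][ts][0]

def ramp_write(partitions, ts, writes):
    """Prepare on every partition first, only then commit (no locks, no blocked reads)."""
    keys = frozenset(writes)
    for key, value in writes.items():
        partitions[key].prepare(key, ts, value, keys - {key})
    for key in writes:
        partitions[key].commit(key, ts)

def ramp_read(partitions, keys):
    """First-round reads, then targeted second-round reads to restore atomic visibility."""
    first = {k: partitions[k].get_latest(k) for k in keys}     # k -> (ts, value, siblings)
    result = {k: value for k, (ts, value, _) in first.items()}
    needed = {}                                                # k -> newer version we must fetch
    for k, (ts, _, siblings) in first.items():
        if ts is None:
            continue
        for sib in siblings:
            if sib in first and (first[sib][0] is None or first[sib][0] < ts):
                needed[sib] = max(needed.get(sib, -1), ts)
    for k, ts in needed.items():
        result[k] = partitions[k].get_version(k, ts)           # prepared before any commit, so it exists
    return result

parts = defaultdict(Partition)                                 # toy model: one partition per key
ramp_write(parts, ts=1, writes={"x": "x1", "y": "y1"})
assert ramp_read(parts, ["x", "y"]) == {"x": "x1", "y": "y1"}  # never a mix of old and new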

What is RAMP?

Click to read more ...

Thursday
Apr 3, 2014

Leslie Lamport to Programmers: You're Doing it Wrong

Famous computer scientist Leslie Lamport is definitely not a worse is better kind of guy. In Computation and State Machines he wants to make the case that to get better programs we need to teach programmers to think better. And programmers will think better when they learn to think in terms of concepts firmly grounded in the language of mathematics.

I was disappointed that there was so much English in the paper. Surely it would have been more convincing had it been written as a mathematical proof. Or would it?

This whole topic has been argued extensively throughout thousands of years of philosophy. Mathematics has always been a strange attractor for those trying to escape a flawed human rationality. In the end, as alluring as the utopia of mathematics is, it lacks a coherent theory of meaning, and programming is not about rearranging ungrounded symbols; it's about manipulating and shaping meaning.

For programmers, I think Ludwig Wittgenstein has the right sense of things: meaning is derived from use within a community, and programs built and maintained by programmers are at bottom a community effort.

Abstract:

For quite a while, I’ve been disturbed by the emphasis on language in computer science. One result of that emphasis is programmers who are C++ experts but can’t write programs that do what they’re supposed to. The typical computer science response is that programmers need to use the right programming/specification/development language instead of/in addition to C++. The typical industrial response is to provide the programmer with better debugging tools, on the theory that we can obtain good programs by putting a monkey at a keyboard and automatically finding the errors in its code.

I believe that the best way to get better programs is to teach programmers how to think better. Thinking is not the ability to manipulate language; it’s the ability to manipulate concepts. Computer science should be about concepts, not languages. But how does one teach concepts without getting distracted by the language in which those concepts are expressed? My answer is to use the same language as every other branch of science and engineering—namely, mathematics. But how should that be done in practice? This note represents a small step towards an answer. It doesn’t discuss how to teach computer science; it simply addresses the preliminary question of what is computation. 
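For flavor, here's one toy rendering of the state-machine view of computation, in Python rather than the mathematics Lamport actually advocates: a computation is just an initial state plus a next-state relation that you iterate until no step is enabled. The encoding is mine, not the paper's.

# A computation as a state machine: an initial state plus a next-state relation.
def init(m, n):
    return (m, n)                         # initial state of the machine

def next_state(state):
    """The next-state relation: returns the successor state, or None when halted."""
    x, y = state
    if x > y:
        return (x - y, y)
    if y > x:
        return (x, y - x)
    return None                           # no step enabled: computation complete

def run(state):
    while True:
        succ = next_state(state)
        if succ is None:
            return state
        state = succ

assert run(init(12, 18)) == (6, 6)        # both components converge to gcd(12, 18)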

Thursday
Mar 20, 2014

Paper: Log-structured Memory for DRAM-based Storage - High Memory Utilization Plus High Performance

Most every programmer who gets sucked into deep performance analysis of long-running processes eventually realizes that memory allocation is the heart of evil at the center of many of their problems. So you replace malloc with something less bad. Or you tune your garbage collector like a fine ukulele. But there's a smarter approach, brought to you by the folks behind RAMCloud, a large-scale, distributed, in-memory key-value database built at Stanford University.

What they've found is that typical memory management approaches don't work well, and that using a log-structured approach yields massive benefits:

Performance measurements of log-structured memory in RAMCloud show that it enables high client throughput at 80-90% memory utilization, even with artificially stressful workloads. In the most stressful workload, a single RAMCloud server can support 270,000-410,000 durable 100-byte writes per second at 90% memory utilization. The two-level approach to cleaning improves performance by up to 6x over a single-level approach at high memory utilization, and reduces disk bandwidth overhead by 7-87x for medium-sized objects (1 to 10 KB). Parallel cleaning effectively hides the cost of cleaning: an active cleaner adds only about 2% to the latency of typical client write requests.
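To give a rough feel for the approach, here's a toy log-structured store in Python: writes only ever append to fixed-size segments, and a cleaner reclaims space by copying live objects forward and dropping whole segments. This is my own simplification; RAMCloud's two-level, parallel cleaning described above is far more sophisticated.

# Toy log-structured store: append-only segments plus a copying cleaner.
SEGMENT_SIZE = 4

class LogStore:
    def __init__(self):
        self.segments = [[]]          # list of segments; each is a list of (key, value)
        self.index = {}               # key -> (segment_idx, offset) of the live version

    def put(self, key, value):
        seg = self.segments[-1]
        if len(seg) == SEGMENT_SIZE:  # segment full: roll over to a new head segment
            seg = []
            self.segments.append(seg)
        seg.append((key, value))
        self.index[key] = (len(self.segments) - 1, len(seg) - 1)

    def get(self, key):
        si, off = self.index[key]
        return self.segments[si][off][1]

    def clean(self):
        """Copy live entries out of old segments, then drop those segments whole."""
        live = [(k, self.get(k)) for k in self.index]
        self.segments = [[]]
        self.index = {}
        for k, v in live:
            self.put(k, v)

store = LogStore()
for i in range(10):
    store.put("k", i)                 # overwrites leave dead entries behind in the log
store.clean()
assert store.get("k") == 9 and len(store.segments) == 1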

And for your edification they've written an excellent paper on their findings. Log-structured Memory for DRAM-based Storage:

Click to read more ...

Wednesday
Mar 12, 2014

Paper: Scalable Eventually Consistent Counters over Unreliable Networks

Counting at scale in a distributed environment is surprisingly hard. And it's a subject we've covered before in various ways: Big Data Counting: How to count a billion distinct objects using only 1.5KB of Memory, How to update video views count effectively?, Numbers Everyone Should Know (sharded counters).

Kellabyte (which is an excellent blog), in Scalable Eventually Consistent Counters, talks about how the Cassandra counter implementation scores well on the scalability and high availability front, but in so doing has an "over and under counting problem in partitioned environments."

Which is often fine. But if you want more accuracy there's the PN-counter, a CRDT (convergent replicated data type) where you "[store] all the changes made to a counter on each node rather than storing and modifying a single value so that you can merge all the values into the proper final value. Of course the trade-off here is additional storage and processing but there are ways to optimize this."
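A PN-counter is small enough to sketch in full. This is a minimal Python version of the general idea, not Cassandra's or the paper's implementation: each node tracks its own increments and decrements, merging takes per-node maxima, and the value is the difference of the sums, so replicas converge without losing or double-counting updates.

# Minimal PN-counter CRDT: per-node increment/decrement tallies, merged by max.
class PNCounter:
    def __init__(self, node_id):
        self.node_id = node_id
        self.incs = {}                # node_id -> total increments seen from that node
        self.decs = {}                # node_id -> total decrements seen from that node

    def increment(self, n=1):
        self.incs[self.node_id] = self.incs.get(self.node_id, 0) + n

    def decrement(self, n=1):
        self.decs[self.node_id] = self.decs.get(self.node_id, 0) + n

    def merge(self, other):
        for node, v in other.incs.items():
            self.incs[node] = max(self.incs.get(node, 0), v)
        for node, v in other.decs.items():
            self.decs[node] = max(self.decs.get(node, 0), v)

    def value(self):
        return sum(self.incs.values()) - sum(self.decs.values())

a, b = PNCounter("a"), PNCounter("b")
a.increment(3); b.increment(2); b.decrement(1)
a.merge(b); b.merge(a)
assert a.value() == b.value() == 4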

And there's a paper you can count on that goes into more details: Scalable Eventually Consistent Counters over Unreliable Networks:

Click to read more ...

Wednesday
Feb 19, 2014

Planetary-Scale Computing Architectures for Electronic Trading and How Algorithms Shape Our World

Algorithms are moving out of the Platonic realm and are becoming dynamic first class players in real life. We've seen corporations become people. Algorithms will likely also follow that path to agency.

Kevin Slavin, in his intriguing TED talk How Algorithms Shape Our World, gives many and varied examples of how algorithms have penetrated real life.

One of his most interesting examples comes from a highly technical paper on relativistic statistical arbitrage, which says that to make money in markets you have to be where the people are (the red dots in the paper's diagram), which in turn means you have to put servers where the blue dots are, many of which are in the ocean.

Mr. Slavin neatly sums this up by saying:

And it's not the money that's so interesting actually. It's what the money motivates, that we're actually terraforming the Earth itself with this kind of algorithmic efficiency. And in that light, you go back and you look at Michael Najjar's photographs, and you realize that they're not metaphor, they're prophecy. They're prophecy for the kind of seismic, terrestrial effects of the math that we're making. And the landscape was always made by this sort of weird, uneasy collaboration between nature and man. But now there's this third co-evolutionary force: algorithms -- the Boston Shuffler, the Carnival. And we will have to understand those as nature, and in a way, they are.

The introduction to the paper spells out why this is so:

Click to read more ...

Wednesday
Feb 12, 2014

Paper: Network Stack Specialization for Performance 

In the scalability-is-specialization department, here is an interesting paper on high-performance networking presented at HotNets '13: Network Stack Specialization for Performance.

The idea is that generalizing a service so it fits in the kernel comes at a high performance cost, so move TCP into user space. The result is a web server with ~3.5x the throughput of Nginx "while experiencing low CPU utilization, linear scaling on multicore systems, and saturating current NIC hardware."

Here's a good description of the paper published on Layer 9:

Click to read more ...

Wednesday
Jan 1, 2014

Paper: Nanocubes for Real-Time Exploration of Spatiotemporal Datasets

How do you turn Big Data into fast, useful, and interesting visualizations? Using R and a technology called Nanocubes. The visualizations are stunning and amazingly reactive, almost as interesting as the technologies behind them.

David Smith wrote a great article explaining the technology and showing a demo by Simon Urbanek of a visualization that uses 32 TB of Twitter data. It runs smoothly and interactively on a single machine with 16 GB of RAM. For more information and demos go to nanocubes.net.

David Smith sums it up nicely:

Despite the massive number of data points and the beauty and complexity of the real-time data visualization, it runs impressively quickly. The underlying data structure is based on Nanocubes, a fast datastructure for in-memory data cubes. The basic idea is that nanocubes aggregate data hierarchically, so that as you zoom in and out of the interactive application, one pixel on the screen is mapped to just one data point, aggregated from the many that sit "behind" that pixel. Learn more about nanocubes, and try out the application yourself (modern browser required) at the link below.
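The hierarchical aggregation idea can be caricatured in a few lines of Python. This toy index only handles counts over a 2D grid, while a real nanocube also nests time and categorical dimensions, but it shows why zooming is cheap: every level's totals are precomputed at insert time.

# Toy hierarchical count index over a 2^DEPTH x 2^DEPTH grid (nanocube-flavored,
# but only counts over space -- a real nanocube also nests time and categories).
from collections import defaultdict

DEPTH = 8                                     # 256 x 256 finest-level grid

counts = defaultdict(int)                     # (level, cell_x, cell_y) -> count

def insert(x, y):
    """Add one point; bump the count of its enclosing cell at every level."""
    for level in range(DEPTH + 1):
        shift = DEPTH - level
        counts[(level, x >> shift, y >> shift)] += 1

def query(level, cell_x, cell_y):
    """Total points inside one cell at the given zoom level -- O(1), precomputed."""
    return counts[(level, cell_x, cell_y)]

insert(3, 5); insert(200, 17); insert(201, 16)
assert query(0, 0, 0) == 3                    # whole world
assert query(1, 1, 0) == 2                    # the quadrant containing the two nearby points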

Abstract from Nanocubes for Real-Time Exploration of Spatiotemporal Datasets:

Consider real-time exploration of large multidimensional spatiotemporal datasets with billions of entries, each defined by a location, a time, and other attributes. Are certain attributes correlated spatially or temporally? Are there trends or outliers in the data? Answering these questions requires aggregation over arbitrary regions of the domain and attributes of the data. Many relational databases implement the well-known data cube aggregation operation, which in a sense precomputes every possible aggregate query over the database. Data cubes are sometimes assumed to take a prohibitively large amount of space, and to consequently require disk storage. In contrast, we show how to construct a data cube that fits in a modern laptop’s main memory, even for billions of entries; we call this data structure a nanocube. We present algorithms to compute and query a nanocube, and show how it can be used to generate well-known visual encodings such as heatmaps, histograms, and parallel coordinate plots. When compared to exact visualizations created by scanning an entire dataset, nanocube plots have bounded screen error across a variety of scales, thanks to a hierarchical structure in space and time. We demonstrate the effectiveness of our technique on a variety of real-world datasets, and present memory, timing, and network bandwidth measurements. We find that the timings for the queries in our examples are dominated by network and user-interaction latencies.

Thursday
Nov 7, 2013

Paper: Tempest: Scalable Time-Critical Web Services Platform

An interesting and different implementation approach: Tempest: Scalable Time-Critical Web Services Platform

Tempest is a new framework for developing time-critical web services. Tempest enables developers to build scalable, fault-tolerant services that can then be automatically replicated and deployed across clusters of computing nodes. The platform automatically adapts to load fluctuations, reacts when components fail, and ensures consistency between replicas by repairing when inconsistencies do occur. Tempest relies on a family of epidemic protocols and on Ricochet, a reliable time critical multicast protocol with probabilistic guarantees.

Tempest is built around a novel storage abstraction called the TempestCollection in which application developers store the state of a service. Our platform handles the replication of this state across clones of the service, persistence, and failure handling. To minimize the need for specialized knowledge on the part of the application developer, the TempestCollection employs interfaces almost identical to those used by the Java Collections standard. Elements can be accessed on an individual basis, but it is also possible to access the full set by iterating over it, just as in a standard Collection. The hope is that we can free developers from the complexities of scalability and fault-tolerance, leaving them to focus on application functionality.

Traditionally, services relying on a transactional database backend offer a strong data consistency model in which every read operation returns the result of the latest update that occurred on a data item. With Tempest we take a different approach by relaxing the model such that services offer sequential consistency [10]: Every replica of the service sees the operations on the same data item in the same order, but the order may be different from the order in which the operations were issued. Later, we will see that this is a non-trivial design decision; Tempest services can sometimes return results that would be erroneous were we using a more standard transactional execution model. For applications where these semantics are adequate, sequential consistency buys us scheduling flexibility that enables much better real-time responsiveness.
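A minimal Python sketch of the sequential consistency idea (my illustration, standing in for Tempest's epidemic and Ricochet machinery, which is far more involved): a sequencer fixes one global order for updates, and every replica applies them in that order, so all replicas agree with each other even though that order may differ from the order in which clients issued the operations.

# Toy sequential consistency: one agreed total order, applied by every replica.
import itertools

class Sequencer:
    """Assigns every update a slot in a single global order."""
    def __init__(self):
        self._counter = itertools.count()
        self.log = []                       # [(seqno, (key, value)), ...]

    def submit(self, op):
        self.log.append((next(self._counter), op))

class Replica:
    def __init__(self):
        self.state = {}
        self.applied = 0

    def sync(self, sequencer):
        # Apply updates strictly in sequence order, regardless of when they arrive.
        for _seqno, (key, value) in sorted(sequencer.log)[self.applied:]:
            self.state[key] = value
            self.applied += 1

seq = Sequencer()
seq.submit(("score", 1))
seq.submit(("score", 2))
r1, r2 = Replica(), Replica()
r1.sync(seq); r2.sync(seq)
assert r1.state == r2.state == {"score": 2}  # every replica sees the same order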