Paper: Propagation Networks: A Flexible and Expressive Substrate for Computation

Alexey Radul, in his fascinating 174-page dissertation Propagation Networks: A Flexible and Expressive Substrate for Computation, offers to help us break free of the tyranny of linear time by organizing computation as a network of autonomous but interconnected machines, each of which is free to run when it pleases, propagating information around the network as proves possible. The consequence of this freedom is that the structure of the aggregate does not impose an order of time. The abstract from his thesis:
In this dissertation I propose a shift in the foundations of computation. Modern programming systems are not expressive enough. The traditional image of a single computer that has global effects on a large memory is too restrictive. The propagation paradigm replaces this with computing by networks of local, independent, stateless machines interconnected with stateful storage cells. In so doing, it offers great flexibility and expressive power, and has therefore been much studied, but has not yet been tamed for general-purpose computation. The novel insight that should finally permit computing with general-purpose propagation is that a cell should not be seen as storing a value, but as accumulating information about a value.

Various forms of the general idea of propagation have been used with great success for various special purposes; perhaps the most immediate example is constraint propagation in constraint satisfaction systems. This success is evidence both that traditional linear computation is not expressive enough, and that propagation is more expressive. These special-purpose systems, however, are all complex and all different, and neither compose well, nor interoperate well, nor generalize well. A foundational layer is missing.
I present in this dissertation the design and implementation of a prototype general-purpose propagation system. I argue that the structure of the prototype follows from the overarching principle of computing by propagation and of storage by accumulating information—there are no important arbitrary decisions. I illustrate on several worked examples how the resulting organization supports arbitrary computation; recovers the expressivity benefits that have been derived from special-purpose propagation systems in a single general-purpose framework, allowing them to compose and interoperate; and offers further expressive power beyond what we have known in the past. I reflect on the new light the propagation perspective sheds on the deep nature of computation.
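To make the abstract's central idea concrete, here is a minimal sketch in Python rather than the Scheme of the thesis, with invented names and none of the real system's scheduler or generic merge machinery: cells accumulate partial information about a value (here, interval bounds that can only narrow), and propagators are stateless machines that re-run whenever a cell they watch gains information.

```python
class Contradiction(Exception):
    """Raised when accumulated pieces of information are inconsistent."""

class Interval:
    """Partial information about a number: lower and upper bounds."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def merge(self, other):
        """Intersect bounds: combining information can only narrow them."""
        lo, hi = max(self.lo, other.lo), min(self.hi, other.hi)
        if lo > hi:
            raise Contradiction((self.lo, self.hi, other.lo, other.hi))
        return Interval(lo, hi)

    def __eq__(self, other):
        return isinstance(other, Interval) and \
            (self.lo, self.hi) == (other.lo, other.hi)

class Cell:
    """Stateful storage that accumulates information about one value."""
    def __init__(self):
        self.content = None      # None means "no information yet"
        self.neighbors = []      # propagators to alert when content changes

    def add_content(self, info):
        new = info if self.content is None else self.content.merge(info)
        if new != self.content:
            self.content = new
            for propagator in self.neighbors:
                propagator()     # rerun order does not affect the result

def adder(a, b, out):
    """A stateless propagator enforcing out = a + b on intervals."""
    def run():
        if a.content is not None and b.content is not None:
            out.add_content(Interval(a.content.lo + b.content.lo,
                                     a.content.hi + b.content.hi))
    for cell in (a, b):
        cell.neighbors.append(run)
    run()

a, b, total = Cell(), Cell(), Cell()
adder(a, b, total)
a.add_content(Interval(1, 2))
b.add_content(Interval(10, 20))
print(total.content.lo, total.content.hi)   # 11 22
a.add_content(Interval(1.5, 3))             # merges with [1, 2] to [1.5, 2]
print(total.content.lo, total.content.hi)   # 11.5 22
```

Note how the second `add_content` on `a` does not overwrite the cell but refines it, and the refinement propagates to `total` without any notion of "the program counter reaching that line" — that is the freedom from linear time the post describes.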
I really like the sound of all this; it seems like a good match for large-scale distributed systems, but I have to admit that in the end I didn't get it. Does anyone know of a more basic primer on this area of study?
Reader Comments (3)
I hate to be "that guy," but from reading a bit here and there of Radul's dissertation, these propagation networks seem very much analogous to Kahn Process Networks ( http://en.wikipedia.org/wiki/Kahn_process_networks ) which were defined in 1974 as a model of parallel distributed computing based on message passing over a directed graph. With respect to a programming language for such a computing architecture, there have been many languages within the category of Flow Based Programming ( http://en.wikipedia.org/wiki/Flow-based_programming ). When looking through the thesis I see no mention of Kahn Process Networks. Am I missing something?
I agree with Marko's take. Radul's dissertation is excellent; computer science dissertations are rarely this readable, and frankly I hope a publisher takes note. In the preface, he alludes to an effort to shun the details of prior work in favor of clarity of explanation, and he does reference some prior flow-based work, but a thorough comparative study is missing.
There is a good pointer to related primers in a stackoverflow post: http://stackoverflow.com/questions/1028250
Another thing missing (as far as I can see in a 10-minute read) is any discussion of how this actually improves software engineering. There has been a raft of constraint/unification/logic programming languages that have all foundered on the inherent difficulty of writing reliable software. All made huge claims and, except in a very few cases, all failed to deliver because of the extraordinary difficulty of comprehending large programs written in these styles.
A great example of this difficulty in writing reliable programs can be found in spreadsheets. It is enormously difficult to validate that a spreadsheet actually does what is intended, and it is very, very common to find huge errors in large spreadsheets. Small UI changes like highlighting precedents help, but the fundamental problem of validation by inspection remains.
Propagation networks are like this, but to an extraordinarily greater degree: we now have to reason not only about a huge number of tiny programs all executing asynchronously, but also about unbounded interaction effects.