End-To-End Arguments in System Design

Citation: J. H. Saltzer, D. P. Reed, and D. D. Clark, “End-to-End Arguments in System Design,” 2nd International Conference on Distributed Computing Systems, Paris, April 1981, pp. 509-512. [PDF]

Summary

The crux of the end-to-end argument is that in most cases the endpoints (e.g., applications) know best what they need, and the underlying system should provide only the minimal set of functions absolutely necessary to support those applications. The system must not force on every application features that may or may not be useful to it, and it should avoid redundancy and cost inefficiency. The authors argue that simplicity is the key to successful system design; low-level mechanisms are justified only as performance enhancements, never as core design. Through several use cases, they show that the flexibility afforded by this minimalist design enables the modular/layered system architectures that have empirically proven to be the key to success for many well-known distributed systems.
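To make the argument concrete, recall the paper's running example of careful file transfer: even if every hop in the path delivers data reliably, the file can still be corrupted at the endpoints (in buffers, on disk), so only an end-to-end check performed by the application confirms success. Here is a minimal Python sketch of that idea; the transfer itself is abstracted as a caller-supplied copy_fn, a hypothetical stand-in for whatever (possibly unreliable) path sits between the endpoints.

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Digest computed at an endpoint, over the file as stored on disk."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def transfer_and_verify(src_path, dst_path, copy_fn):
    """Careful file transfer: no matter how many 'reliable' hops copy_fn
    traverses internally, only comparing endpoint digests confirms it."""
    expected = sha256_of(src_path)        # sending endpoint
    copy_fn(src_path, dst_path)           # the network (and any middle layers)
    if sha256_of(dst_path) != expected:   # receiving endpoint
        raise IOError("end-to-end check failed; the endpoints must retry")
```

For example, transfer_and_verify("a.bin", "b.bin", shutil.copyfile) exercises the check with a local copy standing in for the network. On a mismatch the endpoints simply retry, which is exactly the recovery logic the paper says must live at the application level anyway, making a duplicate in-network guarantee redundant.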

TCP/IP, and arguably the whole network stack, is the biggest example of the end-to-end argument in action: each layer has well-defined functionality, and one layer builds on top of another, growing up to the application layer, where the applications themselves have the final say (a small illustration of this layering follows below). The paper also presents several other examples, from areas such as computer architecture (e.g., RISC) and database systems, that successfully capitalized on this mantra.
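As one small, concrete instance of such layering, consider that TCP delivers an ordered byte stream but says nothing about message boundaries; the endpoint application layers its own framing on top. The sketch below assumes an already-connected socket object and is illustrative only, not anything prescribed by the paper.

```python
import struct

def send_msg(sock, payload: bytes):
    """Application-level framing layered on TCP: the transport guarantees
    ordered bytes, but message boundaries are the endpoint's job, so we
    prepend a 4-byte big-endian length prefix ourselves."""
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def recv_msg(sock) -> bytes:
    """Reassemble one framed message from the byte stream."""
    (length,) = struct.unpack("!I", _recv_exact(sock, 4))
    return _recv_exact(sock, length)

def _recv_exact(sock, n: int) -> bytes:
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed mid-message")
        buf += chunk
    return buf
```

Each layer stays simple: TCP does not know or care about messages, and the application does not reimplement reliable delivery.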

Critique

It is not always clear where such endpoints lie. In a network, for example, OSes can be the endpoints just as well as applications. Throughout the paper, the authors keep suggesting that system designers must make intelligent and informed decisions to identify the endpoints in a system, but they offer no definite answer or rule of thumb. The end-to-end argument can at best be used as a guideline (or warning!) against pushing mechanisms too far down a layered system, but there is no definite way of telling whether you have moved them far enough.

As a result, over the years the Internet has accumulated many functions and mechanisms in its core that can be considered to go against the end-to-end argument. Most of them, e.g., NATs and firewalls, are engineering fixes for particular problems or administrative/management requirements. It is not clear whether this is a flaw of the end-to-end argument itself or the classic case of engineering triumphing over scientific axioms.

In recent years, however, proposals for next-generation architectures to replace the Internet have put immense emphasis on a simplified, flexible core that allows heterogeneous network architectures and services to coexist (in this case, each coexisting architecture can be considered an endpoint of the shared substrate). This brings us back full circle to the end-to-end principle, justifying its worth.

Overall, this oft-cited but seldom-read article provides a great perspective on the design of the Internet as we see it today and on systems development in general. Great read for opening the scorecard!

In Retrospect

A retrospective view on this paper and what has changed in the Internet since then can be found in the following paper.

M. S. Blumenthal and D. D. Clark, “Rethinking the Design of the Internet: The End-to-End Arguments vs. the Brave New World,” ACM Transactions on Internet Technology, 1(1):70-109, ACM Press, August 2001. [PDF]

A single sentence, pulled verbatim from the paper, can sum up the changes:

Of all the changes that are transforming the Internet, the loss of trust may be the most fundamental.

Unfortunately, this is probably true for everything :(
