The Design Philosophy of the DARPA Internet Protocols

Citation: D. D. Clark, “The Design Philosophy of the DARPA Internet Protocols,” ACM SIGCOMM Conference, (August 1988). [ACM]

Summary

This paper summarizes the rationale behind the design choices made for the DARPA-initiated Internet and explains how the Internet, its protocols, and its underlying mechanisms ended up the way they were in 1988. Its main objective is to provide the context and background needed for designing extensions to the Internet. In addition, the author discusses some of the evolutionary changes to the Internet architecture up to that point and the motivations/requirements behind them.

As pointed out in the paper, the primary goal of the DARPA Internet design was to interconnect its wired and radio networks, which generalizes to the problem of interconnecting separately administered network domains through multiplexing of the underlying resources. The DARPA designers chose packet-switched store-and-forward networking both because they understood it well and because the applications and tasks they expected to support would benefit from it. The choice of unifying heterogeneous administrative domains through gateways, instead of trying to create a monolithic mega-network, turned out to be the single biggest factor that allowed the Internet to scale so much in later years. Beyond the basic goal of interconnection, secondary objectives included survivability, heterogeneity of technology and equipment, distributed management, and cost-effectiveness, among others. What stands out in this list is the complete absence of security, as well as the position of accountability/attribution as a low-priority objective. The take-home lesson is that a system is, in many cases, tightly coupled with and dictated by its requirements and premises; a list drawn up with commercial interests in mind could easily have put accountability among the topmost goals. Still, it is not clear why a military network was not concerned about security, privacy, authentication, etc.!

Critique

One thing prominent in this paper is the influence of the end-to-end argument on the design of the Internet: the TCP/IP separation, fate-sharing, layering, and UDP are all direct or indirect results of simplifying the core. Note that the end-to-end argument was not explicitly applied to arrive at these concepts; rather, the way they materialized naturally points to the principle of a simple, flexible core and comparatively complex endpoints, i.e., the end-to-end argument (Sections 5 and 6). The discussions of the evolutionary changes to the TCP acknowledgment scheme, the choice between byte and packet numbering, and the EOL- and PSH-related issues provide retrospective views on the choices made, their implications, and what could have been done better. The part on EOL and PSH is not very clear, though.
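
The byte-vs-packet numbering choice is easiest to see in code. Below is a minimal sketch (the SendBuffer class and its names are hypothetical, not from the paper) of how a byte-numbered sequence space with cumulative acknowledgments lets a sender re-packetize unacknowledged data on retransmission, which is one of the advantages the paper attributes to numbering bytes rather than packets.

```python
# Minimal sketch, assuming a hypothetical SendBuffer abstraction:
# byte-numbered sequence space with cumulative ACKs, as in TCP.
# Because bytes (not packets) are numbered, the sender may cut the
# unacknowledged data into differently sized segments when it retransmits.

class SendBuffer:
    def __init__(self, data: bytes):
        self.data = data          # application byte stream
        self.acked_upto = 0       # cumulative ACK: all bytes below this are delivered

    def on_ack(self, ack_byte: int):
        # A cumulative acknowledgment covers every byte below ack_byte,
        # regardless of how those bytes were originally packetized.
        self.acked_upto = max(self.acked_upto, ack_byte)

    def segments(self, mss: int):
        # Build segments starting at the first unacknowledged byte.
        # On retransmission, mss may differ from the original send,
        # so packet boundaries need not be preserved.
        offset = self.acked_upto
        while offset < len(self.data):
            yield offset, self.data[offset:offset + mss]
            offset += mss


buf = SendBuffer(b"x" * 3000)
first = list(buf.segments(mss=1000))   # three 1000-byte segments
buf.on_ack(1000)                       # first 1000 bytes acknowledged
retx = list(buf.segments(mss=500))     # remaining 2000 bytes re-cut into 500-byte segments
```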

The paper also describes some of the problems, deficiencies, and complexities that arose from the design choices and the inability to meet some of the goals, and eventually brings up the trade-off that is ever-present in any system design. In this case, the trade-off is between simplifying the core and putting so much complexity into the endpoints that performance suffers. The question of where the endpoints really are in the end-to-end argument is fully exposed here, without any definitive answer.

End-To-End Arguments in System Design

Citation: J. H. Saltzer, D. P. Reed, D. D. Clark, “End-to-End Arguments in System Design,” 2nd International Conference on Distributed Computing Systems, Paris, (April 1981), pp. 509-512. [PDF]

Summary

The crux of the end-to-end argument is that in most cases the endpoints (e.g., applications) know best what they need, and the underlying system should provide only the minimal set of attributes/functionalities that are absolutely necessary to support those applications. The system must not force on every application something that may or may not be useful to it, and should avoid redundancy and cost inefficiency. The authors argue that simplicity is the key to successful system design; low-level mechanisms are justified only as performance enhancements, not as part of the core design. They explain, through several use cases, that the flexibility resulting from this minimalist design enables the modular/layered system architectures that have empirically been the key to success for many well-known distributed systems.
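
The paper's canonical illustration is careful file transfer, where per-hop reliability cannot substitute for an end-to-end check. Here is a minimal sketch in that spirit (the function names and the use of SHA-256 are illustrative assumptions, not taken from the paper): the receiving application verifies the whole transfer itself, so any reliability added inside the network is at most a performance optimization.

```python
# Minimal sketch of an end-to-end integrity check, in the spirit of the
# paper's "careful file transfer" example. Names and the choice of SHA-256
# are illustrative assumptions, not taken from the paper.
import hashlib


def send_file(path: str) -> tuple[bytes, str]:
    """Sender reads the file and computes a digest at its endpoint."""
    with open(path, "rb") as f:
        data = f.read()
    return data, hashlib.sha256(data).hexdigest()


def receive_file(data: bytes, expected_digest: str) -> bool:
    """Receiver recomputes the digest over what it actually received.

    Only this end-to-end check can catch corruption introduced anywhere
    along the path (disk, host memory, gateways); per-hop checks inside
    the network are redundant for correctness and justified only as
    performance enhancements.
    """
    return hashlib.sha256(data).hexdigest() == expected_digest
```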

TCP/IP, and possibly the whole network stack, is the biggest example of the end-to-end argument applied: each layer has well-defined functionality, and one layer builds on top of another, growing up to the application layer where the actual applications have the last word. Several other examples from areas such as computer architecture (e.g., RISC) and database systems are also presented in the paper as having successfully capitalized on this mantra.
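
A toy sketch of that layering idea follows (the header formats are invented for brevity and are not real TCP/IP wire formats): each layer only prepends its own header and treats everything above it as opaque payload, which is exactly what keeps the lower layers minimal.

```python
# Toy illustration of layered encapsulation; headers are invented for
# illustration and are not real TCP/IP encodings.

def app_layer(message: str) -> bytes:
    return message.encode("utf-8")

def transport_layer(payload: bytes, src_port: int, dst_port: int) -> bytes:
    header = f"TCP {src_port}->{dst_port} len={len(payload)}|".encode()
    return header + payload      # transport treats the payload as opaque bytes

def network_layer(segment: bytes, src: str, dst: str) -> bytes:
    header = f"IP {src}->{dst}|".encode()
    return header + segment      # network treats the segment as opaque bytes

packet = network_layer(
    transport_layer(app_layer("hello"), src_port=4242, dst_port=80),
    src="10.0.0.1", dst="10.0.0.2",
)
```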

Critique

It is not always clear where the endpoints lie. In a network, for example, operating systems can be the endpoints just as well as applications. Throughout the paper, the authors keep suggesting that system designers must make intelligent and informed decisions to identify the endpoints in a system, but they offer no definite answer or rule of thumb. The end-to-end argument can at best be used as a guideline (or warning!) against pushing mechanisms too far down a layered system, but there is no definite way of telling whether you have moved far enough.

As a result, over the years the Internet has accumulated many functionalities/mechanisms in its core that can be considered to go against the end-to-end argument. Most of them, e.g., NATs and firewalls, are engineering fixes for particular problems or for administrative/management requirements. It is not clear whether this is a flaw of the end-to-end argument itself or a classic case of engineering triumphing over scientific axioms.

However, in recent years, proposals for next-generation architectures to replace the Internet have put immense emphasis on a simplified, flexible core that allows heterogeneous network architectures and services to coexist (in this case, each coexisting architecture can be considered an endpoint of the shared substrate). This brings us full circle back to the end-to-end principle, justifying its worth.

Overall, this oft-cited but seldom-read article provides a great perspective on the design of the Internet as we see it today and on systems development in general. Great read for opening the scorecard!

In Retrospect

A retrospective view of this paper, and of what has changed in the Internet since then, can be found in the following paper.

M. S. Blumenthal and D. D. Clark, “Rethinking the Design of the Internet: The End-to-End Arguments vs. the Brave New World,” ACM Transactions on Internet Technology, 1(1): 70-109, ACM Press, August, 2001. [PDF]

A single sentence, pulled verbatim from the paper, can sum up the changes:

Of all the changes that are transforming the Internet, the loss of trust may be the most fundamental.

Unfortunately, this is probably true for everything :(