Citation: D. D. Clark, “The Design Philosophy of the DARPA Internet Protocols,” ACM SIGCOMM Conference, August 1988. [ACM]
This paper summarizes the rationale behind the design choices made for the Internet (initiated by DARPA) and explains how the Internet, its protocols, and its underlying mechanisms came to be the way they were in 1988. The main objective of the paper is to provide the context and background necessary for designing extensions to the Internet. In addition, the author discusses some of the evolutionary changes to the Internet architecture up to that point and the motivations and requirements behind those changes.
As pointed out in this paper, the primary goal of the DARPA Internet design was to interconnect its existing wired and radio networks, which can be generalized as the problem of interconnecting separately administered network domains through multiplexing of the underlying resources. The DARPA designers chose packet-switched store-and-forward networking because they already understood the technology well and because the applications they intended to support were well served by it. The choice of unifying heterogeneous administrative domains through gateways, instead of trying to build a monolithic mega-network, turned out to be the biggest factor that allowed the Internet to scale so much in later years. Apart from the basic goal of interconnection, secondary objectives included survivability, heterogeneity in terms of technology and equipment, distributed management, and cost-effectiveness, among others. What stands out in this list is the complete absence of security, as well as the low priority given to accountability and attribution. The take-home advice is that a system is, in many cases, highly coupled with and dictated by its requirements and premises; a list drawn up out of commercial interest could easily have placed accountability among the topmost goals. It remains unclear, however, why a military network was not concerned with security, privacy, authentication, etc.!
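The gateway idea discussed above — separately administered networks stitched together by relays that understand only a common datagram format — can be sketched as follows. This is a hypothetical illustration, not code from the paper; the class and field names are my own:

```python
# Hypothetical sketch: gateways interconnect heterogeneous networks by
# store-and-forward relaying of a common datagram format, rather than by
# merging the networks into one administrative domain.
from dataclasses import dataclass

@dataclass
class Datagram:
    dst_net: str    # destination network (administrative domain)
    dst_host: str   # host within that network
    payload: bytes

class Gateway:
    """Store-and-forward relay between separately administered networks."""
    def __init__(self, routes):
        # routes: destination network -> outgoing interface/attached network
        self.routes = routes

    def forward(self, dgram):
        # The gateway inspects only the network-level address; each attached
        # network carries the datagram using its own internal technology.
        return self.routes.get(dgram.dst_net, "drop")

gw = Gateway({"radio-net": "if0", "wired-net": "if1"})
print(gw.forward(Datagram("radio-net", "h3", b"hello")))  # -> if0
print(gw.forward(Datagram("unknown-net", "h9", b"x")))    # -> drop
```

The point of the sketch is that the gateway's job stays minimal: it needs no knowledge of how either attached network works internally, which is exactly what let new network technologies join the Internet without changing the core.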
One thing prominent in this paper is the influence of the end-to-end argument on the design of the Internet: the TCP/IP separation, fate-sharing, the layering mechanism, and UDP are all direct or indirect results of simplifying the core. Notably, the end-to-end argument was not applied explicitly to arrive at these concepts; rather, the way they were materialized naturally points to the principle of a simple, flexible core with comparatively complex endpoints, i.e., the end-to-end argument (Sections 5 and 6). The discussions of the evolutionary changes to the TCP acknowledgement scheme, the choice between byte and packet numbering, and the EOL- and PSH-related issues provide retrospective views on the choices made, their implications, and what could have been done better. The part on EOL and PSH is not very clear, though.
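The byte-versus-packet numbering discussion can be made concrete with a small sketch of TCP-style cumulative acknowledgement over byte-numbered data (hypothetical code, not from the paper; the function name and segment representation are my own). Numbering bytes rather than packets is what lets a sender repacketize data on retransmission:

```python
# Hypothetical sketch: cumulative acknowledgement over byte-numbered data,
# in the style of TCP. The receiver acknowledges the longest contiguous
# prefix of the byte stream it has received.

def cumulative_ack(received_segments, next_expected=0):
    """Return the next byte number expected, given segments as
    (start_byte, length) pairs. Bytes beyond a gap are buffered
    but not acknowledged, mirroring TCP's cumulative ACK."""
    for start, length in sorted(received_segments):
        if start <= next_expected:                    # segment extends the contiguous prefix
            next_expected = max(next_expected, start + length)
        else:                                         # gap in the stream: stop acknowledging
            break
    return next_expected

# Segments [0,100) and [100,150) arrive; [150,200) is lost; [200,230) arrives early.
print(cumulative_ack([(0, 100), (100, 50), (200, 30)]))  # -> 150
```

Because acknowledgements refer to byte positions rather than packet identities, the sender is free to resend the missing range [150, 200) in segments of any size, or to piggyback it with new data — flexibility the paper cites as a reason for the byte-numbering choice.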
The paper also describes some of the problems, deficiencies, and complexities that arose from the design choices and from the failure to meet some of the goals, and eventually brings up the trade-off that is ever-present in any system design: in this case, between simplifying the core and pushing so much complexity into the endpoints that performance suffers. The question of where the endpoints really are in the end-to-end argument is fully exposed here, without any definitive answer.