Scalable Reliable Multicast

S. Floyd, V. Jacobson, S. McCanne, C-G Liu, L. Zhang, “A Reliable Multicast Framework for Light-weight Sessions and Application Level Framing,” ACM SIGCOMM Conference, (August 1995). [PDF]


Unlike unicast, where the requirement of reliable, sequential data delivery is fairly uniform across applications, multicast applications differ widely in their reliability requirements. The authors argue that a one-size-fits-all multicast protocol – one that simultaneously meets the functionality, scalability, and efficiency requirements of all applications – is not possible. Instead, they propose Scalable Reliable Multicast (SRM), a framework that supports diverse multicast requirements by integrating application semantics (following Application Level Framing (ALF)) and session semantics (following Light-Weight Sessions (LWS)).

To ensure reliability, some party must take responsibility for loss detection and recovery. In TCP, the sender is that party. In a multicast setting, however, a sender-based approach leads to ACK implosion from all the receivers, and forces the sender to track the changing set of active receivers and their corresponding windows; RTT estimation algorithms and concepts like the congestion window also require rethinking. The authors argue that receiver-based reliability, where responsibility is distributed among the many receivers instead of being piled onto the single sender, is a better fit for multicast. Another design choice is to name data in application data units (ADUs) rather than with generic, protocol-specific names; ADUs keep application semantics intact and provide a level of indirection that can be exploited for anonymity (a receiver need not know which member supplies a given unit of data).

SRM is discussed using a distributed whiteboard (wb) application as the running example. The whiteboard separates drawing into pages; any member can create a page, and any member can manipulate any page. Each member has a globally unique identifier, as does each page, and these names are persistent, referring to the same entity over time. The authors assume IP multicast datagram delivery and do not distinguish senders from receivers. Also, wb has no need for ordered data delivery, because its operations are idempotent and can be performed in any order.

Whenever wb generates new data, it is multicast to the group. Each member individually keeps track of its own losses (detected as gaps in sequence numbers) and retransmissions. Members also send low-rate, periodic session messages announcing the highest sequence number they have received; the same messages carry timestamps used to estimate the distance between members. When a receiver detects missing data, it waits a random amount of time (set by a request timer) before sending a repair request; the wait is a function of its distance from the source. Repair requests and retransmissions are multicast to the whole group, which suppresses duplicate requests from other receivers that missed the same data. If a host hears a request for the same data before its own request timer fires, it performs an exponential backoff and resets the timer. Any host holding the requested data can reply, but it too waits a random amount of time (set by a repair timer) based on its distance from the requester; as before, this prevents response implosion. If a host hears the repair before its repair timer fires, it cancels the timer.
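The timer scheme above can be sketched in a few lines. The parameter names C1, C2, D1, D2 follow the paper, but the values and helper functions here are illustrative assumptions, not the paper's defaults:

```python
import random

C1, C2 = 2.0, 2.0  # request-timer parameters (illustrative values)
D1, D2 = 1.0, 1.0  # repair-timer parameters (illustrative values)

def request_delay(dist_to_source, backoffs=0):
    """Draw the wait before multicasting a repair request: uniform in
    [2^i * C1 * d, 2^i * (C1 + C2) * d], where d is the estimated distance
    to the source and i counts prior suppressions (hearing someone else's
    request for the same data doubles the interval and resets the timer)."""
    lo = 2 ** backoffs * C1 * dist_to_source
    hi = 2 ** backoffs * (C1 + C2) * dist_to_source
    return random.uniform(lo, hi)

def repair_delay(dist_to_requester):
    """Draw the wait before multicasting a repair: uniform in
    [D1 * d, (D1 + D2) * d], where d is the estimated distance to the
    requester. Hearing someone else's repair first cancels the timer."""
    lo = D1 * dist_to_requester
    hi = (D1 + D2) * dist_to_requester
    return random.uniform(lo, hi)
```

Weighting both timers by estimated distance is what makes nearby hosts likely to answer first, so their multicast request or repair suppresses everyone farther away.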

The authors analyze the framework on three example topologies: chains, which illustrate deterministic suppression; stars, which illustrate probabilistic suppression; and bounded-degree trees, which combine both suppression mechanisms. The analysis yields a set of parameters and observations, best described in the paper itself; in short, by tuning those parameters the framework can be adapted to different application requirements.
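To build intuition for probabilistic suppression in the star case, here is a small Monte Carlo sketch; the delay model and parameter values are my assumptions for illustration, not the paper's analysis:

```python
import random

def expected_requests(n, c1, c2, d=1.0, trials=2000, seed=1):
    """Estimate how many repair requests are actually sent when n receivers,
    each pairwise delay d apart (a star), all detect the same loss at once.
    Each draws a request timer uniformly from [c1*d, (c1+c2)*d]; a receiver
    is suppressed iff the earliest request (sent at the minimum timer,
    arriving d later) reaches it before its own timer fires."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        timers = [rng.uniform(c1 * d, (c1 + c2) * d) for _ in range(n)]
        first = min(timers)
        # every receiver whose timer fires before first + d sends anyway
        total += sum(1 for t in timers if t < first + d)
    return total / trials
```

Running this with a wider randomization interval (larger c2) yields fewer duplicate requests, at the cost of a longer expected wait, which is exactly the delay-versus-duplicates trade-off the paper's parameter tuning navigates.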

Finally, SRM is extended to adaptively adjust the random timer parameters based on the past behavior of the request/repair algorithms. Simulations show that the adaptive algorithms quickly reduce the average number of duplicate repairs with little penalty in additional delay, and that they are effective in controlling the number of duplicates over a range of scenarios.
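The adaptive extension can be caricatured as a feedback loop over the timer parameters. This is a loose sketch only: the paper's actual update rules, measured statistics, and constants differ, and the thresholds below are invented for illustration:

```python
def adapt_request_params(c1, c2, ave_dups, ave_delay,
                         dup_thresh=1.0, delay_thresh=1.5):
    """Hypothetical adaptation step: widen the request-timer interval when
    recent recovery rounds produced too many duplicate requests, and
    tighten it when duplicates were rare but recovery delay was high."""
    if ave_dups > dup_thresh:
        c1 += 0.1           # spread timers out: stronger suppression
        c2 += 0.5
    elif ave_delay > delay_thresh and ave_dups < dup_thresh / 4:
        c1 = max(0.5, c1 - 0.1)   # pull timers in: faster recovery
        c2 = max(0.5, c2 - 0.5)
    return c1, c2
```

The point of the design is that each member can run such a loop locally, using only what it observes on the multicast group, so no global coordination is needed to tune the timers.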


The ideas presented in the paper are simple but intuitively effective. The best part is the analysis of the framework over three different example topologies. My biggest complaint about this paper, as with several others from the same period, is the complete absence of axes and legends in the graphs, which makes them genuinely hard to read. Another observation is the extensive reliance on simulation.

There seem to be many similarities between SRM and opportunistic routing protocols in wireless networks. I wonder whether, and which, aspects of opportunistic protocols could be applied in a multicast setting like SRM's, or vice versa.
