Distributed in-memory datasets

Matei Zaharia et al., “Resilient Distributed Datasets: A Fault-Tolerant Abstraction for In-Memory Cluster Computing,” UC Berkeley Technical Report UCB/EECS-2011-82, 2011. [PDF]

Russell Power, Jinyang Li, “Piccolo: Building Fast, Distributed Programs with Partitioned Tables,” OSDI, 2010. [PDF]

Summary

MapReduce and similar frameworks, while widely applicable, are limited to directed acyclic data flow models, do not expose global state, and are generally slow due to the lack of support for in-memory computation. MPI, while extremely powerful, is hard for non-experts to use. An ideal solution would be a compromise between the two approaches. Spark and Piccolo try to approximate that ideal within the MapReduce-to-MPI spectrum using in-memory data abstractions.

Piccolo

Piccolo provides a distributed key-value store-like abstraction, in which application tasks can read from and write to shared storage. Users write partition functions to divide the data across multiple machines, control functions to decide the workflow, kernel functions to perform distributed operations on mutable state, and conflict resolution functions to resolve write-write conflicts. Piccolo uses the Chandy-Lamport snapshot algorithm for periodic checkpointing and, when needed, rolls back all tasks of a failed job to the last checkpoint.
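
To make the model concrete, here is a minimal sketch in Scala of what a Piccolo-style program might look like. The table API (DistTable, get/update, the sum accumulator) is hypothetical shorthand for illustration, not Piccolo's actual C++/Python interface; the structure loosely mimics the paper's PageRank example, where kernels write rank contributions into a shared table and concurrent writes to the same key are merged by summation.

    // Hypothetical Piccolo-style API in Scala (illustrative only).
    // A distributed, mutable key-value table partitioned across machines.
    trait DistTable[K, V] {
      def get(key: K): V
      def update(key: K, value: V): Unit  // merged via the accumulator below
    }

    object PageRankSketch {
      // Conflict resolution: concurrent writes to the same key are summed.
      val sum: (Double, Double) => Double = _ + _

      // Kernel function: runs on each partition against mutable shared state.
      def kernel(curr: DistTable[Int, Double],
                 next: DistTable[Int, Double],
                 links: Map[Int, Seq[Int]]): Unit =
        for ((page, outLinks) <- links; dest <- outLinks)
          next.update(dest, curr.get(page) / outLinks.size)  // fine-grained write

      // Control function: decides the workflow (iterate, barrier, checkpoint).
      def control(iters: Int, runKernels: () => Unit): Unit =
        (1 to iters).foreach { _ => runKernels() /* barrier + periodic snapshot */ }
    }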

Spark

Spark is a distributed programming model built on a distributed in-memory data abstraction called Resilient Distributed Datasets (RDDs). RDDs are immutable, support only coarse-grained transformations, and record the sequence of transformations applied to them (their lineage), which can be replayed to reconstruct lost RDD partitions. As a result, checkpointing requirements and overheads in Spark are low.
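
As a concrete illustration, here is a minimal Spark sketch in Scala using the standard RDD API (the input path and application name are placeholders). Each transformation extends the lineage of the resulting RDD; if a cached partition of errors is lost, Spark recomputes it by replaying filter and map over the corresponding input partition instead of restoring from a checkpoint.

    import org.apache.spark.{SparkConf, SparkContext}

    object LineageSketch {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("lineage-sketch"))

        // Each step is a coarse-grained transformation recorded in the lineage.
        val lines  = sc.textFile("hdfs://.../logs")        // placeholder path
        val errors = lines.filter(_.contains("ERROR"))     // lineage: textFile -> filter
                          .map(_.split("\t")(0))           // lineage: ... -> map
                          .cache()                         // keep in memory

        // Actions trigger evaluation; lost partitions are rebuilt from lineage.
        println(errors.count())
        sc.stop()
      }
    }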

Spark vs Piccolo

There are two key differences between Spark and Piccolo.

  1. RDDs support only coarse-grained writes (transformations), as opposed to the finer-grained writes supported by Piccolo's distributed tables (the two are contrasted in the sketch after this list). This allows lineage information to be stored efficiently, which reduces checkpointing overhead and enables fast fault recovery. However, it makes Spark unsuitable for applications that depend on fine-grained updates.
  2. RDDs are immutable, which enables straggler mitigation via speculative execution in Spark: a backup copy of a slow task can be launched safely, because re-running it on immutable inputs cannot produce conflicting writes.
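
A minimal sketch of the coarse- vs fine-grained contrast, in Scala. The coarse side uses Spark's actual RDD API; the fine side uses a plain mutable map as a stand-in for a hypothetical Piccolo-style table.

    import org.apache.spark.rdd.RDD
    import scala.collection.mutable

    object GranularityContrast {
      // Coarse-grained (Spark): one function over the whole dataset yields a
      // new immutable RDD; lineage records just "map(f)", not per-element edits.
      def coarse(rdd: RDD[Int]): RDD[Int] = rdd.map(_ + 1)

      // Fine-grained (hypothetical Piccolo-style table): a single key of a
      // mutable shared table is updated in place; tracking this via lineage
      // would require logging every individual write.
      def fine(table: mutable.Map[String, Int]): Unit =
        table("key") = table.getOrElse("key", 0) + 1  // stand-in for table.update
    }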

Comments

Piccolo is closer to MPI, while Spark is closer to MapReduce on the MapReduce-to-MPI spectrum. The key tradeoff in both cases, however, is between a framework's usability and its applicability/power (framework complexity follows power). Both frameworks are much faster than Hadoop (though Hadoop is not the best implementation of MapReduce), and a large fraction of that speedup comes from keeping data in memory. Maybe I am biased as a member of the Spark project, but Spark should be good enough for most applications unless they absolutely require fine-grained updates.
