Very large data sets often have a flat but regular structure and span multiple disks and
machines. Examples include telephone call records, network logs, and web document repositories.
These large data sets are not amenable to study using traditional database techniques, if
only because they can be too large to fit in a single relational database. On the other hand, many
of the analyses done on them can be expressed using simple, easily distributed computations:
filtering, aggregation, extraction of statistics, and so on.
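To make that kind of computation concrete, the following is a minimal sketch in Python of a filter-and-aggregate analysis of the sort such data sets invite: keep only successful requests from a web log and count them per URL. The record layout and field values here are illustrative assumptions, not drawn from any particular system; in practice the records would be read from files spread across many disks and machines.

    from collections import Counter

    # Hypothetical log records: (url, status_code, bytes_sent).
    records = [
        ("/index.html", 200, 5120),
        ("/index.html", 404, 310),
        ("/about.html", 200, 2048),
        ("/index.html", 200, 5120),
    ]

    # Filtering: keep only successful requests.
    ok = (r for r in records if r[1] == 200)

    # Aggregation: count successful requests per URL.
    counts = Counter(url for url, _, _ in ok)

    print(counts)  # Counter({'/index.html': 2, '/about.html': 1})

Because each record passes through the filter independently and the per-URL counts can be computed on separate partitions and then summed, analyses of this shape distribute easily across many machines.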
There is a growing need for ad-hoc analysis of extremely large data sets, especially at internet companies where innovation critically depends on being able to analyze terabytes of data collected every day. Parallel database products, e.g., Teradata, offer a solution, but are usually prohibitively expensive at this scale. Besides, many of the people who analyze this data are entrenched procedural programmers, who find the declarative style of SQL to be unnatural. The success of the more procedural map-reduce programming model, and