We demonstrate a powerful and easy-to-use tool called Dedoop (Deduplication with Hadoop) for MapReduce-based entity resolution (ER) of large datasets. Dedoop supports a browser-based specification of complex ER workflows that include blocking and matching steps as well as the optional use of machine learning for the automatic generation of match classifiers. Specified workflows are automatically translated into MapReduce jobs for parallel execution on different Hadoop clusters. To achieve high performance, Dedoop supports several advanced load balancing strategies.
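As a concrete illustration of such a blocking-plus-matching workflow, the sketch below simulates one in plain Python rather than on Hadoop; the 3-letter prefix blocking key, the token-Jaccard matcher, and the 0.5 threshold are placeholder choices of ours, not Dedoop's actual defaults.

```python
from collections import defaultdict
from itertools import combinations

def blocking_map(record):
    # Blocking step: emit a blocking key per record (here, a hypothetical
    # 3-letter name prefix) so only records sharing a key are compared.
    yield record["name"][:3].lower(), record

def matching_reduce(key, records, threshold=0.5):
    # Matching step: compare all record pairs within one block.
    for a, b in combinations(records, 2):
        ta = set(a["name"].lower().split())
        tb = set(b["name"].lower().split())
        sim = len(ta & tb) / len(ta | tb) if ta | tb else 0.0
        if sim >= threshold:
            yield a["id"], b["id"], sim

def run_workflow(records):
    # Stand-in for the shuffle phase: group map output by blocking key.
    blocks = defaultdict(list)
    for rec in records:
        for key, value in blocking_map(rec):
            blocks[key].append(value)
    for key, recs in blocks.items():
        yield from matching_reduce(key, recs)

pairs = list(run_workflow([
    {"id": 1, "name": "Apache Hadoop"},
    {"id": 2, "name": "apache hadoop framework"},
    {"id": 3, "name": "Dedoop"},
]))  # -> [(1, 2, 0.666...)]
```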
Fuzzy/similarity joins have been widely studied in the research community and extensively used in real-world applications. This paper proposes and evaluates several algorithms for finding all pairs of elements from an input set that meet a similarity threshold. The computation model is a single MapReduce job. Because we allow only one MapReduce round, the Reduce function must be designed so that a given output pair is produced by only one task; for many of the algorithms, satisfying this condition is one of the biggest challenges.
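The single-producer condition can be met with a standard token-grouping trick, sketched below (an illustration of the condition, not necessarily one of the paper's algorithms): each element is replicated to one group per token it contains, and the reduce task for token t emits a pair only if t is the smallest token the pair has in common, so exactly one task produces each output pair.

```python
from collections import defaultdict
from itertools import combinations

def map_element(elem_id, tokens):
    # Replicate each element to one group per token it contains.
    for token in tokens:
        yield token, (elem_id, frozenset(tokens))

def reduce_group(token, elements, threshold=0.6):
    for (id_a, ta), (id_b, tb) in combinations(elements, 2):
        common = ta & tb                 # non-empty: both contain `token`
        # Single-producer rule: only the reduce task for the pair's
        # smallest common token may emit the pair.
        if min(common) != token:
            continue
        sim = len(common) / len(ta | tb)  # Jaccard similarity
        if sim >= threshold:
            yield id_a, id_b, sim

def similarity_join(elements):
    # Stand-in for the shuffle phase: group map output by token.
    groups = defaultdict(list)
    for elem_id, tokens in elements.items():
        for key, value in map_element(elem_id, tokens):
            groups[key].append(value)
    for token, elems in groups.items():
        yield from reduce_group(token, elems)
```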
We present an automatic skew mitigation approach for user-defined MapReduce programs, and SkewTune, a system that implements this approach as a drop-in replacement for an existing MapReduce implementation. SkewTune faces three key challenges: (a) it must require no extra input from the user yet work for all MapReduce applications, (b) it must be completely transparent, and (c) it must impose minimal overhead when there is no skew.
The effectiveness and scalability of MapReduce-based implementations of complex data-intensive tasks depend on an even redistribution of data between map and reduce tasks. In the presence of skewed data, sophisticated redistribution approaches thus become necessary to achieve load balancing among all reduce tasks to be executed in parallel. For the complex problem of entity resolution, we propose and evaluate two approaches for such skew handling and load balancing.
This paper describes a workflow manager developed and deployed at Yahoo called Nova, which pushes continually-arriving data through graphs of Pig programs executing on Hadoop clusters. (Pig is a structured dataflow language and runtime for the Hadoop map-reduce system.) Nova is like data stream managers in its support for stateful incremental processing, but unlike them in that it deals with data in large batches using disk-based processing. Batched incremental processing is a good fit for a large fraction of Yahoo's data processing use-cases, which deal with continually-arriving data.
Implementations of map-reduce are being used to perform many operations on very large data. We examine strategies for joining several relations in the map-reduce environment. Our new approach begins by identifying the “map-key,” the set of attributes that identify the Reduce process to which a Map process must send a particular tuple. Each attribute of the map-key gets a “share,” which is the number of buckets into which its values are hashed, to form a component of the identifier of a Reduce process. Relations have their tuples replicated in limited fashion, with the degree of replication depending on the shares for those map-key attributes that are missing from their schema.
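As an illustration of the share idea, here is a toy sketch for the chain join R(A,B) ⋈ S(B,C) ⋈ T(C,D): the map-key is (B, C), B and C each get a share, and a relation missing a map-key attribute replicates its tuples across all buckets of that attribute. The share values below are fixed by hand; the paper's contribution is choosing them optimally subject to a communication bound.

```python
B_SHARE, C_SHARE = 4, 4   # hypothetical shares; the paper optimizes these

def map_tuple(relation, tup):
    if relation == "R":                      # R(a, b): B present, C missing
        a, b = tup
        for jc in range(C_SHARE):            # replicate across all C buckets
            yield (hash(b) % B_SHARE, jc), ("R", tup)
    elif relation == "S":                    # S(b, c): both map-key attributes present
        b, c = tup
        yield (hash(b) % B_SHARE, hash(c) % C_SHARE), ("S", tup)
    else:                                    # T(c, d): C present, B missing
        c, d = tup
        for ib in range(B_SHARE):            # replicate across all B buckets
            yield (ib, hash(c) % C_SHARE), ("T", tup)

def reduce_join(reducer_id, tagged_tuples):
    # Each Reduce process joins the R, S, and T fragments it received.
    rs = [t for tag, t in tagged_tuples if tag == "R"]
    ss = [t for tag, t in tagged_tuples if tag == "S"]
    ts = [t for tag, t in tagged_tuples if tag == "T"]
    for (a, b) in rs:
        for (b2, c) in ss:
            if b != b2:
                continue
            for (c2, d) in ts:
                if c == c2:
                    yield a, b, c, d
```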
Joins are essential for many data analysis tasks, but are not supported directly by the MapReduce paradigm. While there has been progress on equi-joins, the implementation of join algorithms in MapReduce in general is not sufficiently understood. We study the problem of how to map arbitrary join conditions to Map and Reduce functions, i.e., to a parallel infrastructure that controls data flow based on key-equality only. Our proposed join model simplifies the creation of and reasoning about joins in MapReduce. Using this model, we derive a surprisingly simple randomized algorithm, called 1-Bucket-Theta, for computing arbitrary theta-joins in a single MapReduce job.
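A simplified reconstruction of the 1-Bucket-Theta idea follows (a fixed P x Q grid stands in for the paper's optimized region shapes): the |R| x |S| join matrix is tiled into regions, one per reduce task; each R-tuple draws a random row band and is replicated across that row of regions, and each S-tuple a random column band, so every (r, s) pair meets in exactly one region.

```python
import random
from collections import defaultdict
from itertools import product

P, Q = 3, 3   # grid of P x Q reducer regions (hypothetical sizes)

def map_r(tup):
    i = random.randrange(P)          # random row band in the join matrix
    for j in range(Q):               # replicate across that row of regions
        yield (i, j), ("R", tup)

def map_s(tup):
    j = random.randrange(Q)          # random column band
    for i in range(P):               # replicate across that column of regions
        yield (i, j), ("S", tup)

def reduce_region(region, tagged, theta):
    # Every (r, s) pair meets in exactly one region, so each output
    # pair is produced by exactly one reduce task.
    rs = [t for tag, t in tagged if tag == "R"]
    ss = [t for tag, t in tagged if tag == "S"]
    for r, s in product(rs, ss):
        if theta(r, s):
            yield r, s

def theta_join(R, S, theta):
    regions = defaultdict(list)
    for tup in R:
        for key, val in map_r(tup):
            regions[key].append(val)
    for tup in S:
        for key, val in map_s(tup):
            regions[key].append(val)
    for region, tagged in regions.items():
        yield from reduce_region(region, tagged, theta)

# Example: a band join, |r - s| <= 2.
out = list(theta_join(range(10), range(10), lambda r, s: abs(r - s) <= 2))
```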
Entity resolution is a crucial step for data quality and data integration. Learning-based approaches show high effectiveness at the expense of poor efficiency. To reduce the typically high execution times, we investigate how learning-based entity resolution can be realized in a cloud infrastructure using MapReduce. We propose and evaluate two efficient MapReduce-based strategies for pair-wise similarity computation and classifier application on the Cartesian product of two input sources. Our evaluation is based on real-world datasets and shows the high efficiency and effectiveness of the proposed strategies.
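One natural way to parallelize pair-wise comparison over a Cartesian product is sketched below as an illustration (the partition counts and the classifier are placeholders of ours; the paper's two concrete strategies may differ): partition source A into M fragments and source B into N fragments, replicate each record across one row or column of the M x N grid, and let reduce task (i, j) classify all pairs in A_i x B_j.

```python
from itertools import product

M, N = 4, 4   # hypothetical partition counts for the two sources

def map_a(rec):
    i = hash(rec["id"]) % M
    for j in range(N):               # replicate across row i of the grid
        yield (i, j), ("A", rec)

def map_b(rec):
    j = hash(rec["id"]) % N
    for i in range(M):               # replicate across column j of the grid
        yield (i, j), ("B", rec)

def reduce_cell(cell, tagged, classify):
    # Reduce task (i, j) handles exactly the sub-product A_i x B_j,
    # so the full Cartesian product is covered without duplicates.
    a_side = [r for tag, r in tagged if tag == "A"]
    b_side = [r for tag, r in tagged if tag == "B"]
    for a, b in product(a_side, b_side):
        if classify(a, b):           # learned match classifier, assumed given
            yield a["id"], b["id"]
```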
The effectiveness and scalability of MapReduce-based implementations of complex data-intensive tasks depend on an even redistribution of data between map and reduce tasks. In the presence of skewed data, sophisticated redistribution approaches thus become necessary to achieve load balancing among all reduce tasks to be executed in parallel. For the complex problem of entity resolution with blocking, we propose BlockSplit, a load balancing approach that supports blocking techniques to reduce the search space of entity resolution.
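The following toy planner conveys the flavor of such block-aware load balancing; it is our simplification, since BlockSplit itself gathers a block distribution matrix in a preprocessing MapReduce job, and the fixed split factor and greedy assignment here are assumptions. Blocks whose pair count exceeds the average reducer workload are split into sub-blocks, every sub-block and sub-block pair becomes a match task, and tasks are assigned largest-first to the least-loaded reduce task.

```python
def plan_block_split(block_sizes, num_reducers, split_factor=2):
    total = sum(n * (n - 1) // 2 for n in block_sizes.values())
    avg_load = total / num_reducers
    tasks = []                           # ((block, sub_pair), #comparisons)
    for blk, n in block_sizes.items():
        pairs = n * (n - 1) // 2
        if pairs <= avg_load:
            tasks.append(((blk, None), pairs))
        else:
            # Split the oversized block into near-equal sub-blocks ...
            sub = [n // split_factor + (i < n % split_factor)
                   for i in range(split_factor)]
            # ... with one task per sub-block and per sub-block pair.
            for i, ni in enumerate(sub):
                tasks.append(((blk, (i, i)), ni * (ni - 1) // 2))
            for i in range(split_factor):
                for j in range(i + 1, split_factor):
                    tasks.append(((blk, (i, j)), sub[i] * sub[j]))
    loads = [0] * num_reducers
    assignment = {}
    for task, cost in sorted(tasks, key=lambda t: -t[1]):
        r = loads.index(min(loads))      # largest task to lightest reducer
        assignment[task] = r
        loads[r] += cost
    return assignment, loads

# Example: one dominant block, three small ones, four reduce tasks.
assignment, loads = plan_block_split(
    {"b1": 1000, "b2": 40, "b3": 30, "b4": 20},
    num_reducers=4, split_factor=4)
```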
Cloud infrastructures enable the efficient parallel execution of data-intensive tasks such as entity resolution on large datasets. We investigate challenges and possible solutions of using the MapReduce programming model for parallel entity resolution using Sorted Neighborhood blocking (SN). We propose and evaluate two efficient MapReduce-based implementations for single- and multi-pass SN that either use multiple MapReduce jobs or apply a tailored data replication. We also propose an automatic data partitioning approach for multi-pass SN to achieve load balancing. Our evaluation on real-world datasets shows the efficiency of the proposed approaches.
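A minimal sketch of the data-replication variant for a single SN pass follows; the window size and the replication scheme are our assumptions for illustration. Records are range-partitioned by sorting key, the last w-1 records of each partition are prepended to the next one, and each reduce task slides a window of size w, emitting a pair only at the later record's home partition so boundary pairs appear exactly once.

```python
def replicate_boundaries(partitions, window):
    # Prepend the last window-1 records of partition p to partition p+1
    # so pairs spanning a partition boundary are not missed.
    # Assumes window >= 2 and each partition holds >= window-1 records.
    extended, replicated = [], []
    for p, part in enumerate(partitions):
        carry = partitions[p - 1][-(window - 1):] if p > 0 else []
        extended.append(carry + list(part))
        replicated.append(len(carry))
    return extended, replicated

def sn_reduce(partition, window, num_replicated):
    # Slide the window; replicated records serve only as predecessors,
    # so every pair is emitted by exactly one reduce task.
    for i in range(num_replicated, len(partition)):
        for prev in partition[max(0, i - window + 1):i]:
            yield prev, partition[i]

def sorted_neighborhood(records, key, num_parts, window):
    ordered = sorted(records, key=key)
    step = -(-len(ordered) // num_parts)          # ceiling division
    parts = [ordered[k:k + step] for k in range(0, len(ordered), step)]
    extended, replicated = replicate_boundaries(parts, window)
    for part, rep in zip(extended, replicated):
        yield from sn_reduce(part, window, rep)
```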