The effectiveness and scalability of MapReduce-based implementations of complex data-intensive tasks depend on an even redistribution of data between map and reduce tasks. In the presence of skewed data, sophisticated redistribution approaches thus become necessary to achieve load balancing among all reduce tasks to be executed in parallel. For the complex problem of entity resolution, we propose and evaluate two approaches for such skew handling and load balancing.
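To make the skew problem concrete, the following small sketch (illustrative only, not taken from the evaluated approaches) shows how a skewed distribution of block sizes translates into a quadratic imbalance of pair comparisons when each block is processed by a single reduce task; the block keys and sizes are hypothetical.

```python
# Illustrative sketch (not from the paper): with blocking, a reduce task that
# handles one whole block must compare all record pairs inside that block, so
# a skewed block size distribution yields a quadratic imbalance of comparisons.

def pairs_per_block(block_sizes):
    """Pair-wise comparisons a naive one-block-per-reduce-task scheme assigns."""
    return {key: n * (n - 1) // 2 for key, n in block_sizes.items()}

if __name__ == "__main__":
    # Hypothetical block sizes produced by a skewed blocking key.
    sizes = {"smith": 10_000, "lee": 2_000, "garcia": 500, "rare": 20}
    work = pairs_per_block(sizes)
    total = sum(work.values())
    for key, w in sorted(work.items(), key=lambda kv: -kv[1]):
        print(f"block {key!r}: {w:>12,} comparisons ({100 * w / total:.1f}% of total)")
```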
Entity resolution is a crucial step for data quality and data integration. Learning-based approaches show high effectiveness at the expense of poor efficiency. To reduce the typically high execution times, we investigate how learning-based entity resolution can be realized in a cloud infrastructure using MapReduce. We propose and evaluate two efficient MapReduce-based strategies for pair-wise similarity computation and classifier application on the Cartesian product of two input sources. Our evaluation is based on real-world datasets and shows the high efficiency and effectiveness of the proposed approaches.
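As a rough illustration of pair-wise similarity computation and classifier application over the Cartesian product of two sources, the following local sketch evaluates every cross-source pair; the trigram-based Jaccard similarity, the threshold "classifier", and all record values are illustrative assumptions and do not reproduce the paper's two MapReduce strategies.

```python
# Minimal local sketch of what one reduce task conceptually does for a slice of
# the Cartesian product R x S: compute a similarity for every pair and apply a
# (here: trivially thresholded) classifier. Names and values are illustrative.

from itertools import product

def trigrams(s):
    s = s.lower()
    return {s[i:i + 3] for i in range(max(len(s) - 2, 1))}

def jaccard(a, b):
    ta, tb = trigrams(a), trigrams(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def classify(sim, threshold=0.5):
    # Stand-in for a learned classifier: a single-feature threshold decision.
    return sim >= threshold

def match_partition(r_records, s_records):
    for (rid, r), (sid, s) in product(r_records.items(), s_records.items()):
        sim = jaccard(r, s)
        if classify(sim):
            yield rid, sid, round(sim, 2)

if __name__ == "__main__":
    R = {"r1": "John Smith", "r2": "Mary Jones"}
    S = {"s1": "Jon Smith", "s2": "M. Johnson"}
    print(list(match_partition(R, S)))
```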
For the complex problem of entity resolution with blocking, we propose BlockSplit, a load balancing approach that supports blocking techniques to reduce the search space of entity resolution.
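The following sketch conveys the block-splitting idea in the spirit of BlockSplit: oversized blocks are cut into sub-blocks, and every sub-block pair (including each sub-block with itself) becomes its own match task, which bounds the comparisons per reduce task. The key layout, task enumeration, and sizes below are illustrative and not the paper's exact encoding.

```python
# Hedged sketch of block splitting for load balancing: a large block is split
# into sub-blocks and each sub-block pair is treated as a separate match task
# that could be routed to its own reduce task. Not the published BlockSplit
# key encoding; all names and sizes are illustrative.

from itertools import combinations_with_replacement

def split_block(records, max_size):
    """Cut one block's record list into chunks of at most max_size records."""
    return [records[i:i + max_size] for i in range(0, len(records), max_size)]

def match_tasks(block_key, records, max_size):
    subs = split_block(records, max_size)
    for i, j in combinations_with_replacement(range(len(subs)), 2):
        # In a real job, each (block_key, i, j) task key would address one reduce task.
        yield (block_key, i, j), (subs[i], subs[j])

if __name__ == "__main__":
    big_block = [f"rec{i}" for i in range(10)]
    for task_key, (left, right) in match_tasks("smith", big_block, max_size=4):
        print(task_key, len(left), "x", len(right), "records")
```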
Cloud infrastructures enable the efficient parallel execution of data-intensive tasks such as entity resolution on large datasets. We investigate challenges and possible solutions of using the MapReduce programming model for parallel entity resolution using Sorted Neighborhood blocking (SN). We propose and evaluate two efficient MapReduce-based implementations for single- and multi-pass SN that either use multiple MapReduce jobs or apply a tailored data replication. We also propose an automatic data partitioning approach for multi-pass SN to achieve load balancing.
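For readers unfamiliar with SN, the following minimal sequential sketch shows the core idea the MapReduce variants build on: records are sorted by a blocking key and only records within a sliding window of size w are compared; the key function and window size are illustrative. One natural realization of the tailored data replication mentioned above would, for example, copy the last w-1 records of each partition to the successor partition so that windows spanning partition boundaries are not lost.

```python
# Minimal single-pass Sorted Neighborhood sketch (sequential, not the
# MapReduce variants of the paper): sort by a blocking key and pair only
# records within a sliding window of size w. Key and w are illustrative.

def sorted_neighborhood_pairs(records, key, w):
    ordered = sorted(records, key=key)
    for i in range(len(ordered)):
        # Pair the i-th record with its at most w-1 successors in sort order.
        for j in range(i + 1, min(i + w, len(ordered))):
            yield ordered[i], ordered[j]

if __name__ == "__main__":
    people = ["Smith, John", "Smyth, Jon", "Jones, Mary", "Johnson, M."]
    # Toy blocking key: first three letters of the surname, lower-cased.
    blocking_key = lambda name: name.split(",")[0][:3].lower()
    for a, b in sorted_neighborhood_pairs(people, blocking_key, w=3):
        print(a, "<->", b)
```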
In this paper we study how to efficiently perform set-similarity joins in parallel using the popular MapReduce framework. We propose a 3-stage approach for end-to-end set-similarity joins. We take as input a set of records and output a set of joined records based on a set-similarity condition. We efficiently partition the data across nodes in order to balance the workload and minimize the need for replication. We study both self-join and R-S join cases, and show how to carefully control the amount of data kept in main memory during the join.
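The following simplified sketch illustrates a set-similarity self-join with a Jaccard threshold by grouping records on shared tokens, mimicking a single map and reduce step; it does not reproduce the paper's 3-stage approach, and all record ids, tokens, and the threshold are illustrative.

```python
# Simplified token-grouping sketch of a set-similarity self-join (not the
# paper's 3-stage pipeline): a "map" step emits (token, record id), a "reduce"
# step forms candidate pairs that share a token, and a final check verifies
# the Jaccard threshold. All names and the threshold are illustrative.

from collections import defaultdict
from itertools import combinations

def jaccard(a, b):
    return len(a & b) / len(a | b)

def set_similarity_self_join(records, threshold):
    # "Map": group record ids by shared token.
    by_token = defaultdict(set)
    for rid, tokens in records.items():
        for tok in tokens:
            by_token[tok].add(rid)
    # "Reduce": candidate pairs share at least one token; verify the threshold.
    results = set()
    for rids in by_token.values():
        for r1, r2 in combinations(sorted(rids), 2):
            if jaccard(records[r1], records[r2]) >= threshold:
                results.add((r1, r2))
    return sorted(results)

if __name__ == "__main__":
    recs = {
        "a": {"parallel", "set", "similarity", "join"},
        "b": {"parallel", "set", "similarity", "joins"},
        "c": {"entity", "resolution", "mapreduce"},
    }
    print(set_similarity_self_join(recs, threshold=0.5))
```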