Abstract: We demonstrate that learning procedures relying on aggregated labels, e.g., label information distilled from noisy responses, enjoy robustness properties that are impossible without such data cleaning. This robustness appears in several ways. In the context of risk consistency, where one takes the standard machine-learning approach of minimizing a surrogate (typically convex) loss in place of a desired task loss (such as the zero-one misclassification error), procedures using label aggregation obtain stronger consistency guarantees than are achievable using raw labels. And while classical statistical scenarios of fitting perfectly specified models suggest that incorporating all available information, by modeling the uncertainty in the labels, is statistically efficient, consistency fails for "standard" approaches as soon as the loss to be minimized is even slightly misspecified. Yet procedures leveraging aggregated information still converge to optimal classifiers, highlighting how a fuller view of the data analysis pipeline, from collection through model fitting to prediction time, can yield a more robust methodology by refining noisy signals.
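
To make the contrast concrete, the sketch below trains the same convex surrogate (the logistic loss) in the two ways the abstract compares: once on every raw noisy response, and once on labels aggregated by majority vote across annotators. This is a minimal illustration of the two pipelines, not the paper's construction or theory; the data-generating process, the annotator accuracy `p_correct`, the number of annotators `m`, and the use of scikit-learn's `LogisticRegression` are all assumptions made purely for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# --- Toy setup (all choices here are illustrative assumptions) ---
n, d, m = 2000, 5, 5            # examples, features, annotators per example
w_star = rng.normal(size=d)     # ground-truth linear separator
p_correct = 0.7                 # each noisy annotator is correct w.p. 0.7

X = rng.normal(size=(n, d))
y_true = (X @ w_star > 0).astype(int)

# m independent noisy labels per example: flip each with prob. 1 - p_correct
flips = rng.random((n, m)) < (1 - p_correct)
noisy = np.where(flips, 1 - y_true[:, None], y_true[:, None])

# (a) Raw-label pipeline: fit the surrogate on every (x, noisy label) pair
X_raw = np.repeat(X, m, axis=0)     # each x repeated once per annotator
y_raw = noisy.ravel()
clf_raw = LogisticRegression().fit(X_raw, y_raw)

# (b) Aggregated-label pipeline: majority vote over the m responses first
y_agg = (noisy.mean(axis=1) > 0.5).astype(int)
clf_agg = LogisticRegression().fit(X, y_agg)

# Compare zero-one error against clean labels on fresh data
X_test = rng.normal(size=(5000, d))
y_test = (X_test @ w_star > 0).astype(int)
err = lambda clf: np.mean(clf.predict(X_test) != y_test)
print(f"raw-label error:        {err(clf_raw):.3f}")
print(f"aggregated-label error: {err(clf_agg):.3f}")
```

In this toy, well-specified setting both pipelines tend toward the true separator, so the gap is only a finite-sample one; the abstract's stronger claim, which this sketch does not attempt to reproduce, is that under even slight misspecification of the loss the raw-label pipeline loses consistency while the aggregated one does not.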