License: Creative Commons Attribution-NoDerivs 3.0 Unported license (CC BY-ND 3.0)
When quoting this document, please refer to the following
DOI: 10.4230/LIPIcs.STACS.2013.185
URN: urn:nbn:de:0030-drops-39337

Darnstädt, Malte; Simon, Hans Ulrich; Szörényi, Balázs

Unlabeled Data Does Provably Help



A fully supervised learner needs access to correctly labeled examples, whereas a semi-supervised learner has access to examples, only part of which are labeled. The hope is that a large collection of unlabeled examples significantly reduces the need for labeled ones. It is widely believed that this reduction of "label complexity" is marginal unless the hidden target concept and the domain distribution satisfy some "compatibility assumptions". Some recent papers support this belief. In this paper, we revitalize the discussion by presenting a result that goes in the other direction. To this end, we consider the PAC-learning model in two settings: the (classical) fully supervised setting and the semi-supervised setting. We show that the "label-complexity gap" between the semi-supervised and the fully supervised setting can become arbitrarily large for concept classes of infinite VC-dimension (or for sequences of classes whose VC-dimensions are finite but become arbitrarily large). On the other hand, this gap is bounded by O(ln |C|) for each finite concept class C that contains the constant-zero and the constant-one function. A similar statement holds for all classes C of finite VC-dimension.
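Read as a ratio of label complexities, the finite-class bound above can be sketched as follows. The symbols m_SL and m_SSL (label complexities of the fully supervised and the semi-supervised learner, respectively) are illustrative notation introduced here, not taken from the paper itself:

```latex
% Sketch, under the assumption that the "label-complexity gap" denotes the
% ratio of fully supervised to semi-supervised label complexity: for a
% finite concept class C containing the constant-0 and constant-1 functions,
\[
  m_{\mathrm{SL}}(\varepsilon, \delta)
  \;\le\;
  O(\ln |C|) \cdot m_{\mathrm{SSL}}(\varepsilon, \delta).
\]
```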

BibTeX - Entry

@InProceedings{darnstaedt_et_al:LIPIcs.STACS.2013.185,
  author =	{Malte Darnst{\"a}dt and Hans Ulrich Simon and Bal{\'a}zs Sz{\"o}r{\'e}nyi},
  title =	{{Unlabeled Data Does Provably Help}},
  booktitle =	{30th International Symposium on Theoretical Aspects of Computer Science (STACS 2013)},
  pages =	{185--196},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-50-7},
  ISSN =	{1868-8969},
  year =	{2013},
  volume =	{20},
  editor =	{Natacha Portier and Thomas Wilke},
  publisher =	{Schloss Dagstuhl--Leibniz-Zentrum fuer Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{},
  URN =		{urn:nbn:de:0030-drops-39337},
  doi =		{10.4230/LIPIcs.STACS.2013.185},
  annote =	{Keywords: algorithmic learning, sample complexity, semi-supervised learning}
}

Keywords: algorithmic learning, sample complexity, semi-supervised learning
Collection: 30th International Symposium on Theoretical Aspects of Computer Science (STACS 2013)
Issue Date: 2013
Date of publication: 26.02.2013
