License: Creative Commons Attribution 3.0 Unported license (CC BY 3.0)
When quoting this document, please refer to the following
DOI: 10.4230/LIPIcs.ITCS.2021.78
URN: urn:nbn:de:0030-drops-136177

Schoenebeck, Grant; Yu, Fang-Yi

Learning and Strongly Truthful Multi-Task Peer Prediction: A Variational Approach



Peer prediction mechanisms incentivize agents to truthfully report their signals even in the absence of verification by comparing agents' reports with those of their peers. In the detail-free multi-task setting, agents are asked to respond to multiple independent and identically distributed tasks, and the mechanism does not know the prior distribution of agents' signals. The goal is to provide an ε-strongly truthful mechanism where truth-telling rewards agents "strictly" more than any other strategy profile (with ε additive error) even for heterogeneous agents, and to do so while requiring as few tasks as possible.
We design a family of mechanisms with a scoring function that maps a pair of reports to a score. The mechanism is strongly truthful if the scoring function is "prior ideal". Moreover, the mechanism is ε-strongly truthful as long as the scoring function used is sufficiently close to the ideal scoring function. This reduces the above mechanism design problem to a learning problem: specifically, learning an ideal scoring function. Because learning the prior distribution is sufficient (but not necessary) to learn the scoring function, we can apply standard learning theory techniques that leverage side information about the prior (e.g., that it is close to some parametric model). Furthermore, we derive a variational representation of an ideal scoring function and reduce the learning problem to empirical risk minimization.
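To make the mechanism shape concrete, here is a minimal sketch of the standard multi-task payment template into which such a learned scoring function plugs: an agent is scored against a peer on a shared "bonus" task and penalized by the score on a pair of distinct tasks. All names and the simple agreement score below are illustrative assumptions, not the paper's notation.

```python
import random

def pay(agent_reports, peer_reports, score):
    """Multi-task payment template (sketch).

    Score agreement with a peer on one shared "bonus" task, minus the
    score on reports drawn from two distinct "penalty" tasks.

    agent_reports, peer_reports: reports on the same m >= 3 tasks.
    score: a (learned) scoring function mapping a pair of reports to a float.
    """
    m = len(agent_reports)
    b = random.randrange(m)                                   # shared bonus task
    p, q = random.sample([t for t in range(m) if t != b], 2)  # distinct penalty tasks
    return score(agent_reports[b], peer_reports[b]) - score(agent_reports[p], peer_reports[q])

# Toy example: binary signals with a simple agreement score
# (a placeholder for the learned, approximately ideal score).
score = lambda x, y: 1.0 if x == y else 0.0
payment = pay([0, 1, 1, 0], [0, 1, 0, 0], score)
```

With an agreement score the payment lies in {-1, 0, 1}; the paper's point is that choosing `score` close to the prior-ideal scoring function makes truth-telling the (ε-approximately) strictly best strategy profile.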
We leverage this reduction to obtain very general results for peer prediction in the multi-task setting. Specifically,
- Sample Complexity: We show how to derive good bounds on the number of tasks required for different types of priors, in some cases exponentially improving on previous results. In particular, we can upper bound the required number of tasks for parametric models with bounded learning complexity. Furthermore, our reduction applies to a wide range of continuous signal-space settings. To the best of our knowledge, this is the first peer-prediction mechanism on continuous signals designed for the multi-task setting.
- Connection to Machine Learning: We show how to turn a soft predictor of an agent's signal (given the other agents' signals) into a mechanism. This allows the practical use of machine learning algorithms that give good results even when many agents provide noisy information.
- Stronger Properties: In the finite setting, we obtain ε-strongly truthful mechanisms for any stochastically relevant prior. Prior works either only apply to more restrictive settings, or achieve a weaker notion of truthfulness (informed truthfulness).
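The "Connection to Machine Learning" point can be illustrated with a small sketch: for the KL-divergence-based mutual information measure, the ideal scoring function is the pointwise mutual information, so a soft predictor P(y|x) together with the marginal P(y) yields a score log(P(y|x)/P(y)). The function names and the toy binary prior below are assumptions for illustration; the paper handles general f-divergences and scores that are only approximately ideal.

```python
import math

def score_from_predictor(predict, marginal):
    """Turn a soft predictor into a scoring function (sketch).

    predict(x): dict mapping each possible peer report y to P(y | x).
    marginal:   dict mapping each y to its marginal probability P(y).
    Returns S(x, y) = log(P(y|x) / P(y)), the pointwise mutual
    information -- the ideal score for the KL-based measure.
    """
    def S(x, y):
        return math.log(predict(x)[y] / marginal[y])
    return S

# Toy correlated binary signals: peer agrees with probability 0.8,
# uniform marginal over {0, 1}.
predict = lambda x: {x: 0.8, 1 - x: 0.2}
marginal = {0: 0.5, 1: 0.5}
S = score_from_predictor(predict, marginal)

assert S(0, 0) > 0 > S(0, 1)  # agreement scored positively, disagreement negatively
```

In practice the predictor would come from a machine learning model trained on agents' reports, which is exactly what makes noisy, many-agent settings tractable.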

BibTeX - Entry

@InProceedings{schoenebeck_et_al:LIPIcs.ITCS.2021.78,
  author =	{Grant Schoenebeck and Fang-Yi Yu},
  title =	{{Learning and Strongly Truthful Multi-Task Peer Prediction: A Variational Approach}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{78:1--78:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{James R. Lee},
  publisher =	{Schloss Dagstuhl--Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{},
  URN =		{urn:nbn:de:0030-drops-136177},
  doi =		{10.4230/LIPIcs.ITCS.2021.78},
  annote =	{Keywords: Information elicitation without verification, crowdsourcing, machine learning}
}

Keywords: Information elicitation without verification, crowdsourcing, machine learning
Collection: 12th Innovations in Theoretical Computer Science Conference (ITCS 2021)
Issue Date: 2021
Date of publication: 04.02.2021
