License: Creative Commons Attribution 3.0 Unported license (CC BY 3.0)
When quoting this document, please refer to the following
DOI: 10.4230/LIPIcs.ITCS.2021.58
URN: urn:nbn:de:0030-drops-135979

Yoshida, Yuichi; Zhou, Samson

Sensitivity Analysis of the Maximum Matching Problem

LIPIcs-ITCS-2021-58.pdf (0.6 MB)


We consider the sensitivity of algorithms for the maximum matching problem against edge and vertex modifications. When an algorithm A for the maximum matching problem is deterministic, the sensitivity of A on G is defined as max_{e ∈ E(G)}|A(G) △ A(G - e)|, where G-e is the graph obtained from G by removing an edge e ∈ E(G) and △ denotes the symmetric difference. When A is randomized, the sensitivity is defined as max_{e ∈ E(G)}d_{EM}(A(G),A(G-e)), where d_{EM}(⋅,⋅) denotes the earth mover's distance between two distributions. Thus the sensitivity measures the difference between the outputs of an algorithm after the input is slightly perturbed. Algorithms with low sensitivity, or stable algorithms, are desirable because they are robust to edge failure or attack.
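The deterministic definition above can be made concrete with a short sketch. The greedy rule below is a stand-in illustration, not the paper's algorithm; it just shows how the quantity max_{e ∈ E(G)}|A(G) △ A(G - e)| is computed for a fixed deterministic A.

```python
def greedy_matching(edges):
    """Deterministic maximal matching: scan edges in a fixed (sorted)
    order and take an edge whenever both endpoints are still free."""
    matched, matching = set(), set()
    for u, v in sorted(edges):
        if u not in matched and v not in matched:
            matching.add((u, v))
            matched.update((u, v))
    return matching

def edge_sensitivity(edges, algo):
    """max over e in E(G) of |A(G) symmetric-difference A(G - e)|."""
    base = algo(edges)
    return max(len(base ^ algo([f for f in edges if f != e]))
               for e in edges)

# On the path 1-2-3-4, greedy picks {(1,2), (3,4)}; deleting (1,2)
# flips the output to {(2,3)}, so the symmetric difference has size 3.
path = [(1, 2), (2, 3), (3, 4)]
print(edge_sensitivity(path, greedy_matching))  # → 3
```

Even this tiny example shows why low sensitivity is nontrivial: deleting a single edge can cascade through a greedy algorithm's choices and change the whole output.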
In this work, we show a randomized (1-ε)-approximation algorithm with worst-case sensitivity O_ε(1), which substantially improves upon the (1-ε)-approximation algorithm of Varma and Yoshida (SODA'21), which has average sensitivity n^{O(1/(1+ε²))}, and we show a deterministic 1/2-approximation algorithm with sensitivity exp(O(log^* n)) for bounded-degree graphs. We then show that any deterministic constant-factor approximation algorithm must have sensitivity Ω(log^* n). Our results imply that randomized algorithms are strictly more powerful than deterministic ones in that the former can achieve sensitivity independent of n whereas the latter cannot. We also show analogous results for vertex sensitivity, where we remove a vertex instead of an edge.
Finally, we introduce the notion of normalized weighted sensitivity, a natural generalization of sensitivity that accounts for the weights of deleted edges. For a graph with weight function w, the normalized weighted sensitivity is defined to be the total weight of the edges in the symmetric difference of the algorithm's outputs, normalized by the weight of the deleted edge, i.e., max_{e ∈ E(G)} (1/w(e)) w(A(G) △ A(G - e)). Hence the normalized weighted sensitivity measures the weighted difference between the outputs of an algorithm after the input is slightly perturbed, normalized by the weight of the perturbation. We show that if all edges in a graph have polynomially bounded weight, then given a trade-off parameter α > 2, there exists an algorithm that outputs a 1/(4α)-approximation to the maximum weighted matching in O(m log_α n) time, with normalized weighted sensitivity O(1).
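The weighted quantity max_{e ∈ E(G)} (1/w(e)) w(A(G) △ A(G - e)) can be sketched the same way. The heaviest-edge-first greedy rule here is purely illustrative, not the O(m log_α n) algorithm from the paper.

```python
def greedy_weighted_matching(wedges):
    """wedges: dict mapping (u, v) -> weight.
    Greedy maximal matching, heaviest edges first (illustrative only)."""
    matched, matching = set(), set()
    for (u, v) in sorted(wedges, key=wedges.get, reverse=True):
        if u not in matched and v not in matched:
            matching.add((u, v))
            matched.update((u, v))
    return matching

def normalized_weighted_sensitivity(wedges, algo):
    """max over e of w(A(G) ^ A(G - e)) / w(e)."""
    base = algo(wedges)
    best = 0.0
    for e, w_e in wedges.items():
        rest = {f: w for f, w in wedges.items() if f != e}
        diff = base ^ algo(rest)
        best = max(best, sum(wedges[f] for f in diff) / w_e)
    return best
```

Dividing by w(e) captures the intuition that deleting a heavy edge is a larger perturbation, so a proportionally larger change in the output matching is tolerated.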

BibTeX Entry

@InProceedings{yoshida_et_al:LIPIcs.ITCS.2021.58,
  author =	{Yuichi Yoshida and Samson Zhou},
  title =	{{Sensitivity Analysis of the Maximum Matching Problem}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{58:1--58:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{James R. Lee},
  publisher =	{Schloss Dagstuhl--Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{},
  URN =		{urn:nbn:de:0030-drops-135979},
  doi =		{10.4230/LIPIcs.ITCS.2021.58},
  annote =	{Keywords: Sensitivity analysis, maximum matching, graph algorithms}
}

Keywords: Sensitivity analysis, maximum matching, graph algorithms
Collection: 12th Innovations in Theoretical Computer Science Conference (ITCS 2021)
Issue Date: 2021
Date of publication: 04.02.2021
