Abstract
The most studied linear algebraic operation, matrix multiplication, has surprisingly fast O(n^ω) time algorithms for ω < 2.373. On the other hand, the (min,+) matrix product which is at the heart of many fundamental graph problems such as All-Pairs Shortest Paths, has received only minor n^{o(1)} improvements over its brute-force cubic running time and is widely conjectured to require n^{3-o(1)} time. There is a plethora of matrix products and graph problems whose complexity seems to lie in the middle of these two problems. For instance, the Min-Max matrix product, the Minimum Witness matrix product, All-Pairs Shortest Paths in directed unweighted graphs and determining whether an edge-colored graph contains a monochromatic triangle, can all be solved in Õ(n^{(3+ω)/2}) time. While slight improvements are sometimes possible using rectangular matrix multiplication, if ω=2, the best runtimes for these "intermediate" problems are all Õ(n^{2.5}).
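As a concrete baseline for the (min,+) matrix product defined above, here is a minimal brute-force sketch in Python (the function name and structure are illustrative, not from the paper); it runs in the cubic time that the conjecture says is essentially unavoidable:

```python
def min_plus_product(A, B):
    """(min,+) product of two n x n matrices:
    C[i][j] = min over k of A[i][k] + B[k][j].
    Brute force, O(n^3) time."""
    n = len(A)
    return [[min(A[i][k] + B[k][j] for k in range(n))
             for j in range(n)]
            for i in range(n)]
```

Replacing `min` with a sum and `+` with `*` recovers the ordinary matrix product, which is exactly the O(n^ω)-time operation contrasted above.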
A similar phenomenon occurs for convolution problems. Here, using the FFT, the usual (+,×)-convolution of two n-length sequences can be solved in O(n log n) time, while the (min,+) convolution is conjectured to require n^{2-o(1)} time, the brute force running time for convolution problems. There are analogous intermediate problems that can be solved in O(n^{1.5}) time, but seemingly not much faster: Min-Max convolution, Minimum Witness convolution, etc.
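To make the convolution variants concrete, here are brute-force quadratic sketches of the (min,+) and Min-Max convolutions (illustrative code under the definitions in the text, not an implementation from the paper; the O(n^{1.5}) algorithms for Min-Max convolution are substantially more involved):

```python
def min_plus_convolution(a, b):
    """c[k] = min over i+j=k of a[i] + b[j]; brute force, O(n^2)."""
    n = len(a)
    return [min(a[i] + b[k - i]
                for i in range(max(0, k - n + 1), min(k, n - 1) + 1))
            for k in range(2 * n - 1)]

def min_max_convolution(a, b):
    """c[k] = min over i+j=k of max(a[i], b[j]); brute force, O(n^2)."""
    n = len(a)
    return [min(max(a[i], b[k - i])
                for i in range(max(0, k - n + 1), min(k, n - 1) + 1))
            for k in range(2 * n - 1)]
```

These differ from the FFT-solvable (+,×)-convolution only in the pair of operations used, yet that change is what moves the conjectured complexity from near-linear to quadratic.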
Can one improve upon the running times for these intermediate problems, in either the matrix product or the convolution world? Or, alternatively, can one relate these problems to each other and to other key problems in a meaningful way?
This paper makes progress on these questions by providing a network of fine-grained reductions. We show for instance that APSP in directed unweighted graphs and Minimum Witness product can be reduced to both the Min-Max product and a variant of the monochromatic triangle problem, so that a significant improvement over n^{(3+ω)/2} time for any of the latter problems would result in a similar improvement for both of the former problems. We also show that a natural convolution variant of monochromatic triangle is fine-grained equivalent to the famous 3SUM problem. As this variant is solvable in O(n^{1.5}) time and 3SUM is in O(n^2) time (and is conjectured to require n^{2-o(1)} time), our result gives the first fine-grained equivalence between natural problems of different running times. We also relate 3SUM to monochromatic triangle, and a coin change problem to monochromatic convolution, and thus to 3SUM.
BibTeX Entry
@InProceedings{lincoln_et_al:LIPIcs:2020:11738,
author = {Andrea Lincoln and Adam Polak and Virginia Vassilevska Williams},
title = {{Monochromatic Triangles, Intermediate Matrix Products, and Convolutions}},
booktitle = {11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
pages = {53:1--53:18},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-134-4},
ISSN = {1868-8969},
year = {2020},
volume = {151},
editor = {Thomas Vidick},
publisher = {Schloss Dagstuhl--Leibniz-Zentrum fuer Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/opus/volltexte/2020/11738},
URN = {urn:nbn:de:0030-drops-117382},
doi = {10.4230/LIPIcs.ITCS.2020.53},
annote = {Keywords: 3SUM, fine-grained complexity, matrix multiplication, monochromatic triangle}
}
Keywords: 3SUM, fine-grained complexity, matrix multiplication, monochromatic triangle
Collection: 11th Innovations in Theoretical Computer Science Conference (ITCS 2020)
Issue Date: 2020
Date of publication: 06.01.2020