In computational complexity theory, a fine-grained reduction is a transformation from one computational problem to another, used to relate the difficulty of improving the time bounds for the two problems. Intuitively, it provides a method for solving one problem efficiently by using the solution to the other problem as a subroutine. If problem A can be solved in time a(n) and problem B can be solved in time b(n), then the existence of an (a,b)-reduction from problem A to problem B implies that any significant speedup for problem B would also lead to a speedup for problem A.
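As a concrete illustration (a hypothetical instance of the general statement above, using the cubic time bounds discussed later in this article), take a(n) = b(n) = n^3; the implication then reads:

\[
  \text{an } O\!\left(n^{3-\delta}\right)\text{-time algorithm for } B \text{ for some } \delta > 0
  \;\Longrightarrow\;
  \text{an } O\!\left(n^{3-\gamma}\right)\text{-time algorithm for } A \text{ for some } \gamma > 0.
\]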
Definition
Let A and B be computational problems, specified as the desired output for each possible input. Let a and b both be time-constructible functions that take an integer argument and produce an integer result. Usually, a and b are the time bounds for known or naive algorithms for the two problems, and often they are monomials such as n^2.[1]
Then A is said to be (a,b)-reducible to B if, for every real number δ > 0, there exists a real number γ > 0 and an algorithm that solves each instance of problem A by transforming it into a sequence of instances of problem B, taking time O(a(n)^(1-γ)) for the transformation on instances of size n, and producing a sequence of instances whose sizes n_i satisfy Σ_i b(n_i)^(1-δ) < a(n)^(1-γ).[1]
An (a,b)-reduction is given by the mapping from δ to the pair of an algorithm and γ.[1]
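The two quantitative requirements in this definition can be collected into a single display; the following is only a restatement of the bounds above (with A, B, a, b, δ, γ, and the produced instance sizes n_i as in the text), not an additional condition:

\[
  \forall \delta > 0 \;\; \exists \gamma > 0 :\qquad
  T_{\text{transform}}(n) = O\!\left(a(n)^{1-\gamma}\right)
  \quad\text{and}\quad
  \sum_i b(n_i)^{1-\delta} \;<\; a(n)^{1-\gamma},
\]

where T_transform(n) is shorthand (introduced here, not in the sources) for the running time of the transformation on instances of A of size n.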
Speedup implication
Suppose A is (a,b)-reducible to B, and there exists δ > 0 such that B can be solved in time O(b(n)^(1-δ)). Then there also exists γ > 0 such that A can be solved in time O(a(n)^(1-γ)). Namely, let γ be the value given by the (a,b)-reduction for this δ, and solve A by applying the transformation of the reduction and using the fast algorithm for B on each resulting subproblem.[1]
Equivalently, if A cannot be solved in time significantly faster than a(n), then B cannot be solved in time significantly faster than b(n).[1]
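To make the reasoning behind this implication explicit, the running time of the combined algorithm for A can be bounded as follows (a back-of-the-envelope sketch, with δ, γ, and the produced instance sizes n_i as in the definition):

\[
\begin{aligned}
  T_A(n) &\le
    \underbrace{O\!\left(a(n)^{1-\gamma}\right)}_{\text{transformation}}
    + \underbrace{\sum_i O\!\left(b(n_i)^{1-\delta}\right)}_{\text{solving the produced instances of } B} \\
  &\le O\!\left(a(n)^{1-\gamma}\right) + O\!\left(a(n)^{1-\gamma}\right)
   \;=\; O\!\left(a(n)^{1-\gamma}\right),
\end{aligned}
\]

where T_A(n) denotes the running time of the resulting algorithm for A, and the second step uses the size bound Σ_i b(n_i)^(1-δ) < a(n)^(1-γ) required by the definition.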
History
Fine-grained reductions were defined, in the special case that a and b are equal monomials, by Virginia Vassilevska Williams and Ryan Williams in 2010. They also showed the existence of (n^3, n^3)-reductions between several problems including all-pairs shortest paths, finding the second-shortest path between two given vertices in a weighted graph, finding negative-weight triangles in weighted graphs, and testing whether a given distance matrix describes a metric space. According to their results, either all of these problems have time bounds with exponents less than three, or none of them do.[2]
The term "fine-grained reduction" comes from later work by Virginia Vassilevska Williams in an invited presentation at the 10th International Symposium on Parameterized and Exact Computation.[1]
Although the original definition of fine-grained reductions involved deterministic algorithms, the corresponding concepts for randomized algorithms and nondeterministic algorithms have also been considered.[3]
References
- [1] Williams, Virginia V. (2015), "Hardness of easy problems: basing hardness on popular conjectures such as the Strong Exponential Time Hypothesis", 10th International Symposium on Parameterized and Exact Computation, LIPIcs. Leibniz Int. Proc. Inform., vol. 43, Schloss Dagstuhl. Leibniz-Zent. Inform., Wadern, pp. 17–29, MR 3452406
- [2] Williams, Virginia Vassilevska; Williams, R. Ryan (2018), "Subcubic equivalences between path, matrix, and triangle problems", Journal of the ACM, 65 (5): A27:1–A27:38, doi:10.1145/3186893, hdl:1721.1/134750, MR 3856539. A preliminary version of these results, including the definition of a "subcubic reduction", a special case of a fine-grained reduction, was presented at the 2010 Symposium on Foundations of Computer Science.
- [3] Carmosino, Marco L.; Gao, Jiawei; Impagliazzo, Russell; Mihajlin, Ivan; Paturi, Ramamohan; Schneider, Stefan (2016), "Nondeterministic extensions of the strong exponential time hypothesis and consequences for non-reducibility", ITCS'16—Proceedings of the 2016 ACM Conference on Innovations in Theoretical Computer Science, ACM, New York, pp. 261–270, MR 3629829