STOC 2018 – Author Index
Aaronson, Scott
Shadow Tomography of Quantum States
Scott Aaronson (University of Texas at Austin, USA) We introduce the problem of *shadow tomography*: given an unknown D-dimensional quantum mixed state ρ, as well as known two-outcome measurements E_1, …, E_M, estimate the probability that E_i accepts ρ, to within additive error ε, for each of the M measurements. How many copies of ρ are needed to achieve this, with high probability? Surprisingly, we give a procedure that solves the problem by measuring only O(ε^{−5} · log^4 M · log D) copies. This means, for example, that we can learn the behavior of an arbitrary n-qubit state, on *all* accepting/rejecting circuits of some fixed polynomial size, by measuring only n^{O(1)} copies of the state. This resolves an open problem of the author, which arose from his work on private-key quantum money schemes, but which also has applications to quantum copy-protected software, quantum advice, and quantum one-way communication. Recently, building on this work, Brandão et al. have given a different approach to shadow tomography using semidefinite programming, which achieves a savings in computation time. @InProceedings{STOC18p325, author = {Scott Aaronson}, title = {Shadow Tomography of Quantum States}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {325--338}, doi = {}, year = {2018}, }
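For scale, a minimal numpy sketch of the naive baseline (not Aaronson's procedure): estimating each Tr(E_i ρ) from fresh copies costs a number of copies linear in M, which is exactly what the O(ε^{−5} · log^4 M · log D) bound avoids. The helper functions below are hypothetical illustrations.

```python
# Naive shadow-tomography baseline: ~log(M)/eps^2 copies per measurement,
# M measurements in total, versus polylog(M) copies for shadow tomography.
import numpy as np

rng = np.random.default_rng(0)

def random_density_matrix(d):
    # rho = A A† / Tr(A A†) for a random complex matrix A.
    a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = a @ a.conj().T
    return rho / np.trace(rho).real

def random_projector(d):
    # Two-outcome measurement E: projector onto a random half of the space.
    q, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
    half = q[:, : d // 2]
    return half @ half.conj().T

d, m, eps = 8, 5, 0.1
rho = random_density_matrix(d)
measurements = [random_projector(d) for _ in range(m)]

copies_per = int(np.ceil(np.log(max(m, 2)) / eps**2))
for i, e in enumerate(measurements):
    p = np.trace(e @ rho).real               # true acceptance probability
    samples = rng.random(copies_per) < p     # simulated measurement outcomes
    print(f"E_{i}: true={p:.3f} estimate={samples.mean():.3f}")
```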
Abboud, Amir
More Consequences of Falsifying SETH and the Orthogonal Vectors Conjecture
Amir Abboud, Karl Bringmann, Holger Dell, and Jesper Nederlof (IBM Research, USA; Max Planck Institute for Informatics, Germany; Saarland University, Germany; M2CI, Germany; Eindhoven University of Technology, Netherlands) The Strong Exponential Time Hypothesis and the OV-conjecture are two popular hardness assumptions used to prove a plethora of lower bounds, especially in the realm of polynomial-time algorithms. The OV-conjecture in moderate dimension states there is no ε > 0 for which an O(N^{2−ε}) · poly(D) time algorithm can decide whether there is a pair of orthogonal vectors in a given set of size N that contains D-dimensional binary vectors. We strengthen the evidence for these hardness assumptions. In particular, we show that if the OV-conjecture fails, then two problems for which we are far from obtaining even tiny improvements over exhaustive search would have surprisingly fast algorithms. If the OV conjecture is false, then there is a fixed ε > 0 such that: - For all d and all large enough k, there is a randomized algorithm that takes O(n^{(1−ε)k}) time to solve the Zero-Weight-k-Clique and Min-Weight-k-Clique problems on d-hypergraphs with n vertices. As a consequence, the OV-conjecture is implied by the Weighted Clique conjecture. - For all c, the satisfiability of sparse TC^1 circuits on n inputs (that is, circuits with cn wires, depth c·log n, and negation, AND, OR, and threshold gates) can be computed in time O((2−ε)^n). @InProceedings{STOC18p253, author = {Amir Abboud and Karl Bringmann and Holger Dell and Jesper Nederlof}, title = {More Consequences of Falsifying SETH and the Orthogonal Vectors Conjecture}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {253--266}, doi = {}, year = {2018}, }
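For concreteness, a short Python sketch of the Orthogonal Vectors problem itself: the exhaustive O(N^2 · D) check that the OV-conjecture asserts cannot be improved to O(N^{2−ε}) · poly(D).

```python
# Brute-force Orthogonal Vectors: the O(N^2 * D) exhaustive-search baseline.
from itertools import combinations

def has_orthogonal_pair(vectors):
    """vectors: list of D-dimensional 0/1 tuples. Returns True iff some pair
    u, v satisfies sum(u_i * v_i) == 0, i.e., their supports are disjoint."""
    return any(
        all(a * b == 0 for a, b in zip(u, v))
        for u, v in combinations(vectors, 2)
    )

print(has_orthogonal_pair([(1, 0, 1), (0, 1, 0), (1, 1, 1)]))  # True: first two
print(has_orthogonal_pair([(1, 1, 0), (1, 0, 1), (0, 1, 1)]))  # False
```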
Abraham, Ittai
Metric Embedding via Shortest Path Decompositions
Ittai Abraham, Arnold Filtser, Anupam Gupta, and Ofer Neiman (VMware, USA; Ben-Gurion University of the Negev, Israel; Carnegie Mellon University, USA) We study the problem of embedding weighted graphs of pathwidth k into ℓ_p spaces. Our main result is an O(k^{min{1/p, 1/2}})-distortion embedding. For p = 1, this is a super-exponential improvement over the best previous bound of Lee and Sidiropoulos. Our distortion bound is asymptotically tight for any fixed p > 1. Our result is obtained via a novel embedding technique that is based on low depth decompositions of a graph via shortest paths. The core new idea is that given a geodesic shortest path P, we can probabilistically embed all points into 2 dimensions with respect to P. For p > 2, our embedding also implies improved distortion on bounded treewidth graphs (O((k log n)^{1/p})). For asymptotically large p, our results also imply improved distortion on graphs excluding a minor. @InProceedings{STOC18p952, author = {Ittai Abraham and Arnold Filtser and Anupam Gupta and Ofer Neiman}, title = {Metric Embedding via Shortest Path Decompositions}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {952--963}, doi = {}, year = {2018}, }
Abrahamsen, Mikkel
The Art Gallery Problem Is ∃ ℝ-Complete
Mikkel Abrahamsen, Anna Adamaszek, and Tillmann Miltzow (University of Copenhagen, Denmark; Université Libre de Bruxelles, Belgium) We prove that the art gallery problem is equivalent under polynomial time reductions to deciding whether a system of polynomial equations over the real numbers has a solution. The art gallery problem is a classic problem in computational geometry, introduced in 1973 by Victor Klee. Given a simple polygon P and an integer k, the goal is to decide if there exists a set G of k guards within P such that every point p ∈ P is seen by at least one guard g ∈ G. Each guard corresponds to a point in the polygon P, and we say that a guard g sees a point p if the line segment pg is contained in P. The art gallery problem has stimulated extensive research in geometry and in algorithms. However, the complexity status of the art gallery problem has not been resolved. It has long been known that the problem is NP-hard, but no one has been able to show that it lies in NP. Recently, the computational geometry community became more aware of the complexity class ∃ ℝ, which has been studied earlier by other communities. The class ∃ ℝ consists of problems that can be reduced in polynomial time to the problem of deciding whether a system of polynomial equations with integer coefficients and any number of real variables has a solution. It can be easily seen that NP ⊆ ∃ ℝ. We prove that the art gallery problem is ∃ ℝ-complete, implying that (1) any system of polynomial equations over the real numbers can be encoded as an instance of the art gallery problem, and (2) the art gallery problem is not in the complexity class NP unless NP = ∃ ℝ. As an illustration of our techniques, we can show that for every compact semi-algebraic set S ⊂ [0,1]^2 there exists a polygon with rational coordinates that enforces one of the guards to be at any position p ∈ S, in any optimal guarding. As a corollary of our construction, we prove that for any real algebraic number α there is an instance of the art gallery problem where one of the coordinates of the guards equals α in any guard set of minimum cardinality. That rules out many natural geometric approaches to the problem, as it shows that any approach based on constructing a finite set of candidate points for placing guards has to include points with coordinates being roots of polynomials with arbitrary degree. In the ∃ ℝ-hardness proof for the art gallery problem we introduce a new ∃ ℝ-complete problem ETR-INV. We believe that this problem is of independent interest, as it can be used to obtain ∃ ℝ-hardness proofs for other problems. In particular, ETR-INV has been very recently used to prove ∃ ℝ-hardness of other geometric problems. @InProceedings{STOC18p65, author = {Mikkel Abrahamsen and Anna Adamaszek and Tillmann Miltzow}, title = {The Art Gallery Problem Is ∃ ℝ-Complete}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {65--73}, doi = {}, year = {2018}, }

Fast Fencing
Mikkel Abrahamsen, Anna Adamaszek, Karl Bringmann, Vincent Cohen-Addad, Mehran Mehr, Eva Rotenberg, Alan Roytman, and Mikkel Thorup (University of Copenhagen, Denmark; Max Planck Institute for Informatics, Germany; Sorbonne University, France; UPMC, France; CNRS, France; LIP6, France; Eindhoven University of Technology, Netherlands; DTU, Denmark) We consider very natural "fence enclosure" problems studied by Capoyleas, Rote, and Woeginger and Arkin, Khuller, and Mitchell in the early 90s.
Given a set S of n points in the plane, we aim at finding a set of closed curves such that (1) each point is enclosed by a curve and (2) the total length of the curves is minimized. We consider two main variants. In the first variant, we pay a unit cost per curve in addition to the total length of the curves. An equivalent formulation of this version is that we have to enclose n unit disks, paying only the total length of the enclosing curves. In the other variant, we are allowed to use at most k closed curves and pay no cost per curve. For the variant with at most k closed curves, we present an algorithm that is polynomial in both n and k. For the variant with unit cost per curve, or unit disks, we present a near-linear time algorithm. Capoyleas, Rote, and Woeginger solved the problem with at most k curves in n^{O(k)} time. Arkin, Khuller, and Mitchell used this to solve the unit cost per curve version in exponential time. At the time, they conjectured that the problem with k curves is NP-hard for general k. Our polynomial time algorithm refutes this unless P equals NP. @InProceedings{STOC18p564, author = {Mikkel Abrahamsen and Anna Adamaszek and Karl Bringmann and Vincent Cohen-Addad and Mehran Mehr and Eva Rotenberg and Alan Roytman and Mikkel Thorup}, title = {Fast Fencing}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {564--573}, doi = {}, year = {2018}, }
Adamaszek, Anna
The Art Gallery Problem Is ∃ ℝ-Complete
Mikkel Abrahamsen, Anna Adamaszek, and Tillmann Miltzow (University of Copenhagen, Denmark; Université Libre de Bruxelles, Belgium) We prove that the art gallery problem is equivalent under polynomial time reductions to deciding whether a system of polynomial equations over the real numbers has a solution. The art gallery problem is a classic problem in computational geometry, introduced in 1973 by Victor Klee. Given a simple polygon P and an integer k, the goal is to decide if there exists a set G of k guards within P such that every point p ∈ P is seen by at least one guard g ∈ G. Each guard corresponds to a point in the polygon P, and we say that a guard g sees a point p if the line segment pg is contained in P. The art gallery problem has stimulated extensive research in geometry and in algorithms. However, the complexity status of the art gallery problem has not been resolved. It has long been known that the problem is NP-hard, but no one has been able to show that it lies in NP. Recently, the computational geometry community became more aware of the complexity class ∃ ℝ, which has been studied earlier by other communities. The class ∃ ℝ consists of problems that can be reduced in polynomial time to the problem of deciding whether a system of polynomial equations with integer coefficients and any number of real variables has a solution. It can be easily seen that NP ⊆ ∃ ℝ. We prove that the art gallery problem is ∃ ℝ-complete, implying that (1) any system of polynomial equations over the real numbers can be encoded as an instance of the art gallery problem, and (2) the art gallery problem is not in the complexity class NP unless NP = ∃ ℝ. As an illustration of our techniques, we can show that for every compact semi-algebraic set S ⊂ [0,1]^2 there exists a polygon with rational coordinates that enforces one of the guards to be at any position p ∈ S, in any optimal guarding. As a corollary of our construction, we prove that for any real algebraic number α there is an instance of the art gallery problem where one of the coordinates of the guards equals α in any guard set of minimum cardinality. That rules out many natural geometric approaches to the problem, as it shows that any approach based on constructing a finite set of candidate points for placing guards has to include points with coordinates being roots of polynomials with arbitrary degree. In the ∃ ℝ-hardness proof for the art gallery problem we introduce a new ∃ ℝ-complete problem ETR-INV. We believe that this problem is of independent interest, as it can be used to obtain ∃ ℝ-hardness proofs for other problems. In particular, ETR-INV has been very recently used to prove ∃ ℝ-hardness of other geometric problems. @InProceedings{STOC18p65, author = {Mikkel Abrahamsen and Anna Adamaszek and Tillmann Miltzow}, title = {The Art Gallery Problem Is ∃ ℝ-Complete}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {65--73}, doi = {}, year = {2018}, }

Fast Fencing
Mikkel Abrahamsen, Anna Adamaszek, Karl Bringmann, Vincent Cohen-Addad, Mehran Mehr, Eva Rotenberg, Alan Roytman, and Mikkel Thorup (University of Copenhagen, Denmark; Max Planck Institute for Informatics, Germany; Sorbonne University, France; UPMC, France; CNRS, France; LIP6, France; Eindhoven University of Technology, Netherlands; DTU, Denmark) We consider very natural "fence enclosure" problems studied by Capoyleas, Rote, and Woeginger and Arkin, Khuller, and Mitchell in the early 90s.
Given a set S of n points in the plane, we aim at finding a set of closed curves such that (1) each point is enclosed by a curve and (2) the total length of the curves is minimized. We consider two main variants. In the first variant, we pay a unit cost per curve in addition to the total length of the curves. An equivalent formulation of this version is that we have to enclose n unit disks, paying only the total length of the enclosing curves. In the other variant, we are allowed to use at most k closed curves and pay no cost per curve. For the variant with at most k closed curves, we present an algorithm that is polynomial in both n and k. For the variant with unit cost per curve, or unit disks, we present a near-linear time algorithm. Capoyleas, Rote, and Woeginger solved the problem with at most k curves in n^{O(k)} time. Arkin, Khuller, and Mitchell used this to solve the unit cost per curve version in exponential time. At the time, they conjectured that the problem with k curves is NP-hard for general k. Our polynomial time algorithm refutes this unless P equals NP. @InProceedings{STOC18p564, author = {Mikkel Abrahamsen and Anna Adamaszek and Karl Bringmann and Vincent Cohen-Addad and Mehran Mehr and Eva Rotenberg and Alan Roytman and Mikkel Thorup}, title = {Fast Fencing}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {564--573}, doi = {}, year = {2018}, }
Agarwal, Udit
Fine-Grained Complexity for Sparse Graphs
Udit Agarwal and Vijaya Ramachandran (University of Texas at Austin, USA) We consider the fine-grained complexity of sparse graph problems that currently have Õ(mn) time algorithms, where m is the number of edges and n is the number of vertices in the input graph. This class includes several important path problems on both directed and undirected graphs, including APSP, MWC (Minimum Weight Cycle), Radius, Eccentricities, BC (Betweenness Centrality), etc. We introduce the notion of a sparse reduction which preserves the sparsity of graphs, and we present near linear-time sparse reductions between various pairs of graph problems in the Õ(mn) class. There are many sub-cubic reductions between graph problems in the Õ(mn) class, but surprisingly few of these preserve sparsity. In the directed case, our results give a partial order on a large collection of problems in the Õ(mn) class (along with some equivalences), and many of our reductions are very nontrivial. In the undirected case we give two nontrivial sparse reductions: from MWC to APSP, and from unweighted ANSC (all nodes shortest cycles) to unweighted APSP. We develop a new ‘bit-sampling’ method for these sparse reductions on undirected graphs, which also gives rise to improved or simpler algorithms for cycle finding problems in undirected graphs. We formulate the notion of MWC hardness, which is based on the assumption that a minimum weight cycle in a directed graph cannot be computed in time polynomially smaller than mn. Our sparse reductions for directed path problems in the Õ(mn) class establish that several problems in this class, including 2-SiSP (second simple shortest path), s-t Replacement Paths, Radius, Eccentricities and BC are MWC hard. Our sparse reductions give MWC hardness a status for the Õ(mn) class similar to 3SUM hardness for the quadratic class, since they show sub-mn hardness for a large collection of fundamental and well-studied graph problems that have maintained an Õ(mn) time bound for over half a century. We also identify Eccentricities and BC as key problems in the Õ(mn) class which are simultaneously MWC-hard, SETH-hard and k-DSH-hard, where SETH is the Strong Exponential Time Hypothesis, and k-DSH is the hypothesis that a dominating set of size k cannot be computed in time polynomially smaller than n^k. Our framework using sparse reductions is very relevant to real-world graphs, which tend to be sparse and for which the Õ(mn) time algorithms are the ones typically used in practice, and not the Õ(n^3) time algorithms. @InProceedings{STOC18p239, author = {Udit Agarwal and Vijaya Ramachandran}, title = {Fine-Grained Complexity for Sparse Graphs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {239--252}, doi = {}, year = {2018}, }
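As background, the Õ(mn) baseline for this class is elementary in the unweighted case: APSP is one O(m)-time BFS per source. A minimal sketch:

```python
# Unweighted APSP via n BFS runs: O(mn) total, the baseline for this class.
from collections import deque

def apsp_unweighted(adj):
    """adj: dict mapping vertex -> list of neighbours. Returns dist[u][v]."""
    dist = {}
    for s in adj:
        d = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in d:
                    d[v] = d[u] + 1
                    queue.append(v)
        dist[s] = d
    return dist

adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(apsp_unweighted(adj)[0][3])  # 3
```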
Aggarwal, Divesh
(Gap/S)ETH Hardness of SVP
Divesh Aggarwal and Noah Stephens-Davidowitz (National University of Singapore, Singapore; Princeton University, USA) We prove the following quantitative hardness results for the Shortest Vector Problem in the ℓ_p norm (SVP_p), where n is the rank of the input lattice. For “almost all” p > p_0 ≈ 2.1397, there is no 2^{n/C_p}-time algorithm for SVP_p for some explicit (easily computable) constant C_p > 0 unless the (randomized) Strong Exponential Time Hypothesis (SETH) is false. (E.g., for p ≥ 3, C_p < 1 + (p+3)·2^{−p} + 10p^2·2^{−2p}.) For any 1 ≤ p ≤ ∞, there is no 2^{o(n)}-time algorithm for SVP_p unless the non-uniform Gap-Exponential Time Hypothesis (Gap-ETH) is false. Furthermore, for each such p, there exists a constant γ_p > 1 such that the same result holds even for γ_p-approximate SVP_p. For p > 2, the above statement holds under the weaker assumption of randomized Gap-ETH. I.e., there is no 2^{o(n)}-time algorithm for γ_p-approximate SVP_p unless randomized Gap-ETH is false. See http://arxiv.org/abs/1712.00942 for a complete exposition. @InProceedings{STOC18p228, author = {Divesh Aggarwal and Noah Stephens-Davidowitz}, title = {(Gap/S)ETH Hardness of SVP}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {228--238}, doi = {}, year = {2018}, }
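To fix notation, a toy brute-force sketch of SVP_p (illustration only, unrelated to the paper's reductions): enumerate bounded integer coefficient vectors and keep the shortest nonzero lattice vector in the ℓ_p norm. Exhaustive search of this kind is exponential in the rank n, which is the regime the lower bounds address.

```python
# Toy SVP_p by exhaustive enumeration of small coefficient vectors.
from itertools import product

def shortest_vector_lp(basis, p, bound=3):
    """basis: list of n integer vectors. Tries all coefficient vectors in
    {-bound..bound}^n and returns a nonzero lattice vector of least ℓ_p norm."""
    n, dim = len(basis), len(basis[0])
    best, best_norm = None, float("inf")
    for coeffs in product(range(-bound, bound + 1), repeat=n):
        if all(c == 0 for c in coeffs):
            continue
        v = [sum(c * b[i] for c, b in zip(coeffs, basis)) for i in range(dim)]
        norm = sum(abs(x) ** p for x in v) ** (1.0 / p)
        if norm < best_norm:
            best, best_norm = v, norm
    return best, best_norm

print(shortest_vector_lp([[3, 1], [1, 3]], p=2))  # ([-2, 2] or [2, -2], ~2.83)
```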
Agrawal, Manindra
Bootstrapping Variables in Algebraic Circuits
Manindra Agrawal, Sumanta Ghosh, and Nitin Saxena (IIT Kanpur, India) We show that for the blackbox polynomial identity testing (PIT) problem it suffices to study circuits that depend only on the first extremely few variables. One only needs to consider size-s degree-s circuits that depend on the first log^{∘c} s variables (where c is a constant and we are composing c logarithms). Thus, a hitting-set generator (hsg) manifests a bootstrapping behavior: a partial hsg against very few variables can be efficiently grown to a complete hsg. A boolean analog, or a pseudorandom generator property of this type, is unheard-of. Our idea is to use the partial hsg and its annihilator polynomial to efficiently bootstrap the hsg exponentially in the number of variables. This is repeated c times in an efficient way. Pushing the envelope further we show that: (1) a quadratic-time blackbox PIT for 6913-variate degree-s size-s polynomials will lead to a “near”-complete derandomization of PIT, and (2) a blackbox PIT for n-variate degree-s size-s circuits in s^{n^δ} time, for δ < 1/2, will lead to a “near”-complete derandomization of PIT (in contrast, s^n time is trivial). Our second idea is to study depth-4 circuits that depend on constantly many variables. We show that a polynomial-time computable, O(s^{1.49})-degree hsg for trivariate depth-4 circuits bootstraps to a quasipolynomial time hsg for general poly-degree circuits, and implies a lower bound that is a bit stronger than Kabanets-Impagliazzo (STOC 2003). @InProceedings{STOC18p1166, author = {Manindra Agrawal and Sumanta Ghosh and Nitin Saxena}, title = {Bootstrapping Variables in Algebraic Circuits}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1166--1179}, doi = {}, year = {2018}, }
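For context, the standard randomized blackbox PIT that such results aim to derandomize is Schwartz-Zippel: evaluate the black box at random points, since a nonzero polynomial of degree s vanishes at a uniformly random point of S^n with probability at most s/|S|. A minimal sketch:

```python
# Randomized blackbox PIT via Schwartz-Zippel; a hitting-set generator
# would replace the random points with a small deterministic set.
import random

def is_probably_zero(poly, num_vars, degree, trials=20):
    """poly: a black box mapping a tuple of ints to an int."""
    field = range(10 * degree)  # |S| >= 10*degree => error <= 1/10 per trial
    for _ in range(trials):
        point = tuple(random.choice(field) for _ in range(num_vars))
        if poly(point) != 0:
            return False  # witnessed nonzeroness
    return True  # identically zero with high probability

# (x+y)^2 - x^2 - 2xy - y^2 is identically zero:
print(is_probably_zero(
    lambda v: (v[0] + v[1])**2 - v[0]**2 - 2*v[0]*v[1] - v[1]**2, 2, 2))
```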
Allen-Zhu, Zeyuan
Operator Scaling via Geodesically Convex Optimization, Invariant Theory and Polynomial Identity Testing
Zeyuan Allen-Zhu, Ankit Garg, Yuanzhi Li, Rafael Oliveira, and Avi Wigderson (Microsoft Research, USA; Princeton University, USA; University of Toronto, Canada; Institute for Advanced Study at Princeton, USA) We propose a new second-order method for geodesically convex optimization on the natural hyperbolic metric over positive definite matrices. We apply it to solve the operator scaling problem in time polynomial in the input size and logarithmic in the error. This is an exponential improvement over previous algorithms which were analyzed in the usual Euclidean, "commutative" metric (for which the above problem is not convex). Our method is general and applicable to other settings. As a consequence, we solve the equivalence problem for the left-right group action underlying the operator scaling problem. This yields a deterministic polynomial-time algorithm for a new class of Polynomial Identity Testing (PIT) problems, which was the original motivation for studying operator scaling. @InProceedings{STOC18p172, author = {Zeyuan Allen-Zhu and Ankit Garg and Yuanzhi Li and Rafael Oliveira and Avi Wigderson}, title = {Operator Scaling via Geodesically Convex Optimization, Invariant Theory and Polynomial Identity Testing}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {172--181}, doi = {}, year = {2018}, }
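For contrast, a minimal numpy sketch of a classical alternating (Sinkhorn-style) scaling iteration for this problem, in the spirit of Gurvits' algorithm; this is not the paper's second-order geodesic method, and its convergence is polynomial in 1/ε rather than log(1/ε), which is the exponential gap the paper closes. It assumes generic inputs so the normalizing sums stay positive definite.

```python
# Alternating operator scaling: enforce T(I) = I and T*(I) = I in turn,
# where T(X) = sum_i A_i X A_i† is defined by the Kraus operators A_i.
import numpy as np

def inv_sqrt(m):
    vals, vecs = np.linalg.eigh(m)
    return vecs @ np.diag(vals ** -0.5) @ vecs.conj().T

def operator_scale(ops, iters=300):
    for _ in range(iters):
        s = sum(a @ a.conj().T for a in ops)   # T(I)
        ops = [inv_sqrt(s) @ a for a in ops]   # left-normalize
        t = sum(a.conj().T @ a for a in ops)   # T*(I)
        ops = [a @ inv_sqrt(t) for a in ops]   # right-normalize
    return ops

rng = np.random.default_rng(1)
ops = operator_scale([rng.normal(size=(3, 3)) for _ in range(3)])
# Both marginals should now be (close to) the identity:
print(np.allclose(sum(a @ a.T for a in ops), np.eye(3), atol=1e-4))
```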
Alman, Josh
Cell-Probe Lower Bounds from Online Communication Complexity
Josh Alman, Joshua R. Wang, and Huacheng Yu (Massachusetts Institute of Technology, USA; Stanford University, USA; Harvard University, USA) In this work, we introduce an online model for communication complexity. Analogous to how online algorithms receive their input piece-by-piece, our model presents one of the players, Bob, his input piece-by-piece, and has the players Alice and Bob cooperate to compute a result each time before the next piece is revealed to Bob. This model has a closer and more natural correspondence to dynamic data structures than classic communication models do, and hence presents a new perspective on data structures. We first present a tight lower bound for the online set intersection problem in the online communication model, demonstrating a general approach for proving online communication lower bounds. The online communication model prevents a batching trick that classic communication complexity allows, and yields a stronger lower bound. We then apply the online communication model to prove data structure lower bounds for two dynamic data structure problems: the Group Range problem and the Dynamic Connectivity problem for forests. Both of the problems admit a worst case O(log n)-time data structure. Using online communication complexity, we prove a tight cell-probe lower bound for each: spending o(log n) (even amortized) time per operation results in at best an exp(−δ^2 n) probability of correctly answering a (1/2+δ)-fraction of the n queries. @InProceedings{STOC18p1003, author = {Josh Alman and Joshua R. Wang and Huacheng Yu}, title = {Cell-Probe Lower Bounds from Online Communication Complexity}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1003--1012}, doi = {}, year = {2018}, }
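For reference, the O(log n) upper bound mentioned for the Group Range problem is the textbook segment tree over a group; a minimal sketch, using (ℤ, +) as the group:

```python
# Segment tree over an arbitrary group: O(log n) update and range product.
class GroupRange:
    def __init__(self, items, op, identity):
        self.n = len(items)
        self.op, self.id = op, identity
        self.tree = [identity] * self.n + list(items)
        for i in range(self.n - 1, 0, -1):
            self.tree[i] = op(self.tree[2 * i], self.tree[2 * i + 1])

    def update(self, i, value):           # O(log n)
        i += self.n
        self.tree[i] = value
        while i > 1:
            i //= 2
            self.tree[i] = self.op(self.tree[2 * i], self.tree[2 * i + 1])

    def range_product(self, lo, hi):      # product of items[lo:hi], O(log n)
        left, right = self.id, self.id    # two accumulators keep the order
        lo, hi = lo + self.n, hi + self.n # correct for non-commutative groups
        while lo < hi:
            if lo & 1: left = self.op(left, self.tree[lo]); lo += 1
            if hi & 1: hi -= 1; right = self.op(self.tree[hi], right)
            lo //= 2; hi //= 2
        return self.op(left, right)

g = GroupRange([1, 2, 3, 4], op=lambda a, b: a + b, identity=0)
print(g.range_product(1, 4))  # 2 + 3 + 4 = 9
g.update(2, 10)
print(g.range_product(1, 4))  # 2 + 10 + 4 = 16
```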
Andoni, Alexandr
Data-Dependent Hashing via Nonlinear Spectral Gaps
Alexandr Andoni, Assaf Naor, Aleksandar Nikolov, Ilya Razenshteyn, and Erik Waingarten (Columbia University, USA; Princeton University, USA; University of Toronto, Canada; Microsoft Research, USA) We establish a generic reduction from _nonlinear spectral gaps_ of metric spaces to data-dependent Locality-Sensitive Hashing, yielding a new approach to the high-dimensional Approximate Near Neighbor Search problem (ANN) under various distance functions. Using this reduction, we obtain the following results: * For _general_ d-dimensional normed spaces and n-point datasets, we obtain a _cell-probe_ ANN data structure with approximation O(log d/ε^2), space d^{O(1)} · n^{1+ε}, and d^{O(1)} · n^ε cell probes per query, for any ε > 0. No non-trivial approximation was known before in this generality other than the O(√d) bound which follows from embedding a general norm into ℓ_2. * For ℓ_p and Schatten-p norms, we improve the data structure further, to obtain approximation O(p) and sublinear query _time_. For ℓ_p, this improves upon the previous best approximation 2^{O(p)} (which required polynomial as opposed to near-linear in n space). For the Schatten-p norm, no non-trivial ANN data structure was known before this work. Previous approaches to the ANN problem either exploit the low dimensionality of a metric, requiring space exponential in the dimension, or circumvent the curse of dimensionality by embedding a metric into a "tractable" space, such as ℓ_1. Our new generic reduction proceeds differently from both of these approaches using a novel partitioning method. @InProceedings{STOC18p787, author = {Alexandr Andoni and Assaf Naor and Aleksandar Nikolov and Ilya Razenshteyn and Erik Waingarten}, title = {Data-Dependent Hashing via Nonlinear Spectral Gaps}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {787--800}, doi = {}, year = {2018}, }
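For contrast with the data-dependent hashing developed in the paper, a minimal sketch of a classical data-independent LSH: random-hyperplane SimHash for angular distance in ℓ_2. Nearby points collide on most hash bits; far points do not.

```python
# SimHash: one bit per random hyperplane, recording which side a point lies on.
import numpy as np

rng = np.random.default_rng(2)

def simhash(x, hyperplanes):
    return tuple((hyperplanes @ x > 0).astype(int))

d, bits = 16, 8
planes = rng.normal(size=(bits, d))
x = rng.normal(size=d)
near = x + 0.05 * rng.normal(size=d)   # a slightly perturbed neighbour
far = rng.normal(size=d)               # an unrelated point
hx = simhash(x, planes)
print(sum(a != b for a, b in zip(hx, simhash(near, planes))))  # few mismatches
print(sum(a != b for a, b in zip(hx, simhash(far, planes))))   # more mismatches
```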
Assadi, Sepehr
Fully Dynamic Maximal Independent Set with Sublinear Update Time
Sepehr Assadi, Krzysztof Onak, Baruch Schieber, and Shay Solomon (University of Pennsylvania, USA; IBM Research, USA) A maximal independent set (MIS) can be maintained in an evolving m-edge graph by simply recomputing it from scratch in O(m) time after each update. But can it be maintained in time sublinear in m in fully dynamic graphs? We answer this fundamental open question in the affirmative. We present a deterministic algorithm with amortized update time O(min{Δ, m^{3/4}}), where Δ is a fixed bound on the maximum degree in the graph and m is the (dynamically changing) number of edges. We further present a distributed implementation of our algorithm with O(min{Δ, m^{3/4}}) amortized message complexity, and O(1) amortized round complexity and adjustment complexity (the number of vertices that change their output after each update). This strengthens a similar result by Censor-Hillel, Haramaty, and Karnin (PODC’16) that required an assumption of a non-adaptive oblivious adversary. @InProceedings{STOC18p815, author = {Sepehr Assadi and Krzysztof Onak and Baruch Schieber and Shay Solomon}, title = {Fully Dynamic Maximal Independent Set with Sublinear Update Time}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {815--826}, doi = {}, year = {2018}, }
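The O(m)-per-update baseline named in the first sentence is just greedy recomputation; a minimal sketch:

```python
# Greedy MIS in a single O(n + m) pass, rerun from scratch after each update.
def greedy_mis(adj):
    """adj: dict vertex -> set of neighbours."""
    mis, blocked = set(), set()
    for v in adj:
        if v not in blocked:
            mis.add(v)
            blocked.update(adj[v])  # neighbours of a chosen vertex are excluded
    return mis

adj = {0: {1, 2}, 1: {0}, 2: {0}, 3: set()}
print(greedy_mis(adj))  # {0, 3}
```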
Atserias, Albert
Clique Is Hard on Average for Regular Resolution
Albert Atserias, Ilario Bonacina, Susanna F. de Rezende, Massimo Lauria, Jakob Nordström, and Alexander Razborov (Universitat Politècnica de Catalunya, Spain; KTH, Sweden; Sapienza University of Rome, Italy; University of Chicago, USA; Russian Academy of Sciences, Russia) We prove that for k ≪ n^{1/4} regular resolution requires length n^{Ω(k)} to establish that an Erdős–Rényi graph with appropriately chosen edge density does not contain a k-clique. This lower bound is optimal up to the multiplicative constant in the exponent, and also implies unconditional n^{Ω(k)} lower bounds on running time for several state-of-the-art algorithms for finding maximum cliques in graphs. @InProceedings{STOC18p866, author = {Albert Atserias and Ilario Bonacina and Susanna F. de Rezende and Massimo Lauria and Jakob Nordström and Alexander Razborov}, title = {Clique Is Hard on Average for Regular Resolution}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {866--877}, doi = {}, year = {2018}, }
Backurs, Arturs
Towards Tight Approximation Bounds for Graph Diameter and Eccentricities
Arturs Backurs, Liam Roditty, Gilad Segal, Virginia Vassilevska Williams, and Nicole Wein (Massachusetts Institute of Technology, USA; Bar-Ilan University, Israel) Among the most important graph parameters is the Diameter, the largest distance between any two vertices. There are no known very efficient algorithms for computing the Diameter exactly. Thus, much research has been devoted to how fast this parameter can be approximated. Chechik et al. [SODA 2014] showed that the diameter can be approximated within a multiplicative factor of 3/2 in Õ(m^{3/2}) time. Furthermore, Roditty and Vassilevska W. [STOC 2013] showed that unless the Strong Exponential Time Hypothesis (SETH) fails, no O(n^{2−ε}) time algorithm can achieve an approximation factor better than 3/2 in sparse graphs. Thus the above algorithm is essentially optimal for sparse graphs for approximation factors less than 3/2. It was, however, completely plausible that a 3/2-approximation is possible in linear time. In this work we conditionally rule out such a possibility by showing that unless SETH fails no O(m^{3/2−ε}) time algorithm can achieve an approximation factor better than 5/3. Another fundamental set of graph parameters are the Eccentricities. The Eccentricity of a vertex v is the distance between v and the farthest vertex from v. Chechik et al. [SODA 2014] showed that the Eccentricities of all vertices can be approximated within a factor of 5/3 in Õ(m^{3/2}) time and Abboud et al. [SODA 2016] showed that no O(n^{2−ε}) algorithm can achieve better than 5/3 approximation in sparse graphs. We show that the runtime of the 5/3 approximation algorithm is also optimal by proving that under SETH, there is no O(m^{3/2−ε}) algorithm that achieves a better than 9/5 approximation. We also show that no near-linear time algorithm can achieve a better than 2 approximation for the Eccentricities. This is the first lower bound in fine-grained complexity that addresses near-linear time computation. We show that our lower bound for near-linear time algorithms is essentially tight by giving an algorithm that approximates Eccentricities within a 2+δ factor in Õ(m/δ) time for any 0 < δ < 1. This beats all Eccentricity algorithms in Cairo et al. [SODA 2016] and is the first constant factor approximation for Eccentricities in directed graphs. To establish the above lower bounds we study the S-T Diameter problem: Given a graph and two subsets S and T of vertices, output the largest distance between a vertex in S and a vertex in T. We give new algorithms and show tight lower bounds that serve as a starting point for all other hardness results. Our lower bounds apply only to sparse graphs. We show that for dense graphs, there are near-linear time algorithms for S-T Diameter, Diameter and Eccentricities, with almost the same approximation guarantees as their Õ(m^{3/2}) counterparts, improving upon the best known algorithms for dense graphs. @InProceedings{STOC18p267, author = {Arturs Backurs and Liam Roditty and Gilad Segal and Virginia Vassilevska Williams and Nicole Wein}, title = {Towards Tight Approximation Bounds for Graph Diameter and Eccentricities}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {267--280}, doi = {}, year = {2018}, }
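For orientation, the folklore near-linear 2-approximation that these lower bounds complement: a single BFS gives ecc(v), and ecc(v) ≤ Diameter ≤ 2·ecc(v) by the triangle inequality. A minimal sketch:

```python
# One BFS: eccentricity of a vertex, hence a linear-time 2-approximation
# of the diameter via ecc(v) <= D <= 2*ecc(v).
from collections import deque

def eccentricity(adj, s):
    dist = {s: 0}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return max(dist.values())

adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}  # a path: diameter 4
e = eccentricity(adj, 2)  # from the midpoint, ecc = 2
print(e, "<= diameter <=", 2 * e)
```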
Badrinarayanan, Saikrishna
Succinct Delegation for Low-Space Non-deterministic Computation
Saikrishna Badrinarayanan, Yael Tauman Kalai, Dakshita Khurana, Amit Sahai, and Daniel Wichs (University of California at Los Angeles, USA; Microsoft Research, USA; Massachusetts Institute of Technology, USA; Northeastern University, USA) We construct a delegation scheme for verifying non-deterministic computations, with complexity proportional only to the non-deterministic space of the computation. Specifically, letting n denote the input length, we construct a delegation scheme for any language verifiable in non-deterministic time and space (T(n), S(n)) with communication complexity poly(S(n)), verifier runtime n · polylog(T(n)) + poly(S(n)), and prover runtime poly(T(n)). Our scheme consists of only two messages and has adaptive soundness, assuming the existence of a sub-exponentially secure private information retrieval (PIR) scheme, which can be instantiated under standard (albeit, sub-exponential) cryptographic assumptions, such as the sub-exponential LWE assumption. Specifically, the verifier publishes a (short) public key ahead of time, and this key can be used by any prover to non-interactively prove the correctness of any adaptively chosen non-deterministic computation. Such a scheme is referred to as a non-interactive delegation scheme. Our scheme is privately verifiable, where the verifier needs the corresponding secret key in order to verify proofs. Prior to our work, such results were known only in the Random Oracle Model, or under knowledge assumptions. Our results yield succinct non-interactive arguments based on sub-exponential LWE, for many natural languages believed to be outside of P. @InProceedings{STOC18p709, author = {Saikrishna Badrinarayanan and Yael Tauman Kalai and Dakshita Khurana and Amit Sahai and Daniel Wichs}, title = {Succinct Delegation for Low-Space Non-deterministic Computation}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {709--721}, doi = {}, year = {2018}, }
Balkanski, Eric
The Adaptive Complexity of Maximizing a Submodular Function
Eric Balkanski and Yaron Singer (Harvard University, USA) In this paper we study the adaptive complexity of submodular optimization. Informally, the adaptive complexity of a problem is the minimal number of sequential rounds required to achieve a constant factor approximation when polynomially-many queries can be executed in parallel at each round. Adaptivity is a fundamental concept that is heavily studied in computer science, largely due to the need for parallelizing computation. Somewhat surprisingly, very little is known about adaptivity in submodular optimization. For the canonical problem of maximizing a monotone submodular function under a cardinality constraint, to the best of our knowledge, all that is known to date is that the adaptive complexity is between 1 and Ω(n). Our main result in this paper is a tight characterization showing that the adaptive complexity of maximizing a monotone submodular function under a cardinality constraint is Θ(log n): - We describe an algorithm which requires O(log n) sequential rounds and achieves an approximation that is arbitrarily close to 1/3; - We show that no algorithm can achieve an approximation better than O(1 / log n) with fewer than O(log n / log log n) rounds. Thus, when allowing for parallelization, our algorithm achieves a constant factor approximation exponentially faster than any known existing algorithm for submodular maximization. Importantly, the approximation algorithm is achieved via adaptive sampling and complements a recent line of work on optimization of functions learned from data. In many cases we do not know the functions we optimize and learn them from labeled samples. Recent results show that no algorithm can obtain a constant factor approximation guarantee using polynomially-many labeled samples as in the PAC and PMAC models, drawn from any distribution. Since learning with non-adaptive samples over any distribution results in a sharp impossibility, we consider learning with adaptive samples where the learner obtains poly(n) samples drawn from a distribution of her choice in every round. Our result implies that in the realizable case, where there is a true underlying function generating the data, Θ(log n) batches of adaptive samples are necessary and sufficient to approximately “learn to optimize” a monotone submodular function under a cardinality constraint. @InProceedings{STOC18p1138, author = {Eric Balkanski and Yaron Singer}, title = {The Adaptive Complexity of Maximizing a Submodular Function}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1138--1151}, doi = {}, year = {2018}, }
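For contrast with the O(log n) adaptive rounds achieved in the paper, the classical greedy algorithm for this problem is fully sequential, using k adaptive rounds (one per chosen element). A minimal sketch with a coverage function:

```python
# Classical (1 - 1/e)-greedy for monotone submodular maximization under a
# cardinality constraint: each of the k rounds adapts to the previous picks.
def greedy_submodular(ground, f, k):
    """f: set -> value, monotone submodular."""
    chosen = set()
    for _ in range(k):
        gain = lambda x: f(chosen | {x}) - f(chosen)   # marginal gain
        best = max((x for x in ground if x not in chosen), key=gain)
        chosen.add(best)
    return chosen

# Coverage function f(S) = |union of the sets in S| is monotone submodular.
sets = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5, 6}, "d": {1, 6}}
f = lambda s: len(set().union(*(sets[x] for x in s))) if s else 0
print(greedy_submodular(sets.keys(), f, k=2))  # {'a', 'c'} covers 6 elements
```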
Balliu, Alkida
New Classes of Distributed Time Complexity
Alkida Balliu, Juho Hirvonen, Janne H. Korhonen, Tuomo Lempiäinen, Dennis Olivetti, and Jukka Suomela (Aalto University, Finland; University of Freiburg, Germany) A number of recent papers – e.g. Brandt et al. (STOC 2016), Chang et al. (FOCS 2016), Ghaffari & Su (SODA 2017), Brandt et al. (PODC 2017), and Chang & Pettie (FOCS 2017) – have advanced our understanding of one of the most fundamental questions in the theory of distributed computing: what are the possible time complexity classes of LCL problems in the LOCAL model? In essence, we have a graph problem Π in which a solution can be verified by checking all radius-O(1) neighbourhoods, and the question is what is the smallest T such that a solution can be computed so that each node chooses its own output based on its radius-T neighbourhood. Here T is the distributed time complexity of Π. The time complexity classes for deterministic algorithms in bounded-degree graphs that are known to exist by prior work are Θ(1), Θ(log* n), Θ(log n), Θ(n^{1/k}), and Θ(n). It is also known that there are two gaps: one between ω(1) and o(log log* n), and another between ω(log* n) and o(log n). It has been conjectured that many more gaps exist, and that the overall time hierarchy is relatively simple – indeed, this is known to be the case in restricted graph families such as cycles and grids. We show that the picture is much more diverse than previously expected. We present a general technique for engineering LCL problems with numerous different deterministic time complexities, including Θ(log^α n) for any α ≥ 1, 2^{Θ(log^α n)} for any α ≤ 1, and Θ(n^α) for any α < 1/2 in the high end of the complexity spectrum, and Θ(log^α log* n) for any α ≥ 1, 2^{Θ(log^α log* n)} for any α ≤ 1, and Θ((log* n)^α) for any α ≤ 1 in the low end of the complexity spectrum; here α is a positive rational number. @InProceedings{STOC18p1307, author = {Alkida Balliu and Juho Hirvonen and Janne H. Korhonen and Tuomo Lempiäinen and Dennis Olivetti and Jukka Suomela}, title = {New Classes of Distributed Time Complexity}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1307--1318}, doi = {}, year = {2018}, }
Bansal, Nikhil
The Gram-Schmidt Walk: A Cure for the Banaszczyk Blues
Nikhil Bansal, Daniel Dadush, Shashwat Garg, and Shachar Lovett (Eindhoven University of Technology, Netherlands; CWI, Netherlands; University of California at San Diego, USA) An important result in discrepancy theory due to Banaszczyk states that for any set of n vectors in ℝ^m of ℓ_2 norm at most 1 and any convex body K in ℝ^m of Gaussian measure at least half, there exists a ±1 combination of these vectors which lies in 5K. This result implies the best known bounds for several problems in discrepancy. Banaszczyk’s proof of this result is non-constructive and an open problem has been to give an efficient algorithm to find such a ±1 combination of the vectors. In this paper, we resolve this question and give an efficient randomized algorithm to find a ±1 combination of the vectors which lies in cK for c > 0 an absolute constant. This leads to new efficient algorithms for several problems in discrepancy theory. @InProceedings{STOC18p587, author = {Nikhil Bansal and Daniel Dadush and Shashwat Garg and Shachar Lovett}, title = {The Gram-Schmidt Walk: A Cure for the Banaszczyk Blues}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {587--597}, doi = {}, year = {2018}, }
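A minimal numpy sketch of the setting (Komlós-type instances): unit ℓ_2-norm vectors and the trivial random signing, whose coordinates grow like sqrt(Σ_i v_{i,j}^2 · log m); taking K to be a cube of side O(sqrt(log m)) in Banaszczyk's theorem, now made algorithmic by this paper, gives an ℓ_∞ bound independent of n.

```python
# Random ±1 signing as the naive baseline for vector balancing.
import numpy as np

rng = np.random.default_rng(3)
n, m = 400, 50
vectors = rng.normal(size=(n, m))
vectors /= np.linalg.norm(vectors, axis=1, keepdims=True)  # ℓ_2 norm exactly 1

signs = rng.choice([-1.0, 1.0], size=n)
print("ℓ_∞ of random signing:", np.abs(signs @ vectors).max())
print("Banaszczyk-type target O(sqrt(log m)) ≈", np.sqrt(np.log(m)))
```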
Bateni, MohammadHossein
Fast Algorithms for Knapsack via Convolution and Prediction
MohammadHossein Bateni, MohammadTaghi Hajiaghayi, Saeed Seddighin, and Cliff Stein (Google, USA; University of Maryland, USA; Columbia University, USA) The knapsack problem is a fundamental problem in combinatorial optimization. It has been studied extensively from theoretical as well as practical perspectives as it is one of the most well-known NP-hard problems. The goal is to pack a knapsack of size t with the maximum value from a collection of n items with given sizes and values. Recent evidence suggests that a classic O(nt) dynamic-programming solution for the knapsack problem might be the fastest in the worst case. In fact, solving the knapsack problem was shown to be computationally equivalent to the (min, +) convolution problem, which is thought to be facing a quadratic-time barrier. This hardness is in contrast to the more famous (+, ·) convolution (generally known as polynomial multiplication), that has an O(n log n)-time solution via Fast Fourier Transform. Our main results are algorithms with near-linear running times (in terms of the size of the knapsack and the number of items) for the knapsack problem, if either the values or sizes of items are small integers. More specifically, if item sizes are integers bounded by s_max, the running time of our algorithm is Õ((n+t)·s_max). If the item values are integers bounded by v_max, our algorithm runs in time Õ(n + t·v_max). Best previously known running times were O(nt), O(n^2·s_max) and O(n·s_max·v_max) (Pisinger, J. of Alg., 1999). At the core of our algorithms lies the prediction technique: Roughly speaking, this new technique enables us to compute the convolution of two vectors in time Õ(n·e) when an approximation of the solution within an additive error of e is available. Our results also improve the best known strongly polynomial time solutions for knapsack. In the limited size setting, when the items have multiplicities, the fastest strongly polynomial time algorithms for knapsack run in time O(n^2·s_max^2) and O(n^3·s_max^2) for the cases of infinite and given multiplicities, respectively. Our results improve both running times by a factor of Õ(n·max{1, n/s_max}). @InProceedings{STOC18p1269, author = {MohammadHossein Bateni and MohammadTaghi Hajiaghayi and Saeed Seddighin and Cliff Stein}, title = {Fast Algorithms for Knapsack via Convolution and Prediction}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1269--1282}, doi = {}, year = {2018}, }
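The classic O(nt) dynamic program that the conditional lower bounds refer to, as a minimal sketch:

```python
# Textbook 0/1 knapsack DP: O(n * t) time, O(t) space.
def knapsack(items, t):
    """items: list of (size, value) pairs; t: knapsack capacity.
    best[c] = maximum value achievable within capacity c."""
    best = [0] * (t + 1)
    for size, value in items:
        # iterate capacities downwards so each item is used at most once
        for c in range(t, size - 1, -1):
            best[c] = max(best[c], best[c - size] + value)
    return best[t]

print(knapsack([(3, 4), (4, 5), (2, 3)], t=6))  # 8: take sizes 4 and 2
```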
Bezáková, Ivona
Inapproximability of the Independent Set Polynomial in the Complex Plane
Ivona Bezáková, Andreas Galanis, Leslie Ann Goldberg, and Daniel Štefankovič (Rochester Institute of Technology, USA; University of Oxford, UK; University of Rochester, USA) We study the complexity of approximating the value of the independent set polynomial Z_G(λ) of a graph G with maximum degree Δ when the activity λ is a complex number. When λ is real, the complexity picture is well-understood, and is captured by two real-valued thresholds λ* and λ_c, which depend on Δ and satisfy 0 < λ* < λ_c. It is known that if λ is a real number in the interval (−λ*, λ_c) then there is an FPTAS for approximating Z_G(λ) on graphs G with maximum degree at most Δ. On the other hand, if λ is a real number outside of the (closed) interval, then approximation is NP-hard. The key to establishing this picture was the interpretation of the thresholds λ* and λ_c on the Δ-regular tree. The "occupation ratio" of a Δ-regular tree T is the contribution to Z_T(λ) from independent sets containing the root of the tree, divided by Z_T(λ) itself. This occupation ratio converges to a limit, as the height of the tree grows, if and only if λ ∈ [−λ*, λ_c]. Unsurprisingly, the case where λ is complex is more challenging. It is known that there is an FPTAS when λ is a complex number with norm at most λ* and also when λ is in a small strip surrounding the real interval [0, λ_c). However, neither of these results is believed to fully capture the truth about when approximation is possible. Peters and Regts identified the values of λ for which the occupation ratio of the Δ-regular tree converges. These values carve a cardioid-shaped region Λ_Δ in the complex plane, whose boundary includes the critical points −λ* and λ_c. Motivated by the picture in the real case, they asked whether Λ_Δ marks the true approximability threshold for general complex values λ. Our main result shows that for every λ outside of Λ_Δ, the problem of approximating Z_G(λ) on graphs G with maximum degree at most Δ is indeed NP-hard. In fact, when λ is outside of Λ_Δ and is not a positive real number, we give the stronger result that approximating Z_G(λ) is actually #P-hard. Further, on the negative real axis, when λ < −λ*, we show that it is #P-hard to even decide whether Z_G(λ) > 0, resolving in the affirmative a conjecture of Harvey, Srivastava and Vondrák. Our proof techniques are based around tools from complex analysis — specifically the study of iterative multivariate rational maps. @InProceedings{STOC18p1234, author = {Ivona Bezáková and Andreas Galanis and Leslie Ann Goldberg and Daniel Štefankovič}, title = {Inapproximability of the Independent Set Polynomial in the Complex Plane}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1234--1240}, doi = {}, year = {2018}, }
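For concreteness, Z_G(λ) computed by brute force: it sums λ^{|I|} over all independent sets I of G, in exponential time, which is why the approximation question is the interesting one. A minimal sketch:

```python
# Brute-force independence polynomial; works for real or complex activities.
from itertools import combinations

def independence_polynomial(vertices, edges, lam):
    total = 0.0
    for r in range(len(vertices) + 1):
        for subset in combinations(vertices, r):
            s = set(subset)
            if all(not (u in s and v in s) for u, v in edges):  # independent?
                total += lam ** r
    return total

# A 4-cycle: Z(λ) = 1 + 4λ + 2λ^2, so Z(1) = 7.
print(independence_polynomial([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3), (3, 0)], 1.0))
```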
Bitansky, Nir
Multi-collision Resistance: A Paradigm for Keyless Hash Functions
Nir Bitansky, Yael Tauman Kalai, and Omer Paneth (Tel Aviv University, Israel; Microsoft, USA; Massachusetts Institute of Technology, USA) We introduce a new notion of multi-collision resistance for keyless hash functions. This is a natural relaxation of collision resistance where it is hard to find multiple inputs with the same hash in the following sense. The number of colliding inputs that a polynomial-time non-uniform adversary can find is not much larger than its advice. We discuss potential candidates for this notion and study its applications. Assuming the existence of such hash functions, we resolve the long-standing question of the round complexity of zero knowledge protocols --- we construct a 3-message zero knowledge argument against arbitrary polynomial-size non-uniform adversaries. We also improve the round complexity in several other central applications, including a 3-message succinct argument of knowledge for NP, a 4-message zero-knowledge proof, and a 5-message public-coin zero-knowledge argument. Our techniques can also be applied in the keyed setting, where we match the round complexity of known protocols while relaxing the underlying assumption from collision-resistance to keyed multi-collision resistance. The core technical contribution behind our results is a domain extension transformation from multi-collision-resistant hash functions for a fixed input length to ones with an arbitrary input length and a local opening property. The transformation is based on a combination of classical domain extension techniques, together with new information-theoretic tools. In particular, we define and construct a new variant of list-recoverable codes, which may be of independent interest. @InProceedings{STOC18p671, author = {Nir Bitansky and Yael Tauman Kalai and Omer Paneth}, title = {Multi-collision Resistance: A Paradigm for Keyless Hash Functions}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {671--684}, doi = {}, year = {2018}, }
Bläser, Markus
Generalized Matrix Completion and Algebraic Natural Proofs
Markus Bläser, Christian Ikenmeyer, Gorav Jindal, and Vladimir Lysikov (Saarland University, Germany; Max Planck Institute for Informatics, Germany) Algebraic natural proofs were recently introduced by Forbes, Shpilka and Volk (Proc. of the 49th Annual ACM SIGACT Symposium on Theory of Computing (STOC), pages 653–664, 2017) and independently by Grochow, Kumar, Saks and Saraf (CoRR, abs/1701.01717, 2017) as an attempt to transfer Razborov and Rudich’s famous barrier result (J. Comput. Syst. Sci., 55(1): 24–35, 1997) for Boolean circuit complexity to algebraic complexity theory. Razborov and Rudich’s barrier result relies on a widely believed assumption, namely, the existence of pseudo-random generators. Unfortunately, there is no known analogous theory of pseudo-randomness in the algebraic setting. Therefore, Forbes et al. use a concept called succinct hitting sets instead. This assumption is related to polynomial identity testing, but it is currently not clear how plausible this assumption is. Forbes et al. are only able to construct succinct hitting sets against rather weak models of arithmetic circuits. Generalized matrix completion is the following problem: Given a matrix with affine linear forms as entries, find an assignment to the variables in the linear forms such that the rank of the resulting matrix is minimal. We call this rank the completion rank. Computing the completion rank is an NP-hard problem. As our first main result, we prove that it is also NP-hard to determine whether a given matrix can be approximated by matrices of completion rank ≤ b. The minimum quantity b for which this is possible is called border completion rank (similar to the border rank of tensors). Naturally, algebraic natural proofs can only prove lower bounds for such border complexity measures. Furthermore, these border complexity measures play an important role in the geometric complexity program. Using our hardness result above, we can prove the following barrier: We construct a small family of matrices with affine linear forms as entries and a bound b, such that at least one of these matrices does not have an algebraic natural proof of polynomial size against all matrices of border completion rank b, unless coNP ⊆ ∃ BPP. This is an algebraic barrier result that is based on a well-established and widely believed conjecture. The complexity class ∃ BPP is known to be a subset of the more well-known complexity class MA in the literature. Thus ∃ BPP can be replaced by MA in the statements of all our results. With similar techniques, we can also prove that tensor rank is hard to approximate. Furthermore, we prove a similar result for the variety of matrices with permanent zero. There are no algebraic polynomial size natural proofs for the variety of matrices with permanent zero, unless P^{#P} ⊆ ∃ BPP. On the other hand, we are able to prove that the geometric complexity theory approach initiated by Mulmuley and Sohoni (SIAM J. Comput. 31(2): 496–526, 2001) yields proofs of polynomial size for this variety, therefore overcoming the natural proofs barrier in this case. @InProceedings{STOC18p1193, author = {Markus Bläser and Christian Ikenmeyer and Gorav Jindal and Vladimir Lysikov}, title = {Generalized Matrix Completion and Algebraic Natural Proofs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1193--1206}, doi = {}, year = {2018}, }
Bodlaender, Hans L.
A Framework for ETH-Tight Algorithms and Lower Bounds in Geometric Intersection Graphs
Mark de Berg, Hans L. Bodlaender, Sándor Kisfaludi-Bak, Dániel Marx, and Tom C. van der Zanden (Eindhoven University of Technology, Netherlands; Utrecht University, Netherlands; Hungarian Academy of Sciences, Hungary) We give an algorithmic and lower-bound framework that facilitates the construction of subexponential algorithms and matching conditional complexity bounds. It can be applied to a wide range of geometric intersection graphs (intersections of similarly sized fat objects), yielding algorithms with running time 2^{O(n^{1−1/d})} for any fixed dimension d ≥ 2 for many well known graph problems, including Independent Set, r-Dominating Set for constant r, and Steiner Tree. For most problems, we get improved running times compared to prior work; in some cases, we give the first known subexponential algorithm in geometric intersection graphs. Additionally, most of the obtained algorithms work on the graph itself, i.e., do not require any geometric information. Our algorithmic framework is based on a weighted separator theorem and various treewidth techniques. The lower-bound framework is based on a constructive embedding of graphs into d-dimensional grids, and it allows us to derive matching 2^{Ω(n^{1−1/d})} lower bounds under the Exponential Time Hypothesis even in the much more restricted class of d-dimensional induced grid graphs. @InProceedings{STOC18p574, author = {Mark de Berg and Hans L. Bodlaender and Sándor Kisfaludi-Bak and Dániel Marx and Tom C. van der Zanden}, title = {A Framework for ETH-Tight Algorithms and Lower Bounds in Geometric Intersection Graphs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {574--586}, doi = {}, year = {2018}, }
Bonacina, Ilario
Clique Is Hard on Average for Regular Resolution
Albert Atserias, Ilario Bonacina, Susanna F. de Rezende, Massimo Lauria, Jakob Nordström, and Alexander Razborov (Universitat Politècnica de Catalunya, Spain; KTH, Sweden; Sapienza University of Rome, Italy; University of Chicago, USA; Russian Academy of Sciences, Russia) We prove that for k ≪ n^{1/4} regular resolution requires length n^{Ω(k)} to establish that an Erdős–Rényi graph with appropriately chosen edge density does not contain a k-clique. This lower bound is optimal up to the multiplicative constant in the exponent, and also implies unconditional n^{Ω(k)} lower bounds on running time for several state-of-the-art algorithms for finding maximum cliques in graphs. @InProceedings{STOC18p866, author = {Albert Atserias and Ilario Bonacina and Susanna F. de Rezende and Massimo Lauria and Jakob Nordström and Alexander Razborov}, title = {Clique Is Hard on Average for Regular Resolution}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {866--877}, doi = {}, year = {2018}, }
Brand, Cornelius
Extensor-Coding
Cornelius Brand, Holger Dell, and Thore Husfeldt (Saarland University, Germany; M2CI, Germany; Lund University, Sweden; IT University of Copenhagen, Denmark) We devise an algorithm that approximately computes the number of paths of length k in a given directed graph with n vertices up to a multiplicative error of 1 ± ε. Our algorithm runs in time ε^{−2} · 4^k · (n+m) · poly(k). The algorithm is based on associating with each vertex an element in the exterior (or, Grassmann) algebra, called an extensor, and then performing computations in this algebra. This connection to exterior algebra generalizes a number of previous approaches for the longest path problem and is of independent conceptual interest. Using this approach, we also obtain a deterministic 2^k · poly(n) time algorithm to find a k-path in a given directed graph that is promised to have few of them. Our results and techniques generalize to the subgraph isomorphism problem when the subgraphs we are looking for have bounded pathwidth. Finally, we also obtain a randomized algorithm to detect k-multilinear terms in a multivariate polynomial given as a general algebraic circuit. To the best of our knowledge, this was previously only known for algebraic circuits not involving negative constants. @InProceedings{STOC18p151, author = {Cornelius Brand and Holger Dell and Thore Husfeldt}, title = {Extensor-Coding}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {151--164}, doi = {}, year = {2018}, }
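For scale, the exact brute-force baseline: counting simple k-vertex paths by DFS takes roughly n^k time in the worst case, versus the ε^{−2} · 4^k · (n+m) · poly(k) approximate counter above. A minimal sketch:

```python
# Exact k-path counting by exhaustive DFS (exponential-time baseline).
def count_k_paths(adj, k):
    """adj: dict vertex -> list of out-neighbours (directed graph).
    Counts simple paths on exactly k vertices."""
    def extend(end, visited, length):
        if length == k:
            return 1
        return sum(
            extend(w, visited | {w}, length + 1)
            for w in adj[end] if w not in visited
        )
    return sum(extend(v, {v}, 1) for v in adj)

adj = {0: [1, 2], 1: [2], 2: [], 3: [0]}
print(count_k_paths(adj, 3))  # 3 paths: 0->1->2, 3->0->1, 3->0->2
```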
Braverman, Mark
Hitting Sets with Near-Optimal Error for Read-Once Branching Programs
Mark Braverman, Gil Cohen, and Sumegha Garg (Princeton University, USA) Nisan (Combinatorica’92) constructed a pseudorandom generator for length n, width n read-once branching programs (ROBPs) with error ε and seed length O(log^2 n + log n · log(1/ε)). A major goal in complexity theory is to reduce the seed length, hopefully, to the optimal O(log n + log(1/ε)), or to construct improved hitting sets, as these would yield stronger derandomization of BPL and RL, respectively. In contrast to a successful line of work in restricted settings, no progress has been made for general, unrestricted, ROBPs. Indeed, Nisan’s construction is the best pseudorandom generator and, prior to this work, also the best hitting set for unrestricted ROBPs. In this work, we make the first improvement for the general case by constructing a hitting set with seed length O(log^2 n + log(1/ε)). That is, we decouple ε and n, and obtain near-optimal dependence on the former. The regime of parameters in which our construction strictly improves upon prior works, namely, log(1/ε) ≫ log n, is well-motivated by the work of Saks and Zhou (J.CSS’99) who use pseudorandom generators with error ε = 2^{−(log n)^2} in their proof for BPL ⊆ L^{3/2}. We further suggest a research program towards proving that BPL ⊆ L^{4/3} in which our result achieves one step. As our main technical tool, we introduce and construct a new type of primitive we call pseudorandom pseudo-distributions. Informally, this is a generalization of pseudorandom generators in which one may assign negative and unbounded weights to paths as opposed to working with probability distributions. We show that such a primitive yields hitting sets and, for derandomization purposes, can be used to derandomize two-sided error algorithms. @InProceedings{STOC18p353, author = {Mark Braverman and Gil Cohen and Sumegha Garg}, title = {Hitting Sets with Near-Optimal Error for Read-Once Branching Programs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {353--362}, doi = {}, year = {2018}, }

Interactive Compression to External Information
Mark Braverman and Gillat Kol (Princeton University, USA) We describe a new way of compressing two-party communication protocols to get protocols with potentially smaller communication. We show that every communication protocol that communicates C bits and reveals I bits of information about the participants’ private inputs to an observer that watches the communication, can be simulated by a new protocol that communicates at most poly(I) · loglog(C) bits. Our result is tight up to polynomial factors, as it matches the recent work separating communication complexity from external information cost. @InProceedings{STOC18p964, author = {Mark Braverman and Gillat Kol}, title = {Interactive Compression to External Information}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {964--977}, doi = {}, year = {2018}, }
|
Bringmann, Karl |
STOC '18: "Fast Fencing ..."
Fast Fencing
Mikkel Abrahamsen, Anna Adamaszek, Karl Bringmann, Vincent Cohen-Addad, Mehran Mehr, Eva Rotenberg, Alan Roytman, and Mikkel Thorup (University of Copenhagen, Denmark; Max Planck Institute for Informatics, Germany; Sorbonne University, France; UPMC, France; CNRS, France; LIP6, France; Eindhoven University of Technology, Netherlands; DTU, Denmark) We consider very natural "fence enclosure" problems studied by Capoyleas, Rote, and Woeginger and Arkin, Khuller, and Mitchell in the early 90s. Given a set S of n points in the plane, we aim at finding a set of closed curves such that (1) each point is enclosed by a curve and (2) the total length of the curves is minimized. We consider two main variants. In the first variant, we pay a unit cost per curve in addition to the total length of the curves. An equivalent formulation of this version is that we have to enclose n unit disks, paying only the total length of the enclosing curves. In the other variant, we are allowed to use at most k closed curves and pay no cost per curve. For the variant with at most k closed curves, we present an algorithm that is polynomial in both n and k. For the variant with unit cost per curve, or unit disks, we present a near-linear time algorithm. Capoyleas, Rote, and Woeginger solved the problem with at most k curves in n^{O(k)} time. Arkin, Khuller, and Mitchell used this to solve the unit cost per curve version in exponential time. At the time, they conjectured that the problem with k curves is NP-hard for general k. Our polynomial time algorithm refutes this unless P equals NP. @InProceedings{STOC18p564, author = {Mikkel Abrahamsen and Anna Adamaszek and Karl Bringmann and Vincent Cohen-Addad and Mehran Mehr and Eva Rotenberg and Alan Roytman and Mikkel Thorup}, title = {Fast Fencing}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {564--573}, doi = {}, year = {2018}, } STOC '18: "More Consequences of Falsifying ..." More Consequences of Falsifying SETH and the Orthogonal Vectors Conjecture Amir Abboud, Karl Bringmann, Holger Dell, and Jesper Nederlof (IBM Research, USA; Max Planck Institute for Informatics, Germany; Saarland University, Germany; M2CI, Germany; Eindhoven University of Technology, Netherlands) The Strong Exponential Time Hypothesis and the OV-conjecture are two popular hardness assumptions used to prove a plethora of lower bounds, especially in the realm of polynomial-time algorithms. The OV-conjecture in moderate dimension states there is no ε > 0 for which an O(N^{2−ε}) poly(D) time algorithm can decide whether there is a pair of orthogonal vectors in a given set of size N that contains D-dimensional binary vectors. We strengthen the evidence for these hardness assumptions. In particular, we show that if the OV-conjecture fails, then two problems for which we are far from obtaining even tiny improvements over exhaustive search would have surprisingly fast algorithms. If the OV conjecture is false, then there is a fixed ε > 0 such that: - For all d and all large enough k, there is a randomized algorithm that takes O(n^{(1−ε)k}) time to solve the Zero-Weight-k-Clique and Min-Weight-k-Clique problems on d-hypergraphs with n vertices. As a consequence, the OV-conjecture is implied by the Weighted Clique conjecture. - For all c, the satisfiability of sparse TC^1 circuits on n inputs (that is, circuits with cn wires, depth c log n, and negation, AND, OR, and threshold gates) can be computed in time O((2−ε)^n).
@InProceedings{STOC18p253, author = {Amir Abboud and Karl Bringmann and Holger Dell and Jesper Nederlof}, title = {More Consequences of Falsifying SETH and the Orthogonal Vectors Conjecture}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {253--266}, doi = {}, year = {2018}, } |
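As a concrete warm-up to Fast Fencing: when a single closed curve must enclose all points, the optimal fence is the convex hull, so the optimal cost is the hull's perimeter. A small Python sketch under that assumption (standard monotone-chain hull; function names are ours):

from math import hypot

def _cross(o, a, b):
    # Cross product of vectors o->a and o->b; positive for a left turn.
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def convex_hull(points):
    # Andrew's monotone chain; returns hull vertices in counter-clockwise order.
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and _cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and _cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def fence_length_single_curve(points):
    # With one curve (k = 1), the optimal enclosure is the convex hull.
    h = convex_hull(points)
    return sum(hypot(h[i][0]-h[i-1][0], h[i][1]-h[i-1][1])
               for i in range(len(h)))

The interesting regimes in the paper are k > 1 and the unit-cost-per-curve variant, where deciding how to cluster the points into curves is the whole difficulty.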
|
Bubeck, Sébastien |
STOC '18: "k-Server via Multiscale Entropic ..."
k-Server via Multiscale Entropic Regularization
Sébastien Bubeck, Michael B. Cohen, Yin Tat Lee, James R. Lee, and Aleksander Mądry (Microsoft Research, USA; Massachusetts Institute of Technology, USA; University of Washington, USA) We present an O((log k)^2)-competitive randomized algorithm for the k-server problem on hierarchically separated trees (HSTs). This is the first o(k)-competitive randomized algorithm for which the competitive ratio is independent of the size of the underlying HST. Our algorithm is designed in the framework of online mirror descent where the mirror map is a multiscale entropy. When combined with Bartal’s static HST embedding reduction, this leads to an O((log k)^2 log n)-competitive algorithm on any n-point metric space. We give a new dynamic HST embedding that yields an O((log k)^3 log Δ)-competitive algorithm on any metric space where the ratio of the largest to smallest non-zero distance is at most Δ. @InProceedings{STOC18p3, author = {Sébastien Bubeck and Michael B. Cohen and Yin Tat Lee and James R. Lee and Aleksander Mądry}, title = {k-Server via Multiscale Entropic Regularization}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {3--16}, doi = {}, year = {2018}, } STOC '18: "An Homotopy Method for lp ..." An Homotopy Method for lp Regression Provably beyond Self-Concordance and in Input-Sparsity Time Sébastien Bubeck, Michael B. Cohen, Yin Tat Lee, and Yuanzhi Li (Microsoft Research, USA; Massachusetts Institute of Technology, USA; University of Washington, USA; Princeton University, USA) We consider the problem of linear regression where the ℓ_2^n norm loss (i.e., the usual least squares loss) is replaced by the ℓ_p^n norm. We show how to solve such problems up to machine precision in Õ_p(n^{|1/2 − 1/p|}) (dense) matrix-vector products and Õ_p(1) matrix inversions, or alternatively in Õ_p(n^{|1/2 − 1/p|}) calls to a (sparse) linear system solver. This improves the state of the art for any p ∉ {1,2,+∞}. Furthermore we also propose a randomized algorithm solving such problems in input sparsity time, i.e., Õ_p(N + poly(d)) where N is the size of the input and d is the number of variables. Such a result was only known for p=2. Finally we prove that these results lie outside the scope of the Nesterov–Nemirovski theory of interior point methods by showing that any symmetric self-concordant barrier on the ℓ_p^n unit ball has self-concordance parameter Ω(n). @InProceedings{STOC18p1130, author = {Sébastien Bubeck and Michael B. Cohen and Yin Tat Lee and Yuanzhi Li}, title = {An Homotopy Method for l<sub>p</sub> Regression Provably beyond Self-Concordance and in Input-Sparsity Time}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1130--1137}, doi = {}, year = {2018}, } |
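The homotopy method itself is involved; as a point of reference for the ℓ_p regression problem it solves, here is the classical iteratively reweighted least squares (IRLS) baseline for min_x ||Ax − b||_p, sketched in Python with numpy. This is our illustration, not the paper's algorithm, and practical IRLS needs damping and safeguards that are omitted here for brevity.

import numpy as np

def irls_lp_regression(A, b, p, iters=50, eps=1e-8):
    # Each step solves a weighted least-squares problem whose weights
    # |r_i|^{p-2} come from the current residual r = Ax - b, so the
    # weighted l2 objective locally mimics the lp objective.  The array w
    # holds the square roots of those weights, clipped away from zero.
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    for _ in range(iters):
        r = A @ x - b
        w = np.maximum(np.abs(r), eps) ** ((p - 2) / 2.0)
        x = np.linalg.lstsq(w[:, None] * A, w * b, rcond=None)[0]
    return x

Minimizing the reweighted sum Σ |r_i|^{p−2} (a_i·x − b_i)^2 is exactly the least-squares call above, since the square roots of the weights are folded into both A and b.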
|
Bun, Mark |
STOC '18: "Composable and Versatile Privacy ..."
Composable and Versatile Privacy via Truncated CDP
Mark Bun, Cynthia Dwork, Guy N. Rothblum, and Thomas Steinke (Princeton University, USA; Harvard University, USA; Weizmann Institute of Science, Israel; IBM Research, USA) We propose truncated concentrated differential privacy (tCDP), a refinement of differential privacy and of concentrated differential privacy. This new definition provides robust and efficient composition guarantees, supports powerful algorithmic techniques such as privacy amplification via sub-sampling, and enables more accurate statistical analyses. In particular, we show a central task for which the new definition enables exponential accuracy improvement. @InProceedings{STOC18p74, author = {Mark Bun and Cynthia Dwork and Guy N. Rothblum and Thomas Steinke}, title = {Composable and Versatile Privacy via Truncated CDP}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {74--86}, doi = {}, year = {2018}, } STOC '18: "The Polynomial Method Strikes ..." The Polynomial Method Strikes Back: Tight Quantum Query Bounds via Dual Polynomials Mark Bun, Robin Kothari, and Justin Thaler (Princeton University, USA; Microsoft Research, USA; Georgetown University, USA) The approximate degree of a Boolean function f is the least degree of a real polynomial that approximates f pointwise to error at most 1/3. The approximate degree of f is known to be a lower bound on the quantum query complexity of f (Beals et al., FOCS 1998 and J. ACM 2001). We resolve or nearly resolve the approximate degree and quantum query complexities of several basic functions. Specifically, we show the following: *k-distinctness: For any constant k, the approximate degree and quantum query complexity of the k-distinctness function is Ω(n^{3/4−1/(2k)}). This is nearly tight for large k, as Belovs (FOCS 2012) has shown that for any constant k, the approximate degree and quantum query complexity of k-distinctness is O(n^{3/4−1/(2^{k+2}−4)}). *Image Size Testing: The approximate degree and quantum query complexity of testing the size of the image of a function [n] → [n] is Ω(n^{1/2}). This proves a conjecture of Ambainis et al. (SODA 2016), and it implies tight lower bounds on the approximate degree and quantum query complexity of the following natural problems. **k-junta testing: A tight Ω(k^{1/2}) lower bound for k-junta testing, answering the main open question of Ambainis et al. (SODA 2016). **Statistical Distance from Uniform: A tight Ω(n^{1/2}) lower bound for approximating the statistical distance from uniform of a distribution, answering the main question left open by Bravyi et al. (STACS 2010 and IEEE Trans. Inf. Theory 2011). **Shannon entropy: A tight Ω(n^{1/2}) lower bound for approximating Shannon entropy up to a certain additive constant, answering a question of Li and Wu (2017). *Surjectivity: The approximate degree of the Surjectivity function is Ω(n^{3/4}). The best prior lower bound was Ω(n^{2/3}). Our result matches an upper bound of Õ(n^{3/4}) due to Sherstov, which we reprove using different techniques. The quantum query complexity of this function is known to be Θ(n) (Beame and Machmouchi, Quantum Inf. Comput. 2012 and Sherstov, FOCS 2015). Our upper bound for Surjectivity introduces new techniques for approximating Boolean functions by low-degree polynomials. Our lower bounds are proved by significantly refining techniques recently introduced by Bun and Thaler (FOCS 2017).
@InProceedings{STOC18p297, author = {Mark Bun and Robin Kothari and Justin Thaler}, title = {The Polynomial Method Strikes Back: Tight Quantum Query Bounds via Dual Polynomials}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {297--310}, doi = {}, year = {2018}, } |
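Approximate degree, the quantity driving the second paper, has a standard linear-programming characterization: for a fixed degree d, the least pointwise error of a degree-d approximation is the optimum of a small LP, and optimal dual solutions are exactly the dual polynomials of the title. A brute-force Python/scipy sketch for small n, with names of our choosing:

from itertools import combinations, product
from scipy.optimize import linprog

def best_degree_d_error(f, n, d):
    # Minimize eps subject to |p(x) - f(x)| <= eps for all x in {0,1}^n,
    # where p ranges over degree-<=d multilinear polynomials.  The
    # approximate degree of f is the least d with optimum <= 1/3.
    monomials = [S for j in range(d + 1) for S in combinations(range(n), j)]
    rows, rhs = [], []
    for x in product([0, 1], repeat=n):
        row = [1.0 if all(x[i] for i in S) else 0.0 for S in monomials]
        fx = float(f(x))
        rows.append(row + [-1.0]); rhs.append(fx)                 #  p(x) - eps <= f(x)
        rows.append([-v for v in row] + [-1.0]); rhs.append(-fx)  # -p(x) - eps <= -f(x)
    c = [0.0] * len(monomials) + [1.0]                            # objective: eps
    bounds = [(None, None)] * len(monomials) + [(0, None)]
    return linprog(c, A_ub=rows, b_ub=rhs, bounds=bounds, method="highs").fun

For example, best_degree_d_error(lambda x: int(any(x)), 4, d) traces how the error of approximating OR on 4 bits decays as d grows.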
|
Byrka, Jarosław |
STOC '18: "Constant-Factor Approximation ..."
Constant-Factor Approximation for Ordered k-Median
Jarosław Byrka, Krzysztof Sornat, and Joachim Spoerhase (University of Wrocław, Poland) We study the Ordered k-Median problem, in which the solution is evaluated by first sorting the client connection costs and then multiplying them with a predefined non-increasing weight vector (higher connection costs are taken with larger weights). Since the 1990s, this problem has been studied extensively in the discrete optimization and operations research communities and has emerged as a framework unifying many fundamental clustering and location problems such as k-Median and k-Center. Obtaining non-trivial approximation algorithms was an open problem even for simple topologies such as trees. Recently, Aouad and Segev (2017) were able to obtain an O(log n) approximation algorithm for Ordered k-Median using a sophisticated local-search approach. The existence of a constant-factor approximation algorithm, however, remained open even for the rectangular weight vector. In this paper, we provide an LP-rounding constant-factor approximation algorithm for the Ordered k-Median problem. We achieve this result by revealing an interesting connection to the classic k-Median problem. In particular, we propose a novel LP relaxation that uses the constraints of the natural LP relaxation for k-Median but minimizes over a non-metric, distorted cost vector. This cost function (approximately) emulates the weighting of distances in an optimum solution and can be guessed by means of a clever enumeration scheme of Aouad and Segev. Although the resulting LP has an unbounded integrality gap, we can show that the LP rounding process by Charikar and Li (2012) for k-Median, operating on the original, metric space, gives a constant-factor approximation when relating not only to the LP value but also to a combinatorial bound derived from the guessing phase. To analyze the rounding process under the non-linear, ranking-based objective of Ordered k-Median, we employ several new ideas and technical ingredients that we believe could be of interest in some of the numerous other settings related to ordered, weighted cost functions. @InProceedings{STOC18p620, author = {Jarosław Byrka and Krzysztof Sornat and Joachim Spoerhase}, title = {Constant-Factor Approximation for Ordered k-Median}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {620--631}, doi = {}, year = {2018}, } |
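The ordered objective itself is one line of code: sort the client connection costs non-increasingly and take their inner product with the non-increasing weight vector. A toy evaluator (ours, for illustration):

def ordered_k_median_cost(costs, weights):
    # costs: each client's distance to its nearest open facility.
    # weights: predefined non-increasing weight vector, one per client;
    # the largest cost is paired with the largest weight, and so on.
    assert len(costs) == len(weights)
    return sum(c * w for c, w in zip(sorted(costs, reverse=True), weights))

The all-ones weight vector recovers k-Median, and weights = [1, 0, ..., 0] recovers k-Center, which is the sense in which the problem unifies both.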
|
Błasiok, Jarosław |
STOC '18: "General Strong Polarization ..."
General Strong Polarization
Jarosław Błasiok, Venkatesan Guruswami, Preetum Nakkiran, Atri Rudra, and Madhu Sudan (Harvard University, USA; Carnegie Mellon University, USA; SUNY Buffalo, USA) Arikan’s exciting discovery of polar codes has provided an altogether new way to efficiently achieve Shannon capacity. Given a (constant-sized) invertible matrix M, a family of polar codes can be associated with this matrix and its ability to approach capacity follows from the polarization of an associated [0,1]-bounded martingale, namely its convergence in the limit to either 0 or 1 with probability 1. Arikan showed appropriate polarization of the martingale associated with the matrix G_2 = ( 1 0 ; 1 1 ) to get capacity achieving codes. His analysis was later extended to all matrices M which satisfy an obvious necessary condition for polarization. While Arikan’s theorem does not guarantee that the codes achieve capacity at small blocklengths (specifically in length which is a polynomial in 1/ε where ε is the difference between the capacity of a channel and the rate of the code), it turns out that a “strong” analysis of the polarization of the underlying martingale would lead to such constructions. Indeed for the martingale associated with G_2 such a strong polarization was shown in two independent works ([Guruswami and Xia, IEEE IT ’15] and [Hassani et al., IEEE IT ’14]), thereby resolving a major theoretical challenge associated with the efficient attainment of Shannon capacity. In this work we extend the result above to cover martingales associated with all matrices that satisfy the necessary condition for (weak) polarization. In addition to being vastly more general, our proofs of strong polarization are (in our view) also much simpler and modular. Key to our proof is a notion of local polarization that only depends on the evolution of the martingale in a single time step. We show that local polarization always implies strong polarization. We then apply relatively simple reasoning about conditional entropies to prove local polarization in very general settings. Specifically, our result shows strong polarization over all prime fields and leads to efficient capacity-achieving source codes for compressing arbitrary i.i.d. sources, and capacity-achieving channel codes for arbitrary symmetric memoryless channels. @InProceedings{STOC18p485, author = {Jarosław Błasiok and Venkatesan Guruswami and Preetum Nakkiran and Atri Rudra and Madhu Sudan}, title = {General Strong Polarization}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {485--492}, doi = {}, year = {2018}, } |
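For the binary erasure channel, the martingale in question has a fully explicit form: the erasure (Bhattacharyya) parameter z of a synthesized channel splits into z^2 or 2z − z^2, each with probability 1/2, so E[z′] = z. Simulating it makes the convergence to {0,1} visible. This is the standard textbook special case, not the paper's general construction:

import random

def polarize_bec(z0=0.5, steps=20, trials=10000):
    # Track the fraction of sample paths of the BEC polarization martingale
    # that end within 1e-3 of 0 or 1 after the given number of steps.
    near_01 = 0
    for _ in range(trials):
        z = z0
        for _ in range(steps):
            z = z * z if random.random() < 0.5 else 2 * z - z * z
        if z < 1e-3 or z > 1 - 1e-3:
            near_01 += 1
    return near_01 / trials

"Strong" polarization, in the paper's sense, is quantitative: the fraction returned above must approach 1 fast enough in the number of steps.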
|
Canonne, Clément L. |
STOC '18: "Testing Conditional Independence ..."
Testing Conditional Independence of Discrete Distributions
Clément L. Canonne, Ilias Diakonikolas, Daniel M. Kane, and Alistair Stewart (Stanford University, USA; University of Southern California, USA; University of California at San Diego, USA) We study the problem of testing *conditional independence* for discrete distributions. Specifically, given samples from a discrete random variable (X, Y, Z) on domain [ℓ_1] × [ℓ_2] × [n], we want to distinguish, with probability at least 2/3, between the case that X and Y are conditionally independent given Z from the case that (X, Y, Z) is ε-far, in ℓ_1-distance, from every distribution that has this property. Conditional independence is a concept of central importance in probability and statistics with important applications in various scientific domains. As such, the statistical task of testing conditional independence has been extensively studied in various forms within the statistics and econometrics community for nearly a century. Perhaps surprisingly, this problem has not been previously considered in the framework of distribution property testing and in particular no tester with *sublinear* sample complexity is known, even for the important special case that the domains of X and Y are binary. The main algorithmic result of this work is the first conditional independence tester with sublinear sample complexity. To complement our upper bound, we prove information-theoretic lower bounds establishing that the sample complexity of our algorithm is optimal, up to constant factors, for a number of settings. Specifically, for the prototypical setting when ℓ_1, ℓ_2 = O(1), we show that the sample complexity of testing conditional independence (upper bound and matching lower bound) is O(max(√n/ε^2, min(n^{7/8}/ε, n^{6/7}/ε^{8/7}))). We also achieve sublinear sample complexity for the general case of arbitrary ℓ_1, ℓ_2, and n. To obtain our tester, we employ a variety of tools, including (1) a subtle weighted adaptation of the "flattening" technique, and (2) the design and analysis of an optimal (unbiased) estimator for the following statistical problem of independent interest: Given a degree-d polynomial Q: ℝ^n → ℝ and sample access to a distribution p over [n], estimate Q(p_1, …, p_n) up to small additive error. Obtaining tight variance analyses for specific estimators of this form has been a major technical hurdle in distribution testing. As an important contribution of this work, we develop a general theory providing tight variance bounds for *all* such estimators. @InProceedings{STOC18p735, author = {Clément L. Canonne and Ilias Diakonikolas and Daniel M. Kane and Alistair Stewart}, title = {Testing Conditional Independence of Discrete Distributions}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {735--748}, doi = {}, year = {2018}, } |
|
Chakraborty, Diptarka |
STOC '18: "Tight Cell Probe Bounds for ..."
Tight Cell Probe Bounds for Succinct Boolean Matrix-Vector Multiplication
Diptarka Chakraborty, Lior Kamma, and Kasper Green Larsen (Charles University in Prague, Czechia; Aarhus University, Denmark) The conjectured hardness of Boolean matrix-vector multiplication has been used with great success to prove conditional lower bounds for numerous important data structure problems, see Henzinger et al. [STOC’15]. In recent work, Larsen and Williams [SODA’17] attacked the problem from the upper bound side and gave a surprising cell probe data structure (that is, we only charge for memory accesses, while computation is free). Their cell probe data structure answers queries in Õ(n^{7/4}) time and is succinct in the sense that it stores the input matrix in read-only memory, plus an additional Õ(n^{7/4}) bits on the side. In this paper, we essentially settle the cell probe complexity of succinct Boolean matrix-vector multiplication. We present a new cell probe data structure with query time Õ(n^{3/2}) storing just Õ(n^{3/2}) bits on the side. We then complement our data structure with a lower bound showing that any data structure storing r bits on the side, with n < r < n^2, must have query time t satisfying t·r = Ω(n^3). For r ≤ n, any data structure must have t = Ω(n^2). Since lower bounds in the cell probe model also apply to classic word-RAM data structures, the lower bounds naturally carry over. We also prove similar lower bounds for matrix-vector multiplication over F_2. @InProceedings{STOC18p1297, author = {Diptarka Chakraborty and Lior Kamma and Kasper Green Larsen}, title = {Tight Cell Probe Bounds for Succinct Boolean Matrix-Vector Multiplication}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1297--1306}, doi = {}, year = {2018}, } |
|
Chang, Yi-Jun |
STOC '18: "An Optimal Distributed (Δ+1)-Coloring ..."
An Optimal Distributed (Δ+1)-Coloring Algorithm?
Yi-Jun Chang, Wenzheng Li, and Seth Pettie (University of Michigan, USA; Tsinghua University, China) Vertex coloring is one of the classic symmetry breaking problems studied in distributed computing. In this paper we present a new algorithm for (Δ+1)-list coloring in the randomized LOCAL model running in O(log* n + Det_d(poly log n)) time, where Det_d(n′) is the deterministic complexity of (deg+1)-list coloring (v’s palette has size deg(v)+1) on n′-vertex graphs. This improves upon a previous randomized algorithm of Harris, Schneider, and Su (STOC 2016), with complexity O(√(log Δ) + log log n + Det_d(poly log n)), and (when Δ is sufficiently large) is much faster than the best known deterministic algorithm of Fraigniaud, Heinrich, and Kosowski (FOCS 2016), with complexity O(√Δ log^{2.5} Δ + log* n). Our algorithm appears to be optimal. It matches the Ω(log* n) randomized lower bound, due to Naor (SIDMA 1991) and sort of matches the Ω(Det(poly log n)) randomized lower bound due to Chang, Kopelowitz, and Pettie (FOCS 2016), where Det is the deterministic complexity of (Δ+1)-list coloring. The best known upper bounds on Det_d(n′) and Det(n′) are both 2^{O(√(log n′))} by Panconesi and Srinivasan (Journal of Algorithms 1996), and it is quite plausible that the complexities of both problems are the same, asymptotically. @InProceedings{STOC18p445, author = {Yi-Jun Chang and Wenzheng Li and Seth Pettie}, title = {An Optimal Distributed (Δ+1)-Coloring Algorithm?}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {445--456}, doi = {}, year = {2018}, } |
|
Chattopadhyay, Arkadev |
STOC '18: "Simulation Beats Richness: ..."
Simulation Beats Richness: New Data-Structure Lower Bounds
Arkadev Chattopadhyay, Michal Koucký, Bruno Loff, and Sagnik Mukhopadhyay (TIFR, India; Charles University in Prague, Czechia; INESC TEC, Portugal; University of Porto, Portugal; KTH, Sweden) We develop a new technique for proving lower bounds in the setting of asymmetric communication, a model that was introduced in the famous works of Miltersen (STOC’94) and Miltersen, Nisan, Safra and Wigderson (STOC’95). At the core of our technique is the first simulation theorem in the asymmetric setting, where Alice gets a p × n matrix x over F_2 and Bob gets a vector y ∈ F_2^n. Alice and Bob need to evaluate f(x·y) for a Boolean function f: {0,1}^p → {0,1}. Our simulation theorems show that a deterministic/randomized communication protocol exists for this problem, with cost C·n for Alice and C for Bob, if and only if there exists a deterministic/randomized *parity decision tree* of cost Θ(C) for evaluating f. As applications of this technique, we obtain the following results: 1. The first strong lower-bounds against randomized data-structure schemes for the Vector-Matrix-Vector product problem over F_2. Moreover, our method yields strong lower bounds even when the data-structure scheme has tiny advantage over random guessing. 2. The first lower bounds against randomized data-structures schemes for two natural Boolean variants of Orthogonal Vector Counting. 3. We construct an asymmetric communication problem and obtain a deterministic lower-bound for it which is provably better than any lower-bound that may be obtained by the classical Richness Method of Miltersen et al. (STOC ’95). This seems to be the first known limitation of the Richness Method in the context of proving deterministic lower bounds. @InProceedings{STOC18p1013, author = {Arkadev Chattopadhyay and Michal Koucký and Bruno Loff and Sagnik Mukhopadhyay}, title = {Simulation Beats Richness: New Data-Structure Lower Bounds}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1013--1020}, doi = {}, year = {2018}, } |
|
Chattopadhyay, Eshan |
STOC '18: "Improved Pseudorandomness ..."
Improved Pseudorandomness for Unordered Branching Programs through Local Monotonicity
Eshan Chattopadhyay, Pooya Hatami, Omer Reingold, and Avishay Tal (Cornell University, USA; Institute for Advanced Study at Princeton, USA; University of Texas at Austin, USA; Stanford University, USA) We present an explicit pseudorandom generator with seed length Õ((log n)^{w+1}) for read-once, oblivious, width w branching programs that can read their input bits in any order. This improves upon the work of Impagliazzo, Meka and Zuckerman (FOCS’12) where they required seed length n^{1/2+o(1)}. A central ingredient in our work is the following bound that we prove on the Fourier spectrum of branching programs. For any width w read-once, oblivious branching program B: {0,1}^n → {0,1} and any k ∈ {1,…,n}, ∑_{S ⊆ [n]: |S|=k} |B̂(S)| ≤ O(log n)^{wk}. This settles a conjecture posed by Reingold, Steinke and Vadhan (RANDOM’13). Our analysis crucially uses a notion of local monotonicity on the edge labeling of the branching program. We carry critical parts of our proof under the assumption of local monotonicity and show how to deduce our results for unrestricted branching programs. @InProceedings{STOC18p363, author = {Eshan Chattopadhyay and Pooya Hatami and Omer Reingold and Avishay Tal}, title = {Improved Pseudorandomness for Unordered Branching Programs through Local Monotonicity}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {363--375}, doi = {}, year = {2018}, } |
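For intuition, the quantity bounded in the theorem, the level-k Fourier mass ∑_{|S|=k} |B̂(S)|, can be computed by brute force for a small function given as a black box (Python sketch, our illustration; feasible only for small n):

from itertools import combinations, product

def fourier_coefficient(f, n, S):
    # \hat{B}(S) = E_x[ B(x) * (-1)^{sum_{i in S} x_i} ], x uniform in {0,1}^n.
    total = 0.0
    for x in product([0, 1], repeat=n):
        total += f(x) * (-1) ** sum(x[i] for i in S)
    return total / 2 ** n

def level_k_fourier_mass(f, n, k):
    # The quantity the paper bounds by O(log n)^{wk} for width-w ROBPs.
    return sum(abs(fourier_coefficient(f, n, S))
               for S in combinations(range(n), k))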
|
Chen, Xi |
STOC '18: "Distribution-Free Junta Testing ..."
Distribution-Free Junta Testing
Zhengyang Liu, Xi Chen, Rocco A. Servedio, Ying Sheng, and Jinyu Xie (Shanghai Jiao Tong University, China; Columbia University, USA) We study the problem of testing whether an unknown n-variable Boolean function is a k-junta in the distribution-free property testing model, where the distance between functions is measured with respect to an arbitrary and unknown probability distribution over {0,1}^n. Our first main result is that distribution-free k-junta testing can be performed, with one-sided error, by an adaptive algorithm that uses Õ(k^2)/ε queries (independent of n). Complementing this, our second main result is a lower bound showing that any non-adaptive distribution-free k-junta testing algorithm must make Ω(2^{k/3}) queries even to test to accuracy ε = 1/3. These bounds establish that while the optimal query complexity of non-adaptive k-junta testing is 2^{Θ(k)}, for adaptive testing it is poly(k), and thus show that adaptivity provides an exponential improvement in the distribution-free query complexity of testing juntas. @InProceedings{STOC18p749, author = {Zhengyang Liu and Xi Chen and Rocco A. Servedio and Ying Sheng and Jinyu Xie}, title = {Distribution-Free Junta Testing}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {749--759}, doi = {}, year = {2018}, } |
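A classic ingredient in adaptive junta testers is a binary-search step: once two inputs with f(x) ≠ f(y) are in hand, one relevant variable of f can be located with O(log n) queries. A Python sketch of this standard subroutine (not the paper's full tester):

def find_relevant_variable(f, x, y):
    # Precondition: f(x) != f(y).  Walk from x toward y by flipping half of
    # the disagreeing coordinates at a time, keeping the half across which
    # the value of f changes; O(log n) queries suffice.
    diff = [i for i in range(len(x)) if x[i] != y[i]]
    fx = f(x)
    while len(diff) > 1:
        half = diff[:len(diff) // 2]
        z = list(x)
        for i in half:
            z[i] = y[i]
        z = tuple(z)
        if f(z) != fx:
            diff = half          # the change happens inside the first half
        else:
            x = z                # f unchanged: move past the first half
            diff = diff[len(diff) // 2:]
    return diff[0]

In the distribution-free setting the subtlety lies elsewhere: suitable pairs (x, y) must be generated so that the test remains sound when distance is measured against the unknown distribution.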
|
Cheraghchi, Mahdi |
STOC '18: "Capacity Upper Bounds for ..."
Capacity Upper Bounds for Deletion-Type Channels
Mahdi Cheraghchi (Imperial College London, UK) We develop a systematic approach, based on convex programming and real analysis, for obtaining upper bounds on the capacity of the binary deletion channel and, more generally, channels with i.i.d. insertions and deletions. Other than the classical deletion channel, we give special attention to the Poisson-repeat channel introduced by Mitzenmacher and Drinea (IEEE Transactions on Information Theory, 2006). Our framework can be applied to obtain capacity upper bounds for any repetition distribution (the deletion and Poisson-repeat channels corresponding to the special cases of Bernoulli and Poisson distributions). Our techniques essentially reduce the task of proving capacity upper bounds to maximizing a univariate, real-valued, and often concave function over a bounded interval. The corresponding univariate function is carefully designed according to the underlying distribution of repetitions and the choices vary depending on the desired strength of the upper bounds as well as the desired simplicity of the function (e.g., being only efficiently computable versus having an explicit closed-form expression in terms of elementary, or common special, functions). Among our results, we show that the capacity of the binary deletion channel with deletion probability d is at most (1−d) log φ for d ≥ 1/2, and, assuming the capacity function is convex, is at most 1 − d log(4/φ) for d < 1/2, where φ = (1+√5)/2 is the golden ratio. This is the first nontrivial capacity upper bound for any value of d outside the limiting case d → 0 that is fully explicit and proved without computer assistance. Furthermore, we derive the first set of capacity upper bounds for the Poisson-repeat channel. Our results uncover further striking connections between this channel and the deletion channel, and suggest, somewhat counter-intuitively, that the Poisson-repeat channel is actually analytically simpler than the deletion channel and may be of key importance to a complete understanding of the deletion channel. Finally, we derive several novel upper bounds on the capacity of the deletion channel. All upper bounds are maximums of efficiently computable, and concave, univariate real functions over a bounded domain. In turn, we upper bound these functions in terms of explicit elementary and standard special functions, whose maximums can be found even more efficiently (and sometimes, analytically, for example for d=1/2). Along the way, we develop several new techniques of potentially independent interest. For example, we develop systematic techniques to study channels with mean constraints over the reals. Furthermore, we motivate the study of novel probability distributions over non-negative integers, as well as novel special functions which could be of interest to mathematical analysis. @InProceedings{STOC18p493, author = {Mahdi Cheraghchi}, title = {Capacity Upper Bounds for Deletion-Type Channels}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {493--506}, doi = {}, year = {2018}, } |
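The fully explicit bound quoted above is easy to evaluate numerically. A small sketch, assuming capacity is measured in bits (base-2 logarithms; the abstract does not fix the base), with the convexity-conditional branch for d < 1/2 flagged in a comment; the two branches agree at d = 1/2:

from math import log2, sqrt

PHI = (1 + sqrt(5)) / 2  # golden ratio

def deletion_capacity_upper_bound(d):
    # C(d) <= (1 - d) * log2(PHI)     for d >= 1/2  (unconditional)
    # C(d) <= 1 - d * log2(4 / PHI)   for d <  1/2  (assuming convexity
    #                                  of the capacity function)
    if d >= 0.5:
        return (1 - d) * log2(PHI)
    return 1 - d * log2(4 / PHI)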
|
Christandl, Matthias |
STOC '18: "Universal Points in the Asymptotic ..."
Universal Points in the Asymptotic Spectrum of Tensors
Matthias Christandl, Péter Vrana, and Jeroen Zuiddam (University of Copenhagen, Denmark; Budapest University of Technology and Economics, Hungary; CWI, Netherlands) The asymptotic restriction problem for tensors s and t is to find the smallest β ≥ 0 such that the nth tensor power of t can be obtained from the (β n+o(n))th tensor power of s by applying linear maps to the tensor legs — this is called restriction — when n goes to infinity. Applications include computing the arithmetic complexity of matrix multiplication in algebraic complexity theory, deciding the feasibility of an asymptotic transformation between pure quantum states via stochastic local operations and classical communication in quantum information theory, bounding the query complexity of certain properties in algebraic property testing, and bounding the size of combinatorial structures like tri-colored sum-free sets in additive combinatorics. Naturally, the asymptotic restriction problem asks for obstructions (think of lower bounds in computational complexity) and constructions (think of fast matrix multiplication algorithms). Strassen showed that for obstructions it is sufficient to consider maps from k-tensors to nonnegative reals, that are monotone under restriction, normalised on diagonal tensors, additive under direct sum and multiplicative under tensor product, named spectral points (SFCS 1986 and J. Reine Angew. Math. 1988). Strassen introduced the support functionals, which are spectral points for oblique tensors, a strict subfamily of all tensors (J. Reine Angew. Math. 1991). On the construction side, an important work is the Coppersmith-Winograd method for tight tensors and tight sets. We present the first nontrivial spectral points for the family of all complex tensors, named quantum functionals. Finding such universal spectral points has been an open problem for thirty years. We use techniques from quantum information theory, invariant theory and moment polytopes. We present comparisons among the support functionals and our quantum functionals, and compute generic values. We relate the functionals to instability from geometric invariant theory, in the spirit of Blasiak et al. (Discrete Anal. 2017). We prove that the quantum functionals are asymptotic upper bounds on slice-rank and multi-slice rank, extending a result of Tao and Sawin. Furthermore, we make progress on the construction side of the combinatorial version of the asymptotic restriction problem by extending the Coppersmith–Winograd method via combinatorial degeneration. The regular method constructs large free diagonals in powers of any tight set. Our extended version works for any set that has a combinatorial degeneration to a tight set. This generalizes a result of Kleinberg, Sawin and Speyer. As an application we reprove in hindsight recent results on tri-colored sum-free sets by reducing this problem to a result of Strassen on reduced polynomial multiplication. Proofs are in the full version of this paper, available at https://arxiv.org/abs/1709.07851. @InProceedings{STOC18p289, author = {Matthias Christandl and Péter Vrana and Jeroen Zuiddam}, title = {Universal Points in the Asymptotic Spectrum of Tensors}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {289--296}, doi = {}, year = {2018}, } |
|
Chuzhoy, Julia |
STOC '18: "Almost Polynomial Hardness ..."
Almost Polynomial Hardness of Node-Disjoint Paths in Grids
Julia Chuzhoy, David H. K. Kim, and Rachit Nimavat (Toyota Technological Institute at Chicago, USA; University of Chicago, USA) In the classical Node-Disjoint Paths (NDP) problem, we are given an n-vertex graph G=(V,E), and a collection M = {(s_1,t_1),…,(s_k,t_k)} of pairs of its vertices, called source-destination, or demand pairs. The goal is to route as many of the demand pairs as possible, where to route a pair we need to select a path connecting it, so that all selected paths are disjoint in their vertices. The best current algorithm for NDP achieves an O(√n)-approximation, while, until recently, the best negative result was a factor Ω(log^{1/2−ε} n)-hardness of approximation, for any constant ε, unless NP ⊆ ZPTIME(n^{poly log n}). In a recent work, the authors have shown an improved 2^{Ω(√(log n))}-hardness of approximation for NDP, unless NP ⊆ DTIME(n^{O(log n)}), even if the underlying graph is a subgraph of a grid graph, and all source vertices lie on the boundary of the grid. Unfortunately, this result does not extend to grid graphs. The approximability of the NDP problem on grid graphs has remained a tantalizing open question, with the best current upper bound of Õ(n^{1/4}), and the best current lower bound of APX-hardness. In a recent work, the authors showed a 2^{Õ(√(log n))}-approximation algorithm for NDP in grid graphs, if all source vertices lie on the boundary of the grid – a result that can be seen as suggesting that a sub-polynomial approximation may be achievable for NDP in grids. In this paper we show that this is unlikely to be the case, and come close to resolving the approximability of NDP in general, and of NDP in grids in particular. Our main result is that NDP is 2^{Ω(log^{1−ε} n)}-hard to approximate for any constant ε, assuming that NP ⊈ RTIME(n^{poly log n}), and that it is n^{Ω(1/(log log n)^2)}-hard to approximate, assuming that for some constant δ > 0, NP ⊈ RTIME(2^{n^δ}). These results hold even for grid graphs and wall graphs, and extend to the closely related Edge-Disjoint Paths problem, even in wall graphs. Our hardness proof performs a reduction from the 3COL(5) problem to NDP, using a new graph partitioning problem as a proxy. Unlike the more standard approach of employing Karp reductions to prove hardness of approximation, our proof is a Cook-type reduction, where, given an input instance of 3COL(5), we produce a large number of instances of NDP, and apply an approximation algorithm for NDP to each of them. The construction of each new instance of NDP crucially depends on the solutions to the previous instances that were found by the approximation algorithm. @InProceedings{STOC18p1220, author = {Julia Chuzhoy and David H. K. Kim and Rachit Nimavat}, title = {Almost Polynomial Hardness of Node-Disjoint Paths in Grids}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1220--1233}, doi = {}, year = {2018}, } |
|
Cohen, Gil |
STOC '18: "Explicit Binary Tree Codes ..."
Explicit Binary Tree Codes with Polylogarithmic Size Alphabet
Gil Cohen, Bernhard Haeupler, and Leonard J. Schulman (Princeton University, USA; Carnegie Mellon University, USA; California Institute of Technology, USA) This paper makes progress on the problem of explicitly constructing a binary tree code with constant distance and constant alphabet size. We give an explicit binary tree code with constant distance and alphabet size poly(log n), where n is the depth of the tree. This is the first improvement over a two-decade-old construction that has an exponentially larger alphabet of size poly(n). At the core of our construction is the first explicit tree code with constant rate and constant distance, though with non-constant arity - a result of independent interest. This construction adapts the polynomial interpolation framework to the online setting. @InProceedings{STOC18p535, author = {Gil Cohen and Bernhard Haeupler and Leonard J. Schulman}, title = {Explicit Binary Tree Codes with Polylogarithmic Size Alphabet}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {535--544}, doi = {}, year = {2018}, } STOC '18: "Hitting Sets with Near-Optimal ..." Hitting Sets with Near-Optimal Error for Read-Once Branching Programs Mark Braverman, Gil Cohen, and Sumegha Garg (Princeton University, USA) Nisan (Combinatorica ’92) constructed a pseudorandom generator for length n, width n read-once branching programs (ROBPs) with error ε and seed length O(log^2 n + log n · log(1/ε)). A major goal in complexity theory is to reduce the seed length, hopefully, to the optimal O(log n + log(1/ε)), or to construct improved hitting sets, as these would yield stronger derandomization of BPL and RL, respectively. In contrast to a successful line of work in restricted settings, no progress has been made for general, unrestricted, ROBPs. Indeed, Nisan’s construction is the best pseudorandom generator and, prior to this work, also the best hitting set for unrestricted ROBPs. In this work, we make the first improvement for the general case by constructing a hitting set with seed length O(log^2 n + log(1/ε)). That is, we decouple ε and n, and obtain near-optimal dependence on the former. The regime of parameters in which our construction strictly improves upon prior works, namely, log(1/ε) ≫ log n, is well-motivated by the work of Saks and Zhou (J. CSS ’99) who use pseudorandom generators with error ε = 2^{−(log n)^2} in their proof for BPL ⊆ L^{3/2}. We further suggest a research program towards proving that BPL ⊆ L^{4/3} in which our result achieves one step. As our main technical tool, we introduce and construct a new type of primitive we call pseudorandom pseudo-distributions. Informally, this is a generalization of pseudorandom generators in which one may assign negative and unbounded weights to paths as opposed to working with probability distributions. We show that such a primitive yields hitting sets and, for derandomization purposes, can be used to derandomize two-sided error algorithms. @InProceedings{STOC18p353, author = {Mark Braverman and Gil Cohen and Sumegha Garg}, title = {Hitting Sets with Near-Optimal Error for Read-Once Branching Programs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {353--362}, doi = {}, year = {2018}, } |
|
Cohen, Michael B. |
STOC '18: "k-Server via Multiscale Entropic ..."
k-Server via Multiscale Entropic Regularization
Sébastien Bubeck, Michael B. Cohen, Yin Tat Lee, James R. Lee, and Aleksander Mądry (Microsoft Research, USA; Massachusetts Institute of Technology, USA; University of Washington, USA) We present an O((log k)^2)-competitive randomized algorithm for the k-server problem on hierarchically separated trees (HSTs). This is the first o(k)-competitive randomized algorithm for which the competitive ratio is independent of the size of the underlying HST. Our algorithm is designed in the framework of online mirror descent where the mirror map is a multiscale entropy. When combined with Bartal’s static HST embedding reduction, this leads to an O((log k)^2 log n)-competitive algorithm on any n-point metric space. We give a new dynamic HST embedding that yields an O((log k)^3 log Δ)-competitive algorithm on any metric space where the ratio of the largest to smallest non-zero distance is at most Δ. @InProceedings{STOC18p3, author = {Sébastien Bubeck and Michael B. Cohen and Yin Tat Lee and James R. Lee and Aleksander Mądry}, title = {k-Server via Multiscale Entropic Regularization}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {3--16}, doi = {}, year = {2018}, } STOC '18: "An Homotopy Method for lp ..." An Homotopy Method for lp Regression Provably beyond Self-Concordance and in Input-Sparsity Time Sébastien Bubeck, Michael B. Cohen, Yin Tat Lee, and Yuanzhi Li (Microsoft Research, USA; Massachusetts Institute of Technology, USA; University of Washington, USA; Princeton University, USA) We consider the problem of linear regression where the ℓ_2^n norm loss (i.e., the usual least squares loss) is replaced by the ℓ_p^n norm. We show how to solve such problems up to machine precision in Õ_p(n^{|1/2 − 1/p|}) (dense) matrix-vector products and Õ_p(1) matrix inversions, or alternatively in Õ_p(n^{|1/2 − 1/p|}) calls to a (sparse) linear system solver. This improves the state of the art for any p ∉ {1,2,+∞}. Furthermore we also propose a randomized algorithm solving such problems in input sparsity time, i.e., Õ_p(N + poly(d)) where N is the size of the input and d is the number of variables. Such a result was only known for p=2. Finally we prove that these results lie outside the scope of the Nesterov–Nemirovski theory of interior point methods by showing that any symmetric self-concordant barrier on the ℓ_p^n unit ball has self-concordance parameter Ω(n). @InProceedings{STOC18p1130, author = {Sébastien Bubeck and Michael B. Cohen and Yin Tat Lee and Yuanzhi Li}, title = {An Homotopy Method for l<sub>p</sub> Regression Provably beyond Self-Concordance and in Input-Sparsity Time}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1130--1137}, doi = {}, year = {2018}, } |
|
Cohen-Addad, Vincent |
STOC '18: "Fast Fencing ..."
Fast Fencing
Mikkel Abrahamsen, Anna Adamaszek, Karl Bringmann, Vincent Cohen-Addad, Mehran Mehr, Eva Rotenberg, Alan Roytman, and Mikkel Thorup (University of Copenhagen, Denmark; Max Planck Institute for Informatics, Germany; Sorbonne University, France; UPMC, France; CNRS, France; LIP6, France; Eindhoven University of Technology, Netherlands; DTU, Denmark) We consider very natural "fence enclosure" problems studied by Capoyleas, Rote, and Woeginger and Arkin, Khuller, and Mitchell in the early 90s. Given a set S of n points in the plane, we aim at finding a set of closed curves such that (1) each point is enclosed by a curve and (2) the total length of the curves is minimized. We consider two main variants. In the first variant, we pay a unit cost per curve in addition to the total length of the curves. An equivalent formulation of this version is that we have to enclose n unit disks, paying only the total length of the enclosing curves. In the other variant, we are allowed to use at most k closed curves and pay no cost per curve. For the variant with at most k closed curves, we present an algorithm that is polynomial in both n and k. For the variant with unit cost per curve, or unit disks, we present a near-linear time algorithm. Capoyleas, Rote, and Woeginger solved the problem with at most k curves in n^{O(k)} time. Arkin, Khuller, and Mitchell used this to solve the unit cost per curve version in exponential time. At the time, they conjectured that the problem with k curves is NP-hard for general k. Our polynomial time algorithm refutes this unless P equals NP. @InProceedings{STOC18p564, author = {Mikkel Abrahamsen and Anna Adamaszek and Karl Bringmann and Vincent Cohen-Addad and Mehran Mehr and Eva Rotenberg and Alan Roytman and Mikkel Thorup}, title = {Fast Fencing}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {564--573}, doi = {}, year = {2018}, } |
|
Czumaj, Artur |
STOC '18: "Round Compression for Parallel ..."
Round Compression for Parallel Matching Algorithms
Artur Czumaj, Jakub Łącki, Aleksander Mądry, Slobodan Mitrović, Krzysztof Onak, and Piotr Sankowski (University of Warwick, UK; Google Research, USA; Massachusetts Institute of Technology, USA; EPFL, Switzerland; IBM Research, USA; University of Warsaw, Poland) For over a decade now we have been witnessing the success of massive parallel computation (MPC) frameworks, such as MapReduce, Hadoop, Dryad, or Spark. One of the reasons for their success is the fact that these frameworks are able to accurately capture the nature of large-scale computation. In particular, compared to the classic distributed algorithms or PRAM models, these frameworks allow for much more local computation. The fundamental question that arises in this context, though, is: can we leverage this additional power to obtain even faster parallel algorithms? A prominent example here is the maximum matching problem—one of the most classic graph problems. It is well known that in the PRAM model one can compute a 2-approximate maximum matching in O(log n) rounds. However, the exact complexity of this problem in the MPC framework is still far from understood. Lattanzi et al. (SPAA 2011) showed that if each machine has n^{1+Ω(1)} memory, this problem can also be solved 2-approximately in a constant number of rounds. These techniques, as well as the approaches developed in the follow-up work, seem, though, to get stuck in a fundamental way at roughly O(log n) rounds once we enter the (at most) near-linear memory regime. It is thus entirely possible that in this regime, which captures in particular the case of sparse graph computations, the best MPC round complexity matches what one can already get in the PRAM model, without the need to take advantage of the extra local computation power. In this paper, we finally refute that possibility. That is, we break the above O(log n) round complexity bound even in the case of slightly sublinear memory per machine. In fact, our improvement here is almost exponential: we are able to deliver a (2+ε)-approximate maximum matching, for any fixed constant ε > 0, in O((log log n)^2) rounds. To establish our result we need to deviate from the previous work in two important ways that are crucial for exploiting the power of the MPC model, as compared to the PRAM model. Firstly, we use vertex-based graph partitioning, instead of the edge-based approaches that were utilized so far. Secondly, we develop a technique of round compression. This technique enables one to take a (distributed) algorithm that computes an O(1)-approximation of maximum matching in O(log n) independent PRAM phases and implement a super-constant number of these phases in only a constant number of MPC rounds. @InProceedings{STOC18p471, author = {Artur Czumaj and Jakub Łącki and Aleksander Mądry and Slobodan Mitrović and Krzysztof Onak and Piotr Sankowski}, title = {Round Compression for Parallel Matching Algorithms}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {471--484}, doi = {}, year = {2018}, } |
|
Dadush, Daniel |
STOC '18: "The Gram-Schmidt Walk: A Cure ..."
The Gram-Schmidt Walk: A Cure for the Banaszczyk Blues
Nikhil Bansal, Daniel Dadush, Shashwat Garg, and Shachar Lovett (Eindhoven University of Technology, Netherlands; CWI, Netherlands; University of California at San Diego, USA) An important result in discrepancy due to Banaszczyk states that for any set of n vectors in ℝ^m of ℓ_2 norm at most 1 and any convex body K in ℝ^m of Gaussian measure at least half, there exists a ± 1 combination of these vectors which lies in 5K. This result implies the best known bounds for several problems in discrepancy. Banaszczyk’s proof of this result is non-constructive and an open problem has been to give an efficient algorithm to find such a ± 1 combination of the vectors. In this paper, we resolve this question and give an efficient randomized algorithm to find a ± 1 combination of the vectors which lies in cK for c > 0 an absolute constant. This leads to new efficient algorithms for several problems in discrepancy theory. @InProceedings{STOC18p587, author = {Nikhil Bansal and Daniel Dadush and Shashwat Garg and Shachar Lovett}, title = {The Gram-Schmidt Walk: A Cure for the Banaszczyk Blues}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {587--597}, doi = {}, year = {2018}, } STOC '18: "A Friendly Smoothed Analysis ..." A Friendly Smoothed Analysis of the Simplex Method Daniel Dadush and Sophie Huiberts (CWI, Netherlands) Explaining the excellent practical performance of the simplex method for linear programming has been a major topic of research for over 50 years. One of the most successful frameworks for understanding the simplex method was given by Spielman and Teng (JACM ’04), who developed the notion of smoothed analysis. Starting from an arbitrary linear program with d variables and n constraints, Spielman and Teng analyzed the expected runtime over random perturbations of the LP (smoothed LP), where Gaussian noise of variance σ^2 is added to the LP data. In particular, they gave a two-stage shadow vertex simplex algorithm which uses an expected O(n^{86} d^{55} σ^{−30}) number of simplex pivots to solve the smoothed LP. Their analysis and runtime was substantially improved by Deshpande and Spielman (FOCS ’05) and later Vershynin (SICOMP ’09). The fastest current algorithm, due to Vershynin, solves the smoothed LP using an expected O(d^3 log^3 n σ^{−4} + d^9 log^7 n) number of pivots, improving the dependence on n from polynomial to logarithmic. While the original proof of Spielman and Teng has now been substantially simplified, the resulting analyses are still quite long and complex and the parameter dependencies far from optimal. In this work, we make substantial progress on this front, providing an improved and simpler analysis of shadow simplex methods, where our main algorithm requires an expected O(d^2 √(log n) σ^{−2} + d^5 log^{3/2} n) number of simplex pivots. We obtain our results via an improved shadow bound, key to earlier analyses as well, combined with algorithmic techniques of Borgwardt (ZOR ’82) and Vershynin. As an added bonus, our analysis is completely modular, allowing us to obtain non-trivial bounds for perturbations beyond Gaussians, such as Laplace perturbations. @InProceedings{STOC18p390, author = {Daniel Dadush and Sophie Huiberts}, title = {A Friendly Smoothed Analysis of the Simplex Method}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {390--403}, doi = {}, year = {2018}, } |
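The smoothed model is also easy to probe empirically: perturb the LP data with Gaussian noise and record a simplex-type solver's iteration counts. A hedged Python/scipy sketch; scipy's dual-simplex iteration count is only a stand-in for the shadow-vertex pivot count that the analysis actually bounds, and scipy's default bounds x ≥ 0 apply:

import numpy as np
from scipy.optimize import linprog

def smoothed_lp_iterations(A, b, c, sigma, trials=20, seed=0):
    # Re-solve randomly perturbed copies of the LP min c.x s.t. Ax <= b,
    # x >= 0, collecting iteration counts of the dual-simplex method.
    rng = np.random.default_rng(seed)
    iters = []
    for _ in range(trials):
        A_p = A + sigma * rng.standard_normal(A.shape)
        b_p = b + sigma * rng.standard_normal(b.shape)
        c_p = c + sigma * rng.standard_normal(c.shape)
        res = linprog(c_p, A_ub=A_p, b_ub=b_p, method="highs-ds")
        if res.status == 0:
            iters.append(res.nit)
    return iters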
|
Daskalakis, Constantinos |
STOC '18: "A Converse to Banach's ..."
A Converse to Banach's Fixed Point Theorem and Its CLS-Completeness
Constantinos Daskalakis, Christos Tzamos, and Manolis Zampetakis (Massachusetts Institute of Technology, USA; Microsoft Research, USA) Banach's fixed point theorem for contraction maps has been widely used to analyze the convergence of iterative methods in non-convex problems. It is a common experience, however, that iterative maps fail to be globally contracting under the natural metric in their domain, making the applicability of Banach's theorem limited. We explore how generally we can apply Banach's fixed point theorem to establish the convergence of iterative methods when pairing it with carefully designed metrics. Our first result is a strong converse of Banach's theorem, showing that it is a universal analysis tool for establishing global convergence of iterative methods to unique fixed points, and for bounding their convergence rate. In other words, we show that, whenever an iterative map globally converges to a unique fixed point, there exists a metric under which the iterative map is contracting and which can be used to bound the number of iterations until convergence. We illustrate our approach in the widely used power method, providing a new way of bounding its convergence rate through contraction arguments. We next consider the computational complexity of Banach's fixed point theorem. Making the proof of our converse theorem constructive, we show that computing a fixed point whose existence is guaranteed by Banach's fixed point theorem is CLS-complete. We thus provide the first natural complete problem for the class CLS, which was defined in [DP11] to capture the complexity of problems such as P-matrix LCP, computing KKT-points, and finding mixed Nash equilibria in congestion and network coordination games. @InProceedings{STOC18p44, author = {Constantinos Daskalakis and Christos Tzamos and Manolis Zampetakis}, title = {A Converse to Banach's Fixed Point Theorem and Its CLS-Completeness}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {44--50}, doi = {}, year = {2018}, } |
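The paper's running example is easy to reproduce: power iteration converges globally (up to sign) to a dominant eigenvector, and the converse theorem guarantees that some metric exists under which the iteration map is a contraction. A minimal numpy sketch of the iteration itself, which is classical:

import numpy as np

def power_method(A, iters=1000, tol=1e-12, seed=0):
    # Iterate x -> Ax / ||Ax||; for a matrix with a unique dominant
    # eigenvalue this converges (up to sign) to the top eigenvector.
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(iters):
        y = A @ x
        y /= np.linalg.norm(y)
        if min(np.linalg.norm(y - x), np.linalg.norm(y + x)) < tol:
            return y
        x = y
    return x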
|
De Berg, Mark |
STOC '18: "A Framework for ETH-Tight ..."
A Framework for ETH-Tight Algorithms and Lower Bounds in Geometric Intersection Graphs
Mark de Berg, Hans L. Bodlaender, Sándor Kisfaludi-Bak, Dániel Marx, and Tom C. van der Zanden (Eindhoven University of Technology, Netherlands; Utrecht University, Netherlands; Hungarian Academy of Sciences, Hungary) We give an algorithmic and lower-bound framework that facilitates the construction of subexponential algorithms and matching conditional complexity bounds. It can be applied to a wide range of geometric intersection graphs (intersections of similarly sized fat objects), yielding algorithms with running time 2^{O(n^{1−1/d})} for any fixed dimension d ≥ 2 for many well known graph problems, including Independent Set, r-Dominating Set for constant r, and Steiner Tree. For most problems, we get improved running times compared to prior work; in some cases, we give the first known subexponential algorithm in geometric intersection graphs. Additionally, most of the obtained algorithms work on the graph itself, i.e., do not require any geometric information. Our algorithmic framework is based on a weighted separator theorem and various treewidth techniques. The lower-bound framework is based on a constructive embedding of graphs into d-dimensional grids, and it allows us to derive matching 2^{Ω(n^{1−1/d})} lower bounds under the Exponential Time Hypothesis even in the much more restricted class of d-dimensional induced grid graphs. @InProceedings{STOC18p574, author = {Mark de Berg and Hans L. Bodlaender and Sándor Kisfaludi-Bak and Dániel Marx and Tom C. van der Zanden}, title = {A Framework for ETH-Tight Algorithms and Lower Bounds in Geometric Intersection Graphs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {574--586}, doi = {}, year = {2018}, } |
|
Dell, Holger |
STOC '18: "Extensor-Coding ..."
Extensor-Coding
Cornelius Brand, Holger Dell, and Thore Husfeldt (Saarland University, Germany; M2CI, Germany; Lund University, Sweden; IT University of Copenhagen, Denmark) We devise an algorithm that approximately computes the number of paths of length k in a given directed graph with n vertices up to a multiplicative error of 1 ± ε. Our algorithm runs in time ε^{−2} 4^k (n+m) poly(k). The algorithm is based on associating with each vertex an element in the exterior (or, Grassmann) algebra, called an extensor, and then performing computations in this algebra. This connection to exterior algebra generalizes a number of previous approaches for the longest path problem and is of independent conceptual interest. Using this approach, we also obtain a deterministic 2^k · poly(n) time algorithm to find a k-path in a given directed graph that is promised to have few of them. Our results and techniques generalize to the subgraph isomorphism problem when the subgraphs we are looking for have bounded pathwidth. Finally, we also obtain a randomized algorithm to detect k-multilinear terms in a multivariate polynomial given as a general algebraic circuit. To the best of our knowledge, this was previously only known for algebraic circuits not involving negative constants. @InProceedings{STOC18p151, author = {Cornelius Brand and Holger Dell and Thore Husfeldt}, title = {Extensor-Coding}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {151--164}, doi = {}, year = {2018}, } STOC '18: "Fine-Grained Reductions from ..." Fine-Grained Reductions from Approximate Counting to Decision Holger Dell and John Lapinskas (Saarland University, Germany; M2CI, Germany; University of Oxford, UK) In this paper, we introduce a general framework for fine-grained reductions of approximate counting problems to their decision versions. (Thus we use an oracle that decides whether any witness exists to multiplicatively approximate the number of witnesses with minimal overhead.) This mirrors a foundational result of Sipser (STOC 1983) and Stockmeyer (SICOMP 1985) in the polynomial-time setting, and a similar result of Müller (IWPEC 2006) in the FPT setting. Using our framework, we obtain such reductions for some of the most important problems in fine-grained complexity: the Orthogonal Vectors problem, 3SUM, and the Negative-Weight Triangle problem (which is closely related to All-Pairs Shortest Path). While all these problems have simple algorithms over which it is conjectured that no polynomial improvement is possible, our reductions would remain interesting even if these conjectures were proved; they have only polylogarithmic overhead, and can therefore be applied to subpolynomial improvements such as the n^3/exp(Θ(√(log n)))-time algorithm for the Negative-Weight Triangle problem due to Williams (STOC 2014). Our framework is also general enough to apply to versions of the problems for which more efficient algorithms are known. For example, the Orthogonal Vectors problem over GF(m)^d for constant m can be solved in time n·poly(d) by a result of Williams and Yu (SODA 2014); our result implies that we can approximately count the number of orthogonal pairs with essentially the same running time. We also provide a fine-grained reduction from approximate #SAT to SAT. Suppose the Strong Exponential Time Hypothesis (SETH) is false, so that for some 1 < c < 2 and all k there is an O(c^n)-time algorithm for #k-SAT. Then we prove that for all k, there is an O((c+o(1))^n)-time algorithm for approximate #k-SAT.
In particular, our result implies that the Exponential Time Hypothesis (ETH) is equivalent to the seemingly weaker statement that there is no algorithm to approximate #3-SAT to within a factor of 1+ε in time 2^{o(n)}/ε^2 (taking ε > 0 as part of the input). A full version of this paper containing detailed proofs is available at https://arxiv.org/abs/1707.04609. @InProceedings{STOC18p281, author = {Holger Dell and John Lapinskas}, title = {Fine-Grained Reductions from Approximate Counting to Decision}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {281--288}, doi = {}, year = {2018}, } STOC '18: "More Consequences of Falsifying ..." More Consequences of Falsifying SETH and the Orthogonal Vectors Conjecture Amir Abboud, Karl Bringmann, Holger Dell, and Jesper Nederlof (IBM Research, USA; Max Planck Institute for Informatics, Germany; Saarland University, Germany; M2CI, Germany; Eindhoven University of Technology, Netherlands) The Strong Exponential Time Hypothesis and the OV-conjecture are two popular hardness assumptions used to prove a plethora of lower bounds, especially in the realm of polynomial-time algorithms. The OV-conjecture in moderate dimension states that there is no ε > 0 for which an O(N^{2−ε}) · poly(D) time algorithm can decide whether there is a pair of orthogonal vectors in a given set of size N that contains D-dimensional binary vectors. We strengthen the evidence for these hardness assumptions. In particular, we show that if the OV-conjecture fails, then two problems for which we are far from obtaining even tiny improvements over exhaustive search would have surprisingly fast algorithms. If the OV-conjecture is false, then there is a fixed ε > 0 such that: - For all d and all large enough k, there is a randomized algorithm that takes O(n^{(1−ε)k}) time to solve the Zero-Weight-k-Clique and Min-Weight-k-Clique problems on d-hypergraphs with n vertices. As a consequence, the OV-conjecture is implied by the Weighted Clique conjecture. - For all c, the satisfiability of sparse TC^1 circuits on n inputs (that is, circuits with cn wires, depth c · log n, and negation, AND, OR, and threshold gates) can be computed in time O((2−ε)^n). @InProceedings{STOC18p253, author = {Amir Abboud and Karl Bringmann and Holger Dell and Jesper Nederlof}, title = {More Consequences of Falsifying SETH and the Orthogonal Vectors Conjecture}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {253--266}, doi = {}, year = {2018}, } |
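The exterior-algebra mechanism behind extensor-coding fits in a short program. The sketch below is illustrative only (the function names are ours): it implements the simpler detection variant with random real vectors, not the paper's counting estimator or its 4^k-time refinements. Each vertex gets a random vector in R^k; summing wedge products over all k-vertex walks kills any walk that repeats a vertex, because a wedge with a repeated factor is zero, so a nonzero top-degree coefficient certifies a k-path with probability 1 over the random choices.

```python
import random
from collections import defaultdict

def k_path_exists(adj, k, trials=3):
    """Detect a simple path on k vertices in a digraph via exterior algebra.

    state[v] stores an extensor, as {frozenset of basis indices: coeff},
    summed over all walks ending at v; extending a walk wedges in the new
    vertex's vector, and repeated vertices vanish since x ^ x = 0.
    """
    n = len(adj)
    for _ in range(trials):
        x = [[random.gauss(0, 1) for _ in range(k)] for _ in range(n)]
        state = [defaultdict(float) for _ in range(n)]
        for v in range(n):                      # walks with a single vertex
            for b in range(k):
                state[v][frozenset([b])] += x[v][b]
        for _ in range(k - 1):                  # extend all walks by one arc
            new = [defaultdict(float) for _ in range(n)]
            for u in range(n):
                for v in adj[u]:
                    for S, c in state[u].items():
                        for b in range(k):
                            if b in S:
                                continue        # e_b ^ e_b = 0
                            sign = (-1) ** sum(1 for s in S if s > b)
                            new[v][S | {b}] += sign * c * x[v][b]
            state = new
        top = frozenset(range(k))
        if any(abs(st[top]) > 1e-9 for st in state):
            return True                         # some simple k-path survived
    return False

# Example: a directed 4-cycle contains a path on 4 vertices.
print(k_path_exists([[1], [2], [3], [0]], 4))
```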
|
De Loera, Jesús A. |
STOC '18: "The Minimum Euclidean-Norm ..."
The Minimum Euclidean-Norm Point in a Convex Polytope: Wolfe's Combinatorial Algorithm Is Exponential
Jesús A. De Loera, Jamie Haddock, and Luis Rademacher (University of California at Davis, USA) The complexity of Philip Wolfe’s method for the minimum Euclidean-norm point problem over a convex polytope has remained unknown since he proposed the method in 1974. We present the first example showing that Wolfe’s method takes exponential time. Additionally, we improve previous results to show that linear programming reduces in strongly-polynomial time to the minimum norm point problem over a simplex. @InProceedings{STOC18p545, author = {Jesús A. De Loera and Jamie Haddock and Luis Rademacher}, title = {The Minimum Euclidean-Norm Point in a Convex Polytope: Wolfe's Combinatorial Algorithm Is Exponential}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {545--553}, doi = {}, year = {2018}, } |
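To make the optimization problem concrete, here is a minimal sketch (our own illustration, not Wolfe's combinatorial method from the paper) that approximates the minimum-norm point with the classical Frank-Wolfe first-order method. Wolfe's algorithm instead maintains "corrals" of affinely independent vertices and terminates finitely, which is exactly the behavior whose worst case the paper pins down.

```python
import numpy as np

def min_norm_point_frank_wolfe(P, iters=2000):
    """Approximate argmin ||x||_2 over conv(rows of P) by Frank-Wolfe.

    The linear minimization oracle over a polytope given by its vertex list
    is just an argmin of inner products; the O(1/t) rate is generic.
    """
    x = P[0].astype(float)
    for t in range(1, iters + 1):
        v = P[np.argmin(P @ x)]          # vertex minimizing <grad ||x||^2, p>
        x += (2.0 / (t + 2)) * (v - x)   # standard step size 2/(t+2)
    return x

# Example: the min-norm point of the segment conv{(1, 1), (1, -1)} is (1, 0).
print(min_norm_point_frank_wolfe(np.array([[1.0, 1.0], [1.0, -1.0]])))
```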
|
De Rezende, Susanna F. |
STOC '18: "Clique Is Hard on Average ..."
Clique Is Hard on Average for Regular Resolution
Albert Atserias, Ilario Bonacina, Susanna F. de Rezende, Massimo Lauria, Jakob Nordström, and Alexander Razborov (Universitat Politècnica de Catalunya, Spain; KTH, Sweden; Sapienza University of Rome, Italy; University of Chicago, USA; Russian Academy of Sciences, Russia) We prove that for k ≪ n^{1/4}, regular resolution requires length n^{Ω(k)} to establish that an Erdős–Rényi graph with appropriately chosen edge density does not contain a k-clique. This lower bound is optimal up to the multiplicative constant in the exponent, and also implies unconditional n^{Ω(k)} lower bounds on the running time of several state-of-the-art algorithms for finding maximum cliques in graphs. @InProceedings{STOC18p866, author = {Albert Atserias and Ilario Bonacina and Susanna F. de Rezende and Massimo Lauria and Jakob Nordström and Alexander Razborov}, title = {Clique Is Hard on Average for Regular Resolution}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {866--877}, doi = {}, year = {2018}, } |
|
Diakonikolas, Ilias |
STOC '18: "List-Decodable Robust Mean ..."
List-Decodable Robust Mean Estimation and Learning Mixtures of Spherical Gaussians
Ilias Diakonikolas, Daniel M. Kane, and Alistair Stewart (University of Southern California, USA; University of California at San Diego, USA) We study the problem of list-decodable (robust) Gaussian mean estimation and the related problem of learning mixtures of separated spherical Gaussians. In the former problem, we are given a set T of points in ℝ^n with the promise that an α-fraction of points in T, where 0 < α < 1/2, are drawn from an unknown mean, identity covariance Gaussian G, and no assumptions are made about the remaining points. The goal is to output a small list of candidate vectors with the guarantee that at least one of the candidates is close to the mean of G. In the latter problem, we are given samples from a k-mixture of spherical Gaussians on ℝ^n and the goal is to estimate the unknown model parameters up to small accuracy. We develop a set of techniques that yield new efficient algorithms with significantly improved guarantees for these problems. Specifically, our main contributions are as follows: List-Decodable Mean Estimation. Fix any d ∈ ℤ_+ and 0 < α < 1/2. We design an algorithm with sample complexity O_d(n^d/α) and runtime O_d((n/α)^d) that outputs a list of O(1/α) many candidate vectors such that with high probability one of the candidates is within ℓ_2-distance O_d(α^{−1/(2d)}) from the mean of G. The only previous algorithm for this problem achieved error Õ(α^{−1/2}) under second moment conditions. For d = O(1/ε), where ε > 0 is a constant, our algorithm runs in polynomial time and achieves error O(α^{−ε}). For d = Θ(log(1/α)), our algorithm runs in time (n/α)^{O(log(1/α))} and achieves error O(log^{3/2}(1/α)), almost matching the information-theoretically optimal bound of Θ(log^{1/2}(1/α)) that we establish. We also give a Statistical Query (SQ) lower bound suggesting that the complexity of our algorithm is qualitatively close to best possible. Learning Mixtures of Spherical Gaussians. We give a learning algorithm for mixtures of spherical Gaussians, with unknown spherical covariances, that succeeds under significantly weaker separation assumptions compared to prior work. For the prototypical case of a uniform k-mixture of identity covariance Gaussians we obtain the following: For any ε > 0, if the pairwise separation between the means is at least Ω(k^ε + √(log(1/δ))), our algorithm learns the unknown parameters within accuracy δ with sample complexity and running time poly(n, 1/δ, (k/ε)^{1/ε}). Moreover, our algorithm is robust to a small dimension-independent fraction of corrupted data. The previously best known polynomial-time algorithm required separation at least k^{1/4} · polylog(k/δ). Finally, our algorithm works under separation of Õ(log^{3/2}(k) + √(log(1/δ))) with sample complexity and running time poly(n, 1/δ, k^{log k}). This bound is close to the information-theoretically minimum separation of Ω(√(log k)). Our main technical contribution is a new technique, using degree-d multivariate polynomials, to remove outliers from high-dimensional datasets where the majority of the points are corrupted. @InProceedings{STOC18p1047, author = {Ilias Diakonikolas and Daniel M. Kane and Alistair Stewart}, title = {List-Decodable Robust Mean Estimation and Learning Mixtures of Spherical Gaussians}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1047--1060}, doi = {}, year = {2018}, } STOC '18: "Learning Geometric Concepts ..." Learning Geometric Concepts with Nasty Noise Ilias Diakonikolas, Daniel M.
Kane, and Alistair Stewart (University of Southern California, USA; University of California at San Diego, USA) We study the efficient learnability of geometric concept classes — specifically, low-degree polynomial threshold functions (PTFs) and intersections of halfspaces — when a fraction of the training data is adversarially corrupted. We give the first polynomial-time PAC learning algorithms for these concept classes with dimension-independent error guarantees in the presence of nasty noise under the Gaussian distribution. In the nasty noise model, an omniscient adversary can arbitrarily corrupt a small fraction of both the unlabeled data points and their labels. This model generalizes well-studied noise models, including the malicious noise model and the agnostic (adversarial label noise) model. Prior to our work, the only concept class for which efficient malicious learning algorithms were known was the class of origin-centered halfspaces. At the core of our results is an efficient algorithm to approximate the low-degree Chow-parameters of any bounded function in the presence of nasty noise. Our robust approximation algorithm for the Chow parameters provides near-optimal error guarantees for a range of distribution families satisfying mild concentration bounds and moment conditions. At the technical level, this algorithm employs an iterative “spectral” technique for outlier detection and removal inspired by recent work in robust unsupervised learning, which makes essential use of low-degree multivariate polynomials. Our robust learning algorithm for low-degree PTFs provides dimension-independent error guarantees for a class of tame distributions, including Gaussians and, more generally, any logconcave distribution with (approximately) known low-degree moments. For LTFs under the Gaussian distribution, using a refinement of the localization technique, we give a polynomial-time algorithm that achieves a near-optimal error of O(є), where є is the noise rate. Our robust learning algorithm for intersections of halfspaces proceeds by projecting down to an appropriate low-dimensional subspace. Its correctness makes essential use of a novel robust inverse independence lemma that is of independent interest. @InProceedings{STOC18p1061, author = {Ilias Diakonikolas and Daniel M. Kane and Alistair Stewart}, title = {Learning Geometric Concepts with Nasty Noise}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1061--1073}, doi = {}, year = {2018}, } STOC '18: "Testing Conditional Independence ..." Testing Conditional Independence of Discrete Distributions Clément L. Canonne, Ilias Diakonikolas, Daniel M. Kane, and Alistair Stewart (Stanford University, USA; University of Southern California, USA; University of California at San Diego, USA) We study the problem of testing *conditional independence* for discrete distributions. Specifically, given samples from a discrete random variable (X, Y, Z) on domain [ℓ1]×[ℓ2] × [n], we want to distinguish, with probability at least 2/3, between the case that X and Y are conditionally independent given Z from the case that (X, Y, Z) is є-far, in ℓ1-distance, from every distribution that has this property. Conditional independence is a concept of central importance in probability and statistics with important applications in various scientific domains. As such, the statistical task of testing conditional independence has been extensively studied in various forms within the statistics and econometrics community for nearly a century. 
Perhaps surprisingly, this problem has not been previously considered in the framework of distribution property testing, and in particular no tester with *sublinear* sample complexity is known, even for the important special case that the domains of X and Y are binary. The main algorithmic result of this work is the first conditional independence tester with sublinear sample complexity. To complement our upper bound, we prove information-theoretic lower bounds establishing that the sample complexity of our algorithm is optimal, up to constant factors, for a number of settings. Specifically, for the prototypical setting when ℓ_1, ℓ_2 = O(1), we show that the sample complexity of testing conditional independence (upper bound and matching lower bound) is O(max(√n/є^2, min(n^{7/8}/є, n^{6/7}/є^{8/7}))). We also achieve sublinear sample complexity for the general case of arbitrary ℓ_1, ℓ_2, and n. To obtain our tester, we employ a variety of tools, including (1) a subtle weighted adaptation of the "flattening" technique, and (2) the design and analysis of an optimal (unbiased) estimator for the following statistical problem of independent interest: Given a degree-d polynomial Q: ℝ^n → ℝ and sample access to a distribution p over [n], estimate Q(p_1, …, p_n) up to small additive error. Obtaining tight variance analyses for specific estimators of this form has been a major technical hurdle in distribution testing. As an important contribution of this work, we develop a general theory providing tight variance bounds for *all* such estimators. @InProceedings{STOC18p735, author = {Clément L. Canonne and Ilias Diakonikolas and Daniel M. Kane and Alistair Stewart}, title = {Testing Conditional Independence of Discrete Distributions}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {735--748}, doi = {}, year = {2018}, } |
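As a point of reference for the testing problem, the sketch below computes the naive plug-in estimate of conditional mutual information I(X;Y|Z), which is zero exactly when the empirical X and Y are conditionally independent given Z. This is our own illustration of the object being tested, not the paper's tester, and it needs far more samples than the sublinear algorithm above.

```python
import numpy as np

def empirical_cmi(samples, lx, ly, lz):
    """Plug-in estimate of I(X;Y|Z) from (x, y, z) triples over
    [lx] x [ly] x [lz]; equals 0 iff the empirical distribution makes
    X and Y conditionally independent given Z."""
    p = np.zeros((lx, ly, lz))
    for x, y, z in samples:
        p[x, y, z] += 1
    p /= p.sum()
    pz = p.sum(axis=(0, 1))    # P(Z)
    pxz = p.sum(axis=1)        # P(X, Z)
    pyz = p.sum(axis=0)        # P(Y, Z)
    cmi = 0.0
    for x in range(lx):
        for y in range(ly):
            for z in range(lz):
                if p[x, y, z] > 0:
                    cmi += p[x, y, z] * np.log(
                        p[x, y, z] * pz[z] / (pxz[x, z] * pyz[y, z]))
    return cmi
```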
|
Dinur, Irit |
STOC '18: "Towards a Proof of the 2-to-1 ..."
Towards a Proof of the 2-to-1 Games Conjecture?
Irit Dinur, Subhash Khot, Guy Kindler, Dor Minzer, and Muli Safra (Weizmann Institute of Science, Israel; New York University, USA; Hebrew University of Jerusalem, Israel; Tel Aviv University, Israel) We present a polynomial time reduction from gap-3LIN to label cover with 2-to-1 constraints. In the “yes” case the fraction of satisfied constraints is at least 1 − ε, and in the “no” case we show that this fraction is at most ε, assuming a certain (new) combinatorial hypothesis on the Grassmann graph. In other words, we describe a combinatorial hypothesis that implies the 2-to-1 conjecture with imperfect completeness. The companion submitted paper [Dinur, Khot, Kindler, Minzer and Safra, STOC 2018] makes some progress towards proving this hypothesis. Our work builds on earlier work by a subset of the authors [Khot, Minzer and Safra, STOC 2017], where a slightly different hypothesis was used to obtain hardness of approximating vertex cover to within a factor of √2 − ε. The most important implication of this work is (assuming the hypothesis) an NP-hardness gap of 1/2 − ε vs. ε for unique games. In addition, we derive optimal NP-hardness for approximating the max-cut-gain problem, NP-hardness of coloring an almost 4-colorable graph with any constant number of colors, and the same √2 − ε NP-hardness for approximate vertex cover that was already obtained based on a slightly different hypothesis. Recent progress towards proving our hypothesis [Barak, Kothari and Steurer, ECCC TR18-077], [Dinur, Khot, Kindler, Minzer and Safra, STOC 2018] directly implies some new unconditional NP-hardness results. These include new points of NP-hardness for unique games and for 2-to-1 and 2-to-2 games. More recently, the full version of our hypothesis was proven [Khot, Minzer and Safra, ECCC TR18-006]. @InProceedings{STOC18p376, author = {Irit Dinur and Subhash Khot and Guy Kindler and Dor Minzer and Muli Safra}, title = {Towards a Proof of the 2-to-1 Games Conjecture?}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {376--389}, doi = {}, year = {2018}, } STOC '18: "On Non-optimally Expanding ..." On Non-optimally Expanding Sets in Grassmann Graphs Irit Dinur, Subhash Khot, Guy Kindler, Dor Minzer, and Muli Safra (Weizmann Institute of Science, Israel; New York University, USA; Hebrew University of Jerusalem, Israel; Tel Aviv University, Israel) We study the structure of non-expanding sets in the Grassmann graph. We put forth a hypothesis stating that every small set whose expansion is smaller than 1 − δ must be correlated with one of a specified list of sets which are isomorphic to smaller Grassmann graphs. We develop a framework of Fourier analysis for analyzing functions over the Grassmann graph, and prove that our hypothesis holds for all sets whose expansion is below 7/8. In the companion submitted paper [Dinur, Khot, Kindler, Minzer and Safra, STOC 2018], the authors show that a linearity agreement hypothesis implies an NP-hardness gap of 1/2 − ε vs. ε for unique games and other inapproximability results. In [Barak, Kothari and Steurer, ECCC TR18-077], the authors show that the hypothesis in this work implies the linearity agreement hypothesis of [Dinur, Khot, Kindler, Minzer and Safra, STOC 2018]. Combined with our main theorem here, this proves a version of the linearity agreement hypothesis with certain specific parameters. Short of proving the entire hypothesis, this nevertheless suffices for getting new unconditional NP-hardness gaps for label cover with 2-to-1 and unique constraints.
Our Expansion Hypothesis has subsequently been proven in its full form [Khot, Minzer and Safra, ECCC TR18-006], thereby proving the agreement hypothesis of [Dinur, Khot, Kindler, Minzer and Safra, STOC 2018] and completing the proof of the 2-to-1 Games Conjecture (albeit with imperfect completeness). @InProceedings{STOC18p940, author = {Irit Dinur and Subhash Khot and Guy Kindler and Dor Minzer and Muli Safra}, title = {On Non-optimally Expanding Sets in Grassmann Graphs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {940--951}, doi = {}, year = {2018}, } |
|
Dudek, Bartłomiej |
STOC '18: "Universal Protocols for Information ..."
Universal Protocols for Information Dissemination using Emergent Signals
Bartłomiej Dudek and Adrian Kosowski (University of Wrocław, Poland; Inria, France) We consider a population of n agents which communicate with each other in a decentralized manner, through random pairwise interactions. One or more agents in the population may act as authoritative sources of information, and the objective of the remaining agents is to obtain information from or about these source agents. We study two basic tasks: broadcasting, in which the agents are to learn the bit-state of an authoritative source which is present in the population, and source detection, in which the agents are required to decide if at least one source agent is present in the population or not. We focus on designing protocols which meet two natural conditions: (1) universality, i.e., independence of population size, and (2) rapid convergence to a correct global state after a reconfiguration, such as a change in the state of a source agent. Our main positive result is to show that both of these constraints can be met. For both the broadcasting problem and the source detection problem, we obtain solutions with an expected convergence time of O(log n), from any starting configuration. The solution to broadcasting is exact, which means that all agents reach the state broadcast by the source, while the solution to source detection admits one-sided error on an ε-fraction of the population (which is unavoidable for this problem). Both protocols are easy to implement in practice and are self-stabilizing, in the sense that the stated bounds on convergence time hold starting from any possible initial configuration of the system. Our protocols exploit the properties of self-organizing oscillatory dynamics. On the hardness side, our main structural insight is to prove that any protocol which meets the constraints of universality and of rapid convergence after reconfiguration must display a form of non-stationary behavior (of which oscillatory dynamics are an example). We also observe that the periodicity of the oscillatory behavior of the protocol, when present, must necessarily depend on the number #X of source agents present in the population. For instance, our protocols inherently rely on the emergence of a signal passing through the population, whose period is Θ(log(n/#X)) rounds for most starting configurations. The design of phase clocks with tunable frequency may be of independent interest, notably in modeling biological networks. @InProceedings{STOC18p87, author = {Bartłomiej Dudek and Adrian Kosowski}, title = {Universal Protocols for Information Dissemination using Emergent Signals}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {87--99}, doi = {}, year = {2018}, } |
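For intuition about the interaction model, here is a toy simulation (our own, with a naive one-way epidemic rule; the paper's protocols are self-stabilizing and oscillatory, which this rule is not) showing the Θ(log n) parallel-time spreading behavior that universal broadcasting must achieve.

```python
import random

def epidemic_broadcast(n):
    """Spread one source bit by random pairwise interactions.

    Returns parallel time = interactions / n; with a single source this
    concentrates around Theta(log n).
    """
    informed = [False] * n
    informed[0] = True                      # agent 0 is the source
    interactions = 0
    while not all(informed):
        u, v = random.sample(range(n), 2)   # a random pair interacts
        if informed[u] or informed[v]:
            informed[u] = informed[v] = True
        interactions += 1
    return interactions / n

print(epidemic_broadcast(1000))
```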
|
Dutta, Pranjal |
STOC '18: "Discovering the Roots: Uniform ..."
Discovering the Roots: Uniform Closure Results for Algebraic Classes under Factoring
Pranjal Dutta, Nitin Saxena, and Amit Sinhababu (Chennai Mathematical Institute, India; IIT Kanpur, India) Newton iteration (NI) is an almost 350-year-old recursive formula that approximates a simple root of a polynomial quite rapidly. We generalize it to a matrix recurrence (allRootsNI) that approximates all the roots simultaneously. In this form, the process yields a better circuit complexity in the case when the number of roots r is small but the multiplicities are exponentially large. Our method sets up a linear system in r unknowns and iteratively builds the roots as formal power series. For an algebraic circuit f(x_1, …, x_n) of size s, we prove that each factor has size at most a polynomial in s and the degree of the squarefree part of f. Consequently, if f_1 is a 2^{Ω(n)}-hard polynomial, then any nonzero multiple ∏_i f_i^{e_i} is equally hard for arbitrary positive e_i’s, assuming that ∑_i deg(f_i) is at most 2^{O(n)}. It is an old open question whether the class of poly(n)-sized formulas (resp. algebraic branching programs) is closed under factoring. We show that given a polynomial f of degree n^{O(1)} and formula (resp. ABP) size n^{O(log n)}, we can find a similar size formula (resp. ABP) factor in randomized poly(n^{log n})-time. Consequently, if the determinant requires an n^{Ω(log n)}-size formula, then the same can be said about any of its nonzero multiples. As part of our proofs, we identify a new property of multivariate polynomial factorization. We show that under a random linear transformation τ, f(τx) completely factors via power series roots. Moreover, the factorization adapts well to circuit complexity analysis. This, together with allRootsNI, are the techniques that help us make progress towards the old open problems, supplementing the large body of classical results and concepts in algebraic circuit factorization (e.g., Zassenhaus, J. Number Theory 1969; Kaltofen, STOC 1985–87; Bürgisser, FOCS 2001). @InProceedings{STOC18p1152, author = {Pranjal Dutta and Nitin Saxena and Amit Sinhababu}, title = {Discovering the Roots: Uniform Closure Results for Algebraic Classes under Factoring}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1152--1165}, doi = {}, year = {2018}, } |
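The scalar recurrence being generalized is worth seeing once. A minimal sketch of classical Newton iteration (our illustration; the paper's allRootsNI replaces this scalar update with a matrix recurrence that grows all r roots as formal power series):

```python
def newton_root(f, df, x0, steps=8):
    """Newton iteration x <- x - f(x)/f'(x); near a simple root the number
    of correct digits roughly doubles per step (quadratic convergence)."""
    x = x0
    for _ in range(steps):
        x -= f(x) / df(x)
    return x

# Example: the golden ratio as the positive root of x^2 - x - 1.
print(newton_root(lambda x: x * x - x - 1, lambda x: 2 * x - 1, 1.0))
```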
|
Dwork, Cynthia |
STOC '18: "Composable and Versatile Privacy ..."
Composable and Versatile Privacy via Truncated CDP
Mark Bun, Cynthia Dwork, Guy N. Rothblum, and Thomas Steinke (Princeton University, USA; Harvard University, USA; Weizmann Institute of Science, Israel; IBM Research, USA) We propose truncated concentrated differential privacy (tCDP), a refinement of differential privacy and of concentrated differential privacy. This new definition provides robust and efficient composition guarantees, supports powerful algorithmic techniques such as privacy amplification via sub-sampling, and enables more accurate statistical analyses. In particular, we show a central task for which the new definition enables exponential accuracy improvement. @InProceedings{STOC18p74, author = {Mark Bun and Cynthia Dwork and Guy N. Rothblum and Thomas Steinke}, title = {Composable and Versatile Privacy via Truncated CDP}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {74--86}, doi = {}, year = {2018}, } |
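The primitives that definitions like tCDP account for are noise-addition mechanisms under repeated, adaptive use. Below is a generic sketch (our own, not an algorithm from the paper) of the Gaussian mechanism for a sensitivity-1 counting query; what a privacy calculus such as tCDP tracks is how the parameters degrade as many such releases are composed and subsampled.

```python
import numpy as np

def noisy_count(data, predicate, sigma=2.0):
    """Release a counting query (sensitivity 1) with Gaussian noise.

    Larger sigma gives more privacy per release; composition rules of
    CDP-style definitions govern how many releases a budget allows."""
    true_count = sum(1 for row in data if predicate(row))
    return true_count + np.random.normal(0.0, sigma)

print(noisy_count(range(100), lambda r: r % 3 == 0))
```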
|
Eden, Talya |
STOC '18: "On Approximating the Number ..."
On Approximating the Number of k-Cliques in Sublinear Time
Talya Eden, Dana Ron, and C. Seshadhri (Tel Aviv University, Israel; University of California at Santa Cruz, USA) We study the problem of approximating the number of k-cliques in a graph when given query access to the graph. We consider the standard query model for general graphs via (1) degree queries, (2) neighbor queries, and (3) pair queries. Let n denote the number of vertices in the graph, m the number of edges, and C_k the number of k-cliques. We design an algorithm that outputs a (1+ε)-approximation (with high probability) for C_k, whose expected query complexity and running time are O(n/C_k^{1/k} + m^{k/2}/C_k) · poly(log n, 1/ε, k). Hence, the complexity of the algorithm is sublinear in the size of the graph for C_k = ω(m^{k/2−1}). Furthermore, we prove a lower bound showing that the query complexity of our algorithm is essentially optimal (up to the dependence on log n, 1/ε and k). The previous results in this vein are by Feige (SICOMP 06) and by Goldreich and Ron (RSA 08) for edge counting (k=2) and by Eden et al. (FOCS 2015) for triangle counting (k=3). Our result matches the complexities of these results. The previous result by Eden et al. hinges on a certain amortization technique that works only for triangle counting, and does not generalize to larger cliques. We obtain a general algorithm that works for any k ≥ 3 by designing a procedure that samples each k-clique incident to a given set S of vertices with approximately equal probability. The primary difficulty is in finding cliques incident to purely high-degree vertices, since random sampling within neighbors has a low success probability. This is achieved by an algorithm that samples uniform random high-degree vertices and a careful tradeoff between estimating cliques incident purely to high-degree vertices and those that include a low-degree vertex. @InProceedings{STOC18p722, author = {Talya Eden and Dana Ron and C. Seshadhri}, title = {On Approximating the Number of k-Cliques in Sublinear Time}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {722--734}, doi = {}, year = {2018}, } |
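The query model in which the algorithm's complexity is measured can be captured by a small wrapper (our own sketch): the algorithm is charged one unit per degree, neighbor, or pair probe, and its goal is a (1+ε)-estimate of C_k using far fewer probes than reading the whole graph.

```python
class GraphOracle:
    """Sublinear-time query access to a graph given as adjacency lists:
    degree queries, i-th neighbor queries, and pair (adjacency) queries."""

    def __init__(self, adj):
        self.adj = [list(nbrs) for nbrs in adj]
        self.edges = {(u, v) for u, nbrs in enumerate(self.adj) for v in nbrs}
        self.queries = 0  # total probes charged to the algorithm

    def degree(self, v):
        self.queries += 1
        return len(self.adj[v])

    def neighbor(self, v, i):
        self.queries += 1
        return self.adj[v][i]

    def pair(self, u, v):
        self.queries += 1
        return (u, v) in self.edges
```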
|
Efremenko, Klim |
STOC '18: "Interactive Coding over the ..."
Interactive Coding over the Noisy Broadcast Channel
Klim Efremenko, Gillat Kol, and Raghuvansh Saxena (Ben-Gurion University of the Negev, Israel; Princeton University, USA) A set of n players, each holding a private input bit, communicate over a noisy broadcast channel. Their mutual goal is for all players to learn all inputs. At each round, one of the players broadcasts a bit to all the other players, and the bit received by each player is flipped with a fixed constant probability (independently for each recipient). How many rounds are needed? This problem was first suggested by El Gamal in 1984. In 1988, Gallager gave an elegant noise-resistant protocol requiring only O(n log log n) rounds. The problem was resolved in 2005 by a seminal paper of Goyal, Kindler, and Saks, proving that Gallager’s protocol is essentially optimal. We revisit the above noisy broadcast problem and show that O(n) rounds suffice. This is possible due to a relaxation of the model assumed by the previous works. We no longer demand that exactly one player broadcasts in every round, but rather allow any number of players to broadcast. However, if it is not the case that exactly one player chooses to broadcast, each of the other players gets an adversarially chosen bit. We generalize the above result and initiate the study of interactive coding over the noisy broadcast channel. We show that any interactive protocol that works over the noiseless broadcast channel can be simulated over our restrictive noisy broadcast model with constant blowup of the communication. Our results also establish that modern techniques for interactive coding can help us make progress on classical problems. @InProceedings{STOC18p507, author = {Klim Efremenko and Gillat Kol and Raghuvansh Saxena}, title = {Interactive Coding over the Noisy Broadcast Channel}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {507--520}, doi = {}, year = {2018}, } |
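The basic difficulty in the model is that naive noise reduction costs a logarithmic factor. Here is a toy sketch (ours) of one broadcast primitive with repetition-plus-majority decoding; protocols like Gallager's, and the interactive coding results above, are precisely about beating the overhead this naive approach pays per bit.

```python
import random

def broadcast_with_repetition(bit, n_receivers, eps=0.1, reps=9):
    """Speaker sends `bit` reps times; each copy flips independently with
    probability eps per receiver; receivers decode by majority, so the
    per-receiver error decays exponentially in reps."""
    decoded = []
    for _ in range(n_receivers):
        flips = sum(random.random() < eps for _ in range(reps))
        ones = reps - flips if bit else flips   # copies received as 1
        decoded.append(1 if ones > reps / 2 else 0)
    return decoded
```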
|
El Alaoui, Ahmed |
STOC '18: "Tight Query Complexity Lower ..."
Tight Query Complexity Lower Bounds for PCA via Finite Sample Deformed Wigner Law
Max Simchowitz, Ahmed El Alaoui, and Benjamin Recht (University of California at Berkeley, USA) We prove a query complexity lower bound for approximating the top r-dimensional eigenspace of a matrix. We consider an oracle model where, given a symmetric matrix M ∈ ℝ^{d × d}, an algorithm Alg is allowed to make T exact queries of the form w^{(i)} = M v^{(i)} for i in {1, ..., T}, where v^{(i)} is drawn from a distribution which depends arbitrarily on the past queries and measurements {v^{(j)}, w^{(j)}}_{1 ≤ j ≤ i−1}. We show that for every gap ∈ (0, 1/2], there exists a distribution over matrices M for which 1) gap_r(M) = Ω(gap) (where gap_r(M) is the normalized gap between the r-th and (r+1)-st largest-magnitude eigenvalues of M), and 2) any Alg which takes fewer than const × (r log d)/√gap queries fails (with overwhelming probability) to identify a matrix V ∈ ℝ^{d × r} with orthonormal columns for which ⟨V, MV⟩ ≥ (1 − const × gap) ∑_{i=1}^{r} λ_i(M). Our bound requires only that d is a small polynomial in 1/gap and r, and matches the upper bounds of Musco and Musco ’15. Moreover, it establishes a strict separation between convex optimization and “strict-saddle” non-convex optimization, of which PCA is a canonical example: in the former, first-order methods can have dimension-free iteration complexity, whereas in PCA, the iteration complexity of gradient-based methods must necessarily grow with the dimension. Our argument proceeds via a reduction to estimating a rank-r spike in a deformed Wigner model M = W + λUU^⊤, where W is from the Gaussian Orthogonal Ensemble, U is uniform on the d × r Stiefel manifold, and λ > 1 governs the size of the perturbation. Surprisingly, this ubiquitous random matrix model witnesses the worst-case rate for eigenspace approximation, and the ‘accelerated’ gap^{−1/2} in the rate follows as a consequence of the correspondence between the asymptotic eigengap and the size of the perturbation λ, when λ is near the “phase transition” λ = 1. To verify that d need only be polynomial in gap^{−1} and r, we prove a finite sample convergence theorem for top eigenvalues of a deformed Wigner matrix, which may be of independent interest. We then lower bound the above estimation problem with a novel technique based on Fano-style data-processing inequalities with truncated likelihoods; the technique generalizes the Bayes-risk lower bound of Chen et al. ’16, and we believe it is particularly suited to lower bounds in adaptive settings like the one considered in this paper. @InProceedings{STOC18p1249, author = {Max Simchowitz and Ahmed El Alaoui and Benjamin Recht}, title = {Tight Query Complexity Lower Bounds for PCA via Finite Sample Deformed Wigner Law}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1249--1259}, doi = {}, year = {2018}, } |
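The oracle model in the lower bound counts exact matrix-vector products. A minimal sketch (ours) of the classical power method in that model, which spends roughly (log d)/gap queries; the paper shows about (r log d)/√gap queries are unavoidable, a bound met by block Krylov methods rather than by this simple iteration.

```python
import numpy as np

def power_method(matvec, d, iters, rng=np.random.default_rng(0)):
    """Estimate a top eigenvector using one matrix-vector query per step."""
    v = rng.standard_normal(d)
    v /= np.linalg.norm(v)
    for _ in range(iters):
        v = matvec(v)              # one exact query w = M v
        v /= np.linalg.norm(v)
    return v

# Example: M = diag(2, 1) has top eigenvector e_1.
M = np.diag([2.0, 1.0])
print(power_method(lambda v: M @ v, 2, 50))
```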
|
Emek, Yuval |
STOC '18: "Approximating Generalized ..."
Approximating Generalized Network Design under (Dis)economies of Scale with Applications to Energy Efficiency
Yuval Emek, Shay Kutten, Ron Lavi, and Yangguang Shi (Technion, Israel) In a generalized network design (GND) problem, a set of resources is assigned (non-exclusively) to multiple requests. Each request contributes its weight to the resources it uses, and the total load on a resource is then translated to the cost it incurs via a resource-specific cost function. Motivated by energy efficiency applications, there has recently been growing interest in GND using cost functions that exhibit (dis)economies of scale ((D)oS), namely, cost functions that appear subadditive for small loads and superadditive for larger loads. The current paper advances the existing literature on approximation algorithms for GND problems with (D)oS cost functions in various aspects: (1) while the existing results are restricted to routing requests in undirected graphs, identifying the resources with the graph's edges, the current paper presents a generic approximation framework that yields approximation results for a much wider family of requests (including various types of Steiner tree and Steiner forest requests) in both directed and undirected graphs, where the resources can be identified with either the edges or the vertices; (2) while the existing results assume that a request contributes the same weight to each resource it uses, our approximation framework allows for unrelated weights, thus providing the first non-trivial approximation for the problem of scheduling unrelated parallel machines with (D)oS cost functions; (3) while most of the existing approximation algorithms are based on convex programming, our approximation framework is fully combinatorial and runs in strongly polynomial time; (4) the family of (D)oS cost functions considered in the current paper is more general than the one considered in the existing literature, providing a more accurate abstraction for practical energy conservation scenarios; and (5) we obtain the first approximation ratio for GND with (D)oS cost functions that depends only on the parameters of the resources' technology and does not grow with the number of resources, the number of requests, or their weights. The design of our approximation framework relies heavily on Roughgarden's smoothness toolbox (JACM 2015), thus demonstrating the possible usefulness of this toolbox in the area of approximation algorithms. @InProceedings{STOC18p598, author = {Yuval Emek and Shay Kutten and Ron Lavi and Yangguang Shi}, title = {Approximating Generalized Network Design under (Dis)economies of Scale with Applications to Energy Efficiency}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {598--606}, doi = {}, year = {2018}, } |
|
Erickson, Jeff |
STOC '18: "Holiest Minimum-Cost Paths ..."
Holiest Minimum-Cost Paths and Flows in Surface Graphs
Jeff Erickson, Kyle Fox, and Luvsandondov Lkhamsuren (University of Illinois at Urbana-Champaign, USA; University of Texas at Dallas, USA; Airbnb, USA) Let G be an edge-weighted directed graph with n vertices embedded on an orientable surface of genus g. We describe a simple deterministic lexicographic perturbation scheme that guarantees uniqueness of minimum-cost flows and shortest paths in G. The perturbations take O(gn) time to compute. We use our perturbation scheme in a black-box manner to derive a deterministic O(n log log n) time algorithm for minimum cut in directed edge-weighted planar graphs and a deterministic O(g^2 n log n) time preprocessing scheme for the multiple-source shortest paths problem of computing a shortest path oracle for all vertices lying on a common face of a surface-embedded graph. The latter result yields faster deterministic near-linear time algorithms for a variety of problems in constant-genus surface-embedded graphs. Finally, we open the black box in order to generalize a recent linear-time algorithm for multiple-source shortest paths in unweighted undirected planar graphs to work in arbitrary orientable surfaces. Our algorithm runs in O(g^2 n log g) time in this setting, and it can be used to give improved linear-time algorithms for several problems in unweighted undirected surface-embedded graphs of constant genus, including the computation of minimum cuts, shortest topologically non-trivial cycles, and minimum homology bases. @InProceedings{STOC18p1319, author = {Jeff Erickson and Kyle Fox and Luvsandondov Lkhamsuren}, title = {Holiest Minimum-Cost Paths and Flows in Surface Graphs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1319--1332}, doi = {}, year = {2018}, } |
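The generic idea of lexicographic perturbation can be seen in a few lines (our sketch, on abstract weighted digraphs; the paper's contribution is doing this on surface-embedded graphs in O(gn) time and extending it to minimum-cost flows): augmenting each path cost with tie-breaking coordinates acts like an infinitesimal perturbation and forces the shortest-path tree to be unique.

```python
import heapq

def unique_sssp(adj, s):
    """Dijkstra with lexicographic keys (cost, hop count, predecessor).

    Ties in cost are broken deterministically, so the returned keys define
    a unique shortest-path tree even when many equal-cost paths exist.
    adj[u] is a list of (v, weight) pairs.
    """
    best = {}
    heap = [((0.0, 0, -1), s)]
    while heap:
        key, u = heapq.heappop(heap)
        if u in best:
            continue                 # already settled with a smaller key
        best[u] = key
        cost, hops, _ = key
        for v, w in adj[u]:
            if v not in best:
                heapq.heappush(heap, ((cost + w, hops + 1, u), v))
    return best
```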
|
Fawzi, Omar |
STOC '18: "Efficient Decoding of Random ..."
Efficient Decoding of Random Errors for Quantum Expander Codes
Omar Fawzi, Antoine Grospellier, and Anthony Leverrier (ENS Lyon, France; Inria, France) We show that quantum expander codes, a constant-rate family of quantum low-density parity check (LDPC) codes, equipped with the quasi-linear time decoding algorithm of Leverrier, Tillich and Zémor, can correct a constant fraction of random errors with very high probability. This is the first construction of a constant-rate quantum LDPC code with an efficient decoding algorithm that can correct a linear number of random errors with a negligible failure probability. Finding codes with these properties is also motivated by Gottesman’s construction of fault-tolerant schemes with constant space overhead. In order to obtain this result, we study a notion of α-percolation: for a random subset E of vertices of a given graph, we consider the size of the largest connected α-subset of E, where X is an α-subset of E if |X ∩ E| ≥ α |X|. @InProceedings{STOC18p521, author = {Omar Fawzi and Antoine Grospellier and Anthony Leverrier}, title = {Efficient Decoding of Random Errors for Quantum Expander Codes}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {521--534}, doi = {}, year = {2018}, } |
|
Filos-Ratsikas, Aris |
STOC '18: "Consensus Halving Is PPA-Complete ..."
Consensus Halving Is PPA-Complete
Aris Filos-Ratsikas and Paul W. Goldberg (University of Oxford, UK) We show that the computational problem Consensus Halving is PPA-Complete, the first PPA-Completeness result for a problem whose definition does not involve an explicit circuit. We also show that an approximate version of this problem is polynomial-time equivalent to Necklace Splitting, which establishes PPAD-hardness for Necklace Splitting and suggests that it is also PPA-Complete. @InProceedings{STOC18p51, author = {Aris Filos-Ratsikas and Paul W. Goldberg}, title = {Consensus Halving Is PPA-Complete}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {51--64}, doi = {}, year = {2018}, } |
|
Filtser, Arnold |
STOC '18: "Metric Embedding via Shortest ..."
Metric Embedding via Shortest Path Decompositions
Ittai Abraham, Arnold Filtser, Anupam Gupta, and Ofer Neiman (VMware, USA; Ben-Gurion University of the Negev, Israel; Carnegie Mellon University, USA) We study the problem of embedding weighted graphs of pathwidth k into ℓ_p spaces. Our main result is an O(k^{min{1/p, 1/2}})-distortion embedding. For p = 1, this is a super-exponential improvement over the best previous bound of Lee and Sidiropoulos. Our distortion bound is asymptotically tight for any fixed p > 1. Our result is obtained via a novel embedding technique that is based on low-depth decompositions of a graph via shortest paths. The core new idea is that given a geodesic shortest path P, we can probabilistically embed all points into 2 dimensions with respect to P. For p > 2 our embedding also implies improved distortion on bounded treewidth graphs (O((k log n)^{1/p})). For asymptotically large p, our results also imply improved distortion on graphs excluding a minor. @InProceedings{STOC18p952, author = {Ittai Abraham and Arnold Filtser and Anupam Gupta and Ofer Neiman}, title = {Metric Embedding via Shortest Path Decompositions}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {952--963}, doi = {}, year = {2018}, } |
|
Fineman, Jeremy T. |
STOC '18: "Nearly Work-Efficient Parallel ..."
Nearly Work-Efficient Parallel Algorithm for Digraph Reachability
Jeremy T. Fineman (Georgetown University, USA) One of the simplest problems on directed graphs is that of identifying the set of vertices reachable from a designated source vertex. This problem can be solved easily sequentially by performing a graph search, but efficient parallel algorithms have eluded researchers for decades. For sparse high-diameter graphs in particular, there is no known work-efficient parallel algorithm with nontrivial parallelism. This amounts to one of the most fundamental open questions in parallel graph algorithms: Is there a parallel algorithm for digraph reachability with nearly linear work? This paper shows that the answer is yes. This paper presents a randomized parallel algorithm for digraph reachability and related problems with expected work Õ(m) and span Õ(n^{2/3}), and hence parallelism Ω(m/n^{2/3}) = Ω(n^{1/3}), on any graph with n vertices and m arcs. This is the first parallel algorithm having both nearly linear work and strongly sublinear span, i.e., span Õ(n^{1−є}) for some constant є > 0. The algorithm can be extended to produce a directed spanning tree, determine whether the graph is acyclic, topologically sort the strongly connected components of the graph, or produce a directed ear decomposition, all with work Õ(m) and span Õ(n^{2/3}). The main technical contribution is an efficient Monte Carlo algorithm that, through the addition of Õ(n) shortcuts, reduces the diameter of the graph to Õ(n^{2/3}) with high probability. While both sequential and parallel algorithms are known with those combinatorial properties, even the sequential algorithms are not efficient, having sequential runtime m · n^{Ω(1)}. This paper presents a surprisingly simple sequential algorithm that achieves the stated diameter reduction and runs in Õ(m) time. Parallelizing that algorithm yields the main result, but doing so involves overcoming several other challenges. @InProceedings{STOC18p457, author = {Jeremy T. Fineman}, title = {Nearly Work-Efficient Parallel Algorithm for Digraph Reachability}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {457--470}, doi = {}, year = {2018}, } |
|
Forbes, Michael A. |
STOC '18: "A PSPACE Construction of a ..."
A PSPACE Construction of a Hitting Set for the Closure of Small Algebraic Circuits
Michael A. Forbes and Amir Shpilka (University of Illinois at Urbana-Champaign, USA; Tel Aviv University, Israel) In this paper we study the complexity of constructing a hitting set for the closure of VP, the class of polynomials that can be infinitesimally approximated by polynomials that are computed by polynomial-sized algebraic circuits, over the real or complex numbers. Specifically, we show that there is a PSPACE algorithm that, given n, s, r in unary, outputs a set of rational n-tuples of size poly(n, s, r), with poly(n, s, r) bit complexity, that hits all n-variate polynomials of degree r that are the limit of size-s algebraic circuits. Previously it was known that a random set of this size is a hitting set, but a construction that is certified to work was only known in EXPSPACE (or EXPH assuming the generalized Riemann hypothesis). As a corollary we get that a host of other algebraic problems, such as the Noether Normalization Lemma, can also be solved in PSPACE deterministically, where earlier only randomized algorithms and EXPSPACE algorithms (or EXPH assuming the generalized Riemann hypothesis) were known. The proof relies on the new notion of a robust hitting set, which is a set of inputs such that any nonzero polynomial that can be computed by a polynomial-size algebraic circuit evaluates to a not-too-small value on at least one element of the set. Proving the existence of such a robust hitting set is the main technical difficulty in the proof. Our proof uses anti-concentration results for polynomials, basic tools from algebraic geometry, and the existential theory of the reals. @InProceedings{STOC18p1180, author = {Michael A. Forbes and Amir Shpilka}, title = {A PSPACE Construction of a Hitting Set for the Closure of Small Algebraic Circuits}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1180--1192}, doi = {}, year = {2018}, } |
|
Fox, Kyle |
STOC '18: "Holiest Minimum-Cost Paths ..."
Holiest Minimum-Cost Paths and Flows in Surface Graphs
Jeff Erickson, Kyle Fox, and Luvsandondov Lkhamsuren (University of Illinois at Urbana-Champaign, USA; University of Texas at Dallas, USA; Airbnb, USA) Let G be an edge-weighted directed graph with n vertices embedded on an orientable surface of genus g. We describe a simple deterministic lexicographic perturbation scheme that guarantees uniqueness of minimum-cost flows and shortest paths in G. The perturbations take O(gn) time to compute. We use our perturbation scheme in a black-box manner to derive a deterministic O(n log log n) time algorithm for minimum cut in directed edge-weighted planar graphs and a deterministic O(g^2 n log n) time preprocessing scheme for the multiple-source shortest paths problem of computing a shortest path oracle for all vertices lying on a common face of a surface-embedded graph. The latter result yields faster deterministic near-linear time algorithms for a variety of problems in constant-genus surface-embedded graphs. Finally, we open the black box in order to generalize a recent linear-time algorithm for multiple-source shortest paths in unweighted undirected planar graphs to work in arbitrary orientable surfaces. Our algorithm runs in O(g^2 n log g) time in this setting, and it can be used to give improved linear-time algorithms for several problems in unweighted undirected surface-embedded graphs of constant genus, including the computation of minimum cuts, shortest topologically non-trivial cycles, and minimum homology bases. @InProceedings{STOC18p1319, author = {Jeff Erickson and Kyle Fox and Luvsandondov Lkhamsuren}, title = {Holiest Minimum-Cost Paths and Flows in Surface Graphs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1319--1332}, doi = {}, year = {2018}, } |
|
Franceschetti, Massimo |
STOC '18: "Shape of Diffusion and Size ..."
Shape of Diffusion and Size of Monochromatic Region of a Two-Dimensional Spin System
Hamed Omidvar and Massimo Franceschetti (University of California at San Diego, USA) We consider an agent-based distributed algorithm with exponentially distributed waiting times in which agents with binary states interact locally over a geometric graph, and based on this interaction and on the value of a common intolerance threshold τ, decide whether to change their states. This model is equivalent to an Asynchronous Cellular Automaton (ACA) with extended Moore neighborhoods, a zero-temperature Ising model with Glauber dynamics, or a Schelling model of self-organized segregation in an open system, and has applications in the analysis of social and biological networks and spin glass systems. We prove a shape theorem for the spread of the “affected” nodes during the process dynamics, and show that in the steady state, for τ ∈ (τ*, 1−τ*) ∖ {1/2}, where τ* ≈ 0.488, the size of the “monochromatic region” at the end of the process is at least exponential in the size of the local neighborhood of interaction with probability approaching one as N grows. Combined with previous results on the expected size of the monochromatic region that provide a matching upper bound, this implies that in the steady state the size of the monochromatic region of any agent is exponential with high probability for the mentioned interval of τ. The shape theorem is based on a novel concentration inequality for the spreading time, and provides a precise geometrical description of the process dynamics. The result on the size of the monochromatic region considerably extends our understanding of the steady state. Showing convergence with high probability, it rules out the possibility that only a small fraction of the nodes are eventually contained in large monochromatic regions, which was left open by previous works. @InProceedings{STOC18p100, author = {Hamed Omidvar and Massimo Franceschetti}, title = {Shape of Diffusion and Size of Monochromatic Region of a Two-Dimensional Spin System}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {100--113}, doi = {}, year = {2018}, } |
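The process itself is easy to simulate, which is a good way to build intuition for the shape theorem. A toy sketch (ours) of one asynchronous update of the threshold dynamics on an n × n torus with an extended Moore neighborhood of a given radius:

```python
import random

def threshold_update(grid, tau, radius=1):
    """Pick a uniformly random agent; flip its +/-1 state when the fraction
    of like-state neighbors in its (2*radius+1)^2 - 1 Moore neighborhood
    falls below the intolerance threshold tau."""
    n = len(grid)
    i, j = random.randrange(n), random.randrange(n)
    like = total = 0
    for di in range(-radius, radius + 1):
        for dj in range(-radius, radius + 1):
            if (di, dj) == (0, 0):
                continue
            total += 1
            if grid[(i + di) % n][(j + dj) % n] == grid[i][j]:
                like += 1
    if like / total < tau:
        grid[i][j] = -grid[i][j]
```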
|
Franks, Cole |
STOC '18: "Operator Scaling with Specified ..."
Operator Scaling with Specified Marginals
Cole Franks (Rutgers University, USA) The completely positive maps, a generalization of the nonnegative matrices, are a well-studied class of maps from n × n matrices to m × m matrices. The existence of the operator analogues of doubly stochastic scalings of matrices, the study of which is known as operator scaling, is equivalent to a multitude of problems in computer science and mathematics, such as rational identity testing in non-commuting variables, the noncommutative rank of symbolic matrices, and a basic problem in invariant theory (Garg et al., 2016). We study operator scaling with specified marginals, which is the operator analogue of scaling matrices to specified row and column sums (or marginals). We characterize the operators which can be scaled to given marginals, much in the spirit of Gurvits’ algorithmic characterization of the operators that can be scaled to doubly stochastic (Gurvits, 2004). Our algorithm, which is a modified version of Gurvits’ algorithm, produces approximate scalings in time poly(n, m) whenever scalings exist. A central ingredient in our analysis is a reduction from operator scaling with specified marginals to operator scaling in the doubly stochastic setting. Instances of operator scaling with specified marginals arise in diverse areas of study such as the Brascamp-Lieb inequalities, communication complexity, eigenvalues of sums of Hermitian matrices, and quantum information theory. Some of the known theorems in these areas, several of which had no algorithmic proof, are straightforward consequences of our characterization theorem. For instance, we obtain a simple algorithm to find, when it exists, a tuple of Hermitian matrices with given spectra whose sum has a given spectrum. We also prove new theorems such as a generalization of Forster’s theorem (Forster, 2002) concerning radial isotropic position. @InProceedings{STOC18p190, author = {Cole Franks}, title = {Operator Scaling with Specified Marginals}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {190--203}, doi = {}, year = {2018}, } |
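The commutative special case the abstract alludes to, scaling a nonnegative matrix to specified row and column sums, is the classical Sinkhorn iteration, sketched below (our illustration; the paper's algorithm plays the analogous alternating game with completely positive maps). The sketch assumes an entrywise-positive A and sum(r) = sum(c).

```python
import numpy as np

def sinkhorn_to_marginals(A, r, c, iters=500):
    """Alternately rescale rows then columns of a positive matrix so that
    row sums approach r and column sums approach c (needs sum(r)=sum(c))."""
    A = A.astype(float).copy()
    for _ in range(iters):
        A *= (r / A.sum(axis=1))[:, None]   # fix row sums exactly
        A *= (c / A.sum(axis=0))[None, :]   # fix column sums exactly
    return A

# Example: scale a random positive 3x3 matrix to doubly stochastic marginals.
out = sinkhorn_to_marginals(np.random.rand(3, 3) + 0.1, np.ones(3), np.ones(3))
print(out.sum(axis=0), out.sum(axis=1))
```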
|
Galanis, Andreas |
STOC '18: "Inapproximability of the Independent ..."
Inapproximability of the Independent Set Polynomial in the Complex Plane
Ivona Bezáková, Andreas Galanis, Leslie Ann Goldberg, and Daniel Štefankovič (Rochester Institute of Technology, USA; University of Oxford, UK; University of Rochester, USA) We study the complexity of approximating the value of the independent set polynomial ZG(λ) of a graph G with maximum degree Δ when the activity λ is a complex number. When λ is real, the complexity picture is well-understood, and is captured by two real-valued thresholds λ* and λc, which depend on Δ and satisfy 0<λ*<λc. It is known that if λ is a real number in the interval (−λ*,λc) then there is an FPTAS for approximating ZG(λ) on graphs G with maximum degree at most Δ. On the other hand, if λ is a real number outside of the (closed) interval, then approximation is NP-hard. The key to establishing this picture was the interpretation of the thresholds λ* and λc on the Δ-regular tree. The ”occupation ratio” of a Δ-regular tree T is the contribution to ZT(λ) from independent sets containing the root of the tree, divided by ZT(λ) itself. This occupation ratio converges to a limit, as the height of the tree grows, if and only if λ∈ [−λ*,λc]. Unsurprisingly, the case where λ is complex is more challenging. It is known that there is an FPTAS when λ is a complex number with norm at most λ* and also when λ is in a small strip surrounding the real interval [0,λc). However, neither of these results is believed to fully capture the truth about when approximation is possible. Peters and Regts identified the values of λ for which the occupation ratio of the Δ-regular tree converges. These values carve a cardioid-shaped region ΛΔ in the complex plane, whose boundary includes the critical points −λ* and λc. Motivated by the picture in the real case, they asked whether ΛΔ marks the true approximability threshold for general complex values λ. Our main result shows that for every λ outside of ΛΔ, the problem of approximating ZG(λ) on graphs G with maximum degree at most Δ is indeed NP-hard. In fact, when λ is outside of ΛΔ and is not a positive real number, we give the stronger result that approximating ZG(λ) is actually #P-hard. Further, on the negative real axis, when λ<−λ*, we show that it is #P-hard to even decide whether ZG(λ)>0, resolving in the affirmative a conjecture of Harvey, Srivastava and Vondrak. Our proof techniques are based around tools from complex analysis — specifically the study of iterative multivariate rational maps. @InProceedings{STOC18p1234, author = {Ivona Bezáková and Andreas Galanis and Leslie Ann Goldberg and Daniel Štefankovič}, title = {Inapproximability of the Independent Set Polynomial in the Complex Plane}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1234--1240}, doi = {}, year = {2018}, } |
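The object whose approximability is at issue is simple to evaluate exactly on small graphs, which is a useful way to see the cardioid phenomenon experimentally. A brute-force sketch (ours; exponential in n, for intuition only):

```python
from itertools import combinations

def independent_set_poly(n, edges, lam):
    """Evaluate Z_G(lam) = sum over independent sets S of lam^|S| by
    enumeration; lam may be complex, matching the paper's setting."""
    edge_set = {frozenset(e) for e in edges}
    total = 0j
    for size in range(n + 1):
        for S in combinations(range(n), size):
            if all(frozenset(p) not in edge_set for p in combinations(S, 2)):
                total += lam ** size
    return total

# Example: the 4-cycle evaluated at a complex activity.
print(independent_set_poly(4, [(0, 1), (1, 2), (2, 3), (3, 0)], -0.3 + 0.1j))
```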
|
Garg, Ankit |
STOC '18: "A Matrix Expander Chernoff ..."
A Matrix Expander Chernoff Bound
Ankit Garg, Yin Tat Lee, Zhao Song, and Nikhil Srivastava (Microsoft Research, USA; University of Washington, USA; Harvard University, USA; University of Texas at Austin, USA; University of California at Berkeley, USA) We prove a Chernoff-type bound for sums of matrix-valued random variables sampled via a random walk on an expander, confirming a conjecture due to [Wigderson and Xiao 06]. Our proof is based on a new multi-matrix extension of the Golden-Thompson inequality which improves upon the inequality in [Sutter, Berta and Tomamichel 17], as well as an adaptation of an argument for the scalar case due to [Healy 08]. Our new multi-matrix Golden-Thompson inequality could be of independent interest. Secondarily, we also provide a generic reduction showing that any concentration inequality for vector-valued martingales implies a concentration inequality for the corresponding expander walk, with a weakening of parameters proportional to the squared mixing time. @InProceedings{STOC18p1102, author = {Ankit Garg and Yin Tat Lee and Zhao Song and Nikhil Srivastava}, title = {A Matrix Expander Chernoff Bound}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1102--1114}, doi = {}, year = {2018}, } STOC '18: "Operator Scaling via Geodesically ..." Operator Scaling via Geodesically Convex Optimization, Invariant Theory and Polynomial Identity Testing Zeyuan Allen-Zhu, Ankit Garg, Yuanzhi Li, Rafael Oliveira, and Avi Wigderson (Microsoft Research, USA; Princeton University, USA; University of Toronto, Canada; Institute for Advanced Study at Princeton, USA) We propose a new second-order method for geodesically convex optimization on the natural hyperbolic metric over positive definite matrices. We apply it to solve the operator scaling problem in time polynomial in the input size and logarithmic in the error. This is an exponential improvement over previous algorithms which were analyzed in the usual Euclidean, "commutative" metric (for which the above problem is not convex). Our method is general and applicable to other settings. As a consequence, we solve the equivalence problem for the left-right group action underlying the operator scaling problem. This yields a deterministic polynomial-time algorithm for a new class of Polynomial Identity Testing (PIT) problems, which was the original motivation for studying operator scaling. @InProceedings{STOC18p172, author = {Zeyuan Allen-Zhu and Ankit Garg and Yuanzhi Li and Rafael Oliveira and Avi Wigderson}, title = {Operator Scaling via Geodesically Convex Optimization, Invariant Theory and Polynomial Identity Testing}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {172--181}, doi = {}, year = {2018}, } STOC '18: "Monotone Circuit Lower Bounds ..." Monotone Circuit Lower Bounds from Resolution Ankit Garg, Mika Göös, Pritish Kamath, and Dmitry Sokolov (Microsoft Research, USA; Harvard University, USA; Massachusetts Institute of Technology, USA; KTH, Sweden) For any unsatisfiable CNF formula F that is hard to refute in the Resolution proof system, we show that a gadget-composed version of F is hard to refute in any proof system whose lines are computed by efficient communication protocols—or, equivalently, that a monotone function associated with F has large monotone circuit complexity. Our result extends to monotone real circuits, which yields new lower bounds for the Cutting Planes proof system. 
@InProceedings{STOC18p902, author = {Ankit Garg and Mika Göös and Pritish Kamath and Dmitry Sokolov}, title = {Monotone Circuit Lower Bounds from Resolution}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {902--911}, doi = {}, year = {2018}, } |
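The matrix expander Chernoff bound is about replacing independent matrix samples with correlated ones read off a single expander walk. A small sketch (ours) of the estimator the bound controls; P is assumed to be the transition matrix of a graph with a good spectral gap, and f maps states to symmetric matrices.

```python
import numpy as np

def expander_walk_average(P, f, steps, rng=np.random.default_rng(0)):
    """Average f over one random walk of the given length; the matrix
    expander Chernoff bound controls the spectral deviation of this
    average from the true mean, with a loss depending on P's spectral gap."""
    n = P.shape[0]
    state = rng.integers(n)
    acc = np.zeros_like(f(0), dtype=float)
    for _ in range(steps):
        acc += f(state)
        state = rng.choice(n, p=P[state])   # one step of the walk
    return acc / steps
```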
|
Garg, Shashwat |
STOC '18: "The Gram-Schmidt Walk: A Cure ..."
The Gram-Schmidt Walk: A Cure for the Banaszczyk Blues
Nikhil Bansal, Daniel Dadush, Shashwat Garg, and Shachar Lovett (Eindhoven University of Technology, Netherlands; CWI, Netherlands; University of California at San Diego, USA) An important result in discrepancy due to Banaszczyk states that for any set of n vectors in ℝ^m of ℓ_2 norm at most 1 and any convex body K in ℝ^m of Gaussian measure at least half, there exists a ±1 combination of these vectors which lies in 5K. This result implies the best known bounds for several problems in discrepancy. Banaszczyk’s proof of this result is non-constructive, and an open problem has been to give an efficient algorithm to find such a ±1 combination of the vectors. In this paper, we resolve this question and give an efficient randomized algorithm to find a ±1 combination of the vectors which lies in cK for c > 0 an absolute constant. This leads to new efficient algorithms for several problems in discrepancy theory. @InProceedings{STOC18p587, author = {Nikhil Bansal and Daniel Dadush and Shashwat Garg and Shachar Lovett}, title = {The Gram-Schmidt Walk: A Cure for the Banaszczyk Blues}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {587--597}, doi = {}, year = {2018}, } |
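As a baseline for the vector balancing problem the theorem addresses, the sketch below (ours, not the Gram-Schmidt walk) just tries random ±1 signings and reports the best ℓ∞ norm found; random signings incur an extra factor that grows with the number of vectors, while the paper's algorithm constructively recovers Banaszczyk's guarantee for every convex body of Gaussian measure at least half.

```python
import numpy as np

def best_random_signing(V, trials=200, rng=np.random.default_rng(1)):
    """Try random +/-1 signings of the columns of V (each of l2 norm <= 1)
    and return the smallest l-infinity norm of V @ signs seen; a naive
    baseline for the vector balancing problem."""
    m, n = V.shape
    best = np.inf
    for _ in range(trials):
        signs = rng.choice([-1.0, 1.0], size=n)
        best = min(best, float(np.abs(V @ signs).max()))
    return best
```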
|
Garg, Sumegha |
STOC '18: "Extractor-Based Time-Space ..."
Extractor-Based Time-Space Lower Bounds for Learning
Sumegha Garg, Ran Raz, and Avishay Tal (Princeton University, USA; Stanford University, USA) A matrix M: A × X → {−1,1} corresponds to the following learning problem: An unknown element x ∈ X is chosen uniformly at random. A learner tries to learn x from a stream of samples, (a_1, b_1), (a_2, b_2) …, where for every i, a_i ∈ A is chosen uniformly at random and b_i = M(a_i,x). Assume that k, l, r are such that any submatrix of M of at least 2^{−k} · |A| rows and at least 2^{−l} · |X| columns has a bias of at most 2^{−r}. We show that any learning algorithm for the learning problem corresponding to M requires either a memory of size at least Ω(k · l), or at least 2^{Ω(r)} samples. The result holds even if the learner has an exponentially small success probability (of 2^{−Ω(r)}). In particular, this shows that for a large class of learning problems, any learning algorithm requires either a memory of size at least Ω((log|X|) · (log|A|)) or an exponential number of samples, achieving a tight Ω((log|X|) · (log|A|)) lower bound on the size of the memory, rather than the bound of Ω(min{(log|X|)^2,(log|A|)^2}) obtained in previous works by Raz [FOCS’17] and Moshkovitz and Moshkovitz [ITCS’18]. Moreover, our result implies all previous memory-samples lower bounds, as well as a number of new applications. Our proof builds on the work of Raz [FOCS’17] that gave a general technique for proving memory-samples lower bounds. @InProceedings{STOC18p990, author = {Sumegha Garg and Ran Raz and Avishay Tal}, title = {Extractor-Based Time-Space Lower Bounds for Learning}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {990--1002}, doi = {}, year = {2018}, } STOC '18: "Hitting Sets with Near-Optimal ..." Hitting Sets with Near-Optimal Error for Read-Once Branching Programs Mark Braverman, Gil Cohen, and Sumegha Garg (Princeton University, USA) Nisan (Combinatorica’92) constructed a pseudorandom generator for length n, width n read-once branching programs (ROBPs) with error ε and seed length O(log^2 n + log n · log(1/ε)). A major goal in complexity theory is to reduce the seed length, hopefully, to the optimal O(log n + log(1/ε)), or to construct improved hitting sets, as these would yield stronger derandomization of BPL and RL, respectively. In contrast to a successful line of work in restricted settings, no progress has been made for general, unrestricted ROBPs. Indeed, Nisan’s construction is the best pseudorandom generator and, prior to this work, also the best hitting set for unrestricted ROBPs. In this work, we make the first improvement for the general case by constructing a hitting set with seed length O(log^2 n + log(1/ε)). That is, we decouple ε and n, and obtain near-optimal dependence on the former. The regime of parameters in which our construction strictly improves upon prior works, namely, log(1/ε) ≫ log n, is well-motivated by the work of Saks and Zhou (J.CSS’99) who use pseudorandom generators with error ε = 2^{−(log n)^2} in their proof for BPL ⊆ L^{3/2}. We further suggest a research program towards proving that BPL ⊆ L^{4/3} in which our result achieves one step. As our main technical tool, we introduce and construct a new type of primitive we call pseudorandom pseudo-distributions. Informally, this is a generalization of pseudorandom generators in which one may assign negative and unbounded weights to paths as opposed to working with probability distributions. We show that such a primitive yields hitting sets and, for derandomization purposes, can be used to derandomize two-sided error algorithms. 
@InProceedings{STOC18p353, author = {Mark Braverman and Gil Cohen and Sumegha Garg}, title = {Hitting Sets with Near-Optimal Error for Read-Once Branching Programs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {353--362}, doi = {}, year = {2018}, } |
|
Ghaffari, Mohsen |
STOC '18: "Improved Distributed Algorithms ..."
Improved Distributed Algorithms for Exact Shortest Paths
Mohsen Ghaffari and Jason Li (ETH Zurich, Switzerland; Carnegie Mellon University, USA) Computing shortest paths is one of the central problems in the theory of distributed computing. For the last few years, substantial progress has been made on the approximate single source shortest paths problem, culminating in an algorithm of Henzinger, Krinninger, and Nanongkai [STOC’16] which deterministically computes (1+o(1))-approximate shortest paths in Õ(D+√n) time, where D is the hop-diameter of the graph. Up to logarithmic factors, this time complexity is optimal, matching the lower bound of Elkin [STOC’04]. The question of exact shortest paths however saw no algorithmic progress for decades, until the recent breakthrough of Elkin [STOC’17], which established a sublinear-time algorithm for exact single source shortest paths on undirected graphs. Shortly after, Huang et al. [FOCS’17] provided improved algorithms for the exact all pairs shortest paths problem on directed graphs. In this paper, we provide an alternative single-source shortest path algorithm with complexity Õ(n^{3/4} D^{1/4}). For polylogarithmic D, this improves on Elkin’s Õ(n^{5/6}) bound and gets closer to the Ω(n^{1/2}) lower bound of Elkin [STOC’04]. For larger values of D, we present an improved variant of our algorithm which achieves complexity Õ(max{n^{3/4+o(1)}, n^{3/4}D^{1/6}} + D), and thus compares favorably with Elkin’s bound of Õ(max{n^{5/6}, n^{2/3}D^{1/3}} + D) in essentially the entire range of parameters. This algorithm also provides a qualitative improvement, because it works for the more challenging case of directed graphs (i.e., graphs where the two directions of an edge can have different weights), constituting the first sublinear-time algorithm for directed graphs. Our algorithm also extends to the case of exact r-source shortest paths, in which we provide the fastest algorithm for moderately small r and D, improving on those of Huang et al. @InProceedings{STOC18p431, author = {Mohsen Ghaffari and Jason Li}, title = {Improved Distributed Algorithms for Exact Shortest Paths}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {431--444}, doi = {}, year = {2018}, } STOC '18: "Deterministic Distributed ..." Deterministic Distributed Edge-Coloring with Fewer Colors Mohsen Ghaffari, Fabian Kuhn, Yannic Maus, and Jara Uitto (ETH Zurich, Switzerland; University of Freiburg, Germany) We present a deterministic distributed algorithm, in the LOCAL model, that computes a (1+o(1))Δ-edge-coloring in polylogarithmic time, so long as the maximum degree Δ = Ω(log n). For smaller Δ, we give a polylogarithmic-time 3Δ/2-edge-coloring. These are the first deterministic algorithms to go below the natural barrier of 2Δ−1 colors, and they improve significantly on the recent polylogarithmic-time (2Δ−1)(1+o(1))-edge-coloring of Ghaffari and Su [SODA’17] and the (2Δ−1)-edge-coloring of Fischer, Ghaffari, and Kuhn [FOCS’17], positively answering the main open question of the latter. The key technical ingredient of our algorithm is a simple and novel gradual packing of judiciously chosen near-maximum matchings, each of which becomes one of the color classes. @InProceedings{STOC18p418, author = {Mohsen Ghaffari and Fabian Kuhn and Yannic Maus and Jara Uitto}, title = {Deterministic Distributed Edge-Coloring with Fewer Colors}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {418--430}, doi = {}, year = {2018}, } |
|
Gharan, Shayan Oveis |
STOC '18: "A Simply Exponential Upper ..."
A Simply Exponential Upper Bound on the Maximum Number of Stable Matchings
Anna R. Karlin, Shayan Oveis Gharan, and Robbie Weber (University of Washington, USA) Stable matching is a classical combinatorial problem that has been the subject of intense theoretical and empirical study since its introduction in 1962 in a seminal paper by Gale and Shapley. In this paper, we provide a new upper bound on f(n), the maximum number of stable matchings that a stable matching instance with n men and n women can have. It has been a long-standing open problem to understand the asymptotic behavior of f(n) as n→∞, first posed by Donald Knuth in the 1970s. Until now the best lower bound was approximately 2.28^n, and the best upper bound was 2^{n log n − O(n)}. In this paper, we show that for all n, f(n) ≤ c^n for some universal constant c. This matches the lower bound up to the base of the exponent. Our proof is based on a reduction to counting the number of downsets of a family of posets that we call “mixing”. The latter might be of independent interest. @InProceedings{STOC18p920, author = {Anna R. Karlin and Shayan Oveis Gharan and Robbie Weber}, title = {A Simply Exponential Upper Bound on the Maximum Number of Stable Matchings}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {920--925}, doi = {}, year = {2018}, } |
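Since f(n) is finite, small cases can be checked directly. The brute-force sketch below (plain Python, illustrative helper names) counts stable matchings by enumerating all n! perfect matchings; on the cyclic 3×3 instance shown it prints 3, the known value of f(3).

```python
from itertools import permutations

def is_stable(match, men_pref, women_pref):
    # match[m] = woman assigned to man m
    n = len(match)
    rank_m = [{w: r for r, w in enumerate(p)} for p in men_pref]
    rank_w = [{m: r for r, m in enumerate(p)} for p in women_pref]
    for m in range(n):
        for w in range(n):
            if match[m] == w:
                continue
            m2 = match.index(w)   # w's current partner
            # (m, w) blocks if both strictly prefer each other
            if rank_m[m][w] < rank_m[m][match[m]] and rank_w[w][m] < rank_w[w][m2]:
                return False
    return True

def count_stable(men_pref, women_pref):
    n = len(men_pref)
    return sum(is_stable(list(p), men_pref, women_pref)
               for p in permutations(range(n)))

# cyclic 3x3 instance: brute force over all 3! matchings finds 3 stable ones
men   = [[0, 1, 2], [1, 2, 0], [2, 0, 1]]
women = [[1, 2, 0], [2, 0, 1], [0, 1, 2]]
print(count_stable(men, women))   # 3
```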
|
Ghosh, Sumanta |
STOC '18: "Bootstrapping Variables in ..."
Bootstrapping Variables in Algebraic Circuits
Manindra Agrawal, Sumanta Ghosh, and Nitin Saxena (IIT Kanpur, India) We show that for the blackbox polynomial identity testing (PIT) problem it suffices to study circuits that depend only on the first extremely few variables. One only needs to consider size-s degree-s circuits that depend on the first log^{∘c} s variables (where c is a constant and we are composing c logarithms). Thus, a hitting-set generator (hsg) manifests a bootstrapping behavior: a partial hsg against very few variables can be efficiently grown to a complete hsg. A boolean analog, or a pseudorandom generator property of this type, is unheard-of. Our idea is to use the partial hsg and its annihilator polynomial to efficiently bootstrap the hsg exponentially with respect to the number of variables. This is repeated c times in an efficient way. Pushing the envelope further we show that: (1) a quadratic-time blackbox PIT for 6913-variate degree-s size-s polynomials will lead to a “near”-complete derandomization of PIT, and (2) a blackbox PIT for n-variate degree-s size-s circuits in s^{n^δ} time, for δ<1/2, will lead to a “near”-complete derandomization of PIT (in contrast, s^n time is trivial). Our second idea is to study depth-4 circuits that depend on constantly many variables. We show that a polynomial-time computable, O(s^{1.49})-degree hsg for trivariate depth-4 circuits bootstraps to a quasipolynomial-time hsg for general poly-degree circuits, and implies a lower bound that is a bit stronger than that of Kabanets and Impagliazzo (STOC 2003). @InProceedings{STOC18p1166, author = {Manindra Agrawal and Sumanta Ghosh and Nitin Saxena}, title = {Bootstrapping Variables in Algebraic Circuits}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1166--1179}, doi = {}, year = {2018}, } |
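For contrast with the derandomization goal above, the randomized baseline is Schwartz-Zippel: evaluate the blackbox at random points of a large field. A minimal sketch in plain Python (the two lambda polynomials are stand-ins for an arbitrary size-s circuit):

```python
import random

def blackbox_pit(poly, n_vars, degree, field_size=2**61 - 1, trials=20):
    """Randomized blackbox PIT via random evaluation.

    By the Schwartz-Zippel lemma, a nonzero polynomial of total degree d
    vanishes at a uniform point of S^n with probability at most d/|S|.
    """
    for _ in range(trials):
        point = [random.randrange(field_size) for _ in range(n_vars)]
        if poly(point) % field_size != 0:
            return False          # witnessed a nonzero value
    return True                   # plausibly the zero polynomial

# (x+y)^2 - x^2 - 2xy - y^2 is identically zero; (x+y)^2 - x^2 - y^2 is not
z  = lambda p: (p[0] + p[1])**2 - p[0]**2 - 2*p[0]*p[1] - p[1]**2
nz = lambda p: (p[0] + p[1])**2 - p[0]**2 - p[1]**2
print(blackbox_pit(z, 2, 2), blackbox_pit(nz, 2, 2))   # True False
```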
|
Gishboliner, Lior |
STOC '18: "A Generalized Turán Problem ..."
A Generalized Turán Problem and Its Applications
Lior Gishboliner and Asaf Shapira (Tel Aviv University, Israel) Our first theorem in this paper is a hierarchy theorem for the query complexity of testing graph properties with 1-sided error; more precisely, we show that for every sufficiently fast-growing function f, there is a graph property whose 1-sided-error query complexity is precisely f(Θ(1/ε)). No result of this type was previously known for any f which is super-polynomial. Goldreich [ECCC 2005] asked to exhibit a graph property whose query complexity is 2^{Θ(1/ε)}. Our hierarchy theorem partially resolves this problem by exhibiting a property whose 1-sided-error query complexity is 2^{Θ(1/ε)}. We also use our hierarchy theorem in order to resolve a problem raised by the second author and Alon [STOC 2005] regarding testing relaxed versions of bipartiteness. Our second theorem states that for any function f there is a graph property whose 1-sided-error query complexity is f(Θ(1/ε)) while its 2-sided-error query complexity is only poly(1/ε). This is the first indication of the surprising power that 2-sided-error testing algorithms have over 1-sided-error ones, even when restricted to properties that are testable with 1-sided error. Again, no result of this type was previously known for any f that is super-polynomial. The above theorems are derived from a graph theoretic result which we think is of independent interest, and might have further applications. Alon and Shikhelman [JCTB 2016] introduced the following generalized Turán problem: for fixed graphs H and T, and an integer n, what is the maximum number of copies of T, denoted by ex(n,T,H), that can appear in an n-vertex H-free graph? This problem received a lot of attention recently, with an emphasis on ex(n, C_3, C_{2ℓ+1}). Our third theorem in this paper gives tight bounds for ex(n, C_k, C_ℓ) for all the remaining values of k and ℓ. @InProceedings{STOC18p760, author = {Lior Gishboliner and Asaf Shapira}, title = {A Generalized Turán Problem and Its Applications}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {760--772}, doi = {}, year = {2018}, } |
|
Goldberg, Leslie Ann |
STOC '18: "Inapproximability of the Independent ..."
Inapproximability of the Independent Set Polynomial in the Complex Plane
Ivona Bezáková, Andreas Galanis, Leslie Ann Goldberg, and Daniel Štefankovič (Rochester Institute of Technology, USA; University of Oxford, UK; University of Rochester, USA) We study the complexity of approximating the value of the independent set polynomial ZG(λ) of a graph G with maximum degree Δ when the activity λ is a complex number. When λ is real, the complexity picture is well-understood, and is captured by two real-valued thresholds λ* and λc, which depend on Δ and satisfy 0<λ*<λc. It is known that if λ is a real number in the interval (−λ*,λc) then there is an FPTAS for approximating ZG(λ) on graphs G with maximum degree at most Δ. On the other hand, if λ is a real number outside of the (closed) interval, then approximation is NP-hard. The key to establishing this picture was the interpretation of the thresholds λ* and λc on the Δ-regular tree. The ”occupation ratio” of a Δ-regular tree T is the contribution to ZT(λ) from independent sets containing the root of the tree, divided by ZT(λ) itself. This occupation ratio converges to a limit, as the height of the tree grows, if and only if λ∈ [−λ*,λc]. Unsurprisingly, the case where λ is complex is more challenging. It is known that there is an FPTAS when λ is a complex number with norm at most λ* and also when λ is in a small strip surrounding the real interval [0,λc). However, neither of these results is believed to fully capture the truth about when approximation is possible. Peters and Regts identified the values of λ for which the occupation ratio of the Δ-regular tree converges. These values carve a cardioid-shaped region ΛΔ in the complex plane, whose boundary includes the critical points −λ* and λc. Motivated by the picture in the real case, they asked whether ΛΔ marks the true approximability threshold for general complex values λ. Our main result shows that for every λ outside of ΛΔ, the problem of approximating ZG(λ) on graphs G with maximum degree at most Δ is indeed NP-hard. In fact, when λ is outside of ΛΔ and is not a positive real number, we give the stronger result that approximating ZG(λ) is actually #P-hard. Further, on the negative real axis, when λ<−λ*, we show that it is #P-hard to even decide whether ZG(λ)>0, resolving in the affirmative a conjecture of Harvey, Srivastava and Vondrak. Our proof techniques are based around tools from complex analysis — specifically the study of iterative multivariate rational maps. @InProceedings{STOC18p1234, author = {Ivona Bezáková and Andreas Galanis and Leslie Ann Goldberg and Daniel Štefankovič}, title = {Inapproximability of the Independent Set Polynomial in the Complex Plane}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1234--1240}, doi = {}, year = {2018}, } |
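For tiny graphs one can evaluate Z_G(λ) exactly by enumerating independent sets, which helps when visualizing the region the paper carves out; λ may be any complex number. A brute-force sketch (exponential in n, illustration only):

```python
from itertools import combinations

def Z(n, edges, lam):
    """Independence polynomial Z_G(lam) = sum over independent sets I of lam^|I|."""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    total = 0
    for mask in range(1 << n):
        verts = [i for i in range(n) if mask >> i & 1]
        if all(v not in adj[u] for u, v in combinations(verts, 2)):
            total += lam ** len(verts)
    return total

cycle4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(Z(4, cycle4, 1.0))           # 7.0: empty set, 4 singletons, 2 diagonals
print(Z(4, cycle4, -0.3 + 0.2j))   # complex activities are fine too
```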
|
Goldberg, Paul W. |
STOC '18: "Consensus Halving Is PPA-Complete ..."
Consensus Halving Is PPA-Complete
Aris Filos-Ratsikas and Paul W. Goldberg (University of Oxford, UK) We show that the computational problem Consensus Halving is PPA-Complete, the first PPA-Completeness result for a problem whose definition does not involve an explicit circuit. We also show that an approximate version of this problem is polynomial-time equivalent to Necklace Splitting, which establishes PPAD-hardness for Necklace Splitting and suggests that it is also PPA-Complete. @InProceedings{STOC18p51, author = {Aris Filos-Ratsikas and Paul W. Goldberg}, title = {Consensus Halving Is PPA-Complete}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {51--64}, doi = {}, year = {2018}, } |
|
Gonczarowski, Yannai A. |
STOC '18: "Bounding the Menu-Size of ..."
Bounding the Menu-Size of Approximately Optimal Auctions via Optimal-Transport Duality
Yannai A. Gonczarowski (Hebrew University of Jerusalem, Israel; Microsoft Research, Israel) The question of the minimum menu-size for approximate (i.e., up-to-ε) Bayesian revenue maximization when selling two goods to an additive risk-neutral quasilinear buyer was introduced by Hart and Nisan [2013], who give an upper bound of O(1/ε^4) for this problem. Using the optimal-transport duality framework of Daskalakis, Deckelbaum, and Tzamos [2013, 2015], we derive the first lower bound for this problem — of Ω(1/∜ε), even when the values for the two goods are drawn i.i.d. from ”nice” distributions, establishing how to reason about approximately optimal mechanisms via this duality framework. This bound implies, for any fixed number of goods, a tight bound of Θ(log(1/ε)) on the minimum deterministic communication complexity guaranteed to suffice for running some approximately revenue-maximizing mechanism, thereby completely resolving this problem. As a secondary result, we show that under standard economic assumptions on distributions, the above upper bound of Hart and Nisan [2013] can be strengthened to O(1/ε^2). @InProceedings{STOC18p123, author = {Yannai A. Gonczarowski}, title = {Bounding the Menu-Size of Approximately Optimal Auctions via Optimal-Transport Duality}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {123--131}, doi = {}, year = {2018}, } |
|
Göös, Mika |
STOC '18: "Monotone Circuit Lower Bounds ..."
Monotone Circuit Lower Bounds from Resolution
Ankit Garg, Mika Göös, Pritish Kamath, and Dmitry Sokolov (Microsoft Research, USA; Harvard University, USA; Massachusetts Institute of Technology, USA; KTH, Sweden) For any unsatisfiable CNF formula F that is hard to refute in the Resolution proof system, we show that a gadget-composed version of F is hard to refute in any proof system whose lines are computed by efficient communication protocols—or, equivalently, that a monotone function associated with F has large monotone circuit complexity. Our result extends to monotone real circuits, which yields new lower bounds for the Cutting Planes proof system. @InProceedings{STOC18p902, author = {Ankit Garg and Mika Göös and Pritish Kamath and Dmitry Sokolov}, title = {Monotone Circuit Lower Bounds from Resolution}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {902--911}, doi = {}, year = {2018}, } |
|
Goyal, Rishab |
STOC '18: "Collusion Resistant Traitor ..."
Collusion Resistant Traitor Tracing from Learning with Errors
Rishab Goyal, Venkata Koppula, and Brent Waters (University of Texas at Austin, USA) In this work we provide a traitor tracing construction with ciphertexts that grow polynomially in log(n), where n is the number of users, and prove it secure under the Learning with Errors (LWE) assumption. This is the first traitor tracing scheme with such parameters provably secure from a standard assumption. In addition to achieving new traitor tracing results, we believe our techniques push forward the broader area of computing on encrypted data under standard assumptions. Notably, traitor tracing is a substantially different problem from other cryptographic primitives that have seen recent progress in LWE solutions. We achieve our results by first conceiving a novel approach to building traitor tracing that starts with a new form of Functional Encryption that we call Mixed FE. In a Mixed FE system the encryption algorithm is bimodal and works with either a public key or master secret key. Ciphertexts encrypted using the public key can only encrypt one type of functionality. On the other hand, the secret key encryption can be used to encode many different types of programs, but is only secure as long as the attacker sees a bounded number of such ciphertexts. We first show how to combine Mixed FE with Attribute-Based Encryption to achieve traitor tracing. Second we build Mixed FE systems for polynomial sized branching programs (which corresponds to the complexity class logspace) by relying on the polynomial hardness of the LWE assumption with super-polynomial modulus-to-noise ratio. @InProceedings{STOC18p660, author = {Rishab Goyal and Venkata Koppula and Brent Waters}, title = {Collusion Resistant Traitor Tracing from Learning with Errors}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {660--670}, doi = {}, year = {2018}, } |
|
Goyal, Vipul |
STOC '18: "Non-malleable Secret Sharing ..."
Non-malleable Secret Sharing
Vipul Goyal and Ashutosh Kumar (Carnegie Mellon University, USA; University of California at Los Angeles, USA) A number of works have focused on the setting where an adversary tampers with the shares of a secret sharing scheme. This includes literature on verifiable secret sharing, algebraic manipulation detection (AMD) codes, and error-correcting or detecting codes in general. In this work, we initiate a systematic study of what we call non-malleable secret sharing. Very roughly, the guarantee we seek is the following: the adversary may potentially tamper with all of the shares, and still, either the reconstruction procedure outputs the original secret, or, the original secret is “destroyed” and the reconstruction outputs a string which is completely “unrelated” to the original secret. Recent exciting work on non-malleable codes in the split-state model led to constructions which can be seen as 2-out-of-2 non-malleable secret sharing schemes. These constructions have already found a number of applications in cryptography. We investigate the natural question of constructing t-out-of-n non-malleable secret sharing schemes. Such a secret sharing scheme ensures that only a set consisting of t or more shares can reconstruct the secret, and, additionally, guarantees non-malleability under an attack where potentially every share may be tampered with. Techniques used for obtaining split-state non-malleable codes (or 2-out-of-2 non-malleable secret sharing) are (in some form) based on two-source extractors and seem not to generalize to our setting. Our first result is the construction of a t-out-of-n non-malleable secret sharing scheme against an adversary who arbitrarily tampers each of the shares independently. Our construction is unconditional and features statistical non-malleability. As our main technical result, we present a t-out-of-n non-malleable secret sharing scheme in a stronger adversarial model where an adversary may jointly tamper multiple shares. Our construction is unconditional and the adversary is allowed to jointly tamper subsets of up to (t−1) shares. We believe that the techniques introduced in our construction may be of independent interest. Inspired by the well-studied problem of perfectly secure message transmission introduced in the seminal work of Dolev et al. (J. of ACM ’93), we also initiate the study of non-malleable message transmission. Non-malleable message transmission can be seen as a natural generalization in which the goal is to ensure that the receiver either receives the original message, or, the original message is essentially destroyed and the receiver receives an “unrelated” message, when the network is under the influence of an adversary who can byzantinely corrupt all the nodes in the network. As natural applications of our non-malleable secret sharing schemes, we propose constructions for non-malleable message transmission. @InProceedings{STOC18p685, author = {Vipul Goyal and Ashutosh Kumar}, title = {Non-malleable Secret Sharing}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {685--698}, doi = {}, year = {2018}, } |
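For readers new to the area, the baseline object being hardened is plain Shamir t-out-of-n secret sharing, sketched below over the Mersenne prime 2^61 − 1 (illustrative parameters). Shamir alone is visibly malleable: tampering one share shifts the reconstructed secret by a predictable amount, which is exactly the behavior non-malleable schemes rule out.

```python
import random

P = 2**61 - 1   # prime field

def share(secret, t, n):
    """Shamir t-out-of-n: random degree-(t-1) polynomial with f(0) = secret."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    f = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at 0 from any t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(123456789, t=3, n=5)
print(reconstruct(shares[:3]))   # 123456789
# Tampering one share maps the secret to a *related* value: the shift is
# the fixed Lagrange coefficient of that share. Shamir alone gives no
# non-malleability; that guarantee is what the paper adds.
x0, y0 = shares[0]
print(reconstruct([(x0, (y0 + 1) % P)] + shares[1:3]))
```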
|
Grandoni, Fabrizio |
STOC '18: "Improved Approximation for ..."
Improved Approximation for Tree Augmentation: Saving by Rewiring
Fabrizio Grandoni, Christos Kalaitzis, and Rico Zenklusen (University of Lugano, Switzerland; ETH Zurich, Switzerland) The Tree Augmentation Problem (TAP) is a fundamental network design problem in which we are given a tree and a set of additional edges, also called links. The task is to find a set of links, of minimum size, whose addition to the tree leads to a 2-edge-connected graph. A long line of results on TAP culminated in the previously best known approximation guarantee of 1.5 achieved by a combinatorial approach due to Kortsarz and Nutov [ACM Transactions on Algorithms 2016], and also by an SDP-based approach by Cheriyan and Gao [Algorithmica 2017]. Moreover, an elegant LP-based (1.5+ε)-approximation has also been found very recently by Fiorini, Groß, Könemann, and Sanità [SODA 2018]. In this paper, we show that an approximation factor below 1.5 can be achieved, by presenting a 1.458-approximation that is based on several new techniques. By extending prior results of Adjiashvili [SODA 2017], we first present a black-box reduction to a very structured type of instance, which played a crucial role in recent development on the problem, and which we call k-wide. Our main contribution is a new approximation algorithm for O(1)-wide tree instances with approximation guarantee strictly below 1.458, based on one of their fundamental properties: wide trees naturally decompose into smaller subtrees with a constant number of leaves. Previous approaches in similar settings rounded each subtree independently and simply combined the obtained solutions. We show that additionally, when starting with a well-chosen LP, the combined solution can be improved through a new “rewiring” technique, showing that one can replace some pairs of used links by a single link. We can rephrase the rewiring problem as a stochastic version of a matching problem, which may be of independent interest. By showing that large matchings can be obtained in this problem, we obtain that a significant number of rewirings are possible, thus leading to an approximation factor below 1.5. @InProceedings{STOC18p632, author = {Fabrizio Grandoni and Christos Kalaitzis and Rico Zenklusen}, title = {Improved Approximation for Tree Augmentation: Saving by Rewiring}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {632--645}, doi = {}, year = {2018}, } STOC '18: "A (5/3 + ε)-Approximation ..." A (5/3 + ε)-Approximation for Unsplittable Flow on a Path: Placing Small Tasks into Boxes Fabrizio Grandoni, Tobias Mömke, Andreas Wiese, and Hang Zhou (University of Lugano, Switzerland; University of Bremen, Germany; Saarland University, Germany; University of Chile, Chile; École Polytechnique, France) In the unsplittable flow on a path problem (UFP) we are given a path with edge capacities and a collection of tasks. Each task is characterized by a subpath, a profit, and a demand. Our goal is to compute a maximum profit subset of tasks such that, for each edge e, the total demand of selected tasks that use e does not exceed the capacity of e. The current best polynomial-time approximation factor for this problem is 2+ε for any constant ε>0 [Anagnostopoulos et al.-SODA 2014]. This is the best known factor even in the case of uniform edge capacities [Călinescu et al.-IPCO 2002, TALG 2011]. These results, like most prior work, are based on a partition of tasks into large and small depending on their ratio of demand to capacity over their respective edges: these algorithms invoke (1+ε)-approximations for large and small tasks separately. 
The known techniques do not seem to be able to combine a big fraction of large and small tasks together (apart from some special cases and quasi-polynomial-time algorithms). The main contribution of this paper is to overcome this critical barrier. Namely, we present a polynomial-time algorithm that obtains roughly all profit from the optimal large tasks plus one third of the profit from the optimal small tasks. In combination with known results, this implies a polynomial-time (5/3+ε)-approximation algorithm for UFP. Our algorithm is based on two main ingredients. First, we prove that there exist certain sub-optimal solutions where, roughly speaking, small tasks are packed into boxes. To prove that such solutions can yield high profit we introduce a horizontal slicing lemma which yields a novel geometric interpretation of certain solutions. The resulting boxed structure has polynomial complexity, hence cannot be guessed directly. Therefore, our second contribution is a dynamic program that guesses this structure (plus a packing of large and small tasks) on the fly, while losing at most one third of the profit of the remaining small tasks. @InProceedings{STOC18p607, author = {Fabrizio Grandoni and Tobias Mömke and Andreas Wiese and Hang Zhou}, title = {A (5/3 + ε)-Approximation for Unsplittable Flow on a Path: Placing Small Tasks into Boxes}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {607--619}, doi = {}, year = {2018}, } |
|
Grospellier, Antoine |
STOC '18: "Efficient Decoding of Random ..."
Efficient Decoding of Random Errors for Quantum Expander Codes
Omar Fawzi, Antoine Grospellier, and Anthony Leverrier (ENS Lyon, France; Inria, France) We show that quantum expander codes, a constant-rate family of quantum low-density parity check (LDPC) codes, with the quasi-linear time decoding algorithm of Leverrier, Tillich and Zémor can correct a constant fraction of random errors with very high probability. This is the first construction of a constant-rate quantum LDPC code with an efficient decoding algorithm that can correct a linear number of random errors with a negligible failure probability. Finding codes with these properties is also motivated by Gottesman’s construction of fault tolerant schemes with constant space overhead. In order to obtain this result, we study a notion of α-percolation: for a random subset E of vertices of a given graph, we consider the size of the largest connected α-subset of E, where X is an α-subset of E if |X ∩ E| ≥ α |X|. @InProceedings{STOC18p521, author = {Omar Fawzi and Antoine Grospellier and Anthony Leverrier}, title = {Efficient Decoding of Random Errors for Quantum Expander Codes}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {521--534}, doi = {}, year = {2018}, } |
|
Guo, Heng |
STOC '18: "Counting Hypergraph Colourings ..."
Counting Hypergraph Colourings in the Local Lemma Regime
Heng Guo, Chao Liao, Pinyan Lu, and Chihao Zhang (University of Edinburgh, UK; Shanghai Jiao Tong University, China; Shanghai University of Finance and Economics, China; Chinese University of Hong Kong, China) We give a fully polynomial-time approximation scheme (FPTAS) to count the number of q-colorings for k-uniform hypergraphs with maximum degree Δ if k ≥ 28 and q > 315Δ^{14/(k−14)}. We also obtain a polynomial-time almost uniform sampler if q > 798Δ^{16/(k−16/3)}. These are the first approximate counting and sampling algorithms in the regime q ≪ Δ (for large Δ and k) without any additional assumptions. Our method is based on the recent work of Moitra (STOC, 2017). One important contribution of ours is to remove the dependency of k and Δ in Moitra’s approach. @InProceedings{STOC18p926, author = {Heng Guo and Chao Liao and Pinyan Lu and Chihao Zhang}, title = {Counting Hypergraph Colourings in the Local Lemma Regime}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {926--939}, doi = {}, year = {2018}, } |
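A brute-force reference implementation of the quantity being approximated, useful for checking small cases (exponential time, plain Python): a colouring is proper if no hyperedge is monochromatic.

```python
from itertools import product

def count_colourings(n, hyperedges, q):
    """Count proper q-colourings of a hypergraph: no edge is monochromatic."""
    count = 0
    for col in product(range(q), repeat=n):
        if all(len({col[v] for v in e}) > 1 for e in hyperedges):
            count += 1
    return count

# a small 3-uniform example on 5 vertices
edges = [(0, 1, 2), (1, 2, 3), (2, 3, 4)]
print(count_colourings(5, edges, q=3))
```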
|
Gupta, Anupam |
STOC '18: "Metric Embedding via Shortest ..."
Metric Embedding via Shortest Path Decompositions
Ittai Abraham, Arnold Filtser, Anupam Gupta, and Ofer Neiman (VMware, USA; Ben-Gurion University of the Negev, Israel; Carnegie Mellon University, USA) We study the problem of embedding weighted graphs of pathwidth k into ℓp spaces. Our main result is an O(k^{min{1/p, 1/2}})-distortion embedding. For p=1, this is a super-exponential improvement over the best previous bound of Lee and Sidiropoulos. Our distortion bound is asymptotically tight for any fixed p > 1. Our result is obtained via a novel embedding technique that is based on low depth decompositions of a graph via shortest paths. The core new idea is that given a geodesic shortest path P, we can probabilistically embed all points into 2 dimensions with respect to P. For p>2 our embedding also implies improved distortion on bounded treewidth graphs (O((k log n)^{1/p})). For asymptotically large p, our results also imply improved distortion on graphs excluding a minor. @InProceedings{STOC18p952, author = {Ittai Abraham and Arnold Filtser and Anupam Gupta and Ofer Neiman}, title = {Metric Embedding via Shortest Path Decompositions}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {952--963}, doi = {}, year = {2018}, } |
|
Guruswami, Venkatesan |
STOC '18: "General Strong Polarization ..."
General Strong Polarization
Jarosław Błasiok, Venkatesan Guruswami, Preetum Nakkiran, Atri Rudra, and Madhu Sudan (Harvard University, USA; Carnegie Mellon University, USA; SUNY Buffalo, USA) Arikan’s exciting discovery of polar codes has provided an altogether new way to efficiently achieve Shannon capacity. Given a (constant-sized) invertible matrix M, a family of polar codes can be associated with this matrix and its ability to approach capacity follows from the polarization of an associated [0,1]-bounded martingale, namely its convergence in the limit to either 0 or 1 with probability 1. Arikan showed appropriate polarization of the martingale associated with the matrix G_2 = (1 0; 1 1) to get capacity-achieving codes. His analysis was later extended to all matrices M which satisfy an obvious necessary condition for polarization. While Arikan’s theorem does not guarantee that the codes achieve capacity at small blocklengths (specifically in length which is a polynomial in 1/ε, where ε is the difference between the capacity of a channel and the rate of the code), it turns out that a “strong” analysis of the polarization of the underlying martingale would lead to such constructions. Indeed for the martingale associated with G_2 such a strong polarization was shown in two independent works ([Guruswami and Xia, IEEE IT ’15] and [Hassani et al., IEEE IT ’14]), thereby resolving a major theoretical challenge associated with the efficient attainment of Shannon capacity. In this work we extend the result above to cover martingales associated with all matrices that satisfy the necessary condition for (weak) polarization. In addition to being vastly more general, our proofs of strong polarization are (in our view) also much simpler and modular. Key to our proof is a notion of local polarization that only depends on the evolution of the martingale in a single time step. We show that local polarization always implies strong polarization. We then apply relatively simple reasoning about conditional entropies to prove local polarization in very general settings. Specifically, our result shows strong polarization over all prime fields and leads to efficient capacity-achieving source codes for compressing arbitrary i.i.d. sources, and capacity-achieving channel codes for arbitrary symmetric memoryless channels. @InProceedings{STOC18p485, author = {Jarosław Błasiok and Venkatesan Guruswami and Preetum Nakkiran and Atri Rudra and Madhu Sudan}, title = {General Strong Polarization}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {485--492}, doi = {}, year = {2018}, } |
|
Haddock, Jamie |
STOC '18: "The Minimum Euclidean-Norm ..."
The Minimum Euclidean-Norm Point in a Convex Polytope: Wolfe's Combinatorial Algorithm Is Exponential
Jesús A. De Loera, Jamie Haddock, and Luis Rademacher (University of California at Davis, USA) The complexity of Philip Wolfe’s method for the minimum Euclidean-norm point problem over a convex polytope has remained unknown since he proposed the method in 1974. We present the first example on which Wolfe’s method takes exponential time. Additionally, we improve previous results to show that linear programming reduces in strongly-polynomial time to the minimum norm point problem over a simplex. @InProceedings{STOC18p545, author = {Jesús A. De Loera and Jamie Haddock and Luis Rademacher}, title = {The Minimum Euclidean-Norm Point in a Convex Polytope: Wolfe's Combinatorial Algorithm Is Exponential}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {545--553}, doi = {}, year = {2018}, } |
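Wolfe's combinatorial method maintains "corrals" and solves affine subproblems; as a lightweight stand-in, the sketch below applies the first-order Frank-Wolfe method to the same min-norm-point objective (assumes NumPy; step size and iteration count are arbitrary illustrative choices). Unlike Wolfe's method it only converges approximately, but it exposes the linear-minimization-oracle structure of the problem.

```python
import numpy as np

def min_norm_point_fw(P, iters=2000):
    """Frank-Wolfe for min ||x||^2 over conv(rows of P).

    Each step moves toward the vertex minimizing <grad, v> = <2x, v>,
    with the standard 2/(k+2) step size. First-order and slow, in
    contrast to Wolfe's combinatorial method, which terminates exactly
    (and, per this paper, can take exponentially many steps).
    """
    x = P[0].astype(float)
    for k in range(iters):
        v = P[np.argmin(P @ x)]          # linear minimization oracle
        x += 2.0 / (k + 2) * (v - x)
    return x

pts = np.array([[1.0, 2.0], [2.0, -1.0], [3.0, 1.0]])
x = min_norm_point_fw(pts)
print(x, np.linalg.norm(x))   # approaches the true minimizer (1.5, 0.5)
```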
|
Haeupler, Bernhard |
STOC '18: "Synchronization Strings: Explicit ..."
Synchronization Strings: Explicit Constructions, Local Decoding, and Applications
Bernhard Haeupler and Amirbehshad Shahrasbi (Carnegie Mellon University, USA) This paper gives new results for synchronization strings, a powerful combinatorial object introduced by [Haeupler, Shahrasbi; STOC’17] that allows one to efficiently deal with insertions and deletions in various communication problems: - We give a deterministic, linear time synchronization string construction, improving over an O(n^5) time randomized construction. Independently of this work, a deterministic O(n log^2 log n) time construction was proposed by Cheng, Li, and Wu. - We give a deterministic construction of an infinite synchronization string which outputs the first n symbols in O(n) time. Previously it was not known whether such a string was computable. - Both synchronization string constructions are highly explicit, i.e., the ith symbol can be deterministically computed in O(log i) time. - This paper also introduces a generalized notion we call long-distance synchronization strings. Such strings allow for local and very fast decoding. In particular only O(log^3 n) time and access to logarithmically many symbols are required to decode any index. The paper also provides several applications for these improved synchronization strings: - For any δ < 1 and ε > 0 we provide an insdel error correcting block code with rate 1 − δ − ε which can correct any δ/3 fraction of insertion and deletion errors in O(n log^3 n) time. This near linear computational efficiency is surprising given that we do not even know how to compute the (edit) distance between the decoding input and output in sub-quadratic time. - We show that local decodability implies that error correcting codes constructed with long-distance synchronization strings can not only efficiently recover from δ fraction of insdel errors but, similar to [Schulman, Zuckerman; TransInf’99], also from any O(δ / log n) fraction of block transpositions and block replications. These block corruptions allow arbitrarily long substrings to be swapped or replicated anywhere. - We show that high explicitness and local decoding allow for infinite channel simulations with exponentially smaller memory and decoding time requirements. These simulations can then be used to give the first near linear time interactive coding scheme for insdel errors, similar to the result of [Brakerski, Naor; SODA’13] for Hamming errors. @InProceedings{STOC18p841, author = {Bernhard Haeupler and Amirbehshad Shahrasbi}, title = {Synchronization Strings: Explicit Constructions, Local Decoding, and Applications}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {841--854}, doi = {}, year = {2018}, } STOC '18: "Explicit Binary Tree Codes ..." Explicit Binary Tree Codes with Polylogarithmic Size Alphabet Gil Cohen, Bernhard Haeupler, and Leonard J. Schulman (Princeton University, USA; Carnegie Mellon University, USA; California Institute of Technology, USA) This paper makes progress on the problem of explicitly constructing a binary tree code with constant distance and constant alphabet size. We give an explicit binary tree code with constant distance and alphabet size poly(log n), where n is the depth of the tree. This is the first improvement over a two-decade-old construction that has an exponentially larger alphabet of size poly(n). At the core of our construction is the first explicit tree code with constant rate and constant distance, though with non-constant arity - a result of independent interest. This construction adapts the polynomial interpolation framework to the online setting. 
@InProceedings{STOC18p535, author = {Gil Cohen and Bernhard Haeupler and Leonard J. Schulman}, title = {Explicit Binary Tree Codes with Polylogarithmic Size Alphabet}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {535--544}, doi = {}, year = {2018}, } |
|
Hajiaghayi, MohammadTaghi |
STOC '18: "Fast Algorithms for Knapsack ..."
Fast Algorithms for Knapsack via Convolution and Prediction
MohammadHossein Bateni, MohammadTaghi Hajiaghayi, Saeed Seddighin, and Cliff Stein (Google, USA; University of Maryland, USA; Columbia University, USA) The knapsack problem is a fundamental problem in combinatorial optimization. It has been studied extensively from theoretical as well as practical perspectives as it is one of the most well-known NP-hard problems. The goal is to pack a knapsack of size t with the maximum value from a collection of n items with given sizes and values. Recent evidence suggests that a classic O(nt) dynamic-programming solution for the knapsack problem might be the fastest in the worst case. In fact, solving the knapsack problem was shown to be computationally equivalent to the (min, +) convolution problem, which is thought to be facing a quadratic-time barrier. This hardness is in contrast to the more famous (+, ·) convolution (generally known as polynomial multiplication), that has an O(nlogn)-time solution via Fast Fourier Transform. Our main results are algorithms with near-linear running times (in terms of the size of the knapsack and the number of items) for the knapsack problem, if either the values or sizes of items are small integers. More specifically, if item sizes are integers bounded by , the running time of our algorithm is Õ((n+t)). If the item values are integers bounded by , our algorithm runs in time Õ(n+t). Best previously known running times were O(nt), O(n2) and O(n) (Pisinger, J. of Alg., 1999). At the core of our algorithms lies the prediction technique: Roughly speaking, this new technique enables us to compute the convolution of two vectors in time (n) when an approximation of the solution within an additive error of is available. Our results also improve the best known strongly polynomial time solutions for knapsack. In the limited size setting, when the items have multiplicities, the fastest strongly polynomial time algorithms for knapsack run in time O(n2 2) and O(n3 2) for the cases of infinite and given multiplicities, respectively. Our results improve both running times by a factor of (n max{1, n/}). @InProceedings{STOC18p1269, author = {MohammadHossein Bateni and MohammadTaghi Hajiaghayi and Saeed Seddighin and Cliff Stein}, title = {Fast Algorithms for Knapsack via Convolution and Prediction}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1269--1282}, doi = {}, year = {2018}, } |
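The O(nt) dynamic program the paper competes with fits in a few lines and is a useful reference point (the standard textbook 0/1 knapsack, not the paper's convolution-based algorithm):

```python
def knapsack(sizes, values, t):
    """Classic O(n*t) dynamic program for 0/1 knapsack.

    dp[c] = best value achievable with total size at most c.
    """
    dp = [0] * (t + 1)
    for s, v in zip(sizes, values):
        for c in range(t, s - 1, -1):   # reverse order: each item used once
            dp[c] = max(dp[c], dp[c - s] + v)
    return dp[t]

print(knapsack([3, 4, 5], [30, 50, 60], t=8))   # 90: items of size 3 and 5
```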
|
Hatami, Pooya |
STOC '18: "Improved Pseudorandomness ..."
Improved Pseudorandomness for Unordered Branching Programs through Local Monotonicity
Eshan Chattopadhyay, Pooya Hatami, Omer Reingold, and Avishay Tal (Cornell University, USA; Institute for Advanced Study at Princeton, USA; University of Texas at Austin, USA; Stanford University, USA) We present an explicit pseudorandom generator with seed length Õ((log n)^{w+1}) for read-once, oblivious, width-w branching programs that can read their input bits in any order. This improves upon the work of Impagliazzo, Meka and Zuckerman (FOCS’12), where they required seed length n^{1/2+o(1)}. A central ingredient in our work is the following bound that we prove on the Fourier spectrum of branching programs. For any width-w read-once, oblivious branching program B: {0,1}^n → {0,1} and any k ∈ {1,…,n}, ∑_{S⊆[n]: |S|=k} |B̂(S)| ≤ O(log n)^{wk}. This settles a conjecture posed by Reingold, Steinke and Vadhan (RANDOM’13). Our analysis crucially uses a notion of local monotonicity on the edge labeling of the branching program. We carry critical parts of our proof under the assumption of local monotonicity and show how to deduce our results for unrestricted branching programs. @InProceedings{STOC18p363, author = {Eshan Chattopadhyay and Pooya Hatami and Omer Reingold and Avishay Tal}, title = {Improved Pseudorandomness for Unordered Branching Programs through Local Monotonicity}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {363--375}, doi = {}, year = {2018}, } |
|
Hirvonen, Juho |
STOC '18: "New Classes of Distributed ..."
New Classes of Distributed Time Complexity
Alkida Balliu, Juho Hirvonen, Janne H. Korhonen, Tuomo Lempiäinen, Dennis Olivetti, and Jukka Suomela (Aalto University, Finland; University of Freiburg, Germany) A number of recent papers – e.g. Brandt et al. (STOC 2016), Chang et al. (FOCS 2016), Ghaffari & Su (SODA 2017), Brandt et al. (PODC 2017), and Chang & Pettie (FOCS 2017) – have advanced our understanding of one of the most fundamental questions in the theory of distributed computing: what are the possible time complexity classes of LCL problems in the LOCAL model? In essence, we have a graph problem Π in which a solution can be verified by checking all radius-O(1) neighbourhoods, and the question is what is the smallest T such that a solution can be computed so that each node chooses its own output based on its radius-T neighbourhood. Here T is the distributed time complexity of Π. The time complexity classes for deterministic algorithms in bounded-degree graphs that are known to exist by prior work are Θ(1), Θ(log* n), Θ(log n), Θ(n^{1/k}), and Θ(n). It is also known that there are two gaps: one between ω(1) and o(log log* n), and another between ω(log* n) and o(log n). It has been conjectured that many more gaps exist, and that the overall time hierarchy is relatively simple – indeed, this is known to be the case in restricted graph families such as cycles and grids. We show that the picture is much more diverse than previously expected. We present a general technique for engineering LCL problems with numerous different deterministic time complexities, including Θ(log^α n) for any α ≥ 1, 2^{Θ(log^α n)} for any α ≤ 1, and Θ(n^α) for any α < 1/2 in the high end of the complexity spectrum, and Θ(log^α log* n) for any α ≥ 1, 2^{Θ(log^α log* n)} for any α ≤ 1, and Θ((log* n)^α) for any α ≤ 1 in the low end of the complexity spectrum; here α is a positive rational number. @InProceedings{STOC18p1307, author = {Alkida Balliu and Juho Hirvonen and Janne H. Korhonen and Tuomo Lempiäinen and Dennis Olivetti and Jukka Suomela}, title = {New Classes of Distributed Time Complexity}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1307--1318}, doi = {}, year = {2018}, } |
|
Hopkins, Samuel B. |
STOC '18: "Mixture Models, Robustness, ..."
Mixture Models, Robustness, and Sum of Squares Proofs
Samuel B. Hopkins and Jerry Li (Cornell University, USA; Massachusetts Institute of Technology, USA) We use the Sum of Squares method to develop new efficient algorithms for learning well-separated mixtures of Gaussians and robust mean estimation, both in high dimensions, that substantially improve upon the statistical guarantees achieved by previous efficient algorithms. Our contributions are: Mixture models with separated means: We study mixtures of poly(k)-many k-dimensional distributions where the means of every pair of distributions are separated by at least k^ε. In the special case of spherical Gaussian mixtures, we give a k^{O(1/ε)}-time algorithm that learns the means assuming separation at least k^ε, for any ε > 0. This is the first algorithm to improve on greedy (“single-linkage”) and spectral clustering, breaking a long-standing barrier for efficient algorithms at separation k^{1/4}. Robust estimation: When an unknown (1−ε)-fraction of X_1,…,X_n are chosen from a sub-Gaussian distribution with mean µ but the remaining points are chosen adversarially, we give an algorithm recovering µ to error ε^{1−1/t} in time k^{O(t)}, so long as sub-Gaussian-ness up to O(t) moments can be certified by a Sum of Squares proof. This is the first polynomial-time algorithm with guarantees approaching the information-theoretic limit for non-Gaussian distributions. Previous algorithms could not achieve error better than ε^{1/2}. As a corollary, we achieve similar results for robust covariance estimation. Both of these results are based on a unified technique. Inspired by recent algorithms of Diakonikolas et al. in robust statistics, we devise an SDP based on the Sum of Squares method for the following setting: given X_1,…,X_n ∈ ℝ^k for large k and n = poly(k) with the promise that a subset of X_1,…,X_n were sampled from a probability distribution with bounded moments, recover some information about that distribution. @InProceedings{STOC18p1021, author = {Samuel B. Hopkins and Jerry Li}, title = {Mixture Models, Robustness, and Sum of Squares Proofs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1021--1034}, doi = {}, year = {2018}, } |
|
Hoy, Darrell |
STOC '18: "A Tighter Welfare Guarantee ..."
A Tighter Welfare Guarantee for First-Price Auctions
Darrell Hoy, Samuel Taggart, and Zihe Wang (Tremor Technologies, USA; Oberlin College, USA; Shanghai University of Finance and Economics, China) This paper proves that the welfare of the first price auction in Bayes-Nash equilibrium is at least a .743-fraction of the welfare of the optimal mechanism assuming agents’ values are independently distributed. The previous best bound was 1−1/e≈.63, derived using smoothness, the standard technique for reasoning about welfare of games in equilibrium. In the worst known example, the first price auction achieves a ≈.869-fraction of the optimal welfare, far better than the theoretical guarantee. Despite this large gap, it was unclear whether the 1−1/e bound was tight. We prove that it is not. Our analysis eschews smoothness, and instead uses the independence assumption on agents’ value distributions to give a more careful accounting of the welfare contribution of agents who win despite not having the highest value. @InProceedings{STOC18p132, author = {Darrell Hoy and Samuel Taggart and Zihe Wang}, title = {A Tighter Welfare Guarantee for First-Price Auctions}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {132--137}, doi = {}, year = {2018}, } |
|
Huang, Zhiyi |
STOC '18: "How to Match When All Vertices ..."
How to Match When All Vertices Arrive Online
Zhiyi Huang, Ning Kang, Zhihao Gavin Tang, Xiaowei Wu, Yuhao Zhang, and Xue Zhu (University of Hong Kong, China) We introduce a fully online model of maximum cardinality matching in which all vertices arrive online. On the arrival of a vertex, its incident edges to previously-arrived vertices are revealed. Each vertex has a deadline that is after all its neighbors’ arrivals. If a vertex remains unmatched until its deadline, the algorithm must then irrevocably either match it to an unmatched neighbor, or leave it unmatched. The model generalizes the existing one-sided online model and is motivated by applications including ride-sharing platforms, real-estate agencies, etc. We show that the Ranking algorithm by Karp et al. (STOC 1990) is 0.5211-competitive in our fully online model for general graphs. Our analysis brings a novel charging mechanism into the randomized primal-dual technique by Devanur et al. (SODA 2013), allowing a vertex other than the two endpoints of a matched edge to share the gain. To our knowledge, this is the first analysis of Ranking that beats 0.5 on general graphs in an online matching problem, a first step towards solving the open problem by Karp et al. (STOC 1990) about the optimality of Ranking on general graphs. If the graph is bipartite, we show that the competitive ratio of Ranking is between 0.5541 and 0.5671. Finally, we prove that the fully online model is strictly harder than the previous model, as no online algorithm can be 0.6317-competitive (note that 0.6317 < 1−1/e) in our model even for bipartite graphs. @InProceedings{STOC18p17, author = {Zhiyi Huang and Ning Kang and Zhihao Gavin Tang and Xiaowei Wu and Yuhao Zhang and Xue Zhu}, title = {How to Match When All Vertices Arrive Online}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {17--29}, doi = {}, year = {2018}, } |
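For reference, here is the classical one-sided Ranking algorithm of Karp et al. that the paper re-analyzes in the fully online model; this sketch handles only the original bipartite one-sided arrival setting (plain Python, with a made-up input format).

```python
import random

def ranking(offline, arrivals):
    """One-sided Ranking (Karp-Vazirani-Vazirani, STOC 1990).

    Offline vertices receive a random rank once, up front; each arriving
    online vertex is matched to its unmatched neighbor of lowest rank.
    """
    rank = {v: r for r, v in enumerate(random.sample(offline, len(offline)))}
    matched, matching = set(), []
    for u, nbrs in arrivals:
        free = [v for v in nbrs if v not in matched]
        if free:
            v = min(free, key=rank.__getitem__)
            matched.add(v)
            matching.append((u, v))
    return matching

offline = ['a', 'b', 'c']
arrivals = [('x', ['a', 'b']), ('y', ['a']), ('z', ['b', 'c'])]
print(ranking(offline, arrivals))
```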
|
Huiberts, Sophie |
STOC '18: "A Friendly Smoothed Analysis ..."
A Friendly Smoothed Analysis of the Simplex Method
Daniel Dadush and Sophie Huiberts (CWI, Netherlands) Explaining the excellent practical performance of the simplex method for linear programming has been a major topic of research for over 50 years. One of the most successful frameworks for understanding the simplex method was given by Spielman and Teng (JACM ‘04), who developed the notion of smoothed analysis. Starting from an arbitrary linear program with d variables and n constraints, Spielman and Teng analyzed the expected runtime over random perturbations of the LP (smoothed LP), where variance σ^2 Gaussian noise is added to the LP data. In particular, they gave a two-stage shadow vertex simplex algorithm which uses an expected O(n^{86} d^{55} σ^{−30}) number of simplex pivots to solve the smoothed LP. Their analysis and runtime was substantially improved by Deshpande and Spielman (FOCS ‘05) and later Vershynin (SICOMP ‘09). The fastest current algorithm, due to Vershynin, solves the smoothed LP using an expected O(d^3 log^3 n · σ^{−4} + d^9 log^7 n) number of pivots, improving the dependence on n from polynomial to logarithmic. While the original proof of Spielman and Teng has now been substantially simplified, the resulting analyses are still quite long and complex and the parameter dependencies far from optimal. In this work, we make substantial progress on this front, providing an improved and simpler analysis of shadow simplex methods, where our main algorithm requires an expected O(d^2 √(log n) · σ^{−2} + d^5 log^{3/2} n) number of simplex pivots. We obtain our results via an improved shadow bound, key to earlier analyses as well, combined with algorithmic techniques of Borgwardt (ZOR ‘82) and Vershynin. As an added bonus, our analysis is completely modular, allowing us to obtain non-trivial bounds for perturbations beyond Gaussians, such as Laplace perturbations. @InProceedings{STOC18p390, author = {Daniel Dadush and Sophie Huiberts}, title = {A Friendly Smoothed Analysis of the Simplex Method}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {390--403}, doi = {}, year = {2018}, } |
|
Husfeldt, Thore |
STOC '18: "Extensor-Coding ..."
Extensor-Coding
Cornelius Brand, Holger Dell, and Thore Husfeldt (Saarland University, Germany; M2CI, Germany; Lund University, Sweden; IT University of Copenhagen, Denmark) We devise an algorithm that approximately computes the number of paths of length k in a given directed graph with n vertices up to a multiplicative error of 1 ± ε. Our algorithm runs in time ε^{−2} 4^k (n+m) poly(k). The algorithm is based on associating with each vertex an element in the exterior (or, Grassmann) algebra, called an extensor, and then performing computations in this algebra. This connection to exterior algebra generalizes a number of previous approaches for the longest path problem and is of independent conceptual interest. Using this approach, we also obtain a deterministic 2^k · poly(n) time algorithm to find a k-path in a given directed graph that is promised to have few of them. Our results and techniques generalize to the subgraph isomorphism problem when the subgraphs we are looking for have bounded pathwidth. Finally, we also obtain a randomized algorithm to detect k-multilinear terms in a multivariate polynomial given as a general algebraic circuit. To the best of our knowledge, this was previously only known for algebraic circuits not involving negative constants. @InProceedings{STOC18p151, author = {Cornelius Brand and Holger Dell and Thore Husfeldt}, title = {Extensor-Coding}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {151--164}, doi = {}, year = {2018}, } |
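The quantity being approximated can be computed exactly for small graphs by enumerating simple paths, which is what the paper's ε^{−2} 4^k (n+m) poly(k) algorithm avoids. A brute-force sketch (here a "k-path" is taken to mean k vertices, an assumption about the convention):

```python
def count_k_paths(adj, k):
    """Exact count of directed simple paths on k vertices via DFS.

    Exponential time in general; this is the baseline that extensor
    coding approximates in time ~4^k (n+m) poly(k).
    """
    def extend(v, visited, length):
        if length == k:
            return 1
        return sum(extend(w, visited | {w}, length + 1)
                   for w in adj[v] if w not in visited)
    return sum(extend(v, {v}, 1) for v in range(len(adj)))

# directed 4-cycle 0 -> 1 -> 2 -> 3 -> 0
adj = [[1], [2], [3], [0]]
print(count_k_paths(adj, 3))   # 4 paths on 3 vertices
```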
|
Ikenmeyer, Christian |
STOC '18: "Generalized Matrix Completion ..."
Generalized Matrix Completion and Algebraic Natural Proofs
Markus Bläser, Christian Ikenmeyer, Gorav Jindal, and Vladimir Lysikov (Saarland University, Germany; Max Planck Institute for Informatics, Germany) Algebraic natural proofs were recently introduced by Forbes, Shpilka and Volk (Proc. of the 49th Annual ACM SIGACT Symposium on Theory of Computing (STOC), pages 653–664, 2017) and independently by Grochow, Kumar, Saks and Saraf (CoRR, abs/1701.01717, 2017) as an attempt to transfer Razborov and Rudich’s famous barrier result (J. Comput. Syst. Sci., 55(1): 24–35, 1997) for Boolean circuit complexity to algebraic complexity theory. Razborov and Rudich’s barrier result relies on a widely believed assumption, namely, the existence of pseudo-random generators. Unfortunately, there is no known analogous theory of pseudo-randomness in the algebraic setting. Therefore, Forbes et al. use a concept called succinct hitting sets instead. This assumption is related to polynomial identity testing, but it is currently not clear how plausible this assumption is. Forbes et al. are only able to construct succinct hitting sets against rather weak models of arithmetic circuits. Generalized matrix completion is the following problem: Given a matrix with affine linear forms as entries, find an assignment to the variables in the linear forms such that the rank of the resulting matrix is minimal. We call this rank the completion rank. Computing the completion rank is an NP-hard problem. As our first main result, we prove that it is also NP-hard to determine whether a given matrix can be approximated by matrices of completion rank ≤ b. The minimum quantity b for which this is possible is called border completion rank (similar to the border rank of tensors). Naturally, algebraic natural proofs can only prove lower bounds for such border complexity measures. Furthermore, these border complexity measures play an important role in the geometric complexity program. Using our hardness result above, we can prove the following barrier: We construct a small family of matrices with affine linear forms as entries and a bound b, such that at least one of these matrices does not have an algebraic natural proof of polynomial size against all matrices of border completion rank b, unless coNP ⊆ ∃ BPP. This is an algebraic barrier result that is based on a well-established and widely believed conjecture. The complexity class ∃ BPP is known to be a subset of the more well known complexity class in the literature. Thus ∃ BPP can be replaced by MA in the statements of all our results. With similar techniques, we can also prove that tensor rank is hard to approximate. Furthermore, we prove a similar result for the variety of matrices with permanent zero. There are no algebraic polynomial size natural proofs for the variety of matrices with permanent zero, unless P#P ⊆ ∃ BPP. On the other hand, we are able to prove that the geometric complexity theory approach initiated by Mulmuley and Sohoni (SIAM J. Comput. 31(2): 496–526, 2001) yields proofs of polynomial size for this variety, therefore overcoming the natural proofs barrier in this case. @InProceedings{STOC18p1193, author = {Markus Bläser and Christian Ikenmeyer and Gorav Jindal and Vladimir Lysikov}, title = {Generalized Matrix Completion and Algebraic Natural Proofs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1193--1206}, doi = {}, year = {2018}, } STOC '18: "On the Complexity of Hazard-Free ..." 
On the Complexity of Hazard-Free Circuits Christian Ikenmeyer, Balagopal Komarath, Christoph Lenzen, Vladimir Lysikov, Andrey Mokhov, and Karteek Sreenivasaiah (Max Planck Institute for Informatics, Germany; Saarland University, Germany; M2CI, Germany; Newcastle University, UK; IIT Hyderabad, India) The problem of constructing hazard-free Boolean circuits dates back to the 1940s and is an important problem in circuit design. Our main lower-bound result unconditionally shows the existence of functions whose circuit complexity is polynomially bounded while every hazard-free implementation is provably of exponential size. Previous lower bounds on the hazard-free complexity were only valid for depth 2 circuits. The same proof method yields that every subcubic implementation of Boolean matrix multiplication must have hazards. These results follow from a crucial structural insight: Hazard-free complexity is a natural generalization of monotone complexity to all (not necessarily monotone) Boolean functions. Thus, we can apply known monotone complexity lower bounds to find lower bounds on the hazard-free complexity. We also lift these methods from the monotone setting to prove exponential hazard-free complexity lower bounds for non-monotone functions. As our main upper-bound result we show how to efficiently convert a Boolean circuit into a bounded-bit hazard-free circuit with only a polynomially large blow-up in the number of gates. Previously, the best known method yielded exponentially large circuits in the worst case, so our algorithm gives an exponential improvement. As a side result we establish the NP-completeness of several hazard detection problems. @InProceedings{STOC18p878, author = {Christian Ikenmeyer and Balagopal Komarath and Christoph Lenzen and Vladimir Lysikov and Andrey Mokhov and Karteek Sreenivasaiah}, title = {On the Complexity of Hazard-Free Circuits}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {878--889}, doi = {}, year = {2018}, } |
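The structural insight above is easiest to see on the textbook hazard example: the 2-to-1 multiplexer (s AND x) OR (NOT s AND y). Below is a minimal Python sketch (illustrative only, not code from the paper; all function names are ours) that evaluates gates over Kleene's three-valued logic {0, 1, u}, where u models an unstable wire. The standard multiplexer outputs u on (s, x, y) = (u, 1, 1) even though the Boolean output is 1 for both s = 0 and s = 1, i.e., it has a hazard; adding the redundant consensus term x AND y removes it.

# Kleene three-valued logic: 'u' models an unstable/transitioning wire.
def t_not(a):
    return {'0': '1', '1': '0', 'u': 'u'}[a]

def t_and(a, b):
    if a == '0' or b == '0':
        return '0'          # a stable 0 forces the output, even against 'u'
    return '1' if a == '1' and b == '1' else 'u'

def t_or(a, b):
    if a == '1' or b == '1':
        return '1'          # a stable 1 forces the output
    return '0' if a == '0' and b == '0' else 'u'

def mux(s, x, y):
    # Standard 2-to-1 multiplexer: (s AND x) OR (NOT s AND y).
    return t_or(t_and(s, x), t_and(t_not(s), y))

def mux_hazard_free(s, x, y):
    # The redundant consensus term x AND y removes the hazard at s = 'u'.
    return t_or(mux(s, x, y), t_and(x, y))

assert mux('0', '1', '1') == '1' and mux('1', '1', '1') == '1'
assert mux('u', '1', '1') == 'u'              # the hazard
assert mux_hazard_free('u', '1', '1') == '1'  # hazard-free on this input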
|
Im, Sungjin |
STOC '18: "Online Load Balancing on Related ..."
Online Load Balancing on Related Machines
Sungjin Im, Nathaniel Kell, Debmalya Panigrahi, and Maryam Shadloo (University of California at Merced, USA; Duke University, USA) In this paper, we consider the problem of assigning jobs online to machines with non-uniform speeds (also called related machines) so as to optimize a given norm of the machine loads. A long line of work, starting with the seminal work of Graham in the 1960s, has led to tight competitive ratios for all ℓq norms for two scenarios: the special case of identical machines (uniform machine speeds) and the more general setting of unrelated machines (jobs have arbitrary processing times on machines). For non-uniform machine speeds, however, the only known result was a constant competitive ratio for the makespan (ℓ∞) norm, via the so-called slowest-fit algorithm (Aspnes, Azar, Fiat, Plotkin, and Waarts, JACM ’97). Our first result in this paper is the first constant-competitive algorithm for scheduling on related machines for an arbitrary ℓq norm. Recent literature has further expanded the scope of this problem to vector scheduling, to capture multi-dimensional resource requirements in applications such as data centers. As in the scalar case, tight bounds are known for vector scheduling on identical and unrelated machines. Our second set of results gives tight competitive ratios for vector scheduling on related machines for the makespan and all ℓq norms. No previous bounds were known, even for the makespan norm, for related machines. We employ a convex relaxation of the ℓq-norm objective and use a continuous greedy algorithm to solve this convex program online. To round the fractional solution, we then use a novel restructuring of the instance that we call machine smoothing. This is a generic tool that reduces a problem on related machines to a set of problem instances on identical machines, and we hope it will be useful in other settings with non-uniform machine speeds as well. @InProceedings{STOC18p30, author = {Sungjin Im and Nathaniel Kell and Debmalya Panigrahi and Maryam Shadloo}, title = {Online Load Balancing on Related Machines}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {30--43}, doi = {}, year = {2018}, } |
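Read as a concrete program, a natural convex relaxation of the ℓq-norm objective can be written as follows (a hedged LaTeX sketch in our own notation; the exact relaxation and its online continuous greedy update in the paper may differ):

\[
\min \Big(\sum_i \ell_i^{\,q}\Big)^{1/q}
\quad \text{s.t.}\quad
\ell_i = \frac{1}{s_i}\sum_j p_j\, x_{ij},
\qquad \sum_i x_{ij} = 1 \ \ \forall j,
\qquad x_{ij} \ge 0,
\]

where s_i is the speed of machine i, p_j the size of job j, and x_{ij} the fraction of job j assigned to machine i. Machine smoothing then rounds such a fractional solution by reducing to instances on identical machines.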
|
Jindal, Gorav |
STOC '18: "Generalized Matrix Completion ..."
Generalized Matrix Completion and Algebraic Natural Proofs
Markus Bläser, Christian Ikenmeyer, Gorav Jindal, and Vladimir Lysikov (Saarland University, Germany; Max Planck Institute for Informatics, Germany) Algebraic natural proofs were recently introduced by Forbes, Shpilka and Volk (Proc. of the 49th Annual ACM SIGACT Symposium on Theory of Computing (STOC), pages 653–664, 2017) and independently by Grochow, Kumar, Saks and Saraf (CoRR, abs/1701.01717, 2017) as an attempt to transfer Razborov and Rudich’s famous barrier result (J. Comput. Syst. Sci., 55(1): 24–35, 1997) for Boolean circuit complexity to algebraic complexity theory. Razborov and Rudich’s barrier result relies on a widely believed assumption, namely, the existence of pseudo-random generators. Unfortunately, there is no known analogous theory of pseudo-randomness in the algebraic setting. Therefore, Forbes et al. use a concept called succinct hitting sets instead. This assumption is related to polynomial identity testing, but it is currently unclear how plausible it is. Forbes et al. are only able to construct succinct hitting sets against rather weak models of arithmetic circuits. Generalized matrix completion is the following problem: Given a matrix with affine linear forms as entries, find an assignment to the variables in the linear forms such that the rank of the resulting matrix is minimal. We call this rank the completion rank. Computing the completion rank is an NP-hard problem. As our first main result, we prove that it is also NP-hard to determine whether a given matrix can be approximated by matrices of completion rank ≤ b. The minimum quantity b for which this is possible is called the border completion rank (similar to the border rank of tensors). Naturally, algebraic natural proofs can only prove lower bounds for such border complexity measures. Furthermore, these border complexity measures play an important role in the geometric complexity theory program. Using our hardness result above, we can prove the following barrier: We construct a small family of matrices with affine linear forms as entries and a bound b, such that at least one of these matrices does not have an algebraic natural proof of polynomial size against all matrices of border completion rank b, unless coNP ⊆ ∃BPP. This is an algebraic barrier result that is based on a well-established and widely believed conjecture. The complexity class ∃BPP is known to be a subset of the better-known complexity class MA. Thus ∃BPP can be replaced by MA in the statements of all our results. With similar techniques, we can also prove that tensor rank is hard to approximate. Furthermore, we prove a similar result for the variety of matrices with permanent zero. There are no algebraic polynomial size natural proofs for the variety of matrices with permanent zero, unless P#P ⊆ ∃BPP. On the other hand, we are able to prove that the geometric complexity theory approach initiated by Mulmuley and Sohoni (SIAM J. Comput. 31(2): 496–526, 2001) yields proofs of polynomial size for this variety, thereby overcoming the natural proofs barrier in this case. @InProceedings{STOC18p1193, author = {Markus Bläser and Christian Ikenmeyer and Gorav Jindal and Vladimir Lysikov}, title = {Generalized Matrix Completion and Algebraic Natural Proofs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1193--1206}, doi = {}, year = {2018}, } |
|
Kakade, Sham |
STOC '18: "Prediction with a Short Memory ..."
Prediction with a Short Memory
Vatsal Sharan, Sham Kakade, Percy Liang, and Gregory Valiant (Stanford University, USA; University of Washington, USA) We consider the problem of predicting the next observation given a sequence of past observations, and consider the extent to which accurate prediction requires complex algorithms that explicitly leverage long-range dependencies. Perhaps surprisingly, our positive results show that for a broad class of sequences, there is an algorithm that predicts well on average, and bases its predictions only on the most recent few observations together with a set of simple summary statistics of the past observations. Specifically, we show that for any distribution over observations, if the mutual information between past observations and future observations is upper bounded by I, then a simple Markov model over the most recent I/є observations obtains expected KL error є—and hence ℓ1 error √є—with respect to the optimal predictor that has access to the entire past and knows the data generating distribution. For a Hidden Markov Model with n hidden states, I is bounded by logn, a quantity that does not depend on the mixing time, and we show that the trivial prediction algorithm based on the empirical frequencies of length O(logn/є) windows of observations achieves this error, provided the length of the sequence is dΩ(logn/є), where d is the size of the observation alphabet. We also establish that this result cannot be improved upon, even for the class of HMMs, in the following two senses: First, for HMMs with n hidden states, a window length of logn/є is information-theoretically necessary to achieve expected KL error є, or ℓ1 error √є. Second, the dΘ(logn/є) samples required to accurately estimate the Markov model when observations are drawn from an alphabet of size d are necessary for any computationally tractable learning/prediction algorithm, assuming the hardness of strongly refuting a certain class of CSPs. @InProceedings{STOC18p1074, author = {Vatsal Sharan and Sham Kakade and Percy Liang and Gregory Valiant}, title = {Prediction with a Short Memory}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1074--1087}, doi = {}, year = {2018}, } |
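The "trivial prediction algorithm" above is concrete enough to sketch in a few lines of Python (illustrative only; the window length and the add-one smoothing are our choices, not the paper's exact parameters): predict the next symbol from the empirical distribution of continuations of the current length-ℓ window.

from collections import Counter, defaultdict

def window_predictor(seq, ell):
    # Empirical-frequency Markov predictor over length-ell windows, with
    # add-one smoothing so that unseen contexts fall back to uniform.
    alphabet = sorted(set(seq))
    counts = defaultdict(Counter)
    for i in range(len(seq) - ell):
        counts[tuple(seq[i:i + ell])][seq[i + ell]] += 1

    def predict(recent):
        c = counts[tuple(recent[-ell:])]
        total = sum(c.values()) + len(alphabet)
        return {a: (c[a] + 1) / total for a in alphabet}

    return predict

# Usage: a slightly corrupted period-3 source; window length 2 suffices.
predict = window_predictor("abcabcabcabcabcabcabxabc", ell=2)
dist = predict("ab")                    # distribution over the next symbol
assert max(dist, key=dist.get) == "c"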
|
Kalai, Yael Tauman |
STOC '18: "Multi-collision Resistance: ..."
Multi-collision Resistance: A Paradigm for Keyless Hash Functions
Nir Bitansky, Yael Tauman Kalai, and Omer Paneth (Tel Aviv University, Israel; Microsoft, USA; Massachusetts Institute of Technology, USA) We introduce a new notion of multi-collision resistance for keyless hash functions. This is a natural relaxation of collision resistance where it is hard to find multiple inputs with the same hash in the following sense: the number of colliding inputs that a polynomial-time non-uniform adversary can find is not much larger than its advice. We discuss potential candidates for this notion and study its applications. Assuming the existence of such hash functions, we resolve the long-standing question of the round complexity of zero knowledge protocols—we construct a 3-message zero knowledge argument against arbitrary polynomial-size non-uniform adversaries. We also improve the round complexity in several other central applications, including a 3-message succinct argument of knowledge for NP, a 4-message zero-knowledge proof, and a 5-message public-coin zero-knowledge argument. Our techniques can also be applied in the keyed setting, where we match the round complexity of known protocols while relaxing the underlying assumption from collision-resistance to keyed multi-collision resistance. The core technical contribution behind our results is a domain extension transformation from multi-collision-resistant hash functions for a fixed input length to ones with an arbitrary input length and a local opening property. The transformation is based on a combination of classical domain extension techniques, together with new information-theoretic tools. In particular, we define and construct a new variant of list-recoverable codes, which may be of independent interest. @InProceedings{STOC18p671, author = {Nir Bitansky and Yael Tauman Kalai and Omer Paneth}, title = {Multi-collision Resistance: A Paradigm for Keyless Hash Functions}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {671--684}, doi = {}, year = {2018}, } STOC '18: "Succinct Delegation for Low-Space ..." Succinct Delegation for Low-Space Non-deterministic Computation Saikrishna Badrinarayanan, Yael Tauman Kalai, Dakshita Khurana, Amit Sahai, and Daniel Wichs (University of California at Los Angeles, USA; Microsoft Research, USA; Massachusetts Institute of Technology, USA; Northeastern University, USA) We construct a delegation scheme for verifying non-deterministic computations, with complexity proportional only to the non-deterministic space of the computation. Specifically, letting n denote the input length, we construct a delegation scheme for any language verifiable in non-deterministic time and space (T(n), S(n)) with communication complexity poly(S(n)), verifier runtime n·polylog(T(n))+poly(S(n)), and prover runtime poly(T(n)). Our scheme consists of only two messages and has adaptive soundness, assuming the existence of a sub-exponentially secure private information retrieval (PIR) scheme, which can be instantiated under standard (albeit, sub-exponential) cryptographic assumptions, such as the sub-exponential LWE assumption. Specifically, the verifier publishes a (short) public key ahead of time, and this key can be used by any prover to non-interactively prove the correctness of any adaptively chosen non-deterministic computation. Such a scheme is referred to as a non-interactive delegation scheme. Our scheme is privately verifiable, where the verifier needs the corresponding secret key in order to verify proofs.
Prior to our work, such results were known only in the Random Oracle Model, or under knowledge assumptions. Our results yield succinct non-interactive arguments based on sub-exponential LWE, for many natural languages believed to be outside of P. @InProceedings{STOC18p709, author = {Saikrishna Badrinarayanan and Yael Tauman Kalai and Dakshita Khurana and Amit Sahai and Daniel Wichs}, title = {Succinct Delegation for Low-Space Non-deterministic Computation}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {709--721}, doi = {}, year = {2018}, } |
|
Kalaitzis, Christos |
STOC '18: "Improved Approximation for ..."
Improved Approximation for Tree Augmentation: Saving by Rewiring
Fabrizio Grandoni, Christos Kalaitzis, and Rico Zenklusen (University of Lugano, Switzerland; ETH Zurich, Switzerland) The Tree Augmentation Problem (TAP) is a fundamental network design problem in which we are given a tree and a set of additional edges, also called links. The task is to find a set of links, of minimum size, whose addition to the tree leads to a 2-edge-connected graph. A long line of results on TAP culminated in the previously best known approximation guarantee of 1.5 achieved by a combinatorial approach due to Kortsarz and Nutov [ACM Transactions on Algorithms 2016], and also by an SDP-based approach by Cheriyan and Gao [Algorithmica 2017]. Moreover, an elegant LP-based (1.5+є)-approximation has also been found very recently by Fiorini, Groß, Könemann, and Sanità [SODA 2018]. In this paper, we show that an approximation factor below 1.5 can be achieved, by presenting a 1.458-approximation that is based on several new techniques. By extending prior results of Adjiashvili [SODA 2017], we first present a black-box reduction to a very structured type of instance, which played a crucial role in recent developments on the problem, and which we call k-wide. Our main contribution is a new approximation algorithm for O(1)-wide tree instances with approximation guarantee strictly below 1.458, based on one of their fundamental properties: wide trees naturally decompose into smaller subtrees with a constant number of leaves. Previous approaches in similar settings rounded each subtree independently and simply combined the obtained solutions. We show that additionally, when starting with a well-chosen LP, the combined solution can be improved through a new “rewiring” technique, showing that one can replace some pairs of used links by a single link. We can rephrase the rewiring problem as a stochastic version of a matching problem, which may be of independent interest. By showing that large matchings can be obtained in this problem, we obtain that a significant number of rewirings are possible, thus leading to an approximation factor below 1.5. @InProceedings{STOC18p632, author = {Fabrizio Grandoni and Christos Kalaitzis and Rico Zenklusen}, title = {Improved Approximation for Tree Augmentation: Saving by Rewiring}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {632--645}, doi = {}, year = {2018}, } |
|
Kamath, Pritish |
STOC '18: "Monotone Circuit Lower Bounds ..."
Monotone Circuit Lower Bounds from Resolution
Ankit Garg, Mika Göös, Pritish Kamath, and Dmitry Sokolov (Microsoft Research, USA; Harvard University, USA; Massachusetts Institute of Technology, USA; KTH, Sweden) For any unsatisfiable CNF formula F that is hard to refute in the Resolution proof system, we show that a gadget-composed version of F is hard to refute in any proof system whose lines are computed by efficient communication protocols—or, equivalently, that a monotone function associated with F has large monotone circuit complexity. Our result extends to monotone real circuits, which yields new lower bounds for the Cutting Planes proof system. @InProceedings{STOC18p902, author = {Ankit Garg and Mika Göös and Pritish Kamath and Dmitry Sokolov}, title = {Monotone Circuit Lower Bounds from Resolution}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {902--911}, doi = {}, year = {2018}, } |
|
Kamma, Lior |
STOC '18: "Tight Cell Probe Bounds for ..."
Tight Cell Probe Bounds for Succinct Boolean Matrix-Vector Multiplication
Diptarka Chakraborty, Lior Kamma, and Kasper Green Larsen (Charles University in Prague, Czechia; Aarhus University, Denmark) The conjectured hardness of Boolean matrix-vector multiplication has been used with great success to prove conditional lower bounds for numerous important data structure problems, see Henzinger et al. [STOC’15]. In recent work, Larsen and Williams [SODA’17] attacked the problem from the upper bound side and gave a surprising cell probe data structure (that is, we only charge for memory accesses, while computation is free). Their cell probe data structure answers queries in Õ(n7/4) time and is succinct in the sense that it stores the input matrix in read-only memory, plus an additional Õ(n7/4) bits on the side. In this paper, we essentially settle the cell probe complexity of succinct Boolean matrix-vector multiplication. We present a new cell probe data structure with query time Õ(n3/2) storing just Õ(n3/2) bits on the side. We then complement our data structure with a lower bound showing that any data structure storing r bits on the side, with n < r < n2, must have query time t satisfying t·r = Ω(n3). For r ≤ n, any data structure must have t = Ω(n2). Since lower bounds in the cell probe model also apply to classic word-RAM data structures, the lower bounds naturally carry over. We also prove similar lower bounds for matrix-vector multiplication over F2. @InProceedings{STOC18p1297, author = {Diptarka Chakraborty and Lior Kamma and Kasper Green Larsen}, title = {Tight Cell Probe Bounds for Succinct Boolean Matrix-Vector Multiplication}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1297--1306}, doi = {}, year = {2018}, } |
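It is worth spelling out why these bounds are tight (a routine instantiation, added here for convenience): plugging the space of the upper bound into the trade-off gives

\[
t \cdot r = \Omega(n^3) \ \text{ with } \ r = \tilde{O}(n^{3/2})
\ \Longrightarrow\
t = \tilde{\Omega}\big(n^{3}/n^{3/2}\big) = \tilde{\Omega}(n^{3/2}),
\]

matching the Õ(n3/2) query time of the new data structure up to polylogarithmic factors.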
|
Kane, Daniel M. |
STOC '18: "List-Decodable Robust Mean ..."
List-Decodable Robust Mean Estimation and Learning Mixtures of Spherical Gaussians
Ilias Diakonikolas, Daniel M. Kane, and Alistair Stewart (University of Southern California, USA; University of California at San Diego, USA) We study the problem of list-decodable (robust) Gaussian mean estimation and the related problem of learning mixtures of separated spherical Gaussians. In the former problem, we are given a set T of points in ℝn with the promise that an α-fraction of points in T, where 0< α < 1/2, are drawn from an unknown mean identity covariance Gaussian G, and no assumptions are made about the remaining points. The goal is to output a small list of candidate vectors with the guarantee that at least one of the candidates is close to the mean of G. In the latter problem, we are given samples from a k-mixture of spherical Gaussians on ℝn and the goal is to estimate the unknown model parameters up to small accuracy. We develop a set of techniques that yield new efficient algorithms with significantly improved guarantees for these problems. Specifically, our main contributions are as follows: List-Decodable Mean Estimation. Fix any d ∈ ℤ+ and 0< α <1/2. We design an algorithm with sample complexity Od(poly(nd/α)) and runtime Od((n/α)d) that outputs a list of O(1/α) many candidate vectors such that with high probability one of the candidates is within ℓ2-distance Od(α−1/(2d)) from the mean of G. The only previous algorithm for this problem achieved error Õ(α−1/2) under second moment conditions. For d = O(1/є), where є>0 is a constant, our algorithm runs in polynomial time and achieves error O(αє). For d = Θ(log(1/α)), our algorithm runs in time (n/α)O(log(1/α)) and achieves error O(log3/2(1/α)), almost matching the information-theoretically optimal bound of Θ(log1/2(1/α)) that we establish. We also give a Statistical Query (SQ) lower bound suggesting that the complexity of our algorithm is qualitatively close to best possible. Learning Mixtures of Spherical Gaussians. We give a learning algorithm for mixtures of spherical Gaussians, with unknown spherical covariances, that succeeds under significantly weaker separation assumptions compared to prior work. For the prototypical case of a uniform k-mixture of identity covariance Gaussians we obtain the following: For any є>0, if the pairwise separation between the means is at least Ω(kє+√log(1/δ)), our algorithm learns the unknown parameters within accuracy δ with sample complexity and running time poly(n, 1/δ, (k/є)1/є). Moreover, our algorithm is robust to a small dimension-independent fraction of corrupted data. The previously best known polynomial time algorithm required separation at least k1/4 polylog(k/δ). Finally, our algorithm works under separation of Õ(log3/2(k)+√log(1/δ)) with sample complexity and running time poly(n, 1/δ, klogk). This bound is close to the information-theoretically minimum separation of Ω(√logk). Our main technical contribution is a new technique, using degree-d multivariate polynomials, to remove outliers from high-dimensional datasets where the majority of the points are corrupted. @InProceedings{STOC18p1047, author = {Ilias Diakonikolas and Daniel M. Kane and Alistair Stewart}, title = {List-Decodable Robust Mean Estimation and Learning Mixtures of Spherical Gaussians}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1047--1060}, doi = {}, year = {2018}, } STOC '18: "Near-Optimal Linear Decision ..." Near-Optimal Linear Decision Trees for k-SUM and Related Problems Daniel M.
Kane, Shachar Lovett, and Shay Moran (University of California at San Diego, USA; Institute for Advanced Study at Princeton, USA) We construct near optimal linear decision trees for a variety of decision problems in combinatorics and discrete geometry. For example, for any constant k, we construct linear decision trees that solve the k-SUM problem on n elements using O(n log2 n) linear queries. Moreover, the queries we use are comparison queries, which compare the sums of two k-subsets; when viewed as linear queries, comparison queries are 2k-sparse and have only {−1,0,1} coefficients. We give similar constructions for sorting sumsets A+B and for solving the SUBSET-SUM problem, both with optimal number of queries, up to poly-logarithmic terms. Our constructions are based on the notion of “inference dimension”, recently introduced by the authors in the context of active classification with comparison queries. This can be viewed as another contribution to the fruitful link between machine learning and discrete geometry, which goes back to the discovery of the VC dimension. @InProceedings{STOC18p554, author = {Daniel M. Kane and Shachar Lovett and Shay Moran}, title = {Near-Optimal Linear Decision Trees for k-SUM and Related Problems}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {554--563}, doi = {}, year = {2018}, } STOC '18: "Learning Geometric Concepts ..." Learning Geometric Concepts with Nasty Noise Ilias Diakonikolas, Daniel M. Kane, and Alistair Stewart (University of Southern California, USA; University of California at San Diego, USA) We study the efficient learnability of geometric concept classes — specifically, low-degree polynomial threshold functions (PTFs) and intersections of halfspaces — when a fraction of the training data is adversarially corrupted. We give the first polynomial-time PAC learning algorithms for these concept classes with dimension-independent error guarantees in the presence of nasty noise under the Gaussian distribution. In the nasty noise model, an omniscient adversary can arbitrarily corrupt a small fraction of both the unlabeled data points and their labels. This model generalizes well-studied noise models, including the malicious noise model and the agnostic (adversarial label noise) model. Prior to our work, the only concept class for which efficient malicious learning algorithms were known was the class of origin-centered halfspaces. At the core of our results is an efficient algorithm to approximate the low-degree Chow-parameters of any bounded function in the presence of nasty noise. Our robust approximation algorithm for the Chow parameters provides near-optimal error guarantees for a range of distribution families satisfying mild concentration bounds and moment conditions. At the technical level, this algorithm employs an iterative “spectral” technique for outlier detection and removal inspired by recent work in robust unsupervised learning, which makes essential use of low-degree multivariate polynomials. Our robust learning algorithm for low-degree PTFs provides dimension-independent error guarantees for a class of tame distributions, including Gaussians and, more generally, any logconcave distribution with (approximately) known low-degree moments. For LTFs under the Gaussian distribution, using a refinement of the localization technique, we give a polynomial-time algorithm that achieves a near-optimal error of O(є), where є is the noise rate. 
Our robust learning algorithm for intersections of halfspaces proceeds by projecting down to an appropriate low-dimensional subspace. Its correctness makes essential use of a novel robust inverse independence lemma that is of independent interest. @InProceedings{STOC18p1061, author = {Ilias Diakonikolas and Daniel M. Kane and Alistair Stewart}, title = {Learning Geometric Concepts with Nasty Noise}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1061--1073}, doi = {}, year = {2018}, } STOC '18: "Testing Conditional Independence ..." Testing Conditional Independence of Discrete Distributions Clément L. Canonne, Ilias Diakonikolas, Daniel M. Kane, and Alistair Stewart (Stanford University, USA; University of Southern California, USA; University of California at San Diego, USA) We study the problem of testing *conditional independence* for discrete distributions. Specifically, given samples from a discrete random variable (X, Y, Z) on domain [ℓ1] × [ℓ2] × [n], we want to distinguish, with probability at least 2/3, the case that X and Y are conditionally independent given Z from the case that (X, Y, Z) is є-far, in ℓ1-distance, from every distribution that has this property. Conditional independence is a concept of central importance in probability and statistics with important applications in various scientific domains. As such, the statistical task of testing conditional independence has been extensively studied in various forms within the statistics and econometrics community for nearly a century. Perhaps surprisingly, this problem has not been previously considered in the framework of distribution property testing and in particular no tester with *sublinear* sample complexity is known, even for the important special case that the domains of X and Y are binary. The main algorithmic result of this work is the first conditional independence tester with sublinear sample complexity. To complement our upper bound, we prove information-theoretic lower bounds establishing that the sample complexity of our algorithm is optimal, up to constant factors, for a number of settings. Specifically, for the prototypical setting when ℓ1, ℓ2 = O(1), we show that the sample complexity of testing conditional independence (upper bound and matching lower bound) is O(max(√n/є2, min(n7/8/є, n6/7/є8/7))). We also achieve sublinear sample complexity for the general case of arbitrary ℓ1, ℓ2, and n. To obtain our tester, we employ a variety of tools, including (1) a subtle weighted adaptation of the “flattening” technique, and (2) the design and analysis of an optimal (unbiased) estimator for the following statistical problem of independent interest: Given a degree-d polynomial Q: ℝn → ℝ and sample access to a distribution p over [n], estimate Q(p1, …, pn) up to small additive error. Obtaining tight variance analyses for specific estimators of this form has been a major technical hurdle in distribution testing. As an important contribution of this work, we develop a general theory providing tight variance bounds for *all* such estimators. @InProceedings{STOC18p735, author = {Clément L. Canonne and Ilias Diakonikolas and Daniel M. Kane and Alistair Stewart}, title = {Testing Conditional Independence of Discrete Distributions}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {735--748}, doi = {}, year = {2018}, } |
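The comparison queries in the Kane–Lovett–Moran k-SUM entry above are simple to state in code. A minimal Python sketch (illustrative only; the adaptive choice of which query to ask, i.e., the actual decision tree, is the substance of the paper): a query compares the sums of two k-subsets and, viewed as a linear query on the input, is 2k-sparse with coefficients in {−1, 0, 1}.

def compare_sums(x, S, T):
    # Comparison query: sign of the sum over S minus the sum over T, for
    # two k-subsets S, T of indices. As a linear query c·x, the coefficient
    # vector c has at most 2k nonzero entries, each in {-1, 0, +1}.
    diff = sum(x[i] for i in S) - sum(x[j] for j in T)
    return (diff > 0) - (diff < 0)      # +1, 0, or -1

x = [3.5, -1.25, 2.0, -4.25, 0.5]
# Does x[0] + x[3] exceed x[1] + x[2]?  (k = 2, a 4-sparse query)
assert compare_sums(x, {0, 3}, {1, 2}) == -1   # -0.75 < 0.75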
|
Kang, Ning |
STOC '18: "How to Match When All Vertices ..."
How to Match When All Vertices Arrive Online
Zhiyi Huang, Ning Kang, Zhihao Gavin Tang, Xiaowei Wu, Yuhao Zhang, and Xue Zhu (University of Hong Kong, China) We introduce a fully online model of maximum cardinality matching in which all vertices arrive online. On the arrival of a vertex, its incident edges to previously-arrived vertices are revealed. Each vertex has a deadline that is after all its neighbors’ arrivals. If a vertex remains unmatched until its deadline, the algorithm must then irrevocably either match it to an unmatched neighbor, or leave it unmatched. The model generalizes the existing one-sided online model and is motivated by applications including ride-sharing platforms, real-estate agencies, etc. We show that the Ranking algorithm by Karp et al. (STOC 1990) is 0.5211-competitive in our fully online model for general graphs. Our analysis brings a novel charging mechanism into the randomized primal dual technique by Devanur et al. (SODA 2013), allowing a vertex other than the two endpoints of a matched edge to share the gain. To our knowledge, this is the first analysis of Ranking that beats 0.5 on general graphs in an online matching problem, a first step towards solving the open problem by Karp et al. (STOC 1990) about the optimality of Ranking on general graphs. If the graph is bipartite, we show that the competitive ratio of Ranking is between 0.5541 and 0.5671. Finally, we prove that the fully online model is strictly harder than the previous model: no online algorithm can be 0.6317-competitive in our model even for bipartite graphs, and 0.6317 < 1−1/e, the optimal competitive ratio in the one-sided model. @InProceedings{STOC18p17, author = {Zhiyi Huang and Ning Kang and Zhihao Gavin Tang and Xiaowei Wu and Yuhao Zhang and Xue Zhu}, title = {How to Match When All Vertices Arrive Online}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {17--29}, doi = {}, year = {2018}, } |
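One way to make Ranking in the fully online model concrete is the following Python sketch (our own reading of the model; the rank assignment and tie-breaking details are assumptions, not the paper's exact formulation): every vertex draws a uniformly random rank on arrival, and at its deadline a still-unmatched vertex is matched to its unmatched neighbor of smallest rank, if any.

import random

def fully_online_ranking(events, neighbors):
    # events: sequence of ('arrive', v) and ('deadline', v), where every
    # vertex's deadline comes after all its neighbors' arrivals.
    rank, match = {}, {}
    for kind, v in events:
        if kind == 'arrive':
            rank[v] = random.random()
        elif v not in match:            # deadline of an unmatched vertex
            free = [u for u in neighbors[v] if u in rank and u not in match]
            if free:
                u = min(free, key=rank.get)
                match[v], match[u] = u, v
    return match

# A path a-b-c whose middle vertex expires first, so b grabs a or c.
events = [('arrive', 'a'), ('arrive', 'b'), ('arrive', 'c'),
          ('deadline', 'b'), ('deadline', 'a'), ('deadline', 'c')]
print(fully_online_ranking(events, {'a': ['b'], 'b': ['a', 'c'], 'c': ['b']}))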
|
Karlin, Anna R. |
STOC '18: "A Simply Exponential Upper ..."
A Simply Exponential Upper Bound on the Maximum Number of Stable Matchings
Anna R. Karlin, Shayan Oveis Gharan, and Robbie Weber (University of Washington, USA) Stable matching is a classical combinatorial problem that has been the subject of intense theoretical and empirical study since its introduction in 1962 in a seminal paper by Gale and Shapley. In this paper, we provide a new upper bound on f(n), the maximum number of stable matchings that a stable matching instance with n men and n women can have. It has been a long-standing open problem to understand the asymptotic behavior of f(n) as n→∞, first posed by Donald Knuth in the 1970s. Until now the best lower bound was approximately 2.28n, and the best upper bound was 2nlogn− O(n). In this paper, we show that for all n, f(n) ≤ cn for some universal constant c. This matches the lower bound up to the base of the exponent. Our proof is based on a reduction to counting the number of downsets of a family of posets that we call “mixing”. The latter might be of independent interest. @InProceedings{STOC18p920, author = {Anna R. Karlin and Shayan Oveis Gharan and Robbie Weber}, title = {A Simply Exponential Upper Bound on the Maximum Number of Stable Matchings}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {920--925}, doi = {}, year = {2018}, } |
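For very small n the quantity f(n) can be checked by brute force, which makes the definition concrete (an illustration only; the enumeration takes n! time and is unrelated to the proof technique): a matching is stable iff no man and woman prefer each other to their assigned partners.

from itertools import permutations

def count_stable_matchings(men_pref, women_pref):
    # men_pref[m] and women_pref[w] list partners, most preferred first.
    n = len(men_pref)
    def blocking(wife):                 # wife[m] = woman matched to man m
        husband = {w: m for m, w in enumerate(wife)}
        for m in range(n):
            for w in men_pref[m]:
                if w == wife[m]:
                    break               # m prefers his wife to all later w
                if women_pref[w].index(m) < women_pref[w].index(husband[w]):
                    return True         # m and w prefer each other: unstable
        return False
    return sum(1 for wife in permutations(range(n)) if not blocking(wife))

# The classic n = 2 instance with cyclic preferences has two stable matchings.
assert count_stable_matchings([[0, 1], [1, 0]], [[1, 0], [0, 1]]) == 2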
|
Kaufman, Tali |
STOC '18: "Construction of New Local ..."
Construction of New Local Spectral High Dimensional Expanders
Tali Kaufman and Izhar Oppenheim (Bar-Ilan University, Israel; Ben-Gurion University of the Negev, Israel) High dimensional expansion is a vibrant emerging field of study. Nevertheless, the only known construction of bounded degree high dimensional expanders is based on Ramanujan complexes, whereas one dimensional bounded degree expanders are abundant. In this work we construct new families of bounded degree high dimensional expanders obeying the local spectral expansion property, a property that implies geometric overlapping, fast mixing of high dimensional random walks, agreement testing, and agreement expansion. The construction also yields new families of expander graphs. The construction is quite elementary and is presented in a self-contained manner, in contrast to the highly involved construction of the Ramanujan complexes. The construction is also strongly symmetric; the symmetry could be used, for example, to obtain good symmetric LDPC codes that were previously based on Ramanujan graphs. @InProceedings{STOC18p773, author = {Tali Kaufman and Izhar Oppenheim}, title = {Construction of New Local Spectral High Dimensional Expanders}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {773--786}, doi = {}, year = {2018}, } |
|
Kell, Nathaniel |
STOC '18: "Online Load Balancing on Related ..."
Online Load Balancing on Related Machines
Sungjin Im, Nathaniel Kell, Debmalya Panigrahi, and Maryam Shadloo (University of California at Merced, USA; Duke University, USA) In this paper, we consider the problem of assigning jobs online to machines with non-uniform speeds (also called related machines) so as to optimize a given norm of the machine loads. A long line of work, starting with the seminal work of Graham in the 1960s, has led to tight competitive ratios for all ℓq norms for two scenarios: the special case of identical machines (uniform machine speeds) and the more general setting of unrelated machines (jobs have arbitrary processing times on machines). For non-uniform machine speeds, however, the only known result was a constant competitive ratio for the makespan (ℓ∞) norm, via the so-called slowest-fit algorithm (Aspnes, Azar, Fiat, Plotkin, and Waarts, JACM ’97). Our first result in this paper is the first constant-competitive algorithm for scheduling on related machines for an arbitrary ℓq norm. Recent literature has further expanded the scope of this problem to vector scheduling, to capture multi-dimensional resource requirements in applications such as data centers. As in the scalar case, tight bounds are known for vector scheduling on identical and unrelated machines. Our second set of results gives tight competitive ratios for vector scheduling on related machines for the makespan and all ℓq norms. No previous bounds were known, even for the makespan norm, for related machines. We employ a convex relaxation of the ℓq-norm objective and use a continuous greedy algorithm to solve this convex program online. To round the fractional solution, we then use a novel restructuring of the instance that we call machine smoothing. This is a generic tool that reduces a problem on related machines to a set of problem instances on identical machines, and we hope it will be useful in other settings with non-uniform machine speeds as well. @InProceedings{STOC18p30, author = {Sungjin Im and Nathaniel Kell and Debmalya Panigrahi and Maryam Shadloo}, title = {Online Load Balancing on Related Machines}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {30--43}, doi = {}, year = {2018}, } |
|
Kempa, Dominik |
STOC '18: "At the Roots of Dictionary ..."
At the Roots of Dictionary Compression: String Attractors
Dominik Kempa and Nicola Prezza (University of Helsinki, Finland; DTU, Denmark; University of Pisa, Italy) A well-known fact in the field of lossless text compression is that high-order entropy is a weak model when the input contains long repetitions. Motivated by this fact, decades of research have generated myriads of so-called dictionary compressors: algorithms able to reduce the text’s size by exploiting its repetitiveness. Lempel-Ziv 77 is one of the most successful and well-known tools of this kind, followed by straight-line programs, run-length Burrows-Wheeler transform, macro schemes, collage systems, and the compact directed acyclic word graph. In this paper, we show that these techniques are different solutions to the same, elegant, combinatorial problem: to find a small set of positions capturing all distinct substrings of the text. We call such a set a string attractor. We first show reductions between dictionary compressors and string attractors. This gives the approximation ratios of dictionary compressors with respect to the smallest string attractor and allows us to uncover new asymptotic relations between the output sizes of different dictionary compressors. We then show that the k-attractor problem — deciding whether a text has a size-t set of positions capturing all substrings of length at most k — is NP-complete for k≥ 3. This, in particular, includes the full string attractor problem. We provide several approximation techniques for the smallest k-attractor, show that the problem is APX-complete for constant k, and give strong inapproximability results. To conclude, we provide matching lower and upper bounds for the random access problem on string attractors. The upper bound is proved by showing a data structure supporting queries in optimal time. Our data structure is universal: by our reductions to string attractors, it supports random access on any dictionary-compression scheme. In particular, it matches the lower bound also on LZ77, straight-line programs, collage systems, and macro schemes, and therefore essentially closes (at once) the random access problem for all these compressors. @InProceedings{STOC18p827, author = {Dominik Kempa and Nicola Prezza}, title = {At the Roots of Dictionary Compression: String Attractors}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {827--840}, doi = {}, year = {2018}, } |
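The defining property of a k-attractor is easy to verify directly, which may help parse the definitions above. A minimal Python sketch (quadratic time, purely illustrative; not the paper's algorithms): a set Γ of positions is a k-attractor of T iff every distinct substring of length at most k has an occurrence crossing a position of Γ.

def is_k_attractor(text, gamma, k):
    # gamma is a set of positions; every distinct substring text[i:i+m]
    # with m <= k needs some occurrence with a j in gamma, i <= j < i+m.
    n, gamma = len(text), set(gamma)
    for m in range(1, k + 1):
        all_subs = {text[i:i + m] for i in range(n - m + 1)}
        covered = {text[i:i + m] for i in range(n - m + 1)
                   if any(i <= j < i + m for j in gamma)}
        if all_subs - covered:
            return False
    return True

t = "abababab"
assert is_k_attractor(t, {2, 3}, k=3)
assert not is_k_attractor(t, {0}, k=2)   # no occurrence of 'ba' crosses 0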
|
Khot, Subhash |
STOC '18: "Towards a Proof of the 2-to-1 ..."
Towards a Proof of the 2-to-1 Games Conjecture?
Irit Dinur, Subhash Khot, Guy Kindler, Dor Minzer, and Muli Safra (Weizmann Institute of Science, Israel; New York University, USA; Hebrew University of Jerusalem, Israel; Tel Aviv University, Israel) We present a polynomial time reduction from gap-3LIN to label cover with 2-to-1 constraints. In the “yes” case the fraction of satisfied constraints is at least 1−ε, and in the “no” case we show that this fraction is at most ε, assuming a certain (new) combinatorial hypothesis on the Grassmann graph. In other words, we describe a combinatorial hypothesis that implies the 2-to-1 conjecture with imperfect completeness. The companion submitted paper [Dinur, Khot, Kindler, Minzer and Safra, STOC 2018] makes some progress towards proving this hypothesis. Our work builds on earlier work by a subset of the authors [Khot, Minzer and Safra, STOC 2017] where a slightly different hypothesis was used to obtain hardness of approximating vertex cover to within a factor of √2−ε. The most important implication of this work is (assuming the hypothesis) an NP-hardness gap of 1/2−ε vs. ε for unique games. In addition, we derive optimal NP-hardness for approximating the max-cut-gain problem, NP-hardness of coloring an almost 4-colorable graph with any constant number of colors, and the same √2−ε NP-hardness for approximate vertex cover that was already obtained based on a slightly different hypothesis. Recent progress towards proving our hypothesis [Barak, Kothari and Steurer, ECCC TR18-077], [Dinur, Khot, Kindler, Minzer and Safra, STOC 2018] directly implies some new unconditional NP-hardness results. These include new points of NP-hardness for unique games and for 2-to-1 and 2-to-2 games. More recently, the full version of our hypothesis was proven [Khot, Minzer and Safra, ECCC TR18-006]. @InProceedings{STOC18p376, author = {Irit Dinur and Subhash Khot and Guy Kindler and Dor Minzer and Muli Safra}, title = {Towards a Proof of the 2-to-1 Games Conjecture?}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {376--389}, doi = {}, year = {2018}, } STOC '18: "On Non-optimally Expanding ..." On Non-optimally Expanding Sets in Grassmann Graphs Irit Dinur, Subhash Khot, Guy Kindler, Dor Minzer, and Muli Safra (Weizmann Institute of Science, Israel; New York University, USA; Hebrew University of Jerusalem, Israel; Tel Aviv University, Israel) We study the structure of non-expanding sets in the Grassmann graph. We put forth a hypothesis stating that every small set whose expansion is smaller than 1−δ must be correlated with one of a specified list of sets which are isomorphic to smaller Grassmann graphs. We develop a framework of Fourier analysis for analyzing functions over the Grassmann graph, and prove that our hypothesis holds for all sets whose expansion is below 7/8. In the companion submitted paper [Dinur, Khot, Kindler, Minzer and Safra, STOC 2018], the authors show that a linearity agreement hypothesis implies an NP-hardness gap of 1/2−ε vs. ε for unique games and other inapproximability results. In [Barak, Kothari and Steurer, ECCC TR18-077], the authors show that the hypothesis in this work implies the linearity agreement hypothesis of [Dinur, Khot, Kindler, Minzer and Safra, STOC 2018]. Combined with our main theorem here, this proves a version of the linearity agreement hypothesis with certain specific parameters. Short of proving the entire hypothesis, this nevertheless suffices for getting new unconditional NP-hardness gaps for label cover with 2-to-1 and unique constraints.
Our Expansion Hypothesis has been subsequently proved in its full form [Khot, Minzer and Safra, ECCC TR18-006], thereby proving the agreement hypothesis of [Dinur, Khot, Kindler, Minzer and Safra, STOC 2018] and completing the proof of the 2-to-1 Games Conjecture (albeit with imperfect completeness). @InProceedings{STOC18p940, author = {Irit Dinur and Subhash Khot and Guy Kindler and Dor Minzer and Muli Safra}, title = {On Non-optimally Expanding Sets in Grassmann Graphs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {940--951}, doi = {}, year = {2018}, } |
|
Khurana, Dakshita |
STOC '18: "Succinct Delegation for Low-Space ..."
Succinct Delegation for Low-Space Non-deterministic Computation
Saikrishna Badrinarayanan, Yael Tauman Kalai, Dakshita Khurana, Amit Sahai, and Daniel Wichs (University of California at Los Angeles, USA; Microsoft Research, USA; Massachusetts Institute of Technology, USA; Northeastern University, USA) We construct a delegation scheme for verifying non-deterministic computations, with complexity proportional only to the non-deterministic space of the computation. Specifically, letting n denote the input length, we construct a delegation scheme for any language verifiable in non-deterministic time and space (T(n), S(n)) with communication complexity poly(S(n)), verifier runtime n·polylog(T(n))+poly(S(n)), and prover runtime poly(T(n)). Our scheme consists of only two messages and has adaptive soundness, assuming the existence of a sub-exponentially secure private information retrieval (PIR) scheme, which can be instantiated under standard (albeit, sub-exponential) cryptographic assumptions, such as the sub-exponential LWE assumption. Specifically, the verifier publishes a (short) public key ahead of time, and this key can be used by any prover to non-interactively prove the correctness of any adaptively chosen non-deterministic computation. Such a scheme is referred to as a non-interactive delegation scheme. Our scheme is privately verifiable, where the verifier needs the corresponding secret key in order to verify proofs. Prior to our work, such results were known only in the Random Oracle Model, or under knowledge assumptions. Our results yield succinct non-interactive arguments based on sub-exponential LWE, for many natural languages believed to be outside of P. @InProceedings{STOC18p709, author = {Saikrishna Badrinarayanan and Yael Tauman Kalai and Dakshita Khurana and Amit Sahai and Daniel Wichs}, title = {Succinct Delegation for Low-Space Non-deterministic Computation}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {709--721}, doi = {}, year = {2018}, } |
|
Kim, David H. K. |
STOC '18: "Almost Polynomial Hardness ..."
Almost Polynomial Hardness of Node-Disjoint Paths in Grids
Julia Chuzhoy, David H. K. Kim, and Rachit Nimavat (Toyota Technological Institute at Chicago, USA; University of Chicago, USA) In the classical Node-Disjoint Paths (NDP) problem, we are given an n-vertex graph G=(V,E), and a collection M={(s1,t1),…,(sk,tk)} of pairs of its vertices, called source-destination, or demand pairs. The goal is to route as many of the demand pairs as possible, where to route a pair we need to select a path connecting it, so that all selected paths are disjoint in their vertices. The best current algorithm for NDP achieves an O(√n)-approximation, while, until recently, the best negative result was a factor Ω(log1/2−єn)-hardness of approximation, for any constant є, unless NP ⊆ ZPTIME(npoly logn). In a recent work, the authors have shown an improved 2Ω(√logn)-hardness of approximation for NDP, unless NP ⊆ DTIME(nO(logn)), even if the underlying graph is a subgraph of a grid graph, and all source vertices lie on the boundary of the grid. Unfortunately, this result does not extend to grid graphs. The approximability of the NDP problem on grid graphs has remained a tantalizing open question, with the best current upper bound of Õ(n1/4), and the best current lower bound of APX-hardness. In a recent work, the authors showed a 2Õ(√logn)-approximation algorithm for NDP in grid graphs, if all source vertices lie on the boundary of the grid – a result that can be seen as suggesting that a sub-polynomial approximation may be achievable for NDP in grids. In this paper we show that this is unlikely to be the case, and come close to resolving the approximability of NDP in general, and of NDP in grids in particular. Our main result is that NDP is 2Ω(log1−є n)-hard to approximate for any constant є, assuming that NP ⊈ RTIME(npoly logn), and that it is nΩ(1/(loglogn)2)-hard to approximate, assuming that for some constant δ>0, NP ⊈ RTIME(2nδ). These results hold even for grid graphs and wall graphs, and extend to the closely related Edge-Disjoint Paths problem, even in wall graphs. Our hardness proof performs a reduction from the 3COL(5) problem to NDP, using a new graph partitioning problem as a proxy. Unlike the more standard approach of employing Karp reductions to prove hardness of approximation, our proof is a Cook-type reduction, where, given an input instance of 3COL(5), we produce a large number of instances of NDP, and apply an approximation algorithm for NDP to each of them. The construction of each new instance of NDP crucially depends on the solutions to the previous instances that were found by the approximation algorithm. @InProceedings{STOC18p1220, author = {Julia Chuzhoy and David H. K. Kim and Rachit Nimavat}, title = {Almost Polynomial Hardness of Node-Disjoint Paths in Grids}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1220--1233}, doi = {}, year = {2018}, } |
|
Kindler, Guy |
STOC '18: "Towards a Proof of the 2-to-1 ..."
Towards a Proof of the 2-to-1 Games Conjecture?
Irit Dinur, Subhash Khot, Guy Kindler, Dor Minzer, and Muli Safra (Weizmann Institute of Science, Israel; New York University, USA; Hebrew University of Jerusalem, Israel; Tel Aviv University, Israel) We present a polynomial time reduction from gap-3LIN to label cover with 2-to-1 constraints. In the “yes” case the fraction of satisfied constraints is at least 1−ε, and in the “no” case we show that this fraction is at most ε, assuming a certain (new) combinatorial hypothesis on the Grassmann graph. In other words, we describe a combinatorial hypothesis that implies the 2-to-1 conjecture with imperfect completeness. The companion submitted paper [Dinur, Khot, Kindler, Minzer and Safra, STOC 2018] makes some progress towards proving this hypothesis. Our work builds on earlier work by a subset of the authors [Khot, Minzer and Safra, STOC 2017] where a slightly different hypothesis was used to obtain hardness of approximating vertex cover to within a factor of √2−ε. The most important implication of this work is (assuming the hypothesis) an NP-hardness gap of 1/2−ε vs. ε for unique games. In addition, we derive optimal NP-hardness for approximating the max-cut-gain problem, NP-hardness of coloring an almost 4-colorable graph with any constant number of colors, and the same √2−ε NP-hardness for approximate vertex cover that was already obtained based on a slightly different hypothesis. Recent progress towards proving our hypothesis [Barak, Kothari and Steurer, ECCC TR18-077], [Dinur, Khot, Kindler, Minzer and Safra, STOC 2018] directly implies some new unconditional NP-hardness results. These include new points of NP-hardness for unique games and for 2-to-1 and 2-to-2 games. More recently, the full version of our hypothesis was proven [Khot, Minzer and Safra, ECCC TR18-006]. @InProceedings{STOC18p376, author = {Irit Dinur and Subhash Khot and Guy Kindler and Dor Minzer and Muli Safra}, title = {Towards a Proof of the 2-to-1 Games Conjecture?}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {376--389}, doi = {}, year = {2018}, } STOC '18: "On Non-optimally Expanding ..." On Non-optimally Expanding Sets in Grassmann Graphs Irit Dinur, Subhash Khot, Guy Kindler, Dor Minzer, and Muli Safra (Weizmann Institute of Science, Israel; New York University, USA; Hebrew University of Jerusalem, Israel; Tel Aviv University, Israel) We study the structure of non-expanding sets in the Grassmann graph. We put forth a hypothesis stating that every small set whose expansion is smaller than 1−δ must be correlated with one of a specified list of sets which are isomorphic to smaller Grassmann graphs. We develop a framework of Fourier analysis for analyzing functions over the Grassmann graph, and prove that our hypothesis holds for all sets whose expansion is below 7/8. In the companion submitted paper [Dinur, Khot, Kindler, Minzer and Safra, STOC 2018], the authors show that a linearity agreement hypothesis implies an NP-hardness gap of 1/2−ε vs. ε for unique games and other inapproximability results. In [Barak, Kothari and Steurer, ECCC TR18-077], the authors show that the hypothesis in this work implies the linearity agreement hypothesis of [Dinur, Khot, Kindler, Minzer and Safra, STOC 2018]. Combined with our main theorem here, this proves a version of the linearity agreement hypothesis with certain specific parameters. Short of proving the entire hypothesis, this nevertheless suffices for getting new unconditional NP-hardness gaps for label cover with 2-to-1 and unique constraints.
Our Expansion Hypothesis has been subsequently proved in its full form [Khot, Minzer and Safra, ECCC TR18-006], thereby proving the agreement hypothesis of [Dinur, Khot, Kindler, Minzer and Safra, STOC 2018] and completing the proof of the 2-to-1 Games Conjecture (albeit with imperfect completeness). @InProceedings{STOC18p940, author = {Irit Dinur and Subhash Khot and Guy Kindler and Dor Minzer and Muli Safra}, title = {On Non-optimally Expanding Sets in Grassmann Graphs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {940--951}, doi = {}, year = {2018}, } |
|
Kisfaludi-Bak, Sándor |
STOC '18: "A Framework for ETH-Tight ..."
A Framework for ETH-Tight Algorithms and Lower Bounds in Geometric Intersection Graphs
Mark de Berg, Hans L. Bodlaender, Sándor Kisfaludi-Bak, Dániel Marx, and Tom C. van der Zanden (Eindhoven University of Technology, Netherlands; Utrecht University, Netherlands; Hungarian Academy of Sciences, Hungary) We give an algorithmic and lower-bound framework that facilitates the construction of subexponential algorithms and matching conditional complexity bounds. It can be applied to a wide range of geometric intersection graphs (intersections of similarly sized fat objects), yielding algorithms with running time 2O(n1−1/d) for any fixed dimension d≥ 2 for many well known graph problems, including Independent Set, r-Dominating Set for constant r, and Steiner Tree. For most problems, we get improved running times compared to prior work; in some cases, we give the first known subexponential algorithm in geometric intersection graphs. Additionally, most of the obtained algorithms work on the graph itself, i.e., do not require any geometric information. Our algorithmic framework is based on a weighted separator theorem and various treewidth techniques. The lower-bound framework is based on a constructive embedding of graphs into d-dimensional grids, and it allows us to derive matching 2Ω(n1−1/d) lower bounds under the Exponential Time Hypothesis even in the much more restricted class of d-dimensional induced grid graphs. @InProceedings{STOC18p574, author = {Mark de Berg and Hans L. Bodlaender and Sándor Kisfaludi-Bak and Dániel Marx and Tom C. van der Zanden}, title = {A Framework for ETH-Tight Algorithms and Lower Bounds in Geometric Intersection Graphs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {574--586}, doi = {}, year = {2018}, } |
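The simplest instance of the input model above, intersection graphs of similarly sized fat objects, is the unit-disk graph, sketched here in Python for concreteness (our own toy construction, not code from the paper); the framework's point is that such graphs admit balanced separators of weight O(n1−1/d), i.e., O(√n) in the plane.

from math import dist  # Python 3.8+

def unit_disk_graph(centers):
    # Intersection graph of unit-radius disks: i and j are adjacent
    # iff their centers lie at distance at most 2.
    n = len(centers)
    return {(i, j) for i in range(n) for j in range(i + 1, n)
            if dist(centers[i], centers[j]) <= 2.0}

# A 4x4 grid with spacing 1.5: orthogonal neighbors intersect, while the
# diagonal distance (about 2.12) does not, so we get the grid graph.
pts = [(1.5 * a, 1.5 * b) for a in range(4) for b in range(4)]
print(len(unit_disk_graph(pts)))   # -> 24 edges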
|
Kol, Gillat |
STOC '18: "Interactive Coding over the ..."
Interactive Coding over the Noisy Broadcast Channel
Klim Efremenko, Gillat Kol, and Raghuvansh Saxena (Ben-Gurion University of the Negev, Israel; Princeton University, USA) A set of n players, each holding a private input bit, communicate over a noisy broadcast channel. Their mutual goal is for all players to learn all inputs. At each round one of the players broadcasts a bit to all the other players, and the bit received by each player is flipped with a fixed constant probability (independently for each recipient). How many rounds are needed? This problem was first suggested by El Gamal in 1984. In 1988, Gallager gave an elegant noise-resistant protocol requiring only O(n loglogn) rounds. The problem got resolved in 2005 by a seminal paper of Goyal, Kindler, and Saks, proving that Gallager’s protocol is essentially optimal. We revisit the above noisy broadcast problem and show that O(n) rounds suffice. This is possible due to a relaxation of the model assumed by the previous works. We no longer demand that exactly one player broadcasts in every round, but rather allow any number of players to broadcast. However, if it is not the case that exactly one player chooses to broadcast, each of the other players gets an adversarially chosen bit. We generalize the above result and initiate the study of interactive coding over the noisy broadcast channel. We show that any interactive protocol that works over the noiseless broadcast channel can be simulated over our restrictive noisy broadcast model with constant blowup of the communication. Our results also establish that modern techniques for interactive coding can help us make progress on classical problems. @InProceedings{STOC18p507, author = {Klim Efremenko and Gillat Kol and Raghuvansh Saxena}, title = {Interactive Coding over the Noisy Broadcast Channel}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {507--520}, doi = {}, year = {2018}, } STOC '18: "Interactive Compression to ..." Interactive Compression to External Information Mark Braverman and Gillat Kol (Princeton University, USA) We describe a new way of compressing two-party communication protocols to get protocols with potentially smaller communication. We show that every communication protocol that communicates C bits and reveals I bits of information about the participants’ private inputs to an observer that watches the communication, can be simulated by a new protocol that communicates at most poly(I) · loglog(C) bits. Our result is tight up to polynomial factors, as it matches the recent work separating communication complexity from external information cost. @InProceedings{STOC18p964, author = {Mark Braverman and Gillat Kol}, title = {Interactive Compression to External Information}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {964--977}, doi = {}, year = {2018}, } |
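The noisy broadcast channel in the Efremenko–Kol–Saxena entry is easy to simulate, which makes the model concrete (a naive Python baseline, ours and not the paper's protocol; repetition plus majority voting is exactly the kind of logarithmic-overhead coding that the O(n)-round result improves on):

import random

def broadcast(bit, n_receivers, eps, rng):
    # One noisy broadcast: each receiver independently sees the bit
    # flipped with probability eps.
    return [bit ^ (rng.random() < eps) for _ in range(n_receivers)]

def broadcast_with_repetition(bit, n_receivers, eps, reps, rng):
    # Naive coding: repeat the broadcast; each receiver majority-votes
    # over its own received copies.
    copies = [broadcast(bit, n_receivers, eps, rng) for _ in range(reps)]
    return [int(2 * sum(c[i] for c in copies) > reps)
            for i in range(n_receivers)]

rng = random.Random(0)
out = broadcast_with_repetition(1, n_receivers=1000, eps=0.1, reps=25, rng=rng)
print(sum(out), "of 1000 receivers decoded the bit correctly")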
|
Komarath, Balagopal |
STOC '18: "On the Complexity of Hazard-Free ..."
On the Complexity of Hazard-Free Circuits
Christian Ikenmeyer, Balagopal Komarath, Christoph Lenzen, Vladimir Lysikov, Andrey Mokhov, and Karteek Sreenivasaiah (Max Planck Institute for Informatics, Germany; Saarland University, Germany; M2CI, Germany; Newcastle University, UK; IIT Hyderabad, India) The problem of constructing hazard-free Boolean circuits dates back to the 1940s and is an important problem in circuit design. Our main lower-bound result unconditionally shows the existence of functions whose circuit complexity is polynomially bounded while every hazard-free implementation is provably of exponential size. Previous lower bounds on the hazard-free complexity were only valid for depth 2 circuits. The same proof method yields that every subcubic implementation of Boolean matrix multiplication must have hazards. These results follow from a crucial structural insight: Hazard-free complexity is a natural generalization of monotone complexity to all (not necessarily monotone) Boolean functions. Thus, we can apply known monotone complexity lower bounds to find lower bounds on the hazard-free complexity. We also lift these methods from the monotone setting to prove exponential hazard-free complexity lower bounds for non-monotone functions. As our main upper-bound result we show how to efficiently convert a Boolean circuit into a bounded-bit hazard-free circuit with only a polynomially large blow-up in the number of gates. Previously, the best known method yielded exponentially large circuits in the worst case, so our algorithm gives an exponential improvement. As a side result we establish the NP-completeness of several hazard detection problems. @InProceedings{STOC18p878, author = {Christian Ikenmeyer and Balagopal Komarath and Christoph Lenzen and Vladimir Lysikov and Andrey Mokhov and Karteek Sreenivasaiah}, title = {On the Complexity of Hazard-Free Circuits}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {878--889}, doi = {}, year = {2018}, } |
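To make the notion of a hazard concrete, here is the standard textbook example (our own code, not a construction from the paper): evaluate the circuit in Kleene's three-valued logic with an extra value u for an unstable input; a hazard occurs when the circuit outputs u even though every 0/1 completion of the unstable inputs gives the same answer. The 2-to-1 multiplexer exhibits one.

    from itertools import product

    U = "u"  # unstable / transitioning value

    def AND(a, b):
        if a == 0 or b == 0: return 0
        if a == 1 and b == 1: return 1
        return U

    def OR(a, b):
        if a == 1 or b == 1: return 1
        if a == 0 and b == 0: return 0
        return U

    def NOT(a):
        return U if a == U else 1 - a

    def mux(s, a, b):
        # standard multiplexer: output a if s = 1, else b
        return OR(AND(s, a), AND(NOT(s), b))

    def has_hazard(circuit, inputs):
        # Hazard: the three-valued evaluation says U although all Boolean
        # completions of the U-positions agree on the output.
        if circuit(*inputs) != U:
            return False
        spots = [i for i, v in enumerate(inputs) if v == U]
        outs = set()
        for bits in product([0, 1], repeat=len(spots)):
            filled = list(inputs)
            for i, b in zip(spots, bits):
                filled[i] = b
            outs.add(circuit(*filled))
        return len(outs) == 1  # stable function value, unstable circuit output

    print(has_hazard(mux, (U, 1, 1)))  # True: s toggling with a = b = 1 glitches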
|
Koppula, Venkata |
STOC '18: "Collusion Resistant Traitor ..."
Collusion Resistant Traitor Tracing from Learning with Errors
Rishab Goyal, Venkata Koppula, and Brent Waters (University of Texas at Austin, USA) In this work we provide a traitor tracing construction with ciphertexts that grow polynomially in log(n), where n is the number of users, and prove it secure under the Learning with Errors (LWE) assumption. This is the first traitor tracing scheme with such parameters provably secure from a standard assumption. In addition to achieving new traitor tracing results, we believe our techniques push forward the broader area of computing on encrypted data under standard assumptions. Notably, traitor tracing is a substantially different problem from other cryptographic primitives that have seen recent progress in LWE solutions. We achieve our results by first conceiving a novel approach to building traitor tracing that starts with a new form of Functional Encryption that we call Mixed FE. In a Mixed FE system the encryption algorithm is bimodal and works with either a public key or a master secret key. Ciphertexts encrypted using the public key can only encrypt one type of functionality. On the other hand, the secret-key encryption can be used to encode many different types of programs, but is only secure as long as the attacker sees a bounded number of such ciphertexts. We first show how to combine Mixed FE with Attribute-Based Encryption to achieve traitor tracing. Second, we build Mixed FE systems for polynomial-sized branching programs (which correspond to the complexity class logspace) by relying on the polynomial hardness of the LWE assumption with super-polynomial modulus-to-noise ratio. @InProceedings{STOC18p660, author = {Rishab Goyal and Venkata Koppula and Brent Waters}, title = {Collusion Resistant Traitor Tracing from Learning with Errors}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {660--670}, doi = {}, year = {2018}, } |
|
Korhonen, Janne H. |
STOC '18: "New Classes of Distributed ..."
New Classes of Distributed Time Complexity
Alkida Balliu, Juho Hirvonen, Janne H. Korhonen, Tuomo Lempiäinen, Dennis Olivetti, and Jukka Suomela (Aalto University, Finland; University of Freiburg, Germany) A number of recent papers – e.g. Brandt et al. (STOC 2016), Chang et al. (FOCS 2016), Ghaffari & Su (SODA 2017), Brandt et al. (PODC 2017), and Chang & Pettie (FOCS 2017) – have advanced our understanding of one of the most fundamental questions in the theory of distributed computing: what are the possible time complexity classes of LCL problems in the LOCAL model? In essence, we have a graph problem Π in which a solution can be verified by checking all radius-O(1) neighbourhoods, and the question is what is the smallest T such that a solution can be computed so that each node chooses its own output based on its radius-T neighbourhood. Here T is the distributed time complexity of Π. The time complexity classes for deterministic algorithms in bounded-degree graphs that are known to exist by prior work are Θ(1), Θ(log* n), Θ(log n), Θ(n^{1/k}), and Θ(n). It is also known that there are two gaps: one between ω(1) and o(log log* n), and another between ω(log* n) and o(log n). It has been conjectured that many more gaps exist, and that the overall time hierarchy is relatively simple – indeed, this is known to be the case in restricted graph families such as cycles and grids. We show that the picture is much more diverse than previously expected. We present a general technique for engineering LCL problems with numerous different deterministic time complexities, including Θ(log^α n) for any α ≥ 1, 2^{Θ(log^α n)} for any α ≤ 1, and Θ(n^α) for any α < 1/2 in the high end of the complexity spectrum, and Θ(log^α log* n) for any α ≥ 1, 2^{Θ(log^α log* n)} for any α ≤ 1, and Θ((log* n)^α) for any α ≤ 1 in the low end of the complexity spectrum; here α is a positive rational number. @InProceedings{STOC18p1307, author = {Alkida Balliu and Juho Hirvonen and Janne H. Korhonen and Tuomo Lempiäinen and Dennis Olivetti and Jukka Suomela}, title = {New Classes of Distributed Time Complexity}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1307--1318}, doi = {}, year = {2018}, } |
|
Kosowski, Adrian |
STOC '18: "Universal Protocols for Information ..."
Universal Protocols for Information Dissemination using Emergent Signals
Bartłomiej Dudek and Adrian Kosowski (University of Wrocław, Poland; Inria, France) We consider a population of n agents which communicate with each other in a decentralized manner, through random pairwise interactions. One or more agents in the population may act as authoritative sources of information, and the objective of the remaining agents is to obtain information from or about these source agents. We study two basic tasks: broadcasting, in which the agents are to learn the bit-state of an authoritative source which is present in the population, and source detection, in which the agents are required to decide if at least one source agent is present in the population or not. We focus on designing protocols which meet two natural conditions: (1) universality, i.e., independence of population size, and (2) rapid convergence to a correct global state after a reconfiguration, such as a change in the state of a source agent. Our main positive result is to show that both of these constraints can be met. For both the broadcasting problem and the source detection problem, we obtain solutions with an expected convergence time of O(log n), from any starting configuration. The solution to broadcasting is exact, which means that all agents reach the state broadcast by the source, while the solution to source detection admits one-sided error on an ε-fraction of the population (which is unavoidable for this problem). Both protocols are easy to implement in practice and are self-stabilizing, in the sense that the stated bounds on convergence time hold starting from any possible initial configuration of the system. Our protocols exploit the properties of self-organizing oscillatory dynamics. On the hardness side, our main structural insight is to prove that any protocol which meets the constraints of universality and of rapid convergence after reconfiguration must display a form of non-stationary behavior (of which oscillatory dynamics are an example). We also observe that the periodicity of the oscillatory behavior of the protocol, when present, must necessarily depend on the number #X of source agents present in the population. For instance, our protocols inherently rely on the emergence of a signal passing through the population, whose period is Θ(log(n/#X)) rounds for most starting configurations. The design of phase clocks with tunable frequency may be of independent interest, notably in modeling biological networks. @InProceedings{STOC18p87, author = {Bartłomiej Dudek and Adrian Kosowski}, title = {Universal Protocols for Information Dissemination using Emergent Signals}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {87--99}, doi = {}, year = {2018}, } |
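As a minimal illustration of the interaction model (random pairwise meetings, one authoritative source), the following toy simulation shows the O(log n)-round epidemic spreading that underlies broadcasting; it is plain one-way infection, not the authors' self-stabilizing oscillatory protocol, and all names are ours.

    import math, random

    def broadcast_time(n, rng=random):
        # Rounds of n random pairwise meetings until all n agents
        # know the source's bit. Agent 0 is the source.
        informed = [False] * n
        informed[0] = True
        rounds = 0
        while not all(informed):
            rounds += 1
            for _ in range(n):  # one "parallel round" = n random meetings
                a, b = rng.randrange(n), rng.randrange(n)
                if informed[a] or informed[b]:
                    informed[a] = informed[b] = True
        return rounds

    n = 1000
    trials = [broadcast_time(n) for _ in range(20)]
    print(sum(trials) / len(trials), "rounds on average; log2(n) =", math.log2(n))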
|
Kothari, Pravesh K. |
STOC '18: "Robust Moment Estimation and ..."
Robust Moment Estimation and Improved Clustering via Sum of Squares
Pravesh K. Kothari, Jacob Steinhardt, and David Steurer (Princeton University, USA; Institute for Advanced Study at Princeton, USA; Stanford University, USA; ETH Zurich, Switzerland) We develop efficient algorithms for estimating low-degree moments of unknown distributions in the presence of adversarial outliers and design a new family of convex relaxations for k-means clustering based on the sum-of-squares method. As an immediate corollary, for any γ > 0, we obtain an efficient algorithm for learning the means of a mixture of k arbitrary distributions in ℝ^d in time d^{O(1/γ)} so long as the means have separation Ω(k^γ). This in particular yields an algorithm for learning Gaussian mixtures with separation Ω(k^γ), thus partially resolving an open problem of Regev and Vijayaraghavan (2017). The guarantees of our robust estimation algorithms improve, in many cases significantly, over the best previous ones, obtained in several recent works. We also show that the guarantees of our algorithms match information-theoretic lower bounds for the class of distributions we consider. These improved guarantees allow us to give improved algorithms for independent component analysis and learning mixtures of Gaussians in the presence of outliers. We also show a sharp upper bound on the sum-of-squares norms for moment tensors of any distribution that satisfies the Poincaré inequality. The Poincaré inequality is a central inequality in probability theory, and a large class of distributions satisfy it, including Gaussians, product distributions, strongly log-concave distributions, and any sum or uniformly continuous transformation of such distributions. As a consequence, this yields that all of the above algorithmic improvements hold for distributions satisfying the Poincaré inequality. @InProceedings{STOC18p1035, author = {Pravesh K. Kothari and Jacob Steinhardt and David Steurer}, title = {Robust Moment Estimation and Improved Clustering via Sum of Squares}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1035--1046}, doi = {}, year = {2018}, } STOC '18: "Sum-of-Squares Meets Nash: ..." Sum-of-Squares Meets Nash: Lower Bounds for Finding Any Equilibrium Pravesh K. Kothari and Ruta Mehta (Princeton University, USA; Institute for Advanced Study at Princeton, USA; University of Illinois at Urbana-Champaign, USA) Computing a Nash equilibrium (NE) in a two-player game is a central question in algorithmic game theory. The main motivation of this work is to understand the power of the sum-of-squares method in computing equilibria, both exact and approximate. Previous works in this context have focused on hardness of approximating “best” equilibria with respect to some natural quality measure on equilibria such as social welfare. Such results, however, do not directly relate to the complexity of the problem of finding any equilibrium. In this work, we propose a framework of roundings for the sum-of-squares algorithm (and convex relaxations in general) applicable to finding approximate or exact equilibria in two-player bimatrix games. Specifically, we define the notion of oblivious roundings with verification oracle (OV). These are algorithms that can access a solution to the degree d SoS relaxation to construct a list of candidate (partial) solutions and invoke a verification oracle to check if a candidate in the list gives an (exact or approximate) equilibrium. 
This framework captures most known approximation algorithms in combinatorial optimization including the celebrated semi-definite programming based algorithms for Max-Cut, Constraint-Satisfaction Problems, and the recent works on SoS relaxations for Unique Games/Small-Set Expansion, Best Separable State, and many problems in unsupervised machine learning. Our main results are strong unconditional lower bounds in this framework. Specifically, we show that for ε = Θ(1/poly(n)), there is no algorithm that uses an o(n)-degree SoS relaxation to construct a 2^{o(n)}-size list of candidates and obtain an ε-approximate NE. For some constant ε, we show a similar result for degree-o(log n) SoS relaxations and list size n^{o(log n)}. Our results can be seen as an unconditional confirmation, in our restricted algorithmic framework, of the recent Exponential Time Hypothesis for PPAD. Our proof strategy involves constructing a family of games that all share a common sum-of-squares solution but every (approximate) equilibrium of any game is far from every equilibrium of any other game in the family (in either player’s strategy). Along the way, we strengthen the classical unconditional lower bound against enumerative algorithms for finding approximate equilibria due to Daskalakis-Papadimitriou and the classical hardness of computing equilibria due to Gilboa-Zemel. @InProceedings{STOC18p1241, author = {Pravesh K. Kothari and Ruta Mehta}, title = {Sum-of-Squares Meets Nash: Lower Bounds for Finding Any Equilibrium}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1241--1248}, doi = {}, year = {2018}, } |
|
Kothari, Robin |
STOC '18: "The Polynomial Method Strikes ..."
The Polynomial Method Strikes Back: Tight Quantum Query Bounds via Dual Polynomials
Mark Bun, Robin Kothari, and Justin Thaler (Princeton University, USA; Microsoft Research, USA; Georgetown University, USA) The approximate degree of a Boolean function f is the least degree of a real polynomial that approximates f pointwise to error at most 1/3. The approximate degree of f is known to be a lower bound on the quantum query complexity of f (Beals et al., FOCS 1998 and J. ACM 2001). We resolve or nearly resolve the approximate degree and quantum query complexities of several basic functions. Specifically, we show the following: *k-distinctness: For any constant k, the approximate degree and quantum query complexity of the k-distinctness function is Ω(n^{3/4−1/(2k)}). This is nearly tight for large k, as Belovs (FOCS 2012) has shown that for any constant k, the approximate degree and quantum query complexity of k-distinctness is O(n^{3/4−1/(2^{k+2}−4)}). *Image Size Testing: The approximate degree and quantum query complexity of testing the size of the image of a function [n] → [n] is Ω(n^{1/2}). This proves a conjecture of Ambainis et al. (SODA 2016), and it implies tight lower bounds on the approximate degree and quantum query complexity of the following natural problems. **k-junta testing: A tight Ω(k^{1/2}) lower bound for k-junta testing, answering the main open question of Ambainis et al. (SODA 2016). **Statistical Distance from Uniform: A tight Ω(n^{1/2}) lower bound for approximating the statistical distance from uniform of a distribution, answering the main question left open by Bravyi et al. (STACS 2010 and IEEE Trans. Inf. Theory 2011). **Shannon entropy: A tight Ω(n^{1/2}) lower bound for approximating Shannon entropy up to a certain additive constant, answering a question of Li and Wu (2017). *Surjectivity: The approximate degree of the Surjectivity function is Ω(n^{3/4}). The best prior lower bound was Ω(n^{2/3}). Our result matches an upper bound of Õ(n^{3/4}) due to Sherstov, which we reprove using different techniques. The quantum query complexity of this function is known to be Θ(n) (Beame and Machmouchi, Quantum Inf. Comput. 2012 and Sherstov, FOCS 2015). Our upper bound for Surjectivity introduces new techniques for approximating Boolean functions by low-degree polynomials. Our lower bounds are proved by significantly refining techniques recently introduced by Bun and Thaler (FOCS 2017). @InProceedings{STOC18p297, author = {Mark Bun and Robin Kothari and Justin Thaler}, title = {The Polynomial Method Strikes Back: Tight Quantum Query Bounds via Dual Polynomials}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {297--310}, doi = {}, year = {2018}, } |
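Approximate degree itself is easy to compute exactly for tiny functions: it is a linear program over the coefficients of a degree-d multilinear polynomial, minimizing the worst-case error over {0,1}^n. A brute-force sketch (our own code, unrelated to the paper's dual-polynomial machinery):

    from itertools import combinations, product
    import numpy as np
    from scipy.optimize import linprog

    def best_error(f, n, d):
        # Minimum pointwise error achievable by a degree-d polynomial.
        monos = [s for k in range(d + 1) for s in combinations(range(n), k)]
        xs = list(product([0, 1], repeat=n))
        A = np.array([[np.prod([x[i] for i in s]) for s in monos] for x in xs])
        fx = np.array([f(x) for x in xs], dtype=float)
        m = len(monos)
        # variables: m coefficients, then the error e; minimize e subject to
        # p(x) - f(x) <= e  and  f(x) - p(x) <= e  for all x
        c = np.zeros(m + 1); c[-1] = 1.0
        A_ub = np.block([[A, -np.ones((len(xs), 1))],
                         [-A, -np.ones((len(xs), 1))]])
        b_ub = np.concatenate([fx, -fx])
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (m + 1))
        return res.fun

    OR = lambda x: int(any(x))
    for d in range(5):
        print(d, round(best_error(OR, 4, d), 4))  # first d with error <= 1/3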
|
Koucký, Michal |
STOC '18: "Simulation Beats Richness: ..."
Simulation Beats Richness: New Data-Structure Lower Bounds
Arkadev Chattopadhyay, Michal Koucký, Bruno Loff, and Sagnik Mukhopadhyay (TIFR, India; Charles University in Prague, Czechia; INESC TEC, Portugal; University of Porto, Portugal; KTH, Sweden) We develop a new technique for proving lower bounds in the setting of asymmetric communication, a model that was introduced in the famous works of Miltersen (STOC’94) and Miltersen, Nisan, Safra, and Wigderson (STOC’95). At the core of our technique is the first simulation theorem in the asymmetric setting, where Alice gets a p × n matrix x over F_2 and Bob gets a vector y ∈ F_2^n. Alice and Bob need to evaluate f(x·y) for a Boolean function f: {0,1}^p → {0,1}. Our simulation theorems show that a deterministic/randomized communication protocol exists for this problem, with cost C·n for Alice and C for Bob, if and only if there exists a deterministic/randomized *parity decision tree* of cost Θ(C) for evaluating f. As applications of this technique, we obtain the following results: 1. The first strong lower bounds against randomized data-structure schemes for the Vector-Matrix-Vector product problem over F_2. Moreover, our method yields strong lower bounds even when the data-structure scheme has a tiny advantage over random guessing. 2. The first lower bounds against randomized data-structure schemes for two natural Boolean variants of Orthogonal Vector Counting. 3. We construct an asymmetric communication problem and obtain a deterministic lower bound for it which is provably better than any lower bound that may be obtained by the classical Richness Method of Miltersen et al. (STOC ’95). This seems to be the first known limitation of the Richness Method in the context of proving deterministic lower bounds. @InProceedings{STOC18p1013, author = {Arkadev Chattopadhyay and Michal Koucký and Bruno Loff and Sagnik Mukhopadhyay}, title = {Simulation Beats Richness: New Data-Structure Lower Bounds}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1013--1020}, doi = {}, year = {2018}, } |
|
Kozma, László |
STOC '18: "Smooth Heaps and a Dual View ..."
Smooth Heaps and a Dual View of Self-Adjusting Data Structures
László Kozma and Thatchaphol Saranurak (Eindhoven University of Technology, Netherlands; KTH, Sweden) We present a new connection between self-adjusting binary search trees (BSTs) and heaps, two fundamental, extensively studied, and practically relevant families of data structures (Allen, Munro, 1978; Sleator, Tarjan, 1983; Fredman, Sedgewick, Sleator, Tarjan, 1986; Wilber, 1989; Fredman, 1999; Iacono, Özkan, 2014). Roughly speaking, we map an arbitrary heap algorithm, within a broad and natural model, to a corresponding BST algorithm with the same cost on a dual sequence of operations (i.e., the same sequence with the roles of time and key-space switched). This is the first general transformation between the two families of data structures. There is a rich theory of dynamic optimality for BSTs (i.e., the theory of competitiveness between BST algorithms). The lack of an analogous theory for heaps has been noted in the literature (e.g., Pettie, 2005, 2008). Through our connection, we transfer all instance-specific lower bounds known for BSTs to a general model of heaps, initiating a theory of dynamic optimality for heaps. On the algorithmic side, we obtain a new, simple and efficient heap algorithm, which we call the smooth heap. We show the smooth heap to be the heap-counterpart of Greedy, the BST algorithm with the strongest proven and conjectured properties from the literature, conjectured to be instance-optimal (Lucas, 1988; Munro, 2000; Demaine et al., 2009). Assuming the optimality of Greedy, the smooth heap is also optimal within our model of heap algorithms. Intriguingly, the smooth heap, although derived from a non-practical BST algorithm, is simple and easy to implement (e.g., it stores no auxiliary data besides the keys and tree pointers). It can be seen as a variation on the popular pairing heap data structure, extending it with a “power-of-two-choices” type of heuristic. For the smooth heap we obtain instance-specific upper bounds, with applications in adaptive sorting, and we see it as a promising candidate for the long-standing question of a simpler alternative to Fibonacci heaps. @InProceedings{STOC18p801, author = {László Kozma and Thatchaphol Saranurak}, title = {Smooth Heaps and a Dual View of Self-Adjusting Data Structures}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {801--814}, doi = {}, year = {2018}, } |
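For orientation, the smooth heap is described above as a variation on the pairing heap; here is a minimal two-pass pairing heap (a standard structure, our own code, not the smooth heap itself, whose consolidation round differs):

    class Node:
        __slots__ = ("key", "children")
        def __init__(self, key):
            self.key, self.children = key, []

    def meld(a, b):
        # Link two heap roots; smaller key wins (min-heap).
        if a is None: return b
        if b is None: return a
        if b.key < a.key: a, b = b, a
        a.children.append(b)
        return a

    def insert(root, key):
        return meld(root, Node(key))

    def delete_min(root):
        # Two-pass pairing: meld children left-to-right in pairs,
        # then meld the pairs right-to-left. (The smooth heap differs
        # precisely in how this consolidation round is performed.)
        kids = root.children
        paired = [meld(kids[i], kids[i + 1] if i + 1 < len(kids) else None)
                  for i in range(0, len(kids), 2)]
        new = None
        for t in reversed(paired):
            new = meld(new, t)
        return root.key, new

    h = None
    for x in [5, 3, 8, 1]:
        h = insert(h, x)
    m, h = delete_min(h)
    print(m)  # 1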
|
Krishnaswamy, Ravishankar |
STOC '18: "Constant Approximation for ..."
Constant Approximation for k-Median and k-Means with Outliers via Iterative Rounding
Ravishankar Krishnaswamy, Shi Li, and Sai Sandeep (Microsoft Research, India; SUNY Buffalo, USA) In this paper, we present a new iterative rounding framework for many clustering problems. Using this, we obtain an (α_1 + ε ≤ 7.081 + ε)-approximation algorithm for k-median with outliers, greatly improving upon the large implicit constant approximation ratio of Chen. For k-means with outliers, we give an (α_2 + ε ≤ 53.002 + ε)-approximation, which is the first O(1)-approximation for this problem. The iterative algorithm framework is very versatile; we show how it can be used to give α_1- and (α_1 + ε)-approximation algorithms for the matroid and knapsack median problems respectively, improving upon the previous best approximation ratios of 8 due to Swamy and 17.46 due to Byrka et al. The natural LP relaxation for the k-median/k-means with outliers problem has an unbounded integrality gap. In spite of this negative result, our iterative rounding framework shows that we can round an LP solution to an almost-integral solution of small cost, in which we have at most two fractionally open facilities. Thus, the LP integrality gap arises due to the gap between almost-integral and fully-integral solutions. Then, using a pre-processing procedure, we show how to convert an almost-integral solution to a fully-integral solution, losing only a constant factor in the approximation ratio. By further using a sparsification technique, the additive factor loss incurred by the conversion can be reduced to any ε > 0. @InProceedings{STOC18p646, author = {Ravishankar Krishnaswamy and Shi Li and Sai Sandeep}, title = {Constant Approximation for k-Median and k-Means with Outliers via Iterative Rounding}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {646--659}, doi = {}, year = {2018}, } |
|
Kuhn, Fabian |
STOC '18: "Deterministic Distributed ..."
Deterministic Distributed Edge-Coloring with Fewer Colors
Mohsen Ghaffari, Fabian Kuhn, Yannic Maus, and Jara Uitto (ETH Zurich, Switzerland; University of Freiburg, Germany) We present a deterministic distributed algorithm, in the LOCAL model, that computes a (1+o(1))Δ-edge-coloring in polylogarithmic time, so long as the maximum degree Δ = Ω(log n). For smaller Δ, we give a polylogarithmic-time 3Δ/2-edge-coloring. These are the first deterministic algorithms to go below the natural barrier of 2Δ−1 colors, and they improve significantly on the recent polylogarithmic-time (2Δ−1)(1+o(1))-edge-coloring of Ghaffari and Su [SODA’17] and the (2Δ−1)-edge-coloring of Fischer, Ghaffari, and Kuhn [FOCS’17], positively answering the main open question of the latter. The key technical ingredient of our algorithm is a simple and novel gradual packing of judiciously chosen near-maximum matchings, each of which becomes one of the color classes. @InProceedings{STOC18p418, author = {Mohsen Ghaffari and Fabian Kuhn and Yannic Maus and Jara Uitto}, title = {Deterministic Distributed Edge-Coloring with Fewer Colors}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {418--430}, doi = {}, year = {2018}, } |
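For context on the 2Δ−1 barrier: an edge is adjacent to at most 2Δ−2 other edges, so a sequential greedy pass always succeeds with 2Δ−1 colors; the difficulty the paper addresses is doing better, deterministically, in the distributed LOCAL model. A sketch of the trivial sequential baseline (ours):

    from collections import defaultdict

    def greedy_edge_coloring(edges):
        # Color edges with at most 2*Delta - 1 colors greedily.
        incident = defaultdict(set)  # vertex -> colors already used there
        coloring = {}
        for u, v in edges:
            used = incident[u] | incident[v]   # <= 2*Delta - 2 forbidden colors
            c = next(c for c in range(len(used) + 1) if c not in used)
            coloring[(u, v)] = c
            incident[u].add(c); incident[v].add(c)
        return coloring

    print(greedy_edge_coloring([(0, 1), (1, 2), (2, 0), (0, 3)]))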
|
Kumar, Ashutosh |
STOC '18: "Non-malleable Secret Sharing ..."
Non-malleable Secret Sharing
Vipul Goyal and Ashutosh Kumar (Carnegie Mellon University, USA; University of California at Los Angeles, USA) A number of works have focused on the setting where an adversary tampers with the shares of a secret sharing scheme. This includes literature on verifiable secret sharing, algebraic manipulation detection (AMD) codes, and error-correcting or error-detecting codes in general. In this work, we initiate a systematic study of what we call non-malleable secret sharing. Very roughly, the guarantee we seek is the following: the adversary may potentially tamper with all of the shares, and still, either the reconstruction procedure outputs the original secret, or the original secret is “destroyed” and the reconstruction outputs a string which is completely “unrelated” to the original secret. Recent exciting work on non-malleable codes in the split-state model led to constructions which can be seen as 2-out-of-2 non-malleable secret sharing schemes. These constructions have already found a number of applications in cryptography. We investigate the natural question of constructing t-out-of-n non-malleable secret sharing schemes. Such a secret sharing scheme ensures that only a set consisting of t or more shares can reconstruct the secret, and additionally guarantees non-malleability under an attack where potentially every share may be tampered with. Techniques used for obtaining split-state non-malleable codes (or 2-out-of-2 non-malleable secret sharing) are (in some form) based on two-source extractors and seem not to generalize to our setting. Our first result is the construction of a t-out-of-n non-malleable secret sharing scheme against an adversary who arbitrarily tampers each of the shares independently. Our construction is unconditional and features statistical non-malleability. As our main technical result, we present a t-out-of-n non-malleable secret sharing scheme in a stronger adversarial model where an adversary may jointly tamper multiple shares. Our construction is unconditional and the adversary is allowed to jointly tamper subsets of up to (t−1) shares. We believe that the techniques introduced in our construction may be of independent interest. Inspired by the well-studied problem of perfectly secure message transmission introduced in the seminal work of Dolev et al. (J. ACM ’93), we also initiate the study of non-malleable message transmission. Non-malleable message transmission can be seen as a natural generalization in which the goal is to ensure that the receiver either receives the original message, or the original message is essentially destroyed and the receiver receives an “unrelated” message, when the network is under the influence of an adversary who can byzantinely corrupt all the nodes in the network. As natural applications of our non-malleable secret sharing schemes, we propose constructions for non-malleable message transmission. @InProceedings{STOC18p685, author = {Vipul Goyal and Ashutosh Kumar}, title = {Non-malleable Secret Sharing}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {685--698}, doi = {}, year = {2018}, } |
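For contrast with the non-malleability goal, plain t-out-of-n Shamir sharing is easy but malleable: adding 1 to every share adds 1 to the reconstructed secret, a highly "related" outcome. A minimal sketch over a prime field (our own code; the field size and API are illustrative):

    import random

    P = 2**61 - 1  # a Mersenne prime field, chosen for illustration

    def share(secret, t, n, rng=random):
        # Shamir t-out-of-n: secret = f(0) for a random degree-(t-1) polynomial.
        coeffs = [secret] + [rng.randrange(P) for _ in range(t - 1)]
        def f(x):
            y = 0
            for c in reversed(coeffs):  # Horner evaluation mod P
                y = (y * x + c) % P
            return y
        return [(i, f(i)) for i in range(1, n + 1)]

    def reconstruct(shares):
        # Lagrange interpolation at x = 0.
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * (-xj) % P
                    den = den * (xi - xj) % P
            secret = (secret + yi * num * pow(den, P - 2, P)) % P
        return secret

    s = share(42, t=3, n=5)
    print(reconstruct(s[:3]))                           # 42
    print(reconstruct([(x, y + 1) for x, y in s[:3]]))  # 43: related value, malleable!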
|
Kutten, Shay |
STOC '18: "Approximating Generalized ..."
Approximating Generalized Network Design under (Dis)economies of Scale with Applications to Energy Efficiency
Yuval Emek, Shay Kutten, Ron Lavi, and Yangguang Shi (Technion, Israel) In a generalized network design (GND) problem, a set of resources is assigned (non-exclusively) to multiple requests. Each request contributes its weight to the resources it uses, and the total load on a resource is then translated to the cost it incurs via a resource-specific cost function. Motivated by energy efficiency applications, there has recently been growing interest in GND using cost functions that exhibit (dis)economies of scale ((D)oS), namely, cost functions that appear subadditive for small loads and superadditive for larger loads. The current paper advances the existing literature on approximation algorithms for GND problems with (D)oS cost functions in various aspects: (1) while the existing results are restricted to routing requests in undirected graphs, identifying the resources with the graph's edges, the current paper presents a generic approximation framework that yields approximation results for a much wider family of requests (including various types of Steiner tree and Steiner forest requests) in both directed and undirected graphs, where the resources can be identified with either the edges or the vertices; (2) while the existing results assume that a request contributes the same weight to each resource it uses, our approximation framework allows for unrelated weights, thus providing the first non-trivial approximation for the problem of scheduling unrelated parallel machines with (D)oS cost functions; (3) while most of the existing approximation algorithms are based on convex programming, our approximation framework is fully combinatorial and runs in strongly polynomial time; (4) the family of (D)oS cost functions considered in the current paper is more general than the one considered in the existing literature, providing a more accurate abstraction for practical energy conservation scenarios; and (5) we obtain the first approximation ratio for GND with (D)oS cost functions that depends only on the parameters of the resources' technology and does not grow with the number of resources, the number of requests, or their weights. The design of our approximation framework relies heavily on Roughgarden's smoothness toolbox (JACM 2015), thus demonstrating the possible usefulness of this toolbox in the area of approximation algorithms. @InProceedings{STOC18p598, author = {Yuval Emek and Shay Kutten and Ron Lavi and Yangguang Shi}, title = {Approximating Generalized Network Design under (Dis)economies of Scale with Applications to Energy Efficiency}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {598--606}, doi = {}, year = {2018}, } |
|
Kwok, Tsz Chiu |
STOC '18: "The Paulsen Problem, Continuous ..."
The Paulsen Problem, Continuous Operator Scaling, and Smoothed Analysis
Tsz Chiu Kwok, Lap Chi Lau, Yin Tat Lee, and Akshay Ramachandran (University of Waterloo, Canada; University of Washington, USA) The Paulsen problem is a basic open problem in operator theory: given vectors u_1, …, u_n ∈ ℝ^d that ε-nearly satisfy Parseval's condition and the equal norm condition, are they close to a set of vectors v_1, …, v_n ∈ ℝ^d that exactly satisfy Parseval's condition and the equal norm condition? Given u_1, …, u_n, the squared distance (to the set of exact solutions) is defined as inf_v Σ_{i=1}^{n} ||u_i − v_i||_2^2, where the infimum is over the set of exact solutions. Previous results show that the squared distance of any ε-nearly solution is at most O(poly(d, n, ε)) and that there are ε-nearly solutions with squared distance at least Ω(dε). The fundamental open question is whether the squared distance can be independent of the number of vectors n. We answer this question affirmatively by proving that the squared distance of any ε-nearly solution is O(d^{13/2} ε). Our approach is based on a continuous version of the operator scaling algorithm and consists of two parts. First, we define a dynamical system based on operator scaling and use it to prove that the squared distance of any ε-nearly solution is O(d^2 n ε). Then, we show that by randomly perturbing the input vectors, the dynamical system converges faster, and the squared distance of an ε-nearly solution is O(d^{5/2} ε) when n is large enough and ε is small enough. To analyze the convergence of the dynamical system, we develop new techniques for lower bounding the operator capacity, a concept introduced by Gurvits to analyze the operator scaling algorithm. @InProceedings{STOC18p182, author = {Tsz Chiu Kwok and Lap Chi Lau and Yin Tat Lee and Akshay Ramachandran}, title = {The Paulsen Problem, Continuous Operator Scaling, and Smoothed Analysis}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {182--189}, doi = {}, year = {2018}, } |
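Concretely, Parseval's condition reads Σ_i u_i u_i^T = I_d and the equal norm condition reads ||u_i||^2 = d/n for every i. A small numpy sketch (ours, purely for orientation) that measures how far a frame is from these conditions and evaluates the squared distance between two frames:

    import numpy as np

    def paulsen_residuals(U):
        # U: n x d array of frame vectors u_1, ..., u_n.
        # Returns (Parseval residual, equal-norm residual).
        n, d = U.shape
        parseval = np.linalg.norm(U.T @ U - np.eye(d))  # ||sum u_i u_i^T - I||_F
        norms = (U ** 2).sum(axis=1)
        equal_norm = np.abs(norms - d / n).max()
        return parseval, equal_norm

    def squared_distance(U, V):
        # sum_i ||u_i - v_i||_2^2, the quantity bounded by O(d^{13/2} eps)
        return ((U - V) ** 2).sum()

    rng = np.random.default_rng(0)
    n, d = 8, 3
    Q, _ = np.linalg.qr(rng.normal(size=(n, n)))
    U = Q[:, :d] + 1e-3 * rng.normal(size=(n, d))  # orthonormal columns satisfy
    print(paulsen_residuals(U))                    # Parseval exactly; we perturb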
|
Kyng, Rasmus |
STOC '18: "Incomplete Nested Dissection ..."
Incomplete Nested Dissection
Rasmus Kyng, Richard Peng, Robert Schwieterman, and Peng Zhang (Simons Institute for the Theory of Computing Berkeley, USA; Georgia Tech, USA) We present an asymptotically faster algorithm for solving linear systems in well-structured 3-dimensional truss stiffness matrices. These linear systems arise from linear elasticity problems, and can be viewed as extensions of graph Laplacians into higher dimensions. Faster solvers for the 2-D variants of such systems have been studied using generalizations of tools for solving graph Laplacians [Daitch-Spielman CSC’07, Shklarski-Toledo SIMAX’08]. Given a 3-dimensional truss over n vertices which is formed from a union of k convex structures (tetrahedral meshes) with bounded aspect ratios, whose individual tetrahedrons are also in some sense well-conditioned, our algorithm solves a linear system in the associated stiffness matrix up to accuracy ε in time O(k^{1/3} n^{5/3} log(1/ε)). This asymptotically improves the O(n^2) running time of Nested Dissection for all k ≪ n. We also give a result that improves on Nested Dissection even when we allow any aspect ratio for each of the k convex structures (but we still require well-conditioned individual tetrahedrons). In this regime, we improve on Nested Dissection for k ≪ n^{1/44}. The key idea of our algorithm is to combine nested dissection and support theory. Both of these techniques for solving linear systems are well studied, but usually separately. Our algorithm decomposes a 3-dimensional truss into separate and balanced regions with small boundaries. We then bound the spectrum of each such region separately, and utilize such bounds to obtain improved algorithms by preconditioning with partial states of separator-based Gaussian elimination. @InProceedings{STOC18p404, author = {Rasmus Kyng and Richard Peng and Robert Schwieterman and Peng Zhang}, title = {Incomplete Nested Dissection}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {404--417}, doi = {}, year = {2018}, } |
|
Łącki, Jakub |
STOC '18: "Round Compression for Parallel ..."
Round Compression for Parallel Matching Algorithms
Artur Czumaj, Jakub Łącki, Aleksander Mądry, Slobodan Mitrović, Krzysztof Onak, and Piotr Sankowski (University of Warwick, UK; Google Research, USA; Massachusetts Institute of Technology, USA; EPFL, Switzerland; IBM Research, USA; University of Warsaw, Poland) For over a decade now we have been witnessing the success of massive parallel computation (MPC) frameworks, such as MapReduce, Hadoop, Dryad, or Spark. One of the reasons for their success is the fact that these frameworks are able to accurately capture the nature of large-scale computation. In particular, compared to the classic distributed algorithms or PRAM models, these frameworks allow for much more local computation. The fundamental question that arises in this context, though, is: can we leverage this additional power to obtain even faster parallel algorithms? A prominent example here is the maximum matching problem—one of the most classic graph problems. It is well known that in the PRAM model one can compute a 2-approximate maximum matching in O(log n) rounds. However, the exact complexity of this problem in the MPC framework is still far from understood. Lattanzi et al. (SPAA 2011) showed that if each machine has n^{1+Ω(1)} memory, this problem can also be solved 2-approximately in a constant number of rounds. These techniques, as well as the approaches developed in the follow-up work, seem, though, to get stuck in a fundamental way at roughly O(log n) rounds once we enter the (at most) near-linear memory regime. It is thus entirely possible that in this regime, which captures in particular the case of sparse graph computations, the best MPC round complexity matches what one can already get in the PRAM model, without the need to take advantage of the extra local computation power. In this paper, we finally refute that possibility. That is, we break the above O(log n) round complexity bound even in the case of slightly sublinear memory per machine. In fact, our improvement here is almost exponential: we are able to deliver a (2+ε)-approximate maximum matching, for any fixed constant ε > 0, in O((log log n)^2) rounds. To establish our result we need to deviate from the previous work in two important ways that are crucial for exploiting the power of the MPC model, as compared to the PRAM model. Firstly, we use vertex-based graph partitioning, instead of the edge-based approaches that were utilized so far. Secondly, we develop a technique of round compression. This technique enables one to take a (distributed) algorithm that computes an O(1)-approximation of maximum matching in O(log n) independent PRAM phases and implement a super-constant number of these phases in only a constant number of MPC rounds. @InProceedings{STOC18p471, author = {Artur Czumaj and Jakub Łącki and Aleksander Mądry and Slobodan Mitrović and Krzysztof Onak and Piotr Sankowski}, title = {Round Compression for Parallel Matching Algorithms}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {471--484}, doi = {}, year = {2018}, } |
|
Laekhanukit, Bundit |
STOC '18: "On the Parameterized Complexity ..."
On the Parameterized Complexity of Approximating Dominating Set
Karthik C. S., Bundit Laekhanukit, and Pasin Manurangsi (Weizmann Institute of Science, Israel; Max Planck Institute for Informatics, Germany; Shanghai University of Finance and Economics, China; University of California at Berkeley, USA) We study the parameterized complexity of approximating the k-Dominating Set (domset) problem where an integer k and a graph G on n vertices are given as input, and the goal is to find a dominating set of size at most F(k) · k whenever the graph G has a dominating set of size k. When such an algorithm runs in time T(k)·poly(n) (i.e., FPT-time) for some computable function T, it is said to be an F(k)-FPT-approximation algorithm for k-domset. Whether such an algorithm exists is listed in the seminal book of Downey and Fellows (2013) as one of the “most infamous” open problems in Parameterized Complexity. This work gives an almost complete answer to this question by showing the non-existence of such an algorithm under W[1]≠FPT and further providing tighter running time lower bounds under stronger hypotheses. Specifically, we prove the following for every computable functions T, F and every constant ε > 0: (i) Assuming W[1]≠FPT, there is no F(k)-FPT-approximation algorithm for k-domset, (ii) Assuming the Exponential Time Hypothesis (ETH), there is no F(k)-approximation algorithm for k-domset that runs in T(k)·n^{o(k)} time, (iii) Assuming the Strong Exponential Time Hypothesis (SETH), for every integer k ≥ 2, there is no F(k)-approximation algorithm for k-domset that runs in T(k)·n^{k−ε} time, (iv) Assuming the k-sum Hypothesis, for every integer k ≥ 3, there is no F(k)-approximation algorithm for k-domset that runs in T(k)·n^{⌈k/2⌉−ε} time. Previously, only constant ratio FPT-approximation algorithms were ruled out under W[1]≠FPT, and (log^{1/4−ε} k)-FPT-approximation algorithms were ruled out under ETH [Chen and Lin, FOCS 2016]. Recently, the non-existence of an F(k)-FPT-approximation algorithm for any function F was shown under Gap-ETH [Chalermsook et al., FOCS 2017]. Note that, to the best of our knowledge, no running time lower bound of the form n^{δk} for any absolute constant δ > 0 was known before, even for any constant inapproximability ratio. Our results are obtained by establishing a connection between communication complexity and hardness of approximation, generalizing the ideas from a recent breakthrough work of Abboud et al. [FOCS 2017]. Specifically, we show that to prove hardness of approximation of a certain parameterized variant of the label cover problem, it suffices to devise a specific protocol for a communication problem that depends on which hypothesis we rely on. Each of these communication problems turns out to be either a well studied problem or a variant of one; this allows us to easily apply known techniques to solve them. @InProceedings{STOC18p1283, author = {Karthik C. S. and Bundit Laekhanukit and Pasin Manurangsi}, title = {On the Parameterized Complexity of Approximating Dominating Set}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1283--1296}, doi = {}, year = {2018}, } |
|
Lapinskas, John |
STOC '18: "Fine-Grained Reductions from ..."
Fine-Grained Reductions from Approximate Counting to Decision
Holger Dell and John Lapinskas (Saarland University, Germany; M2CI, Germany; University of Oxford, UK) In this paper, we introduce a general framework for fine-grained reductions of approximate counting problems to their decision versions. (Thus we use an oracle that decides whether any witness exists to multiplicatively approximate the number of witnesses with minimal overhead.) This mirrors a foundational result of Sipser (STOC 1983) and Stockmeyer (SICOMP 1985) in the polynomial-time setting, and a similar result of Müller (IWPEC 2006) in the FPT setting. Using our framework, we obtain such reductions for some of the most important problems in fine-grained complexity: the Orthogonal Vectors problem, 3SUM, and the Negative-Weight Triangle problem (which is closely related to All-Pairs Shortest Path). While all these problems have simple algorithms over which it is conjectured that no polynomial improvement is possible, our reductions would remain interesting even if these conjectures were proved; they have only polylogarithmic overhead, and can therefore be applied to subpolynomial improvements such as the n^3/exp(Θ(√(log n)))-time algorithm for the Negative-Weight Triangle problem due to Williams (STOC 2014). Our framework is also general enough to apply to versions of the problems for which more efficient algorithms are known. For example, the Orthogonal Vectors problem over GF(m)^d for constant m can be solved in time n·poly(d) by a result of Williams and Yu (SODA 2014); our result implies that we can approximately count the number of orthogonal pairs with essentially the same running time. We also provide a fine-grained reduction from approximate #SAT to SAT. Suppose the Strong Exponential Time Hypothesis (SETH) is false, so that for some 1 < c < 2 and all k there is an O(c^n)-time algorithm for #k-SAT. Then we prove that for all k, there is an O((c+o(1))^n)-time algorithm for approximate #k-SAT. In particular, our result implies that the Exponential Time Hypothesis (ETH) is equivalent to the seemingly-weaker statement that there is no algorithm to approximate #3-SAT to within a factor of 1+ε in time 2^{o(n)}/ε^2 (taking ε > 0 as part of the input). A full version of this paper containing detailed proofs is available at https://arxiv.org/abs/1707.04609. @InProceedings{STOC18p281, author = {Holger Dell and John Lapinskas}, title = {Fine-Grained Reductions from Approximate Counting to Decision}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {281--288}, doi = {}, year = {2018}, } |
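The flavor of counting-to-decision reductions, in the classical Sipser-Stockmeyer setting rather than the fine-grained one (a toy sketch with all names ours): keep adding random XOR constraints and asking the decision oracle whether any witness survives; the number of constraints at which the answer flips to "no" estimates log2 of the count, giving a constant-factor approximation on average.

    import random

    def approx_count(witnesses, n, trials=200, rng=random):
        # witnesses: a set of n-bit tuples; the "decision oracle" inside
        # survives() only ever answers emptiness queries.
        def survives(k):
            # k random affine XOR constraints; each cuts the set roughly in half
            rows = [[rng.randrange(2) for _ in range(n + 1)] for _ in range(k)]
            def ok(w):
                return all(sum(r[i] * w[i] for i in range(n)) % 2 == r[n]
                           for r in rows)
            return any(ok(w) for w in witnesses)   # the decision oracle call
        est = 0
        for _ in range(trials):
            k = 0
            while k <= n and survives(k):
                k += 1
            est += 2 ** (k - 1) if k else 0
        return est / trials

    W = {tuple(random.randrange(2) for _ in range(10)) for _ in range(100)}
    print(len(W), "~", approx_count(W, 10))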
|
Larsen, Kasper Green |
STOC '18: "Tight Cell Probe Bounds for ..."
Tight Cell Probe Bounds for Succinct Boolean Matrix-Vector Multiplication
Diptarka Chakraborty, Lior Kamma, and Kasper Green Larsen (Charles University in Prague, Czechia; Aarhus University, Denmark) The conjectured hardness of Boolean matrix-vector multiplication has been used with great success to prove conditional lower bounds for numerous important data structure problems, see Henzinger et al. [STOC’15]. In recent work, Larsen and Williams [SODA’17] attacked the problem from the upper bound side and gave a surprising cell probe data structure (that is, we only charge for memory accesses, while computation is free). Their cell probe data structure answers queries in Õ(n^{7/4}) time and is succinct in the sense that it stores the input matrix in read-only memory, plus an additional Õ(n^{7/4}) bits on the side. In this paper, we essentially settle the cell probe complexity of succinct Boolean matrix-vector multiplication. We present a new cell probe data structure with query time Õ(n^{3/2}) storing just Õ(n^{3/2}) bits on the side. We then complement our data structure with a lower bound showing that any data structure storing r bits on the side, with n < r < n^2, must have query time t satisfying t·r = Ω(n^3). For r ≤ n, any data structure must have t = Ω(n^2). Since lower bounds in the cell probe model also apply to classic word-RAM data structures, the lower bounds naturally carry over. We also prove similar lower bounds for matrix-vector multiplication over F_2. @InProceedings{STOC18p1297, author = {Diptarka Chakraborty and Lior Kamma and Kasper Green Larsen}, title = {Tight Cell Probe Bounds for Succinct Boolean Matrix-Vector Multiplication}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1297--1306}, doi = {}, year = {2018}, } STOC '18: "Crossing the Logarithmic Barrier ..." Crossing the Logarithmic Barrier for Dynamic Boolean Data Structure Lower Bounds Kasper Green Larsen, Omri Weinstein, and Huacheng Yu (Aarhus University, Denmark; Columbia University, USA; Harvard University, USA) This paper proves the first super-logarithmic lower bounds on the cell probe complexity of dynamic boolean (a.k.a. decision) data structure problems, a long-standing milestone in data structure lower bounds. We introduce a new approach and use it to prove an Ω(log^{1.5} n) lower bound on the operational time of a wide range of boolean data structure problems, most notably, on the query time of dynamic range counting over F_2. Proving an ω(lg n) lower bound for this problem was explicitly posed as one of five important open problems in the late Mihai Pătrașcu’s obituary. This result also implies the first ω(lg n) lower bound for the classical 2D range counting problem, one of the most fundamental data structure problems in computational geometry and spatial databases. We derive similar lower bounds for boolean versions of dynamic polynomial evaluation and 2D rectangle stabbing, and for the (non-boolean) problems of range selection and range median. Our technical centerpiece is a new way of “weakly” simulating dynamic data structures using efficient one-way communication protocols with small advantage over random guessing. This simulation involves a surprising excursion to low-degree (Chebyshev) polynomials which may be of independent interest, and offers an entirely new algorithmic angle on the “cell sampling” method of Panigrahy et al. 
@InProceedings{STOC18p978, author = {Kasper Green Larsen and Omri Weinstein and Huacheng Yu}, title = {Crossing the Logarithmic Barrier for Dynamic Boolean Data Structure Lower Bounds}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {978--989}, doi = {}, year = {2018}, } |
|
Lau, Lap Chi |
STOC '18: "The Paulsen Problem, Continuous ..."
The Paulsen Problem, Continuous Operator Scaling, and Smoothed Analysis
Tsz Chiu Kwok, Lap Chi Lau, Yin Tat Lee, and Akshay Ramachandran (University of Waterloo, Canada; University of Washington, USA) The Paulsen problem is a basic open problem in operator theory: given vectors u_1, …, u_n ∈ ℝ^d that ε-nearly satisfy Parseval's condition and the equal norm condition, are they close to a set of vectors v_1, …, v_n ∈ ℝ^d that exactly satisfy Parseval's condition and the equal norm condition? Given u_1, …, u_n, the squared distance (to the set of exact solutions) is defined as inf_v Σ_{i=1}^{n} ||u_i − v_i||_2^2, where the infimum is over the set of exact solutions. Previous results show that the squared distance of any ε-nearly solution is at most O(poly(d, n, ε)) and that there are ε-nearly solutions with squared distance at least Ω(dε). The fundamental open question is whether the squared distance can be independent of the number of vectors n. We answer this question affirmatively by proving that the squared distance of any ε-nearly solution is O(d^{13/2} ε). Our approach is based on a continuous version of the operator scaling algorithm and consists of two parts. First, we define a dynamical system based on operator scaling and use it to prove that the squared distance of any ε-nearly solution is O(d^2 n ε). Then, we show that by randomly perturbing the input vectors, the dynamical system converges faster, and the squared distance of an ε-nearly solution is O(d^{5/2} ε) when n is large enough and ε is small enough. To analyze the convergence of the dynamical system, we develop new techniques for lower bounding the operator capacity, a concept introduced by Gurvits to analyze the operator scaling algorithm. @InProceedings{STOC18p182, author = {Tsz Chiu Kwok and Lap Chi Lau and Yin Tat Lee and Akshay Ramachandran}, title = {The Paulsen Problem, Continuous Operator Scaling, and Smoothed Analysis}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {182--189}, doi = {}, year = {2018}, } |
|
Lauria, Massimo |
STOC '18: "Clique Is Hard on Average ..."
Clique Is Hard on Average for Regular Resolution
Albert Atserias, Ilario Bonacina, Susanna F. de Rezende, Massimo Lauria, Jakob Nordström, and Alexander Razborov (Universitat Politècnica de Catalunya, Spain; KTH, Sweden; Sapienza University of Rome, Italy; University of Chicago, USA; Russian Academy of Sciences, Russia) We prove that for k ≪ n^{1/4}, regular resolution requires length n^{Ω(k)} to establish that an Erdős–Rényi graph with appropriately chosen edge density does not contain a k-clique. This lower bound is optimal up to the multiplicative constant in the exponent, and also implies unconditional n^{Ω(k)} lower bounds on running time for several state-of-the-art algorithms for finding maximum cliques in graphs. @InProceedings{STOC18p866, author = {Albert Atserias and Ilario Bonacina and Susanna F. de Rezende and Massimo Lauria and Jakob Nordström and Alexander Razborov}, title = {Clique Is Hard on Average for Regular Resolution}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {866--877}, doi = {}, year = {2018}, } |
|
Lavi, Ron |
STOC '18: "Approximating Generalized ..."
Approximating Generalized Network Design under (Dis)economies of Scale with Applications to Energy Efficiency
Yuval Emek, Shay Kutten, Ron Lavi, and Yangguang Shi (Technion, Israel) In a generalized network design (GND) problem, a set of resources is assigned (non-exclusively) to multiple requests. Each request contributes its weight to the resources it uses, and the total load on a resource is then translated to the cost it incurs via a resource-specific cost function. Motivated by energy efficiency applications, there has recently been growing interest in GND using cost functions that exhibit (dis)economies of scale ((D)oS), namely, cost functions that appear subadditive for small loads and superadditive for larger loads. The current paper advances the existing literature on approximation algorithms for GND problems with (D)oS cost functions in various aspects: (1) while the existing results are restricted to routing requests in undirected graphs, identifying the resources with the graph's edges, the current paper presents a generic approximation framework that yields approximation results for a much wider family of requests (including various types of Steiner tree and Steiner forest requests) in both directed and undirected graphs, where the resources can be identified with either the edges or the vertices; (2) while the existing results assume that a request contributes the same weight to each resource it uses, our approximation framework allows for unrelated weights, thus providing the first non-trivial approximation for the problem of scheduling unrelated parallel machines with (D)oS cost functions; (3) while most of the existing approximation algorithms are based on convex programming, our approximation framework is fully combinatorial and runs in strongly polynomial time; (4) the family of (D)oS cost functions considered in the current paper is more general than the one considered in the existing literature, providing a more accurate abstraction for practical energy conservation scenarios; and (5) we obtain the first approximation ratio for GND with (D)oS cost functions that depends only on the parameters of the resources' technology and does not grow with the number of resources, the number of requests, or their weights. The design of our approximation framework relies heavily on Roughgarden's smoothness toolbox (JACM 2015), thus demonstrating the possible usefulness of this toolbox in the area of approximation algorithms. @InProceedings{STOC18p598, author = {Yuval Emek and Shay Kutten and Ron Lavi and Yangguang Shi}, title = {Approximating Generalized Network Design under (Dis)economies of Scale with Applications to Energy Efficiency}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {598--606}, doi = {}, year = {2018}, } |
|
Lee, James R. |
STOC '18: "k-Server via Multiscale Entropic ..."
k-Server via Multiscale Entropic Regularization
Sébastien Bubeck, Michael B. Cohen, Yin Tat Lee, James R. Lee, and Aleksander Mądry (Microsoft Research, USA; Massachusetts Institute of Technology, USA; University of Washington, USA) We present an O((log k)^2)-competitive randomized algorithm for the k-server problem on hierarchically separated trees (HSTs). This is the first o(k)-competitive randomized algorithm for which the competitive ratio is independent of the size of the underlying HST. Our algorithm is designed in the framework of online mirror descent where the mirror map is a multiscale entropy. When combined with Bartal’s static HST embedding reduction, this leads to an O((log k)^2 log n)-competitive algorithm on any n-point metric space. We give a new dynamic HST embedding that yields an O((log k)^3 log Δ)-competitive algorithm on any metric space where the ratio of the largest to smallest non-zero distance is at most Δ. @InProceedings{STOC18p3, author = {Sébastien Bubeck and Michael B. Cohen and Yin Tat Lee and James R. Lee and Aleksander Mądry}, title = {k-Server via Multiscale Entropic Regularization}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {3--16}, doi = {}, year = {2018}, } |
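To see what the k-server problem asks, here is the classical deterministic Double Coverage algorithm on the line (k-competitive there; shown for orientation only, the paper's algorithm being a randomized mirror-descent scheme on HSTs; all names are ours):

    def double_coverage(servers, requests):
        # servers: starting positions on the line; returns total movement cost.
        pos = sorted(servers)
        total = 0.0
        for r in requests:
            left = max((p for p in pos if p <= r), default=None)
            right = min((p for p in pos if p >= r), default=None)
            if left is None or right is None:
                # request outside the servers' hull: move nearest server only
                nearest = right if left is None else left
                total += abs(nearest - r)
                pos[pos.index(nearest)] = r
            else:
                # move both neighboring servers toward r; one reaches it
                d = min(r - left, right - r)
                total += 2 * d
                i, j = pos.index(left), pos.index(right)
                pos[i], pos[j] = left + d, right - d
        return total

    print(double_coverage([0.0, 10.0], [3.0, 7.0, 3.0]))  # 6.0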
|
Lee, Yin Tat |
STOC '18: "A Matrix Expander Chernoff ..."
A Matrix Expander Chernoff Bound
Ankit Garg, Yin Tat Lee, Zhao Song, and Nikhil Srivastava (Microsoft Research, USA; University of Washington, USA; Harvard University, USA; University of Texas at Austin, USA; University of California at Berkeley, USA) We prove a Chernoff-type bound for sums of matrix-valued random variables sampled via a random walk on an expander, confirming a conjecture due to [Wigderson and Xiao 06]. Our proof is based on a new multi-matrix extension of the Golden-Thompson inequality which improves upon the inequality in [Sutter, Berta and Tomamichel 17], as well as an adaptation of an argument for the scalar case due to [Healy 08]. Our new multi-matrix Golden-Thompson inequality could be of independent interest. Secondarily, we also provide a generic reduction showing that any concentration inequality for vector-valued martingales implies a concentration inequality for the corresponding expander walk, with a weakening of parameters proportional to the squared mixing time. @InProceedings{STOC18p1102, author = {Ankit Garg and Yin Tat Lee and Zhao Song and Nikhil Srivastava}, title = {A Matrix Expander Chernoff Bound}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1102--1114}, doi = {}, year = {2018}, } STOC '18: "k-Server via Multiscale Entropic ..." k-Server via Multiscale Entropic Regularization Sébastien Bubeck, Michael B. Cohen, Yin Tat Lee, James R. Lee, and Aleksander Mądry (Microsoft Research, USA; Massachusetts Institute of Technology, USA; University of Washington, USA) We present an O((log k)^2)-competitive randomized algorithm for the k-server problem on hierarchically separated trees (HSTs). This is the first o(k)-competitive randomized algorithm for which the competitive ratio is independent of the size of the underlying HST. Our algorithm is designed in the framework of online mirror descent where the mirror map is a multiscale entropy. When combined with Bartal’s static HST embedding reduction, this leads to an O((log k)^2 log n)-competitive algorithm on any n-point metric space. We give a new dynamic HST embedding that yields an O((log k)^3 log Δ)-competitive algorithm on any metric space where the ratio of the largest to smallest non-zero distance is at most Δ. @InProceedings{STOC18p3, author = {Sébastien Bubeck and Michael B. Cohen and Yin Tat Lee and James R. Lee and Aleksander Mądry}, title = {k-Server via Multiscale Entropic Regularization}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {3--16}, doi = {}, year = {2018}, } STOC '18: "Convergence Rate of Riemannian ..." Convergence Rate of Riemannian Hamiltonian Monte Carlo and Faster Polytope Volume Computation Yin Tat Lee and Santosh S. Vempala (University of Washington, USA; Microsoft Research, USA; Georgia Tech, USA) We give the first rigorous proof of the convergence of Riemannian Hamiltonian Monte Carlo, a general (and practical) method for sampling Gibbs distributions. Our analysis shows that the rate of convergence is bounded in terms of natural smoothness parameters of an associated Riemannian manifold. We then apply the method with the manifold defined by the log barrier function to the problems of (1) uniformly sampling a polytope and (2) computing its volume, the latter by extending Gaussian cooling to the manifold setting. In both cases, the total number of steps needed is O*(m n^{2/3}), improving the state of the art. A key ingredient of our analysis is a proof of an analog of the KLS conjecture for Gibbs distributions over manifolds. @InProceedings{STOC18p1115, author = {Yin Tat Lee and Santosh S. 
Vempala}, title = {Convergence Rate of Riemannian Hamiltonian Monte Carlo and Faster Polytope Volume Computation}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1115--1121}, doi = {}, year = {2018}, } STOC '18: "Stochastic Localization + ..." Stochastic Localization + Stieltjes Barrier = Tight Bound for Log-Sobolev Yin Tat Lee and Santosh S. Vempala (University of Washington, USA; Microsoft Research, USA; Georgia Tech, USA) Logarithmic Sobolev inequalities are a powerful way to estimate the rate of convergence of Markov chains and to derive concentration inequalities on distributions. We prove that the log-Sobolev constant of any isotropic logconcave density in ℝ^n with support of diameter D is Ω(1/D), resolving a question posed by Frieze and Kannan in 1997. This is asymptotically the best possible estimate and improves on the previous bound of Ω(1/D^2) by Kannan-Lovász-Montenegro. It follows that for any isotropic logconcave density, the ball walk with step size δ = Θ(1/√n) mixes in O*(n^2 D) proper steps from any starting point. This improves on the previous best bound of O*(n^2 D^2) and is also asymptotically tight. The new bound leads to the following refined large deviation inequality for an L-Lipschitz function g over an isotropic logconcave density p: for any t > 0, Pr_{x∼p}(|g(x) − ḡ| ≥ c·L·t) ≤ exp(−t^2/(t+√n)), where ḡ is the median or mean of g for x ∼ p; this improves on previous bounds by Paouris and by Guedon-Milman. Our main proof is based on stochastic localization together with a Stieltjes-type barrier function. @InProceedings{STOC18p1122, author = {Yin Tat Lee and Santosh S. Vempala}, title = {Stochastic Localization + Stieltjes Barrier = Tight Bound for Log-Sobolev}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1122--1129}, doi = {}, year = {2018}, } STOC '18: "The Paulsen Problem, Continuous ..." The Paulsen Problem, Continuous Operator Scaling, and Smoothed Analysis Tsz Chiu Kwok, Lap Chi Lau, Yin Tat Lee, and Akshay Ramachandran (University of Waterloo, Canada; University of Washington, USA) The Paulsen problem is a basic open problem in operator theory: given vectors u_1, …, u_n ∈ ℝ^d that ε-nearly satisfy the Parseval condition and the equal norm condition, is the set close to a set of vectors v_1, …, v_n ∈ ℝ^d that exactly satisfy both conditions? Given u_1, …, u_n, the squared distance (to the set of exact solutions) is defined as inf_v ∑_{i=1}^n ||u_i − v_i||_2^2, where the infimum is over the set of exact solutions. Previous results show that the squared distance of any ε-nearly solution is at most O(poly(d, n, ε)) and that there are ε-nearly solutions with squared distance at least Ω(dε). The fundamental open question is whether the squared distance can be independent of the number of vectors n. We answer this question affirmatively by proving that the squared distance of any ε-nearly solution is O(d^{13/2} ε). Our approach is based on a continuous version of the operator scaling algorithm and consists of two parts. First, we define a dynamical system based on operator scaling and use it to prove that the squared distance of any ε-nearly solution is O(d^2 n ε). Then, we show that by randomly perturbing the input vectors, the dynamical system converges faster and the squared distance of an ε-nearly solution is O(d^{5/2} ε) when n is large enough and ε is small enough.
To analyze the convergence of the dynamical system, we develop new techniques for lower bounding the operator capacity, a concept introduced by Gurvits to analyze the operator scaling algorithm. @InProceedings{STOC18p182, author = {Tsz Chiu Kwok and Lap Chi Lau and Yin Tat Lee and Akshay Ramachandran}, title = {The Paulsen Problem, Continuous Operator Scaling, and Smoothed Analysis}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {182--189}, doi = {}, year = {2018}, } STOC '18: "An Homotopy Method for lp ..." An Homotopy Method for lp Regression Provably beyond Self-Concordance and in Input-Sparsity Time Sébastien Bubeck, Michael B. Cohen, Yin Tat Lee, and Yuanzhi Li (Microsoft Research, USA; Massachusetts Institute of Technology, USA; University of Washington, USA; Princeton University, USA) We consider the problem of linear regression where the ℓ_2^n norm loss (i.e., the usual least squares loss) is replaced by the ℓ_p^n norm. We show how to solve such problems up to machine precision in Õ_p(n^{|1/2 − 1/p|}) (dense) matrix-vector products and Õ_p(1) matrix inversions, or alternatively in Õ_p(n^{|1/2 − 1/p|}) calls to a (sparse) linear system solver. This improves the state of the art for any p ∉ {1, 2, +∞}. Furthermore, we propose a randomized algorithm solving such problems in input-sparsity time, i.e., Õ_p(N + poly(d)), where N is the size of the input and d is the number of variables. Such a result was previously known only for p = 2. Finally, we prove that these results lie outside the scope of Nesterov and Nemirovski's theory of interior point methods by showing that any symmetric self-concordant barrier on the ℓ_p^n unit ball has self-concordance parameter Ω(n). @InProceedings{STOC18p1130, author = {Sébastien Bubeck and Michael B. Cohen and Yin Tat Lee and Yuanzhi Li}, title = {An Homotopy Method for l_p Regression Provably beyond Self-Concordance and in Input-Sparsity Time}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1130--1137}, doi = {}, year = {2018}, } |
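To make the ℓ_p regression objective above concrete, the sketch below solves min_x ||Ax − b||_p by iteratively reweighted least squares (IRLS). IRLS is only a classical heuristic baseline, not the homotopy method of the abstract; the function name and the damping floor `eps` are illustrative choices of ours.

```python
import numpy as np

def lp_regression_irls(A, b, p, iters=100, eps=1e-8):
    """Minimize ||Ax - b||_p by iteratively reweighted least squares.

    A classical heuristic baseline (NOT the homotopy method of
    Bubeck-Cohen-Lee-Li); each iteration is one weighted l2 solve,
    i.e., the kind of linear-system call the abstract counts."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]  # start at the p = 2 solution
    for _ in range(iters):
        r = A @ x - b
        # weights |r_i|^{p-2} turn the lp objective into a weighted l2 one
        w = np.sqrt(np.maximum(np.abs(r), eps) ** (p - 2))
        x_new = np.linalg.lstsq(w[:, None] * A, w * b, rcond=None)[0]
        if np.linalg.norm(x_new - x) <= 1e-10 * (1 + np.linalg.norm(x)):
            return x_new
        x = x_new
    return x

# toy usage: an overdetermined system with p = 4, which penalizes
# large residuals more heavily than least squares does
rng = np.random.default_rng(0)
A, b = rng.normal(size=(50, 5)), rng.normal(size=50)
x4 = lp_regression_irls(A, b, p=4)
```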
|
Lempiäinen, Tuomo |
STOC '18: "New Classes of Distributed ..."
New Classes of Distributed Time Complexity
Alkida Balliu, Juho Hirvonen, Janne H. Korhonen, Tuomo Lempiäinen, Dennis Olivetti, and Jukka Suomela (Aalto University, Finland; University of Freiburg, Germany) A number of recent papers – e.g. Brandt et al. (STOC 2016), Chang et al. (FOCS 2016), Ghaffari & Su (SODA 2017), Brandt et al. (PODC 2017), and Chang & Pettie (FOCS 2017) – have advanced our understanding of one of the most fundamental questions in the theory of distributed computing: what are the possible time complexity classes of LCL problems in the LOCAL model? In essence, we have a graph problem Π in which a solution can be verified by checking all radius-O(1) neighbourhoods, and the question is what is the smallest T such that a solution can be computed so that each node chooses its own output based on its radius-T neighbourhood. Here T is the distributed time complexity of Π. The time complexity classes for deterministic algorithms in bounded-degree graphs that are known to exist by prior work are Θ(1), Θ(log* n), Θ(log n), Θ(n^{1/k}), and Θ(n). It is also known that there are two gaps: one between ω(1) and o(log log* n), and another between ω(log* n) and o(log n). It has been conjectured that many more gaps exist, and that the overall time hierarchy is relatively simple – indeed, this is known to be the case in restricted graph families such as cycles and grids. We show that the picture is much more diverse than previously expected. We present a general technique for engineering LCL problems with numerous different deterministic time complexities, including Θ(log^α n) for any α ≥ 1, 2^{Θ(log^α n)} for any α ≤ 1, and Θ(n^α) for any α < 1/2 in the high end of the complexity spectrum, and Θ(log^α log* n) for any α ≥ 1, 2^{Θ(log^α log* n)} for any α ≤ 1, and Θ((log* n)^α) for any α ≤ 1 in the low end of the complexity spectrum; here α is a positive rational number. @InProceedings{STOC18p1307, author = {Alkida Balliu and Juho Hirvonen and Janne H. Korhonen and Tuomo Lempiäinen and Dennis Olivetti and Jukka Suomela}, title = {New Classes of Distributed Time Complexity}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1307--1318}, doi = {}, year = {2018}, } |
|
Lenzen, Christoph |
STOC '18: "On the Complexity of Hazard-Free ..."
On the Complexity of Hazard-Free Circuits
Christian Ikenmeyer, Balagopal Komarath, Christoph Lenzen, Vladimir Lysikov, Andrey Mokhov, and Karteek Sreenivasaiah (Max Planck Institute for Informatics, Germany; Saarland University, Germany; M2CI, Germany; Newcastle University, UK; IIT Hyderabad, India) The problem of constructing hazard-free Boolean circuits dates back to the 1940s and is an important problem in circuit design. Our main lower-bound result unconditionally shows the existence of functions whose circuit complexity is polynomially bounded while every hazard-free implementation is provably of exponential size. Previous lower bounds on the hazard-free complexity were only valid for depth 2 circuits. The same proof method yields that every subcubic implementation of Boolean matrix multiplication must have hazards. These results follow from a crucial structural insight: Hazard-free complexity is a natural generalization of monotone complexity to all (not necessarily monotone) Boolean functions. Thus, we can apply known monotone complexity lower bounds to find lower bounds on the hazard-free complexity. We also lift these methods from the monotone setting to prove exponential hazard-free complexity lower bounds for non-monotone functions. As our main upper-bound result we show how to efficiently convert a Boolean circuit into a bounded-bit hazard-free circuit with only a polynomially large blow-up in the number of gates. Previously, the best known method yielded exponentially large circuits in the worst case, so our algorithm gives an exponential improvement. As a side result we establish the NP-completeness of several hazard detection problems. @InProceedings{STOC18p878, author = {Christian Ikenmeyer and Balagopal Komarath and Christoph Lenzen and Vladimir Lysikov and Andrey Mokhov and Karteek Sreenivasaiah}, title = {On the Complexity of Hazard-Free Circuits}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {878--889}, doi = {}, year = {2018}, } |
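For readers unfamiliar with hazards: a circuit has a (static) hazard if, on an input where some wire is still unstable, ternary simulation cannot certify an output value that is in fact forced by the stable inputs. Below is a minimal sketch in three-valued Kleene logic using the textbook 2-to-1 multiplexer; the toy example is ours, not taken from the paper.

```python
# Three-valued (Kleene) logic: '0', '1', and 'u' for an unstable wire.
def NOT(x): return {'0': '1', '1': '0', 'u': 'u'}[x]

def AND(x, y):
    if x == '0' or y == '0': return '0'
    return '1' if x == '1' and y == '1' else 'u'

def OR(x, y):
    if x == '1' or y == '1': return '1'
    return '0' if x == '0' and y == '0' else 'u'

def mux_plain(s, a, b):        # (s AND a) OR (NOT s AND b)
    return OR(AND(s, a), AND(NOT(s), b))

def mux_hazard_free(s, a, b):  # add the redundant consensus term a AND b
    return OR(mux_plain(s, a, b), AND(a, b))

# With a = b = 1 the output is 1 for both stable values of s, so it must
# not glitch while s toggles -- yet the plain MUX cannot certify that:
print(mux_plain('u', '1', '1'))        # -> 'u'  (a static hazard)
print(mux_hazard_free('u', '1', '1'))  # -> '1'  (hazard removed)
```

Removing the hazard here costs one extra gate; the abstract's lower bound says that for some functions this kind of fix provably requires an exponential blow-up.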
|
Leung, Debbie |
STOC '18: "Capacity Approaching Coding ..."
Capacity Approaching Coding for Low Noise Interactive Quantum Communication
Debbie Leung, Ashwin Nayak, Ala Shayeghi, Dave Touchette, Penghui Yao, and Nengkun Yu (University of Waterloo, Canada; Perimeter Institute Waterloo, Canada; Nanjing University, China; University of Technology Sydney, Australia) We consider the problem of implementing two-party interactive quantum communication over noisy channels, a necessary endeavor if we wish to fully reap quantum advantages for communication. For an arbitrary protocol with n messages, designed for noiseless qudit channels (where d is arbitrary), our main result is a simulation method that fails with probability less than 2^{−Θ(nε)} and uses a qudit channel n(1 + Θ(√ε)) times, of which an ε fraction can be corrupted adversarially. The simulation is thus capacity achieving to leading order, and we conjecture that it is optimal up to a constant factor in the √ε term. Furthermore, the simulation is in a model that does not require pre-shared resources such as randomness or entanglement between the communicating parties. Perhaps surprisingly, this outperforms the best known overhead of 1 + O(√ε · log log(1/ε)) in the corresponding classical model, which is also conjectured to be optimal [Haeupler, FOCS'14]. Our work also improves over the best previously known quantum result, where the overhead is a non-explicit large constant [Brassard et al., FOCS'14] for low ε. @InProceedings{STOC18p339, author = {Debbie Leung and Ashwin Nayak and Ala Shayeghi and Dave Touchette and Penghui Yao and Nengkun Yu}, title = {Capacity Approaching Coding for Low Noise Interactive Quantum Communication}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {339--352}, doi = {}, year = {2018}, } |
|
Leverrier, Anthony |
STOC '18: "Efficient Decoding of Random ..."
Efficient Decoding of Random Errors for Quantum Expander Codes
Omar Fawzi, Antoine Grospellier, and Anthony Leverrier (ENS Lyon, France; Inria, France) We show that quantum expander codes, a constant-rate family of quantum low-density parity check (LDPC) codes, with the quasi-linear time decoding algorithm of Leverrier, Tillich and Zémor can correct a constant fraction of random errors with very high probability. This is the first construction of a constant-rate quantum LDPC code with an efficient decoding algorithm that can correct a linear number of random errors with a negligible failure probability. Finding codes with these properties is also motivated by Gottesman’s construction of fault tolerant schemes with constant space overhead. In order to obtain this result, we study a notion of α-percolation: for a random subset E of vertices of a given graph, we consider the size of the largest connected α-subset of E, where X is an α-subset of E if |X ∩ E| ≥ α |X|. @InProceedings{STOC18p521, author = {Omar Fawzi and Antoine Grospellier and Anthony Leverrier}, title = {Efficient Decoding of Random Errors for Quantum Expander Codes}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {521--534}, doi = {}, year = {2018}, } |
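The α-subset condition in the percolation notion above is easy to state operationally. Here is a small checker; the adjacency-dict graph representation and function name are our illustrative assumptions, not code from the paper.

```python
def is_alpha_subset(G, X, E, alpha):
    """Check the condition from the abstract: X must induce a connected
    subgraph of G and satisfy |X ∩ E| >= alpha * |X|.
    G maps each vertex to its set of neighbours; X is assumed non-empty."""
    X = set(X)
    if len(X & set(E)) < alpha * len(X):
        return False
    # depth-first search inside X to test connectivity
    start = next(iter(X))
    seen, stack = {start}, [start]
    while stack:
        for w in G[stack.pop()]:
            if w in X and w not in seen:
                seen.add(w)
                stack.append(w)
    return seen == X

G = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}   # a path on 4 vertices
print(is_alpha_subset(G, X={0, 1, 2}, E={0, 2, 3}, alpha=0.5))  # -> True
```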
|
Li, Jason |
STOC '18: "Improved Distributed Algorithms ..."
Improved Distributed Algorithms for Exact Shortest Paths
Mohsen Ghaffari and Jason Li (ETH Zurich, Switzerland; Carnegie Mellon University, USA) Computing shortest paths is one of the central problems in the theory of distributed computing. For the last few years, substantial progress has been made on the approximate single-source shortest paths problem, culminating in an algorithm of Henzinger, Krinninger, and Nanongkai [STOC'16] which deterministically computes (1+o(1))-approximate shortest paths in Õ(D+√n) time, where D is the hop-diameter of the graph. Up to logarithmic factors, this time complexity is optimal, matching the lower bound of Elkin [STOC'04]. The question of exact shortest paths, however, saw no algorithmic progress for decades, until the recent breakthrough of Elkin [STOC'17], which established a sublinear-time algorithm for exact single-source shortest paths on undirected graphs. Shortly after, Huang et al. [FOCS'17] provided improved algorithms for the exact all-pairs shortest paths problem on directed graphs. In this paper, we provide an alternative single-source shortest path algorithm with complexity Õ(n^{3/4} D^{1/4}). For polylogarithmic D, this improves on Elkin's Õ(n^{5/6}) bound and gets closer to the Ω(n^{1/2}) lower bound of Elkin [STOC'04]. For larger values of D, we present an improved variant of our algorithm which achieves complexity Õ(max{n^{3/4+o(1)}, n^{3/4} D^{1/6}} + D), and thus compares favorably with Elkin's bound of Õ(max{n^{5/6}, n^{2/3} D^{1/3}} + D) in essentially the entire range of parameters. This algorithm also provides a qualitative improvement, because it works for the more challenging case of directed graphs (i.e., graphs where the two directions of an edge can have different weights), constituting the first sublinear-time algorithm for directed graphs. Our algorithm also extends to the case of exact r-source shortest paths, for which we provide the fastest algorithm for moderately small r and D, improving on those of Huang et al. @InProceedings{STOC18p431, author = {Mohsen Ghaffari and Jason Li}, title = {Improved Distributed Algorithms for Exact Shortest Paths}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {431--444}, doi = {}, year = {2018}, } |
|
Li, Jerry |
STOC '18: "Mixture Models, Robustness, ..."
Mixture Models, Robustness, and Sum of Squares Proofs
Samuel B. Hopkins and Jerry Li (Cornell University, USA; Massachusetts Institute of Technology, USA) We use the Sum of Squares method to develop new efficient algorithms for learning well-separated mixtures of Gaussians and robust mean estimation, both in high dimensions, that substantially improve upon the statistical guarantees achieved by previous efficient algorithms. Our contributions are: Mixture models with separated means: We study mixtures of poly(k)-many k-dimensional distributions where the means of every pair of distributions are separated by at least k^ε. In the special case of spherical Gaussian mixtures, we give a k^{O(1/ε)}-time algorithm that learns the means assuming separation at least k^ε, for any ε > 0. This is the first algorithm to improve on greedy ("single-linkage") and spectral clustering, breaking a long-standing barrier for efficient algorithms at separation k^{1/4}. Robust estimation: When an unknown (1−ε)-fraction of X_1, …, X_n are chosen from a sub-Gaussian distribution with mean µ but the remaining points are chosen adversarially, we give an algorithm recovering µ to error ε^{1−1/t} in time k^{O(t)}, so long as sub-Gaussian-ness up to O(t) moments can be certified by a Sum of Squares proof. This is the first polynomial-time algorithm with guarantees approaching the information-theoretic limit for non-Gaussian distributions. Previous algorithms could not achieve error better than ε^{1/2}. As a corollary, we achieve similar results for robust covariance estimation. Both of these results are based on a unified technique. Inspired by recent algorithms of Diakonikolas et al. in robust statistics, we devise an SDP based on the Sum of Squares method for the following setting: given X_1, …, X_n ∈ ℝ^k for large k and n = poly(k) with the promise that a subset of X_1, …, X_n were sampled from a probability distribution with bounded moments, recover some information about that distribution. @InProceedings{STOC18p1021, author = {Samuel B. Hopkins and Jerry Li}, title = {Mixture Models, Robustness, and Sum of Squares Proofs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1021--1034}, doi = {}, year = {2018}, } |
|
Li, Shi |
STOC '18: "Constant Approximation for ..."
Constant Approximation for k-Median and k-Means with Outliers via Iterative Rounding
Ravishankar Krishnaswamy, Shi Li, and Sai Sandeep (Microsoft Research, India; SUNY Buffalo, USA) In this paper, we present a new iterative rounding framework for many clustering problems. Using this, we obtain an (α_1 + ε ≤ 7.081 + ε)-approximation algorithm for k-median with outliers, greatly improving upon the large implicit constant approximation ratio of Chen. For k-means with outliers, we give an (α_2 + ε ≤ 53.002 + ε)-approximation, which is the first O(1)-approximation for this problem. The iterative algorithm framework is very versatile; we show how it can be used to give α_1- and (α_1 + ε)-approximation algorithms for the matroid and knapsack median problems respectively, improving upon the previous best approximation ratios of 8 due to Swamy and 17.46 due to Byrka et al. The natural LP relaxation for the k-median/k-means with outliers problem has an unbounded integrality gap. In spite of this negative result, our iterative rounding framework shows that we can round an LP solution to an almost-integral solution of small cost, in which we have at most two fractionally open facilities. Thus, the LP integrality gap arises from the gap between almost-integral and fully-integral solutions. Then, using a pre-processing procedure, we show how to convert an almost-integral solution to a fully-integral solution, losing only a constant factor in the approximation ratio. By further using a sparsification technique, the additive factor loss incurred by the conversion can be reduced to any ε > 0. @InProceedings{STOC18p646, author = {Ravishankar Krishnaswamy and Shi Li and Sai Sandeep}, title = {Constant Approximation for k-Median and k-Means with Outliers via Iterative Rounding}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {646--659}, doi = {}, year = {2018}, } |
|
Li, Wenzheng |
STOC '18: "An Optimal Distributed (Δ+1)-Coloring ..."
An Optimal Distributed (Δ+1)-Coloring Algorithm?
Yi-Jun Chang, Wenzheng Li, and Seth Pettie (University of Michigan, USA; Tsinghua University, China) Vertex coloring is one of the classic symmetry breaking problems studied in distributed computing. In this paper we present a new algorithm for (Δ+1)-list coloring in the randomized LOCAL model running in O(log* n + Det_d(poly log n)) time, where Det_d(n′) is the deterministic complexity of (deg+1)-list coloring (v's palette has size deg(v)+1) on n′-vertex graphs. This improves upon a previous randomized algorithm of Harris, Schneider, and Su (STOC 2016) with complexity O(√(log Δ) + log log n + Det_d(poly log n)), and (when Δ is sufficiently large) is much faster than the best known deterministic algorithm of Fraigniaud, Heinrich, and Kosowski (FOCS 2016), with complexity O(√Δ · log^{2.5} Δ + log* n). Our algorithm appears to be optimal. It matches the Ω(log* n) randomized lower bound due to Naor (SIDMA 1991) and comes close to matching the Ω(Det(poly log n)) randomized lower bound due to Chang, Kopelowitz, and Pettie (FOCS 2016), where Det is the deterministic complexity of (Δ+1)-list coloring. The best known upper bounds on Det_d(n′) and Det(n′) are both 2^{O(√(log n′))} by Panconesi and Srinivasan (Journal of Algorithms 1996), and it is quite plausible that the complexities of both problems are the same, asymptotically. @InProceedings{STOC18p445, author = {Yi-Jun Chang and Wenzheng Li and Seth Pettie}, title = {An Optimal Distributed (Δ+1)-Coloring Algorithm?}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {445--456}, doi = {}, year = {2018}, } |
|
Li, Yuanzhi |
STOC '18: "Operator Scaling via Geodesically ..."
Operator Scaling via Geodesically Convex Optimization, Invariant Theory and Polynomial Identity Testing
Zeyuan Allen-Zhu, Ankit Garg, Yuanzhi Li, Rafael Oliveira, and Avi Wigderson (Microsoft Research, USA; Princeton University, USA; University of Toronto, Canada; Institute for Advanced Study at Princeton, USA) We propose a new second-order method for geodesically convex optimization on the natural hyperbolic metric over positive definite matrices. We apply it to solve the operator scaling problem in time polynomial in the input size and logarithmic in the error. This is an exponential improvement over previous algorithms, which were analyzed in the usual Euclidean, "commutative" metric (for which the above problem is not convex). Our method is general and applicable to other settings. As a consequence, we solve the equivalence problem for the left-right group action underlying the operator scaling problem. This yields a deterministic polynomial-time algorithm for a new class of Polynomial Identity Testing (PIT) problems, which was the original motivation for studying operator scaling. @InProceedings{STOC18p172, author = {Zeyuan Allen-Zhu and Ankit Garg and Yuanzhi Li and Rafael Oliveira and Avi Wigderson}, title = {Operator Scaling via Geodesically Convex Optimization, Invariant Theory and Polynomial Identity Testing}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {172--181}, doi = {}, year = {2018}, } STOC '18: "An Homotopy Method for lp ..." An Homotopy Method for lp Regression Provably beyond Self-Concordance and in Input-Sparsity Time Sébastien Bubeck, Michael B. Cohen, Yin Tat Lee, and Yuanzhi Li (Microsoft Research, USA; Massachusetts Institute of Technology, USA; University of Washington, USA; Princeton University, USA) We consider the problem of linear regression where the ℓ_2^n norm loss (i.e., the usual least squares loss) is replaced by the ℓ_p^n norm. We show how to solve such problems up to machine precision in Õ_p(n^{|1/2 − 1/p|}) (dense) matrix-vector products and Õ_p(1) matrix inversions, or alternatively in Õ_p(n^{|1/2 − 1/p|}) calls to a (sparse) linear system solver. This improves the state of the art for any p ∉ {1, 2, +∞}. Furthermore, we propose a randomized algorithm solving such problems in input-sparsity time, i.e., Õ_p(N + poly(d)), where N is the size of the input and d is the number of variables. Such a result was previously known only for p = 2. Finally, we prove that these results lie outside the scope of Nesterov and Nemirovski's theory of interior point methods by showing that any symmetric self-concordant barrier on the ℓ_p^n unit ball has self-concordance parameter Ω(n). @InProceedings{STOC18p1130, author = {Sébastien Bubeck and Michael B. Cohen and Yin Tat Lee and Yuanzhi Li}, title = {An Homotopy Method for l_p Regression Provably beyond Self-Concordance and in Input-Sparsity Time}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1130--1137}, doi = {}, year = {2018}, } |
|
Liang, Percy |
STOC '18: "Prediction with a Short Memory ..."
Prediction with a Short Memory
Vatsal Sharan, Sham Kakade, Percy Liang, and Gregory Valiant (Stanford University, USA; University of Washington, USA) We consider the problem of predicting the next observation given a sequence of past observations, and consider the extent to which accurate prediction requires complex algorithms that explicitly leverage long-range dependencies. Perhaps surprisingly, our positive results show that for a broad class of sequences, there is an algorithm that predicts well on average, and bases its predictions only on the most recent few observations together with a set of simple summary statistics of the past observations. Specifically, we show that for any distribution over observations, if the mutual information between past observations and future observations is upper bounded by I, then a simple Markov model over the most recent I/ε observations obtains expected KL error ε—and hence ℓ_1 error √ε—with respect to the optimal predictor that has access to the entire past and knows the data generating distribution. For a Hidden Markov Model with n hidden states, I is bounded by log n, a quantity that does not depend on the mixing time, and we show that the trivial prediction algorithm based on the empirical frequencies of length-O(log n/ε) windows of observations achieves this error, provided the length of the sequence is d^{Ω(log n/ε)}, where d is the size of the observation alphabet. We also establish that this result cannot be improved upon, even for the class of HMMs, in the following two senses: First, for HMMs with n hidden states, a window length of log n/ε is information-theoretically necessary to achieve expected KL error ε, or ℓ_1 error √ε. Second, the d^{Θ(log n/ε)} samples required to accurately estimate the Markov model when observations are drawn from an alphabet of size d are necessary for any computationally tractable learning/prediction algorithm, assuming the hardness of strongly refuting a certain class of CSPs. @InProceedings{STOC18p1074, author = {Vatsal Sharan and Sham Kakade and Percy Liang and Gregory Valiant}, title = {Prediction with a Short Memory}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1074--1087}, doi = {}, year = {2018}, } |
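The "trivial prediction algorithm based on empirical frequencies of windows" admits a very short sketch. Below is a minimal illustration of that idea, with the window length standing in for O(log n/ε); the function names are ours, not the paper's.

```python
from collections import Counter, defaultdict

def window_predictor(seq, window):
    """Estimate Pr[next symbol | last `window` observations] by the
    empirical frequencies of length-`window` windows in `seq`.
    A minimal sketch of the abstract's 'trivial' Markov predictor."""
    counts = defaultdict(Counter)
    for i in range(window, len(seq)):
        counts[tuple(seq[i - window:i])][seq[i]] += 1

    def predict(context):
        c = counts.get(tuple(context[-window:]))
        return c.most_common(1)[0][0] if c else None  # None: unseen context
    return predict

# usage on a toy periodic sequence over the alphabet {a, b, c}
predict = window_predictor("abcabcabcabc", window=2)
print(predict("ab"))  # -> 'c'
```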
|
Liao, Chao |
STOC '18: "Counting Hypergraph Colourings ..."
Counting Hypergraph Colourings in the Local Lemma Regime
Heng Guo, Chao Liao, Pinyan Lu, and Chihao Zhang (University of Edinburgh, UK; Shanghai Jiao Tong University, China; Shanghai University of Finance and Economics, China; Chinese University of Hong Kong, China) We give a fully polynomial-time approximation scheme (FPTAS) to count the number of q-colorings for k-uniform hypergraphs with maximum degree Δ if k ≥ 28 and q > 315·Δ^{14/(k−14)}. We also obtain a polynomial-time almost uniform sampler if q > 798·Δ^{16/(k−16/3)}. These are the first approximate counting and sampling algorithms in the regime q ≪ Δ (for large Δ and k) without any additional assumptions. Our method is based on the recent work of Moitra (STOC, 2017). One important contribution of ours is to remove the dependency between k and Δ in Moitra's approach. @InProceedings{STOC18p926, author = {Heng Guo and Chao Liao and Pinyan Lu and Chihao Zhang}, title = {Counting Hypergraph Colourings in the Local Lemma Regime}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {926--939}, doi = {}, year = {2018}, } |
|
Liu, Tianren |
STOC '18: "Breaking the Circuit-Size ..."
Breaking the Circuit-Size Barrier in Secret Sharing
Tianren Liu and Vinod Vaikuntanathan (Massachusetts Institute of Technology, USA) We study secret sharing schemes for general (non-threshold) access structures. A general secret sharing scheme for n parties is associated to a monotone function F: {0,1}^n → {0,1}. In such a scheme, a dealer distributes shares of a secret s among n parties. Any subset of parties T ⊆ [n] should be able to put together their shares and reconstruct the secret s if F(T) = 1, and should have no information about s if F(T) = 0. One of the major long-standing questions in information-theoretic cryptography is to minimize the (total) size of the shares in a secret-sharing scheme for arbitrary monotone functions F. There is a large gap between lower and upper bounds for secret sharing. The best known scheme for general F has shares of size 2^{n−o(n)}, but the best lower bound is Ω(n^2/log n). Indeed, the exponential share size is a direct result of the fact that in all known secret-sharing schemes, the share size grows with the size of a circuit (or formula, or monotone span program) for F. In fact, several researchers have suggested the existence of a representation size barrier, which implies that the right answer is closer to the upper bound, namely, 2^{n−o(n)}. In this work, we overcome this barrier by constructing a secret sharing scheme for any access structure with shares of size 2^{0.994n} and a linear secret sharing scheme for any access structure with shares of size 2^{0.999n}. As a contribution of independent interest, we also construct a secret sharing scheme with shares of size 2^{Õ(√n)} for 2^{\binom{n}{n/2}} monotone access structures, out of a total of 2^{\binom{n}{n/2}·(1+O(log n/n))} of them. Our construction builds on recent works that construct better protocols for the conditional disclosure of secrets (CDS) problem. @InProceedings{STOC18p699, author = {Tianren Liu and Vinod Vaikuntanathan}, title = {Breaking the Circuit-Size Barrier in Secret Sharing}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {699--708}, doi = {}, year = {2018}, } |
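For contrast with the share sizes discussed above, the classical DNF-based scheme of Ito, Saito, and Nishizeki makes the representation-size barrier tangible: it XOR-splits the secret once per minimal authorized set, so share size grows with the number of such sets. A minimal sketch of that folklore scheme (emphatically not the Liu–Vaikuntanathan construction):

```python
import secrets

def share_dnf(s, minimal_authorized_sets, n):
    """Classical DNF-based scheme (Ito-Saito-Nishizeki): XOR-split the
    one-bit secret s once per minimal authorized set. Shown only to
    illustrate why share size tracks the representation of F; this is
    NOT the Liu-Vaikuntanathan construction."""
    shares = {i: [] for i in range(n)}
    for T in minimal_authorized_sets:
        T = tuple(sorted(T))
        pieces = [secrets.randbits(1) for _ in T[:-1]]
        pieces.append(s ^ (sum(pieces) % 2))  # all pieces XOR to s
        for party, piece in zip(T, pieces):
            shares[party].append((T, piece))
    return shares

# usage: minimal authorized sets {0,1} and {1,2}; parties 0 and 1
# pool their pieces tagged (0, 1) and XOR them to recover s = 1
sh = share_dnf(1, [{0, 1}, {1, 2}], n=3)
pieces = [p for i in (0, 1) for tag, p in sh[i] if tag == (0, 1)]
print(sum(pieces) % 2)  # -> 1
```

Any unauthorized set misses at least one piece of every XOR split, so it learns nothing; the cost is one piece per minimal authorized set per member, which is exponential for worst-case F.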
|
Liu, Zhengyang |
STOC '18: "Distribution-Free Junta Testing ..."
Distribution-Free Junta Testing
Zhengyang Liu, Xi Chen, Rocco A. Servedio, Ying Sheng, and Jinyu Xie (Shanghai Jiao Tong University, China; Columbia University, USA) We study the problem of testing whether an unknown n-variable Boolean function is a k-junta in the distribution-free property testing model, where the distance between functions is measured with respect to an arbitrary and unknown probability distribution over {0,1}^n. Our first main result is that distribution-free k-junta testing can be performed, with one-sided error, by an adaptive algorithm that uses Õ(k^2)/ε queries (independent of n). Complementing this, our second main result is a lower bound showing that any non-adaptive distribution-free k-junta testing algorithm must make Ω(2^{k/3}) queries even to test to accuracy ε = 1/3. These bounds establish that while the optimal query complexity of non-adaptive k-junta testing is 2^{Θ(k)}, for adaptive testing it is poly(k), and thus show that adaptivity provides an exponential improvement in the distribution-free query complexity of testing juntas. @InProceedings{STOC18p749, author = {Zhengyang Liu and Xi Chen and Rocco A. Servedio and Ying Sheng and Jinyu Xie}, title = {Distribution-Free Junta Testing}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {749--759}, doi = {}, year = {2018}, } |
|
Lkhamsuren, Luvsandondov |
STOC '18: "Holiest Minimum-Cost Paths ..."
Holiest Minimum-Cost Paths and Flows in Surface Graphs
Jeff Erickson, Kyle Fox, and Luvsandondov Lkhamsuren (University of Illinois at Urbana-Champaign, USA; University of Texas at Dallas, USA; Airbnb, USA) Let G be an edge-weighted directed graph with n vertices embedded on an orientable surface of genus g. We describe a simple deterministic lexicographic perturbation scheme that guarantees uniqueness of minimum-cost flows and shortest paths in G. The perturbations take O(gn) time to compute. We use our perturbation scheme in a black-box manner to derive a deterministic O(n log log n) time algorithm for minimum cut in directed edge-weighted planar graphs and a deterministic O(g^2 n log n) time preprocessing scheme for the multiple-source shortest paths problem of computing a shortest path oracle for all vertices lying on a common face of a surface-embedded graph. The latter result yields faster deterministic near-linear time algorithms for a variety of problems in constant-genus surface-embedded graphs. Finally, we open the black box in order to generalize a recent linear-time algorithm for multiple-source shortest paths in unweighted undirected planar graphs to work in arbitrary orientable surfaces. Our algorithm runs in O(g^2 n log g) time in this setting, and it can be used to give improved linear-time algorithms for several problems in unweighted undirected surface-embedded graphs of constant genus, including the computation of minimum cuts, shortest topologically non-trivial cycles, and minimum homology bases. @InProceedings{STOC18p1319, author = {Jeff Erickson and Kyle Fox and Luvsandondov Lkhamsuren}, title = {Holiest Minimum-Cost Paths and Flows in Surface Graphs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1319--1332}, doi = {}, year = {2018}, } |
|
Lo, Irene |
STOC '18: "Dynamic Matching in School ..."
Dynamic Matching in School Choice: Efficient Seat Reassignment after Late Cancellations (Invited Talk)
Irene Lo (Columbia University, USA) In the school choice market, where scarce public school seats are assigned to students, a key issue is how to reassign seats that are vacated after an initial round of centralized assignment. Every year around 10% of students assigned a seat in the NYC public high school system eventually do not use it, and their vacated seats can be reassigned. Practical solutions to the reassignment problem must be simple to implement, truthful, and efficient. I propose and axiomatically justify a class of reassignment mechanisms, the Permuted Lottery Deferred Acceptance (PLDA) mechanisms, which generalize the commonly used Deferred Acceptance (DA) school choice mechanism to a two-round setting and retain its desirable incentive and efficiency properties. I also provide guidance to school districts as to how to choose the appropriate mechanism in this class for their setting. Centralized admissions are typically conducted in a single round using Deferred Acceptance, with a lottery used to break ties in each school's prioritization of students. Our proposed PLDA mechanisms reassign vacated seats using a second round of DA with a lottery based on a suitable permutation of the first-round lottery numbers. I demonstrate that under a natural order condition on aggregate student demand for schools, the second-round tie-breaking lottery can be correlated arbitrarily with that of the first round without affecting allocative welfare. I also show how the identifying characteristic of PLDA mechanisms, their permutation, can be chosen to control reallocation. In current practice, seats vacated after the initial round are reassigned using decentralized waitlists that create significant student movement after the start of the school year, which is costly for both students and schools. I show that reversing the lottery order between rounds minimizes reassignment among all PLDA mechanisms, allowing us to alleviate costly student movement between schools without affecting the efficiency of the final allocation. In a setting without school priorities, I also characterize PLDA mechanisms as the class of mechanisms that provide students with a guarantee at their first-round assignment, respect school priorities, and are strategy-proof, constrained Pareto efficient, and satisfy some mild symmetry properties. Finally, I provide simulations of the performance of different PLDA mechanisms in the presence of school priorities. All simulated PLDAs have similar allocative efficiency, while the PLDA based on reversing the tie-breaking lottery between rounds minimizes the number of reassigned students. These results support our theoretical findings. This is based on joint work with Itai Feigenbaum, Yash Kanoria, and Jay Sethuraman. @InProceedings{STOC18p1, author = {Irene Lo}, title = {Dynamic Matching in School Choice: Efficient Seat Reassignment after Late Cancellations (Invited Talk)}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1--1}, doi = {}, year = {2018}, } |
|
Loff, Bruno |
STOC '18: "Simulation Beats Richness: ..."
Simulation Beats Richness: New Data-Structure Lower Bounds
Arkadev Chattopadhyay, Michal Koucký, Bruno Loff, and Sagnik Mukhopadhyay (TIFR, India; Charles University in Prague, Czechia; INESC TEC, Portugal; University of Porto, Portugal; KTH, Sweden) We develop a new technique for proving lower bounds in the setting of asymmetric communication, a model that was introduced in the famous works of Miltersen (STOC'94) and Miltersen, Nisan, Safra and Wigderson (STOC'95). At the core of our technique is the first simulation theorem in the asymmetric setting, where Alice gets a p × n matrix x over F_2 and Bob gets a vector y ∈ F_2^n. Alice and Bob need to evaluate f(x·y) for a Boolean function f: {0,1}^p → {0,1}. Our simulation theorems show that a deterministic/randomized communication protocol exists for this problem, with cost C·n for Alice and C for Bob, if and only if there exists a deterministic/randomized *parity decision tree* of cost Θ(C) for evaluating f. As applications of this technique, we obtain the following results: 1. The first strong lower bounds against randomized data-structure schemes for the Vector-Matrix-Vector product problem over F_2. Moreover, our method yields strong lower bounds even when the data-structure scheme has tiny advantage over random guessing. 2. The first lower bounds against randomized data-structure schemes for two natural Boolean variants of Orthogonal Vector Counting. 3. We construct an asymmetric communication problem and obtain a deterministic lower bound for it which is provably better than any lower bound that may be obtained by the classical Richness Method of Miltersen et al. (STOC '95). This seems to be the first known limitation of the Richness Method in the context of proving deterministic lower bounds. @InProceedings{STOC18p1013, author = {Arkadev Chattopadhyay and Michal Koucký and Bruno Loff and Sagnik Mukhopadhyay}, title = {Simulation Beats Richness: New Data-Structure Lower Bounds}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1013--1020}, doi = {}, year = {2018}, } |
|
Lovett, Shachar |
STOC '18: "The Gram-Schmidt Walk: A Cure ..."
The Gram-Schmidt Walk: A Cure for the Banaszczyk Blues
Nikhil Bansal, Daniel Dadush, Shashwat Garg, and Shachar Lovett (Eindhoven University of Technology, Netherlands; CWI, Netherlands; University of California at San Diego, USA) An important result in discrepancy due to Banaszczyk states that for any set of n vectors in ℝ^m of ℓ_2 norm at most 1 and any convex body K in ℝ^m of Gaussian measure at least half, there exists a ±1 combination of these vectors which lies in 5K. This result implies the best known bounds for several problems in discrepancy. Banaszczyk's proof of this result is non-constructive, and an open problem has been to give an efficient algorithm to find such a ±1 combination of the vectors. In this paper, we resolve this question and give an efficient randomized algorithm to find a ±1 combination of the vectors which lies in cK for c > 0 an absolute constant. This leads to new efficient algorithms for several problems in discrepancy theory. @InProceedings{STOC18p587, author = {Nikhil Bansal and Daniel Dadush and Shashwat Garg and Shachar Lovett}, title = {The Gram-Schmidt Walk: A Cure for the Banaszczyk Blues}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {587--597}, doi = {}, year = {2018}, } STOC '18: "Near-Optimal Linear Decision ..." Near-Optimal Linear Decision Trees for k-SUM and Related Problems Daniel M. Kane, Shachar Lovett, and Shay Moran (University of California at San Diego, USA; Institute for Advanced Study at Princeton, USA) We construct near-optimal linear decision trees for a variety of decision problems in combinatorics and discrete geometry. For example, for any constant k, we construct linear decision trees that solve the k-SUM problem on n elements using O(n log^2 n) linear queries. Moreover, the queries we use are comparison queries, which compare the sums of two k-subsets; when viewed as linear queries, comparison queries are 2k-sparse and have only {−1, 0, 1} coefficients. We give similar constructions for sorting sumsets A+B and for solving the SUBSET-SUM problem, both with an optimal number of queries, up to poly-logarithmic terms. Our constructions are based on the notion of "inference dimension", recently introduced by the authors in the context of active classification with comparison queries. This can be viewed as another contribution to the fruitful link between machine learning and discrete geometry, which goes back to the discovery of the VC dimension. @InProceedings{STOC18p554, author = {Daniel M. Kane and Shachar Lovett and Shay Moran}, title = {Near-Optimal Linear Decision Trees for k-SUM and Related Problems}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {554--563}, doi = {}, year = {2018}, } |
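A toy experiment clarifies what the Banaszczyk-type statement in the Gram-Schmidt walk abstract is about: choosing signs ε_i ∈ {±1} so that the signed sum of unit vectors stays in a small body. The sketch below uses the naive greedy signing rule, which is a classical heuristic and emphatically not the Gram-Schmidt walk of the paper.

```python
import numpy as np

def greedy_signs(vectors):
    """Pick signs in {-1, +1} to keep the running signed sum small.

    A classical greedy heuristic, NOT the Gram-Schmidt walk; it only
    illustrates the sign-selection problem the abstract addresses."""
    total = np.zeros(vectors.shape[1])
    signs = []
    for v in vectors:
        # choose the sign that yields the smaller partial sum
        s = 1 if np.linalg.norm(total + v) <= np.linalg.norm(total - v) else -1
        signs.append(s)
        total += s * v
    return np.array(signs), total

rng = np.random.default_rng(1)
V = rng.normal(size=(1000, 10))
V /= np.linalg.norm(V, axis=1, keepdims=True)  # l2 norm at most 1
_, t = greedy_signs(V)
print(np.linalg.norm(t))  # far below the trivial bound of n = 1000
```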
|
Lu, Pinyan |
STOC '18: "Counting Hypergraph Colourings ..."
Counting Hypergraph Colourings in the Local Lemma Regime
Heng Guo, Chao Liao, Pinyan Lu, and Chihao Zhang (University of Edinburgh, UK; Shanghai Jiao Tong University, China; Shanghai University of Finance and Economics, China; Chinese University of Hong Kong, China) We give a fully polynomial-time approximation scheme (FPTAS) to count the number of q-colorings for k-uniform hypergraphs with maximum degree Δ if k ≥ 28 and q > 315·Δ^{14/(k−14)}. We also obtain a polynomial-time almost uniform sampler if q > 798·Δ^{16/(k−16/3)}. These are the first approximate counting and sampling algorithms in the regime q ≪ Δ (for large Δ and k) without any additional assumptions. Our method is based on the recent work of Moitra (STOC, 2017). One important contribution of ours is to remove the dependency between k and Δ in Moitra's approach. @InProceedings{STOC18p926, author = {Heng Guo and Chao Liao and Pinyan Lu and Chihao Zhang}, title = {Counting Hypergraph Colourings in the Local Lemma Regime}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {926--939}, doi = {}, year = {2018}, } |
|
Lykouris, Thodoris |
STOC '18: "Stochastic Bandits Robust ..."
Stochastic Bandits Robust to Adversarial Corruptions
Thodoris Lykouris, Vahab Mirrokni, and Renato Paes Leme (Cornell University, USA; Google Research, USA) We introduce a new model of stochastic bandits with adversarial corruptions, which aims to capture settings where most of the input follows a stochastic pattern but some fraction of it can be adversarially changed to trick the algorithm, e.g., click fraud, fake reviews, and email spam. The goal of this model is to encourage the design of bandit algorithms that (i) work well in mixed adversarial and stochastic models, and (ii) whose performance deteriorates gracefully as we move from fully stochastic to fully adversarial models. In our model, the rewards for all arms are initially drawn from a distribution and are then altered by an adaptive adversary. We provide a simple algorithm whose performance gracefully degrades with the total corruption the adversary injected into the data, measured by the sum across rounds of the biggest alteration the adversary made in the data in that round; this total corruption is denoted by C. Our algorithm retains the optimal guarantee (up to a logarithmic term) if the input is stochastic, and its performance degrades linearly with the amount of corruption C, while crucially remaining agnostic to it. We also provide a lower bound showing that this linear degradation is necessary if the algorithm achieves optimal performance in the stochastic setting (the lower bound works even for a known amount of corruption, a special case in which our algorithm achieves optimal performance without the extra logarithm). @InProceedings{STOC18p114, author = {Thodoris Lykouris and Vahab Mirrokni and Renato Paes Leme}, title = {Stochastic Bandits Robust to Adversarial Corruptions}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {114--122}, doi = {}, year = {2018}, } |
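The corruption model above is easy to simulate. The sketch below runs plain UCB1 — a standard baseline, not the robust algorithm of the paper — against an adversary with budget C that suppresses the best arm's rewards; it illustrates why non-robust algorithms can be tricked by a modest amount of corruption. All names and constants here are our own illustrative choices.

```python
import math, random

def ucb_with_corruption(means, T, corrupt):
    """Plain UCB1 in the corruption model: each observed reward passes
    through `corrupt(t, arm, r)`, which may alter it. Baseline only --
    the paper's robust multi-layer algorithm is more involved."""
    K = len(means)
    counts, sums = [0] * K, [0.0] * K
    for t in range(1, T + 1):
        if t <= K:
            a = t - 1  # pull each arm once to initialize
        else:
            a = max(range(K), key=lambda i: sums[i] / counts[i]
                    + math.sqrt(2 * math.log(t) / counts[i]))
        r = 1.0 if random.random() < means[a] else 0.0  # stochastic reward
        r = corrupt(t, a, r)                            # adversary acts
        counts[a] += 1
        sums[a] += r
    return counts

# adversary spends budget C = 200 making the best arm look bad early on
C = 200
def corrupt(t, arm, r):
    global C
    if arm == 0 and r == 1.0 and C > 0:
        C -= 1
        return 0.0
    return r

pulls = ucb_with_corruption([0.9, 0.5], T=5000, corrupt=corrupt)
print(pulls)  # UCB can be tricked into over-pulling the worse arm
```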
|
Lysikov, Vladimir |
STOC '18: "Generalized Matrix Completion ..."
Generalized Matrix Completion and Algebraic Natural Proofs
Markus Bläser, Christian Ikenmeyer, Gorav Jindal, and Vladimir Lysikov (Saarland University, Germany; Max Planck Institute for Informatics, Germany) Algebraic natural proofs were recently introduced by Forbes, Shpilka and Volk (Proc. of the 49th Annual ACM SIGACT Symposium on Theory of Computing (STOC), pages 653–664, 2017) and independently by Grochow, Kumar, Saks and Saraf (CoRR, abs/1701.01717, 2017) as an attempt to transfer Razborov and Rudich's famous barrier result (J. Comput. Syst. Sci., 55(1): 24–35, 1997) for Boolean circuit complexity to algebraic complexity theory. Razborov and Rudich's barrier result relies on a widely believed assumption, namely, the existence of pseudo-random generators. Unfortunately, there is no known analogous theory of pseudo-randomness in the algebraic setting. Therefore, Forbes et al. use a concept called succinct hitting sets instead. This assumption is related to polynomial identity testing, but it is currently not clear how plausible this assumption is. Forbes et al. are only able to construct succinct hitting sets against rather weak models of arithmetic circuits. Generalized matrix completion is the following problem: Given a matrix with affine linear forms as entries, find an assignment to the variables in the linear forms such that the rank of the resulting matrix is minimal. We call this rank the completion rank. Computing the completion rank is an NP-hard problem. As our first main result, we prove that it is also NP-hard to determine whether a given matrix can be approximated by matrices of completion rank ≤ b. The minimum quantity b for which this is possible is called the border completion rank (similar to the border rank of tensors). Naturally, algebraic natural proofs can only prove lower bounds for such border complexity measures. Furthermore, these border complexity measures play an important role in the geometric complexity theory program. Using our hardness result above, we can prove the following barrier: We construct a small family of matrices with affine linear forms as entries and a bound b, such that at least one of these matrices does not have an algebraic natural proof of polynomial size against all matrices of border completion rank b, unless coNP ⊆ ∃BPP. This is an algebraic barrier result that is based on a well-established and widely believed conjecture. The complexity class ∃BPP is known to be a subset of the better-known complexity class MA. Thus, ∃BPP can be replaced by MA in the statements of all our results. With similar techniques, we can also prove that tensor rank is hard to approximate. Furthermore, we prove a similar result for the variety of matrices with permanent zero: there are no algebraic polynomial-size natural proofs for the variety of matrices with permanent zero, unless P^{#P} ⊆ ∃BPP. On the other hand, we are able to prove that the geometric complexity theory approach initiated by Mulmuley and Sohoni (SIAM J. Comput. 31(2): 496–526, 2001) yields proofs of polynomial size for this variety, therefore overcoming the natural proofs barrier in this case. @InProceedings{STOC18p1193, author = {Markus Bläser and Christian Ikenmeyer and Gorav Jindal and Vladimir Lysikov}, title = {Generalized Matrix Completion and Algebraic Natural Proofs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1193--1206}, doi = {}, year = {2018}, }
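A toy instance makes the completion-rank definition above concrete. For M(x) = [[1, x], [x, 1]], the determinant 1 − x^2 vanishes only at x = ±1, so the completion rank is 1, witnessed there. The brute-force grid search below is illustrative only (the problem is NP-hard in general); all names are our own.

```python
import numpy as np

def completion_rank_grid(entries, grid):
    """Brute-force the completion rank of a matrix whose entries are
    affine-linear functions of one variable x, over a finite grid.
    A toy illustration of the definition, not an algorithm of the paper.
    `entries[i][j]` is a callable x -> float."""
    return min(
        np.linalg.matrix_rank(np.array([[e(x) for e in row] for row in entries]))
        for x in grid
    )

# M(x) = [[1, x], [x, 1]] has completion rank 1, witnessed at x = 1 or -1
M = [[lambda x: 1.0, lambda x: x],
     [lambda x: x, lambda x: 1.0]]
print(completion_rank_grid(M, [k / 10 for k in range(-20, 21)]))  # -> 1
```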
STOC '18: "On the Complexity of Hazard-Free ..." On the Complexity of Hazard-Free Circuits Christian Ikenmeyer, Balagopal Komarath, Christoph Lenzen, Vladimir Lysikov, Andrey Mokhov, and Karteek Sreenivasaiah (Max Planck Institute for Informatics, Germany; Saarland University, Germany; M2CI, Germany; Newcastle University, UK; IIT Hyderabad, India) The problem of constructing hazard-free Boolean circuits dates back to the 1940s and is an important problem in circuit design. Our main lower-bound result unconditionally shows the existence of functions whose circuit complexity is polynomially bounded while every hazard-free implementation is provably of exponential size. Previous lower bounds on the hazard-free complexity were only valid for depth-2 circuits. The same proof method yields that every subcubic implementation of Boolean matrix multiplication must have hazards. These results follow from a crucial structural insight: Hazard-free complexity is a natural generalization of monotone complexity to all (not necessarily monotone) Boolean functions. Thus, we can apply known monotone complexity lower bounds to find lower bounds on the hazard-free complexity. We also lift these methods from the monotone setting to prove exponential hazard-free complexity lower bounds for non-monotone functions. As our main upper-bound result we show how to efficiently convert a Boolean circuit into a bounded-bit hazard-free circuit with only a polynomially large blow-up in the number of gates. Previously, the best known method yielded exponentially large circuits in the worst case, so our algorithm gives an exponential improvement. As a side result we establish the NP-completeness of several hazard detection problems. @InProceedings{STOC18p878, author = {Christian Ikenmeyer and Balagopal Komarath and Christoph Lenzen and Vladimir Lysikov and Andrey Mokhov and Karteek Sreenivasaiah}, title = {On the Complexity of Hazard-Free Circuits}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {878--889}, doi = {}, year = {2018}, } |
|
Ma, Tengyu |
STOC '18: "Generalization and Equilibrium ..."
Generalization and Equilibrium in Generative Adversarial Nets (GANs) (Invited Talk)
Tengyu Ma (Stanford University, USA) Generative Adversarial Networks (GANs) have become one of the dominant methods for fitting generative models to complicated real-life data, and even found unusual uses such as designing good cryptographic primitives. In this talk, we will first introduce the basics of GANs and then discuss the fundamental statistical question about GANs — assuming the training can succeed with polynomial samples, can we have any statistical guarantees for the estimated distributions? In the work with Arora, Ge, Liang, and Zhang, we suggested a dilemma: powerful discriminators cause overfitting, whereas weak discriminators cannot detect mode collapse. Such a conundrum may be solved or alleviated by designing a discriminator class with strong distinguishing power against the particular generator class (instead of against all possible generators). @InProceedings{STOC18p2, author = {Tengyu Ma}, title = {Generalization and Equilibrium in Generative Adversarial Nets (GANs) (Invited Talk)}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {2--2}, doi = {}, year = {2018}, } |
|
Mądry, Aleksander |
STOC '18: "k-Server via Multiscale Entropic ..."
k-Server via Multiscale Entropic Regularization
Sébastien Bubeck, Michael B. Cohen, Yin Tat Lee, James R. Lee, and Aleksander Mądry (Microsoft Research, USA; Massachusetts Institute of Technology, USA; University of Washington, USA) We present an O((log k)^2)-competitive randomized algorithm for the k-server problem on hierarchically separated trees (HSTs). This is the first o(k)-competitive randomized algorithm for which the competitive ratio is independent of the size of the underlying HST. Our algorithm is designed in the framework of online mirror descent, where the mirror map is a multiscale entropy. When combined with Bartal's static HST embedding reduction, this leads to an O((log k)^2 log n)-competitive algorithm on any n-point metric space. We give a new dynamic HST embedding that yields an O((log k)^3 log Δ)-competitive algorithm on any metric space where the ratio of the largest to smallest non-zero distance is at most Δ. @InProceedings{STOC18p3, author = {Sébastien Bubeck and Michael B. Cohen and Yin Tat Lee and James R. Lee and Aleksander Mądry}, title = {k-Server via Multiscale Entropic Regularization}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {3--16}, doi = {}, year = {2018}, } STOC '18: "Round Compression for Parallel ..." Round Compression for Parallel Matching Algorithms Artur Czumaj, Jakub Łącki, Aleksander Mądry, Slobodan Mitrović, Krzysztof Onak, and Piotr Sankowski (University of Warwick, UK; Google Research, USA; Massachusetts Institute of Technology, USA; EPFL, Switzerland; IBM Research, USA; University of Warsaw, Poland) For over a decade now we have been witnessing the success of massive parallel computation (MPC) frameworks, such as MapReduce, Hadoop, Dryad, or Spark. One of the reasons for their success is the fact that these frameworks are able to accurately capture the nature of large-scale computation. In particular, compared to the classic distributed algorithms or PRAM models, these frameworks allow for much more local computation. The fundamental question that arises in this context is though: can we leverage this additional power to obtain even faster parallel algorithms? A prominent example here is the maximum matching problem—one of the most classic graph problems. It is well known that in the PRAM model one can compute a 2-approximate maximum matching in O(log n) rounds. However, the exact complexity of this problem in the MPC framework is still far from understood. Lattanzi et al. (SPAA 2011) showed that if each machine has n^{1+Ω(1)} memory, this problem can also be solved 2-approximately in a constant number of rounds. These techniques, as well as the approaches developed in the follow-up work, seem though to get stuck in a fundamental way at roughly O(log n) rounds once we enter the (at most) near-linear memory regime. It is thus entirely possible that in this regime, which captures in particular the case of sparse graph computations, the best MPC round complexity matches what one can already get in the PRAM model, without the need to take advantage of the extra local computation power. In this paper, we finally refute that possibility. That is, we break the above O(log n) round complexity bound even in the case of slightly sublinear memory per machine. In fact, our improvement here is almost exponential: we are able to deliver a (2+ε)-approximate maximum matching, for any fixed constant ε > 0, in O((log log n)^2) rounds. To establish our result we need to deviate from the previous work in two important ways that are crucial for exploiting the power of the MPC model, as compared to the PRAM model.
Firstly, we use vertex-based graph partitioning, instead of the edge-based approaches that were utilized so far. Secondly, we develop a technique of round compression. This technique enables one to take a (distributed) algorithm that computes an O(1)-approximation of maximum matching in O(log n) independent PRAM phases and implement a super-constant number of these phases in only a constant number of MPC rounds. @InProceedings{STOC18p471, author = {Artur Czumaj and Jakub Łącki and Aleksander Mądry and Slobodan Mitrović and Krzysztof Onak and Piotr Sankowski}, title = {Round Compression for Parallel Matching Algorithms}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {471--484}, doi = {}, year = {2018}, } |
|
Mahabadi, Sepideh |
STOC '18: "Nonlinear Dimension Reduction ..."
Nonlinear Dimension Reduction via Outer Bi-Lipschitz Extensions
Sepideh Mahabadi, Konstantin Makarychev, Yury Makarychev, and Ilya Razenshteyn (Columbia University, USA; Northwestern University, USA; Toyota Technological Institute at Chicago, USA; Microsoft Research, USA) We introduce and study the notion of *an outer bi-Lipschitz extension* of a map between Euclidean spaces. The notion is a natural analogue of the notion of *a Lipschitz extension* of a Lipschitz map. We show that for every map f there exists an outer bi-Lipschitz extension f′ whose distortion is greater than that of f by at most a constant factor. This result can be seen as a counterpart of the classic Kirszbraun theorem for outer bi-Lipschitz extensions. We also study outer bi-Lipschitz extensions of near-isometric maps and show upper and lower bounds for them. Then, we present applications of our results to prioritized and terminal dimension reduction problems, described next. We prove a *prioritized* variant of the Johnson–Lindenstrauss lemma: given a set of points X ⊂ ℝ^d of size N and a permutation ("priority ranking") of X, there exists an embedding f of X into ℝ^{O(log N)} with distortion O(log log N) such that the point of rank j has only O(log^{3+ε} j) non-zero coordinates – more specifically, all but the first O(log^{3+ε} j) coordinates are equal to 0; the distortion of f restricted to the first j points (according to the ranking) is at most O(log log j). The result makes progress towards answering an open question of Elkin, Filtser, and Neiman about prioritized dimension reductions. We also prove that given a set X of N points in ℝ^d, there exists a *terminal* dimension reduction embedding of ℝ^d into ℝ^{d′}, where d′ = O(log N/ε^4), which preserves distances ||x−y|| between points x ∈ X and y ∈ ℝ^d, up to a multiplicative factor of 1 ± ε. This improves a recent result of Elkin, Filtser, and Neiman. The dimension reductions that we obtain are nonlinear, and this nonlinearity is necessary. @InProceedings{STOC18p1088, author = {Sepideh Mahabadi and Konstantin Makarychev and Yury Makarychev and Ilya Razenshteyn}, title = {Nonlinear Dimension Reduction via Outer Bi-Lipschitz Extensions}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1088--1101}, doi = {}, year = {2018}, } |
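For orientation, the baseline that the prioritized and terminal variants above refine is the standard Johnson–Lindenstrauss random projection. Note the map below is linear, whereas the abstract's point is that its stronger embeddings are necessarily nonlinear; this is only the textbook construction, not the paper's.

```python
import numpy as np

def jl_embed(X, target_dim, seed=0):
    """Standard Johnson-Lindenstrauss embedding via a random Gaussian
    matrix: preserves all pairwise distances of the N rows of X up to
    1 +/- eps when target_dim = O(log N / eps^2). Baseline only."""
    rng = np.random.default_rng(seed)
    G = rng.normal(size=(X.shape[1], target_dim)) / np.sqrt(target_dim)
    return X @ G

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 1000))
Y = jl_embed(X, target_dim=200)
# pairwise distances are preserved up to small multiplicative error
i, j = 3, 7
print(np.linalg.norm(X[i] - X[j]), np.linalg.norm(Y[i] - Y[j]))
```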
|
Makarychev, Konstantin |
STOC '18: "Nonlinear Dimension Reduction ..."
Nonlinear Dimension Reduction via Outer Bi-Lipschitz Extensions
Sepideh Mahabadi, Konstantin Makarychev, Yury Makarychev, and Ilya Razenshteyn (Columbia University, USA; Northwestern University, USA; Toyota Technological Institute at Chicago, USA; Microsoft Research, USA) We introduce and study the notion of *an outer bi-Lipschitz extension* of a map between Euclidean spaces. The notion is a natural analogue of the notion of *a Lipschitz extension* of a Lipschitz map. We show that for every map f there exists an outer bi-Lipschitz extension f′ whose distortion is greater than that of f by at most a constant factor. This result can be seen as a counterpart of the classic Kirszbraun theorem for outer bi-Lipschitz extensions. We also study outer bi-Lipschitz extensions of near-isometric maps and show upper and lower bounds for them. Then, we present applications of our results to prioritized and terminal dimension reduction problems, described next. We prove a *prioritized* variant of the Johnson–Lindenstrauss lemma: given a set of points X ⊂ ℝ^d of size N and a permutation (“priority ranking”) of X, there exists an embedding f of X into ℝ^{O(log N)} with distortion O(log log N) such that the point of rank j has only O(log^{3+ε} j) non-zero coordinates – more specifically, all but the first O(log^{3+ε} j) coordinates are equal to 0; the distortion of f restricted to the first j points (according to the ranking) is at most O(log log j). The result makes progress towards answering an open question of Elkin, Filtser, and Neiman about prioritized dimension reductions. We prove that given a set X of N points in ℝ^d, there exists a *terminal* dimension reduction embedding of ℝ^d into ℝ^{d′}, where d′ = O(log N / ε^4), which preserves the distances ||x−y|| between points x ∈ X and y ∈ ℝ^d up to a multiplicative factor of 1 ± ε. This improves a recent result of Elkin, Filtser, and Neiman. The dimension reductions that we obtain are nonlinear, and this nonlinearity is necessary. @InProceedings{STOC18p1088, author = {Sepideh Mahabadi and Konstantin Makarychev and Yury Makarychev and Ilya Razenshteyn}, title = {Nonlinear Dimension Reduction via Outer Bi-Lipschitz Extensions}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1088--1101}, doi = {}, year = {2018}, } |
|
Makarychev, Yury |
STOC '18: "Nonlinear Dimension Reduction ..."
Nonlinear Dimension Reduction via Outer Bi-Lipschitz Extensions
Sepideh Mahabadi, Konstantin Makarychev, Yury Makarychev, and Ilya Razenshteyn (Columbia University, USA; Northwestern University, USA; Toyota Technological Institute at Chicago, USA; Microsoft Research, USA) We introduce and study the notion of *an outer bi-Lipschitz extension* of a map between Euclidean spaces. The notion is a natural analogue of the notion of *a Lipschitz extension* of a Lipschitz map. We show that for every map f there exists an outer bi-Lipschitz extension f′ whose distortion is greater than that of f by at most a constant factor. This result can be seen as a counterpart of the classic Kirszbraun theorem for outer bi-Lipschitz extensions. We also study outer bi-Lipschitz extensions of near-isometric maps and show upper and lower bounds for them. Then, we present applications of our results to prioritized and terminal dimension reduction problems, described next. We prove a *prioritized* variant of the Johnson–Lindenstrauss lemma: given a set of points X ⊂ ℝ^d of size N and a permutation (“priority ranking”) of X, there exists an embedding f of X into ℝ^{O(log N)} with distortion O(log log N) such that the point of rank j has only O(log^{3+ε} j) non-zero coordinates – more specifically, all but the first O(log^{3+ε} j) coordinates are equal to 0; the distortion of f restricted to the first j points (according to the ranking) is at most O(log log j). The result makes progress towards answering an open question of Elkin, Filtser, and Neiman about prioritized dimension reductions. We prove that given a set X of N points in ℝ^d, there exists a *terminal* dimension reduction embedding of ℝ^d into ℝ^{d′}, where d′ = O(log N / ε^4), which preserves the distances ||x−y|| between points x ∈ X and y ∈ ℝ^d up to a multiplicative factor of 1 ± ε. This improves a recent result of Elkin, Filtser, and Neiman. The dimension reductions that we obtain are nonlinear, and this nonlinearity is necessary. @InProceedings{STOC18p1088, author = {Sepideh Mahabadi and Konstantin Makarychev and Yury Makarychev and Ilya Razenshteyn}, title = {Nonlinear Dimension Reduction via Outer Bi-Lipschitz Extensions}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1088--1101}, doi = {}, year = {2018}, } |
|
Manurangsi, Pasin |
STOC '18: "On the Parameterized Complexity ..."
On the Parameterized Complexity of Approximating Dominating Set
Karthik C. S., Bundit Laekhanukit, and Pasin Manurangsi (Weizmann Institute of Science, Israel; Max Planck Institute for Informatics, Germany; Shanghai University of Finance and Economics, China; University of California at Berkeley, USA) We study the parameterized complexity of approximating the k-Dominating Set (domset) problem, where an integer k and a graph G on n vertices are given as input, and the goal is to find a dominating set of size at most F(k) · k whenever the graph G has a dominating set of size k. When such an algorithm runs in time T(k) · poly(n) (i.e., FPT-time) for some computable function T, it is said to be an F(k)-FPT-approximation algorithm for k-domset. Whether such an algorithm exists is listed in the seminal book of Downey and Fellows (2013) as one of the “most infamous” open problems in Parameterized Complexity. This work gives an almost complete answer to this question by showing the non-existence of such an algorithm under W[1]≠FPT and further providing tighter running time lower bounds under stronger hypotheses. Specifically, we prove the following for all computable functions T, F and every constant ε > 0: (i) Assuming W[1]≠FPT, there is no F(k)-FPT-approximation algorithm for k-domset. (ii) Assuming the Exponential Time Hypothesis (ETH), there is no F(k)-approximation algorithm for k-domset that runs in T(k) · n^{o(k)} time. (iii) Assuming the Strong Exponential Time Hypothesis (SETH), for every integer k ≥ 2, there is no F(k)-approximation algorithm for k-domset that runs in T(k) · n^{k−ε} time. (iv) Assuming the k-SUM Hypothesis, for every integer k ≥ 3, there is no F(k)-approximation algorithm for k-domset that runs in T(k) · n^{⌈k/2⌉−ε} time. Previously, only constant-ratio FPT-approximation algorithms were ruled out under W[1]≠FPT, and (log^{1/4−ε} k)-FPT-approximation algorithms were ruled out under ETH [Chen and Lin, FOCS 2016]. Recently, the non-existence of an F(k)-FPT-approximation algorithm for any function F was shown under Gap-ETH [Chalermsook et al., FOCS 2017]. Note that, to the best of our knowledge, no running time lower bound of the form n^{δk} for any absolute constant δ > 0 was known before, even for any constant inapproximability ratio. Our results are obtained by establishing a connection between communication complexity and hardness of approximation, generalizing the ideas from a recent breakthrough work of Abboud et al. [FOCS 2017]. Specifically, we show that to prove hardness of approximation of a certain parameterized variant of the label cover problem, it suffices to devise a specific protocol for a communication problem that depends on which hypothesis we rely on. Each of these communication problems turns out to be either a well-studied problem or a variant of one; this allows us to easily apply known techniques to solve them. @InProceedings{STOC18p1283, author = {Karthik C. S. and Bundit Laekhanukit and Pasin Manurangsi}, title = {On the Parameterized Complexity of Approximating Dominating Set}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1283--1296}, doi = {}, year = {2018}, } |
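For context, a minimal sketch of the trivial exact algorithm that the lower bounds above benchmark against: trying all k-subsets takes roughly n^k · poly(n) time. The toy instance below is hypothetical.

```python
from itertools import combinations

def has_dominating_set(adj, k):
    """adj: dict vertex -> set of neighbours. Tries all k-subsets of vertices."""
    vertices = list(adj)
    for cand in combinations(vertices, k):
        dominated = set(cand)
        for v in cand:
            dominated |= adj[v]       # a vertex dominates itself and its neighbours
        if len(dominated) == len(vertices):
            return True
    return False

# 5-cycle: a dominating set of size 2 exists, but none of size 1.
cycle5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
print(has_dominating_set(cycle5, 1), has_dominating_set(cycle5, 2))  # False True
```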
|
Marx, Dániel |
STOC '18: "A Framework for ETH-Tight ..."
A Framework for ETH-Tight Algorithms and Lower Bounds in Geometric Intersection Graphs
Mark de Berg, Hans L. Bodlaender, Sándor Kisfaludi-Bak, Dániel Marx, and Tom C. van der Zanden (Eindhoven University of Technology, Netherlands; Utrecht University, Netherlands; Hungarian Academy of Sciences, Hungary) We give an algorithmic and lower-bound framework that facilitates the construction of subexponential algorithms and matching conditional complexity bounds. It can be applied to a wide range of geometric intersection graphs (intersections of similarly sized fat objects), yielding algorithms with running time 2^{O(n^{1−1/d})} for any fixed dimension d ≥ 2 for many well-known graph problems, including Independent Set, r-Dominating Set for constant r, and Steiner Tree. For most problems, we get improved running times compared to prior work; in some cases, we give the first known subexponential algorithm in geometric intersection graphs. Additionally, most of the obtained algorithms work on the graph itself, i.e., they do not require any geometric information. Our algorithmic framework is based on a weighted separator theorem and various treewidth techniques. The lower-bound framework is based on a constructive embedding of graphs into d-dimensional grids, and it allows us to derive matching 2^{Ω(n^{1−1/d})} lower bounds under the Exponential Time Hypothesis, even in the much more restricted class of d-dimensional induced grid graphs. @InProceedings{STOC18p574, author = {Mark de Berg and Hans L. Bodlaender and Sándor Kisfaludi-Bak and Dániel Marx and Tom C. van der Zanden}, title = {A Framework for ETH-Tight Algorithms and Lower Bounds in Geometric Intersection Graphs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {574--586}, doi = {}, year = {2018}, } |
|
Maus, Yannic |
STOC '18: "Deterministic Distributed ..."
Deterministic Distributed Edge-Coloring with Fewer Colors
Mohsen Ghaffari, Fabian Kuhn, Yannic Maus, and Jara Uitto (ETH Zurich, Switzerland; University of Freiburg, Germany) We present a deterministic distributed algorithm, in the LOCAL model, that computes a (1+o(1))Δ-edge-coloring in polylogarithmic time, so long as the maximum degree Δ = Ω(log n). For smaller Δ, we give a polylogarithmic-time 3Δ/2-edge-coloring. These are the first deterministic algorithms to go below the natural barrier of 2Δ−1 colors, and they improve significantly on the recent polylogarithmic-time (2Δ−1)(1+o(1))-edge-coloring of Ghaffari and Su [SODA’17] and the (2Δ−1)-edge-coloring of Fischer, Ghaffari, and Kuhn [FOCS’17], positively answering the main open question of the latter. The key technical ingredient of our algorithm is a simple and novel gradual packing of judiciously chosen near-maximum matchings, each of which becomes one of the color classes. @InProceedings{STOC18p418, author = {Mohsen Ghaffari and Fabian Kuhn and Yannic Maus and Jara Uitto}, title = {Deterministic Distributed Edge-Coloring with Fewer Colors}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {418--430}, doi = {}, year = {2018}, } |
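A sequential toy version of the "natural barrier" mentioned above: greedy edge coloring always succeeds with 2Δ−1 colors, because each edge conflicts with at most 2(Δ−1) incident edges, so some color is always free. This only illustrates the baseline, not the paper's distributed algorithm.

```python
def greedy_edge_colouring(edges, delta):
    """Assign each edge a colour in {0, ..., 2*delta-2} avoiding incident edges."""
    colour, used_at = {}, {}  # used_at: vertex -> colours on incident edges
    for (u, v) in edges:
        banned = used_at.get(u, set()) | used_at.get(v, set())
        c = next(c for c in range(2 * delta - 1) if c not in banned)
        colour[(u, v)] = c
        used_at.setdefault(u, set()).add(c)
        used_at.setdefault(v, set()).add(c)
    return colour

edges = [(0, 1), (1, 2), (2, 0), (0, 3)]  # triangle plus a pendant edge; Delta = 3
print(greedy_edge_colouring(edges, 3))
```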
|
Mehr, Mehran |
STOC '18: "Fast Fencing ..."
Fast Fencing
Mikkel Abrahamsen, Anna Adamaszek, Karl Bringmann, Vincent Cohen-Addad, Mehran Mehr, Eva Rotenberg, Alan Roytman, and Mikkel Thorup (University of Copenhagen, Denmark; Max Planck Institute for Informatics, Germany; Sorbonne University, France; UPMC, France; CNRS, France; LIP6, France; Eindhoven University of Technology, Netherlands; DTU, Denmark) We consider very natural “fence enclosure” problems studied by Capoyleas, Rote, and Woeginger and by Arkin, Khuller, and Mitchell in the early 90s. Given a set S of n points in the plane, we aim at finding a set of closed curves such that (1) each point is enclosed by a curve and (2) the total length of the curves is minimized. We consider two main variants. In the first variant, we pay a unit cost per curve in addition to the total length of the curves. An equivalent formulation of this version is that we have to enclose n unit disks, paying only the total length of the enclosing curves. In the other variant, we are allowed to use at most k closed curves and pay no cost per curve. For the variant with at most k closed curves, we present an algorithm that is polynomial in both n and k. For the variant with unit cost per curve, or unit disks, we present a near-linear time algorithm. Capoyleas, Rote, and Woeginger solved the problem with at most k curves in n^{O(k)} time. Arkin, Khuller, and Mitchell used this to solve the unit-cost-per-curve version in exponential time. At the time, they conjectured that the problem with k curves is NP-hard for general k. Our polynomial-time algorithm refutes this conjecture unless P equals NP. @InProceedings{STOC18p564, author = {Mikkel Abrahamsen and Anna Adamaszek and Karl Bringmann and Vincent Cohen-Addad and Mehran Mehr and Eva Rotenberg and Alan Roytman and Mikkel Thorup}, title = {Fast Fencing}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {564--573}, doi = {}, year = {2018}, } |
|
Mehta, Ruta |
STOC '18: "Sum-of-Squares Meets Nash: ..."
Sum-of-Squares Meets Nash: Lower Bounds for Finding Any Equilibrium
Pravesh K. Kothari and Ruta Mehta (Princeton University, USA; Institute for Advanced Study at Princeton, USA; University of Illinois at Urbana-Champaign, USA) Computing a Nash equilibrium (NE) in a two-player game is a central question in algorithmic game theory. The main motivation of this work is to understand the power of the sum-of-squares method in computing equilibria, both exact and approximate. Previous works in this context have focused on the hardness of approximating “best” equilibria with respect to some natural quality measure on equilibria, such as social welfare. Such results, however, do not directly relate to the complexity of the problem of finding any equilibrium. In this work, we propose a framework of roundings for the sum-of-squares algorithm (and convex relaxations in general) applicable to finding approximate/exact equilibria in two-player bimatrix games. Specifically, we define the notion of oblivious roundings with verification oracle (OV). These are algorithms that can access a solution to the degree-d SoS relaxation to construct a list of candidate (partial) solutions and invoke a verification oracle to check if a candidate in the list gives an (exact or approximate) equilibrium. This framework captures most known approximation algorithms in combinatorial optimization, including the celebrated semidefinite-programming-based algorithms for Max-Cut, Constraint-Satisfaction Problems, and the recent works on SoS relaxations for Unique Games/Small-Set Expansion, Best Separable State, and many problems in unsupervised machine learning. Our main results are strong unconditional lower bounds in this framework. Specifically, we show that for є = Θ(1/poly(n)), there is no algorithm that uses a o(n)-degree SoS relaxation to construct a 2^{o(n)}-size list of candidates and obtain an є-approximate NE. For some constant є, we show a similar result for degree-o(log n) SoS relaxations and list size n^{o(log n)}. Our results can be seen as an unconditional confirmation, in our restricted algorithmic framework, of the recent Exponential Time Hypothesis for PPAD. Our proof strategy involves constructing a family of games that all share a common sum-of-squares solution, but every (approximate) equilibrium of any game is far from every equilibrium of any other game in the family (in either player’s strategy). Along the way, we strengthen the classical unconditional lower bound against enumerative algorithms for finding approximate equilibria due to Daskalakis and Papadimitriou and the classical hardness of computing equilibria due to Gilboa and Zemel. @InProceedings{STOC18p1241, author = {Pravesh K. Kothari and Ruta Mehta}, title = {Sum-of-Squares Meets Nash: Lower Bounds for Finding Any Equilibrium}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1241--1248}, doi = {}, year = {2018}, } |
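The verification oracle in the OV framework is computationally easy for bimatrix games: against a fixed mixed strategy of the opponent, the best response is a pure strategy, so an ε-approximate NE can be checked with two matrix products. A minimal sketch, using a hypothetical coordination game as input:

```python
import numpy as np

def is_eps_nash(A, B, x, y, eps):
    """A, B: row/column player payoff matrices; x, y: mixed strategies."""
    vx = x @ A @ y          # row player's current expected payoff
    vy = x @ B @ y          # column player's current expected payoff
    best_x = np.max(A @ y)  # best pure deviation for the row player
    best_y = np.max(x @ B)  # best pure deviation for the column player
    return best_x - vx <= eps and best_y - vy <= eps

A = np.array([[1.0, 0.0], [0.0, 1.0]])  # a 2x2 coordination game
B = A.copy()
x = y = np.array([0.5, 0.5])
print(is_eps_nash(A, B, x, y, eps=0.0))  # uniform mixing is an exact NE here
```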
|
Miltzow, Tillmann |
STOC '18: "The Art Gallery Problem Is ..."
The Art Gallery Problem Is ∃ ℝ-Complete
Mikkel Abrahamsen, Anna Adamaszek, and Tillmann Miltzow (University of Copenhagen, Denmark; Université Libre de Bruxelles, Belgium) We prove that the art gallery problem is equivalent under polynomial time reductions to deciding whether a system of polynomial equations over the real numbers has a solution. The art gallery problem is a classic problem in computational geometry, introduced in 1973 by Victor Klee. Given a simple polygon P and an integer k, the goal is to decide if there exists a set G of k guards within P such that every point p ∈ P is seen by at least one guard g ∈ G. Each guard corresponds to a point in the polygon P, and we say that a guard g sees a point p if the line segment pg is contained in P. The art gallery problem has stimulated extensive research in geometry and in algorithms. However, the complexity status of the art gallery problem has not been resolved. It has long been known that the problem is NP-hard, but no one has been able to show that it lies in NP. Recently, the computational geometry community became more aware of the complexity class ∃ ℝ, which has been studied earlier by other communities. The class ∃ ℝ consists of problems that can be reduced in polynomial time to the problem of deciding whether a system of polynomial equations with integer coefficients and any number of real variables has a solution. It can be easily seen that NP ⊆ ∃ ℝ. We prove that the art gallery problem is ∃ ℝ-complete, implying that (1) any system of polynomial equations over the real numbers can be encoded as an instance of the art gallery problem, and (2) the art gallery problem is not in the complexity class NP unless NP = ∃ ℝ. As an illustration of our techniques, we can show that for every compact semi-algebraic set S ⊂ [0,1]^2 there exists a polygon with rational coordinates that enforces one of the guards to be at any position p ∈ S, in any optimal guarding. As a corollary of our construction, we prove that for any real algebraic number α there is an instance of the art gallery problem where one of the coordinates of the guards equals α in any guard set of minimum cardinality. That rules out many natural geometric approaches to the problem, as it shows that any approach based on constructing a finite set of candidate points for placing guards has to include points with coordinates being roots of polynomials of arbitrary degree. In the ∃ ℝ-hardness proof for the art gallery problem we introduce a new ∃ ℝ-complete problem ETR-INV. We believe that this problem is of independent interest, as it can be used to obtain ∃ ℝ-hardness proofs for other problems. In particular, ETR-INV has very recently been used to prove ∃ ℝ-hardness of other geometric problems. @InProceedings{STOC18p65, author = {Mikkel Abrahamsen and Anna Adamaszek and Tillmann Miltzow}, title = {The Art Gallery Problem Is ∃ ℝ-Complete}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {65--73}, doi = {}, year = {2018}, } |
|
Minzer, Dor |
STOC '18: "Towards a Proof of the 2-to-1 ..."
Towards a Proof of the 2-to-1 Games Conjecture?
Irit Dinur, Subhash Khot, Guy Kindler, Dor Minzer, and Muli Safra (Weizmann Institute of Science, Israel; New York University, USA; Hebrew University of Jerusalem, Israel; Tel Aviv University, Israel) We present a polynomial time reduction from gap-3LIN to label cover with 2-to-1 constraints. In the “yes” case the fraction of satisfied constraints is at least 1 − ε, and in the “no” case we show that this fraction is at most ε, assuming a certain (new) combinatorial hypothesis on the Grassmann graph. In other words, we describe a combinatorial hypothesis that implies the 2-to-1 conjecture with imperfect completeness. The companion submitted paper [Dinur, Khot, Kindler, Minzer and Safra, STOC 2018] makes some progress towards proving this hypothesis. Our work builds on earlier work by a subset of the authors [Khot, Minzer and Safra, STOC 2017], where a slightly different hypothesis was used to obtain hardness of approximating vertex cover to within a factor of √2−ε. The most important implication of this work is (assuming the hypothesis) an NP-hardness gap of 1/2−ε vs. ε for unique games. In addition, we derive optimal NP-hardness for approximating the max-cut-gain problem, NP-hardness of coloring an almost 4-colorable graph with any constant number of colors, and the same √2−ε NP-hardness for approximate vertex cover that was already obtained based on a slightly different hypothesis. Recent progress towards proving our hypothesis [Barak, Kothari and Steurer, ECCC TR18-077], [Dinur, Khot, Kindler, Minzer and Safra, STOC 2018] directly implies some new unconditional NP-hardness results. These include new points of NP-hardness for unique games and for 2-to-1 and 2-to-2 games. More recently, the full version of our hypothesis was proven [Khot, Minzer and Safra, ECCC TR18-006]. @InProceedings{STOC18p376, author = {Irit Dinur and Subhash Khot and Guy Kindler and Dor Minzer and Muli Safra}, title = {Towards a Proof of the 2-to-1 Games Conjecture?}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {376--389}, doi = {}, year = {2018}, } STOC '18: "On Non-optimally Expanding ..." On Non-optimally Expanding Sets in Grassmann Graphs Irit Dinur, Subhash Khot, Guy Kindler, Dor Minzer, and Muli Safra (Weizmann Institute of Science, Israel; New York University, USA; Hebrew University of Jerusalem, Israel; Tel Aviv University, Israel) We study the structure of non-expanding sets in the Grassmann graph. We put forth a hypothesis stating that every small set whose expansion is smaller than 1−δ must be correlated with one of a specified list of sets which are isomorphic to smaller Grassmann graphs. We develop a framework of Fourier analysis for analyzing functions over the Grassmann graph, and prove that our hypothesis holds for all sets whose expansion is below 7/8. In the companion submitted paper [Dinur, Khot, Kindler, Minzer and Safra, STOC 2018], the authors show that a linearity agreement hypothesis implies an NP-hardness gap of 1/2−ε vs. ε for unique games and other inapproximability results. In [Barak, Kothari and Steurer, ECCC TR18-077], the authors show that the hypothesis in this work implies the linearity agreement hypothesis of [Dinur, Khot, Kindler, Minzer and Safra, STOC 2018]. Combined with our main theorem here, this proves a version of the linearity agreement hypothesis with certain specific parameters. Short of proving the entire hypothesis, this nevertheless suffices for getting new unconditional NP-hardness gaps for label cover with 2-to-1 and unique constraints.
Our Expansion Hypothesis has subsequently been proved in its full form [Khot, Minzer and Safra, ECCC TR18-006], thereby proving the agreement hypothesis of [Dinur, Khot, Kindler, Minzer and Safra, STOC 2018] and completing the proof of the 2-to-1 Games Conjecture (albeit with imperfect completeness). @InProceedings{STOC18p940, author = {Irit Dinur and Subhash Khot and Guy Kindler and Dor Minzer and Muli Safra}, title = {On Non-optimally Expanding Sets in Grassmann Graphs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {940--951}, doi = {}, year = {2018}, } |
|
Mirrokni, Vahab |
STOC '18: "Stochastic Bandits Robust ..."
Stochastic Bandits Robust to Adversarial Corruptions
Thodoris Lykouris, Vahab Mirrokni, and Renato Paes Leme (Cornell University, USA; Google Research, USA) We introduce a new model of stochastic bandits with adversarial corruptions, which aims to capture settings where most of the input follows a stochastic pattern but some fraction of it can be adversarially changed to trick the algorithm, e.g., click fraud, fake reviews, and email spam. The goal of this model is to encourage the design of bandit algorithms that (i) work well in mixed adversarial and stochastic models, and (ii) whose performance deteriorates gracefully as we move from fully stochastic to fully adversarial models. In our model, the rewards for all arms are initially drawn from a distribution and are then altered by an adaptive adversary. We provide a simple algorithm whose performance gracefully degrades with the total corruption the adversary injected into the data, measured by the sum across rounds of the biggest alteration the adversary made to the data in that round; this total corruption is denoted by C. Our algorithm retains the optimal guarantee (up to a logarithmic term) if the input is stochastic, and its performance degrades linearly in the amount of corruption C, while crucially being agnostic to it. We also provide a lower bound showing that this linear degradation is necessary if the algorithm achieves optimal performance in the stochastic setting (the lower bound works even for a known amount of corruption, a special case in which our algorithm achieves optimal performance without the extra logarithm). @InProceedings{STOC18p114, author = {Thodoris Lykouris and Vahab Mirrokni and Renato Paes Leme}, title = {Stochastic Bandits Robust to Adversarial Corruptions}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {114--122}, doi = {}, year = {2018}, } |
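A toy rendering of the corruption model, not the authors' algorithm: rewards are drawn stochastically, then an adversary with total budget C perturbs them. The classical UCB learner below only illustrates the setting in which robustness is measured; all constants, arm means, and the adversary's strategy are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
means, T, budget = [0.6, 0.4], 5000, 200.0  # arm means, horizon, corruption C

def reward(arm):
    global budget
    r = float(rng.random() < means[arm])  # stochastic Bernoulli reward
    if arm == 0 and budget >= 1.0:        # adversary zeroes out the best arm's
        budget -= 1.0                     # reward while its budget C lasts
        r = 0.0
    return r

counts, sums = np.zeros(2), np.zeros(2)
for t in range(1, T + 1):
    if t <= 2:
        arm = t - 1  # initialization: pull each arm once
    else:
        ucb = sums / counts + np.sqrt(2 * np.log(t) / counts)
        arm = int(np.argmax(ucb))
    counts[arm] += 1
    sums[arm] += reward(arm)

print("pulls of the truly best arm:", int(counts[0]))
```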
|
Mitrović, Slobodan |
STOC '18: "Round Compression for Parallel ..."
Round Compression for Parallel Matching Algorithms
Artur Czumaj, Jakub Łącki, Aleksander Mądry, Slobodan Mitrović, Krzysztof Onak, and Piotr Sankowski (University of Warwick, UK; Google Research, USA; Massachusetts Institute of Technology, USA; EPFL, Switzerland; IBM Research, USA; University of Warsaw, Poland) For over a decade now we have been witnessing the success of massive parallel computation (MPC) frameworks, such as MapReduce, Hadoop, Dryad, or Spark. One of the reasons for their success is that these frameworks are able to accurately capture the nature of large-scale computation. In particular, compared to the classic distributed algorithms or PRAM models, these frameworks allow for much more local computation. The fundamental question that arises in this context is: can we leverage this additional power to obtain even faster parallel algorithms? A prominent example here is the maximum matching problem—one of the most classic graph problems. It is well known that in the PRAM model one can compute a 2-approximate maximum matching in O(log n) rounds. However, the exact complexity of this problem in the MPC framework is still far from understood. Lattanzi et al. (SPAA 2011) showed that if each machine has n^{1+Ω(1)} memory, this problem can also be solved 2-approximately in a constant number of rounds. These techniques, as well as the approaches developed in the follow-up work, seem to get stuck in a fundamental way at roughly O(log n) rounds once we enter the (at most) near-linear memory regime. It is thus entirely possible that in this regime, which captures in particular the case of sparse graph computations, the best MPC round complexity matches what one can already get in the PRAM model, without the need to take advantage of the extra local computation power. In this paper, we finally refute that possibility. That is, we break the above O(log n) round complexity bound even in the case of slightly sublinear memory per machine. In fact, our improvement here is almost exponential: we are able to deliver a (2+є)-approximate maximum matching, for any fixed constant є>0, in O((log log n)^2) rounds. To establish our result we need to deviate from the previous work in two important ways that are crucial for exploiting the power of the MPC model, as compared to the PRAM model. Firstly, we use vertex-based graph partitioning, instead of the edge-based approaches that were utilized so far. Secondly, we develop a technique of round compression. This technique enables one to take a (distributed) algorithm that computes an O(1)-approximation of maximum matching in O(log n) independent PRAM phases and implement a super-constant number of these phases in only a constant number of MPC rounds. @InProceedings{STOC18p471, author = {Artur Czumaj and Jakub Łącki and Aleksander Mądry and Slobodan Mitrović and Krzysztof Onak and Piotr Sankowski}, title = {Round Compression for Parallel Matching Algorithms}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {471--484}, doi = {}, year = {2018}, } |
|
Mokhov, Andrey |
STOC '18: "On the Complexity of Hazard-Free ..."
On the Complexity of Hazard-Free Circuits
Christian Ikenmeyer, Balagopal Komarath, Christoph Lenzen, Vladimir Lysikov, Andrey Mokhov, and Karteek Sreenivasaiah (Max Planck Institute for Informatics, Germany; Saarland University, Germany; M2CI, Germany; Newcastle University, UK; IIT Hyderabad, India) The problem of constructing hazard-free Boolean circuits dates back to the 1940s and is an important problem in circuit design. Our main lower-bound result unconditionally shows the existence of functions whose circuit complexity is polynomially bounded while every hazard-free implementation is provably of exponential size. Previous lower bounds on the hazard-free complexity were only valid for depth 2 circuits. The same proof method yields that every subcubic implementation of Boolean matrix multiplication must have hazards. These results follow from a crucial structural insight: Hazard-free complexity is a natural generalization of monotone complexity to all (not necessarily monotone) Boolean functions. Thus, we can apply known monotone complexity lower bounds to find lower bounds on the hazard-free complexity. We also lift these methods from the monotone setting to prove exponential hazard-free complexity lower bounds for non-monotone functions. As our main upper-bound result we show how to efficiently convert a Boolean circuit into a bounded-bit hazard-free circuit with only a polynomially large blow-up in the number of gates. Previously, the best known method yielded exponentially large circuits in the worst case, so our algorithm gives an exponential improvement. As a side result we establish the NP-completeness of several hazard detection problems. @InProceedings{STOC18p878, author = {Christian Ikenmeyer and Balagopal Komarath and Christoph Lenzen and Vladimir Lysikov and Andrey Mokhov and Karteek Sreenivasaiah}, title = {On the Complexity of Hazard-Free Circuits}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {878--889}, doi = {}, year = {2018}, } |
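Hazards are naturally phrased via Kleene's three-valued logic: feed the unstable value u into the toggling input, and the circuit has a hazard if it outputs u even though the Boolean function is constant over both settings of that input. A minimal sketch using the textbook example of a two-input multiplexer (this illustrates the notion of a hazard, not the paper's constructions):

```python
# Kleene three-valued logic over {'0', '1', 'u'} ('u' = unstable/unknown).
def t_not(x): return {'0': '1', '1': '0', 'u': 'u'}[x]
def t_and(x, y):
    if x == '0' or y == '0': return '0'
    if x == '1' and y == '1': return '1'
    return 'u'
def t_or(x, y):
    if x == '1' or y == '1': return '1'
    if x == '0' and y == '0': return '0'
    return 'u'

def mux(s, a, b):
    """The natural multiplexer circuit out = (s AND a) OR (NOT s AND b)."""
    return t_or(t_and(s, a), t_and(t_not(s), b))

# With a = b = 1 the function value is 1 for both s = 0 and s = 1, yet the
# ternary output on s = 'u' is 'u': the circuit has a (static) hazard.
print(mux('0', '1', '1'), mux('1', '1', '1'), mux('u', '1', '1'))  # 1 1 u
```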
|
Mömke, Tobias |
STOC '18: "A (5/3 + ε)-Approximation ..."
A (5/3 + ε)-Approximation for Unsplittable Flow on a Path: Placing Small Tasks into Boxes
Fabrizio Grandoni, Tobias Mömke, Andreas Wiese, and Hang Zhou (University of Lugano, Switzerland; University of Bremen, Germany; Saarland University, Germany; University of Chile, Chile; École Polytechnique, France) In the unsplittable flow on a path problem (UFP) we are given a path with edge capacities and a collection of tasks. Each task is characterized by a subpath, a profit, and a demand. Our goal is to compute a maximum-profit subset of tasks such that, for each edge e, the total demand of selected tasks that use e does not exceed the capacity of e. The current best polynomial-time approximation factor for this problem is 2+є for any constant є>0 [Anagnostopoulos et al., SODA 2014]. This is the best known factor even in the case of uniform edge capacities [Călinescu et al., IPCO 2002, TALG 2011]. These results, like most prior work, are based on a partition of tasks into large and small depending on their ratio of demand to capacity over their respective edges: these algorithms invoke (1+є)-approximations for large and small tasks separately. The known techniques do not seem to be able to combine a big fraction of large and small tasks together (apart from some special cases and quasi-polynomial-time algorithms). The main contribution of this paper is to overcome this critical barrier. Namely, we present a polynomial-time algorithm that obtains roughly all profit from the optimal large tasks plus one third of the profit from the optimal small tasks. In combination with known results, this implies a polynomial-time (5/3+є)-approximation algorithm for UFP. Our algorithm is based on two main ingredients. First, we prove that there exist certain sub-optimal solutions where, roughly speaking, small tasks are packed into boxes. To prove that such solutions can yield high profit we introduce a horizontal slicing lemma which yields a novel geometric interpretation of certain solutions. The resulting boxed structure has polynomial complexity, hence cannot be guessed directly. Therefore, our second contribution is a dynamic program that guesses this structure (plus a packing of large and small tasks) on the fly, while losing at most one third of the profit of the remaining small tasks. @InProceedings{STOC18p607, author = {Fabrizio Grandoni and Tobias Mömke and Andreas Wiese and Hang Zhou}, title = {A (5/3 + ε)-Approximation for Unsplittable Flow on a Path: Placing Small Tasks into Boxes}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {607--619}, doi = {}, year = {2018}, } |
|
Moran, Shay |
STOC '18: "Near-Optimal Linear Decision ..."
Near-Optimal Linear Decision Trees for k-SUM and Related Problems
Daniel M. Kane, Shachar Lovett, and Shay Moran (University of California at San Diego, USA; Institute for Advanced Study at Princeton, USA) We construct near-optimal linear decision trees for a variety of decision problems in combinatorics and discrete geometry. For example, for any constant k, we construct linear decision trees that solve the k-SUM problem on n elements using O(n log^2 n) linear queries. Moreover, the queries we use are comparison queries, which compare the sums of two k-subsets; when viewed as linear queries, comparison queries are 2k-sparse and have only {−1,0,1} coefficients. We give similar constructions for sorting sumsets A+B and for solving the SUBSET-SUM problem, both with an optimal number of queries, up to poly-logarithmic terms. Our constructions are based on the notion of “inference dimension”, recently introduced by the authors in the context of active classification with comparison queries. This can be viewed as another contribution to the fruitful link between machine learning and discrete geometry, which goes back to the discovery of the VC dimension. @InProceedings{STOC18p554, author = {Daniel M. Kane and Shachar Lovett and Shay Moran}, title = {Near-Optimal Linear Decision Trees for k-SUM and Related Problems}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {554--563}, doi = {}, year = {2018}, } |
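For contrast with the O(n log^2 n)-query decision trees above, the trivial strategy queries the sign of every k-subset sum, making Θ(n^k) sparse linear queries. A toy sketch on a hypothetical instance (sign queries of single k-subset sums, a simpler query type than the paper's two-subset comparisons):

```python
from itertools import combinations

def ksum_by_sign_queries(xs, k):
    """Decide k-SUM using only sign queries on k-subset sums."""
    def query(subset):
        # The only access to the input: the sign of a k-sparse linear form.
        s = sum(xs[i] for i in subset)
        return (s > 0) - (s < 0)
    return any(query(c) == 0 for c in combinations(range(len(xs)), k))

print(ksum_by_sign_queries([3, -1, -2, 7], 3))  # True: 3 + (-1) + (-2) == 0
```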
|
Mukhopadhyay, Sagnik |
STOC '18: "Simulation Beats Richness: ..."
Simulation Beats Richness: New Data-Structure Lower Bounds
Arkadev Chattopadhyay, Michal Koucký, Bruno Loff, and Sagnik Mukhopadhyay (TIFR, India; Charles University in Prague, Czechia; INESC TEC, Portugal; University of Porto, Portugal; KTH, Sweden) We develop a new technique for proving lower bounds in the setting of asymmetric communication, a model that was introduced in the famous works of Miltersen (STOC’94) and Miltersen, Nisan, Safra and Wigderson (STOC’95). At the core of our technique is the first simulation theorem in the asymmetric setting, where Alice gets a p × n matrix x over F_2 and Bob gets a vector y ∈ F_2^n. Alice and Bob need to evaluate f(x·y) for a Boolean function f: {0,1}^p → {0,1}. Our simulation theorems show that a deterministic/randomized communication protocol exists for this problem, with cost C·n for Alice and C for Bob, if and only if there exists a deterministic/randomized *parity decision tree* of cost Θ(C) for evaluating f. As applications of this technique, we obtain the following results: 1. The first strong lower bounds against randomized data-structure schemes for the Vector-Matrix-Vector product problem over F_2. Moreover, our method yields strong lower bounds even when the data-structure scheme has tiny advantage over random guessing. 2. The first lower bounds against randomized data-structure schemes for two natural Boolean variants of Orthogonal Vector Counting. 3. We construct an asymmetric communication problem and obtain a deterministic lower bound for it which is provably better than any lower bound that may be obtained by the classical Richness Method of Miltersen et al. (STOC ’95). This seems to be the first known limitation of the Richness Method in the context of proving deterministic lower bounds. @InProceedings{STOC18p1013, author = {Arkadev Chattopadhyay and Michal Koucký and Bruno Loff and Sagnik Mukhopadhyay}, title = {Simulation Beats Richness: New Data-Structure Lower Bounds}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1013--1020}, doi = {}, year = {2018}, } |
|
Murray, Cody |
STOC '18: "Circuit Lower Bounds for Nondeterministic ..."
Circuit Lower Bounds for Nondeterministic Quasi-Polytime: An Easy Witness Lemma for NP and NQP
Cody Murray and Ryan Williams (Massachusetts Institute of Technology, USA) We prove that if every problem in NP has n^k-size circuits for a fixed constant k, then for every NP-verifier and every yes-instance x of length n for that verifier, the verifier’s search space has an n^{O(k^3)}-size witness circuit: a witness for x that can be encoded with a circuit of only n^{O(k^3)} size. An analogous statement is proved for nondeterministic quasi-polynomial time, i.e., NQP = NTIME[n^{log^{O(1)} n}]. This significantly extends the Easy Witness Lemma of Impagliazzo, Kabanets, and Wigderson [JCSS’02], which only held for larger nondeterministic classes such as NEXP. As a consequence, the connections between circuit-analysis algorithms and circuit lower bounds can be considerably sharpened: algorithms for approximately counting satisfying assignments to given circuits which improve over exhaustive search can imply circuit lower bounds for functions in NQP or even NP. To illustrate, applying known algorithms for satisfiability of ACC ∘ THR circuits [R. Williams, STOC 2014] we conclude that for every fixed k, NQP does not have n^{log^k n}-size ACC ∘ THR circuits. @InProceedings{STOC18p890, author = {Cody Murray and Ryan Williams}, title = {Circuit Lower Bounds for Nondeterministic Quasi-Polytime: An Easy Witness Lemma for NP and NQP}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {890--901}, doi = {}, year = {2018}, } |
|
Mütze, Torsten |
STOC '18: "Sparse Kneser Graphs Are Hamiltonian ..."
Sparse Kneser Graphs Are Hamiltonian
Torsten Mütze, Jerri Nummenpalo, and Bartosz Walczak (TU Berlin, Germany; ETH Zurich, Switzerland; Jagiellonian University, Poland) For integers k ≥ 1 and n ≥ 2k+1, the Kneser graph K(n,k) is the graph whose vertices are the k-element subsets of {1,…,n} and whose edges connect pairs of subsets that are disjoint. The Kneser graphs of the form K(2k+1,k) are also known as the odd graphs. We settle an old problem due to Meredith, Lloyd, and Biggs from the 1970s, proving that for every k ≥ 3, the odd graph K(2k+1,k) has a Hamilton cycle. This and a known conditional result due to Johnson imply that all Kneser graphs of the form K(2k+2^a, k) with k ≥ 3 and a ≥ 0 have a Hamilton cycle. We also prove that K(2k+1,k) has at least 2^{2^{k−6}} distinct Hamilton cycles for k ≥ 6. Our proofs are based on a reduction of the Hamiltonicity problem in the odd graph to the problem of finding a spanning tree in a suitably defined hypergraph on Dyck words. @InProceedings{STOC18p912, author = {Torsten Mütze and Jerri Nummenpalo and Bartosz Walczak}, title = {Sparse Kneser Graphs Are Hamiltonian}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {912--919}, doi = {}, year = {2018}, } |
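The restriction k ≥ 3 is essential: for k = 2 the odd graph K(5,2) is the Petersen graph, which famously has no Hamilton cycle. The backtracking search below, a small self-contained sketch, verifies this on its C(5,2) = 10 vertices.

```python
from itertools import combinations

def kneser(n, k):
    """Vertices: k-subsets of {0,...,n-1}; edges: disjoint pairs."""
    V = list(combinations(range(n), k))
    adj = {v: [w for w in V if not set(v) & set(w)] for v in V}
    return V, adj

def has_hamilton_cycle(V, adj):
    start, path, seen = V[0], [V[0]], {V[0]}
    def extend():
        if len(path) == len(V):           # all vertices used:
            return start in adj[path[-1]]  # can we close the cycle?
        for w in adj[path[-1]]:
            if w not in seen:
                path.append(w); seen.add(w)
                if extend(): return True
                path.pop(); seen.remove(w)
        return False
    return extend()

V, adj = kneser(5, 2)                       # the Petersen graph
print(len(V), has_hamilton_cycle(V, adj))   # 10 False
```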
|
Nakkiran, Preetum |
STOC '18: "General Strong Polarization ..."
General Strong Polarization
Jarosław Błasiok, Venkatesan Guruswami, Preetum Nakkiran, Atri Rudra, and Madhu Sudan (Harvard University, USA; Carnegie Mellon University, USA; SUNY Buffalo, USA) Arikan’s exciting discovery of polar codes has provided an altogether new way to efficiently achieve Shannon capacity. Given a (constant-sized) invertible matrix M, a family of polar codes can be associated with this matrix, and its ability to approach capacity follows from the polarization of an associated [0,1]-bounded martingale, namely its convergence in the limit to either 0 or 1 with probability 1. Arikan showed appropriate polarization of the martingale associated with the matrix G_2 = (1 0; 1 1) to get capacity-achieving codes. His analysis was later extended to all matrices M which satisfy an obvious necessary condition for polarization. While Arikan’s theorem does not guarantee that the codes achieve capacity at small blocklengths (specifically at a length that is polynomial in 1/є, where є is the difference between the capacity of a channel and the rate of the code), it turns out that a “strong” analysis of the polarization of the underlying martingale would lead to such constructions. Indeed, for the martingale associated with G_2 such a strong polarization was shown in two independent works ([Guruswami and Xia, IEEE IT ’15] and [Hassani et al., IEEE IT ’14]), thereby resolving a major theoretical challenge associated with the efficient attainment of Shannon capacity. In this work we extend the result above to cover martingales associated with all matrices that satisfy the necessary condition for (weak) polarization. In addition to being vastly more general, our proofs of strong polarization are (in our view) also much simpler and modular. Key to our proof is a notion of local polarization that only depends on the evolution of the martingale in a single time step. We show that local polarization always implies strong polarization. We then apply relatively simple reasoning about conditional entropies to prove local polarization in very general settings. Specifically, our result shows strong polarization over all prime fields and leads to efficient capacity-achieving source codes for compressing arbitrary i.i.d. sources, and capacity-achieving channel codes for arbitrary symmetric memoryless channels. @InProceedings{STOC18p485, author = {Jarosław Błasiok and Venkatesan Guruswami and Preetum Nakkiran and Atri Rudra and Madhu Sudan}, title = {General Strong Polarization}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {485--492}, doi = {}, year = {2018}, } |
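For the binary erasure channel, the martingale associated with G_2 has an explicit form: an erasure probability x splits into x^2 or 2x−x^2, each with probability 1/2 (the mean is x, so it is indeed a martingale). The small simulation below, with illustrative parameters, shows the mass escaping the middle as the number of steps grows; it is a sketch of the polarization phenomenon, not of the paper's proof technique.

```python
import random

def unpolarized_fraction(x0=0.5, steps=20, samples=20_000, seed=0):
    """Fraction of martingale trajectories still far from both 0 and 1."""
    rng = random.Random(seed)
    mid = 0
    for _ in range(samples):
        x = x0
        for _ in range(steps):
            x = x * x if rng.random() < 0.5 else 2 * x - x * x
        if 1e-3 < x < 1 - 1e-3:
            mid += 1
    return mid / samples

for n in (5, 10, 20, 40):
    print(n, unpolarized_fraction(steps=n))  # shrinks as n grows
```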
|
Naor, Assaf |
STOC '18: "Data-Dependent Hashing via ..."
Data-Dependent Hashing via Nonlinear Spectral Gaps
Alexandr Andoni, Assaf Naor, Aleksandar Nikolov, Ilya Razenshteyn, and Erik Waingarten (Columbia University, USA; Princeton University, USA; University of Toronto, Canada; Microsoft Research, USA) We establish a generic reduction from _nonlinear spectral gaps_ of metric spaces to data-dependent Locality-Sensitive Hashing, yielding a new approach to the high-dimensional Approximate Near Neighbor Search problem (ANN) under various distance functions. Using this reduction, we obtain the following results: * For _general_ d-dimensional normed spaces and n-point datasets, we obtain a _cell-probe_ ANN data structure with approximation O(log d / ε^2), space d^{O(1)} n^{1+ε}, and d^{O(1)} n^ε cell probes per query, for any ε>0. No non-trivial approximation was known before in this generality other than the O(√d) bound which follows from embedding a general norm into ℓ_2. * For ℓ_p and Schatten-p norms, we improve the data structure further, to obtain approximation O(p) and sublinear query _time_. For ℓ_p, this improves upon the previous best approximation 2^{O(p)} (which required polynomial, as opposed to near-linear in n, space). For the Schatten-p norm, no non-trivial ANN data structure was known before this work. Previous approaches to the ANN problem either exploit the low dimensionality of a metric, requiring space exponential in the dimension, or circumvent the curse of dimensionality by embedding a metric into a “tractable” space, such as ℓ_1. Our new generic reduction proceeds differently from both of these approaches, using a novel partitioning method. @InProceedings{STOC18p787, author = {Alexandr Andoni and Assaf Naor and Aleksandar Nikolov and Ilya Razenshteyn and Erik Waingarten}, title = {Data-Dependent Hashing via Nonlinear Spectral Gaps}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {787--800}, doi = {}, year = {2018}, } |
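For contrast with the data-dependent partitions above, here is classical *data-independent* LSH: Charikar's random-hyperplane hash for angular distance. This sketch only shows the interface such a hash family implements; dimensions and bit counts are illustrative.

```python
import numpy as np

class HyperplaneLSH:
    def __init__(self, dim, n_bits, seed=0):
        rng = np.random.default_rng(seed)
        self.H = rng.standard_normal((n_bits, dim))  # random hyperplanes

    def hash(self, x):
        return tuple((self.H @ x > 0).astype(int))   # one sign bit per plane

lsh = HyperplaneLSH(dim=50, n_bits=12)
rng = np.random.default_rng(1)
x = rng.standard_normal(50)
y = x + 0.05 * rng.standard_normal(50)  # a near neighbour of x
z = rng.standard_normal(50)             # an unrelated point
hx, hy, hz = lsh.hash(x), lsh.hash(y), lsh.hash(z)
# Near neighbours disagree on few bits; random points on about half.
print(sum(a != b for a, b in zip(hx, hy)), sum(a != b for a, b in zip(hx, hz)))
```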
|
Nayak, Ashwin |
STOC '18: "Capacity Approaching Coding ..."
Capacity Approaching Coding for Low Noise Interactive Quantum Communication
Debbie Leung, Ashwin Nayak, Ala Shayeghi, Dave Touchette, Penghui Yao, and Nengkun Yu (University of Waterloo, Canada; Perimeter Institute Waterloo, Canada; Nanjing University, China; University of Technology Sydney, Australia) We consider the problem of implementing two-party interactive quantum communication over noisy channels, a necessary endeavor if we wish to fully reap quantum advantages for communication. For an arbitrary protocol with n messages, designed for noiseless qudit channels (where d is arbitrary), our main result is a simulation method that fails with probability less than 2^{−Θ(nє)} and uses a qudit channel n(1 + Θ(√є)) times, of which an є fraction can be corrupted adversarially. The simulation is thus capacity-achieving to leading order, and we conjecture that it is optimal up to a constant factor in the √є term. Furthermore, the simulation is in a model that does not require pre-shared resources such as randomness or entanglement between the communicating parties. Perhaps surprisingly, this outperforms the best known overhead of 1 + O(√(є log log(1/є))) in the corresponding classical model, which is also conjectured to be optimal [Haeupler, FOCS’14]. Our work also improves over the best previously known quantum result, where the overhead is a non-explicit large constant [Brassard et al., FOCS’14], for low є. @InProceedings{STOC18p339, author = {Debbie Leung and Ashwin Nayak and Ala Shayeghi and Dave Touchette and Penghui Yao and Nengkun Yu}, title = {Capacity Approaching Coding for Low Noise Interactive Quantum Communication}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {339--352}, doi = {}, year = {2018}, } |
|
Nederlof, Jesper |
STOC '18: "More Consequences of Falsifying ..."
More Consequences of Falsifying SETH and the Orthogonal Vectors Conjecture
Amir Abboud, Karl Bringmann, Holger Dell, and Jesper Nederlof (IBM Research, USA; Max Planck Institute for Informatics, Germany; Saarland University, Germany; M2CI, Germany; Eindhoven University of Technology, Netherlands) The Strong Exponential Time Hypothesis and the OV-conjecture are two popular hardness assumptions used to prove a plethora of lower bounds, especially in the realm of polynomial-time algorithms. The OV-conjecture in moderate dimension states that there is no ε>0 for which an O(N^{2−ε}) · poly(D) time algorithm can decide whether there is a pair of orthogonal vectors in a given set of N D-dimensional binary vectors. We strengthen the evidence for these hardness assumptions. In particular, we show that if the OV-conjecture fails, then two problems for which we are far from obtaining even tiny improvements over exhaustive search would have surprisingly fast algorithms. If the OV-conjecture is false, then there is a fixed ε>0 such that: - For all d and all large enough k, there is a randomized algorithm that takes O(n^{(1−ε)k}) time to solve the Zero-Weight-k-Clique and Min-Weight-k-Clique problems on d-hypergraphs with n vertices. As a consequence, the OV-conjecture is implied by the Weighted Clique conjecture. - For all c, the satisfiability of sparse TC^1 circuits on n inputs (that is, circuits with cn wires, depth c·log n, and negation, AND, OR, and threshold gates) can be computed in time O((2−ε)^n). @InProceedings{STOC18p253, author = {Amir Abboud and Karl Bringmann and Holger Dell and Jesper Nederlof}, title = {More Consequences of Falsifying SETH and the Orthogonal Vectors Conjecture}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {253--266}, doi = {}, year = {2018}, } |
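For reference, the exhaustive O(N^2 · D)-time baseline that the OV-conjecture asserts is essentially optimal, on a hypothetical toy instance:

```python
from itertools import combinations

def has_orthogonal_pair(vectors):
    """vectors: list of 0/1 tuples of equal dimension D."""
    return any(all(a * b == 0 for a, b in zip(u, v))
               for u, v in combinations(vectors, 2))

# (1,0,1) and (0,1,0) share no common 1-coordinate, so the answer is True.
print(has_orthogonal_pair([(1, 0, 1), (1, 1, 0), (0, 1, 0)]))
```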
|
Neiman, Ofer |
STOC '18: "Metric Embedding via Shortest ..."
Metric Embedding via Shortest Path Decompositions
Ittai Abraham, Arnold Filtser, Anupam Gupta, and Ofer Neiman (VMware, USA; Ben-Gurion University of the Negev, Israel; Carnegie Mellon University, USA) We study the problem of embedding weighted graphs of pathwidth k into ℓ_p spaces. Our main result is an O(k^{min{1/p, 1/2}})-distortion embedding. For p=1, this is a super-exponential improvement over the best previous bound of Lee and Sidiropoulos. Our distortion bound is asymptotically tight for any fixed p > 1. Our result is obtained via a novel embedding technique that is based on low-depth decompositions of a graph via shortest paths. The core new idea is that given a geodesic shortest path P, we can probabilistically embed all points into 2 dimensions with respect to P. For p>2 our embedding also implies improved distortion on bounded treewidth graphs (O((k log n)^{1/p})). For asymptotically large p, our results also imply improved distortion on graphs excluding a minor. @InProceedings{STOC18p952, author = {Ittai Abraham and Arnold Filtser and Anupam Gupta and Ofer Neiman}, title = {Metric Embedding via Shortest Path Decompositions}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {952--963}, doi = {}, year = {2018}, } |
|
Neuen, Daniel |
STOC '18: "An Exponential Lower Bound ..."
An Exponential Lower Bound for Individualization-Refinement Algorithms for Graph Isomorphism
Daniel Neuen and Pascal Schweitzer (RWTH Aachen University, Germany; TU Kaiserslautern, Germany) The individualization-refinement paradigm provides a strong toolbox for testing isomorphism of two graphs, and indeed, the currently fastest implementations of isomorphism solvers all follow this approach. While these solvers are fast in practice, from a theoretical point of view, no general lower bounds concerning the worst-case complexity of these tools are known. In fact, it is an open question what the running time of individualization-refinement algorithms is. For all we know, some of the algorithms could have polynomial running time. In this work we give a negative answer to this question and construct a family of graphs on which algorithms based on the individualization-refinement paradigm require exponential time. Contrary to a previous construction of Miyazaki, which applies only to a specific implementation within the individualization-refinement framework, our construction is immune to changing the cell selector, the refinement operator, the invariant that is used, or adding various heuristic invariants to the algorithm. In fact, our graphs also provide exponential lower bounds in the case when the k-dimensional Weisfeiler-Leman algorithm is used to replace the 1-dimensional Weisfeiler-Leman algorithm (often called color refinement) that is normally used. Finally, the arguments even work when the entire automorphism group of the inputs is initially provided to the algorithm. The arguments apply to isomorphism testing algorithms as well as canonization algorithms within the framework. @InProceedings{STOC18p138, author = {Daniel Neuen and Pascal Schweitzer}, title = {An Exponential Lower Bound for Individualization-Refinement Algorithms for Graph Isomorphism}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {138--150}, doi = {}, year = {2018}, } |
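The refinement operator referenced above is, in its basic form, color refinement (the 1-dimensional Weisfeiler-Leman algorithm): repeatedly recolor each vertex by its current color together with the multiset of its neighbors' colors, until the partition stabilizes. A minimal sketch:

```python
def colour_refinement(adj):
    """adj: dict vertex -> iterable of neighbours. Returns the stable colouring."""
    colour = {v: 0 for v in adj}
    while True:
        # Signature: own colour plus the sorted multiset of neighbour colours.
        sig = {v: (colour[v], tuple(sorted(colour[w] for w in adj[v])))
               for v in adj}
        palette = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        new = {v: palette[sig[v]] for v in adj}
        if new == colour:
            return colour
        colour = new

# A path on 4 vertices: endpoints and inner vertices end up with distinct colours.
path4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(colour_refinement(path4))  # {0: 0, 1: 1, 2: 1, 3: 0}
```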
|
Nikolov, Aleksandar |
STOC '18: "Data-Dependent Hashing via ..."
Data-Dependent Hashing via Nonlinear Spectral Gaps
Alexandr Andoni, Assaf Naor, Aleksandar Nikolov, Ilya Razenshteyn, and Erik Waingarten (Columbia University, USA; Princeton University, USA; University of Toronto, Canada; Microsoft Research, USA) We establish a generic reduction from _nonlinear spectral gaps_ of metric spaces to data-dependent Locality-Sensitive Hashing, yielding a new approach to the high-dimensional Approximate Near Neighbor Search problem (ANN) under various distance functions. Using this reduction, we obtain the following results: * For _general_ d-dimensional normed spaces and n-point datasets, we obtain a _cell-probe_ ANN data structure with approximation O(log d / ε^2), space d^{O(1)} n^{1+ε}, and d^{O(1)} n^ε cell probes per query, for any ε>0. No non-trivial approximation was known before in this generality other than the O(√d) bound which follows from embedding a general norm into ℓ_2. * For ℓ_p and Schatten-p norms, we improve the data structure further, to obtain approximation O(p) and sublinear query _time_. For ℓ_p, this improves upon the previous best approximation 2^{O(p)} (which required polynomial, as opposed to near-linear in n, space). For the Schatten-p norm, no non-trivial ANN data structure was known before this work. Previous approaches to the ANN problem either exploit the low dimensionality of a metric, requiring space exponential in the dimension, or circumvent the curse of dimensionality by embedding a metric into a “tractable” space, such as ℓ_1. Our new generic reduction proceeds differently from both of these approaches, using a novel partitioning method. @InProceedings{STOC18p787, author = {Alexandr Andoni and Assaf Naor and Aleksandar Nikolov and Ilya Razenshteyn and Erik Waingarten}, title = {Data-Dependent Hashing via Nonlinear Spectral Gaps}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {787--800}, doi = {}, year = {2018}, } |
|
Nimavat, Rachit |
STOC '18: "Almost Polynomial Hardness ..."
Almost Polynomial Hardness of Node-Disjoint Paths in Grids
Julia Chuzhoy, David H. K. Kim, and Rachit Nimavat (Toyota Technological Institute at Chicago, USA; University of Chicago, USA) In the classical Node-Disjoint Paths (NDP) problem, we are given an n-vertex graph G=(V,E), and a collection M = {(s_1,t_1),…,(s_k,t_k)} of pairs of its vertices, called source-destination, or demand, pairs. The goal is to route as many of the demand pairs as possible, where to route a pair we need to select a path connecting it, so that all selected paths are disjoint in their vertices. The best current algorithm for NDP achieves an O(√n)-approximation, while, until recently, the best negative result was a factor Ω(log^{1/2−є} n)-hardness of approximation, for any constant є, unless NP ⊆ ZPTIME(n^{poly log n}). In a recent work, the authors have shown an improved 2^{Ω(√log n)}-hardness of approximation for NDP, unless NP ⊆ DTIME(n^{O(log n)}), even if the underlying graph is a subgraph of a grid graph, and all source vertices lie on the boundary of the grid. Unfortunately, this result does not extend to grid graphs. The approximability of the NDP problem on grid graphs has remained a tantalizing open question, with the best current upper bound of Õ(n^{1/4}), and the best current lower bound of APX-hardness. In a recent work, the authors showed a 2^{Õ(√log n)}-approximation algorithm for NDP in grid graphs, if all source vertices lie on the boundary of the grid – a result that can be seen as suggesting that a sub-polynomial approximation may be achievable for NDP in grids. In this paper we show that this is unlikely to be the case, and come close to resolving the approximability of NDP in general, and of NDP in grids in particular. Our main result is that NDP is 2^{Ω(log^{1−є} n)}-hard to approximate for any constant є, assuming that NP ⊈ RTIME(n^{poly log n}), and that it is n^{Ω(1/(log log n)^2)}-hard to approximate, assuming that for some constant δ>0, NP ⊈ RTIME(2^{n^δ}). These results hold even for grid graphs and wall graphs, and extend to the closely related Edge-Disjoint Paths problem, even in wall graphs. Our hardness proof performs a reduction from the 3COL(5) problem to NDP, using a new graph partitioning problem as a proxy. Unlike the more standard approach of employing Karp reductions to prove hardness of approximation, our proof is a Cook-type reduction, where, given an input instance of 3COL(5), we produce a large number of instances of NDP, and apply an approximation algorithm for NDP to each of them. The construction of each new instance of NDP crucially depends on the solutions to the previous instances that were found by the approximation algorithm. @InProceedings{STOC18p1220, author = {Julia Chuzhoy and David H. K. Kim and Rachit Nimavat}, title = {Almost Polynomial Hardness of Node-Disjoint Paths in Grids}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1220--1233}, doi = {}, year = {2018}, } |
|
Nordström, Jakob |
STOC '18: "Clique Is Hard on Average ..."
Clique Is Hard on Average for Regular Resolution
Albert Atserias, Ilario Bonacina, Susanna F. de Rezende, Massimo Lauria, Jakob Nordström, and Alexander Razborov (Universitat Politècnica de Catalunya, Spain; KTH, Sweden; Sapienza University of Rome, Italy; University of Chicago, USA; Russian Academy of Sciences, Russia) We prove that for k ≪ n^{1/4}, regular resolution requires length n^{Ω(k)} to establish that an Erdős-Rényi graph with appropriately chosen edge density does not contain a k-clique. This lower bound is optimal up to the multiplicative constant in the exponent, and also implies unconditional n^{Ω(k)} lower bounds on the running time of several state-of-the-art algorithms for finding maximum cliques in graphs. @InProceedings{STOC18p866, author = {Albert Atserias and Ilario Bonacina and Susanna F. de Rezende and Massimo Lauria and Jakob Nordström and Alexander Razborov}, title = {Clique Is Hard on Average for Regular Resolution}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {866--877}, doi = {}, year = {2018}, } |
|
Nummenpalo, Jerri |
STOC '18: "Sparse Kneser Graphs Are Hamiltonian ..."
Sparse Kneser Graphs Are Hamiltonian
Torsten Mütze, Jerri Nummenpalo, and Bartosz Walczak (TU Berlin, Germany; ETH Zurich, Switzerland; Jagiellonian University, Poland) For integers k≥1 and n≥2k+1, the Kneser graph K(n,k) is the graph whose vertices are the k-element subsets of {1,…,n} and whose edges connect pairs of subsets that are disjoint. The Kneser graphs of the form K(2k+1,k) are also known as the odd graphs. We settle an old problem due to Meredith, Lloyd, and Biggs from the 1970s, proving that for every k≥3, the odd graph K(2k+1,k) has a Hamilton cycle. This and a known conditional result due to Johnson imply that all Kneser graphs of the form K(2k+2^a,k) with k≥3 and a≥0 have a Hamilton cycle. We also prove that K(2k+1,k) has at least 2^{2^{k−6}} distinct Hamilton cycles for k≥6. Our proofs are based on a reduction of the Hamiltonicity problem in the odd graph to the problem of finding a spanning tree in a suitably defined hypergraph on Dyck words. @InProceedings{STOC18p912, author = {Torsten Mütze and Jerri Nummenpalo and Bartosz Walczak}, title = {Sparse Kneser Graphs Are Hamiltonian}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {912--919}, doi = {}, year = {2018}, } |
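The theorem is easy to probe computationally for the smallest cases. The sketch below (helper names ours) builds K(n,k) explicitly and runs a backtracking Hamilton-cycle search; K(5,2) is the Petersen graph, whose well-known non-Hamiltonicity is consistent with the theorem requiring k ≥ 3.

    from itertools import combinations

    def kneser(n, k):
        # Vertices: k-subsets of {0,...,n-1}; edges: disjoint pairs.
        V = list(combinations(range(n), k))
        adj = {u: [v for v in V if not set(u) & set(v)] for u in V}
        return V, adj

    def hamilton_cycle(V, adj):
        # Exponential backtracking search -- fine for tiny graphs only.
        start = V[0]
        def extend(path, seen):
            if len(path) == len(V):
                return path if start in adj[path[-1]] else None
            for v in adj[path[-1]]:
                if v not in seen:
                    res = extend(path + [v], seen | {v})
                    if res:
                        return res
            return None
        return extend([start], {start})

    V, adj = kneser(5, 2)          # K(5,2) is the Petersen graph (k = 2)
    print(hamilton_cycle(V, adj))  # None: Petersen is not Hamiltonian,
                                   # consistent with the theorem needing k >= 3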
|
Oliveira, Rafael |
STOC '18: "Operator Scaling via Geodesically ..."
Operator Scaling via Geodesically Convex Optimization, Invariant Theory and Polynomial Identity Testing
Zeyuan Allen-Zhu, Ankit Garg, Yuanzhi Li, Rafael Oliveira, and Avi Wigderson (Microsoft Research, USA; Princeton University, USA; University of Toronto, Canada; Institute for Advanced Study at Princeton, USA) We propose a new second-order method for geodesically convex optimization on the natural hyperbolic metric over positive definite matrices. We apply it to solve the operator scaling problem in time polynomial in the input size and logarithmic in the error. This is an exponential improvement over previous algorithms which were analyzed in the usual Euclidean, "commutative" metric (for which the above problem is not convex). Our method is general and applicable to other settings. As a consequence, we solve the equivalence problem for the left-right group action underlying the operator scaling problem. This yields a deterministic polynomial-time algorithm for a new class of Polynomial Identity Testing (PIT) problems, which was the original motivation for studying operator scaling. @InProceedings{STOC18p172, author = {Zeyuan Allen-Zhu and Ankit Garg and Yuanzhi Li and Rafael Oliveira and Avi Wigderson}, title = {Operator Scaling via Geodesically Convex Optimization, Invariant Theory and Polynomial Identity Testing}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {172--181}, doi = {}, year = {2018}, } |
|
Olivetti, Dennis |
STOC '18: "New Classes of Distributed ..."
New Classes of Distributed Time Complexity
Alkida Balliu, Juho Hirvonen, Janne H. Korhonen, Tuomo Lempiäinen, Dennis Olivetti, and Jukka Suomela (Aalto University, Finland; University of Freiburg, Germany) A number of recent papers – e.g. Brandt et al. (STOC 2016), Chang et al. (FOCS 2016), Ghaffari & Su (SODA 2017), Brandt et al. (PODC 2017), and Chang & Pettie (FOCS 2017) – have advanced our understanding of one of the most fundamental questions in theory of distributed computing: what are the possible time complexity classes of LCL problems in the LOCAL model? In essence, we have a graph problem Π in which a solution can be verified by checking all radius-O(1) neighbourhoods, and the question is what is the smallest T such that a solution can be computed so that each node chooses its own output based on its radius-T neighbourhood. Here T is the distributed time complexity of Π. The time complexity classes for deterministic algorithms in bounded-degree graphs that are known to exist by prior work are Θ(1), Θ(log* n), Θ(log n), Θ(n^{1/k}), and Θ(n). It is also known that there are two gaps: one between ω(1) and o(log log* n), and another between ω(log* n) and o(log n). It has been conjectured that many more gaps exist, and that the overall time hierarchy is relatively simple – indeed, this is known to be the case in restricted graph families such as cycles and grids. We show that the picture is much more diverse than previously expected. We present a general technique for engineering LCL problems with numerous different deterministic time complexities, including Θ(log^α n) for any α ≥ 1, 2^{Θ(log^α n)} for any α ≤ 1, and Θ(n^α) for any α < 1/2 in the high end of the complexity spectrum, and Θ(log^α log* n) for any α ≥ 1, 2^{Θ(log^α log* n)} for any α ≤ 1, and Θ((log* n)^α) for any α ≤ 1 in the low end of the complexity spectrum; here α is a positive rational number. @InProceedings{STOC18p1307, author = {Alkida Balliu and Juho Hirvonen and Janne H. Korhonen and Tuomo Lempiäinen and Dennis Olivetti and Jukka Suomela}, title = {New Classes of Distributed Time Complexity}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1307--1318}, doi = {}, year = {2018}, } |
|
Omidvar, Hamed |
STOC '18: "Shape of Diffusion and Size ..."
Shape of Diffusion and Size of Monochromatic Region of a Two-Dimensional Spin System
Hamed Omidvar and Massimo Franceschetti (University of California at San Diego, USA) We consider an agent-based distributed algorithm with exponentially distributed waiting times in which agents with binary states interact locally over a geometric graph, and based on this interaction and on the value of a common intolerance threshold τ, decide whether to change their states. This model is equivalent to an Asynchronous Cellular Automaton (ACA) with extended Moore neighborhoods, a zero-temperature Ising model with Glauber dynamics, or a Schelling model of self-organized segregation in an open system, and has applications in the analysis of social and biological networks, and spin glass systems. We prove a shape theorem for the spread of the “affected” nodes during the process dynamics and show that in the steady state, for τ ∈ (τ*,1−τ*) ∖ {1/2}, where τ* ≈ 0.488, the size of the “monochromatic region” at the end of the process is at least exponential in the size of the local neighborhood of interaction with probability approaching one as N grows. Combined with previous results on the expected size of the monochromatic region that provide a matching upper bound, this implies that in the steady state the size of the monochromatic region of any agent is exponential with high probability for the mentioned interval of τ. The shape theorem is based on a novel concentration inequality for the spreading time, and provides a precise geometrical description of the process dynamics. The result on the size of the monochromatic region considerably extends our understanding of the steady state. Showing convergence with high probability, it rules out the possibility that only a small fraction of the nodes are eventually contained in large monochromatic regions, which was left open by previous works. @InProceedings{STOC18p100, author = {Hamed Omidvar and Massimo Franceschetti}, title = {Shape of Diffusion and Size of Monochromatic Region of a Two-Dimensional Spin System}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {100--113}, doi = {}, year = {2018}, } |
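A toy discrete-event simulation can make the dynamics tangible. Assumptions are flagged in the comments: rate-1 exponential clocks are simulated by updating a uniformly random node per event, and the flip rule below matches the intolerance model only in spirit; all parameters are illustrative.

    import random

    def simulate(N=50, w=1, tau=0.45, steps=200000, seed=0):
        # Toy version of the dynamics: binary states on an N x N torus with an
        # extended Moore neighbourhood of radius w.  I.i.d. rate-1 exponential
        # clocks are equivalent to updating a uniformly random node per event.
        # Assumption: a node flips when the fraction of neighbours sharing its
        # state is below tau -- this mirrors the paper's intolerance rule only
        # in spirit, not in every detail.
        rng = random.Random(seed)
        s = [[rng.choice((0, 1)) for _ in range(N)] for _ in range(N)]
        nbrs = [(dx, dy) for dx in range(-w, w + 1)
                          for dy in range(-w, w + 1) if (dx, dy) != (0, 0)]
        for _ in range(steps):
            x, y = rng.randrange(N), rng.randrange(N)
            same = sum(s[(x + dx) % N][(y + dy) % N] == s[x][y]
                       for dx, dy in nbrs)
            if same < tau * len(nbrs):
                s[x][y] ^= 1
        return s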
|
Onak, Krzysztof |
STOC '18: "The Query Complexity of Graph ..."
The Query Complexity of Graph Isomorphism: Bypassing Distribution Testing Lower Bounds
Krzysztof Onak and Xiaorui Sun (IBM Research, USA; Microsoft Research, USA) We study the query complexity of graph isomorphism in the property testing model for dense graphs. We give an algorithm that makes n1+o(1) queries, improving on the previous best bound of Õ(n5/4). Since the problem is known to require Ω(n) queries, our algorithm is optimal up to a subpolynomial factor. While trying to extend a known connection to distribution testing, discovered by Fischer and Matsliah (SICOMP 2008), one encounters a natural obstacle presented by sampling lower bounds such as the Ω(n2/3)-sample lower bound for distribution closeness testing (Valiant, SICOMP 2011). In the context of graph isomorphism testing, these bounds lead to an n1+Ω(1) barrier for Fischer and Matsliah’s approach. We circumvent this and other limitations by exploiting a geometric representation of the connectivity of vertices. An approximate representation of similarities between vertices can be learned with a near-linear number of queries and allows relaxed versions of sampling and distribution testing problems to be solved more efficiently. @InProceedings{STOC18p165, author = {Krzysztof Onak and Xiaorui Sun}, title = {The Query Complexity of Graph Isomorphism: Bypassing Distribution Testing Lower Bounds}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {165--171}, doi = {}, year = {2018}, } STOC '18: "Fully Dynamic Maximal Independent ..." Fully Dynamic Maximal Independent Set with Sublinear Update Time Sepehr Assadi, Krzysztof Onak, Baruch Schieber, and Shay Solomon (University of Pennsylvania, USA; IBM Research, USA) A maximal independent set (MIS) can be maintained in an evolving m-edge graph by simply recomputing it from scratch in O(m) time after each update. But can it be maintained in time sublinear in m in fully dynamic graphs? We answer this fundamental open question in the affirmative. We present a deterministic algorithm with amortized update time O(min{Δ,m3/4}), where Δ is a fixed bound on the maximum degree in the graph and m is the (dynamically changing) number of edges. We further present a distributed implementation of our algorithm with O(min{Δ,m3/4}) amortized message complexity, and O(1) amortized round complexity and adjustment complexity (the number of vertices that change their output after each update). This strengthens a similar result by Censor-Hillel, Haramaty, and Karnin (PODC’16) that required an assumption of a non-adaptive oblivious adversary. @InProceedings{STOC18p815, author = {Sepehr Assadi and Krzysztof Onak and Baruch Schieber and Shay Solomon}, title = {Fully Dynamic Maximal Independent Set with Sublinear Update Time}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {815--826}, doi = {}, year = {2018}, } STOC '18: "Round Compression for Parallel ..." Round Compression for Parallel Matching Algorithms Artur Czumaj, Jakub Łącki, Aleksander Mądry, Slobodan Mitrović, Krzysztof Onak, and Piotr Sankowski (University of Warwick, UK; Google Research, USA; Massachusetts Institute of Technology, USA; EPFL, Switzerland; IBM Research, USA; University of Warsaw, Poland) For over a decade now we have been witnessing the success of massive parallel computation (MPC) frameworks, such as MapReduce, Hadoop, Dryad, or Spark. One of the reasons for their success is the fact that these frameworks are able to accurately capture the nature of large-scale computation. In particular, compared to the classic distributed algorithms or PRAM models, these frameworks allow for much more local computation. 
The fundamental question that arises in this context is though: can we leverage this additional power to obtain even faster parallel algorithms? A prominent example here is the maximum matching problem—one of the most classic graph problems. It is well known that in the PRAM model one can compute a 2-approximate maximum matching in O(logn) rounds. However, the exact complexity of this problem in the MPC framework is still far from understood. Lattanzi et al. (SPAA 2011) showed that if each machine has n1+Ω(1) memory, this problem can also be solved 2-approximately in a constant number of rounds. These techniques, as well as the approaches developed in the follow up work, seem though to get stuck in a fundamental way at roughly O(logn) rounds once we enter the (at most) near-linear memory regime. It is thus entirely possible that in this regime, which captures in particular the case of sparse graph computations, the best MPC round complexity matches what one can already get in the PRAM model, without the need to take advantage of the extra local computation power. In this paper, we finally refute that possibility. That is, we break the above O(logn) round complexity bound even in the case of slightly sublinear memory per machine. In fact, our improvement here is almost exponential: we are able to deliver a (2+є)-approximate maximum matching, for any fixed constant є>0, in O((loglogn)2) rounds. To establish our result we need to deviate from the previous work in two important ways that are crucial for exploiting the power of the MPC model, as compared to the PRAM model. Firstly, we use vertex–based graph partitioning, instead of the edge–based approaches that were utilized so far. Secondly, we develop a technique of round compression. This technique enables one to take a (distributed) algorithm that computes an O(1)-approximation of maximum matching in O(logn) independent PRAM phases and implement a super-constant number of these phases in only a constant number of MPC rounds. @InProceedings{STOC18p471, author = {Artur Czumaj and Jakub Łącki and Aleksander Mądry and Slobodan Mitrović and Krzysztof Onak and Piotr Sankowski}, title = {Round Compression for Parallel Matching Algorithms}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {471--484}, doi = {}, year = {2018}, } |
|
Oppenheim, Izhar |
STOC '18: "Construction of New Local ..."
Construction of New Local Spectral High Dimensional Expanders
Tali Kaufman and Izhar Oppenheim (Bar-Ilan University, Israel; Ben-Gurion University of the Negev, Israel) High dimensional expanders form a vibrant emerging field of study. Nevertheless, the only known construction of bounded degree high dimensional expanders is based on Ramanujan complexes, whereas one dimensional bounded degree expanders are abundant. In this work we construct new families of bounded degree high dimensional expanders obeying the local spectral expansion property, a property that implies geometric overlapping, fast mixing of high dimensional random walks, agreement testing and agreement expansion. The construction also yields new families of expander graphs. The construction is quite elementary and is presented in a self-contained manner; this is in contrast to the highly involved construction of the Ramanujan complexes. The construction is also strongly symmetric; the symmetry of the construction could be used, for example, to obtain good symmetric LDPC codes that were previously based on Ramanujan graphs. @InProceedings{STOC18p773, author = {Tali Kaufman and Izhar Oppenheim}, title = {Construction of New Local Spectral High Dimensional Expanders}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {773--786}, doi = {}, year = {2018}, } |
|
Paes Leme, Renato |
STOC '18: "Stochastic Bandits Robust ..."
Stochastic Bandits Robust to Adversarial Corruptions
Thodoris Lykouris, Vahab Mirrokni, and Renato Paes Leme (Cornell University, USA; Google Research, USA) We introduce a new model of stochastic bandits with adversarial corruptions which aims to capture settings where most of the input follows a stochastic pattern but some fraction of it can be adversarially changed to trick the algorithm, e.g., click fraud, fake reviews and email spam. The goal of this model is to encourage the design of bandit algorithms that (i) work well in mixed adversarial and stochastic models, and (ii) whose performance deteriorates gracefully as we move from fully stochastic to fully adversarial models. In our model, the rewards for all arms are initially drawn from a distribution and are then altered by an adaptive adversary. We provide a simple algorithm whose performance gracefully degrades with the total corruption the adversary injected in the data, measured by the sum across rounds of the biggest alteration the adversary made in the data in that round; this total corruption is denoted by C. Our algorithm retains the optimal guarantee (up to a logarithmic term) when the input is stochastic, and its performance degrades linearly with the amount of corruption C, while crucially remaining agnostic to it. We also provide a lower bound showing that this linear degradation is necessary if the algorithm achieves optimal performance in the stochastic setting (the lower bound works even for a known amount of corruption, a special case in which our algorithm achieves optimal performance without the extra logarithm). @InProceedings{STOC18p114, author = {Thodoris Lykouris and Vahab Mirrokni and Renato Paes Leme}, title = {Stochastic Bandits Robust to Adversarial Corruptions}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {114--122}, doi = {}, year = {2018}, } |
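To make the corruption model concrete, the sketch below runs plain UCB1 against an injected adversary and tracks the total corruption C. This is only a baseline for experimentation, not the paper's corruption-robust algorithm, and the accounting is simplified (only the pulled arm's observed reward is altered, whereas the model allows per-round alterations to all arms).

    import math, random

    def ucb1(mu, corrupt, T, seed=0):
        # Stochastic bandit with injected corruption.  'corrupt(t, a, r)'
        # returns the adversary's alteration of the observed reward; C sums
        # the absolute alterations across rounds (simplified accounting).
        rng = random.Random(seed)
        K = len(mu)
        n = [0] * K; est = [0.0] * K; C = 0.0
        for t in range(1, T + 1):
            if t <= K:
                a = t - 1                       # pull each arm once first
            else:
                a = max(range(K), key=lambda i:
                        est[i] + math.sqrt(2 * math.log(t) / n[i]))
            r = 1.0 if rng.random() < mu[a] else 0.0
            delta = corrupt(t, a, r)
            C += abs(delta)
            r += delta
            n[a] += 1; est[a] += (r - est[a]) / n[a]
        return est, C

    # e.g. an adversary that makes arm 0 look bad for the first 500 rounds:
    est, C = ucb1([0.6, 0.5],
                  lambda t, a, r: -r if (t < 500 and a == 0) else 0.0, 5000)
    print(est, C)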
|
Paneth, Omer |
STOC '18: "Multi-collision Resistance: ..."
Multi-collision Resistance: A Paradigm for Keyless Hash Functions
Nir Bitansky, Yael Tauman Kalai, and Omer Paneth (Tel Aviv University, Israel; Microsoft, USA; Massachusetts Institute of Technology, USA) We introduce a new notion of multi-collision resistance for keyless hash functions. This is a natural relaxation of collision resistance where it is hard to find multiple inputs with the same hash in the following sense. The number of colliding inputs that a polynomial-time non-uniform adversary can find is not much larger than its advice. We discuss potential candidates for this notion and study its applications. Assuming the existence of such hash functions, we resolve the long-standing question of the round complexity of zero knowledge protocols --- we construct a 3-message zero knowledge argument against arbitrary polynomial-size non-uniform adversaries. We also improve the round complexity in several other central applications, including a 3-message succinct argument of knowledge for NP, a 4-message zero-knowledge proof, and a 5-message public-coin zero-knowledge argument. Our techniques can also be applied in the keyed setting, where we match the round complexity of known protocols while relaxing the underlying assumption from collision-resistance to keyed multi-collision resistance. The core technical contribution behind our results is a domain extension transformation from multi-collision-resistant hash functions for a fixed input length to ones with an arbitrary input length and a local opening property. The transformation is based on a combination of classical domain extension techniques, together with new information-theoretic tools. In particular, we define and construct a new variant of list-recoverable codes, which may be of independent interest. @InProceedings{STOC18p671, author = {Nir Bitansky and Yael Tauman Kalai and Omer Paneth}, title = {Multi-collision Resistance: A Paradigm for Keyless Hash Functions}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {671--684}, doi = {}, year = {2018}, } |
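The local-opening property used by the domain extension transformation is the familiar one from Merkle trees: commit to many blocks with one root hash, then open any block with a logarithmic-length authentication path. A self-contained sketch with SHA-256 (standard hashlib; the power-of-two block layout is our simplification, not the paper's construction):

    import hashlib

    def H(x): return hashlib.sha256(x).digest()

    def merkle_tree(blocks):
        # Bottom-up tree of hashes; assumes len(blocks) is a power of two.
        level = [H(b) for b in blocks]
        tree = [level]
        while len(level) > 1:
            level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
            tree.append(level)
        return tree

    def open_block(tree, i):
        # Authentication path for block i: one sibling hash per level.
        path = []
        for level in tree[:-1]:
            path.append(level[i ^ 1])
            i //= 2
        return path

    def verify(root, block, i, path):
        h = H(block)
        for sib in path:
            h = H(h + sib) if i % 2 == 0 else H(sib + h)
            i //= 2
        return h == root

    blocks = [bytes([j]) * 8 for j in range(8)]
    t = merkle_tree(blocks)
    assert verify(t[-1][0], blocks[3], 3, open_block(t, 3))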
|
Panigrahi, Debmalya |
STOC '18: "Online Load Balancing on Related ..."
Online Load Balancing on Related Machines
Sungjin Im, Nathaniel Kell, Debmalya Panigrahi, and Maryam Shadloo (University of California at Merced, USA; Duke University, USA) In this paper, we consider the problem of assigning jobs online to machines with non-uniform speeds (also called related machines) so as to optimize a given norm of the machine loads. A long line of work, starting with the seminal work of Graham in the 1960s, has led to tight competitive ratios for all ℓq norms for two scenarios: the special case of identical machines (uniform machine speeds) and the more general setting of unrelated machines (jobs have arbitrary processing times on machines). For non-uniform machine speeds, however, the only known result was a constant competitive ratio for the makespan (ℓ∞) norm, via the so-called slowest-fit algorithm (Aspnes, Azar, Fiat, Plotkin, and Waarts, JACM ’97). Our first result in this paper is to obtain the first constant-competitive algorithm for scheduling on related machines for any arbitrary ℓq norm. Recent literature has further expanded the scope of this problem to vector scheduling, to capture multi-dimensional resource requirements in applications such as data centers. As in the scalar case, tight bounds are known for vector scheduling on identical and unrelated machines. Our second set of results is to give tight competitive ratios for vector scheduling on related machines for the makespan and all ℓq norms. No previous bounds were known, even for the makespan norm, for related machines. We employ a convex relaxation of the ℓq-norm objective and use a continuous greedy algorithm to solve this convex program online. To round the fractional solution, we then use a novel restructuring of the instance that we call machine smoothing. This is a generic tool that reduces a problem on related machines to a set of problem instances on identical machines, and we hope it will be useful in other settings with non-uniform machine speeds as well. @InProceedings{STOC18p30, author = {Sungjin Im and Nathaniel Kell and Debmalya Panigrahi and Maryam Shadloo}, title = {Online Load Balancing on Related Machines}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {30--43}, doi = {}, year = {2018}, } |
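A natural baseline for this setting is the online greedy that places each arriving job so as to minimize the resulting ℓq norm of the load vector, where a machine's load is its assigned work divided by its speed. The sketch below (names ours) is only that reference point; the paper's competitive algorithm needs the convex relaxation and machine smoothing described above.

    def assign_online(jobs, speeds, q):
        # Online greedy for related machines: put each job on the machine
        # whose resulting load vector has the smallest l_q norm (raised to
        # the q-th power, which preserves the argmin).
        loads = [0.0] * len(speeds)
        for p in jobs:                      # p = processing requirement
            def cost(i):
                trial = loads.copy()
                trial[i] += p / speeds[i]   # load = assigned work / speed
                return sum(x ** q for x in trial)
            i = min(range(len(speeds)), key=cost)
            loads[i] += p / speeds[i]
        return loads

    print(assign_online([4, 2, 7, 1, 5], speeds=[1.0, 2.0, 4.0], q=2))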
|
Peng, Richard |
STOC '18: "Incomplete Nested Dissection ..."
Incomplete Nested Dissection
Rasmus Kyng, Richard Peng, Robert Schwieterman, and Peng Zhang (Simons Institute for the Theory of Computing, Berkeley, USA; Georgia Tech, USA) We present an asymptotically faster algorithm for solving linear systems in well-structured 3-dimensional truss stiffness matrices. These linear systems arise from linear elasticity problems, and can be viewed as extensions of graph Laplacians into higher dimensions. Faster solvers for the 2-D variants of such systems have been studied using generalizations of tools for solving graph Laplacians [Daitch-Spielman CSC’07, Shklarski-Toledo SIMAX’08]. Given a 3-dimensional truss over n vertices which is formed from a union of k convex structures (tetrahedral meshes) with bounded aspect ratios, whose individual tetrahedrons are also in some sense well-conditioned, our algorithm solves a linear system in the associated stiffness matrix up to accuracy ε in time O(k^{1/3} n^{5/3} log(1/ε)). This asymptotically improves upon the O(n^2) running time of Nested Dissection for all k ≪ n. We also give a result that improves on Nested Dissection even when we allow any aspect ratio for each of the k convex structures (but we still require well-conditioned individual tetrahedrons). In this regime, we improve on Nested Dissection for k ≪ n^{1/44}. The key idea of our algorithm is to combine nested dissection and support theory. Both of these techniques for solving linear systems are well studied, but usually separately. Our algorithm decomposes a 3-dimensional truss into separate and balanced regions with small boundaries. We then bound the spectrum of each such region separately, and utilize such bounds to obtain improved algorithms by preconditioning with partial states of separator-based Gaussian elimination. @InProceedings{STOC18p404, author = {Rasmus Kyng and Richard Peng and Robert Schwieterman and Peng Zhang}, title = {Incomplete Nested Dissection}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {404--417}, doi = {}, year = {2018}, } |
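For intuition about the nested dissection half of the method, here is the classical ordering on a 2-D grid (the paper works with 3-D trusses and adds support-theory preconditioning on top). Function names are ours; the sketch returns an elimination order in which each separator is eliminated after both sides it separates.

    def nested_dissection_order(rows, cols):
        # Classical nested dissection on a rows x cols grid graph:
        # recursively remove a middle row/column separator and eliminate
        # it last; small blocks are eliminated directly.
        order = []
        def rec(r0, r1, c0, c1):            # half-open index ranges
            if r1 - r0 <= 2 and c1 - c0 <= 2:
                order.extend((r, c) for r in range(r0, r1)
                                     for c in range(c0, c1))
                return
            if r1 - r0 >= c1 - c0:          # split along the longer side
                m = (r0 + r1) // 2
                rec(r0, m, c0, c1); rec(m + 1, r1, c0, c1)
                order.extend((m, c) for c in range(c0, c1))
            else:
                m = (c0 + c1) // 2
                rec(r0, r1, c0, m); rec(r0, r1, m + 1, c1)
                order.extend((r, m) for r in range(r0, r1))
        rec(0, rows, 0, cols)
        return order                         # eliminate vertices in this order

    print(nested_dissection_order(4, 4))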
|
Pettie, Seth |
STOC '18: "An Optimal Distributed (Δ+1)-Coloring ..."
An Optimal Distributed (Δ+1)-Coloring Algorithm?
Yi-Jun Chang, Wenzheng Li, and Seth Pettie (University of Michigan, USA; Tsinghua University, China) Vertex coloring is one of the classic symmetry breaking problems studied in distributed computing. In this paper we present a new algorithm for (Δ+1)-list coloring in the randomized LOCAL model running in O(log* n + Det_d(poly log n)) time, where Det_d(n′) is the deterministic complexity of (deg+1)-list coloring (v’s palette has size deg(v)+1) on n′-vertex graphs. This improves upon a previous randomized algorithm of Harris, Schneider, and Su (STOC 2016), with complexity O(√(log Δ) + log log n + Det_d(poly log n)), and (when Δ is sufficiently large) is much faster than the best known deterministic algorithm of Fraigniaud, Heinrich, and Kosowski (FOCS 2016), with complexity O(√Δ log^{2.5} Δ + log* n). Our algorithm appears to be optimal. It matches the Ω(log* n) randomized lower bound, due to Naor (SIDMA 1991) and sort of matches the Ω(Det(poly log n)) randomized lower bound due to Chang, Kopelowitz, and Pettie (FOCS 2016), where Det is the deterministic complexity of (Δ+1)-list coloring. The best known upper bounds on Det_d(n′) and Det(n′) are both 2^{O(√log n′)} by Panconesi and Srinivasan (Journal of Algorithms 1996), and it is quite plausible that the complexities of both problems are the same, asymptotically. @InProceedings{STOC18p445, author = {Yi-Jun Chang and Wenzheng Li and Seth Pettie}, title = {An Optimal Distributed (Δ+1)-Coloring Algorithm?}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {445--456}, doi = {}, year = {2018}, } |
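The randomized primitive underlying fast (Δ+1)-list coloring is easy to state in code: every uncolored node tries a uniformly random color from its remaining palette and keeps it if no neighbour tried or already holds the same color. The sketch below is a sequential simulation of that primitive (names ours), not the paper's full algorithm.

    import random

    def random_color_trial(adj, palette, rounds=100, seed=0):
        # With palettes of size deg(v)+1 a node's palette never empties
        # before the node is colored, since each removal is charged to a
        # distinct colored neighbour.
        rng = random.Random(seed)
        color = {}
        pal = {v: set(palette[v]) for v in adj}
        for _ in range(rounds):
            trial = {v: rng.choice(sorted(pal[v]))
                     for v in adj if v not in color}
            for v, c in trial.items():
                if all(trial.get(u) != c and color.get(u) != c for u in adj[v]):
                    color[v] = c
                    for u in adj[v]:
                        pal[u].discard(c)
            if len(color) == len(adj):
                break
        return color

    adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}   # a triangle, Delta = 2
    palette = {v: range(3) for v in adj}       # Delta+1 = 3 colors each
    print(random_color_trial(adj, palette))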
|
Pitassi, Toniann |
STOC '18: "Lifting Nullstellensatz to ..."
Lifting Nullstellensatz to Monotone Span Programs over Any Field
Toniann Pitassi and Robert Robere (University of Toronto, Canada; Institute for Advanced Study at Princeton, USA) We characterize the size of monotone span programs computing certain “structured” boolean functions by the Nullstellensatz degree of a related unsatisfiable Boolean formula. This yields the first exponential lower bounds for monotone span programs over arbitrary fields, the first exponential separations between monotone span programs over fields of different characteristic, and the first exponential separation between monotone span programs over arbitrary fields and monotone circuits. We also show tight quasipolynomial lower bounds on monotone span programs computing directed st-connectivity over arbitrary fields, separating monotone span programs from non-deterministic logspace and also separating monotone and non-monotone span programs over GF(2). Our results yield the same lower bounds for linear secret sharing schemes due to the previously known relationship between monotone span programs and linear secret sharing. To prove our characterization we introduce a new and general tool for lifting polynomial degree to rank over arbitrary fields. @InProceedings{STOC18p1207, author = {Toniann Pitassi and Robert Robere}, title = {Lifting Nullstellensatz to Monotone Span Programs over Any Field}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1207--1219}, doi = {}, year = {2018}, } |
|
Prezza, Nicola |
STOC '18: "At the Roots of Dictionary ..."
At the Roots of Dictionary Compression: String Attractors
Dominik Kempa and Nicola Prezza (University of Helsinki, Finland; DTU, Denmark; University of Pisa, Italy) A well-known fact in the field of lossless text compression is that high-order entropy is a weak model when the input contains long repetitions. Motivated by this fact, decades of research have generated myriads of so-called dictionary compressors: algorithms able to reduce the text’s size by exploiting its repetitiveness. Lempel-Ziv 77 is one of the most successful and well-known tools of this kind, followed by straight-line programs, run-length Burrows-Wheeler transform, macro schemes, collage systems, and the compact directed acyclic word graph. In this paper, we show that these techniques are different solutions to the same, elegant, combinatorial problem: to find a small set of positions capturing all distinct text’s substrings. We call such a set a string attractor. We first show reductions between dictionary compressors and string attractors. This gives the approximation ratios of dictionary compressors with respect to the smallest string attractor and allows us to uncover new asymptotic relations between the output sizes of different dictionary compressors. We then show that the k-attractor problem — deciding whether a text has a size-t set of positions capturing all substrings of length at most k — is NP-complete for k≥ 3. This, in particular, includes the full string attractor problem. We provide several approximation techniques for the smallest k-attractor, show that the problem is APX-complete for constant k, and give strong inapproximability results. To conclude, we provide matching lower and upper bounds for the random access problem on string attractors. The upper bound is proved by showing a data structure supporting queries in optimal time. Our data structure is universal: by our reductions to string attractors, it supports random access on any dictionary-compression scheme. In particular, it matches the lower bound also on LZ77, straight-line programs, collage systems, and macro schemes, and therefore essentially closes (at once) the random access problem for all these compressors. @InProceedings{STOC18p827, author = {Dominik Kempa and Nicola Prezza}, title = {At the Roots of Dictionary Compression: String Attractors}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {827--840}, doi = {}, year = {2018}, } |
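The string-attractor property can be checked directly from its definition, which makes small examples easy to play with. A brute-force checker (there are quadratically many distinct substrings, so small inputs only; identifiers ours):

    def is_attractor(text, gamma):
        # gamma is a string attractor iff every distinct substring of 'text'
        # has at least one occurrence crossing a position in gamma.
        n = len(text)
        gamma = set(gamma)
        for length in range(1, n + 1):
            for sub in {text[i:i + length] for i in range(n - length + 1)}:
                ok = any(set(range(i, i + length)) & gamma
                         for i in range(n - length + 1)
                         if text[i:i + length] == sub)
                if not ok:
                    return False
        return True

    print(is_attractor("aaaa", {2}))  # True: one position suffices for a unary string
    print(is_attractor("ab", {0}))   # False: "b" never crosses position 0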
|
Rademacher, Luis |
STOC '18: "The Minimum Euclidean-Norm ..."
The Minimum Euclidean-Norm Point in a Convex Polytope: Wolfe's Combinatorial Algorithm Is Exponential
Jesús A. De Loera, Jamie Haddock, and Luis Rademacher (University of California at Davis, USA) The complexity of Philip Wolfe’s method for the minimum Euclidean-norm point problem over a convex polytope has remained unknown since he proposed the method in 1974. We present the first example that Wolfe’s method takes exponential time. Additionally, we improve previous results to show that linear programming reduces in strongly-polynomial time to the minimum norm point problem over a simplex. @InProceedings{STOC18p545, author = {Jesús A. De Loera and Jamie Haddock and Luis Rademacher}, title = {The Minimum Euclidean-Norm Point in a Convex Polytope: Wolfe's Combinatorial Algorithm Is Exponential}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {545--553}, doi = {}, year = {2018}, } |
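For contrast with Wolfe's combinatorial method, the same minimum-norm-point objective can be approached with Frank-Wolfe, a simple first-order method. The sketch below is that alternative, explicitly not Wolfe's method, and it converges only at the usual O(1/t) rate.

    import numpy as np

    def min_norm_point_fw(P, iters=2000):
        # Minimise ||x||^2 over the convex hull of the rows of P.
        # Gradient is 2x, so the linear minimisation step picks the
        # vertex minimising <x, v>; step size is the standard 2/(t+2).
        x = P[0].astype(float).copy()
        for t in range(1, iters + 1):
            s = P[np.argmin(P @ x)]
            x += (2.0 / (t + 2.0)) * (s - x)
        return x

    P = np.array([[1.0, 2.0], [2.0, -1.0], [3.0, 3.0]])
    print(min_norm_point_fw(P))   # approx (1.5, 0.5) for this triangle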
|
Ramachandran, Akshay |
STOC '18: "The Paulsen Problem, Continuous ..."
The Paulsen Problem, Continuous Operator Scaling, and Smoothed Analysis
Tsz Chiu Kwok, Lap Chi Lau, Yin Tat Lee, and Akshay Ramachandran (University of Waterloo, Canada; University of Washington, USA) The Paulsen problem is a basic open problem in operator theory: Given vectors u_1, …, u_n ∈ ℝ^d that ε-nearly satisfy the Parseval condition and the equal norm condition, are they close to a set of vectors v_1, …, v_n ∈ ℝ^d that exactly satisfy the Parseval condition and the equal norm condition? Given u_1, …, u_n, the squared distance (to the set of exact solutions) is defined as inf_v ∑_{i=1}^n ||u_i − v_i||_2^2, where the infimum is over the set of exact solutions. Previous results show that the squared distance of any ε-nearly solution is at most O(poly(d,n,ε)) and there are ε-nearly solutions with squared distance at least Ω(d ε). The fundamental open question is whether the squared distance can be independent of the number of vectors n. We answer this question affirmatively by proving that the squared distance of any ε-nearly solution is O(d^{13/2} ε). Our approach is based on a continuous version of the operator scaling algorithm and consists of two parts. First, we define a dynamical system based on operator scaling and use it to prove that the squared distance of any ε-nearly solution is O(d^2 n ε). Then, we show that by randomly perturbing the input vectors, the dynamical system will converge faster and the squared distance of an ε-nearly solution is O(d^{5/2} ε) when n is large enough and ε is small enough. To analyze the convergence of the dynamical system, we develop some new techniques for lower bounding the operator capacity, a concept introduced by Gurvits to analyze the operator scaling algorithm. @InProceedings{STOC18p182, author = {Tsz Chiu Kwok and Lap Chi Lau and Yin Tat Lee and Akshay Ramachandran}, title = {The Paulsen Problem, Continuous Operator Scaling, and Smoothed Analysis}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {182--189}, doi = {}, year = {2018}, } |
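The two conditions are easy to measure numerically. The sketch below quantifies how far a set of vectors is from an equal-norm Parseval frame, reading the conditions as sum_i u_i u_i^T = I and ||u_i||^2 = d/n; the paper's ε is defined via nearness bounds, so treating this particular error measure as "the" ε is our assumption.

    import numpy as np

    def paulsen_eps(U):
        # Columns u_1..u_n of U live in R^d.  Returns the larger of the
        # spectral-norm Parseval error and the worst equal-norm error
        # (one natural, but assumed, way to quantify epsilon-nearness).
        d, n = U.shape
        frame_err = np.linalg.norm(U @ U.T - np.eye(d), 2)
        norm_err = max(abs(np.sum(U[:, i] ** 2) - d / n) for i in range(n))
        return max(frame_err, norm_err)

    rng = np.random.default_rng(0)
    U = rng.standard_normal((3, 12))
    U *= np.sqrt(3 / 12) / np.linalg.norm(U, axis=0)  # equal norms; Parseval still off
    print(paulsen_eps(U))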
|
Ramachandran, Vijaya |
STOC '18: "Fine-Grained Complexity for ..."
Fine-Grained Complexity for Sparse Graphs
Udit Agarwal and Vijaya Ramachandran (University of Texas at Austin, USA) We consider the fine-grained complexity of sparse graph problems that currently have Õ(mn) time algorithms, where m is the number of edges and n is the number of vertices in the input graph. This class includes several important path problems on both directed and undirected graphs, including APSP, MWC (Minimum Weight Cycle), Radius, Eccentricities, BC (Betweenness Centrality), etc. We introduce the notion of a sparse reduction which preserves the sparsity of graphs, and we present near linear-time sparse reductions between various pairs of graph problems in the Õ(mn) class. There are many sub-cubic reductions between graph problems in the Õ(mn) class, but surprisingly few of these preserve sparsity. In the directed case, our results give a partial order on a large collection of problems in the Õ(mn) class (along with some equivalences), and many of our reductions are very nontrivial. In the undirected case we give two nontrivial sparse reductions: from MWC to APSP, and from unweighted ANSC (all nodes shortest cycles) to unweighted APSP. We develop a new ‘bit-sampling’ method for these sparse reductions on undirected graphs, which also gives rise to improved or simpler algorithms for cycle finding problems in undirected graphs. We formulate the notion of MWC hardness, which is based on the assumption that a minimum weight cycle in a directed graph cannot be computed in time polynomially smaller than mn. Our sparse reductions for directed path problems in the Õ(mn) class establish that several problems in this class, including 2-SiSP (second simple shortest path), s-t Replacement Paths, Radius, Eccentricities and BC are MWC hard. Our sparse reductions give MWC hardness a status for the Õ(mn) class similar to 3SUM hardness for the quadratic class, since they show sub-mn hardness for a large collection of fundamental and well-studied graph problems that have maintained an Õ(mn) time bound for over half a century. We also identify Eccentricities and BC as key problems in the Õ(mn) class which are simultaneously MWC-hard, SETH-hard and k-DSH-hard, where SETH is the Strong Exponential Time Hypothesis, and k-DSH is the hypothesis that a dominating set of size k cannot be computed in time polynomially smaller than n^k. Our framework using sparse reductions is very relevant to real-world graphs, which tend to be sparse and for which the Õ(mn) time algorithms are the ones typically used in practice, and not the Õ(n^3) time algorithms. @InProceedings{STOC18p239, author = {Udit Agarwal and Vijaya Ramachandran}, title = {Fine-Grained Complexity for Sparse Graphs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {239--252}, doi = {}, year = {2018}, } |
|
Raz, Ran |
STOC '18: "Extractor-Based Time-Space ..."
Extractor-Based Time-Space Lower Bounds for Learning
Sumegha Garg, Ran Raz, and Avishay Tal (Princeton University, USA; Stanford University, USA) A matrix M: A × X → {−1,1} corresponds to the following learning problem: An unknown element x ∈ X is chosen uniformly at random. A learner tries to learn x from a stream of samples, (a1, b1), (a2, b2) …, where for every i, ai ∈ A is chosen uniformly at random and bi = M(ai,x). Assume that k, l, r are such that any submatrix of M of at least 2−k · |A| rows and at least 2−l · |X| columns, has a bias of at most 2−r. We show that any learning algorithm for the learning problem corresponding to M requires either a memory of size at least Ω(k · l ), or at least 2Ω(r) samples. The result holds even if the learner has an exponentially small success probability (of 2−Ω(r)). In particular, this shows that for a large class of learning problems, any learning algorithm requires either a memory of size at least Ω((log|X|) · (log|A|)) or an exponential number of samples, achieving a tight Ω((log|X|) · (log|A|)) lower bound on the size of the memory, rather than a bound of Ω(min{(log|X|)2,(log|A|)2}) obtained in previous works by Raz [FOCS’17] and Moshkovitz and Moshkovitz [ITCS’18]. Moreover, our result implies all previous memory-samples lower bounds, as well as a number of new applications. Our proof builds on the work of Raz [FOCS’17] that gave a general technique for proving memory samples lower bounds. @InProceedings{STOC18p990, author = {Sumegha Garg and Ran Raz and Avishay Tal}, title = {Extractor-Based Time-Space Lower Bounds for Learning}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {990--1002}, doi = {}, year = {2018}, } |
|
Razborov, Alexander |
STOC '18: "Clique Is Hard on Average ..."
Clique Is Hard on Average for Regular Resolution
Albert Atserias, Ilario Bonacina, Susanna F. de Rezende, Massimo Lauria, Jakob Nordström, and Alexander Razborov (Universitat Politècnica de Catalunya, Spain; KTH, Sweden; Sapienza University of Rome, Italy; University of Chicago, USA; Russian Academy of Sciences, Russia) We prove that for k ≪ n^{1/4} regular resolution requires length n^{Ω(k)} to establish that an Erdős–Rényi graph with appropriately chosen edge density does not contain a k-clique. This lower bound is optimal up to the multiplicative constant in the exponent, and also implies unconditional n^{Ω(k)} lower bounds on running time for several state-of-the-art algorithms for finding maximum cliques in graphs. @InProceedings{STOC18p866, author = {Albert Atserias and Ilario Bonacina and Susanna F. de Rezende and Massimo Lauria and Jakob Nordström and Alexander Razborov}, title = {Clique Is Hard on Average for Regular Resolution}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {866--877}, doi = {}, year = {2018}, } |
|
Razenshteyn, Ilya |
STOC '18: "Nonlinear Dimension Reduction ..."
Nonlinear Dimension Reduction via Outer Bi-Lipschitz Extensions
Sepideh Mahabadi, Konstantin Makarychev, Yury Makarychev, and Ilya Razenshteyn (Columbia University, USA; Northwestern University, USA; Toyota Technological Institute at Chicago, USA; Microsoft Research, USA) We introduce and study the notion of *an outer bi-Lipschitz extension* of a map between Euclidean spaces. The notion is a natural analogue of the notion of *a Lipschitz extension* of a Lipschitz map. We show that for every map f there exists an outer bi-Lipschitz extension f′ whose distortion is greater than that of f by at most a constant factor. This result can be seen as a counterpart of the classic Kirszbraun theorem for outer bi-Lipschitz extensions. We also study outer bi-Lipschitz extensions of near-isometric maps and show upper and lower bounds for them. Then, we present applications of our results to prioritized and terminal dimension reduction problems, described next. We prove a *prioritized* variant of the Johnson–Lindenstrauss lemma: given a set of points X ⊂ ℝ^d of size N and a permutation (”priority ranking”) of X, there exists an embedding f of X into ℝ^{O(log N)} with distortion O(log log N) such that the point of rank j has only O(log^{3+ε} j) non-zero coordinates – more specifically, all but the first O(log^{3+ε} j) coordinates are equal to 0; the distortion of f restricted to the first j points (according to the ranking) is at most O(log log j). The result makes progress towards answering an open question by Elkin, Filtser, and Neiman about prioritized dimension reductions. We prove that given a set X of N points in ℝ^d, there exists a *terminal* dimension reduction embedding of ℝ^d into ℝ^{d′}, where d′ = O(log N/ε^4), which preserves distances ||x−y|| between points x ∈ X and y ∈ ℝ^d, up to a multiplicative factor of 1 ± ε. This improves a recent result by Elkin, Filtser, and Neiman. The dimension reductions that we obtain are nonlinear, and this nonlinearity is necessary. @InProceedings{STOC18p1088, author = {Sepideh Mahabadi and Konstantin Makarychev and Yury Makarychev and Ilya Razenshteyn}, title = {Nonlinear Dimension Reduction via Outer Bi-Lipschitz Extensions}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1088--1101}, doi = {}, year = {2018}, } STOC '18: "Data-Dependent Hashing via ..." Data-Dependent Hashing via Nonlinear Spectral Gaps Alexandr Andoni, Assaf Naor, Aleksandar Nikolov, Ilya Razenshteyn, and Erik Waingarten (Columbia University, USA; Princeton University, USA; University of Toronto, Canada; Microsoft Research, USA) We establish a generic reduction from _nonlinear spectral gaps_ of metric spaces to data-dependent Locality-Sensitive Hashing, yielding a new approach to the high-dimensional Approximate Near Neighbor Search problem (ANN) under various distance functions. Using this reduction, we obtain the following results: * For _general_ d-dimensional normed spaces and n-point datasets, we obtain a _cell-probe_ ANN data structure with approximation O(log d/ε^2), space d^{O(1)} n^{1+ε}, and d^{O(1)} n^ε cell probes per query, for any ε>0. No non-trivial approximation was known before in this generality other than the O(√d) bound which follows from embedding a general norm into ℓ2. * For ℓp and Schatten-p norms, we improve the data structure further, to obtain approximation O(p) and sublinear query _time_. For ℓp, this improves upon the previous best approximation 2^{O(p)} (which required polynomial as opposed to near-linear in n space). For the Schatten-p norm, no non-trivial ANN data structure was known before this work. 
Previous approaches to the ANN problem either exploit the low dimensionality of a metric, requiring space exponential in the dimension, or circumvent the curse of dimensionality by embedding a metric into a ”tractable” space, such as ℓ1. Our new generic reduction proceeds differently from both of these approaches using a novel partitioning method. @InProceedings{STOC18p787, author = {Alexandr Andoni and Assaf Naor and Aleksandar Nikolov and Ilya Razenshteyn and Erik Waingarten}, title = {Data-Dependent Hashing via Nonlinear Spectral Gaps}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {787--800}, doi = {}, year = {2018}, } |
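Both dimension-reduction results above refine the classical Johnson-Lindenstrauss baseline, which is worth having in executable form for comparison: the prioritized and terminal versions are necessarily nonlinear, whereas this baseline is a plain random linear map (the constant 8 is a convenient sufficient choice, not an optimized one).

    import numpy as np

    def jl_embed(X, eps, seed=0):
        # Plain (linear, unprioritised) JL embedding of N points into
        # O(log N / eps^2) dimensions via a scaled Gaussian matrix.
        N, d = X.shape
        k = int(np.ceil(8 * np.log(N) / eps ** 2))
        rng = np.random.default_rng(seed)
        G = rng.standard_normal((d, k)) / np.sqrt(k)
        return X @ G

    X = np.random.default_rng(1).standard_normal((100, 500))
    Y = jl_embed(X, eps=0.5)
    print(np.linalg.norm(X[0] - X[1]), np.linalg.norm(Y[0] - Y[1]))  # close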
|
Recht, Benjamin |
STOC '18: "Tight Query Complexity Lower ..."
Tight Query Complexity Lower Bounds for PCA via Finite Sample Deformed Wigner Law
Max Simchowitz, Ahmed El Alaoui, and Benjamin Recht (University of California at Berkeley, USA) We prove a query complexity lower bound for approximating the top r dimensional eigenspace of a matrix. We consider an oracle model where, given a symmetric matrix M ∈ ℝ^{d×d}, an algorithm Alg is allowed to make T exact queries of the form w(i) = M v(i) for i in {1,...,T}, where v(i) is drawn from a distribution which depends arbitrarily on the past queries and measurements {v(j), w(j)}_{1 ≤ j ≤ i−1}. We show that for every gap ∈ (0,1/2], there exists a distribution over matrices M for which 1) gap_r(M) = Ω(gap) (where gap_r(M) is the normalized gap between the r-th and (r+1)-st largest-magnitude eigenvalues of M), and 2) any Alg which takes fewer than const × r log d/√gap queries fails (with overwhelming probability) to identify a matrix V ∈ ℝ^{d×r} with orthonormal columns for which ⟨V, M V⟩ ≥ (1 − const × gap) ∑_{i=1}^r λ_i(M). Our bound requires only that d is a small polynomial in 1/gap and r, and matches the upper bounds of Musco and Musco ’15. Moreover, it establishes a strict separation between convex optimization and “strict-saddle” non-convex optimization of which PCA is a canonical example: in the former, first-order methods can have dimension-free iteration complexity, whereas in PCA, the iteration complexity of gradient-based methods must necessarily grow with the dimension. Our argument proceeds via a reduction to estimating a rank-r spike in a deformed Wigner model M = W + λ U U⊤, where W is from the Gaussian Orthogonal Ensemble, U is uniform on the d × r Stiefel manifold and λ > 1 governs the size of the perturbation. Surprisingly, this ubiquitous random matrix model witnesses the worst-case rate for eigenspace approximation, and the ‘accelerated’ gap^{−1/2} in the rate follows as a consequence of the correspondence between the asymptotic eigengap and the size of the perturbation λ, when λ is near the “phase transition” λ = 1. To verify that d need only be polynomial in gap^{−1} and r, we prove a finite sample convergence theorem for top eigenvalues of a deformed Wigner matrix, which may be of independent interest. We then lower bound the above estimation problem with a novel technique based on Fano-style data-processing inequalities with truncated likelihoods; the technique generalizes the Bayes-risk lower bound of Chen et al. ’16, and we believe it is particularly suited to lower bounds in adaptive settings like the one considered in this paper. @InProceedings{STOC18p1249, author = {Max Simchowitz and Ahmed El Alaoui and Benjamin Recht}, title = {Tight Query Complexity Lower Bounds for PCA via Finite Sample Deformed Wigner Law}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1249--1259}, doi = {}, year = {2018}, } |
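The oracle model is exactly the matrix-vector query model used by iterative eigensolvers. The sketch below counts queries for the plain power method, which needs on the order of log(d)/gap of them; the gap^{-1/2} rate in the paper's bound corresponds to accelerated methods such as Lanczos (our framing, not code from the paper).

    import numpy as np

    def top_eigvec_queries(matvec, d, tol=1e-6, seed=0):
        # Power iteration using only queries w = M v; returns the
        # (sign-aligned) estimate and the number of queries spent.
        rng = np.random.default_rng(seed)
        v = rng.standard_normal(d); v /= np.linalg.norm(v)
        queries = 0
        while True:
            w = matvec(v); queries += 1
            w /= np.linalg.norm(w)
            if np.linalg.norm(w - v * np.sign(v @ w)) < tol:
                return w, queries
            v = w

    M = np.diag([1.0, 0.8, 0.5])                      # gap = 0.2 at the top
    print(top_eigvec_queries(lambda v: M @ v, 3)[1])  # query count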
|
Reingold, Omer |
STOC '18: "Improved Pseudorandomness ..."
Improved Pseudorandomness for Unordered Branching Programs through Local Monotonicity
Eshan Chattopadhyay, Pooya Hatami, Omer Reingold, and Avishay Tal (Cornell University, USA; Institute for Advanced Study at Princeton, USA; University of Texas at Austin, USA; Stanford University, USA) We present an explicit pseudorandom generator with seed length Õ((log n)^{w+1}) for read-once, oblivious, width-w branching programs that can read their input bits in any order. This improves upon the work of Impagliazzo, Meka and Zuckerman (FOCS’12) where they required seed length n^{1/2+o(1)}. A central ingredient in our work is the following bound that we prove on the Fourier spectrum of branching programs. For any width-w read-once, oblivious branching program B: {0,1}^n → {0,1} and any k ∈ {1,…,n}, ∑_{S⊆[n]: |S|=k} |B̂(S)| ≤ O(log n)^{wk}. This settles a conjecture posed by Reingold, Steinke and Vadhan (RANDOM’13). Our analysis crucially uses a notion of local monotonicity on the edge labeling of the branching program. We carry critical parts of our proof under the assumption of local monotonicity and show how to deduce our results for unrestricted branching programs. @InProceedings{STOC18p363, author = {Eshan Chattopadhyay and Pooya Hatami and Omer Reingold and Avishay Tal}, title = {Improved Pseudorandomness for Unordered Branching Programs through Local Monotonicity}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {363--375}, doi = {}, year = {2018}, } |
|
Robere, Robert |
STOC '18: "Lifting Nullstellensatz to ..."
Lifting Nullstellensatz to Monotone Span Programs over Any Field
Toniann Pitassi and Robert Robere (University of Toronto, Canada; Institute for Advanced Study at Princeton, USA) We characterize the size of monotone span programs computing certain “structured” boolean functions by the Nullstellensatz degree of a related unsatisfiable Boolean formula. This yields the first exponential lower bounds for monotone span programs over arbitrary fields, the first exponential separations between monotone span programs over fields of different characteristic, and the first exponential separation between monotone span programs over arbitrary fields and monotone circuits. We also show tight quasipolynomial lower bounds on monotone span programs computing directed st-connectivity over arbitrary fields, separating monotone span programs from non-deterministic logspace and also separating monotone and non-monotone span programs over GF(2). Our results yield the same lower bounds for linear secret sharing schemes due to the previously known relationship between monotone span programs and linear secret sharing. To prove our characterization we introduce a new and general tool for lifting polynomial degree to rank over arbitrary fields. @InProceedings{STOC18p1207, author = {Toniann Pitassi and Robert Robere}, title = {Lifting Nullstellensatz to Monotone Span Programs over Any Field}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1207--1219}, doi = {}, year = {2018}, } |
|
Roditty, Liam |
STOC '18: "Towards Tight Approximation ..."
Towards Tight Approximation Bounds for Graph Diameter and Eccentricities
Arturs Backurs, Liam Roditty, Gilad Segal, Virginia Vassilevska Williams, and Nicole Wein (Massachusetts Institute of Technology, USA; Bar-Ilan University, Israel) Among the most important graph parameters is the Diameter, the largest distance between any two vertices. There are no known very efficient algorithms for computing the Diameter exactly. Thus, much research has been devoted to how fast this parameter can be approximated. Chechik et al. [SODA 2014] showed that the diameter can be approximated within a multiplicative factor of 3/2 in Õ(m3/2) time. Furthermore, Roditty and Vassilevska W. [STOC 13] showed that unless the Strong Exponential Time Hypothesis (SETH) fails, no O(n2−ε) time algorithm can achieve an approximation factor better than 3/2 in sparse graphs. Thus the above algorithm is essentially optimal for sparse graphs for approximation factors less than 3/2. It was, however, completely plausible that a 3/2-approximation is possible in linear time. In this work we conditionally rule out such a possibility by showing that unless SETH fails no O(m3/2−ε) time algorithm can achieve an approximation factor better than 5/3. Another fundamental set of graph parameters are the Eccentricities. The Eccentricity of a vertex v is the distance between v and the farthest vertex from v. Chechik et al. [SODA 2014] showed that the Eccentricities of all vertices can be approximated within a factor of 5/3 in Õ(m3/2) time and Abboud et al. [SODA 2016] showed that no O(n2−ε) algorithm can achieve better than 5/3 approximation in sparse graphs. We show that the runtime of the 5/3 approximation algorithm is also optimal by proving that under SETH, there is no O(m3/2−ε) algorithm that achieves a better than 9/5 approximation. We also show that no near-linear time algorithm can achieve a better than 2 approximation for the Eccentricities. This is the first lower bound in fine-grained complexity that addresses near-linear time computation. We show that our lower bound for near-linear time algorithms is essentially tight by giving an algorithm that approximates Eccentricities within a 2+δ factor in Õ(m/δ) time for any 0<δ<1. This beats all Eccentricity algorithms in Cairo et al. [SODA 2016] and is the first constant factor approximation for Eccentricities in directed graphs. To establish the above lower bounds we study the S-T Diameter problem: Given a graph and two subsets S and T of vertices, output the largest distance between a vertex in S and a vertex in T. We give new algorithms and show tight lower bounds that serve as a starting point for all other hardness results. Our lower bounds apply only to sparse graphs. We show that for dense graphs, there are near-linear time algorithms for S-T Diameter, Diameter and Eccentricities, with almost the same approximation guarantees as their Õ(m3/2) counterparts, improving upon the best known algorithms for dense graphs. @InProceedings{STOC18p267, author = {Arturs Backurs and Liam Roditty and Gilad Segal and Virginia Vassilevska Williams and Nicole Wein}, title = {Towards Tight Approximation Bounds for Graph Diameter and Eccentricities}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {267--280}, doi = {}, year = {2018}, } |
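The folklore baseline behind these bounds: one BFS from any vertex v gives ecc(v) ≤ Diameter ≤ 2·ecc(v), a linear-time 2-approximation, and the abstract's lower bounds say near-linear-time algorithms cannot beat factor 2 for Eccentricities under SETH. A sketch for unweighted connected graphs (names ours):

    from collections import deque

    def ecc(adj, s):
        # BFS eccentricity of s in an unweighted connected graph.
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        return max(dist.values())

    def diameter_2approx(adj):
        # For any vertex v: ecc(v) <= diameter <= 2 * ecc(v), by the
        # triangle inequality -- a linear-time factor-2 approximation.
        v = next(iter(adj))
        e = ecc(adj, v)
        return e, 2 * e    # lower and upper bounds on the diameter

    adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # a path; diameter 3
    print(diameter_2approx(adj))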
|
Ron, Dana |
STOC '18: "On Approximating the Number ..."
On Approximating the Number of k-Cliques in Sublinear Time
Talya Eden, Dana Ron, and C. Seshadhri (Tel Aviv University, Israel; University of California at Santa Cruz, USA) We study the problem of approximating the number of k-cliques in a graph when given query access to the graph. We consider the standard query model for general graphs via (1) degree queries, (2) neighbor queries and (3) pair queries. Let n denote the number of vertices in the graph, m the number of edges, and C_k the number of k-cliques. We design an algorithm that outputs a (1+ε)-approximation (with high probability) for C_k, whose expected query complexity and running time are O(n/C_k^{1/k} + m^{k/2}/C_k) · poly(log n, 1/ε, k). Hence, the complexity of the algorithm is sublinear in the size of the graph for C_k = ω(m^{k/2−1}). Furthermore, we prove a lower bound showing that the query complexity of our algorithm is essentially optimal (up to the dependence on log n, 1/ε and k). The previous results in this vein are by Feige (SICOMP 06) and by Goldreich and Ron (RSA 08) for edge counting (k=2) and by Eden et al. (FOCS 2015) for triangle counting (k=3). Our result matches the complexities of these results. The previous result by Eden et al. hinges on a certain amortization technique that works only for triangle counting, and does not generalize for larger cliques. We obtain a general algorithm that works for any k≥ 3 by designing a procedure that samples each k-clique incident to a given set S of vertices with approximately equal probability. The primary difficulty is in finding cliques incident to purely high-degree vertices, since random sampling within neighbors has a low success probability. This is achieved by an algorithm that samples uniform random high degree vertices and a careful tradeoff between estimating cliques incident purely to high-degree vertices and those that include a low-degree vertex. @InProceedings{STOC18p722, author = {Talya Eden and Dana Ron and C. Seshadhri}, title = {On Approximating the Number of k-Cliques in Sublinear Time}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {722--734}, doi = {}, year = {2018}, } |
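The quantity being estimated is easy to pin down with an exact (exponential-in-k) counter, useful as ground truth when experimenting with sublinear estimators; adjacency is a list of neighbour sets here (our convention):

    from itertools import combinations

    def count_k_cliques(adj, k):
        # Exact C_k by exhaustive enumeration over all k-subsets of vertices.
        return sum(1 for S in combinations(range(len(adj)), k)
                   if all(v in adj[u] for u, v in combinations(S, 2)))

    adj = [{1, 4}, {0, 2}, {1, 3}, {2, 4}, {3, 0}]   # a 5-cycle
    print(count_k_cliques(adj, 3))                    # 0: no triangles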
|
Rotenberg, Eva |
STOC '18: "Fast Fencing ..."
Fast Fencing
Mikkel Abrahamsen, Anna Adamaszek, Karl Bringmann, Vincent Cohen-Addad, Mehran Mehr, Eva Rotenberg, Alan Roytman, and Mikkel Thorup (University of Copenhagen, Denmark; Max Planck Institute for Informatics, Germany; Sorbonne University, France; UPMC, France; CNRS, France; LIP6, France; Eindhoven University of Technology, Netherlands; DTU, Denmark) We consider very natural ”fence enclosure” problems studied by Capoyleas, Rote, and Woeginger and Arkin, Khuller, and Mitchell in the early 90s. Given a set S of n points in the plane, we aim at finding a set of closed curves such that (1) each point is enclosed by a curve and (2) the total length of the curves is minimized. We consider two main variants. In the first variant, we pay a unit cost per curve in addition to the total length of the curves. An equivalent formulation of this version is that we have to enclose n unit disks, paying only the total length of the enclosing curves. In the other variant, we are allowed to use at most k closed curves and pay no cost per curve. For the variant with at most k closed curves, we present an algorithm that is polynomial in both n and k. For the variant with unit cost per curve, or unit disks, we present a near-linear time algorithm. Capoyleas, Rote, and Woeginger solved the problem with at most k curves in nO(k) time. Arkin, Khuller, and Mitchell used this to solve the unit cost per curve version in exponential time. At the time, they conjectured that the problem with k curves is NP-hard for general k. Our polynomial time algorithm refutes this unless P equals NP. @InProceedings{STOC18p564, author = {Mikkel Abrahamsen and Anna Adamaszek and Karl Bringmann and Vincent Cohen-Addad and Mehran Mehr and Eva Rotenberg and Alan Roytman and Mikkel Thorup}, title = {Fast Fencing}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {564--573}, doi = {}, year = {2018}, } |
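For k = 1 the optimal enclosure is the convex hull of the points, so its perimeter is a handy reference cost when experimenting with the variants above. A self-contained sketch via Andrew's monotone chain (helper names ours); the paper's algorithms handle the much harder multi-curve and unit-cost-per-curve variants:

    def hull_perimeter(points):
        # Perimeter of the convex hull of 2-D points: the cost of the
        # single-curve (k = 1) solution; multi-curve optima can only be cheaper.
        def cross(o, a, b):
            return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
        pts = sorted(set(points))
        lower, upper = [], []
        for p in pts:
            while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
                lower.pop()
            lower.append(p)
        for p in reversed(pts):
            while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
                upper.pop()
            upper.append(p)
        hull = lower[:-1] + upper[:-1]
        return sum(((hull[i][0]-hull[i-1][0])**2
                    + (hull[i][1]-hull[i-1][1])**2) ** 0.5
                   for i in range(len(hull)))

    print(hull_perimeter([(0, 0), (4, 0), (4, 3), (0, 3), (2, 1)]))  # 14.0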
|
Rothblum, Guy N. |
STOC '18: "Composable and Versatile Privacy ..."
Composable and Versatile Privacy via Truncated CDP
Mark Bun, Cynthia Dwork, Guy N. Rothblum, and Thomas Steinke (Princeton University, USA; Harvard University, USA; Weizmann Institute of Science, Israel; IBM Research, USA) We propose truncated concentrated differential privacy (tCDP), a refinement of differential privacy and of concentrated differential privacy. This new definition provides robust and efficient composition guarantees, supports powerful algorithmic techniques such as privacy amplification via sub-sampling, and enables more accurate statistical analyses. In particular, we show a central task for which the new definition enables exponential accuracy improvement. @InProceedings{STOC18p74, author = {Mark Bun and Cynthia Dwork and Guy N. Rothblum and Thomas Steinke}, title = {Composable and Versatile Privacy via Truncated CDP}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {74--86}, doi = {}, year = {2018}, } |
|
Roytman, Alan |
STOC '18: "Fast Fencing ..."
Fast Fencing
Mikkel Abrahamsen, Anna Adamaszek, Karl Bringmann, Vincent Cohen-Addad, Mehran Mehr, Eva Rotenberg, Alan Roytman, and Mikkel Thorup (University of Copenhagen, Denmark; Max Planck Institute for Informatics, Germany; Sorbonne University, France; UPMC, France; CNRS, France; LIP6, France; Eindhoven University of Technology, Netherlands; DTU, Denmark) We consider very natural ”fence enclosure” problems studied by Capoyleas, Rote, and Woeginger and Arkin, Khuller, and Mitchell in the early 90s. Given a set S of n points in the plane, we aim at finding a set of closed curves such that (1) each point is enclosed by a curve and (2) the total length of the curves is minimized. We consider two main variants. In the first variant, we pay a unit cost per curve in addition to the total length of the curves. An equivalent formulation of this version is that we have to enclose n unit disks, paying only the total length of the enclosing curves. In the other variant, we are allowed to use at most k closed curves and pay no cost per curve. For the variant with at most k closed curves, we present an algorithm that is polynomial in both n and k. For the variant with unit cost per curve, or unit disks, we present a near-linear time algorithm. Capoyleas, Rote, and Woeginger solved the problem with at most k curves in nO(k) time. Arkin, Khuller, and Mitchell used this to solve the unit cost per curve version in exponential time. At the time, they conjectured that the problem with k curves is NP-hard for general k. Our polynomial time algorithm refutes this unless P equals NP. @InProceedings{STOC18p564, author = {Mikkel Abrahamsen and Anna Adamaszek and Karl Bringmann and Vincent Cohen-Addad and Mehran Mehr and Eva Rotenberg and Alan Roytman and Mikkel Thorup}, title = {Fast Fencing}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {564--573}, doi = {}, year = {2018}, } |
|
Rubinstein, Aviad |
STOC '18: "Hardness of Approximate Nearest ..."
Hardness of Approximate Nearest Neighbor Search
Aviad Rubinstein (Harvard University, USA) We prove conditional near-quadratic running time lower bounds for approximate Bichromatic Closest Pair with Euclidean, Manhattan, Hamming, or edit distance. Specifically, unless the Strong Exponential Time Hypothesis (SETH) is false, for every δ>0 there exists a constant ε>0 such that computing a (1+ε)-approximation to the Bichromatic Closest Pair requires Ω(n^{2−δ}) time. In particular, this implies a near-linear query time lower bound for Approximate Nearest Neighbor search with polynomial preprocessing time. Our reduction uses the recently introduced Distributed PCP framework, but obtains improved efficiency using Algebraic Geometry (AG) codes. Efficient PCPs from AG codes have been constructed in other settings before, but our construction is the first to yield new hardness results. @InProceedings{STOC18p1260, author = {Aviad Rubinstein}, title = {Hardness of Approximate Nearest Neighbor Search}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1260--1268}, doi = {}, year = {2018}, } |
|
Rudra, Atri |
STOC '18: "General Strong Polarization ..."
General Strong Polarization
Jarosław Błasiok, Venkatesan Guruswami, Preetum Nakkiran, Atri Rudra, and Madhu Sudan (Harvard University, USA; Carnegie Mellon University, USA; SUNY Buffalo, USA) Arikan’s exciting discovery of polar codes has provided an altogether new way to efficiently achieve Shannon capacity. Given a (constant-sized) invertible matrix M, a family of polar codes can be associated with this matrix, and its ability to approach capacity follows from the polarization of an associated [0,1]-bounded martingale, namely its convergence in the limit to either 0 or 1 with probability 1. Arikan showed appropriate polarization of the martingale associated with the 2×2 matrix G_2 = ( 1 0 ; 1 1 ) to get capacity-achieving codes. His analysis was later extended to all matrices M which satisfy an obvious necessary condition for polarization. While Arikan’s theorem does not guarantee that the codes achieve capacity at small blocklengths (specifically in length which is a polynomial in 1/ε, where ε is the difference between the capacity of a channel and the rate of the code), it turns out that a “strong” analysis of the polarization of the underlying martingale would lead to such constructions. Indeed, for the martingale associated with G_2 such a strong polarization was shown in two independent works ([Guruswami and Xia, IEEE IT ’15] and [Hassani et al., IEEE IT ’14]), thereby resolving a major theoretical challenge associated with the efficient attainment of Shannon capacity. In this work we extend the result above to cover martingales associated with all matrices that satisfy the necessary condition for (weak) polarization. In addition to being vastly more general, our proofs of strong polarization are (in our view) also much simpler and modular. Key to our proof is a notion of local polarization that only depends on the evolution of the martingale in a single time step. We show that local polarization always implies strong polarization. We then apply relatively simple reasoning about conditional entropies to prove local polarization in very general settings. Specifically, our result shows strong polarization over all prime fields and leads to efficient capacity-achieving source codes for compressing arbitrary i.i.d. sources, and capacity-achieving channel codes for arbitrary symmetric memoryless channels. @InProceedings{STOC18p485, author = {Jarosław Błasiok and Venkatesan Guruswami and Preetum Nakkiran and Atri Rudra and Madhu Sudan}, title = {General Strong Polarization}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {485--492}, doi = {}, year = {2018}, } |
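To make the polarization phenomenon concrete: for the binary erasure channel, the martingale associated with G_2 takes a particularly simple form, with the erasure parameter z splitting into z^2 or 2z − z^2 with equal probability (so its expectation is preserved). The sketch below simulates this standard special case; it illustrates weak polarization only and is not the paper's general construction (function and variable names are ours):

```python
import random

def polarize_bec(z0=0.5, steps=30, trials=10000):
    """Simulate Arikan's polarization martingale for the binary erasure
    channel: each step maps the erasure parameter z to z^2 ("minus"
    transform) or 2z - z^2 ("plus" transform) with probability 1/2 each.
    Note E[next z] = 0.5*z^2 + 0.5*(2z - z^2) = z, a [0,1]-bounded martingale."""
    polarized = 0
    for _ in range(trials):
        z = z0
        for _ in range(steps):
            z = z * z if random.random() < 0.5 else 2 * z - z * z
        if z < 1e-6 or z > 1 - 1e-6:  # has this path converged toward 0 or 1?
            polarized += 1
    return polarized / trials

print(polarize_bec())  # fraction of paths near {0,1}; tends to 1 as steps grow
```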
|
S., Karthik C. |
STOC '18: "On the Parameterized Complexity ..."
On the Parameterized Complexity of Approximating Dominating Set
Karthik C. S., Bundit Laekhanukit, and Pasin Manurangsi (Weizmann Institute of Science, Israel; Max Planck Institute for Informatics, Germany; Shanghai University of Finance and Economics, China; University of California at Berkeley, USA) We study the parameterized complexity of approximating the k-Dominating Set (domset) problem where an integer k and a graph G on n vertices are given as input, and the goal is to find a dominating set of size at most F(k) · k whenever the graph G has a dominating set of size k. When such an algorithm runs in time T(k)poly(n) (i.e., FPT-time) for some computable function T, it is said to be an F(k)-FPT-approximation algorithm for k-domset. Whether such an algorithm exists is listed in the seminal book of Downey and Fellows (2013) as one of the “most infamous” open problems in Parameterized Complexity. This work gives an almost complete answer to this question by showing the non-existence of such an algorithm under W[1]≠FPT and further providing tighter running time lower bounds under stronger hypotheses. Specifically, we prove the following for all computable functions T, F and every constant ε > 0: (i) Assuming W[1]≠FPT, there is no F(k)-FPT-approximation algorithm for k-domset, (ii) Assuming the Exponential Time Hypothesis (ETH), there is no F(k)-approximation algorithm for k-domset that runs in T(k)·n^{o(k)} time, (iii) Assuming the Strong Exponential Time Hypothesis (SETH), for every integer k ≥ 2, there is no F(k)-approximation algorithm for k-domset that runs in T(k)·n^{k−ε} time, (iv) Assuming the k-SUM Hypothesis, for every integer k ≥ 3, there is no F(k)-approximation algorithm for k-domset that runs in T(k)·n^{⌈k/2⌉−ε} time. Previously, only constant-ratio FPT-approximation algorithms were ruled out under W[1]≠FPT, and (log^{1/4−ε} k)-FPT-approximation algorithms were ruled out under ETH [Chen and Lin, FOCS 2016]. Recently, the non-existence of an F(k)-FPT-approximation algorithm for any function F was shown under Gap-ETH [Chalermsook et al., FOCS 2017]. Note that, to the best of our knowledge, no running time lower bound of the form n^{δk} for any absolute constant δ > 0 was known before, even for any constant approximation ratio. Our results are obtained by establishing a connection between communication complexity and hardness of approximation, generalizing the ideas from a recent breakthrough work of Abboud et al. [FOCS 2017]. Specifically, we show that to prove hardness of approximation of a certain parameterized variant of the label cover problem, it suffices to devise a specific protocol for a communication problem that depends on which hypothesis we rely on. Each of these communication problems turns out to be either a well studied problem or a variant of one; this allows us to easily apply known techniques to solve them. @InProceedings{STOC18p1283, author = {Karthik C. S. and Bundit Laekhanukit and Pasin Manurangsi}, title = {On the Parameterized Complexity of Approximating Dominating Set}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1283--1296}, doi = {}, year = {2018}, } |
|
Safra, Muli |
STOC '18: "Towards a Proof of the 2-to-1 ..."
Towards a Proof of the 2-to-1 Games Conjecture?
Irit Dinur, Subhash Khot, Guy Kindler, Dor Minzer, and Muli Safra (Weizmann Institute of Science, Israel; New York University, USA; Hebrew University of Jerusalem, Israel; Tel Aviv University, Israel) We present a polynomial time reduction from gap-3LIN to label cover with 2-to-1 constraints. In the “yes” case the fraction of satisfied constraints is at least 1−ε, and in the “no” case we show that this fraction is at most ε, assuming a certain (new) combinatorial hypothesis on the Grassmann graph. In other words, we describe a combinatorial hypothesis that implies the 2-to-1 conjecture with imperfect completeness. The companion submitted paper [Dinur, Khot, Kindler, Minzer and Safra, STOC 2018] makes some progress towards proving this hypothesis. Our work builds on earlier work by a subset of the authors [Khot, Minzer and Safra, STOC 2017], where a slightly different hypothesis was used to obtain hardness of approximating vertex cover to within a factor of √2−ε. The most important implication of this work is (assuming the hypothesis) an NP-hardness gap of 1/2−ε vs. ε for unique games. In addition, we derive optimal NP-hardness for approximating the max-cut-gain problem, NP-hardness of coloring an almost 4-colorable graph with any constant number of colors, and the same √2−ε NP-hardness for approximate vertex cover that was already obtained based on a slightly different hypothesis. Recent progress towards proving our hypothesis [Barak, Kothari and Steurer, ECCC TR18-077], [Dinur, Khot, Kindler, Minzer and Safra, STOC 2018] directly implies some new unconditional NP-hardness results. These include new points of NP-hardness for unique games and for 2-to-1 and 2-to-2 games. More recently, the full version of our hypothesis was proven [Khot, Minzer and Safra, ECCC TR18-006]. @InProceedings{STOC18p376, author = {Irit Dinur and Subhash Khot and Guy Kindler and Dor Minzer and Muli Safra}, title = {Towards a Proof of the 2-to-1 Games Conjecture?}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {376--389}, doi = {}, year = {2018}, } STOC '18: "On Non-optimally Expanding ..." On Non-optimally Expanding Sets in Grassmann Graphs Irit Dinur, Subhash Khot, Guy Kindler, Dor Minzer, and Muli Safra (Weizmann Institute of Science, Israel; New York University, USA; Hebrew University of Jerusalem, Israel; Tel Aviv University, Israel) We study the structure of non-expanding sets in the Grassmann graph. We put forth a hypothesis stating that every small set whose expansion is smaller than 1−δ must be correlated with one of a specified list of sets which are isomorphic to smaller Grassmann graphs. We develop a framework of Fourier analysis for analyzing functions over the Grassmann graph, and prove that our hypothesis holds for all sets whose expansion is below 7/8. In the companion submitted paper [Dinur, Khot, Kindler, Minzer and Safra, STOC 2018], the authors show that a linearity agreement hypothesis implies an NP-hardness gap of 1/2−ε vs. ε for unique games and other inapproximability results. In [Barak, Kothari and Steurer, ECCC TR18-077], the authors show that the hypothesis in this work implies the linearity agreement hypothesis of [Dinur, Khot, Kindler, Minzer and Safra, STOC 2018]. Combined with our main theorem here, this proves a version of the linearity agreement hypothesis with certain specific parameters. Short of proving the entire hypothesis, this nevertheless suffices for getting new unconditional NP-hardness gaps for label cover with 2-to-1 and unique constraints.
Our Expansion Hypothesis has subsequently been proved in its full form [Khot, Minzer and Safra, ECCC TR18-006], thereby proving the agreement hypothesis of [Dinur, Khot, Kindler, Minzer and Safra, STOC 2018] and completing the proof of the 2-to-1 Games Conjecture (albeit with imperfect completeness). @InProceedings{STOC18p940, author = {Irit Dinur and Subhash Khot and Guy Kindler and Dor Minzer and Muli Safra}, title = {On Non-optimally Expanding Sets in Grassmann Graphs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {940--951}, doi = {}, year = {2018}, } |
|
Sahai, Amit |
STOC '18: "Succinct Delegation for Low-Space ..."
Succinct Delegation for Low-Space Non-deterministic Computation
Saikrishna Badrinarayanan, Yael Tauman Kalai, Dakshita Khurana, Amit Sahai, and Daniel Wichs (University of California at Los Angeles, USA; Microsoft Research, USA; Massachusetts Institute of Technology, USA; Northeastern University, USA) We construct a delegation scheme for verifying non-deterministic computations, with complexity proportional only to the non-deterministic space of the computation. Specifically, letting n denote the input length, we construct a delegation scheme for any language verifiable in non-deterministic time and space (T(n), S(n)) with communication complexity poly(S(n)), verifier runtime n·polylog(T(n)) + poly(S(n)), and prover runtime poly(T(n)). Our scheme consists of only two messages and has adaptive soundness, assuming the existence of a sub-exponentially secure private information retrieval (PIR) scheme, which can be instantiated under standard (albeit, sub-exponential) cryptographic assumptions, such as the sub-exponential LWE assumption. Specifically, the verifier publishes a (short) public key ahead of time, and this key can be used by any prover to non-interactively prove the correctness of any adaptively chosen non-deterministic computation. Such a scheme is referred to as a non-interactive delegation scheme. Our scheme is privately verifiable, where the verifier needs the corresponding secret key in order to verify proofs. Prior to our work, such results were known only in the Random Oracle Model, or under knowledge assumptions. Our results yield succinct non-interactive arguments based on sub-exponential LWE, for many natural languages believed to be outside of P. @InProceedings{STOC18p709, author = {Saikrishna Badrinarayanan and Yael Tauman Kalai and Dakshita Khurana and Amit Sahai and Daniel Wichs}, title = {Succinct Delegation for Low-Space Non-deterministic Computation}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {709--721}, doi = {}, year = {2018}, } |
|
Sandeep, Sai |
STOC '18: "Constant Approximation for ..."
Constant Approximation for k-Median and k-Means with Outliers via Iterative Rounding
Ravishankar Krishnaswamy, Shi Li, and Sai Sandeep (Microsoft Research, India; SUNY Buffalo, USA) In this paper, we present a new iterative rounding framework for many clustering problems. Using this, we obtain an (α_1 + ε ≤ 7.081 + ε)-approximation algorithm for k-median with outliers, greatly improving upon the large implicit constant approximation ratio of Chen. For k-means with outliers, we give an (α_2 + ε ≤ 53.002 + ε)-approximation, which is the first O(1)-approximation for this problem. The iterative algorithm framework is very versatile; we show how it can be used to give α_1- and (α_1 + ε)-approximation algorithms for the matroid and knapsack median problems respectively, improving upon the previous best approximation ratios of 8 due to Swamy and 17.46 due to Byrka et al. The natural LP relaxation for the k-median/k-means with outliers problem has an unbounded integrality gap. In spite of this negative result, our iterative rounding framework shows that we can round an LP solution to an almost-integral solution of small cost, in which we have at most two fractionally open facilities. Thus, the LP integrality gap arises due to the gap between almost-integral and fully-integral solutions. Then, using a pre-processing procedure, we show how to convert an almost-integral solution to a fully-integral solution losing only a constant factor in the approximation ratio. By further using a sparsification technique, the additive factor loss incurred by the conversion can be reduced to any ε > 0. @InProceedings{STOC18p646, author = {Ravishankar Krishnaswamy and Shi Li and Sai Sandeep}, title = {Constant Approximation for k-Median and k-Means with Outliers via Iterative Rounding}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {646--659}, doi = {}, year = {2018}, } |
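For reference, the natural LP relaxation mentioned in the abstract, for k-median with at most m outliers, is usually written as below. The notation (y_i for facility openings, x_{ij} for connections, m for the outlier budget) is ours, and this is the textbook relaxation rather than necessarily the exact program the paper rounds:

```latex
% Natural LP relaxation for k-median with at most m outliers: open
% facilities fractionally (y_i), connect clients fractionally (x_{ij}),
% and allow up to m clients to remain (fractionally) unserved.
\begin{aligned}
\min \;& \textstyle\sum_{i,j} d(i,j)\, x_{ij} \\
\text{s.t.}\;& \textstyle\sum_i x_{ij} \le 1 \quad \forall j,
\qquad x_{ij} \le y_i \quad \forall i, j, \\
& \textstyle\sum_i y_i \le k,
\qquad \textstyle\sum_j \bigl(1 - \sum_i x_{ij}\bigr) \le m,
\qquad x, y \ge 0 .
\end{aligned}
```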
|
Sankowski, Piotr |
STOC '18: "Round Compression for Parallel ..."
Round Compression for Parallel Matching Algorithms
Artur Czumaj, Jakub Łącki, Aleksander Mądry, Slobodan Mitrović, Krzysztof Onak, and Piotr Sankowski (University of Warwick, UK; Google Research, USA; Massachusetts Institute of Technology, USA; EPFL, Switzerland; IBM Research, USA; University of Warsaw, Poland) For over a decade now we have been witnessing the success of massive parallel computation (MPC) frameworks, such as MapReduce, Hadoop, Dryad, or Spark. One of the reasons for their success is the fact that these frameworks are able to accurately capture the nature of large-scale computation. In particular, compared to the classic distributed algorithms or PRAM models, these frameworks allow for much more local computation. The fundamental question that arises in this context is: can we leverage this additional power to obtain even faster parallel algorithms? A prominent example here is the maximum matching problem, one of the most classic graph problems. It is well known that in the PRAM model one can compute a 2-approximate maximum matching in O(log n) rounds. However, the exact complexity of this problem in the MPC framework is still far from understood. Lattanzi et al. (SPAA 2011) showed that if each machine has n^{1+Ω(1)} memory, this problem can also be solved 2-approximately in a constant number of rounds. These techniques, as well as the approaches developed in the follow-up work, seem, however, to get stuck in a fundamental way at roughly O(log n) rounds once we enter the (at most) near-linear memory regime. It is thus entirely possible that in this regime, which captures in particular the case of sparse graph computations, the best MPC round complexity matches what one can already get in the PRAM model, without the need to take advantage of the extra local computation power. In this paper, we finally refute that possibility. That is, we break the above O(log n) round complexity bound even in the case of slightly sublinear memory per machine. In fact, our improvement here is almost exponential: we are able to deliver a (2+ε)-approximate maximum matching, for any fixed constant ε>0, in O((log log n)^2) rounds. To establish our result we need to deviate from the previous work in two important ways that are crucial for exploiting the power of the MPC model, as compared to the PRAM model. Firstly, we use vertex-based graph partitioning, instead of the edge-based approaches that were utilized so far. Secondly, we develop a technique of round compression. This technique enables one to take a (distributed) algorithm that computes an O(1)-approximation of maximum matching in O(log n) independent PRAM phases and implement a super-constant number of these phases in only a constant number of MPC rounds. @InProceedings{STOC18p471, author = {Artur Czumaj and Jakub Łącki and Aleksander Mądry and Slobodan Mitrović and Krzysztof Onak and Piotr Sankowski}, title = {Round Compression for Parallel Matching Algorithms}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {471--484}, doi = {}, year = {2018}, } |
|
Saranurak, Thatchaphol |
STOC '18: "Smooth Heaps and a Dual View ..."
Smooth Heaps and a Dual View of Self-Adjusting Data Structures
László Kozma and Thatchaphol Saranurak (Eindhoven University of Technology, Netherlands; KTH, Sweden) We present a new connection between self-adjusting binary search trees (BSTs) and heaps, two fundamental, extensively studied, and practically relevant families of data structures (Allen, Munro, 1978; Sleator, Tarjan, 1983; Fredman, Sedgewick, Sleator, Tarjan, 1986; Wilber, 1989; Fredman, 1999; Iacono, Özkan, 2014). Roughly speaking, we map an arbitrary heap algorithm within a broad and natural model to a corresponding BST algorithm with the same cost on a dual sequence of operations (i.e., the same sequence with the roles of time and key-space switched). This is the first general transformation between the two families of data structures. There is a rich theory of dynamic optimality for BSTs (i.e., the theory of competitiveness between BST algorithms). The lack of an analogous theory for heaps has been noted in the literature (e.g., Pettie, 2005, 2008). Through our connection, we transfer all instance-specific lower bounds known for BSTs to a general model of heaps, initiating a theory of dynamic optimality for heaps. On the algorithmic side, we obtain a new, simple and efficient heap algorithm, which we call the smooth heap. We show the smooth heap to be the heap-counterpart of Greedy, the BST algorithm with the strongest proven and conjectured properties from the literature, conjectured to be instance-optimal (Lucas, 1988; Munro, 2000; Demaine et al., 2009). Assuming the optimality of Greedy, the smooth heap is also optimal within our model of heap algorithms. Intriguingly, the smooth heap, although derived from a non-practical BST algorithm, is simple and easy to implement (e.g., it stores no auxiliary data besides the keys and tree pointers). It can be seen as a variation on the popular pairing heap data structure, extending it with a “power-of-two-choices” type of heuristic. For the smooth heap we obtain instance-specific upper bounds, with applications in adaptive sorting, and we see it as a promising candidate for the long-standing question of a simpler alternative to Fibonacci heaps. @InProceedings{STOC18p801, author = {László Kozma and Thatchaphol Saranurak}, title = {Smooth Heaps and a Dual View of Self-Adjusting Data Structures}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {801--814}, doi = {}, year = {2018}, } |
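Since the smooth heap is described as a variation on the pairing heap, a minimal pairing-heap sketch may help fix ideas. This is the classical structure being extended, not the smooth heap itself, and the function names are ours:

```python
class Node:
    def __init__(self, key):
        self.key, self.children = key, []

def meld(a, b):
    """Link two pairing-heap roots; the root with the larger key becomes a child."""
    if a is None: return b
    if b is None: return a
    if b.key < a.key: a, b = b, a
    a.children.append(b)
    return a

def insert(root, key):
    return meld(root, Node(key))

def delete_min(root):
    """Classical two-pass combine: pair up the children left to right,
    then meld the pairs right to left."""
    kids = root.children
    paired = [meld(kids[i], kids[i + 1] if i + 1 < len(kids) else None)
              for i in range(0, len(kids), 2)]
    new_root = None
    for h in reversed(paired):
        new_root = meld(new_root, h)
    return root.key, new_root

root = None
for k in [5, 3, 8, 1]:
    root = insert(root, k)
smallest, root = delete_min(root)  # smallest == 1
```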
|
Saxena, Nitin |
STOC '18: "Bootstrapping Variables in ..."
Bootstrapping Variables in Algebraic Circuits
Manindra Agrawal, Sumanta Ghosh, and Nitin Saxena (IIT Kanpur, India) We show that for the blackbox polynomial identity testing (PIT) problem it suffices to study circuits that depend only on the first extremely few variables. One only needs to consider size-s degree-s circuits that depend on the first log^{∘c} s variables (where c is a constant and log^{∘c} denotes the c-fold composition of the logarithm). Thus, a hitting-set generator (hsg) manifests a bootstrapping behavior: a partial hsg against very few variables can be efficiently grown to a complete hsg. A boolean analog, or a pseudorandom generator property of this type, is unheard-of. Our idea is to use the partial hsg and its annihilator polynomial to efficiently bootstrap the hsg exponentially with respect to the number of variables. This is repeated c times in an efficient way. Pushing the envelope further we show that: (1) a quadratic-time blackbox PIT for 6913-variate degree-s size-s polynomials will lead to a “near”-complete derandomization of PIT, and (2) a blackbox PIT for n-variate degree-s size-s circuits in s^{n^δ} time, for δ<1/2, will lead to a “near”-complete derandomization of PIT (in contrast, s^n time is trivial). Our second idea is to study depth-4 circuits that depend on constantly many variables. We show that a polynomial-time computable, O(s^{1.49})-degree hsg for trivariate depth-4 circuits bootstraps to a quasipolynomial-time hsg for general poly-degree circuits, and implies a lower bound that is a bit stronger than that of Kabanets and Impagliazzo (STOC 2003). @InProceedings{STOC18p1166, author = {Manindra Agrawal and Sumanta Ghosh and Nitin Saxena}, title = {Bootstrapping Variables in Algebraic Circuits}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1166--1179}, doi = {}, year = {2018}, } STOC '18: "Discovering the Roots: Uniform ..." Discovering the Roots: Uniform Closure Results for Algebraic Classes under Factoring Pranjal Dutta, Nitin Saxena, and Amit Sinhababu (Chennai Mathematical Institute, India; IIT Kanpur, India) Newton iteration (NI) is an almost 350-year-old recursive formula that approximates a simple root of a polynomial quite rapidly. We generalize it to a matrix recurrence (allRootsNI) that approximates all the roots simultaneously. In this form, the process yields a better circuit complexity in the case when the number of roots r is small but the multiplicities are exponentially large. Our method sets up a linear system in r unknowns and iteratively builds the roots as formal power series. For an algebraic circuit f(x_1,…,x_n) of size s we prove that each factor has size at most a polynomial in: s and the degree of the squarefree part of f. Consequently, if f_1 is a 2^{Ω(n)}-hard polynomial, then any nonzero multiple ∏_i f_i^{e_i} is equally hard for arbitrary positive e_i’s, assuming that ∑_i deg(f_i) is at most 2^{O(n)}. It is an old open question whether the class of poly(n)-sized formulas (resp. algebraic branching programs) is closed under factoring. We show that given a polynomial f of degree n^{O(1)} and formula (resp. ABP) size n^{O(log n)} we can find a similar-size formula (resp. ABP) factor in randomized poly(n^{log n}) time. Consequently, if the determinant requires formulas of size n^{Ω(log n)}, then the same can be said about any of its nonzero multiples. As part of our proofs, we identify a new property of multivariate polynomial factorization. We show that under a random linear transformation τ, f(τ x) completely factors via power series roots. Moreover, the factorization adapts well to circuit complexity analysis.
These techniques, together with allRootsNI, help us make progress towards the old open problems, supplementing the large body of classical results and concepts in algebraic circuit factorization (e.g., Zassenhaus, J. Number Theory 1969; Kaltofen, STOC 1985–87; Bürgisser, FOCS 2001). @InProceedings{STOC18p1152, author = {Pranjal Dutta and Nitin Saxena and Amit Sinhababu}, title = {Discovering the Roots: Uniform Closure Results for Algebraic Classes under Factoring}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1152--1165}, doi = {}, year = {2018}, } |
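For context, the classical Newton iteration that allRootsNI generalizes refines a simple root quadratically, and its formal power-series analogue (Hensel-style lifting) doubles the precision of a root at every step. The statements below are the standard ones, in our notation:

```latex
% Classical Newton iteration for a simple root of f:
z_{t+1} \;=\; z_t - \frac{f(z_t)}{f'(z_t)} .
% Power-series analogue: if f(y_t) \equiv 0 \pmod{x^{2^t}} and f'(y_t)
% is invertible as a power series, the same update yields
f(y_{t+1}) \;\equiv\; 0 \pmod{x^{2^{t+1}}},
% i.e. each iteration doubles the x-adic precision of the root.
```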
|
Saxena, Raghuvansh |
STOC '18: "Interactive Coding over the ..."
Interactive Coding over the Noisy Broadcast Channel
Klim Efremenko, Gillat Kol, and Raghuvansh Saxena (Ben-Gurion University of the Negev, Israel; Princeton University, USA) A set of n players, each holding a private input bit, communicate over a noisy broadcast channel. Their mutual goal is for all players to learn all inputs. At each round one of the players broadcasts a bit to all the other players, and the bit received by each player is flipped with a fixed constant probability (independently for each recipient). How many rounds are needed? This problem was first suggested by El Gamal in 1984. In 1988, Gallager gave an elegant noise-resistant protocol requiring only O(n log log n) rounds. The problem got resolved in 2005 by a seminal paper of Goyal, Kindler, and Saks, proving that Gallager’s protocol is essentially optimal. We revisit the above noisy broadcast problem and show that O(n) rounds suffice. This is possible due to a relaxation of the model assumed by the previous works. We no longer demand that exactly one player broadcasts in every round, but rather allow any number of players to broadcast. However, if it is not the case that exactly one player chooses to broadcast, each of the other players gets an adversarially chosen bit. We generalize the above result and initiate the study of interactive coding over the noisy broadcast channel. We show that any interactive protocol that works over the noiseless broadcast channel can be simulated over our restrictive noisy broadcast model with constant blowup of the communication. Our results also establish that modern techniques for interactive coding can help us make progress on the classical problems. @InProceedings{STOC18p507, author = {Klim Efremenko and Gillat Kol and Raghuvansh Saxena}, title = {Interactive Coding over the Noisy Broadcast Channel}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {507--520}, doi = {}, year = {2018}, } |
|
Schieber, Baruch |
STOC '18: "Fully Dynamic Maximal Independent ..."
Fully Dynamic Maximal Independent Set with Sublinear Update Time
Sepehr Assadi, Krzysztof Onak, Baruch Schieber, and Shay Solomon (University of Pennsylvania, USA; IBM Research, USA) A maximal independent set (MIS) can be maintained in an evolving m-edge graph by simply recomputing it from scratch in O(m) time after each update. But can it be maintained in time sublinear in m in fully dynamic graphs? We answer this fundamental open question in the affirmative. We present a deterministic algorithm with amortized update time O(min{Δ, m^{3/4}}), where Δ is a fixed bound on the maximum degree in the graph and m is the (dynamically changing) number of edges. We further present a distributed implementation of our algorithm with O(min{Δ, m^{3/4}}) amortized message complexity, and O(1) amortized round complexity and adjustment complexity (the number of vertices that change their output after each update). This strengthens a similar result by Censor-Hillel, Haramaty, and Karnin (PODC’16) that required an assumption of a non-adaptive oblivious adversary. @InProceedings{STOC18p815, author = {Sepehr Assadi and Krzysztof Onak and Baruch Schieber and Shay Solomon}, title = {Fully Dynamic Maximal Independent Set with Sublinear Update Time}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {815--826}, doi = {}, year = {2018}, } |
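The trivial baseline mentioned at the start of the abstract, recomputing an MIS from scratch after each update, is a single greedy pass in O(n + m) time. A minimal sketch for contrast with the paper's sublinear-update algorithm (which is not reproduced here); names are ours:

```python
def greedy_mis(adj):
    """Recompute a maximal independent set from scratch in O(n + m):
    scan vertices in a fixed order, adding each vertex none of whose
    neighbors was already taken. `adj` maps each vertex to its neighbor set."""
    in_mis = set()
    for v in adj:
        if all(u not in in_mis for u in adj[v]):
            in_mis.add(v)
    return in_mis

adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(greedy_mis(adj))  # {0, 2}: independent, and every vertex is dominated
```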
|
Schild, Aaron |
STOC '18: "An Almost-Linear Time Algorithm ..."
An Almost-Linear Time Algorithm for Uniform Random Spanning Tree Generation
Aaron Schild (University of California at Berkeley, USA) We give an m^{1+o(1)} β^{o(1)}-time algorithm for generating uniformly random spanning trees in weighted graphs with max-to-min weight ratio β. In the process, we illustrate how fundamental tradeoffs in graph partitioning can be overcome by eliminating vertices from a graph using Schur complements of the associated Laplacian matrix. Our starting point is the Aldous-Broder algorithm, which samples a random spanning tree using a random walk. As in prior work, we use fast Laplacian linear system solvers to shortcut the random walk from a vertex v to the boundary of a set of vertices assigned to v called a “shortcutter.” We depart from prior work by introducing a new way of employing Laplacian solvers to shortcut the walk. To bound the amount of shortcutting work, we show that most random walk steps occur far away from an unvisited vertex. We apply this observation by charging uses of a shortcutter S to random walk steps in the Schur complement obtained by eliminating all vertices in S that are not assigned to it. @InProceedings{STOC18p214, author = {Aaron Schild}, title = {An Almost-Linear Time Algorithm for Uniform Random Spanning Tree Generation}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {214--227}, doi = {}, year = {2018}, } |
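The Aldous-Broder starting point is easy to state and implement for unweighted graphs: walk randomly and keep, for every vertex, the edge along which it was first entered. The sketch below is the classical version, which runs in expected cover time; the paper's contribution, shortcutting the walk with Laplacian solvers and Schur complements, is not attempted here:

```python
import random

def aldous_broder(adj, start=0):
    """Sample a uniformly random spanning tree of a connected unweighted
    graph: random-walk from `start`; each vertex's first-entry edge joins
    the tree. Stops once every vertex has been visited."""
    visited = {start}
    tree = []
    v = start
    while len(visited) < len(adj):
        u = random.choice(adj[v])
        if u not in visited:          # first visit: record the entering edge
            visited.add(u)
            tree.append((v, u))
        v = u
    return tree

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
print(aldous_broder(adj))  # 3 edges forming a spanning tree
```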
|
Schulman, Leonard J. |
STOC '18: "Explicit Binary Tree Codes ..."
Explicit Binary Tree Codes with Polylogarithmic Size Alphabet
Gil Cohen, Bernhard Haeupler, and Leonard J. Schulman (Princeton University, USA; Carnegie Mellon University, USA; California Institute of Technology, USA) This paper makes progress on the problem of explicitly constructing a binary tree code with constant distance and constant alphabet size. We give an explicit binary tree code with constant distance and alphabet size poly(log n), where n is the depth of the tree. This is the first improvement over a two-decade-old construction that has an exponentially larger alphabet of size poly(n). At the core of our construction is the first explicit tree code with constant rate and constant distance, though with non-constant arity, a result of independent interest. This construction adapts the polynomial interpolation framework to the online setting. @InProceedings{STOC18p535, author = {Gil Cohen and Bernhard Haeupler and Leonard J. Schulman}, title = {Explicit Binary Tree Codes with Polylogarithmic Size Alphabet}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {535--544}, doi = {}, year = {2018}, } |
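As a reminder of the object being constructed, here is the standard definition of a tree code's distance property, in our notation (treat the exact normalization as an assumption):

```latex
% A tree code over alphabet Sigma encodes bit strings online:
% C : {0,1}^{<= n} -> Sigma^{<= n}, where the i-th output symbol depends
% only on the first i input bits. It has distance delta if, for every
% pair of equal-length inputs x, y that first differ at position h,
\Delta\bigl( C(x)_{[h..\ell]},\; C(y)_{[h..\ell]} \bigr) \;\ge\; \delta\,(\ell - h + 1),
% where \ell = |x| = |y| and \Delta is Hamming distance: after the
% divergence point, the two encodings disagree on a delta fraction of symbols.
```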
|
Schweitzer, Pascal |
STOC '18: "An Exponential Lower Bound ..."
An Exponential Lower Bound for Individualization-Refinement Algorithms for Graph Isomorphism
Daniel Neuen and Pascal Schweitzer (RWTH Aachen University, Germany; TU Kaiserslautern, Germany) The individualization-refinement paradigm provides a strong toolbox for testing isomorphism of two graphs and indeed, the currently fastest implementations of isomorphism solvers all follow this approach. While these solvers are fast in practice, from a theoretical point of view, no general lower bounds concerning the worst case complexity of these tools are known. In fact, it is an open question what the running time of individualization-refinement algorithms is. For all we know some of the algorithms could have polynomial running time. In this work we give a negative answer to this question and construct a family of graphs on which algorithms based on the individualization-refinement paradigm require exponential time. Contrary to a previous construction of Miyazaki, which only applies to a specific implementation within the individualization-refinement framework, our construction is immune to changing the cell selector, the refinement operator, the invariant that is used, or adding various heuristic invariants to the algorithm. In fact, our graphs also provide exponential lower bounds in the case when the k-dimensional Weisfeiler-Leman algorithm is used to replace the 1-dimensional Weisfeiler-Leman algorithm (often called color refinement) that is normally used. Finally, the arguments even work when the entire automorphism group of the inputs is initially provided to the algorithm. The arguments apply to isomorphism testing algorithms as well as canonization algorithms within the framework. @InProceedings{STOC18p138, author = {Daniel Neuen and Pascal Schweitzer}, title = {An Exponential Lower Bound for Individualization-Refinement Algorithms for Graph Isomorphism}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {138--150}, doi = {}, year = {2018}, } |
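The 1-dimensional Weisfeiler-Leman procedure (color refinement) that individualization-refinement solvers repeatedly invoke is short enough to sketch; this is the standard routine, with variable names of our choosing:

```python
def color_refinement(adj, colors=None):
    """1-dimensional Weisfeiler-Leman: repeatedly replace each vertex's
    color by the pair (old color, sorted multiset of neighbor colors)
    until the partition into color classes stabilizes."""
    colors = colors or {v: 0 for v in adj}
    while True:
        signature = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
                     for v in adj}
        # canonically rename the signatures to small integers
        relabel = {sig: i for i, sig in enumerate(sorted(set(signature.values())))}
        new_colors = {v: relabel[signature[v]] for v in adj}
        if new_colors == colors:   # stable partition reached
            return colors
        colors = new_colors

path = {0: [1], 1: [0, 2], 2: [1]}   # path on 3 vertices
print(color_refinement(path))        # endpoints share a color; the middle differs
```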
|
Schwieterman, Robert |
STOC '18: "Incomplete Nested Dissection ..."
Incomplete Nested Dissection
Rasmus Kyng, Richard Peng, Robert Schwieterman, and Peng Zhang (Simons Institute for the Theory of Computing, Berkeley, USA; Georgia Tech, USA) We present an asymptotically faster algorithm for solving linear systems in well-structured 3-dimensional truss stiffness matrices. These linear systems arise from linear elasticity problems, and can be viewed as extensions of graph Laplacians into higher dimensions. Faster solvers for the 2-D variants of such systems have been studied using generalizations of tools for solving graph Laplacians [Daitch-Spielman CSC’07, Shklarski-Toledo SIMAX’08]. Given a 3-dimensional truss over n vertices which is formed from a union of k convex structures (tetrahedral meshes) with bounded aspect ratios, whose individual tetrahedrons are also in some sense well-conditioned, our algorithm solves a linear system in the associated stiffness matrix up to accuracy ε in time O(k^{1/3} n^{5/3} log(1/ε)). This asymptotically improves upon the O(n^2) running time of Nested Dissection for all k ≪ n. We also give a result that improves on Nested Dissection even when we allow any aspect ratio for each of the k convex structures (but we still require well-conditioned individual tetrahedrons). In this regime, we improve on Nested Dissection for k ≪ n^{1/44}. The key idea of our algorithm is to combine nested dissection and support theory. Both of these techniques for solving linear systems are well studied, but usually separately. Our algorithm decomposes a 3-dimensional truss into separate and balanced regions with small boundaries. We then bound the spectrum of each such region separately, and utilize such bounds to obtain improved algorithms by preconditioning with partial states of separator-based Gaussian elimination. @InProceedings{STOC18p404, author = {Rasmus Kyng and Richard Peng and Robert Schwieterman and Peng Zhang}, title = {Incomplete Nested Dissection}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {404--417}, doi = {}, year = {2018}, } |
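Partial Gaussian elimination, in this paper as in the spanning-tree work above, is expressed through the Schur complement; the identity below is standard linear algebra, stated in our notation:

```latex
% Partition a symmetric matrix L by the eliminated index set F and the
% kept index set C. Gaussian elimination of the F-block leaves on C:
L \;=\; \begin{pmatrix} L_{FF} & L_{FC} \\ L_{CF} & L_{CC} \end{pmatrix},
\qquad
\mathrm{Sc}(L, F) \;=\; L_{CC} \;-\; L_{CF}\, L_{FF}^{-1}\, L_{FC} .
% For a graph Laplacian L, Sc(L, F) is again a graph Laplacian (on the
% vertices in C), which is what makes elimination-based preconditioning
% and walk shortcutting possible.
```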
|
Seddighin, Saeed |
STOC '18: "Fast Algorithms for Knapsack ..."
Fast Algorithms for Knapsack via Convolution and Prediction
MohammadHossein Bateni, MohammadTaghi Hajiaghayi, Saeed Seddighin, and Cliff Stein (Google, USA; University of Maryland, USA; Columbia University, USA) The knapsack problem is a fundamental problem in combinatorial optimization. It has been studied extensively from theoretical as well as practical perspectives as it is one of the most well-known NP-hard problems. The goal is to pack a knapsack of size t with the maximum value from a collection of n items with given sizes and values. Recent evidence suggests that a classic O(nt) dynamic-programming solution for the knapsack problem might be the fastest in the worst case. In fact, solving the knapsack problem was shown to be computationally equivalent to the (min, +) convolution problem, which is thought to be facing a quadratic-time barrier. This hardness is in contrast to the more famous (+, ·) convolution (generally known as polynomial multiplication), which has an O(n log n)-time solution via Fast Fourier Transform. Our main results are algorithms with near-linear running times (in terms of the size of the knapsack and the number of items) for the knapsack problem, if either the values or sizes of items are small integers. More specifically, if item sizes are integers bounded by s_max, the running time of our algorithm is Õ((n+t) s_max). If the item values are integers bounded by v_max, our algorithm runs in time Õ(n + t·v_max). Best previously known running times were O(nt), O(n^2), and O(n·s_max·v_max) (Pisinger, J. of Alg., 1999). At the core of our algorithms lies the prediction technique: roughly speaking, this new technique enables us to compute the convolution of two vectors in time Õ(n·e_max) when an approximation of the solution within an additive error of e_max is available. Our results also improve the best known strongly polynomial time solutions for knapsack. In the limited size setting, when the items have multiplicities, the fastest strongly polynomial time algorithms for knapsack run in time O(n^2 s_max^2) and O(n^3 s_max^2) for the cases of infinite and given multiplicities, respectively. Our results improve both running times by a factor of Õ(n·max{1, n/s_max}). @InProceedings{STOC18p1269, author = {MohammadHossein Bateni and MohammadTaghi Hajiaghayi and Saeed Seddighin and Cliff Stein}, title = {Fast Algorithms for Knapsack via Convolution and Prediction}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1269--1282}, doi = {}, year = {2018}, } |
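The classic O(nt) dynamic program referenced above is the baseline the paper accelerates, and is worth having in front of you. A standard sketch for 0/1 items (names are ours):

```python
def knapsack(items, t):
    """Classic O(n*t) dynamic program for 0/1 knapsack: best[c] is the
    maximum value packable within capacity c after processing a prefix
    of the items. `items` is a list of (size, value) pairs."""
    best = [0] * (t + 1)
    for size, value in items:
        for c in range(t, size - 1, -1):   # descending so each item is used once
            best[c] = max(best[c], best[c - size] + value)
    return best[t]

print(knapsack([(3, 4), (2, 3), (4, 5)], 5))  # -> 7 (take the size-3 and size-2 items)
```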
|
Segal, Gilad |
STOC '18: "Towards Tight Approximation ..."
Towards Tight Approximation Bounds for Graph Diameter and Eccentricities
Arturs Backurs, Liam Roditty, Gilad Segal, Virginia Vassilevska Williams, and Nicole Wein (Massachusetts Institute of Technology, USA; Bar-Ilan University, Israel) Among the most important graph parameters is the Diameter, the largest distance between any two vertices. There are no known very efficient algorithms for computing the Diameter exactly. Thus, much research has been devoted to how fast this parameter can be approximated. Chechik et al. [SODA 2014] showed that the diameter can be approximated within a multiplicative factor of 3/2 in Õ(m^{3/2}) time. Furthermore, Roditty and Vassilevska W. [STOC '13] showed that unless the Strong Exponential Time Hypothesis (SETH) fails, no O(n^{2−ε}) time algorithm can achieve an approximation factor better than 3/2 in sparse graphs. Thus the above algorithm is essentially optimal for sparse graphs for approximation factors less than 3/2. It was, however, completely plausible that a 3/2-approximation is possible in linear time. In this work we conditionally rule out such a possibility by showing that unless SETH fails no O(m^{3/2−ε}) time algorithm can achieve an approximation factor better than 5/3. Another fundamental set of graph parameters are the Eccentricities. The Eccentricity of a vertex v is the distance between v and the farthest vertex from v. Chechik et al. [SODA 2014] showed that the Eccentricities of all vertices can be approximated within a factor of 5/3 in Õ(m^{3/2}) time and Abboud et al. [SODA 2016] showed that no O(n^{2−ε}) algorithm can achieve better than 5/3 approximation in sparse graphs. We show that the runtime of the 5/3 approximation algorithm is also optimal by proving that under SETH, there is no O(m^{3/2−ε}) algorithm that achieves a better than 9/5 approximation. We also show that no near-linear time algorithm can achieve a better than 2 approximation for the Eccentricities. This is the first lower bound in fine-grained complexity that addresses near-linear time computation. We show that our lower bound for near-linear time algorithms is essentially tight by giving an algorithm that approximates Eccentricities within a 2+δ factor in Õ(m/δ) time for any 0<δ<1. This beats all Eccentricity algorithms in Cairo et al. [SODA 2016] and is the first constant factor approximation for Eccentricities in directed graphs. To establish the above lower bounds we study the S-T Diameter problem: Given a graph and two subsets S and T of vertices, output the largest distance between a vertex in S and a vertex in T. We give new algorithms and show tight lower bounds that serve as a starting point for all other hardness results. Our lower bounds apply only to sparse graphs. We show that for dense graphs, there are near-linear time algorithms for S-T Diameter, Diameter and Eccentricities, with almost the same approximation guarantees as their Õ(m^{3/2}) counterparts, improving upon the best known algorithms for dense graphs. @InProceedings{STOC18p267, author = {Arturs Backurs and Liam Roditty and Gilad Segal and Virginia Vassilevska Williams and Nicole Wein}, title = {Towards Tight Approximation Bounds for Graph Diameter and Eccentricities}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {267--280}, doi = {}, year = {2018}, } |
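For orientation, the folklore baseline behind all of these bounds is a single BFS: for any vertex v, ecc(v) ≤ Diameter ≤ 2·ecc(v), so one linear-time BFS already yields a 2-approximation, and the interesting regime is approximation factors strictly below 2. A minimal sketch for unweighted connected graphs (names ours):

```python
from collections import deque

def eccentricity(adj, v):
    """BFS from v; returns the maximum distance from v to any vertex."""
    dist = {v: 0}
    queue = deque([v])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return max(dist.values())

# ecc(v) <= Diameter <= 2 * ecc(v), so one BFS gives a 2-approximation.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(eccentricity(adj, 0))  # 3; the true diameter of this path is also 3
```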
|
Servedio, Rocco A. |
STOC '18: "Distribution-Free Junta Testing ..."
Distribution-Free Junta Testing
Zhengyang Liu, Xi Chen, Rocco A. Servedio, Ying Sheng, and Jinyu Xie (Shanghai Jiao Tong University, China; Columbia University, USA) We study the problem of testing whether an unknown n-variable Boolean function is a k-junta in the distribution-free property testing model, where the distance between functions is measured with respect to an arbitrary and unknown probability distribution over {0,1}^n. Our first main result is that distribution-free k-junta testing can be performed, with one-sided error, by an adaptive algorithm that uses Õ(k^2)/ε queries (independent of n). Complementing this, our second main result is a lower bound showing that any non-adaptive distribution-free k-junta testing algorithm must make Ω(2^{k/3}) queries even to test to accuracy ε=1/3. These bounds establish that while the optimal query complexity of non-adaptive k-junta testing is 2^{Θ(k)}, for adaptive testing it is poly(k), and thus show that adaptivity provides an exponential improvement in the distribution-free query complexity of testing juntas. @InProceedings{STOC18p749, author = {Zhengyang Liu and Xi Chen and Rocco A. Servedio and Ying Sheng and Jinyu Xie}, title = {Distribution-Free Junta Testing}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {749--759}, doi = {}, year = {2018}, } |
|
Seshadhri, C. |
STOC '18: "On Approximating the Number ..."
On Approximating the Number of k-Cliques in Sublinear Time
Talya Eden, Dana Ron, and C. Seshadhri (Tel Aviv University, Israel; University of California at Santa Cruz, USA) We study the problem of approximating the number of k-cliques in a graph when given query access to the graph. We consider the standard query model for general graphs via (1) degree queries, (2) neighbor queries and (3) pair queries. Let n denote the number of vertices in the graph, m the number of edges, and C_k the number of k-cliques. We design an algorithm that outputs a (1+ε)-approximation (with high probability) for C_k, whose expected query complexity and running time are O(n/C_k^{1/k} + m^{k/2}/C_k) · poly(log n, 1/ε, k). Hence, the complexity of the algorithm is sublinear in the size of the graph for C_k = ω(m^{k/2−1}). Furthermore, we prove a lower bound showing that the query complexity of our algorithm is essentially optimal (up to the dependence on log n, 1/ε and k). The previous results in this vein are by Feige (SICOMP 06) and by Goldreich and Ron (RSA 08) for edge counting (k=2) and by Eden et al. (FOCS 2015) for triangle counting (k=3). Our result matches the complexities of these results. The previous result by Eden et al. hinges on a certain amortization technique that works only for triangle counting, and does not generalize for larger cliques. We obtain a general algorithm that works for any k≥ 3 by designing a procedure that samples each k-clique incident to a given set S of vertices with approximately equal probability. The primary difficulty is in finding cliques incident to purely high-degree vertices, since random sampling within neighbors has a low success probability. This is achieved by an algorithm that samples uniform random high degree vertices and a careful tradeoff between estimating cliques incident purely to high-degree vertices and those that include a low-degree vertex. @InProceedings{STOC18p722, author = {Talya Eden and Dana Ron and C. Seshadhri}, title = {On Approximating the Number of k-Cliques in Sublinear Time}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {722--734}, doi = {}, year = {2018}, } |
|
Shadloo, Maryam |
STOC '18: "Online Load Balancing on Related ..."
Online Load Balancing on Related Machines
Sungjin Im, Nathaniel Kell, Debmalya Panigrahi, and Maryam Shadloo (University of California at Merced, USA; Duke University, USA) In this paper, we consider the problem of assigning jobs online to machines with non-uniform speeds (also called related machines) so as to optimize a given norm of the machine loads. A long line of work, starting with the seminal work of Graham in the 1960s, has led to tight competitive ratios for all ℓ_q norms for two scenarios: the special case of identical machines (uniform machine speeds) and the more general setting of unrelated machines (jobs have arbitrary processing times on machines). For non-uniform machine speeds, however, the only known result was a constant competitive ratio for the makespan (ℓ_∞) norm, via the so-called slowest-fit algorithm (Aspnes, Azar, Fiat, Plotkin, and Waarts, JACM ’97). Our first result in this paper is to obtain the first constant-competitive algorithm for scheduling on related machines for any arbitrary ℓ_q norm. Recent literature has further expanded the scope of this problem to vector scheduling, to capture multi-dimensional resource requirements in applications such as data centers. As in the scalar case, tight bounds are known for vector scheduling on identical and unrelated machines. Our second set of results is to give tight competitive ratios for vector scheduling on related machines for the makespan and all ℓ_q norms. No previous bounds were known, even for the makespan norm, for related machines. We employ a convex relaxation of the ℓ_q-norm objective and use a continuous greedy algorithm to solve this convex program online. To round the fractional solution, we then use a novel restructuring of the instance that we call machine smoothing. This is a generic tool that reduces a problem on related machines to a set of problem instances on identical machines, and we hope it will be useful in other settings with non-uniform machine speeds as well. @InProceedings{STOC18p30, author = {Sungjin Im and Nathaniel Kell and Debmalya Panigrahi and Maryam Shadloo}, title = {Online Load Balancing on Related Machines}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {30--43}, doi = {}, year = {2018}, } |
|
Shahrasbi, Amirbehshad |
STOC '18: "Synchronization Strings: Explicit ..."
Synchronization Strings: Explicit Constructions, Local Decoding, and Applications
Bernhard Haeupler and Amirbehshad Shahrasbi (Carnegie Mellon Universi |