STOC 2018 – Author Index 
Aaronson, Scott 
STOC '18: "Shadow Tomography of Quantum ..."
Shadow Tomography of Quantum States
Scott Aaronson (University of Texas at Austin, USA) We introduce the problem of *shadow tomography*: given an unknown D-dimensional quantum mixed state ρ, as well as known two-outcome measurements E_{1},…,E_{M}, estimate the probability that E_{i} accepts ρ, to within additive error ε, for each of the M measurements. How many copies of ρ are needed to achieve this, with high probability? Surprisingly, we give a procedure that solves the problem by measuring only O(ε^{−5}·log^{4} M·log D) copies. This means, for example, that we can learn the behavior of an arbitrary n-qubit state, on *all* accepting/rejecting circuits of some fixed polynomial size, by measuring only n^{O(1)} copies of the state. This resolves an open problem of the author, which arose from his work on private-key quantum money schemes, but which also has applications to quantum copy-protected software, quantum advice, and quantum one-way communication. Recently, building on this work, Brandão et al. have given a different approach to shadow tomography using semidefinite programming, which achieves a savings in computation time. 
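As context for the abstract above, the quantity being estimated is Tr(E_i ρ) for each measurement. A minimal numpy sketch of the naive strategy, which spends fresh copies on every measurement and therefore uses on the order of M/ε² copies rather than the paper's polylogarithmic bound (the dimensions, counts, and projector construction here are illustrative assumptions, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
D, M = 8, 20  # state dimension and number of two-outcome measurements (illustrative)

# Random D-dimensional mixed state rho: PSD with unit trace.
G = rng.normal(size=(D, D)) + 1j * rng.normal(size=(D, D))
rho = G @ G.conj().T
rho /= np.trace(rho).real

def random_projector():
    """A two-outcome measurement E: projector onto a random D/2-dim subspace."""
    Q, _ = np.linalg.qr(rng.normal(size=(D, D)) + 1j * rng.normal(size=(D, D)))
    B = Q[:, : D // 2]
    return B @ B.conj().T

E = [random_projector() for _ in range(M)]

# The targets of shadow tomography: acceptance probabilities Tr(E_i rho).
p_exact = np.array([np.trace(Ei @ rho).real for Ei in E])

# Naive strategy: measure m fresh copies per E_i, i.e. M*m copies in total.
m = 5000
p_hat = np.array([rng.binomial(m, min(max(p, 0.0), 1.0)) / m for p in p_exact])
```

By a Chernoff bound, m = O(log(M/δ)/ε²) samples per measurement suffice, so the naive copy count grows linearly in M; the point of the result above is that a number of copies polylogarithmic in M and D suffices overall.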

Abboud, Amir 
STOC '18: "More Consequences of Falsifying ..."
More Consequences of Falsifying SETH and the Orthogonal Vectors Conjecture
Amir Abboud, Karl Bringmann, Holger Dell, and Jesper Nederlof (IBM Research, USA; Max Planck Institute for Informatics, Germany; Saarland University, Germany; M2CI, Germany; Eindhoven University of Technology, Netherlands) The Strong Exponential Time Hypothesis and the OV-conjecture are two popular hardness assumptions used to prove a plethora of lower bounds, especially in the realm of polynomial-time algorithms. The OV-conjecture in moderate dimension states that there is no ε>0 for which an O(N^{2−ε}) poly(D) time algorithm can decide whether there is a pair of orthogonal vectors in a given set of N binary vectors of dimension D. We strengthen the evidence for these hardness assumptions. In particular, we show that if the OV-conjecture fails, then two problems for which we are far from obtaining even tiny improvements over exhaustive search would have surprisingly fast algorithms. If the OV-conjecture is false, then there is a fixed ε>0 such that: (1) for all d and all large enough k, there is a randomized algorithm that takes O(n^{(1−ε)k}) time to solve the Zero-Weight-k-Clique and Min-Weight-k-Clique problems on d-hypergraphs with n vertices; as a consequence, the OV-conjecture is implied by the Weighted Clique conjecture. (2) For all c, the satisfiability of sparse TC^{1} circuits on n inputs (that is, circuits with cn wires, depth c log n, and negation, AND, OR, and threshold gates) can be computed in time O((2−ε)^{n}). 
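The exhaustive-search baseline that the OV-conjecture asserts is essentially optimal is easy to state in code; a brute-force O(N²·D) check (an illustrative sketch, not taken from the paper):

```python
from itertools import combinations

def has_orthogonal_pair(vectors):
    """Return True iff some pair of 0/1 vectors is orthogonal, i.e. they
    share no coordinate where both are 1. Runs in O(N^2 * D) time; the
    OV-conjecture says the exponent 2 cannot be improved to 2 - eps."""
    for u, v in combinations(vectors, 2):
        if all(a * b == 0 for a, b in zip(u, v)):
            return True
    return False
```

For example, `has_orthogonal_pair([(1, 0, 1), (0, 1, 0)])` finds the orthogonal pair, while `[(1, 0), (1, 1)]` has none because the only pair shares its first coordinate.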

Abraham, Ittai 
STOC '18: "Metric Embedding via Shortest ..."
Metric Embedding via Shortest Path Decompositions
Ittai Abraham, Arnold Filtser, Anupam Gupta, and Ofer Neiman (VMware, USA; Ben-Gurion University of the Negev, Israel; Carnegie Mellon University, USA) We study the problem of embedding weighted graphs of pathwidth k into ℓ_{p} spaces. Our main result is an O(k^{min{1/p,1/2}})-distortion embedding. For p=1, this is a super-exponential improvement over the best previous bound of Lee and Sidiropoulos. Our distortion bound is asymptotically tight for any fixed p > 1. Our result is obtained via a novel embedding technique that is based on low-depth decompositions of a graph via shortest paths. The core new idea is that given a geodesic shortest path P, we can probabilistically embed all points into 2 dimensions with respect to P. For p>2 our embedding also implies improved distortion on bounded treewidth graphs (O((k log n)^{1/p})). For asymptotically large p, our result also implies improved distortion on graphs excluding a minor. 

Abrahamsen, Mikkel 
STOC '18: "The Art Gallery Problem Is ..."
The Art Gallery Problem Is ∃ℝ-Complete
Mikkel Abrahamsen, Anna Adamaszek, and Tillmann Miltzow (University of Copenhagen, Denmark; Université Libre de Bruxelles, Belgium) We prove that the art gallery problem is equivalent under polynomial-time reductions to deciding whether a system of polynomial equations over the real numbers has a solution. The art gallery problem is a classic problem in computational geometry, introduced in 1973 by Victor Klee. Given a simple polygon P and an integer k, the goal is to decide if there exists a set G of k guards within P such that every point p ∈ P is seen by at least one guard g ∈ G. Each guard corresponds to a point in the polygon P, and we say that a guard g sees a point p if the line segment pg is contained in P. The art gallery problem has stimulated extensive research in geometry and in algorithms. However, the complexity status of the art gallery problem has not been resolved. It has long been known that the problem is NP-hard, but no one has been able to show that it lies in NP. Recently, the computational geometry community became more aware of the complexity class ∃ℝ, which has been studied earlier by other communities. The class ∃ℝ consists of problems that can be reduced in polynomial time to the problem of deciding whether a system of polynomial equations with integer coefficients and any number of real variables has a solution. It is easy to see that NP ⊆ ∃ℝ. We prove that the art gallery problem is ∃ℝ-complete, implying that (1) any system of polynomial equations over the real numbers can be encoded as an instance of the art gallery problem, and (2) the art gallery problem is not in the complexity class NP unless NP = ∃ℝ. As an illustration of our techniques, we show that for every compact semi-algebraic set S ⊂ [0,1]^{2} there exists a polygon with rational coordinates that enforces one of the guards to be at a position p ∈ S, in any optimal guarding. 
As a corollary of our construction, we prove that for any real algebraic number α there is an instance of the art gallery problem where one of the coordinates of the guards equals α in any guard set of minimum cardinality. This rules out many natural geometric approaches to the problem, as it shows that any approach based on constructing a finite set of candidate points for placing guards has to include points with coordinates that are roots of polynomials of arbitrary degree. In the ∃ℝ-hardness proof for the art gallery problem we introduce a new ∃ℝ-complete problem ETR-INV. We believe that this problem is of independent interest, as it can be used to obtain ∃ℝ-hardness proofs for other problems. In particular, ETR-INV has very recently been used to prove ∃ℝ-hardness of other geometric problems. 
STOC '18: "Fast Fencing"
Fast Fencing
Mikkel Abrahamsen, Anna Adamaszek, Karl Bringmann, Vincent Cohen-Addad, Mehran Mehr, Eva Rotenberg, Alan Roytman, and Mikkel Thorup (University of Copenhagen, Denmark; Max Planck Institute for Informatics, Germany; Sorbonne University, France; UPMC, France; CNRS, France; LIP6, France; Eindhoven University of Technology, Netherlands; DTU, Denmark) We consider very natural "fence enclosure" problems studied by Capoyleas, Rote, and Woeginger and by Arkin, Khuller, and Mitchell in the early 90s. Given a set S of n points in the plane, we aim at finding a set of closed curves such that (1) each point is enclosed by a curve and (2) the total length of the curves is minimized. We consider two main variants. In the first variant, we pay a unit cost per curve in addition to the total length of the curves. An equivalent formulation of this version is that we have to enclose n unit disks, paying only the total length of the enclosing curves. In the other variant, we are allowed to use at most k closed curves and pay no cost per curve. For the variant with at most k closed curves, we present an algorithm that is polynomial in both n and k. 
For the variant with unit cost per curve, or unit disks, we present a near-linear time algorithm. Capoyleas, Rote, and Woeginger solved the problem with at most k curves in n^{O(k)} time. Arkin, Khuller, and Mitchell used this to solve the unit-cost-per-curve version in exponential time. At the time, they conjectured that the problem with k curves is NP-hard for general k. Our polynomial-time algorithm refutes this conjecture unless P = NP. 
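The visibility predicate from the art gallery abstract above (a guard g sees a point p iff the segment pg is contained in P) can be sketched with standard orientation tests. A rough Python version, under the simplifying assumptions that both points are strictly inside a simple polygon and that no degenerate grazing contacts occur:

```python
def ccw(a, b, c):
    """Signed orientation test: >0 left turn, <0 right turn, 0 collinear."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def properly_cross(p, q, a, b):
    """True iff the open segments pq and ab cross at an interior point."""
    d1, d2 = ccw(p, q, a), ccw(p, q, b)
    d3, d4 = ccw(a, b, p), ccw(a, b, q)
    return (d1 > 0) != (d2 > 0) and (d3 > 0) != (d4 > 0) and 0 not in (d1, d2, d3, d4)

def sees(polygon, g, p):
    """Sketch of the visibility test: for strictly interior g and p, the
    segment gp leaves a simple polygon iff it properly crosses some edge."""
    n = len(polygon)
    return not any(properly_cross(g, p, polygon[i], polygon[(i + 1) % n])
                   for i in range(n))

# L-shaped gallery: the reflex corner at (1, 1) blocks some sight lines.
L_shape = [(0, 0), (2, 0), (2, 1), (1, 1), (1, 2), (0, 2)]
```

On the L-shape, a guard near the bottom-left corner sees along the bottom arm, while the reflex corner at (1, 1) blocks the diagonal between the tips of the two arms.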

Adamaszek, Anna 
STOC '18: "The Art Gallery Problem Is ..."
The Art Gallery Problem Is ∃ℝ-Complete
Mikkel Abrahamsen, Anna Adamaszek, and Tillmann Miltzow (University of Copenhagen, Denmark; Université Libre de Bruxelles, Belgium) We prove that the art gallery problem is equivalent under polynomial-time reductions to deciding whether a system of polynomial equations over the real numbers has a solution. The art gallery problem is a classic problem in computational geometry, introduced in 1973 by Victor Klee. Given a simple polygon P and an integer k, the goal is to decide if there exists a set G of k guards within P such that every point p ∈ P is seen by at least one guard g ∈ G. Each guard corresponds to a point in the polygon P, and we say that a guard g sees a point p if the line segment pg is contained in P. The art gallery problem has stimulated extensive research in geometry and in algorithms. However, the complexity status of the art gallery problem has not been resolved. It has long been known that the problem is NP-hard, but no one has been able to show that it lies in NP. Recently, the computational geometry community became more aware of the complexity class ∃ℝ, which has been studied earlier by other communities. The class ∃ℝ consists of problems that can be reduced in polynomial time to the problem of deciding whether a system of polynomial equations with integer coefficients and any number of real variables has a solution. It is easy to see that NP ⊆ ∃ℝ. We prove that the art gallery problem is ∃ℝ-complete, implying that (1) any system of polynomial equations over the real numbers can be encoded as an instance of the art gallery problem, and (2) the art gallery problem is not in the complexity class NP unless NP = ∃ℝ. As an illustration of our techniques, we show that for every compact semi-algebraic set S ⊂ [0,1]^{2} there exists a polygon with rational coordinates that enforces one of the guards to be at a position p ∈ S, in any optimal guarding. 
As a corollary of our construction, we prove that for any real algebraic number α there is an instance of the art gallery problem where one of the coordinates of the guards equals α in any guard set of minimum cardinality. This rules out many natural geometric approaches to the problem, as it shows that any approach based on constructing a finite set of candidate points for placing guards has to include points with coordinates that are roots of polynomials of arbitrary degree. In the ∃ℝ-hardness proof for the art gallery problem we introduce a new ∃ℝ-complete problem ETR-INV. We believe that this problem is of independent interest, as it can be used to obtain ∃ℝ-hardness proofs for other problems. In particular, ETR-INV has very recently been used to prove ∃ℝ-hardness of other geometric problems. 
STOC '18: "Fast Fencing"
Fast Fencing
Mikkel Abrahamsen, Anna Adamaszek, Karl Bringmann, Vincent Cohen-Addad, Mehran Mehr, Eva Rotenberg, Alan Roytman, and Mikkel Thorup (University of Copenhagen, Denmark; Max Planck Institute for Informatics, Germany; Sorbonne University, France; UPMC, France; CNRS, France; LIP6, France; Eindhoven University of Technology, Netherlands; DTU, Denmark) We consider very natural "fence enclosure" problems studied by Capoyleas, Rote, and Woeginger and by Arkin, Khuller, and Mitchell in the early 90s. Given a set S of n points in the plane, we aim at finding a set of closed curves such that (1) each point is enclosed by a curve and (2) the total length of the curves is minimized. We consider two main variants. In the first variant, we pay a unit cost per curve in addition to the total length of the curves. An equivalent formulation of this version is that we have to enclose n unit disks, paying only the total length of the enclosing curves. In the other variant, we are allowed to use at most k closed curves and pay no cost per curve. For the variant with at most k closed curves, we present an algorithm that is polynomial in both n and k. 
For the variant with unit cost per curve, or unit disks, we present a near-linear time algorithm. Capoyleas, Rote, and Woeginger solved the problem with at most k curves in n^{O(k)} time. Arkin, Khuller, and Mitchell used this to solve the unit-cost-per-curve version in exponential time. At the time, they conjectured that the problem with k curves is NP-hard for general k. Our polynomial-time algorithm refutes this conjecture unless P = NP. 

Agarwal, Udit 
STOC '18: "Fine-Grained Complexity for ..."
Fine-Grained Complexity for Sparse Graphs
Udit Agarwal and Vijaya Ramachandran (University of Texas at Austin, USA) We consider the fine-grained complexity of sparse graph problems that currently have Õ(mn)-time algorithms, where m is the number of edges and n is the number of vertices in the input graph. This class includes several important path problems on both directed and undirected graphs, including APSP, MWC (Minimum Weight Cycle), Radius, Eccentricities, and BC (Betweenness Centrality). We introduce the notion of a sparse reduction, which preserves the sparsity of graphs, and we present near linear-time sparse reductions between various pairs of graph problems in the Õ(mn) class. There are many subcubic reductions between graph problems in the Õ(mn) class, but surprisingly few of these preserve sparsity. In the directed case, our results give a partial order on a large collection of problems in the Õ(mn) class (along with some equivalences), and many of our reductions are highly nontrivial. In the undirected case we give two nontrivial sparse reductions: from MWC to APSP, and from unweighted ANSC (all nodes shortest cycles) to unweighted APSP. We develop a new 'bit-sampling' method for these sparse reductions on undirected graphs, which also gives rise to improved or simpler algorithms for cycle finding problems in undirected graphs. We formulate the notion of MWC-hardness, which is based on the assumption that a minimum weight cycle in a directed graph cannot be computed in time polynomially smaller than mn. Our sparse reductions for directed path problems in the Õ(mn) class establish that several problems in this class, including 2-SiSP (second simple shortest path), s-t Replacement Paths, Radius, Eccentricities and BC, are MWC-hard. 
Our sparse reductions give MWC-hardness a status for the Õ(mn) class similar to 3SUM-hardness for the quadratic class, since they show sub-mn hardness for a large collection of fundamental and well-studied graph problems that have maintained an Õ(mn) time bound for over half a century. We also identify Eccentricities and BC as key problems in the Õ(mn) class which are simultaneously MWC-hard, SETH-hard and k-DSH-hard, where SETH is the Strong Exponential Time Hypothesis, and k-DSH is the hypothesis that a dominating set of size k cannot be computed in time polynomially smaller than n^{k}. Our framework using sparse reductions is very relevant to real-world graphs, which tend to be sparse; for such graphs the Õ(mn)-time algorithms are the ones typically used in practice, not the Õ(n^{3})-time algorithms. 

Aggarwal, Divesh 
STOC '18: "(Gap/S)ETH Hardness of SVP ..."
(Gap/S)ETH Hardness of SVP
Divesh Aggarwal and Noah Stephens-Davidowitz (National University of Singapore, Singapore; Princeton University, USA) We prove the following quantitative hardness results for the Shortest Vector Problem in the ℓ_{p} norm (SVP_p), where n is the rank of the input lattice. For “almost all” p > p_{0} ≈ 2.1397, there is no 2^{n/C_p}-time algorithm for SVP_p for some explicit (easily computable) constant C_{p} > 0 unless the (randomized) Strong Exponential Time Hypothesis (SETH) is false. (E.g., for p ≥ 3, C_{p} < 1 + (p+3)·2^{−p} + 10p^{2}·2^{−2p}.) For any 1 ≤ p ≤ ∞, there is no 2^{o(n)}-time algorithm for SVP_p unless the non-uniform Gap-Exponential Time Hypothesis (Gap-ETH) is false. Furthermore, for each such p, there exists a constant γ_{p} > 1 such that the same result holds even for γ_{p}-approximate SVP_p. For p > 2, the above statement holds under the weaker assumption of randomized Gap-ETH. I.e., there is no 2^{o(n)}-time algorithm for γ_{p}-approximate SVP_p unless randomized Gap-ETH is false. See http://arxiv.org/abs/1712.00942 for a complete exposition. 

Agrawal, Manindra 
STOC '18: "Bootstrapping Variables in ..."
Bootstrapping Variables in Algebraic Circuits
Manindra Agrawal, Sumanta Ghosh, and Nitin Saxena (IIT Kanpur, India) We show that for the black-box polynomial identity testing (PIT) problem it suffices to study circuits that depend only on the first extremely few variables. One only needs to consider size-s degree-s circuits that depend on the first log^{∘c} s variables (where c is a constant and we are composing c logarithms). Thus, the hitting-set generator (hsg) manifests a bootstrapping behavior: a partial hsg against very few variables can be efficiently grown to a complete hsg. A boolean analog, i.e. a pseudorandom generator with this property, is unheard of. Our idea is to use the partial hsg and its annihilator polynomial to efficiently bootstrap the hsg exponentially with respect to the number of variables. This is repeated c times in an efficient way. Pushing the envelope further we show that: (1) a quadratic-time black-box PIT for 6913-variate degree-s size-s polynomials will lead to a “near”-complete derandomization of PIT, and (2) a black-box PIT for n-variate degree-s size-s circuits in s^{n^{δ}} time, for δ < 1/2, will lead to a “near”-complete derandomization of PIT (in contrast, s^{n} time is trivial). Our second idea is to study depth-4 circuits that depend on constantly many variables. We show that a polynomial-time computable, O(s^{1.49})-degree hsg for trivariate depth-4 circuits bootstraps to a quasi-polynomial time hsg for general poly-degree circuits, and implies a lower bound that is a bit stronger than that of Kabanets and Impagliazzo (STOC 2003). 
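For background, in the black-box setting a PIT algorithm may only evaluate the circuit. The classical randomized solution that hitting-set generators aim to derandomize is Schwartz-Zippel testing; a textbook sketch (not the paper's hsg construction):

```python
import random

def is_identically_zero(blackbox, n_vars, degree, trials=20, seed=1):
    """Schwartz-Zippel black-box PIT: evaluate at random points from a set S
    with |S| > 2*degree. A nonzero polynomial of total degree d vanishes at
    a random point with probability <= d/|S| <= 1/2 per trial."""
    rng = random.Random(seed)
    S = range(2 * degree + 1)
    for _ in range(trials):
        point = [rng.choice(S) for _ in range(n_vars)]
        if blackbox(*point) != 0:
            return False  # a nonzero evaluation certifies nonzeroness
    return True  # zero, with error probability <= 2**-trials
```

For example, `(x+y)^2 - x^2 - 2xy - y^2` is reported as identically zero, while `x*y - y*x + x` (which simplifies to x) is not.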

Allen-Zhu, Zeyuan 
STOC '18: "Operator Scaling via Geodesically ..."
Operator Scaling via Geodesically Convex Optimization, Invariant Theory and Polynomial Identity Testing
Zeyuan Allen-Zhu, Ankit Garg, Yuanzhi Li, Rafael Oliveira, and Avi Wigderson (Microsoft Research, USA; Princeton University, USA; University of Toronto, Canada; Institute for Advanced Study at Princeton, USA)
We propose a new secondorder method for geodesically convex optimization on the natural hyperbolic metric over positive definite matrices. We apply it to solve the operator scaling problem in time polynomial in the input size and logarithmic in the error. This is an exponential improvement over previous algorithms which were analyzed in the usual Euclidean, "commutative" metric (for which the above problem is not convex). Our method is general and applicable to other settings.
As a consequence, we solve the equivalence problem for the left-right group action underlying the operator scaling problem. This yields a deterministic polynomial-time algorithm for a new class of Polynomial Identity Testing (PIT) problems, which was the original motivation for studying operator scaling.
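The earlier baseline that this result exponentially improves on is the alternating (Sinkhorn-style) scaling analyzed in the Euclidean metric, due to Gurvits. A minimal numpy sketch of that first-order baseline, not of the paper's second-order geodesic method (the random instance is illustrative):

```python
import numpy as np

def inv_sqrt(M):
    """Inverse principal square root of a positive definite matrix."""
    w, V = np.linalg.eigh(M)
    return (V * (1.0 / np.sqrt(w))) @ V.conj().T

def operator_scale(A, iters=300):
    """Alternately normalize Kraus operators A_i so that sum A_i A_i^dag = I
    and sum A_i^dag A_i = I, i.e. the map becomes doubly stochastic."""
    A = [a.astype(complex) for a in A]
    for _ in range(iters):
        R = inv_sqrt(sum(a.conj().T @ a for a in A))  # right normalization
        A = [a @ R for a in A]
        L = inv_sqrt(sum(a @ a.conj().T for a in A))  # left normalization
        A = [L @ a for a in A]
    return A

rng = np.random.default_rng(0)
B = operator_scale([rng.normal(size=(3, 3)) for _ in range(3)])
```

Each left step enforces its condition exactly and each right step disturbs it slightly; for instances of positive capacity the iteration converges, but only to accuracy 1/poly after polynomially many steps, which is why the log(1/ε) dependence above is an exponential improvement.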
@InProceedings{STOC18p172,
author = {Zeyuan Allen-Zhu and Ankit Garg and Yuanzhi Li and Rafael Oliveira and Avi Wigderson},
title = {Operator Scaling via Geodesically Convex Optimization, Invariant Theory and Polynomial Identity Testing},
booktitle = {Proc.\ STOC},
publisher = {ACM},
pages = {172--181},
doi = {10.1145/3188745.3188942},
year = {2018},
}


Alman, Josh 
STOC '18: "Cell-Probe Lower Bounds from ..."
Cell-Probe Lower Bounds from Online Communication Complexity
Josh Alman, Joshua R. Wang, and Huacheng Yu (Massachusetts Institute of Technology, USA; Stanford University, USA; Harvard University, USA) In this work, we introduce an online model for communication complexity. Analogous to how online algorithms receive their input piece by piece, our model presents one of the players, Bob, his input piece by piece, and has the players Alice and Bob cooperate to compute a result each time before the next piece is revealed to Bob. This model has a closer and more natural correspondence to dynamic data structures than classic communication models do, and hence presents a new perspective on data structures. We first present a tight lower bound for the online set intersection problem in the online communication model, demonstrating a general approach for proving online communication lower bounds. The online communication model prevents a batching trick that classic communication complexity allows, and yields a stronger lower bound. We then apply the online communication model to prove data structure lower bounds for two dynamic data structure problems: the Group Range problem and the Dynamic Connectivity problem for forests. Both of the problems admit a worst-case O(log n)-time data structure. Using online communication complexity, we prove a tight cell-probe lower bound for each: spending o(log n) (even amortized) time per operation results in at best an exp(−δ^{2} n) probability of correctly answering a (1/2+δ)-fraction of the n queries. 

Andoni, Alexandr 
STOC '18: "Data-Dependent Hashing via ..."
Data-Dependent Hashing via Nonlinear Spectral Gaps
Alexandr Andoni, Assaf Naor, Aleksandar Nikolov, Ilya Razenshteyn, and Erik Waingarten (Columbia University, USA; Princeton University, USA; University of Toronto, Canada; Microsoft Research, USA) We establish a generic reduction from _nonlinear spectral gaps_ of metric spaces to data-dependent Locality-Sensitive Hashing, yielding a new approach to the high-dimensional Approximate Near Neighbor Search problem (ANN) under various distance functions. Using this reduction, we obtain the following results: * For _general_ d-dimensional normed spaces and n-point datasets, we obtain a _cell-probe_ ANN data structure with approximation O(log d/ε^{2}), space d^{O(1)} n^{1+ε}, and d^{O(1)} n^{ε} cell probes per query, for any ε>0. No nontrivial approximation was known before in this generality other than the O(√d) bound which follows from embedding a general norm into ℓ_{2}. * For ℓ_{p} and Schatten-p norms, we improve the data structure further, to obtain approximation O(p) and sublinear query _time_. For ℓ_{p}, this improves upon the previous best approximation 2^{O(p)} (which required polynomial, as opposed to near-linear in n, space). For the Schatten-p norm, no nontrivial ANN data structure was known before this work. Previous approaches to the ANN problem either exploit the low dimensionality of a metric, requiring space exponential in the dimension, or circumvent the curse of dimensionality by embedding a metric into a “tractable” space, such as ℓ_{1}. Our new generic reduction proceeds differently from both of these approaches, using a novel partitioning method. 
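For contrast, the classical data-independent LSH scheme for angular distance (Charikar's random-hyperplane SimHash) takes only a few lines; it is shown here purely as background and is unrelated to the paper's nonlinear-spectral-gap reduction:

```python
import numpy as np

class HyperplaneLSH:
    """SimHash: the hash of x is the sign pattern of k random hyperplanes.
    Two unit vectors at angle theta collide with probability (1 - theta/pi)^k,
    so nearby points collide far more often than distant ones."""
    def __init__(self, dim, k, seed=0):
        self.H = np.random.default_rng(seed).normal(size=(k, dim))

    def hash(self, x):
        return tuple((self.H @ x > 0).astype(int))

lsh = HyperplaneLSH(dim=3, k=8)
```

The hash is invariant under positive scaling of x, and negating x flips every bit, reflecting that the scheme measures angles rather than lengths.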

Assadi, Sepehr 
STOC '18: "Fully Dynamic Maximal Independent ..."
Fully Dynamic Maximal Independent Set with Sublinear Update Time
Sepehr Assadi, Krzysztof Onak, Baruch Schieber, and Shay Solomon (University of Pennsylvania, USA; IBM Research, USA) A maximal independent set (MIS) can be maintained in an evolving m-edge graph by simply recomputing it from scratch in O(m) time after each update. But can it be maintained in time sublinear in m in fully dynamic graphs? We answer this fundamental open question in the affirmative. We present a deterministic algorithm with amortized update time O(min{Δ, m^{3/4}}), where Δ is a fixed bound on the maximum degree in the graph and m is the (dynamically changing) number of edges. We further present a distributed implementation of our algorithm with O(min{Δ, m^{3/4}}) amortized message complexity, and O(1) amortized round complexity and adjustment complexity (the number of vertices that change their output after each update). This strengthens a similar result by Censor-Hillel, Haramaty, and Karnin (PODC '16) that required the assumption of a non-adaptive oblivious adversary. 
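For intuition, the easy O(Δ)-time update rule (the trivial half of the O(min{Δ, m^{3/4}}) bound) can be sketched as follows; this is an illustrative toy, not the paper's algorithm:

```python
from collections import defaultdict

class DynamicMIS:
    """Maintain a maximal independent set under edge updates, spending O(deg)
    time per operation (toy sketch; the abstract's result beats this for
    dense graphs via the m^{3/4} bound)."""
    def __init__(self, vertices):
        self.adj = defaultdict(set)
        self.mis = set(vertices)  # with no edges, every vertex is in the MIS

    def _try_join(self, v):
        if v not in self.mis and not self.adj[v] & self.mis:
            self.mis.add(v)

    def insert_edge(self, u, v):
        self.adj[u].add(v); self.adj[v].add(u)
        if u in self.mis and v in self.mis:
            self.mis.remove(v)        # repair independence
            for w in self.adj[v]:     # repair maximality around v
                self._try_join(w)

    def delete_edge(self, u, v):
        self.adj[u].discard(v); self.adj[v].discard(u)
        self._try_join(u)             # a removed edge may free an endpoint
        self._try_join(v)
```

Both invariants (no two MIS vertices adjacent; every non-MIS vertex has an MIS neighbor) are restored after each update by inspecting only the neighborhoods of the affected endpoints.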

Atserias, Albert 
STOC '18: "Clique Is Hard on Average ..."
Clique Is Hard on Average for Regular Resolution
Albert Atserias, Ilario Bonacina, Susanna F. de Rezende, Massimo Lauria, Jakob Nordström, and Alexander Razborov (Universitat Politècnica de Catalunya, Spain; KTH, Sweden; Sapienza University of Rome, Italy; University of Chicago, USA; Russian Academy of Sciences, Russia) We prove that for k ≪ n^{1/4}, regular resolution requires length n^{Ω(k)} to establish that an Erdős-Rényi graph with appropriately chosen edge density does not contain a k-clique. This lower bound is optimal up to the multiplicative constant in the exponent, and also implies unconditional n^{Ω(k)} lower bounds on the running time of several state-of-the-art algorithms for finding maximum cliques in graphs. 
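For scale, the exhaustive search whose n^{Θ(k)} running time the lower bound matches simply checks all vertex subsets of size k; an illustrative sketch:

```python
from itertools import combinations

def has_k_clique(n, edges, k):
    """Exhaustive k-clique search over all C(n, k) vertex subsets; this is
    the n^{Theta(k)} behavior that the n^{Omega(k)} resolution lower bound
    shows cannot be beaten by the algorithms it covers."""
    E = {frozenset(e) for e in edges}
    return any(all(frozenset((u, v)) in E for u, v in combinations(S, 2))
               for S in combinations(range(n), k))
```

For example, a triangle on vertices {0, 1, 2} is a 3-clique, but removing any one of its edges destroys it.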

Backurs, Arturs 
STOC '18: "Towards Tight Approximation ..."
Towards Tight Approximation Bounds for Graph Diameter and Eccentricities
Arturs Backurs, Liam Roditty, Gilad Segal, Virginia Vassilevska Williams, and Nicole Wein (Massachusetts Institute of Technology, USA; Bar-Ilan University, Israel) Among the most important graph parameters is the Diameter, the largest distance between any two vertices. There are no known very efficient algorithms for computing the Diameter exactly. Thus, much research has been devoted to how fast this parameter can be approximated. Chechik et al. [SODA 2014] showed that the diameter can be approximated within a multiplicative factor of 3/2 in Õ(m^{3/2}) time. Furthermore, Roditty and Vassilevska W. [STOC '13] showed that unless the Strong Exponential Time Hypothesis (SETH) fails, no O(n^{2−ε})-time algorithm can achieve an approximation factor better than 3/2 in sparse graphs. Thus the above algorithm is essentially optimal for sparse graphs for approximation factors less than 3/2. It was, however, completely plausible that a 3/2-approximation is possible in linear time. In this work we conditionally rule out such a possibility by showing that unless SETH fails no O(m^{3/2−ε})-time algorithm can achieve an approximation factor better than 5/3. Another fundamental set of graph parameters are the Eccentricities. The Eccentricity of a vertex v is the distance between v and the farthest vertex from v. Chechik et al. [SODA 2014] showed that the Eccentricities of all vertices can be approximated within a factor of 5/3 in Õ(m^{3/2}) time, and Abboud et al. [SODA 2016] showed that no O(n^{2−ε}) algorithm can achieve better than a 5/3 approximation in sparse graphs. We show that the runtime of the 5/3-approximation algorithm is also optimal by proving that under SETH, there is no O(m^{3/2−ε}) algorithm that achieves a better than 9/5 approximation. We also show that no near-linear time algorithm can achieve a better than 2 approximation for the Eccentricities. This is the first lower bound in fine-grained complexity that addresses near-linear time computation. 
We show that our lower bound for near-linear time algorithms is essentially tight by giving an algorithm that approximates Eccentricities within a 2+δ factor in Õ(m/δ) time for any 0<δ<1. This beats all Eccentricity algorithms in Cairo et al. [SODA 2016] and is the first constant factor approximation for Eccentricities in directed graphs. To establish the above lower bounds we study the ST-Diameter problem: given a graph and two subsets S and T of vertices, output the largest distance between a vertex in S and a vertex in T. We give new algorithms and show tight lower bounds that serve as a starting point for all other hardness results. Our lower bounds apply only to sparse graphs. We show that for dense graphs, there are near-linear time algorithms for ST-Diameter, Diameter and Eccentricities, with almost the same approximation guarantees as their Õ(m^{3/2}) counterparts, improving upon the best known algorithms for dense graphs. 
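The folklore fact behind the factor-2 benchmark for near-linear time is that a single BFS already 2-approximates the Diameter: for any vertex s, ecc(s) ≤ D ≤ 2·ecc(s) by the triangle inequality. A sketch for unweighted, connected graphs (the example graph is illustrative):

```python
from collections import deque

def eccentricity(adj, s):
    """BFS from s; returns the max distance from s (connected graph assumed).
    For any s, ecc(s) <= Diameter <= 2*ecc(s): a linear-time 2-approximation."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return max(dist.values())

# Path on 4 vertices: the true Diameter is 3.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
```

Starting BFS from a middle vertex of the path gives ecc = 2, and indeed 2 ≤ 3 ≤ 4, matching the two-sided guarantee.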

Badrinarayanan, Saikrishna 
STOC '18: "Succinct Delegation for Low-Space ..."
Succinct Delegation for Low-Space Nondeterministic Computation
Saikrishna Badrinarayanan, Yael Tauman Kalai, Dakshita Khurana, Amit Sahai, and Daniel Wichs (University of California at Los Angeles, USA; Microsoft Research, USA; Massachusetts Institute of Technology, USA; Northeastern University, USA)
We construct a delegation scheme for verifying nondeterministic computations, with complexity proportional only to the nondeterministic space of the computation. Specifically, letting n denote the input length, we construct a delegation scheme for any language verifiable in nondeterministic time and space (T(n), S(n)) with communication complexity poly(S(n)), verifier runtime n·polylog(T(n)) + poly(S(n)), and prover runtime poly(T(n)).
Our scheme consists of only two messages and has adaptive soundness, assuming the existence of a subexponentially secure private information retrieval (PIR) scheme, which can be instantiated under standard (albeit, subexponential) cryptographic assumptions, such as the subexponential LWE assumption. Specifically, the verifier publishes a (short) public key ahead of time, and this key can be used by any prover to noninteractively prove the correctness of any adaptively chosen nondeterministic computation. Such a scheme is referred to as a noninteractive delegation scheme. Our scheme is privately verifiable, where the verifier needs the corresponding secret key in order to verify proofs.
Prior to our work, such results were known only in the Random Oracle Model, or under knowledge assumptions.
Our results yield succinct noninteractive arguments based on subexponential LWE, for many natural languages believed to be outside of P.
@InProceedings{STOC18p709,
author = {Saikrishna Badrinarayanan and Yael Tauman Kalai and Dakshita Khurana and Amit Sahai and Daniel Wichs},
title = {Succinct Delegation for LowSpace Nondeterministic Computation},
booktitle = {Proc.\ STOC},
publisher = {ACM},
pages = {709--721},
doi = {10.1145/3188745.3188924},
year = {2018},
}


Balkanski, Eric 
STOC '18: "The Adaptive Complexity of ..."
The Adaptive Complexity of Maximizing a Submodular Function
Eric Balkanski and Yaron Singer (Harvard University, USA) In this paper we study the adaptive complexity of submodular optimization. Informally, the adaptive complexity of a problem is the minimal number of sequential rounds required to achieve a constant factor approximation when polynomially many queries can be executed in parallel at each round. Adaptivity is a fundamental concept that is heavily studied in computer science, largely due to the need for parallelizing computation. Somewhat surprisingly, very little is known about adaptivity in submodular optimization. For the canonical problem of maximizing a monotone submodular function under a cardinality constraint, to the best of our knowledge, all that is known to date is that the adaptive complexity is between 1 and Ω(n). Our main result in this paper is a tight characterization showing that the adaptive complexity of maximizing a monotone submodular function under a cardinality constraint is Θ(log n): (1) we describe an algorithm which requires O(log n) sequential rounds and achieves an approximation that is arbitrarily close to 1/3; (2) we show that no algorithm can achieve an approximation better than O(1/log n) with fewer than O(log n/log log n) rounds. Thus, when allowing for parallelization, our algorithm achieves a constant factor approximation exponentially faster than any known existing algorithm for submodular maximization. Importantly, the approximation algorithm is achieved via adaptive sampling and complements a recent line of work on optimization of functions learned from data. In many cases we do not know the functions we optimize and learn them from labeled samples. Recent results show that no algorithm can obtain a constant factor approximation guarantee using polynomially many labeled samples as in the PAC and PMAC models, drawn from any distribution. 
Since learning with nonadaptive samples over any distribution results in a sharp impossibility, we consider learning with adaptive samples where the learner obtains poly(n) samples drawn from a distribution of her choice in every round. Our result implies that in the realizable case, where there is a true underlying function generating the data, Θ(log n) batches of adaptive samples are necessary and sufficient to approximately “learn to optimize” a monotone submodular function under a cardinality constraint. 
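As a toy illustration (not the paper's O(log n)-round algorithm): the classic greedy for maximizing a monotone submodular function, here set coverage, under a cardinality constraint. Greedy needs k fully sequential rounds, one per chosen element, because each round's queries depend on the previous choice; the paper's point is that O(log n) rounds of parallel queries already suffice for a constant-factor guarantee. All names below are illustrative.

```python
def coverage(sets, chosen):
    """Monotone submodular objective: number of ground elements covered."""
    covered = set()
    for i in chosen:
        covered |= sets[i]
    return len(covered)

def greedy(sets, k):
    """Classic greedy: k adaptive rounds, each depending on the last pick."""
    chosen = []
    for _ in range(k):
        best = max((i for i in range(len(sets)) if i not in chosen),
                   key=lambda i: coverage(sets, chosen + [i]))
        chosen.append(best)
    return chosen

sets = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {1, 6}]
picks = greedy(sets, 2)  # picks sets 0 and 2, covering all 6 elements
```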

Balliu, Alkida 
STOC '18: "New Classes of Distributed ..."
New Classes of Distributed Time Complexity
Alkida Balliu, Juho Hirvonen, Janne H. Korhonen, Tuomo Lempiäinen, Dennis Olivetti, and Jukka Suomela (Aalto University, Finland; University of Freiburg, Germany) A number of recent papers – e.g. Brandt et al. (STOC 2016), Chang et al. (FOCS 2016), Ghaffari & Su (SODA 2017), Brandt et al. (PODC 2017), and Chang & Pettie (FOCS 2017) – have advanced our understanding of one of the most fundamental questions in the theory of distributed computing: what are the possible time complexity classes of LCL problems in the LOCAL model? In essence, we have a graph problem Π in which a solution can be verified by checking all radius-O(1) neighbourhoods, and the question is what is the smallest T such that a solution can be computed so that each node chooses its own output based on its radius-T neighbourhood. Here T is the distributed time complexity of Π. The time complexity classes for deterministic algorithms in bounded-degree graphs that are known to exist by prior work are Θ(1), Θ(log^{*} n), Θ(log n), Θ(n^{1/k}), and Θ(n). It is also known that there are two gaps: one between ω(1) and o(log log^{*} n), and another between ω(log^{*} n) and o(log n). It has been conjectured that many more gaps exist, and that the overall time hierarchy is relatively simple – indeed, this is known to be the case in restricted graph families such as cycles and grids. We show that the picture is much more diverse than previously expected. We present a general technique for engineering LCL problems with numerous different deterministic time complexities, including Θ(log^{α} n) for any α ≥ 1, 2^{Θ(log^{α} n)} for any α ≤ 1, and Θ(n^{α}) for any α < 1/2 in the high end of the complexity spectrum, and Θ(log^{α} log^{*} n) for any α ≥ 1, 2^{Θ(log^{α} log^{*} n)} for any α ≤ 1, and Θ((log^{*} n)^{α}) for any α ≤ 1 in the low end of the complexity spectrum; here α is a positive rational number. 

Bansal, Nikhil 
STOC '18: "The Gram-Schmidt Walk: A Cure ..."
The Gram-Schmidt Walk: A Cure for the Banaszczyk Blues
Nikhil Bansal, Daniel Dadush, Shashwat Garg, and Shachar Lovett (Eindhoven University of Technology, Netherlands; CWI, Netherlands; University of California at San Diego, USA) An important result in discrepancy theory due to Banaszczyk states that for any set of n vectors in ℝ^{m} of ℓ_{2} norm at most 1 and any convex body K in ℝ^{m} of Gaussian measure at least half, there exists a ±1 combination of these vectors which lies in 5K. This result implies the best known bounds for several problems in discrepancy theory. Banaszczyk’s proof of this result is non-constructive, and it has been an open problem to give an efficient algorithm to find such a ±1 combination of the vectors. In this paper, we resolve this question and give an efficient randomized algorithm to find a ±1 combination of the vectors which lies in cK for c>0 an absolute constant. This leads to new efficient algorithms for several problems in discrepancy theory. 
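A brute-force toy (not the Gram-Schmidt walk itself): for a few short vectors of ℓ_2-norm at most 1, search all ±1 sign patterns and return the combination of smallest ℓ_∞ norm. The paper's contribution is finding a good signing in polynomial time; exhaustive search is only feasible here because the instance is tiny. All names are illustrative.

```python
from itertools import product

def best_signing(vectors):
    """Exhaustively minimize the l-infinity norm of a +-1 combination."""
    m = len(vectors[0])
    best_signs, best_norm = None, float("inf")
    for signs in product((-1, 1), repeat=len(vectors)):
        combo = [sum(s * v[j] for s, v in zip(signs, vectors))
                 for j in range(m)]
        norm = max(abs(c) for c in combo)
        if norm < best_norm:
            best_signs, best_norm = signs, norm
    return best_signs, best_norm

vecs = [(1.0, 0.0), (0.0, 1.0), (0.6, 0.8)]  # each has l2-norm <= 1
signs, disc = best_signing(vecs)
```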

Bateni, MohammadHossein 
STOC '18: "Fast Algorithms for Knapsack ..."
Fast Algorithms for Knapsack via Convolution and Prediction
MohammadHossein Bateni, MohammadTaghi Hajiaghayi, Saeed Seddighin, and Cliff Stein (Google, USA; University of Maryland, USA; Columbia University, USA) The knapsack problem is a fundamental problem in combinatorial optimization. It has been studied extensively from theoretical as well as practical perspectives as it is one of the most well-known NP-hard problems. The goal is to pack a knapsack of size t with the maximum value from a collection of n items with given sizes and values. Recent evidence suggests that a classic O(nt) dynamic-programming solution for the knapsack problem might be the fastest in the worst case. In fact, solving the knapsack problem was shown to be computationally equivalent to the (min, +) convolution problem, which is thought to be facing a quadratic-time barrier. This hardness is in contrast to the more famous (+, ·) convolution (generally known as polynomial multiplication), that has an O(n log n)-time solution via Fast Fourier Transform. Our main results are algorithms with near-linear running times (in terms of the size of the knapsack and the number of items) for the knapsack problem, if either the values or sizes of items are small integers. More specifically, if item sizes are integers bounded by , the running time of our algorithm is Õ((n+t)). If the item values are integers bounded by , our algorithm runs in time Õ(n+t). Best previously known running times were O(nt), O(n^{2}) and O(n) (Pisinger, J. of Alg., 1999). At the core of our algorithms lies the prediction technique: roughly speaking, this new technique enables us to compute the convolution of two vectors in time (n) when an approximation of the solution within an additive error of is available. Our results also improve the best known strongly polynomial time solutions for knapsack. 
In the limited size setting, when the items have multiplicities, the fastest strongly polynomial time algorithms for knapsack run in time O(n^{2} ^{2}) and O(n^{3} ^{2}) for the cases of infinite and given multiplicities, respectively. Our results improve both running times by a factor of (n max{1, n/}). 
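For reference, the two baselines the abstract mentions, sketched in toy form: the classic O(nt) dynamic program for 0/1 knapsack, and a direct quadratic (max,+) convolution, the operation knapsack is known to be equivalent to. (The paper's near-linear algorithms for small sizes or values are more involved and not reproduced here; all names below are illustrative.)

```python
def knapsack(items, t):
    """items: list of (size, value); best value with total size <= t.
    Classic O(n*t) dynamic program."""
    best = [0] * (t + 1)
    for size, value in items:
        # Reverse order so each item is used at most once.
        for cap in range(t, size - 1, -1):
            best[cap] = max(best[cap], best[cap - size] + value)
    return best[t]

def maxplus_convolution(a, b):
    """c[k] = max over i+j=k of a[i] + b[j], computed naively in O(n^2)."""
    c = [float("-inf")] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            c[i + j] = max(c[i + j], x + y)
    return c

value = knapsack([(3, 4), (2, 3), (2, 2)], 4)  # best: items of size 2+2, value 5
```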

Bezáková, Ivona 
STOC '18: "Inapproximability of the Independent ..."
Inapproximability of the Independent Set Polynomial in the Complex Plane
Ivona Bezáková, Andreas Galanis, Leslie Ann Goldberg, and Daniel Štefankovič (Rochester Institute of Technology, USA; University of Oxford, UK; University of Rochester, USA) We study the complexity of approximating the value of the independent set polynomial Z_{G}(λ) of a graph G with maximum degree Δ when the activity λ is a complex number. When λ is real, the complexity picture is well-understood, and is captured by two real-valued thresholds λ^{*} and λ_{c}, which depend on Δ and satisfy 0<λ^{*}<λ_{c}. It is known that if λ is a real number in the interval (−λ^{*},λ_{c}) then there is an FPTAS for approximating Z_{G}(λ) on graphs G with maximum degree at most Δ. On the other hand, if λ is a real number outside of the (closed) interval, then approximation is NP-hard. The key to establishing this picture was the interpretation of the thresholds λ^{*} and λ_{c} on the Δ-regular tree. The “occupation ratio” of a Δ-regular tree T is the contribution to Z_{T}(λ) from independent sets containing the root of the tree, divided by Z_{T}(λ) itself. This occupation ratio converges to a limit, as the height of the tree grows, if and only if λ∈ [−λ^{*},λ_{c}]. Unsurprisingly, the case where λ is complex is more challenging. It is known that there is an FPTAS when λ is a complex number with norm at most λ^{*} and also when λ is in a small strip surrounding the real interval [0,λ_{c}). However, neither of these results is believed to fully capture the truth about when approximation is possible. Peters and Regts identified the values of λ for which the occupation ratio of the Δ-regular tree converges. These values carve a cardioid-shaped region Λ_{Δ} in the complex plane, whose boundary includes the critical points −λ^{*} and λ_{c}. Motivated by the picture in the real case, they asked whether Λ_{Δ} marks the true approximability threshold for general complex values λ. 
Our main result shows that for every λ outside of Λ_{Δ}, the problem of approximating Z_{G}(λ) on graphs G with maximum degree at most Δ is indeed NP-hard. In fact, when λ is outside of Λ_{Δ} and is not a positive real number, we give the stronger result that approximating Z_{G}(λ) is actually #P-hard. Further, on the negative real axis, when λ<−λ^{*}, we show that it is #P-hard to even decide whether Z_{G}(λ)>0, resolving in the affirmative a conjecture of Harvey, Srivastava and Vondrak. Our proof techniques are based around tools from complex analysis — specifically the study of iterative multivariate rational maps. 
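For concreteness, the quantity being approximated: Z_G(λ) is the sum of λ^{|I|} over all independent sets I of G. The sketch below evaluates it by exhaustive enumeration on a tiny graph (the paper is about the hardness of *approximating* it; exact enumeration takes exponential time). All names are illustrative.

```python
from itertools import combinations

def independence_polynomial(n, edges, lam):
    """Z_G(lam) = sum over independent sets I of lam^{|I|}; works for
    complex lam too. Brute force: exponential in n."""
    edge_set = {frozenset(e) for e in edges}
    total = 0.0
    for k in range(n + 1):
        for subset in combinations(range(n), k):
            if all(frozenset(p) not in edge_set
                   for p in combinations(subset, 2)):
                total += lam ** k
    return total

# Path on 3 vertices: independent sets {}, {0}, {1}, {2}, {0,2},
# so Z = 1 + 3*lam + lam^2 = 11 at lam = 2.
z = independence_polynomial(3, [(0, 1), (1, 2)], 2.0)
```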

Bitansky, Nir 
STOC '18: "Multi-Collision Resistance: ..."
Multi-Collision Resistance: A Paradigm for Keyless Hash Functions
Nir Bitansky, Yael Tauman Kalai, and Omer Paneth (Tel Aviv University, Israel; Microsoft, USA; Massachusetts Institute of Technology, USA)
We introduce a new notion of multi-collision resistance for keyless hash functions. This is a natural relaxation of collision resistance where it is hard to find multiple inputs with the same hash, in the following sense: the number of colliding inputs that a polynomial-time non-uniform adversary can find is not much larger than its advice. We discuss potential candidates for this notion and study its applications.
Assuming the existence of such hash functions, we resolve the long-standing question of the round complexity of zero-knowledge protocols: we construct a 3-message zero-knowledge argument against arbitrary polynomial-size non-uniform adversaries. We also improve the round complexity in several other central applications, including a 3-message succinct argument of knowledge for NP, a 4-message zero-knowledge proof, and a 5-message public-coin zero-knowledge argument. Our techniques can also be applied in the keyed setting, where we match the round complexity of known protocols while relaxing the underlying assumption from collision resistance to keyed multi-collision resistance.
The core technical contribution behind our results is a domain extension transformation from multi-collision-resistant hash functions for a fixed input length to ones with an arbitrary input length and a local opening property. The transformation is based on a combination of classical domain extension techniques, together with new information-theoretic tools. In particular, we define and construct a new variant of list-recoverable codes, which may be of independent interest.
@InProceedings{STOC18p671,
author = {Nir Bitansky and Yael Tauman Kalai and Omer Paneth},
title = {Multi-Collision Resistance: A Paradigm for Keyless Hash Functions},
booktitle = {Proc.\ STOC},
publisher = {ACM},
pages = {671--684},
doi = {10.1145/3188745.3188870},
year = {2018},
}
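A toy Merkle-tree domain extension with local opening, the classical construction the abstract above builds on (the paper's transformation for multi-collision resistance adds further information-theoretic tools not shown here). SHA-256 stands in for the fixed-input-length hash; the sketch assumes a power-of-two number of leaves, and all names are illustrative.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def open_leaf(leaves, index):
    """Local opening: the authentication path (sibling hashes) for one leaf."""
    level, path = [h(leaf) for leaf in leaves], []
    while len(level) > 1:
        path.append(level[index ^ 1])  # sibling at this level
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return path

def verify(root, leaf, index, path):
    """Recompute the root from one leaf and its authentication path."""
    node = h(leaf)
    for sibling in path:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root

leaves = [b"a", b"b", b"c", b"d"]
root = merkle_root(leaves)
```

The local opening property is what makes the tree useful for domain extension: any single leaf can be authenticated against the short root with only log-many sibling hashes.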


Bläser, Markus 
STOC '18: "Generalized Matrix Completion ..."
Generalized Matrix Completion and Algebraic Natural Proofs
Markus Bläser, Christian Ikenmeyer, Gorav Jindal, and Vladimir Lysikov (Saarland University, Germany; Max Planck Institute for Informatics, Germany) Algebraic natural proofs were recently introduced by Forbes, Shpilka and Volk (Proc. of the 49th Annual ACM SIGACT Symposium on Theory of Computing (STOC), pages 653–664, 2017) and independently by Grochow, Kumar, Saks and Saraf (CoRR, abs/1701.01717, 2017) as an attempt to transfer Razborov and Rudich’s famous barrier result (J. Comput. Syst. Sci., 55(1): 24–35, 1997) for Boolean circuit complexity to algebraic complexity theory. Razborov and Rudich’s barrier result relies on a widely believed assumption, namely, the existence of pseudorandom generators. Unfortunately, there is no known analogous theory of pseudorandomness in the algebraic setting. Therefore, Forbes et al. use a concept called succinct hitting sets instead. This assumption is related to polynomial identity testing, but it is currently not clear how plausible this assumption is. Forbes et al. are only able to construct succinct hitting sets against rather weak models of arithmetic circuits. Generalized matrix completion is the following problem: given a matrix with affine linear forms as entries, find an assignment to the variables in the linear forms such that the rank of the resulting matrix is minimal. We call this rank the completion rank. Computing the completion rank is an NP-hard problem. As our first main result, we prove that it is also NP-hard to determine whether a given matrix can be approximated by matrices of completion rank ≤ b. The minimum quantity b for which this is possible is called border completion rank (similar to the border rank of tensors). Naturally, algebraic natural proofs can only prove lower bounds for such border complexity measures. Furthermore, these border complexity measures play an important role in the geometric complexity theory program. 
Using our hardness result above, we can prove the following barrier: we construct a small family of matrices with affine linear forms as entries and a bound b, such that at least one of these matrices does not have an algebraic natural proof of polynomial size against all matrices of border completion rank b, unless coNP ⊆ ∃ BPP. This is an algebraic barrier result that is based on a well-established and widely believed conjecture. The complexity class ∃ BPP is known to be a subset of the more well-known complexity class MA. Thus ∃ BPP can be replaced by MA in the statements of all our results. With similar techniques, we can also prove that tensor rank is hard to approximate. Furthermore, we prove a similar result for the variety of matrices with permanent zero. There are no algebraic natural proofs of polynomial size for the variety of matrices with permanent zero, unless P^{#P} ⊆ ∃ BPP. On the other hand, we are able to prove that the geometric complexity theory approach initiated by Mulmuley and Sohoni (SIAM J. Comput. 31(2): 496–526, 2001) yields proofs of polynomial size for this variety, therefore overcoming the natural proofs barrier in this case. 
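To make the completion-rank definition concrete, a brute-force sketch (not from the paper): try assignments from a small grid and keep the minimum rank found. This only yields an upper bound on the completion rank, since the true minimum may be attained off the grid, and the paper shows the exact problem, and even its border version, is NP-hard. All names are illustrative.

```python
from fractions import Fraction
from itertools import product

def rank(rows):
    """Exact matrix rank over the rationals via Gaussian elimination."""
    m = [list(map(Fraction, r)) for r in rows]
    r = 0
    for c in range(len(m[0])):
        pivot = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def completion_rank_upper_bound(entries, num_vars, grid):
    """entries[i][j]: a function mapping an assignment tuple to a number."""
    return min(rank([[e(x) for e in row] for row in entries])
               for x in product(grid, repeat=num_vars))

# M(x) = [[x, 1], [1, x]]: rank 2 generically, but rank 1 at x = +-1,
# so the completion rank is 1.
M = [[lambda x: x[0], lambda x: 1], [lambda x: 1, lambda x: x[0]]]
cr = completion_rank_upper_bound(M, 1, range(-2, 3))
```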

Błasiok, Jarosław 
STOC '18: "General Strong Polarization ..."
General Strong Polarization
Jarosław Błasiok, Venkatesan Guruswami, Preetum Nakkiran, Atri Rudra, and Madhu Sudan (Harvard University, USA; Carnegie Mellon University, USA; SUNY Buffalo, USA) Arikan’s exciting discovery of polar codes has provided an altogether new way to efficiently achieve Shannon capacity. Given a (constant-sized) invertible matrix M, a family of polar codes can be associated with this matrix, and its ability to approach capacity follows from the polarization of an associated [0,1]-bounded martingale, namely its convergence in the limit to either 0 or 1 with probability 1. Arikan showed appropriate polarization of the martingale associated with the matrix G_{2} = ((1,0),(1,1)) to get capacity-achieving codes. His analysis was later extended to all matrices M which satisfy an obvious necessary condition for polarization. While Arikan’s theorem does not guarantee that the codes achieve capacity at small blocklengths (specifically in length which is a polynomial in 1/є where є is the difference between the capacity of a channel and the rate of the code), it turns out that a “strong” analysis of the polarization of the underlying martingale would lead to such constructions. Indeed for the martingale associated with G_{2} such a strong polarization was shown in two independent works ([Guruswami and Xia, IEEE IT ’15] and [Hassani et al., IEEE IT ’14]), thereby resolving a major theoretical challenge associated with the efficient attainment of Shannon capacity. In this work we extend the result above to cover martingales associated with all matrices that satisfy the necessary condition for (weak) polarization. In addition to being vastly more general, our proofs of strong polarization are (in our view) also much simpler and modular. Key to our proof is a notion of local polarization that only depends on the evolution of the martingale in a single time step. We show that local polarization always implies strong polarization. We then apply relatively simple reasoning about conditional entropies to prove local polarization in very general settings. Specifically, our result shows strong polarization over all prime fields and leads to efficient capacity-achieving source codes for compressing arbitrary i.i.d. sources, and capacity-achieving channel codes for arbitrary symmetric memoryless channels. 
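For the binary erasure channel, the polarization martingale associated with G_2 has a well-known closed form: an erasure probability z splits into z² (the "good" branch) and 2z−z² (the "bad" branch), each with probability 1/2, so E[next] = z and the process is indeed a [0,1]-bounded martingale. The simulation below (an illustration, not the paper's analysis) shows it polarizes: after a few steps almost all sample paths sit near 0 or near 1.

```python
import random

def polarize(z, steps, rng):
    """One sample path of the BEC polarization martingale for G_2."""
    for _ in range(steps):
        z = z * z if rng.random() < 0.5 else 2 * z - z * z
    return z

rng = random.Random(0)
samples = [polarize(0.5, 20, rng) for _ in range(2000)]
polarized = sum(1 for z in samples if z < 0.01 or z > 0.99) / len(samples)
mean = sum(samples) / len(samples)  # martingale: stays near the start, 0.5
```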

Bodlaender, Hans L. 
STOC '18: "A Framework for ETH-Tight ..."
A Framework for ETH-Tight Algorithms and Lower Bounds in Geometric Intersection Graphs
Mark de Berg, Hans L. Bodlaender, Sándor Kisfaludi-Bak, Dániel Marx, and Tom C. van der Zanden (Eindhoven University of Technology, Netherlands; Utrecht University, Netherlands; Hungarian Academy of Sciences, Hungary) We give an algorithmic and lower-bound framework that facilitates the construction of subexponential algorithms and matching conditional complexity bounds. It can be applied to a wide range of geometric intersection graphs (intersections of similarly sized fat objects), yielding algorithms with running time 2^{O(n^{1−1/d})} for any fixed dimension d≥ 2 for many well-known graph problems, including Independent Set, r-Dominating Set for constant r, and Steiner Tree. For most problems, we get improved running times compared to prior work; in some cases, we give the first known subexponential algorithm in geometric intersection graphs. Additionally, most of the obtained algorithms work on the graph itself, i.e., do not require any geometric information. Our algorithmic framework is based on a weighted separator theorem and various treewidth techniques. The lower-bound framework is based on a constructive embedding of graphs into d-dimensional grids, and it allows us to derive matching 2^{Ω(n^{1−1/d})} lower bounds under the Exponential Time Hypothesis even in the much more restricted class of d-dimensional induced grid graphs. 

Bonacina, Ilario 
STOC '18: "Clique Is Hard on Average ..."
Clique Is Hard on Average for Regular Resolution
Albert Atserias, Ilario Bonacina, Susanna F. de Rezende, Massimo Lauria, Jakob Nordström, and Alexander Razborov (Universitat Politècnica de Catalunya, Spain; KTH, Sweden; Sapienza University of Rome, Italy; University of Chicago, USA; Russian Academy of Sciences, Russia) We prove that for k ≪ n^{1/4}, regular resolution requires length n^{Ω(k)} to establish that an Erdős-Rényi graph with appropriately chosen edge density does not contain a k-clique. This lower bound is optimal up to the multiplicative constant in the exponent, and also implies unconditional n^{Ω(k)} lower bounds on running time for several state-of-the-art algorithms for finding maximum cliques in graphs. 

Brand, Cornelius 
STOC '18: "Extensor-Coding ..."
Extensor-Coding
Cornelius Brand, Holger Dell, and Thore Husfeldt (Saarland University, Germany; M2CI, Germany; Lund University, Sweden; IT University of Copenhagen, Denmark) We devise an algorithm that approximately computes the number of paths of length k in a given directed graph with n vertices up to a multiplicative error of 1 ± ε. Our algorithm runs in time ε^{−2} 4^{k}(n+m) poly(k). The algorithm is based on associating with each vertex an element in the exterior (or, Grassmann) algebra, called an extensor, and then performing computations in this algebra. This connection to exterior algebra generalizes a number of previous approaches for the longest path problem and is of independent conceptual interest. Using this approach, we also obtain a deterministic 2^{k}·poly(n) time algorithm to find a k-path in a given directed graph that is promised to have few of them. Our results and techniques generalize to the subgraph isomorphism problem when the subgraphs we are looking for have bounded pathwidth. Finally, we also obtain a randomized algorithm to detect k-multilinear terms in a multivariate polynomial given as a general algebraic circuit. To the best of our knowledge, this was previously only known for algebraic circuits not involving negative constants. 
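An exhaustive baseline (exponential in general, not the paper's method) for the quantity being approximated: the number of simple paths on k vertices in a directed graph. The extensor-coding algorithm achieves a (1 ± ε)-approximation in time ε^{−2} 4^{k}(n+m) poly(k); brute force is fine only for toy inputs. All names are illustrative.

```python
def count_k_paths(n, edges, k):
    """Count simple directed paths on exactly k vertices by DFS enumeration."""
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v)

    def extend(path):
        if len(path) == k:
            return 1
        return sum(extend(path + [w]) for w in adj[path[-1]] if w not in path)

    return sum(extend([v]) for v in range(n))

# Directed 4-cycle 0->1->2->3->0: each vertex starts exactly one 3-vertex path.
paths = count_k_paths(4, [(0, 1), (1, 2), (2, 3), (3, 0)], 3)
```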

Braverman, Mark 
STOC '18: "Hitting Sets with Near-Optimal ..."
Hitting Sets with Near-Optimal Error for Read-Once Branching Programs
Mark Braverman, Gil Cohen, and Sumegha Garg (Princeton University, USA) Nisan (Combinatorica ’92) constructed a pseudorandom generator for length n, width n read-once branching programs (ROBPs) with error ε and seed length O(log^{2} n + log n · log(1/ε)). A major goal in complexity theory is to reduce the seed length, hopefully, to the optimal O(log n + log(1/ε)), or to construct improved hitting sets, as these would yield stronger derandomization of BPL and RL, respectively. In contrast to a successful line of work in restricted settings, no progress has been made for general, unrestricted, ROBPs. Indeed, Nisan’s construction is the best pseudorandom generator and, prior to this work, also the best hitting set for unrestricted ROBPs. In this work, we make the first improvement for the general case by constructing a hitting set with seed length O(log^{2} n + log(1/ε)). That is, we decouple ε and n, and obtain near-optimal dependence on the former. The regime of parameters in which our construction strictly improves upon prior works, namely, log(1/ε) ≫ log n, is well-motivated by the work of Saks and Zhou (JCSS ’99) who use pseudorandom generators with error ε = 2^{−(log n)^{2}} in their proof for BPL ⊆ L^{3/2}. We further suggest a research program towards proving that BPL ⊆ L^{4/3} in which our result achieves one step. As our main technical tool, we introduce and construct a new type of primitive we call pseudorandom pseudo-distributions. Informally, this is a generalization of pseudorandom generators in which one may assign negative and unbounded weights to paths as opposed to working with probability distributions. We show that such a primitive yields hitting sets and, for derandomization purposes, can be used to derandomize two-sided error algorithms. 

STOC '18: "Interactive Compression to External ..."
Interactive Compression to External Information
Mark Braverman and Gillat Kol (Princeton University, USA) We describe a new way of compressing two-party communication protocols to get protocols with potentially smaller communication. 
We show that every communication protocol that communicates C bits and reveals I bits of information about the participants’ private inputs to an observer that watches the communication, can be simulated by a new protocol that communicates at most poly(I) · log log(C) bits. Our result is tight up to polynomial factors, as it matches the recent work separating communication complexity from external information cost. 

Bringmann, Karl 
STOC '18: "Fast Fencing ..."
Fast Fencing
Mikkel Abrahamsen, Anna Adamaszek, Karl Bringmann, Vincent Cohen-Addad, Mehran Mehr, Eva Rotenberg, Alan Roytman, and Mikkel Thorup (University of Copenhagen, Denmark; Max Planck Institute for Informatics, Germany; Sorbonne University, France; UPMC, France; CNRS, France; LIP6, France; Eindhoven University of Technology, Netherlands; DTU, Denmark) We consider very natural “fence enclosure” problems studied by Capoyleas, Rote, and Woeginger and Arkin, Khuller, and Mitchell in the early 90s. Given a set S of n points in the plane, we aim at finding a set of closed curves such that (1) each point is enclosed by a curve and (2) the total length of the curves is minimized. We consider two main variants. In the first variant, we pay a unit cost per curve in addition to the total length of the curves. An equivalent formulation of this version is that we have to enclose n unit disks, paying only the total length of the enclosing curves. In the other variant, we are allowed to use at most k closed curves and pay no cost per curve. For the variant with at most k closed curves, we present an algorithm that is polynomial in both n and k. For the variant with unit cost per curve, or unit disks, we present a near-linear time algorithm. Capoyleas, Rote, and Woeginger solved the problem with at most k curves in n^{O(k)} time. Arkin, Khuller, and Mitchell used this to solve the unit cost per curve version in exponential time. At the time, they conjectured that the problem with k curves is NP-hard for general k. Our polynomial-time algorithm refutes this conjecture unless P equals NP. 

STOC '18: "More Consequences of Falsifying ..."
More Consequences of Falsifying SETH and the Orthogonal Vectors Conjecture
Amir Abboud, Karl Bringmann, Holger Dell, and Jesper Nederlof (IBM Research, USA; Max Planck Institute for Informatics, Germany; Saarland University, Germany; M2CI, Germany; Eindhoven University of Technology, Netherlands) The Strong Exponential Time Hypothesis and the OV-conjecture are two popular hardness assumptions used to prove a plethora of lower bounds, especially in the realm of polynomial-time algorithms. 
The OV-conjecture in moderate dimension states there is no ε>0 for which an O(N^{2−ε}) poly(D) time algorithm can decide whether there is a pair of orthogonal vectors in a given set of size N that contains D-dimensional binary vectors. We strengthen the evidence for these hardness assumptions. In particular, we show that if the OV-conjecture fails, then two problems for which we are far from obtaining even tiny improvements over exhaustive search would have surprisingly fast algorithms. If the OV-conjecture is false, then there is a fixed ε>0 such that: (1) For all d and all large enough k, there is a randomized algorithm that takes O(n^{(1−ε)k}) time to solve the Zero-Weight-k-Clique and Min-Weight-k-Clique problems on d-hypergraphs with n vertices; as a consequence, the OV-conjecture is implied by the Weighted Clique conjecture. (2) For all c, the satisfiability of sparse TC^{1} circuits on n inputs (that is, circuits with cn wires, depth c log n, and negation, AND, OR, and threshold gates) can be computed in time O((2−ε)^{n}). 
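For reference, the exhaustive-search baseline the OV-conjecture says cannot be beaten to N^{2−ε} poly(D): check every pair of vectors in O(N² · D) time. All names below are illustrative.

```python
def has_orthogonal_pair(vectors):
    """Naive O(N^2 * D) Orthogonal Vectors check over binary vectors."""
    n = len(vectors)
    for i in range(n):
        for j in range(i + 1, n):
            if all(a * b == 0 for a, b in zip(vectors[i], vectors[j])):
                return True
    return False

# The third and fourth vectors have disjoint supports, so a pair exists.
example = [(1, 1, 0), (0, 1, 1), (1, 0, 1), (0, 1, 0)]
```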

Bubeck, Sébastien 
STOC '18: "k-Server via Multiscale Entropic ..."
k-Server via Multiscale Entropic Regularization
Sébastien Bubeck, Michael B. Cohen, Yin Tat Lee, James R. Lee, and Aleksander Mądry (Microsoft Research, USA; Massachusetts Institute of Technology, USA; University of Washington, USA) We present an O((log k)^{2})-competitive randomized algorithm for the k-server problem on hierarchically separated trees (HSTs). This is the first o(k)-competitive randomized algorithm for which the competitive ratio is independent of the size of the underlying HST. Our algorithm is designed in the framework of online mirror descent where the mirror map is a multiscale entropy. When combined with Bartal’s static HST embedding reduction, this leads to an O((log k)^{2} log n)-competitive algorithm on any n-point metric space. We give a new dynamic HST embedding that yields an O((log k)^{3} log Δ)-competitive algorithm on any metric space where the ratio of the largest to smallest nonzero distance is at most Δ. 

STOC '18: "An Homotopy Method for ℓp Regression ..."
An Homotopy Method for ℓp Regression Provably Beyond Self-Concordance and in Input-Sparsity Time
Sébastien Bubeck, Michael B. Cohen, Yin Tat Lee, and Yuanzhi Li (Microsoft Research, USA; Massachusetts Institute of Technology, USA; University of Washington, USA; Princeton University, USA) We consider the problem of linear regression where the ℓ_{2}^{n} norm loss (i.e., the usual least squares loss) is replaced by the ℓ_{p}^{n} norm. We show how to solve such problems up to machine precision in Õ_{p}(n^{1/2 − 1/p}) (dense) matrix-vector products and Õ_{p}(1) matrix inversions, or alternatively in Õ_{p}(n^{1/2 − 1/p}) calls to a (sparse) linear system solver. This improves the state of the art for any p∉{1,2,+∞}. Furthermore we also propose a randomized algorithm solving such problems in input-sparsity time, i.e., Õ_{p}(N + poly(d)) where N is the size of the input and d is the number of variables. Such a result was only known for p=2. Finally we prove that these results lie outside the scope of the Nesterov-Nemirovski theory of interior point methods by showing that any symmetric self-concordant barrier on the ℓ_{p}^{n} unit ball has self-concordance parameter Ω(n). 

Bun, Mark 
STOC '18: "Composable and Versatile Privacy ..."
Composable and Versatile Privacy via Truncated CDP
Mark Bun, Cynthia Dwork, Guy N. Rothblum, and Thomas Steinke (Princeton University, USA; Harvard University, USA; Weizmann Institute of Science, Israel; IBM Research, USA)
We propose truncated concentrated differential privacy (tCDP), a refinement of differential privacy and of concentrated differential privacy. This new definition provides robust and efficient composition guarantees, supports powerful algorithmic techniques such as privacy amplification via subsampling, and enables more accurate statistical analyses. In particular, we show a central task for which the new definition enables exponential accuracy improvement.
@InProceedings{STOC18p74,
author = {Mark Bun and Cynthia Dwork and Guy N. Rothblum and Thomas Steinke},
title = {Composable and Versatile Privacy via Truncated CDP},
booktitle = {Proc.\ STOC},
publisher = {ACM},
pages = {74--86},
doi = {10.1145/3188745.3188946},
year = {2018},
}
STOC '18: "The Polynomial Method Strikes ..."
The Polynomial Method Strikes Back: Tight Quantum Query Bounds via Dual Polynomials
Mark Bun, Robin Kothari, and Justin Thaler (Princeton University, USA; Microsoft Research, USA; Georgetown University, USA) The approximate degree of a Boolean function f is the least degree of a real polynomial that approximates f pointwise to error at most 1/3. The approximate degree of f is known to be a lower bound on the quantum query complexity of f (Beals et al., FOCS 1998 and J. ACM 2001). We resolve or nearly resolve the approximate degree and quantum query complexities of several basic functions. Specifically, we show the following: *k-distinctness: For any constant k, the approximate degree and quantum query complexity of the k-distinctness function is Ω(n^{3/4−1/(2k)}). This is nearly tight for large k, as Belovs (FOCS 2012) has shown that for any constant k, the approximate degree and quantum query complexity of k-distinctness is O(n^{3/4−1/(2^{k+2}−4)}). *Image Size Testing: The approximate degree and quantum query complexity of testing the size of the image of a function [n] → [n] is Ω(n^{1/2}). This proves a conjecture of Ambainis et al. (SODA 2016), and it implies tight lower bounds on the approximate degree and quantum query complexity of the following natural problems. **k-junta testing: A tight Ω(k^{1/2}) lower bound for k-junta testing, answering the main open question of Ambainis et al. (SODA 2016). **Statistical Distance from Uniform: A tight Ω(n^{1/2}) lower bound for approximating the statistical distance from uniform of a distribution, answering the main question left open by Bravyi et al. (STACS 2010 and IEEE Trans. Inf. Theory 2011). **Shannon entropy: A tight Ω(n^{1/2}) lower bound for approximating Shannon entropy up to a certain additive constant, answering a question of Li and Wu (2017). *Surjectivity: The approximate degree of the Surjectivity function is Ω(n^{3/4}). The best prior lower bound was Ω(n^{2/3}). Our result matches an upper bound of Õ(n^{3/4}) due to Sherstov, which we reprove using different techniques. 
The quantum query complexity of this function is known to be Θ(n) (Beame and Machmouchi, Quantum Inf. Comput. 2012 and Sherstov, FOCS 2015). Our upper bound for Surjectivity introduces new techniques for approximating Boolean functions by low-degree polynomials. Our lower bounds are proved by significantly refining techniques recently introduced by Bun and Thaler (FOCS 2017). 
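For concreteness, the two functions at the center of these bounds, written as plain classical predicates on inputs from [n]^n (the paper's results concern their approximate degree and quantum query complexity, not these trivial classical evaluations). Names are illustrative.

```python
from collections import Counter

def k_distinctness(xs, k):
    """True iff some value occurs at least k times in the input."""
    return any(count >= k for count in Counter(xs).values())

def surjectivity(xs, n):
    """True iff every value in range(n) appears in the input."""
    return set(xs) >= set(range(n))
```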

Byrka, Jarosław 
STOC '18: "Constant-Factor Approximation ..."
Constant-Factor Approximation for Ordered k-Median
Jarosław Byrka, Krzysztof Sornat, and Joachim Spoerhase (University of Wrocław, Poland)
We study the Ordered k-Median problem, in which the solution is evaluated by first sorting the client connection costs and then multiplying them with a predefined non-increasing weight vector (higher connection costs are taken with larger weights). Since the 1990s, this problem has been studied extensively in the discrete optimization and operations research communities and has emerged as a framework unifying many fundamental clustering and location problems such as k-Median and k-Center. Obtaining non-trivial approximation algorithms was an open problem even for simple topologies such as trees. Recently, Aouad and Segev (2017) were able to obtain an O(log n)-approximation algorithm for Ordered k-Median using a sophisticated local-search approach. The existence of a constant-factor approximation algorithm, however, remained open even for the rectangular weight vector.
In this paper, we provide an LP-rounding constant-factor approximation algorithm for the Ordered k-Median problem.
We achieve this result by revealing an interesting connection to the classic k-Median problem. In particular, we propose a novel LP relaxation that uses the constraints of the natural LP relaxation for k-Median but minimizes over a non-metric, distorted cost vector. This cost function (approximately) emulates the weighting of distances in an optimum solution and can be guessed by means of a clever enumeration scheme of Aouad and Segev. Although the resulting LP has an unbounded integrality gap, we can show that the LP rounding process by Charikar and Li (2012) for k-Median, operating on the original, metric space, gives a constant-factor approximation when relating not only to the LP value but also to a combinatorial bound derived from the guessing phase. To analyze the rounding process under the nonlinear, ranking-based objective of Ordered k-Median, we employ several new ideas and technical ingredients that we believe could be of interest in some of the numerous other settings related to ordered, weighted cost functions.
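The ordered objective is simple to state in code. Below is a minimal Python sketch (function names are ours, purely illustrative) of the ordered cost on one-dimensional clients, paired with a brute-force solver; the paper's algorithm is an LP-rounding method, not this exhaustive search.

```python
from itertools import combinations

def ordered_kmedian_cost(points, centers, weights):
    """Ordered k-Median objective: sort client connection costs in
    non-increasing order, then take the inner product with the
    non-increasing weight vector (largest cost gets largest weight)."""
    costs = sorted((min(abs(p - c) for c in centers) for p in points),
                   reverse=True)
    return sum(w * d for w, d in zip(weights, costs))

def brute_force_ordered_kmedian(points, k, weights):
    """Exhaustive search over all k-subsets of clients as centers
    (illustrative only; exponential in k)."""
    best = min(combinations(points, k),
               key=lambda C: ordered_kmedian_cost(points, C, weights))
    return best, ordered_kmedian_cost(points, best, weights)
```

With the all-ones weight vector this specializes to k-Median; with weight vector (1, 0, …, 0) it specializes to k-Center.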
@InProceedings{STOC18p620,
author = {Jarosław Byrka and Krzysztof Sornat and Joachim Spoerhase},
title = {Constant-Factor Approximation for Ordered k-Median},
booktitle = {Proc.\ STOC},
publisher = {ACM},
pages = {620--631},
doi = {10.1145/3188745.3188930},
year = {2018},
}


Canonne, Clément L. 
STOC '18: "Testing Conditional Independence ..."
Testing Conditional Independence of Discrete Distributions
Clément L. Canonne, Ilias Diakonikolas, Daniel M. Kane, and Alistair Stewart (Stanford University, USA; University of Southern California, USA; University of California at San Diego, USA) We study the problem of testing *conditional independence* for discrete distributions. Specifically, given samples from a discrete random variable (X, Y, Z) on domain [ℓ_{1}]×[ℓ_{2}] × [n], we want to distinguish, with probability at least 2/3, between the case that X and Y are conditionally independent given Z from the case that (X, Y, Z) is є-far, in ℓ_{1}-distance, from every distribution that has this property. Conditional independence is a concept of central importance in probability and statistics with important applications in various scientific domains. As such, the statistical task of testing conditional independence has been extensively studied in various forms within the statistics and econometrics community for nearly a century. Perhaps surprisingly, this problem has not been previously considered in the framework of distribution property testing and in particular no tester with *sublinear* sample complexity is known, even for the important special case that the domains of X and Y are binary. The main algorithmic result of this work is the first conditional independence tester with sublinear sample complexity. To complement our upper bound, we prove information-theoretic lower bounds establishing that the sample complexity of our algorithm is optimal, up to constant factors, for a number of settings. Specifically, for the prototypical setting when ℓ_{1}, ℓ_{2} = O(1), we show that the sample complexity of testing conditional independence (upper bound and matching lower bound) is Θ(max(n^{1/2}/є^{2}, min(n^{7/8}/є, n^{6/7}/є^{8/7}))).
We also achieve sublinear sample complexity for the general case of arbitrary ℓ_{1}, ℓ_{2}, and n. To obtain our tester, we employ a variety of tools, including (1) a subtle weighted adaptation of the “flattening” technique, and (2) the design and analysis of an optimal (unbiased) estimator for the following statistical problem of independent interest: Given a degree-d polynomial Q∶ℝ^{n} → ℝ and sample access to a distribution p over [n], estimate Q(p_{1}, …, p_{n}) up to small additive error. Obtaining tight variance analyses for specific estimators of this form has been a major technical hurdle in distribution testing. As an important contribution of this work, we develop a general theory providing tight variance bounds for *all* such estimators. 
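A concrete member of the estimator family mentioned above is the classic collision statistic for Q(p) = ∑_{i} p_{i}^{2}; a minimal Python sketch (names ours, not the paper's weighted construction):

```python
from itertools import combinations

def collision_estimator(samples):
    """Unbiased estimator of Q(p) = sum_i p_i^2 (the collision
    probability): the fraction of unordered sample pairs that agree.
    For i.i.d. samples from p, E[estimate] = sum_i p_i^2 exactly."""
    m = len(samples)
    agree = sum(1 for a, b in combinations(samples, 2) if a == b)
    return agree / (m * (m - 1) / 2)
```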

Chakraborty, Diptarka 
STOC '18: "Tight Cell Probe Bounds for ..."
Tight Cell Probe Bounds for Succinct Boolean Matrix-Vector Multiplication
Diptarka Chakraborty, Lior Kamma, and Kasper Green Larsen (Charles University in Prague, Czechia; Aarhus University, Denmark) The conjectured hardness of Boolean matrix-vector multiplication has been used with great success to prove conditional lower bounds for numerous important data structure problems, see Henzinger et al. [STOC’15]. In recent work, Larsen and Williams [SODA’17] attacked the problem from the upper bound side and gave a surprising cell probe data structure (that is, we only charge for memory accesses, while computation is free). Their cell probe data structure answers queries in Õ(n^{7/4}) time and is succinct in the sense that it stores the input matrix in read-only memory, plus an additional Õ(n^{7/4}) bits on the side. In this paper, we essentially settle the cell probe complexity of succinct Boolean matrix-vector multiplication. We present a new cell probe data structure with query time Õ(n^{3/2}) storing just Õ(n^{3/2}) bits on the side. We then complement our data structure with a lower bound showing that any data structure storing r bits on the side, with n < r < n^{2} must have query time t satisfying t r = Ω(n^{3}). For r ≤ n, any data structure must have t = Ω(n^{2}). Since lower bounds in the cell probe model also apply to classic word-RAM data structures, the lower bounds naturally carry over. We also prove similar lower bounds for matrix-vector multiplication over F_{2}. 
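As a baseline for the bounds above, the trivial scheme stores nothing on the side and answers each query by an O(n^{2})-time scan of the matrix; a minimal Python sketch over F_{2} (illustrative only):

```python
def bool_matvec_f2(M, v):
    """Naive matrix-vector product over F_2: (Mv)_i is the XOR over j
    of M[i][j] AND v[j]. This O(n^2)-per-query scan is the baseline
    that succinct cell probe data structures try to beat."""
    return [sum(mij & vj for mij, vj in zip(row, v)) % 2 for row in M]
```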

Chang, Yi-Jun 
STOC '18: "An Optimal Distributed (Δ+1)-Coloring ..."
An Optimal Distributed (Δ+1)-Coloring Algorithm?
Yi-Jun Chang, Wenzheng Li, and Seth Pettie (University of Michigan, USA; Tsinghua University, China) Vertex coloring is one of the classic symmetry breaking problems studied in distributed computing. In this paper we present a new algorithm for (Δ+1)-list coloring in the randomized LOCAL model running in O(log^{∗}n + Det_{d}(poly logn)) time, where Det_{d}(n′) is the deterministic complexity of (deg+1)-list coloring (v’s palette has size deg(v)+1) on n′-vertex graphs. This improves upon a previous randomized algorithm of Harris, Schneider, and Su (STOC 2016), with complexity O(√logΔ + loglogn + Det_{d}(poly logn)), and (when Δ is sufficiently large) is much faster than the best known deterministic algorithm of Fraigniaud, Heinrich, and Kosowski (FOCS 2016), with complexity O(√Δlog^{2.5}Δ + log^{*} n). Our algorithm appears to be optimal. It matches the Ω(log^{∗}n) randomized lower bound, due to Naor (SIDMA 1991), and comes close to matching the Ω(Det(poly logn)) randomized lower bound due to Chang, Kopelowitz, and Pettie (FOCS 2016), where Det is the deterministic complexity of (Δ+1)-list coloring. The best known upper bounds on Det_{d}(n′) and Det(n′) are both 2^{O(√logn′)} by Panconesi and Srinivasan (Journal of Algorithms 1996), and it is quite plausible that the complexities of both problems are the same, asymptotically. 
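In the sequential (non-distributed) setting, (deg+1)-list coloring is trivial, since a vertex's palette always contains a color unused by its already-colored neighbors; the content of Det_{d} is doing this distributively. A minimal sequential sketch in Python (names ours):

```python
def greedy_list_coloring(adj, palettes):
    """Sequential (deg+1)-list coloring: vertex v has a palette of
    deg(v)+1 colors, so a color avoiding all previously colored
    neighbors always exists. adj maps vertex -> neighbor list."""
    color = {}
    for v in adj:
        used = {color[u] for u in adj[v] if u in color}
        color[v] = next(c for c in palettes[v] if c not in used)
    return color
```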

Chattopadhyay, Arkadev 
STOC '18: "Simulation Beats Richness: ..."
Simulation Beats Richness: New Data-Structure Lower Bounds
Arkadev Chattopadhyay, Michal Koucký, Bruno Loff, and Sagnik Mukhopadhyay (TIFR, India; Charles University in Prague, Czechia; INESC TEC, Portugal; University of Porto, Portugal; KTH, Sweden) We develop a new technique for proving lower bounds in the setting of asymmetric communication, a model that was introduced in the famous works of Miltersen (STOC’94) and Miltersen, Nisan, Safra and Wigderson (STOC’95). At the core of our technique is the first simulation theorem in the asymmetric setting, where Alice gets a p × n matrix x over F_{2} and Bob gets a vector y ∈ F_{2}^{n}. Alice and Bob need to evaluate f(x· y) for a Boolean function f: {0,1}^{p} → {0,1}. Our simulation theorems show that a deterministic/randomized communication protocol exists for this problem, with cost C· n for Alice and C for Bob, if and only if there exists a deterministic/randomized *parity decision tree* of cost Θ(C) for evaluating f. As applications of this technique, we obtain the following results: 1. The first strong lower bounds against randomized data-structure schemes for the Vector-Matrix-Vector product problem over F_{2}. Moreover, our method yields strong lower bounds even when the data-structure scheme has tiny advantage over random guessing. 2. The first lower bounds against randomized data-structure schemes for two natural Boolean variants of Orthogonal Vector Counting. 3. We construct an asymmetric communication problem and obtain a deterministic lower bound for it which is provably better than any lower bound that may be obtained by the classical Richness Method of Miltersen et al. (STOC ’95). This seems to be the first known limitation of the Richness Method in the context of proving deterministic lower bounds. 

Chattopadhyay, Eshan 
STOC '18: "Improved Pseudorandomness ..."
Improved Pseudorandomness for Unordered Branching Programs through Local Monotonicity
Eshan Chattopadhyay, Pooya Hatami, Omer Reingold, and Avishay Tal (Cornell University, USA; Institute for Advanced Study at Princeton, USA; University of Texas at Austin, USA; Stanford University, USA) We present an explicit pseudorandom generator with seed length Õ((logn)^{w+1}) for read-once, oblivious, width w branching programs that can read their input bits in any order. This improves upon the work of Impagliazzo, Meka and Zuckerman (FOCS’12) where they required seed length n^{1/2+o(1)}. A central ingredient in our work is the following bound that we prove on the Fourier spectrum of branching programs. For any width w read-once, oblivious branching program B:{0,1}^{n}→ {0,1} and any k ∈ {1,…,n}, ∑_{S ⊆ [n]: |S| = k} |B̂(S)| ≤ (O(logn))^{wk}.
This settles a conjecture posed by Reingold, Steinke and Vadhan (RANDOM’13). Our analysis crucially uses a notion of local monotonicity on the edge labeling of the branching program. We carry critical parts of our proof under the assumption of local monotonicity and show how to deduce our results for unrestricted branching programs. 
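The bounded quantity, the level-k Fourier ℓ_{1} mass, can be computed by brute force for small functions; a minimal Python sketch (names ours):

```python
from itertools import combinations

def fourier_level_l1(f, n, k):
    """Brute-force level-k Fourier L1 mass of f: {0,1}^n -> {0,1},
    viewed as the +/-1 function (-1)^f: sum over |S| = k of
    |hat f(S)|, where hat f(S) = E_x[(-1)^{f(x) + sum_{i in S} x_i}]."""
    xs = [tuple((x >> i) & 1 for i in range(n)) for x in range(2 ** n)]
    total = 0.0
    for S in combinations(range(n), k):
        coeff = sum((-1) ** (f(x) + sum(x[i] for i in S)) for x in xs) / 2 ** n
        total += abs(coeff)
    return total
```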

Chen, Xi 
STOC '18: "Distribution-Free Junta Testing ..."
Distribution-Free Junta Testing
Zhengyang Liu, Xi Chen, Rocco A. Servedio, Ying Sheng, and Jinyu Xie (Shanghai Jiao Tong University, China; Columbia University, USA) We study the problem of testing whether an unknown n-variable Boolean function is a k-junta in the distribution-free property testing model, where the distance between functions is measured with respect to an arbitrary and unknown probability distribution over {0,1}^{n}. Our first main result is that distribution-free k-junta testing can be performed, with one-sided error, by an adaptive algorithm that uses Õ(k^{2})/є queries (independent of n). Complementing this, our second main result is a lower bound showing that any non-adaptive distribution-free k-junta testing algorithm must make Ω(2^{k/3}) queries even to test to accuracy є=1/3. These bounds establish that while the optimal query complexity of non-adaptive k-junta testing is 2^{Θ(k)}, for adaptive testing it is poly(k), and thus show that adaptivity provides an exponential improvement in the distribution-free query complexity of testing juntas. 
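For intuition, here is a brute-force check of the k-junta property itself, exponential in n; the point of the testers above is to decide this approximately with poly(k) or 2^{O(k)} queries instead. (Python sketch, names ours.)

```python
from itertools import combinations

def is_k_junta(f, n, k):
    """Check whether f: {0,1}^n -> {0,1} depends on at most k of its
    n variables, by exhaustively trying every size-k subset S and
    verifying that f is constant on each fiber of the projection to S."""
    points = [tuple((x >> i) & 1 for i in range(n)) for x in range(2 ** n)]
    for S in combinations(range(n), k):
        ok = True
        proj = {}
        for p in points:
            key = tuple(p[i] for i in S)
            if proj.setdefault(key, f(p)) != f(p):
                ok = False
                break
        if ok:
            return True
    return False
```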

Cheraghchi, Mahdi 
STOC '18: "Capacity Upper Bounds for ..."
Capacity Upper Bounds for Deletion-Type Channels
Mahdi Cheraghchi (Imperial College London, UK) We develop a systematic approach, based on convex programming and real analysis, for obtaining upper bounds on the capacity of the binary deletion channel and, more generally, channels with i.i.d. insertions and deletions. Other than the classical deletion channel, we pay special attention to the Poisson-repeat channel introduced by Mitzenmacher and Drinea (IEEE Transactions on Information Theory, 2006). Our framework can be applied to obtain capacity upper bounds for any repetition distribution (the deletion and Poisson-repeat channels corresponding to the special cases of Bernoulli and Poisson distributions). Our techniques essentially reduce the task of proving capacity upper bounds to maximizing a univariate, real-valued, and often concave function over a bounded interval. The corresponding univariate function is carefully designed according to the underlying distribution of repetitions and the choices vary depending on the desired strength of the upper bounds as well as the desired simplicity of the function (e.g., being only efficiently computable versus having an explicit closed-form expression in terms of elementary, or common special, functions). Among our results, we show that the capacity of the binary deletion channel with deletion probability d is at most (1−d) logϕ for d ≥ 1/2, and, assuming the capacity function is convex, is at most 1−d log(4/ϕ) for d<1/2, where ϕ=(1+√5)/2 is the golden ratio. This is the first nontrivial capacity upper bound for any value of d outside the limiting case d → 0 that is fully explicit and proved without computer assistance. Furthermore, we derive the first set of capacity upper bounds for the Poisson-repeat channel. 
Our results uncover further striking connections between this channel and the deletion channel, and suggest, somewhat counter-intuitively, that the Poisson-repeat channel is actually analytically simpler than the deletion channel and may be of key importance to a complete understanding of the deletion channel. Finally, we derive several novel upper bounds on the capacity of the deletion channel. All upper bounds are maximums of efficiently computable, and concave, univariate real functions over a bounded domain. In turn, we upper bound these functions in terms of explicit elementary and standard special functions, whose maximums can be found even more efficiently (and sometimes analytically, for example for d=1/2). Along the way, we develop several new techniques of potentially independent interest. For example, we develop systematic techniques to study channels with mean constraints over the reals. Furthermore, we motivate the study of novel probability distributions over nonnegative integers, as well as novel special functions which could be of interest to mathematical analysis. 
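The channels themselves are easy to simulate. A minimal Python sketch (names ours) of the general i.i.d. repetition channel, with the deletion channel as the Bernoulli special case:

```python
import random

def repeat_channel(bits, repeat_dist, rng):
    """Generic i.i.d. repetition channel: each input bit is replicated
    r times, with r drawn independently from repeat_dist. A Bernoulli
    repeat count gives the deletion channel; a Poisson count gives the
    Poisson-repeat channel of Mitzenmacher and Drinea."""
    return [b for b in bits for _ in range(repeat_dist(rng))]

def deletion_channel(bits, d, rng):
    # Each bit is kept (repeated once) with prob 1-d, deleted otherwise.
    return repeat_channel(bits, lambda r: 0 if r.random() < d else 1, rng)
```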

Christandl, Matthias 
STOC '18: "Universal Points in the Asymptotic ..."
Universal Points in the Asymptotic Spectrum of Tensors
Matthias Christandl, Péter Vrana, and Jeroen Zuiddam (University of Copenhagen, Denmark; Budapest University of Technology and Economics, Hungary; CWI, Netherlands) The asymptotic restriction problem for tensors s and t is to find the smallest β ≥ 0 such that the nth tensor power of t can be obtained from the (β n+o(n))th tensor power of s by applying linear maps to the tensor legs (this is called restriction) when n goes to infinity. Applications include computing the arithmetic complexity of matrix multiplication in algebraic complexity theory, deciding the feasibility of an asymptotic transformation between pure quantum states via stochastic local operations and classical communication in quantum information theory, bounding the query complexity of certain properties in algebraic property testing, and bounding the size of combinatorial structures like tricolored sum-free sets in additive combinatorics. Naturally, the asymptotic restriction problem asks for obstructions (think of lower bounds in computational complexity) and constructions (think of fast matrix multiplication algorithms). Strassen showed that for obstructions it is sufficient to consider maps from k-tensors to nonnegative reals, that are monotone under restriction, normalised on diagonal tensors, additive under direct sum and multiplicative under tensor product, named spectral points (SFCS 1986 and J. Reine Angew. Math. 1988). Strassen introduced the support functionals, which are spectral points for oblique tensors, a strict subfamily of all tensors (J. Reine Angew. Math. 1991). On the construction side, an important work is the Coppersmith–Winograd method for tight tensors and tight sets. We present the first non-trivial spectral points for the family of all complex tensors, named quantum functionals. Finding such universal spectral points has been an open problem for thirty years. We use techniques from quantum information theory, invariant theory and moment polytopes. 
We present comparisons among the support functionals and our quantum functionals, and compute generic values. We relate the functionals to instability from geometric invariant theory, in the spirit of Blasiak et al. (Discrete Anal. 2017). We prove that the quantum functionals are asymptotic upper bounds on slice rank and multi-slice rank, extending a result of Tao and Sawin. Furthermore, we make progress on the construction side of the combinatorial version of the asymptotic restriction problem by extending the Coppersmith–Winograd method via combinatorial degeneration. The regular method constructs large free diagonals in powers of any tight set. Our extended version works for any set that has a combinatorial degeneration to a tight set. This generalizes a result of Kleinberg, Sawin and Speyer. As an application we reprove in hindsight recent results on tricolored sum-free sets by reducing this problem to a result of Strassen on reduced polynomial multiplication. Proofs are in the full version of this paper, available at https://arxiv.org/abs/1709.07851. 

Chuzhoy, Julia 
STOC '18: "Almost Polynomial Hardness ..."
Almost Polynomial Hardness of Node-Disjoint Paths in Grids
Julia Chuzhoy, David H. K. Kim, and Rachit Nimavat (Toyota Technological Institute at Chicago, USA; University of Chicago, USA) In the classical Node-Disjoint Paths (NDP) problem, we are given an n-vertex graph G=(V,E), and a collection M={(s_{1},t_{1}),…,(s_{k},t_{k})} of pairs of its vertices, called source-destination, or demand pairs. The goal is to route as many of the demand pairs as possible, where to route a pair we need to select a path connecting it, so that all selected paths are disjoint in their vertices. The best current algorithm for NDP achieves an O(√n)-approximation, while, until recently, the best negative result was a factor Ω(log^{1/2−є}n)-hardness of approximation, for any constant є, unless NP ⊆ ZPTIME(n^{poly logn}). In a recent work, the authors have shown an improved 2^{Ω(√logn)}-hardness of approximation for NDP, unless NP⊆ DTIME(n^{O(logn)}), even if the underlying graph is a subgraph of a grid graph, and all source vertices lie on the boundary of the grid. Unfortunately, this result does not extend to grid graphs. The approximability of the NDP problem on grid graphs has remained a tantalizing open question, with the best current upper bound of Õ(n^{1/4}), and the best current lower bound of APX-hardness. In a recent work, the authors showed a 2^{Õ(√logn)}-approximation algorithm for NDP in grid graphs, if all source vertices lie on the boundary of the grid, a result that can be seen as suggesting that a subpolynomial approximation may be achievable for NDP in grids. In this paper we show that this is unlikely to be the case, and come close to resolving the approximability of NDP in general, and of NDP in grids in particular. Our main result is that NDP is 2^{Ω(log^{1−є} n)}-hard to approximate for any constant є, assuming that NP⊈RTIME(n^{poly logn}), and that it is n^{Ω(1/(loglogn)^{2})}-hard to approximate, assuming that for some constant δ>0, NP ⊈RTIME(2^{n^{δ}}). 
These results hold even for grid graphs and wall graphs, and extend to the closely related Edge-Disjoint Paths problem, even in wall graphs. Our hardness proof performs a reduction from the 3COL(5) problem to NDP, using a new graph partitioning problem as a proxy. Unlike the more standard approach of employing Karp reductions to prove hardness of approximation, our proof is a Cook-type reduction, where, given an input instance of 3COL(5), we produce a large number of instances of NDP, and apply an approximation algorithm for NDP to each of them. The construction of each new instance of NDP crucially depends on the solutions to the previous instances that were found by the approximation algorithm. 

Cohen, Gil 
STOC '18: "Explicit Binary Tree Codes ..."
Explicit Binary Tree Codes with Polylogarithmic Size Alphabet
Gil Cohen, Bernhard Haeupler, and Leonard J. Schulman (Princeton University, USA; Carnegie Mellon University, USA; California Institute of Technology, USA) This paper makes progress on the problem of explicitly constructing a binary tree code with constant distance and constant alphabet size. We give an explicit binary tree code with constant distance and alphabet size poly(logn), where n is the depth of the tree. This is the first improvement over a two-decade-old construction that has an exponentially larger alphabet of size poly(n). At the core of our construction is the first explicit tree code with constant rate and constant distance, though with non-constant arity, a result of independent interest. This construction adapts the polynomial interpolation framework to the online setting. 

STOC '18: "Hitting Sets with Near-Optimal ..."
Hitting Sets with Near-Optimal Error for Read-Once Branching Programs
Mark Braverman, Gil Cohen, and Sumegha Garg (Princeton University, USA) Nisan (Combinatorica’92) constructed a pseudorandom generator for length n, width n read-once branching programs (ROBPs) with error ε and seed length O(log^{2}n + logn · log(1/ε)). A major goal in complexity theory is to reduce the seed length, hopefully, to the optimal O(logn+log(1/ε)), or to construct improved hitting sets, as these would yield stronger derandomization of BPL and RL, respectively. In contrast to a successful line of work in restricted settings, no progress has been made for general, unrestricted, ROBPs. Indeed, Nisan’s construction is the best pseudorandom generator and, prior to this work, also the best hitting set for unrestricted ROBPs. In this work, we make the first improvement for the general case by constructing a hitting set with seed length O(log^{2}n+log(1/ε)). That is, we decouple ε and n, and obtain near-optimal dependence on the former. 
The regime of parameters in which our construction strictly improves upon prior works, namely, log(1/ε) ≫ logn, is well-motivated by the work of Saks and Zhou (JCSS ’99) who use pseudorandom generators with error ε = 2^{−(logn)^{2}} in their proof for BPL ⊆ L^{3/2}. We further suggest a research program towards proving that BPL ⊆ L^{4/3} in which our result achieves one step. As our main technical tool, we introduce and construct a new type of primitive we call pseudorandom pseudo-distributions. Informally, this is a generalization of pseudorandom generators in which one may assign negative and unbounded weights to paths as opposed to working with probability distributions. We show that such a primitive yields hitting sets and, for derandomization purposes, can be used to derandomize two-sided error algorithms. 

Cohen, Michael B. 
STOC '18: "k-Server via Multiscale Entropic ..."
k-Server via Multiscale Entropic Regularization
Sébastien Bubeck, Michael B. Cohen, Yin Tat Lee, James R. Lee, and Aleksander Mądry (Microsoft Research, USA; Massachusetts Institute of Technology, USA; University of Washington, USA) We present an O((logk)^{2})-competitive randomized algorithm for the k-server problem on hierarchically separated trees (HSTs). This is the first o(k)-competitive randomized algorithm for which the competitive ratio is independent of the size of the underlying HST. Our algorithm is designed in the framework of online mirror descent where the mirror map is a multiscale entropy. When combined with Bartal’s static HST embedding reduction, this leads to an O((logk)^{2} logn)-competitive algorithm on any n-point metric space. We give a new dynamic HST embedding that yields an O((logk)^{3} logΔ)-competitive algorithm on any metric space where the ratio of the largest to smallest nonzero distance is at most Δ. 

STOC '18: "An Homotopy Method for ℓ_p Regression ..."
An Homotopy Method for ℓ_p Regression Provably Beyond Self-Concordance and in Input-Sparsity Time
Sébastien Bubeck, Michael B. Cohen, Yin Tat Lee, and Yuanzhi Li (Microsoft Research, USA; Massachusetts Institute of Technology, USA; University of Washington, USA; Princeton University, USA) We consider the problem of linear regression where the ℓ_{2}^{n} norm loss (i.e., the usual least squares loss) is replaced by the ℓ_{p}^{n} norm. We show how to solve such problems up to machine precision in Õ_{p}(n^{1/2 − 1/p}) (dense) matrix-vector products and Õ_{p}(1) matrix inversions, or alternatively in Õ_{p}(n^{1/2 − 1/p}) calls to a (sparse) linear system solver. This improves the state of the art for any p∉{1,2,+∞}. Furthermore we also propose a randomized algorithm solving such problems in input sparsity time, i.e., Õ_{p}(N + poly(d)) where N is the size of the input and d is the number of variables. Such a result was only known for p=2. Finally we prove that these results lie outside the scope of the Nesterov–Nemirovski theory of interior point methods by showing that any symmetric self-concordant barrier on the ℓ_{p}^{n} unit ball has self-concordance parameter Ω(n). 
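For a feel of the ℓ_{p} objective, here is a naive gradient-descent baseline in pure Python (names ours; this is not the paper's homotopy method and carries none of its guarantees):

```python
def lp_loss(A, b, x, p=4):
    """||Ax - b||_p^p for a dense matrix A given as a list of rows."""
    return sum(abs(sum(ai * xi for ai, xi in zip(row, x)) - bi) ** p
               for row, bi in zip(A, b))

def lp_regression_gd(A, b, p=4, steps=2000, lr=0.01):
    """Plain gradient descent on ||Ax - b||_p^p. The gradient of
    sum_i |r_i|^p is p * sign(r_i) * |r_i|^(p-1) per residual r_i."""
    n, d = len(A), len(A[0])
    x = [0.0] * d
    for _ in range(steps):
        r = [sum(A[i][j] * x[j] for j in range(d)) - b[i] for i in range(n)]
        g = [sum(p * abs(r[i]) ** (p - 1) * (1 if r[i] >= 0 else -1) * A[i][j]
                 for i in range(n)) for j in range(d)]
        x = [x[j] - lr * g[j] for j in range(d)]
    return x
```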

Cohen-Addad, Vincent 
STOC '18: "Fast Fencing ..."
Fast Fencing
Mikkel Abrahamsen, Anna Adamaszek, Karl Bringmann, Vincent Cohen-Addad, Mehran Mehr, Eva Rotenberg, Alan Roytman, and Mikkel Thorup (University of Copenhagen, Denmark; Max Planck Institute for Informatics, Germany; Sorbonne University, France; UPMC, France; CNRS, France; LIP6, France; Eindhoven University of Technology, Netherlands; DTU, Denmark) We consider very natural “fence enclosure” problems studied by Capoyleas, Rote, and Woeginger and Arkin, Khuller, and Mitchell in the early 90s. Given a set S of n points in the plane, we aim at finding a set of closed curves such that (1) each point is enclosed by a curve and (2) the total length of the curves is minimized. We consider two main variants. In the first variant, we pay a unit cost per curve in addition to the total length of the curves. An equivalent formulation of this version is that we have to enclose n unit disks, paying only the total length of the enclosing curves. In the other variant, we are allowed to use at most k closed curves and pay no cost per curve. For the variant with at most k closed curves, we present an algorithm that is polynomial in both n and k. For the variant with unit cost per curve, or unit disks, we present a near-linear time algorithm. Capoyleas, Rote, and Woeginger solved the problem with at most k curves in n^{O(k)} time. Arkin, Khuller, and Mitchell used this to solve the unit cost per curve version in exponential time. At the time, they conjectured that the problem with k curves is NP-hard for general k. Our polynomial-time algorithm refutes this conjecture, assuming P ≠ NP. 
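For the special case of a single curve (k = 1), the optimal fence is simply the convex hull of the points. A minimal Python sketch using Andrew's monotone chain (names ours):

```python
import math

def hull_perimeter(points):
    """Length of the optimal single enclosing curve: the perimeter of
    the convex hull, via Andrew's monotone chain in O(n log n)."""
    pts = sorted(set(points))
    if len(pts) <= 1:
        return 0.0
    def half(ps):
        # Build one hull chain; pop while the turn is not strictly left.
        h = []
        for p in ps:
            while len(h) >= 2 and ((h[-1][0] - h[-2][0]) * (p[1] - h[-2][1])
                                   - (h[-1][1] - h[-2][1]) * (p[0] - h[-2][0])) <= 0:
                h.pop()
            h.append(p)
        return h
    hull = half(pts)[:-1] + half(pts[::-1])[:-1]
    return sum(math.dist(hull[i], hull[(i + 1) % len(hull)])
               for i in range(len(hull)))
```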

Czumaj, Artur 
STOC '18: "Round Compression for Parallel ..."
Round Compression for Parallel Matching Algorithms
Artur Czumaj, Jakub Łącki, Aleksander Mądry, Slobodan Mitrović, Krzysztof Onak, and Piotr Sankowski (University of Warwick, UK; Google Research, USA; Massachusetts Institute of Technology, USA; EPFL, Switzerland; IBM Research, USA; University of Warsaw, Poland) For over a decade now we have been witnessing the success of massive parallel computation (MPC) frameworks, such as MapReduce, Hadoop, Dryad, or Spark. One of the reasons for their success is the fact that these frameworks are able to accurately capture the nature of large-scale computation. In particular, compared to the classic distributed algorithms or PRAM models, these frameworks allow for much more local computation. The fundamental question that arises in this context is though: can we leverage this additional power to obtain even faster parallel algorithms? A prominent example here is the maximum matching problem, one of the most classic graph problems. It is well known that in the PRAM model one can compute a 2-approximate maximum matching in O(logn) rounds. However, the exact complexity of this problem in the MPC framework is still far from understood. Lattanzi et al. (SPAA 2011) showed that if each machine has n^{1+Ω(1)} memory, this problem can also be solved 2-approximately in a constant number of rounds. These techniques, as well as the approaches developed in the follow-up work, seem though to get stuck in a fundamental way at roughly O(logn) rounds once we enter the (at most) near-linear memory regime. It is thus entirely possible that in this regime, which captures in particular the case of sparse graph computations, the best MPC round complexity matches what one can already get in the PRAM model, without the need to take advantage of the extra local computation power. In this paper, we finally refute that possibility. That is, we break the above O(logn) round complexity bound even in the case of slightly sublinear memory per machine. 
In fact, our improvement here is almost exponential: we are able to deliver a (2+є)-approximate maximum matching, for any fixed constant є>0, in O((loglogn)^{2}) rounds. To establish our result we need to deviate from the previous work in two important ways that are crucial for exploiting the power of the MPC model, as compared to the PRAM model. Firstly, we use vertex-based graph partitioning, instead of the edge-based approaches that were utilized so far. Secondly, we develop a technique of round compression. This technique enables one to take a (distributed) algorithm that computes an O(1)-approximation of maximum matching in O(logn) independent PRAM phases and implement a super-constant number of these phases in only a constant number of MPC rounds. 
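For contrast with the MPC algorithms above, the standard sequential baseline is the greedy maximal matching, which is always a 2-approximation to maximum matching. A minimal Python sketch (names ours):

```python
def greedy_maximal_matching(edges):
    """Scan the edges once and keep any edge whose endpoints are both
    still free. The result is maximal, and every maximal matching has
    size at least half the maximum matching."""
    matched, matching = set(), []
    for u, v in edges:
        if u not in matched and v not in matched:
            matched.update((u, v))
            matching.append((u, v))
    return matching
```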

Dadush, Daniel 
STOC '18: "The Gram-Schmidt Walk: A Cure ..."
The Gram-Schmidt Walk: A Cure for the Banaszczyk Blues
Nikhil Bansal, Daniel Dadush, Shashwat Garg, and Shachar Lovett (Eindhoven University of Technology, Netherlands; CWI, Netherlands; University of California at San Diego, USA) An important result in discrepancy due to Banaszczyk states that for any set of n vectors in ℝ^{m} of ℓ_{2} norm at most 1 and any convex body K in ℝ^{m} of Gaussian measure at least half, there exists a ± 1 combination of these vectors which lies in 5K. This result implies the best known bounds for several problems in discrepancy. Banaszczyk’s proof of this result is non-constructive and an open problem has been to give an efficient algorithm to find such a ± 1 combination of the vectors. In this paper, we resolve this question and give an efficient randomized algorithm to find a ± 1 combination of the vectors which lies in cK for c>0 an absolute constant. This leads to new efficient algorithms for several problems in discrepancy theory. 

STOC '18: "A Friendly Smoothed Analysis ..."
A Friendly Smoothed Analysis of the Simplex Method
Daniel Dadush and Sophie Huiberts (CWI, Netherlands) Explaining the excellent practical performance of the simplex method for linear programming has been a major topic of research for over 50 years. One of the most successful frameworks for understanding the simplex method was given by Spielman and Teng (JACM ‘04), who developed the notion of smoothed analysis. Starting from an arbitrary linear program with d variables and n constraints, Spielman and Teng analyzed the expected runtime over random perturbations of the LP (smoothed LP), where variance σ Gaussian noise is added to the LP data. In particular, they gave a two-stage shadow vertex simplex algorithm which uses an expected O(n^{86} d^{55} σ^{−30}) number of simplex pivots to solve the smoothed LP. Their analysis and runtime was substantially improved by Deshpande and Spielman (FOCS ‘05) and later Vershynin (SICOMP ‘09). 
The fastest current algorithm, due to Vershynin, solves the smoothed LP using an expected O(d^{3} log^{3} n σ^{−4} + d^{9}log^{7} n) number of pivots, improving the dependence on n from polynomial to logarithmic. While the original proof of Spielman and Teng has now been substantially simplified, the resulting analyses are still quite long and complex and the parameter dependencies far from optimal. In this work, we make substantial progress on this front, providing an improved and simpler analysis of shadow simplex methods, where our main algorithm requires an expected O(d^{2} √logn σ^{−2} + d^{5} log^{3/2} n) number of simplex pivots. We obtain our results via an improved shadow bound, key to earlier analyses as well, combined with algorithmic techniques of Borgwardt (ZOR ‘82) and Vershynin. As an added bonus, our analysis is completely modular, allowing us to obtain nontrivial bounds for perturbations beyond Gaussians, such as Laplace perturbations. 

Daskalakis, Constantinos 
STOC '18: "A Converse to Banach's ..."
A Converse to Banach's Fixed Point Theorem and Its CLS-Completeness
Constantinos Daskalakis, Christos Tzamos, and Manolis Zampetakis (Massachusetts Institute of Technology, USA; Microsoft Research, USA)
Banach's fixed point theorem for contraction maps has been widely used to analyze the convergence of iterative methods in non-convex problems. It is a common experience, however, that iterative maps fail to be globally contracting under the natural metric in their domain, making the applicability of Banach's theorem limited. We explore how generally we can apply Banach's fixed point theorem to establish the convergence of iterative methods when pairing it with carefully designed metrics.
Our first result is a strong converse of Banach's theorem, showing that it is a universal analysis tool for establishing global convergence of iterative methods to unique fixed points, and for bounding their convergence rate. In other words, we show that, whenever an iterative map globally converges to a unique fixed point, there exists a metric under which the iterative map is contracting and which can be used to bound the number of iterations until convergence. We illustrate our approach on the widely used power method, providing a new way of bounding its convergence rate through contraction arguments.
We next consider the computational complexity of Banach's fixed point theorem. Making the proof of our converse theorem constructive, we show that computing a fixed point whose existence is guaranteed by Banach's fixed point theorem is CLS-complete. We thus provide the first natural complete problem for the class CLS, which was defined in [DP11] to capture the complexity of problems such as P-matrix LCP, computing KKT points, and finding mixed Nash equilibria in congestion and network coordination games.
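As a minimal numerical illustration of the theorem being characterized (not the paper's converse construction): iterating a contraction converges to its unique fixed point, and the contraction constant yields an a-priori bound on the number of iterations. The choice of map (cos on [0, 1]) and all tolerances below are illustrative assumptions.

```python
import math

def banach_iterate(f, x0, tol=1e-10, max_iter=10_000):
    """Iterate a contraction map f from x0 until successive iterates
    differ by less than tol; return (fixed-point estimate, #iterations)."""
    x = x0
    for t in range(1, max_iter + 1):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next, t
        x = x_next
    raise RuntimeError("did not converge")

# cos is a contraction on [0, 1] with constant c = sin(1) < 1, so Banach's
# theorem guarantees a unique fixed point and geometric convergence.
c = math.sin(1.0)
x_star, iters = banach_iterate(math.cos, 0.5)

# A-priori bound from the contraction constant: the t-th successive
# difference is at most c**t * |x1 - x0|, so tol is reached by this t.
d0 = abs(math.cos(0.5) - 0.5)
a_priori = math.ceil(math.log(1e-10 * (1 - c) / d0) / math.log(c))
```

The observed iteration count is well below the a-priori bound, since the local contraction rate near the fixed point is better than the global constant sin(1).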
@InProceedings{STOC18p44,
author = {Constantinos Daskalakis and Christos Tzamos and Manolis Zampetakis},
title = {A Converse to Banach's Fixed Point Theorem and Its CLSCompleteness},
booktitle = {Proc.\ STOC},
publisher = {ACM},
pages = {4450},
doi = {10.1145/3188745.3188968},
year = {2018},
}


De Berg, Mark 
STOC '18: "A Framework for ETHTight ..."
A Framework for ETHTight Algorithms and Lower Bounds in Geometric Intersection Graphs
Mark de Berg, Hans L. Bodlaender, Sándor Kisfaludi-Bak, Dániel Marx, and Tom C. van der Zanden (Eindhoven University of Technology, Netherlands; Utrecht University, Netherlands; Hungarian Academy of Sciences, Hungary) We give an algorithmic and lower-bound framework that facilitates the construction of subexponential algorithms and matching conditional complexity bounds. It can be applied to a wide range of geometric intersection graphs (intersections of similarly sized fat objects), yielding algorithms with running time 2^{O(n^{1−1/d})} for any fixed dimension d ≥ 2 for many well-known graph problems, including Independent Set, r-Dominating Set for constant r, and Steiner Tree. For most problems, we get improved running times compared to prior work; in some cases, we give the first known subexponential algorithm in geometric intersection graphs. Additionally, most of the obtained algorithms work on the graph itself, i.e., they do not require any geometric information. Our algorithmic framework is based on a weighted separator theorem and various treewidth techniques. The lower-bound framework is based on a constructive embedding of graphs into d-dimensional grids, and it allows us to derive matching 2^{Ω(n^{1−1/d})} lower bounds under the Exponential Time Hypothesis even in the much more restricted class of d-dimensional induced grid graphs. 
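For concreteness, the object this framework operates on can be sketched in a few lines: the intersection graph of similarly-sized fat objects, here equal-radius disks. The helper below is an illustrative construction, not part of the paper.

```python
from itertools import combinations

def disk_intersection_graph(centers, radius=1.0):
    """Adjacency sets of the intersection graph of equal-radius disks:
    two disks intersect iff their centers are within 2 * radius."""
    n = len(centers)
    adj = {i: set() for i in range(n)}
    for i, j in combinations(range(n), 2):
        (x1, y1), (x2, y2) = centers[i], centers[j]
        if (x1 - x2) ** 2 + (y1 - y2) ** 2 <= (2 * radius) ** 2:
            adj[i].add(j)
            adj[j].add(i)
    return adj

# Three unit disks on a line: only the first two intersect.
adj = disk_intersection_graph([(0, 0), (1, 0), (5, 0)])
```

Note that the framework's algorithms mostly run on `adj` alone; the geometric realization is only needed for the lower-bound embeddings.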

Dell, Holger 
STOC '18: "ExtensorCoding ..."
ExtensorCoding
Cornelius Brand, Holger Dell, and Thore Husfeldt (Saarland University, Germany; M2CI, Germany; Lund University, Sweden; IT University of Copenhagen, Denmark) We devise an algorithm that approximately computes the number of paths of length k in a given directed graph with n vertices up to a multiplicative error of 1 ± ε. Our algorithm runs in time ε^{−2} 4^{k}(n+m) poly(k). The algorithm is based on associating with each vertex an element in the exterior (or Grassmann) algebra, called an extensor, and then performing computations in this algebra. This connection to exterior algebra generalizes a number of previous approaches for the longest path problem and is of independent conceptual interest. Using this approach, we also obtain a deterministic 2^{k}·poly(n) time algorithm to find a k-path in a given directed graph that is promised to have few of them. Our results and techniques generalize to the subgraph isomorphism problem when the subgraphs we are looking for have bounded pathwidth. Finally, we also obtain a randomized algorithm to detect k-multilinear terms in a multivariate polynomial given as a general algebraic circuit. To the best of our knowledge, this was previously only known for algebraic circuits not involving negative constants.

Holger Dell and John Lapinskas (Saarland University, Germany; M2CI, Germany; University of Oxford, UK) In this paper, we introduce a general framework for fine-grained reductions of approximate counting problems to their decision versions. (Thus we use an oracle that decides whether any witness exists to multiplicatively approximate the number of witnesses with minimal overhead.) This mirrors a foundational result of Sipser (STOC 1983) and Stockmeyer (SICOMP 1985) in the polynomial-time setting, and a similar result of Müller (IWPEC 2006) in the FPT setting. 
Using our framework, we obtain such reductions for some of the most important problems in fine-grained complexity: the Orthogonal Vectors problem, 3SUM, and the Negative-Weight Triangle problem (which is closely related to All-Pairs Shortest Path). While all these problems have simple algorithms over which it is conjectured that no polynomial improvement is possible, our reductions would remain interesting even if these conjectures were proved; they have only polylogarithmic overhead, and can therefore be applied to subpolynomial improvements such as the n^{3}/exp(Θ(√log n))-time algorithm for the Negative-Weight Triangle problem due to Williams (STOC 2014). Our framework is also general enough to apply to versions of the problems for which more efficient algorithms are known. For example, the Orthogonal Vectors problem over GF(m)^{d} for constant m can be solved in time n·poly(d) by a result of Williams and Yu (SODA 2014); our result implies that we can approximately count the number of orthogonal pairs with essentially the same running time. We also provide a fine-grained reduction from approximate #SAT to SAT. Suppose the Strong Exponential Time Hypothesis (SETH) is false, so that for some 1<c<2 and all k there is an O(c^{n})-time algorithm for #k-SAT. Then we prove that for all k, there is an O((c+o(1))^{n})-time algorithm for approximate #k-SAT. In particular, our result implies that the Exponential Time Hypothesis (ETH) is equivalent to the seemingly weaker statement that there is no algorithm to approximate #3-SAT to within a factor of 1+ε in time 2^{o(n)}/ε^{2} (taking ε > 0 as part of the input). A full version of this paper containing detailed proofs is available at https://arxiv.org/abs/1707.04609. 
Amir Abboud, Karl Bringmann, Holger Dell, and Jesper Nederlof (IBM Research, USA; Max Planck Institute for Informatics, Germany; Saarland University, Germany; M2CI, Germany; Eindhoven University of Technology, Netherlands) The Strong Exponential Time Hypothesis and the OV-conjecture are two popular hardness assumptions used to prove a plethora of lower bounds, especially in the realm of polynomial-time algorithms. The OV-conjecture in moderate dimension states that there is no ε>0 for which an O(N^{2−ε}) poly(D) time algorithm can decide whether a given set of N D-dimensional binary vectors contains an orthogonal pair. We strengthen the evidence for these hardness assumptions. In particular, we show that if the OV-conjecture fails, then two problems for which we are far from obtaining even tiny improvements over exhaustive search would have surprisingly fast algorithms. If the OV-conjecture is false, then there is a fixed ε>0 such that: (1) For all d and all large enough k, there is a randomized algorithm that takes O(n^{(1−ε)k}) time to solve the Zero-Weight-k-Clique and Min-Weight-k-Clique problems on d-hypergraphs with n vertices. As a consequence, the OV-conjecture is implied by the Weighted Clique conjecture. (2) For all c, the satisfiability of sparse TC^1 circuits on n inputs (that is, circuits with cn wires, depth c log n, and negation, AND, OR, and threshold gates) can be computed in time O((2−ε)^{n}). 
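The exhaustive-search baseline that the OV-conjecture asserts cannot be polynomially improved is easy to state in code. The sketch below is an illustrative brute force (assuming vectors given as 0/1 tuples), not an algorithm from the paper.

```python
from itertools import combinations

def has_orthogonal_pair(vectors):
    """Brute-force O(N^2 * D) check: do two 0/1 vectors in the set
    share no coordinate that is 1 in both (i.e., are orthogonal)?"""
    return any(
        all(a * b == 0 for a, b in zip(u, v))
        for u, v in combinations(vectors, 2)
    )

# The first two vectors have disjoint supports, so a pair exists.
found = has_orthogonal_pair([(1, 0, 1), (0, 1, 0), (1, 1, 1)])
```

The conjecture says that, in moderate dimension D, no algorithm beats this quadratic-in-N behavior by a polynomial factor.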

De Loera, Jesús A. 
STOC '18: "The Minimum EuclideanNorm ..."
The Minimum EuclideanNorm Point in a Convex Polytope: Wolfe's Combinatorial Algorithm Is Exponential
Jesús A. De Loera, Jamie Haddock, and Luis Rademacher (University of California at Davis, USA)
The complexity of Philip Wolfe’s method for the minimum Euclidean-norm point problem over a convex polytope has remained unknown since he proposed the method in 1974. We present the first example on which Wolfe’s method takes exponential time. Additionally, we improve previous results to show that linear programming reduces in strongly-polynomial time to the minimum-norm point problem over a simplex.
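Wolfe's combinatorial method itself maintains "corral" vertex sets and is more involved; as a hedged sketch of the same minimum-norm-point objective, here is the simpler (and slower) Frank-Wolfe first-order iteration over a polytope given by its vertex list. All names and parameters are illustrative, not from the paper.

```python
def min_norm_point_fw(vertices, iters=2000):
    """Frank-Wolfe sketch (NOT Wolfe's combinatorial method) for
    minimizing ||x||^2 over the convex hull of the given vertices."""
    x = list(vertices[0])
    for t in range(iters):
        # Linear minimization oracle: vertex minimizing <grad, p> = 2<x, p>.
        s = min(vertices, key=lambda p: sum(xi * pi for xi, pi in zip(x, p)))
        gamma = 2.0 / (t + 2.0)  # standard diminishing step size
        x = [(1 - gamma) * xi + gamma * si for xi, si in zip(x, s)]
    return x

# Min-norm point of the segment from (1, 0) to (0, 1) is (0.5, 0.5).
x = min_norm_point_fw([(1.0, 0.0), (0.0, 1.0)])
```

Frank-Wolfe converges only at rate O(1/t) here, which is exactly the kind of slow tail behavior that motivated Wolfe's finitely-terminating combinatorial scheme.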
@InProceedings{STOC18p545,
author = {Jesús A. De Loera and Jamie Haddock and Luis Rademacher},
title = {The Minimum Euclidean-Norm Point in a Convex Polytope: Wolfe's Combinatorial Algorithm Is Exponential},
booktitle = {Proc.\ STOC},
publisher = {ACM},
pages = {545--553},
doi = {10.1145/3188745.3188820},
year = {2018},
}


De Rezende, Susanna F. 
STOC '18: "Clique Is Hard on Average ..."
Clique Is Hard on Average for Regular Resolution
Albert Atserias, Ilario Bonacina, Susanna F. de Rezende, Massimo Lauria, Jakob Nordström, and Alexander Razborov (Universitat Politècnica de Catalunya, Spain; KTH, Sweden; Sapienza University of Rome, Italy; University of Chicago, USA; Russian Academy of Sciences, Russia) We prove that for k ≪ n^{1/4}, regular resolution requires length n^{Ω(k)} to establish that an Erdős–Rényi graph with appropriately chosen edge density does not contain a k-clique. This lower bound is optimal up to the multiplicative constant in the exponent, and also implies unconditional n^{Ω(k)} lower bounds on running time for several state-of-the-art algorithms for finding maximum cliques in graphs. 

Diakonikolas, Ilias 
STOC '18: "ListDecodable Robust Mean ..."
ListDecodable Robust Mean Estimation and Learning Mixtures of Spherical Gaussians
Ilias Diakonikolas, Daniel M. Kane, and Alistair Stewart (University of Southern California, USA; University of California at San Diego, USA) We study the problem of list-decodable (robust) Gaussian mean estimation and the related problem of learning mixtures of separated spherical Gaussians. In the former problem, we are given a set T of points in ℝ^{n} with the promise that an α-fraction of the points in T, where 0< α < 1/2, are drawn from an unknown mean, identity covariance Gaussian G, and no assumptions are made about the remaining points. The goal is to output a small list of candidate vectors with the guarantee that at least one of the candidates is close to the mean of G. In the latter problem, we are given samples from a k-mixture of spherical Gaussians on ℝ^{n} and the goal is to estimate the unknown model parameters up to small accuracy. We develop a set of techniques that yield new efficient algorithms with significantly improved guarantees for these problems. Specifically, our main contributions are as follows:
List-Decodable Mean Estimation. Fix any d ∈ ℤ_{+} and 0< α <1/2. We design an algorithm with sample complexity Õ_{d}(n^{d}/α) and runtime Õ_{d}((n/α)^{d}) that outputs a list of O(1/α) many candidate vectors such that with high probability one of the candidates is within ℓ_{2}-distance O_{d}(α^{−1/(2d)}) from the mean of G. The only previous algorithm for this problem achieved error Õ(α^{−1/2}) under second moment conditions. For d = O(1/є), where є>0 is a constant, our algorithm runs in polynomial time and achieves error O(α^{є}). For d = Θ(log(1/α)), our algorithm runs in time (n/α)^{O(log(1/α))} and achieves error O(log^{3/2}(1/α)), almost matching the information-theoretically optimal bound of Θ(log^{1/2}(1/α)) that we establish. We also give a Statistical Query (SQ) lower bound suggesting that the complexity of our algorithm is qualitatively close to best possible.
Learning Mixtures of Spherical Gaussians. We give a learning algorithm for mixtures of spherical Gaussians, with unknown spherical covariances, that succeeds under significantly weaker separation assumptions compared to prior work. For the prototypical case of a uniform k-mixture of identity covariance Gaussians we obtain the following: For any є>0, if the pairwise separation between the means is at least Ω(k^{є}+√log(1/δ)), our algorithm learns the unknown parameters within accuracy δ with sample complexity and running time poly(n, 1/δ, (k/є)^{1/є}). Moreover, our algorithm is robust to a small dimension-independent fraction of corrupted data. The previously best known polynomial-time algorithm required separation at least k^{1/4} polylog(k/δ). Finally, our algorithm works under separation of Õ(log^{3/2}(k)+√log(1/δ)) with sample complexity and running time poly(n, 1/δ, k^{log k}). This bound is close to the information-theoretic minimum separation of Ω(√log k).
Our main technical contribution is a new technique, using degree-d multivariate polynomials, to remove outliers from high-dimensional datasets where the majority of the points are corrupted.

Ilias Diakonikolas, Daniel M. Kane, and Alistair Stewart (University of Southern California, USA; University of California at San Diego, USA) We study the efficient learnability of geometric concept classes — specifically, low-degree polynomial threshold functions (PTFs) and intersections of halfspaces — when a fraction of the training data is adversarially corrupted. We give the first polynomial-time PAC learning algorithms for these concept classes with dimension-independent error guarantees in the presence of nasty noise under the Gaussian distribution. In the nasty noise model, an omniscient adversary can arbitrarily corrupt a small fraction of both the unlabeled data points and their labels. This model generalizes well-studied noise models, including the malicious noise model and the agnostic (adversarial label noise) model. Prior to our work, the only concept class for which efficient malicious learning algorithms were known was the class of origin-centered halfspaces. At the core of our results is an efficient algorithm to approximate the low-degree Chow parameters of any bounded function in the presence of nasty noise. Our robust approximation algorithm for the Chow parameters provides near-optimal error guarantees for a range of distribution families satisfying mild concentration bounds and moment conditions. At the technical level, this algorithm employs an iterative “spectral” technique for outlier detection and removal inspired by recent work in robust unsupervised learning, which makes essential use of low-degree multivariate polynomials. 
Our robust learning algorithm for low-degree PTFs provides dimension-independent error guarantees for a class of tame distributions, including Gaussians and, more generally, any log-concave distribution with (approximately) known low-degree moments. For LTFs under the Gaussian distribution, using a refinement of the localization technique, we give a polynomial-time algorithm that achieves a near-optimal error of O(є), where є is the noise rate. Our robust learning algorithm for intersections of halfspaces proceeds by projecting down to an appropriate low-dimensional subspace. Its correctness makes essential use of a novel robust inverse independence lemma that is of independent interest.

Clément L. Canonne, Ilias Diakonikolas, Daniel M. Kane, and Alistair Stewart (Stanford University, USA; University of Southern California, USA; University of California at San Diego, USA) We study the problem of testing *conditional independence* for discrete distributions. Specifically, given samples from a discrete random variable (X, Y, Z) on domain [ℓ_{1}]×[ℓ_{2}]×[n], we want to distinguish, with probability at least 2/3, between the case that X and Y are conditionally independent given Z and the case that (X, Y, Z) is є-far, in ℓ_{1}-distance, from every distribution that has this property. Conditional independence is a concept of central importance in probability and statistics with important applications in various scientific domains. As such, the statistical task of testing conditional independence has been extensively studied in various forms within the statistics and econometrics communities for nearly a century. Perhaps surprisingly, this problem has not been previously considered in the framework of distribution property testing, and in particular no tester with *sublinear* sample complexity is known, even for the important special case that the domains of X and Y are binary. 
The main algorithmic result of this work is the first conditional independence tester with sublinear sample complexity. To complement our upper bound, we prove informationtheoretic lower bounds establishing that the sample complexity of our algorithm is optimal, up to constant factors, for a number of settings. Specifically, for the prototypical setting when ℓ_{1}, ℓ_{2} = O(1), we show that the sample complexity of testing conditional independence (upper bound and matching lower bound) is
We also achieve sublinear sample complexity for the general case of arbitrary ℓ_{1}, ℓ_{2}, and n. To obtain our tester, we employ a variety of tools, including (1) a subtle weighted adaptation of the “flattening” technique, and (2) the design and analysis of an optimal (unbiased) estimator for the following statistical problem of independent interest: Given a degree-d polynomial Q∶ℝ^{n} → ℝ and sample access to a distribution p over [n], estimate Q(p_{1}, …, p_{n}) up to small additive error. Obtaining tight variance analyses for specific estimators of this form has been a major technical hurdle in distribution testing. As an important contribution of this work, we develop a general theory providing tight variance bounds for *all* such estimators. 
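As a concrete instance of that estimation problem, the degree-2 polynomial Q(p) = ∑_i p_i² (the collision probability) has a classical unbiased estimator: the fraction of sample pairs that collide. The sketch below illustrates only this textbook special case, with an assumed toy distribution, not the paper's general estimator theory.

```python
import random
from collections import Counter

def estimate_sum_p_squared(samples):
    """Unbiased estimate of Q(p) = sum_i p_i^2 from i.i.d. samples:
    the fraction of the m*(m-1)/2 unordered sample pairs that collide."""
    m = len(samples)
    collisions = sum(c * (c - 1) // 2 for c in Counter(samples).values())
    return collisions / (m * (m - 1) / 2)

rng = random.Random(0)
# p = (1/2, 1/4, 1/4) on {0, 1, 2}, so Q(p) = 1/4 + 1/16 + 1/16 = 0.375.
samples = rng.choices([0, 1, 2], weights=[2, 1, 1], k=20000)
est = estimate_sum_p_squared(samples)
```

The hard part the paper addresses is not unbiasedness, which is easy for any fixed monomial, but proving tight variance bounds for all such estimators.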

Dinur, Irit 
STOC '18: "Towards a Proof of the 2to1 ..."
Towards a Proof of the 2to1 Games Conjecture?
Irit Dinur, Subhash Khot, Guy Kindler, Dor Minzer, and Muli Safra (Weizmann Institute of Science, Israel; New York University, USA; Hebrew University of Jerusalem, Israel; Tel Aviv University, Israel) We present a polynomial-time reduction from gap-3LIN to label cover with 2-to-1 constraints. In the “yes” case the fraction of satisfied constraints is at least 1−ε, and in the “no” case we show that this fraction is at most ε, assuming a certain (new) combinatorial hypothesis on the Grassmann graph. In other words, we describe a combinatorial hypothesis that implies the 2-to-1 conjecture with imperfect completeness. The companion submitted paper [Dinur, Khot, Kindler, Minzer and Safra, STOC 2018] makes some progress towards proving this hypothesis. Our work builds on earlier work by a subset of the authors [Khot, Minzer and Safra, STOC 2017], where a slightly different hypothesis was used to obtain hardness of approximating vertex cover to within a factor of √2−ε. The most important implication of this work is (assuming the hypothesis) an NP-hardness gap of 1/2−ε vs. ε for unique games. In addition, we derive optimal NP-hardness for approximating the max-cut-gain problem, NP-hardness of coloring an almost 4-colorable graph with any constant number of colors, and the same √2−ε NP-hardness for approximate vertex cover that was already obtained based on a slightly different hypothesis. Recent progress towards proving our hypothesis [Barak, Kothari and Steurer, ECCC TR18-077], [Dinur, Khot, Kindler, Minzer and Safra, STOC 2018] directly implies some new unconditional NP-hardness results. These include new points of NP-hardness for unique games and for 2-to-1 and 2-to-2 games. More recently, the full version of our hypothesis was proven [Khot, Minzer and Safra, ECCC TR18-006]. 
Irit Dinur, Subhash Khot, Guy Kindler, Dor Minzer, and Muli Safra (Weizmann Institute of Science, Israel; New York University, USA; Hebrew University of Jerusalem, Israel; Tel Aviv University, Israel) We study the structure of non-expanding sets in the Grassmann graph. We put forth a hypothesis stating that every small set whose expansion is smaller than 1−δ must be correlated with one of a specified list of sets which are isomorphic to smaller Grassmann graphs. We develop a framework of Fourier analysis for analyzing functions over the Grassmann graph, and prove that our hypothesis holds for all sets whose expansion is below 7/8. In the companion submitted paper [Dinur, Khot, Kindler, Minzer and Safra, STOC 2018], the authors show that a linearity agreement hypothesis implies an NP-hardness gap of 1/2−ε vs. ε for unique games and other inapproximability results. In [Barak, Kothari and Steurer, ECCC TR18-077], the authors show that the hypothesis in this work implies the linearity agreement hypothesis of [Dinur, Khot, Kindler, Minzer and Safra, STOC 2018]. Combined with our main theorem here, this proves a version of the linearity agreement hypothesis with certain specific parameters. Short of proving the entire hypothesis, this nevertheless suffices for getting new unconditional NP-hardness gaps for label cover with 2-to-1 and unique constraints. Our Expansion Hypothesis has been subsequently proved in its full form [Khot, Minzer and Safra, ECCC TR18-006], thereby proving the agreement hypothesis of [Dinur, Khot, Kindler, Minzer and Safra, STOC 2018] and completing the proof of the 2-to-1 Games Conjecture (albeit with imperfect completeness). 

Dudek, Bartłomiej 
STOC '18: "Universal Protocols for Information ..."
Universal Protocols for Information Dissemination using Emergent Signals
Bartłomiej Dudek and Adrian Kosowski (University of Wrocław, Poland; Inria, France) We consider a population of n agents which communicate with each other in a decentralized manner, through random pairwise interactions. One or more agents in the population may act as authoritative sources of information, and the objective of the remaining agents is to obtain information from or about these source agents. We study two basic tasks: broadcasting, in which the agents are to learn the bit-state of an authoritative source which is present in the population, and source detection, in which the agents are required to decide if at least one source agent is present in the population or not. We focus on designing protocols which meet two natural conditions: (1) universality, i.e., independence of population size, and (2) rapid convergence to a correct global state after a reconfiguration, such as a change in the state of a source agent. Our main positive result is to show that both of these constraints can be met. For both the broadcasting problem and the source detection problem, we obtain solutions with an expected convergence time of O(log n), from any starting configuration. The solution to broadcasting is exact, which means that all agents reach the state broadcast by the source, while the solution to source detection admits one-sided error on an ε-fraction of the population (which is unavoidable for this problem). Both protocols are easy to implement in practice and are self-stabilizing, in the sense that the stated bounds on convergence time hold starting from any possible initial configuration of the system. Our protocols exploit the properties of self-organizing oscillatory dynamics. On the hardness side, our main structural insight is to prove that any protocol which meets the constraints of universality and of rapid convergence after reconfiguration must display a form of non-stationary behavior (of which oscillatory dynamics are an example). 
We also observe that the periodicity of the oscillatory behavior of the protocol, when present, must necessarily depend on the number #X of source agents present in the population. For instance, our protocols inherently rely on the emergence of a signal passing through the population, whose period is Θ(log(n/#X)) rounds for most starting configurations. The design of phase clocks with tunable frequency may be of independent interest, notably in modeling biological networks. 
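The interaction model itself is easy to simulate. The toy epidemic broadcast below (a naive stand-in, not the authors' self-stabilizing oscillatory protocol; the population size and seed are illustrative) shows the Θ(n log n) pairwise interactions, i.e. O(log n) parallel rounds, at which information typically saturates the population.

```python
import random

def broadcast_interactions(n, seed=0):
    """Simulate uniformly random pairwise interactions until every agent
    has learned the source's bit; return the number of interactions."""
    rng = random.Random(seed)
    informed = [False] * n
    informed[0] = True  # agent 0 is the authoritative source
    num_informed, interactions = 1, 0
    while num_informed < n:
        a, b = rng.randrange(n), rng.randrange(n)
        interactions += 1
        if informed[a] and not informed[b]:
            informed[b] = True
            num_informed += 1
    return interactions

steps = broadcast_interactions(100)
```

Note this naive spreading is neither universal in the paper's sense nor robust to reconfiguration: once saturated it can never "un-broadcast" a changed source bit, which is exactly what the oscillatory dynamics are designed to handle.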

Dutta, Pranjal 
STOC '18: "Discovering the Roots: Uniform ..."
Discovering the Roots: Uniform Closure Results for Algebraic Classes under Factoring
Pranjal Dutta, Nitin Saxena, and Amit Sinhababu (Chennai Mathematical Institute, India; IIT Kanpur, India) Newton iteration (NI) is an almost 350-year-old recursive formula that approximates a simple root of a polynomial quite rapidly. We generalize it to a matrix recurrence (allRootsNI) that approximates all the roots simultaneously. In this form, the process yields a better circuit complexity in the case when the number of roots r is small but the multiplicities are exponentially large. Our method sets up a linear system in r unknowns and iteratively builds the roots as formal power series. For an algebraic circuit f(x_{1},…,x_{n}) of size s we prove that each factor has size at most a polynomial in: s and the degree of the square-free part of f. Consequently, if f_{1} is a 2^{Ω(n)}-hard polynomial then any nonzero multiple ∏_{i} f_{i}^{e_{i}} is equally hard for arbitrary positive e_{i}’s, assuming that ∑_{i}deg(f_{i}) is at most 2^{O(n)}. It is an old open question whether the class of poly(n)-sized formulas (resp. algebraic branching programs) is closed under factoring. We show that given a polynomial f of degree n^{O(1)} and formula (resp. ABP) size n^{O(log n)}, we can find a similar-size formula (resp. ABP) factor in randomized poly(n^{log n})-time. Consequently, if the determinant requires n^{Ω(log n)}-size formulas, then the same can be said about any of its nonzero multiples. As part of our proofs, we identify a new property of multivariate polynomial factorization. We show that under a random linear transformation τ, f(τx) completely factors via power series roots. Moreover, the factorization adapts well to circuit complexity analysis. This, together with allRootsNI, are the techniques that help us make progress towards the old open problems, supplementing the large body of classical results and concepts in algebraic circuit factorization (e.g., Zassenhaus, J. NT 1969; Kaltofen, STOC 1985-87; and Bürgisser, FOCS 2001). 
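The classical scalar Newton iteration that the paper generalizes to a matrix recurrence is just x ← x − f(x)/f'(x), with error roughly squaring each step near a simple root. A minimal numeric sketch (the example polynomial is chosen purely for illustration):

```python
def newton_root(f, fprime, x0, iters=20):
    """Classical Newton iteration x <- x - f(x)/f'(x). Near a simple
    root the error roughly squares each step (quadratic convergence)."""
    x = x0
    for _ in range(iters):
        x = x - f(x) / fprime(x)
    return x

# Approximate sqrt(2) as the positive simple root of x^2 - 2.
root = newton_root(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```

The paper's allRootsNI instead works over formal power series and recovers all r roots at once, which is what makes the high-multiplicity case tractable.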

Dwork, Cynthia 
STOC '18: "Composable and Versatile Privacy ..."
Composable and Versatile Privacy via Truncated CDP
Mark Bun, Cynthia Dwork, Guy N. Rothblum, and Thomas Steinke (Princeton University, USA; Harvard University, USA; Weizmann Institute of Science, Israel; IBM Research, USA)
We propose truncated concentrated differential privacy (tCDP), a refinement of differential privacy and of concentrated differential privacy. This new definition provides robust and efficient composition guarantees, supports powerful algorithmic techniques such as privacy amplification via subsampling, and enables more accurate statistical analyses. In particular, we show a central task for which the new definition enables exponential accuracy improvement.
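The kind of primitive whose repeated use such definitions account for can be sketched as a basic additive-noise release. This is background illustration only, a plain Gaussian mechanism rather than the tCDP constructions of the paper, and every parameter choice below is an assumption.

```python
import random

def gaussian_mechanism(true_value, sensitivity, sigma, seed=None):
    """Release a noisy statistic by adding N(0, (sensitivity*sigma)^2)
    noise. Concentrated-DP-style definitions (CDP, tCDP) exist largely
    to give tight privacy accounting when many such releases compose."""
    rng = random.Random(seed)
    return true_value + rng.gauss(0.0, sensitivity * sigma)

# A count query has sensitivity 1 (one person changes it by at most 1).
noisy_count = gaussian_mechanism(true_value=42.0, sensitivity=1.0, sigma=2.0, seed=0)
```

Under plain (ε, δ)-DP the privacy loss of k such releases degrades comparatively quickly; concentrated-style accounting gives roughly √k scaling, and tCDP additionally supports amplification by subsampling.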
@InProceedings{STOC18p74,
author = {Mark Bun and Cynthia Dwork and Guy N. Rothblum and Thomas Steinke},
title = {Composable and Versatile Privacy via Truncated CDP},
booktitle = {Proc.\ STOC},
publisher = {ACM},
pages = {74--86},
doi = {10.1145/3188745.3188946},
year = {2018},
}


Eden, Talya 
STOC '18: "On Approximating the Number ..."
On Approximating the Number of k-Cliques in Sublinear Time
Talya Eden, Dana Ron, and C. Seshadhri (Tel Aviv University, Israel; University of California at Santa Cruz, USA) We study the problem of approximating the number of k-cliques in a graph when given query access to the graph. We consider the standard query model for general graphs via (1) degree queries, (2) neighbor queries, and (3) pair queries. Let n denote the number of vertices in the graph, m the number of edges, and C_{k} the number of k-cliques. We design an algorithm that outputs a (1+ε)-approximation (with high probability) for C_{k}, whose expected query complexity and running time are O(n/C_{k}^{1/k}+m^{k/2}/C_{k})·poly(log n, 1/ε, k). Hence, the complexity of the algorithm is sublinear in the size of the graph for C_{k} = ω(m^{k/2−1}). Furthermore, we prove a lower bound showing that the query complexity of our algorithm is essentially optimal (up to the dependence on log n, 1/ε and k). The previous results in this vein are by Feige (SICOMP 06) and by Goldreich and Ron (RSA 08) for edge counting (k=2) and by Eden et al. (FOCS 2015) for triangle counting (k=3). Our result matches the complexities of these results. The previous result by Eden et al. hinges on a certain amortization technique that works only for triangle counting and does not generalize to larger cliques. We obtain a general algorithm that works for any k ≥ 3 by designing a procedure that samples each k-clique incident to a given set S of vertices with approximately equal probability. The primary difficulty is in finding cliques incident to purely high-degree vertices, since random sampling within neighbors has a low success probability. This is achieved by an algorithm that samples uniform random high-degree vertices and a careful tradeoff between estimating cliques incident purely to high-degree vertices and those that include a low-degree vertex. 
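The quantity C_k being approximated has an obvious exhaustive-search definition, worth keeping in mind as the baseline the sublinear algorithm avoids. The brute force below is illustrative only (graph given as adjacency sets), not the paper's query-based algorithm.

```python
from itertools import combinations

def count_k_cliques(adj, k):
    """Exhaustively count k-cliques: vertex subsets of size k in which
    every pair is adjacent. The sublinear algorithm approximates this
    count using only degree, neighbor, and pair queries."""
    nodes = list(adj)
    return sum(
        1
        for subset in combinations(nodes, k)
        if all(v in adj[u] for u, v in combinations(subset, 2))
    )

# K4 minus the edge {2, 3}: two triangles, no 4-clique.
adj = {0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1}, 3: {0, 1}}
```

This takes Θ(n^k) subset checks, whereas the paper's algorithm is sublinear even in the edge count m whenever C_k is large enough.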

Efremenko, Klim 
STOC '18: "Interactive Coding over the ..."
Interactive Coding over the Noisy Broadcast Channel
Klim Efremenko, Gillat Kol, and Raghuvansh Saxena (Ben-Gurion University of the Negev, Israel; Princeton University, USA) A set of n players, each holding a private input bit, communicate over a noisy broadcast channel. Their mutual goal is for all players to learn all inputs. At each round one of the players broadcasts a bit to all the other players, and the bit received by each player is flipped with a fixed constant probability (independently for each recipient). How many rounds are needed? This problem was first suggested by El Gamal in 1984. In 1988, Gallager gave an elegant noise-resistant protocol requiring only O(n log log n) rounds. The problem was resolved in 2005 by a seminal paper of Goyal, Kindler, and Saks, proving that Gallager's protocol is essentially optimal. We revisit the above noisy broadcast problem and show that O(n) rounds suffice. This is possible due to a relaxation of the model assumed by the previous works. We no longer demand that exactly one player broadcasts in every round, but rather allow any number of players to broadcast. However, if it is not the case that exactly one player chooses to broadcast, each of the other players receives an adversarially chosen bit. We generalize the above result and initiate the study of interactive coding over the noisy broadcast channel. We show that any interactive protocol that works over the noiseless broadcast channel can be simulated over our restrictive noisy broadcast model with constant blowup of the communication. Our results also establish that modern techniques for interactive coding can help us make progress on the classical problems. 
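The naive counter-measure that the sophisticated protocols improve on is simple repetition with majority decoding, which costs Θ(log n) broadcasts per input bit and hence Θ(n log n) rounds overall. A toy sketch of one such reception over a binary symmetric channel (flip probability and repetition count are illustrative assumptions):

```python
import random

def majority_decode(bit, repeats, flip_prob, seed=0):
    """One player's bit is broadcast `repeats` times over a channel
    that flips each received copy independently with probability
    flip_prob; the receiver majority-votes the noisy copies."""
    rng = random.Random(seed)
    received = [bit ^ (rng.random() < flip_prob) for _ in range(repeats)]
    return int(sum(received) > repeats / 2)
```

With repeats = Θ(log n), a union bound over n receivers makes every decoding succeed with high probability; Gallager's O(n log log n) protocol and the O(n) result above both beat this per-bit repetition.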

El Alaoui, Ahmed 
STOC '18: "Tight Query Complexity Lower ..."
Tight Query Complexity Lower Bounds for PCA via Finite Sample Deformed Wigner Law
Max Simchowitz, Ahmed El Alaoui, and Benjamin Recht (University of California at Berkeley, USA) We prove a query complexity lower bound for approximating the top r dimensional eigenspace of a matrix. We consider an oracle model where, given a symmetric matrix M ∈ ℝ^{d × d}, an algorithm Alg is allowed to make T exact queries of the form w^{(i)} = M v^{(i)} for i in {1,...,T}, where v^{(i)} is drawn from a distribution which depends arbitrarily on the past queries and measurements {v^{(j)},w^{(i)}}_{1 ≤ j ≤ i−1}. We show that for every gap ∈ (0,1/2], there exists a distribution over matrices M for which 1) gap_{r}(M) = Ω(gap) (where gap_{r}(M) is the normalized gap between the r and r+1st largestmagnitude eigenvector of M), and 2) any Alg which takes fewer than const × r logd/√gap queries fails (with overwhelming probability) to identity a matrix V ∈ ℝ^{d × r} with orthonormal columns for which ⟨ V, M V⟩ ≥ (1 − const × gap)∑_{i=1}^{r} λ_{i}(M). Our bound requires only that d is a small polynomial in 1/gap and r, and matches the upper bounds of Musco and Musco ’15. Moreover, it establishes a strict separation between convex optimization and “strictsaddle” nonconvex optimization of which PCA is a canonical example: in the former, firstorder methods can have dimensionfree iteration complexity, whereas in PCA, the iteration complexity of gradientbased methods must necessarily grow with the dimension. Our argument proceeds via a reduction to estimating a rankr spike in a deformed Wigner model M =W + λ U U^{⊤}, where W is from the Gaussian Orthogonal Ensemble, U is uniform on the d × rStieffel manifold and λ > 1 governs the size of the perturbation. Surprisingly, this ubiquitous random matrix model witnesses the worstcase rate for eigenspace approximation, and the ‘accelerated’ gap^{−1/2} in the rate follows as a consequence of the correspendence between the asymptotic eigengap and the size of the perturbation λ, when λ is near the “phase transition” λ = 1. 
To verify that d need only be polynomial in gap^{−1} and r, we prove a finite sample convergence theorem for the top eigenvalues of a deformed Wigner matrix, which may be of independent interest. We then lower bound the above estimation problem with a novel technique based on Fano-style data-processing inequalities with truncated likelihoods; the technique generalizes the Bayes-risk lower bound of Chen et al. ’16, and we believe it is particularly suited to lower bounds in adaptive settings like the one considered in this paper. 
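The oracle model above is exactly the access model of the classical power method, which touches M only through products w = Mv. A minimal sketch on a toy diagonal matrix (our illustration, not the authors' construction; the function name and test matrix are invented for the example):

```python
import numpy as np

def top_eigvec_via_queries(matvec, d, T, rng):
    """Estimate the top eigenvector of a symmetric matrix, accessing it
    only through T matrix-vector queries w = M v (the oracle model)."""
    v = rng.standard_normal(d)
    v /= np.linalg.norm(v)
    for _ in range(T):
        w = matvec(v)               # one query to the oracle
        v = w / np.linalg.norm(w)
    return v

rng = np.random.default_rng(0)
M = np.diag([1.0, 0.5, 0.25, 0.1])  # toy symmetric matrix with a large gap
v = top_eigvec_via_queries(lambda x: M @ x, d=4, T=60, rng=rng)
rayleigh = v @ M @ v                # approaches the top eigenvalue, 1.0
```

The lower bound says such gradient-style methods cannot be dimension-free: the number of queries must grow with log d.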

Emek, Yuval 
STOC '18: "Approximating Generalized ..."
Approximating Generalized Network Design under (Dis)economies of Scale with Applications to Energy Efficiency
Yuval Emek, Shay Kutten, Ron Lavi, and Yangguang Shi (Technion, Israel)
In a generalized network design (GND) problem, a set of resources is assigned (non-exclusively) to multiple requests. Each request contributes its weight to the resources it uses, and the total load on a resource is then translated to the cost it incurs via a resource-specific cost function. Motivated by energy efficiency applications, there has recently been growing interest in GND using cost functions that exhibit (dis)economies of scale ((D)oS), namely, cost functions that appear subadditive for small loads and superadditive for larger loads.
The current paper advances the existing literature on approximation algorithms for GND problems with (D)oS cost functions in various aspects: (1) while the existing results are restricted to routing requests in undirected graphs, identifying the resources with the graph's edges, the current paper presents a generic approximation framework that yields approximation results for a much wider family of requests (including various types of Steiner tree and Steiner forest requests) in both directed and undirected graphs, where the resources can be identified with either the edges or the vertices; (2) while the existing results assume that a request contributes the same weight to each resource it uses, our approximation framework allows for unrelated weights, thus providing the first nontrivial approximation for the problem of scheduling unrelated parallel machines with (D)oS cost functions; (3) while most of the existing approximation algorithms are based on convex programming, our approximation framework is fully combinatorial and runs in strongly polynomial time; (4) the family of (D)oS cost functions considered in the current paper is more general than the one considered in the existing literature, providing a more accurate abstraction for practical energy conservation scenarios; and (5) we obtain the first approximation ratio for GND with (D)oS cost functions that depends only on the parameters of the resources' technology and does not grow with the number of resources, the number of requests, or their weights. The design of our approximation framework relies heavily on Roughgarden's smoothness toolbox (JACM 2015), thus demonstrating the possible usefulness of this toolbox in the area of approximation algorithms.
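A toy example of a (D)oS cost function (ours, not from the paper): a fixed start-up cost plus a superlinear load term is subadditive for light loads, where consolidation pays off, and superadditive for heavy loads, where splitting pays off.

```python
def dos_cost(load, sigma=1.0, alpha=2.0):
    """Toy (D)oS cost: start-up cost sigma plus a degree-alpha term,
    loosely modeling energy = static power + dynamic (speed**alpha) power."""
    return 0.0 if load == 0 else sigma + load ** alpha

# Subadditive at small loads: serving two light requests on one
# resource is cheaper than splitting them across two resources.
small_merged = dos_cost(0.2 + 0.2)
small_split = dos_cost(0.2) + dos_cost(0.2)

# Superadditive at large loads: splitting the heavy load is cheaper.
big_merged = dos_cost(10 + 10)
big_split = dos_cost(10) + dos_cost(10)
```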
@InProceedings{STOC18p598,
author = {Yuval Emek and Shay Kutten and Ron Lavi and Yangguang Shi},
title = {Approximating Generalized Network Design under (Dis)economies of Scale with Applications to Energy Efficiency},
booktitle = {Proc.\ STOC},
publisher = {ACM},
pages = {598--606},
doi = {10.1145/3188745.3188812},
year = {2018},
}


Erickson, Jeff 
STOC '18: "Holiest Minimum-Cost Paths ..."
Holiest Minimum-Cost Paths and Flows in Surface Graphs
Jeff Erickson, Kyle Fox, and Luvsandondov Lkhamsuren (University of Illinois at Urbana-Champaign, USA; University of Texas at Dallas, USA; Airbnb, USA) Let G be an edge-weighted directed graph with n vertices embedded on an orientable surface of genus g. We describe a simple deterministic lexicographic perturbation scheme that guarantees uniqueness of minimum-cost flows and shortest paths in G. The perturbations take O(gn) time to compute. We use our perturbation scheme in a black-box manner to derive a deterministic O(n log log n) time algorithm for minimum cut in directed edge-weighted planar graphs and a deterministic O(g^{2} n log n) time preprocessing scheme for the multiple-source shortest paths problem of computing a shortest path oracle for all vertices lying on a common face of a surface-embedded graph. The latter result yields faster deterministic near-linear time algorithms for a variety of problems in constant-genus surface-embedded graphs. Finally, we open the black box in order to generalize a recent linear-time algorithm for multiple-source shortest paths in unweighted undirected planar graphs to work on arbitrary orientable surfaces. Our algorithm runs in O(g^{2} n log g) time in this setting, and it can be used to give improved linear-time algorithms for several problems in unweighted undirected surface-embedded graphs of constant genus, including the computation of minimum cuts, shortest topologically non-trivial cycles, and minimum homology bases. 

Fawzi, Omar 
STOC '18: "Efficient Decoding of Random ..."
Efficient Decoding of Random Errors for Quantum Expander Codes
Omar Fawzi, Antoine Grospellier, and Anthony Leverrier (ENS Lyon, France; Inria, France) We show that quantum expander codes, a constant-rate family of quantum low-density parity-check (LDPC) codes, together with the quasi-linear time decoding algorithm of Leverrier, Tillich and Zémor, can correct a constant fraction of random errors with very high probability. This is the first construction of a constant-rate quantum LDPC code with an efficient decoding algorithm that can correct a linear number of random errors with a negligible failure probability. Finding codes with these properties is also motivated by Gottesman’s construction of fault-tolerant schemes with constant space overhead. In order to obtain this result, we study a notion of α-percolation: for a random subset E of vertices of a given graph, we consider the size of the largest connected α-subset of E, where X is an α-subset of E if |X ∩ E| ≥ α|X|. 

FilosRatsikas, Aris 
STOC '18: "Consensus Halving Is PPA-Complete ..."
Consensus Halving Is PPA-Complete
Aris FilosRatsikas and Paul W. Goldberg (University of Oxford, UK)
We show that the computational problem Consensus Halving is PPA-complete, the first PPA-completeness result for a problem whose definition does not involve an explicit circuit. We also show that an approximate version of this problem is polynomial-time equivalent to Necklace Splitting, which establishes PPAD-hardness for Necklace Splitting and suggests that it is also PPA-complete.
@InProceedings{STOC18p51,
author = {Aris FilosRatsikas and Paul W. Goldberg},
title = {Consensus Halving Is PPA-Complete},
booktitle = {Proc.\ STOC},
publisher = {ACM},
pages = {51--64},
doi = {10.1145/3188745.3188880},
year = {2018},
}


Filtser, Arnold 
STOC '18: "Metric Embedding via Shortest ..."
Metric Embedding via Shortest Path Decompositions
Ittai Abraham, Arnold Filtser, Anupam Gupta, and Ofer Neiman (VMware, USA; Ben-Gurion University of the Negev, Israel; Carnegie Mellon University, USA) We study the problem of embedding weighted graphs of pathwidth k into ℓ_{p} spaces. Our main result is an O(k^{min{1/p, 1/2}})-distortion embedding. For p=1, this is a superexponential improvement over the best previous bound of Lee and Sidiropoulos. Our distortion bound is asymptotically tight for any fixed p > 1. Our result is obtained via a novel embedding technique that is based on low-depth decompositions of a graph via shortest paths. The core new idea is that given a geodesic shortest path P, we can probabilistically embed all points into 2 dimensions with respect to P. For p>2 our embedding also implies improved distortion on bounded treewidth graphs (O((k log n)^{1/p})). For asymptotically large p, our result also implies improved distortion on graphs excluding a minor. 

Fineman, Jeremy T. 
STOC '18: "Nearly Work-Efficient Parallel ..."
Nearly Work-Efficient Parallel Algorithm for Digraph Reachability
Jeremy T. Fineman (Georgetown University, USA) One of the simplest problems on directed graphs is that of identifying the set of vertices reachable from a designated source vertex. This problem can be solved easily sequentially by performing a graph search, but efficient parallel algorithms have eluded researchers for decades. For sparse highdiameter graphs in particular, there is no known workefficient parallel algorithm with nontrivial parallelism. This amounts to one of the most fundamental open questions in parallel graph algorithms: Is there a parallel algorithm for digraph reachability with nearly linear work? This paper shows that the answer is yes. This paper presents a randomized parallel algorithm for digraph reachability and related problems with expected work Õ(m) and span Õ(n^{2/3}), and hence parallelism Ω(m/n^{2/3}) = Ω(n^{1/3}), on any graph with n vertices and m arcs. This is the first parallel algorithm having both nearly linear work and strongly sublinear span, i.e., span Õ(n^{1−ε}) for any constant ε>0. The algorithm can be extended to produce a directed spanning tree, determine whether the graph is acyclic, topologically sort the strongly connected components of the graph, or produce a directed ear decomposition, all with work Õ(m) and span Õ(n^{2/3}). The main technical contribution is an efficient Monte Carlo algorithm that, through the addition of Õ(n) shortcuts, reduces the diameter of the graph to Õ(n^{2/3}) with high probability. While both sequential and parallel algorithms are known with those combinatorial properties, even the sequential algorithms are not efficient, having sequential runtime Ω(mn^{Ω(1)}). This paper presents a surprisingly simple sequential algorithm that achieves the stated diameter reduction and runs in Õ(m) time. Parallelizing that algorithm yields the main result, but doing so involves overcoming several other challenges. 
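The easy sequential baseline mentioned above, reachability by graph search, is a few lines of BFS; the hard part is parallelizing it. A minimal sketch:

```python
from collections import deque

def reachable(adj, source):
    """Vertices reachable from source in a digraph given as an
    adjacency list {u: [v, ...]}; O(n + m) sequential time."""
    seen = {source}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj.get(u, []):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return seen

# Toy digraph: 3 reaches everything via 1; 0 cannot reach 3.
adj = {0: [1], 1: [2], 2: [0], 3: [1]}
```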

Forbes, Michael A. 
STOC '18: "A PSPACE Construction of a ..."
A PSPACE Construction of a Hitting Set for the Closure of Small Algebraic Circuits
Michael A. Forbes and Amir Shpilka (University of Illinois at Urbana-Champaign, USA; Tel Aviv University, Israel) In this paper we study the complexity of constructing a hitting set for the closure of VP, the class of polynomials that can be infinitesimally approximated by polynomials that are computed by polynomial-sized algebraic circuits, over the real or complex numbers. Specifically, we show that there is a PSPACE algorithm that, given n, s, r in unary, outputs a set of rational n-tuples of size poly(n,s,r), with poly(n,s,r) bit complexity, that hits all n-variate polynomials of degree r that are the limit of size-s algebraic circuits. Previously it was known that a random set of this size is a hitting set, but a construction that is certified to work was only known in EXPSPACE (or EXPH assuming the generalized Riemann hypothesis). As a corollary we get that a host of other algebraic problems, such as the Noether Normalization Lemma, can also be solved in PSPACE deterministically, where earlier only randomized algorithms and EXPSPACE algorithms (or EXPH assuming the generalized Riemann hypothesis) were known. The proof relies on the new notion of a robust hitting set, which is a set of inputs such that any nonzero polynomial that can be computed by a polynomial-size algebraic circuit evaluates to a not too small value on at least one element of the set. Proving the existence of such a robust hitting set is the main technical difficulty in the proof. Our proof uses anti-concentration results for polynomials, basic tools from algebraic geometry, and the existential theory of the reals. 

Fox, Kyle 
STOC '18: "Holiest Minimum-Cost Paths ..."
Holiest Minimum-Cost Paths and Flows in Surface Graphs
Jeff Erickson, Kyle Fox, and Luvsandondov Lkhamsuren (University of Illinois at Urbana-Champaign, USA; University of Texas at Dallas, USA; Airbnb, USA) Let G be an edge-weighted directed graph with n vertices embedded on an orientable surface of genus g. We describe a simple deterministic lexicographic perturbation scheme that guarantees uniqueness of minimum-cost flows and shortest paths in G. The perturbations take O(gn) time to compute. We use our perturbation scheme in a black-box manner to derive a deterministic O(n log log n) time algorithm for minimum cut in directed edge-weighted planar graphs and a deterministic O(g^{2} n log n) time preprocessing scheme for the multiple-source shortest paths problem of computing a shortest path oracle for all vertices lying on a common face of a surface-embedded graph. The latter result yields faster deterministic near-linear time algorithms for a variety of problems in constant-genus surface-embedded graphs. Finally, we open the black box in order to generalize a recent linear-time algorithm for multiple-source shortest paths in unweighted undirected planar graphs to work on arbitrary orientable surfaces. Our algorithm runs in O(g^{2} n log g) time in this setting, and it can be used to give improved linear-time algorithms for several problems in unweighted undirected surface-embedded graphs of constant genus, including the computation of minimum cuts, shortest topologically non-trivial cycles, and minimum homology bases. 

Franceschetti, Massimo 
STOC '18: "Shape of Diffusion and Size ..."
Shape of Diffusion and Size of Monochromatic Region of a Two-Dimensional Spin System
Hamed Omidvar and Massimo Franceschetti (University of California at San Diego, USA) We consider an agent-based distributed algorithm with exponentially distributed waiting times in which agents with binary states interact locally over a geometric graph and, based on this interaction and on the value of a common intolerance threshold τ, decide whether to change their states. This model is equivalent to an Asynchronous Cellular Automaton (ACA) with extended Moore neighborhoods, a zero-temperature Ising model with Glauber dynamics, or a Schelling model of self-organized segregation in an open system, and has applications in the analysis of social and biological networks and spin glass systems. We prove a shape theorem for the spread of the “affected” nodes during the process dynamics and show that in the steady state, for τ ∈ (τ^{*}, 1−τ^{*}) ∖ {1/2}, where τ^{*} ≈ 0.488, the size of the “monochromatic region” at the end of the process is at least exponential in the size of the local neighborhood of interaction with probability approaching one as N grows. Combined with previous results on the expected size of the monochromatic region that provide a matching upper bound, this implies that in the steady state the size of the monochromatic region of any agent is exponential with high probability for the mentioned interval of τ. The shape theorem is based on a novel concentration inequality for the spreading time, and provides a precise geometrical description of the process dynamics. The result on the size of the monochromatic region considerably extends our understanding of the steady state. Showing convergence with high probability, it rules out the possibility that only a small fraction of the nodes are eventually contained in large monochromatic regions, which was left open by previous works. 

Franks, Cole 
STOC '18: "Operator Scaling with Specified ..."
Operator Scaling with Specified Marginals
Cole Franks (Rutgers University, USA) The completely positive maps, a generalization of the nonnegative matrices, are a well-studied class of maps from n×n matrices to m×m matrices. The existence of the operator analogues of doubly stochastic scalings of matrices, the study of which is known as operator scaling, is equivalent to a multitude of problems in computer science and mathematics, such as rational identity testing in non-commuting variables, non-commutative rank of symbolic matrices, and a basic problem in invariant theory (Garg et al., 2016). We study operator scaling with specified marginals, which is the operator analogue of scaling matrices to specified row and column sums (or marginals). We characterize the operators which can be scaled to given marginals, much in the spirit of Gurvits’ algorithmic characterization of the operators that can be scaled to doubly stochastic (Gurvits, 2004). Our algorithm, which is a modified version of Gurvits’ algorithm, produces approximate scalings in time poly(n,m) whenever scalings exist. A central ingredient in our analysis is a reduction from operator scaling with specified marginals to operator scaling in the doubly stochastic setting. Instances of operator scaling with specified marginals arise in diverse areas of study such as the Brascamp-Lieb inequalities, communication complexity, eigenvalues of sums of Hermitian matrices, and quantum information theory. Some of the known theorems in these areas, several of which had no algorithmic proof, are straightforward consequences of our characterization theorem. For instance, we obtain a simple algorithm to find, when it exists, a tuple of Hermitian matrices with given spectra whose sum has a given spectrum. We also prove new theorems, such as a generalization of Forster’s theorem (Forster, 2002) concerning radial isotropic position. 
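The commutative special case referenced above, scaling a positive matrix to specified row and column sums, is solved by Sinkhorn's classical alternating normalization. A sketch of that baseline (not the operator-scaling algorithm of the paper):

```python
import numpy as np

def sinkhorn(A, row_sums, col_sums, iters=500):
    """Alternately rescale rows and columns of a positive matrix A
    so that its marginals approach the specified row and column sums."""
    A = np.array(A, dtype=float)
    for _ in range(iters):
        A *= (np.asarray(row_sums) / A.sum(axis=1))[:, None]  # fix row sums
        A *= np.asarray(col_sums) / A.sum(axis=0)             # fix column sums
    return A

# Scale a 2x2 positive matrix to row sums (1, 1) and column sums (0.5, 1.5);
# note the targets are consistent: both total 2.
S = sinkhorn([[1.0, 2.0], [3.0, 4.0]], row_sums=[1.0, 1.0], col_sums=[0.5, 1.5])
```

For strictly positive matrices this iteration always converges; the paper's question is which operators (and which marginals) admit the analogous scaling.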

Galanis, Andreas 
STOC '18: "Inapproximability of the Independent ..."
Inapproximability of the Independent Set Polynomial in the Complex Plane
Ivona Bezáková, Andreas Galanis, Leslie Ann Goldberg, and Daniel Štefankovič (Rochester Institute of Technology, USA; University of Oxford, UK; University of Rochester, USA) We study the complexity of approximating the value of the independent set polynomial Z_{G}(λ) of a graph G with maximum degree Δ when the activity λ is a complex number. When λ is real, the complexity picture is well-understood, and is captured by two real-valued thresholds λ^{*} and λ_{c}, which depend on Δ and satisfy 0 < λ^{*} < λ_{c}. It is known that if λ is a real number in the interval (−λ^{*}, λ_{c}) then there is an FPTAS for approximating Z_{G}(λ) on graphs G with maximum degree at most Δ. On the other hand, if λ is a real number outside of the (closed) interval, then approximation is NP-hard. The key to establishing this picture was the interpretation of the thresholds λ^{*} and λ_{c} on the Δ-regular tree. The “occupation ratio” of a Δ-regular tree T is the contribution to Z_{T}(λ) from independent sets containing the root of the tree, divided by Z_{T}(λ) itself. This occupation ratio converges to a limit, as the height of the tree grows, if and only if λ ∈ [−λ^{*}, λ_{c}]. Unsurprisingly, the case where λ is complex is more challenging. It is known that there is an FPTAS when λ is a complex number with norm at most λ^{*} and also when λ is in a small strip surrounding the real interval [0, λ_{c}). However, neither of these results is believed to fully capture the truth about when approximation is possible. Peters and Regts identified the values of λ for which the occupation ratio of the Δ-regular tree converges. These values carve a cardioid-shaped region Λ_{Δ} in the complex plane, whose boundary includes the critical points −λ^{*} and λ_{c}. Motivated by the picture in the real case, they asked whether Λ_{Δ} marks the true approximability threshold for general complex values λ. 
Our main result shows that for every λ outside of Λ_{Δ}, the problem of approximating Z_{G}(λ) on graphs G with maximum degree at most Δ is indeed NP-hard. In fact, when λ is outside of Λ_{Δ} and is not a positive real number, we give the stronger result that approximating Z_{G}(λ) is actually #P-hard. Further, on the negative real axis, when λ < −λ^{*}, we show that it is #P-hard to even decide whether Z_{G}(λ) > 0, resolving in the affirmative a conjecture of Harvey, Srivastava and Vondrak. Our proof techniques are based around tools from complex analysis — specifically, the study of iterative multivariate rational maps. 
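For intuition, Z_G(λ) can be computed by brute force on tiny graphs directly from its definition as the generating polynomial of independent sets (a sketch for illustration only; the problem studied above concerns approximation on large bounded-degree graphs):

```python
from itertools import combinations

def independence_poly_coeffs(n, edges):
    """Coefficients [i_0, i_1, ...] where i_k counts the independent
    sets of size k, so Z_G(lam) = sum_k i_k * lam**k."""
    edge_set = {frozenset(e) for e in edges}
    coeffs = [0] * (n + 1)
    for k in range(n + 1):
        for S in combinations(range(n), k):
            if all(frozenset(p) not in edge_set for p in combinations(S, 2)):
                coeffs[k] += 1
    return coeffs

# For the triangle, Z(lam) = 1 + 3*lam: the empty set plus 3 singletons.
triangle = independence_poly_coeffs(3, [(0, 1), (1, 2), (0, 2)])
```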

Garg, Ankit 
STOC '18: "A Matrix Expander Chernoff ..."
A Matrix Expander Chernoff Bound
Ankit Garg, Yin Tat Lee, Zhao Song, and Nikhil Srivastava (Microsoft Research, USA; University of Washington, USA; Harvard University, USA; University of Texas at Austin, USA; University of California at Berkeley, USA)
We prove a Chernoff-type bound for sums of matrix-valued random variables sampled via a random walk on an expander, confirming a conjecture due to [Wigderson and Xiao 06]. Our proof is based on a new multi-matrix extension of the Golden-Thompson inequality which improves upon the inequality in [Sutter, Berta and Tomamichel 17], as well as an adaptation of an argument for the scalar case due to [Healy 08]. Our new multi-matrix Golden-Thompson inequality could be of independent interest. Secondarily, we also provide a generic reduction showing that any concentration inequality for vector-valued martingales implies a concentration inequality for the corresponding expander walk, with a weakening of parameters proportional to the squared mixing time.
@InProceedings{STOC18p1102,
author = {Ankit Garg and Yin Tat Lee and Zhao Song and Nikhil Srivastava},
title = {A Matrix Expander Chernoff Bound},
booktitle = {Proc.\ STOC},
publisher = {ACM},
pages = {1102--1114},
doi = {10.1145/3188745.3188890},
year = {2018},
}
STOC '18: "Operator Scaling via Geodesically ..."
Operator Scaling via Geodesically Convex Optimization, Invariant Theory and Polynomial Identity Testing
Zeyuan AllenZhu, Ankit Garg, Yuanzhi Li, Rafael Oliveira, and Avi Wigderson (Microsoft Research, USA; Princeton University, USA; University of Toronto, Canada; Institute for Advanced Study at Princeton, USA)
We propose a new second-order method for geodesically convex optimization on the natural hyperbolic metric over positive definite matrices. We apply it to solve the operator scaling problem in time polynomial in the input size and logarithmic in the error. This is an exponential improvement over previous algorithms, which were analyzed in the usual Euclidean, "commutative" metric (for which the above problem is not convex). Our method is general and applicable to other settings.
As a consequence, we solve the equivalence problem for the left-right group action underlying the operator scaling problem. This yields a deterministic polynomial-time algorithm for a new class of Polynomial Identity Testing (PIT) problems, which was the original motivation for studying operator scaling.
@InProceedings{STOC18p172,
author = {Zeyuan AllenZhu and Ankit Garg and Yuanzhi Li and Rafael Oliveira and Avi Wigderson},
title = {Operator Scaling via Geodesically Convex Optimization, Invariant Theory and Polynomial Identity Testing},
booktitle = {Proc.\ STOC},
publisher = {ACM},
pages = {172--181},
doi = {10.1145/3188745.3188942},
year = {2018},
}
STOC '18: "Monotone Circuit Lower Bounds ..."
Monotone Circuit Lower Bounds from Resolution
Ankit Garg, Mika Göös, Pritish Kamath, and Dmitry Sokolov (Microsoft Research, USA; Harvard University, USA; Massachusetts Institute of Technology, USA; KTH, Sweden) For any unsatisfiable CNF formula F that is hard to refute in the Resolution proof system, we show that a gadget-composed version of F is hard to refute in any proof system whose lines are computed by efficient communication protocols—or, equivalently, that a monotone function associated with F has large monotone circuit complexity. Our result extends to monotone real circuits, which yields new lower bounds for the Cutting Planes proof system. 

Garg, Shashwat 
STOC '18: "The Gram-Schmidt Walk: A Cure ..."
The Gram-Schmidt Walk: A Cure for the Banaszczyk Blues
Nikhil Bansal, Daniel Dadush, Shashwat Garg, and Shachar Lovett (Eindhoven University of Technology, Netherlands; CWI, Netherlands; University of California at San Diego, USA) An important result in discrepancy theory due to Banaszczyk states that for any set of n vectors in ℝ^{m} of ℓ_{2} norm at most 1 and any convex body K in ℝ^{m} of Gaussian measure at least half, there exists a ±1 combination of these vectors which lies in 5K. This result implies the best known bounds for several problems in discrepancy. Banaszczyk’s proof of this result is non-constructive, and it has been an open problem to give an efficient algorithm to find such a ±1 combination of the vectors. In this paper, we resolve this question and give an efficient randomized algorithm to find a ±1 combination of the vectors which lies in cK for an absolute constant c>0. This leads to new efficient algorithms for several problems in discrepancy theory. 

Garg, Sumegha 
STOC '18: "Extractor-Based Time-Space ..."
Extractor-Based Time-Space Lower Bounds for Learning
Sumegha Garg, Ran Raz, and Avishay Tal (Princeton University, USA; Stanford University, USA) A matrix M: A × X → {−1,1} corresponds to the following learning problem: An unknown element x ∈ X is chosen uniformly at random. A learner tries to learn x from a stream of samples, (a_{1}, b_{1}), (a_{2}, b_{2}), …, where for every i, a_{i} ∈ A is chosen uniformly at random and b_{i} = M(a_{i}, x). Assume that k, l, r are such that any submatrix of M of at least 2^{−k} · |A| rows and at least 2^{−l} · |X| columns has a bias of at most 2^{−r}. We show that any learning algorithm for the learning problem corresponding to M requires either a memory of size at least Ω(k · l), or at least 2^{Ω(r)} samples. The result holds even if the learner has an exponentially small success probability (of 2^{−Ω(r)}). In particular, this shows that for a large class of learning problems, any learning algorithm requires either a memory of size at least Ω((log |X|) · (log |A|)) or an exponential number of samples, achieving a tight Ω((log |X|) · (log |A|)) lower bound on the size of the memory, rather than the bound of Ω(min{(log |X|)^{2}, (log |A|)^{2}}) obtained in previous works by Raz [FOCS’17] and Moshkovitz and Moshkovitz [ITCS’18]. Moreover, our result implies all previous memory-samples lower bounds, as well as a number of new applications. Our proof builds on the work of Raz [FOCS’17] that gave a general technique for proving memory-samples lower bounds. 
STOC '18: "Hitting Sets with Near-Optimal ..."
Hitting Sets with Near-Optimal Error for Read-Once Branching Programs
Mark Braverman, Gil Cohen, and Sumegha Garg (Princeton University, USA) Nisan (Combinatorica’92) constructed a pseudorandom generator for length-n, width-n read-once branching programs (ROBPs) with error ε and seed length O(log^{2} n + log n · log(1/ε)). A major goal in complexity theory is to reduce the seed length, hopefully, to the optimal O(log n + log(1/ε)), or to construct improved hitting sets, as these would yield stronger derandomization of BPL and RL, respectively. 
In contrast to a successful line of work in restricted settings, no progress has been made for general, unrestricted ROBPs. Indeed, Nisan’s construction gives the best pseudorandom generator and, prior to this work, also the best hitting set for unrestricted ROBPs. In this work, we make the first improvement for the general case by constructing a hitting set with seed length O(log^{2} n + log(1/ε)). That is, we decouple ε and n, and obtain near-optimal dependence on the former. The regime of parameters in which our construction strictly improves upon prior works, namely log(1/ε) ≫ log n, is well-motivated by the work of Saks and Zhou (JCSS’99), who use pseudorandom generators with error ε = 2^{−(log n)^{2}} in their proof that BPL ⊆ L^{3/2}. We further suggest a research program towards proving that BPL ⊆ L^{4/3}, in which our result achieves one step. As our main technical tool, we introduce and construct a new type of primitive we call pseudorandom pseudo-distributions. Informally, this is a generalization of pseudorandom generators in which one may assign negative and unbounded weights to paths, as opposed to working with probability distributions. We show that such a primitive yields hitting sets and, for derandomization purposes, can be used to derandomize two-sided error algorithms. 

Ghaffari, Mohsen 
STOC '18: "Improved Distributed Algorithms ..."
Improved Distributed Algorithms for Exact Shortest Paths
Mohsen Ghaffari and Jason Li (ETH Zurich, Switzerland; Carnegie Mellon University, USA) Computing shortest paths is one of the central problems in the theory of distributed computing. For the last few years, substantial progress has been made on the approximate single-source shortest paths problem, culminating in an algorithm of Henzinger, Krinninger, and Nanongkai [STOC’16] which deterministically computes (1+o(1))-approximate shortest paths in Õ(D+√n) time, where D is the hop-diameter of the graph. Up to logarithmic factors, this time complexity is optimal, matching the lower bound of Elkin [STOC’04]. The question of exact shortest paths, however, saw no algorithmic progress for decades, until the recent breakthrough of Elkin [STOC’17], which established a sublinear-time algorithm for exact single-source shortest paths on undirected graphs. Shortly after, Huang et al. [FOCS’17] provided improved algorithms for the exact all-pairs shortest paths problem on directed graphs. In this paper, we provide an alternative single-source shortest path algorithm with complexity Õ(n^{3/4}D^{1/4}). For polylogarithmic D, this improves on Elkin’s Õ(n^{5/6}) bound and gets closer to the Ω(n^{1/2}) lower bound of Elkin [STOC’04]. For larger values of D, we present an improved variant of our algorithm which achieves complexity Õ(max{n^{3/4+o(1)}, n^{3/4}D^{1/6}} + D), and thus compares favorably with Elkin’s bound of Õ(max{n^{5/6}, n^{2/3}D^{1/3}} + D) in essentially the entire range of parameters. This algorithm also provides a qualitative improvement, because it works for the more challenging case of directed graphs (i.e., graphs where the two directions of an edge can have different weights), constituting the first sublinear-time algorithm for directed graphs. Our algorithm also extends to the case of exact r-source shortest paths, in which we provide the fastest algorithm for moderately small r and D, improving on those of Huang et al. 
STOC '18: "Deterministic Distributed Edge-Coloring ..."
Deterministic Distributed Edge-Coloring with Fewer Colors
Mohsen Ghaffari, Fabian Kuhn, Yannic Maus, and Jara Uitto (ETH Zurich, Switzerland; University of Freiburg, Germany) We present a deterministic distributed algorithm, in the LOCAL model, that computes a (1+o(1))Δ-edge-coloring in polylogarithmic time, so long as the maximum degree Δ = Ω(log n). For smaller Δ, we give a polylogarithmic-time 3Δ/2-edge-coloring. These are the first deterministic algorithms to go below the natural barrier of 2Δ−1 colors, and they improve significantly on the recent polylogarithmic-time (2Δ−1)·(1+o(1))-edge-coloring of Ghaffari and Su [SODA’17] and the (2Δ−1)-edge-coloring of Fischer, Ghaffari, and Kuhn [FOCS’17], positively answering the main open question of the latter. The key technical ingredient of our algorithm is a simple and novel gradual packing of judiciously chosen near-maximum matchings, each of which becomes one of the color classes. 

Gharan, Shayan Oveis 
STOC '18: "A Simply Exponential Upper ..."
A Simply Exponential Upper Bound on the Maximum Number of Stable Matchings
Anna R. Karlin, Shayan Oveis Gharan, and Robbie Weber (University of Washington, USA) Stable matching is a classical combinatorial problem that has been the subject of intense theoretical and empirical study since its introduction in 1962 in a seminal paper by Gale and Shapley. In this paper, we provide a new upper bound on f(n), the maximum number of stable matchings that a stable matching instance with n men and n women can have. It has been a long-standing open problem to understand the asymptotic behavior of f(n) as n→∞, first posed by Donald Knuth in the 1970s. Until now the best lower bound was approximately 2.28^{n}, and the best upper bound was 2^{n log n − O(n)}. In this paper, we show that for all n, f(n) ≤ c^{n} for some universal constant c. This matches the lower bound up to the base of the exponent. Our proof is based on a reduction to counting the number of downsets of a family of posets that we call “mixing”. The latter might be of independent interest. 
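The Gale-Shapley deferred-acceptance procedure from the 1962 paper cited above always outputs one stable matching; the question studied here is how many can exist. A standard sketch of the men-proposing version:

```python
def gale_shapley(men_prefs, women_prefs):
    """Men-proposing deferred acceptance. Each person maps to a
    preference list; returns a stable matching as {woman: man}."""
    rank = {w: {m: i for i, m in enumerate(plist)}
            for w, plist in women_prefs.items()}
    next_proposal = {m: 0 for m in men_prefs}   # next index in each man's list
    engaged = {}                                 # woman -> man
    free = list(men_prefs)
    while free:
        m = free.pop()
        w = men_prefs[m][next_proposal[m]]
        next_proposal[m] += 1
        if w not in engaged:
            engaged[w] = m                       # w accepts her first proposal
        elif rank[w][m] < rank[w][engaged[w]]:
            free.append(engaged[w])              # w trades up, dumping fiance
            engaged[w] = m
        else:
            free.append(m)                       # w rejects m
    return engaged

men = {"a": ["x", "y"], "b": ["y", "x"]}
women = {"x": ["b", "a"], "y": ["a", "b"]}
matching = gale_shapley(men, women)
```

In this tiny instance each man gets his first choice; instances maximizing the number of distinct stable matchings are the subject of the bound above.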

Ghosh, Sumanta 
STOC '18: "Bootstrapping Variables in ..."
Bootstrapping Variables in Algebraic Circuits
Manindra Agrawal, Sumanta Ghosh, and Nitin Saxena (IIT Kanpur, India) We show that for the blackbox polynomial identity testing (PIT) problem it suffices to study circuits that depend only on the first extremely few variables. One need only consider size-s degree-s circuits that depend on the first log^{∘c} s variables (where c is a constant and we are composing c logarithms). Thus, a hitting-set generator (hsg) manifests a bootstrapping behavior: a partial hsg against very few variables can be efficiently grown to a complete hsg. A boolean analog, or a pseudorandom generator property of this type, is unheard-of. Our idea is to use the partial hsg and its annihilator polynomial to efficiently bootstrap the hsg exponentially with respect to the number of variables. This is repeated c times in an efficient way. Pushing the envelope further, we show that: (1) a quadratic-time blackbox PIT for 6913-variate degree-s size-s polynomials will lead to a “near”-complete derandomization of PIT, and (2) a blackbox PIT for n-variate degree-s size-s circuits in s^{n^{δ}} time, for δ < 1/2, will lead to a “near”-complete derandomization of PIT (in contrast, s^{n} time is trivial). Our second idea is to study depth-4 circuits that depend on constantly many variables. We show that a polynomial-time computable, O(s^{1.49})-degree hsg for trivariate depth-4 circuits bootstraps to a quasipolynomial time hsg for general poly-degree circuits, and implies a lower bound that is a bit stronger than Kabanets-Impagliazzo (STOC 2003). 
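For contrast, blackbox PIT has an easy randomized solution by the Schwartz-Zippel lemma: evaluate the circuit at random points from a large grid. The derandomization discussed above is about replacing this random choice by an explicit hitting set. A sketch of the randomized baseline (function names are ours):

```python
import random

def probably_zero(poly, n_vars, degree, trials=20, seed=1):
    """Randomized blackbox PIT. A nonzero polynomial of total degree d
    vanishes at a uniform point of S^n with probability at most d/|S|
    (Schwartz-Zippel), so random evaluation detects nonzeroness whp."""
    rng = random.Random(seed)
    grid = range(100 * degree)   # |S| >> degree keeps the per-trial error small
    for _ in range(trials):
        point = [rng.choice(grid) for _ in range(n_vars)]
        if poly(point) != 0:
            return False          # a nonzero evaluation certifies nonzeroness
    return True                   # identically zero, with high probability

# (x + y)(x - y) - x^2 + y^2 is identically zero; x*y - 1 is not.
zero_poly = lambda p: (p[0] + p[1]) * (p[0] - p[1]) - p[0] ** 2 + p[1] ** 2
nonzero_poly = lambda p: p[0] * p[1] - 1
```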

Gishboliner, Lior 
STOC '18: "A Generalized Turán Problem ..."
A Generalized Turán Problem and Its Applications
Lior Gishboliner and Asaf Shapira (Tel Aviv University, Israel) Our first theorem in this paper is a hierarchy theorem for the query complexity of testing graph properties with 1-sided error; more precisely, we show that for every sufficiently fast-growing function f, there is a graph property whose 1-sided-error query complexity is precisely f(Θ(1/ε)). No result of this type was previously known for any f which is superpolynomial. Goldreich [ECCC 2005] asked to exhibit a graph property whose query complexity is 2^{Θ(1/ε)}. Our hierarchy theorem partially resolves this problem by exhibiting a property whose 1-sided-error query complexity is 2^{Θ(1/ε)}. We also use our hierarchy theorem in order to resolve a problem raised by the second author and Alon [STOC 2005] regarding testing relaxed versions of bipartiteness. Our second theorem states that for any function f there is a graph property whose 1-sided-error query complexity is f(Θ(1/ε)) while its 2-sided-error query complexity is only poly(1/ε). This is the first indication of the surprising power that 2-sided-error testing algorithms have over 1-sided-error ones, even when restricted to properties that are testable with 1-sided error. Again, no result of this type was previously known for any f that is superpolynomial. The above theorems are derived from a graph theoretic result which we think is of independent interest, and might have further applications. Alon and Shikhelman [JCTB 2016] introduced the following generalized Turán problem: for fixed graphs H and T, and an integer n, what is the maximum number of copies of T, denoted by ex(n,T,H), that can appear in an n-vertex H-free graph? This problem received a lot of attention recently, with an emphasis on ex(n,C_{3},C_{2ℓ+1}). Our third theorem in this paper gives tight bounds for ex(n,C_{k},C_{ℓ}) for all the remaining values of k and ℓ. 

Göös, Mika 
STOC '18: "Monotone Circuit Lower Bounds ..."
Monotone Circuit Lower Bounds from Resolution
Ankit Garg, Mika Göös, Pritish Kamath, and Dmitry Sokolov (Microsoft Research, USA; Harvard University, USA; Massachusetts Institute of Technology, USA; KTH, Sweden) For any unsatisfiable CNF formula F that is hard to refute in the Resolution proof system, we show that a gadget-composed version of F is hard to refute in any proof system whose lines are computed by efficient communication protocols—or, equivalently, that a monotone function associated with F has large monotone circuit complexity. Our result extends to monotone real circuits, which yields new lower bounds for the Cutting Planes proof system. 

Goldberg, Leslie Ann 
STOC '18: "Inapproximability of the Independent ..."
Inapproximability of the Independent Set Polynomial in the Complex Plane
Ivona Bezáková, Andreas Galanis, Leslie Ann Goldberg, and Daniel Štefankovič (Rochester Institute of Technology, USA; University of Oxford, UK; University of Rochester, USA) We study the complexity of approximating the value of the independent set polynomial Z_{G}(λ) of a graph G with maximum degree Δ when the activity λ is a complex number. When λ is real, the complexity picture is well-understood, and is captured by two real-valued thresholds λ^{*} and λ_{c}, which depend on Δ and satisfy 0<λ^{*}<λ_{c}. It is known that if λ is a real number in the interval (−λ^{*},λ_{c}) then there is an FPTAS for approximating Z_{G}(λ) on graphs G with maximum degree at most Δ. On the other hand, if λ is a real number outside of the (closed) interval, then approximation is NP-hard. The key to establishing this picture was the interpretation of the thresholds λ^{*} and λ_{c} on the Δ-regular tree. The “occupation ratio” of a Δ-regular tree T is the contribution to Z_{T}(λ) from independent sets containing the root of the tree, divided by Z_{T}(λ) itself. This occupation ratio converges to a limit, as the height of the tree grows, if and only if λ∈ [−λ^{*},λ_{c}]. Unsurprisingly, the case where λ is complex is more challenging. It is known that there is an FPTAS when λ is a complex number with norm at most λ^{*} and also when λ is in a small strip surrounding the real interval [0,λ_{c}). However, neither of these results is believed to fully capture the truth about when approximation is possible. Peters and Regts identified the values of λ for which the occupation ratio of the Δ-regular tree converges. These values carve a cardioid-shaped region Λ_{Δ} in the complex plane, whose boundary includes the critical points −λ^{*} and λ_{c}. Motivated by the picture in the real case, they asked whether Λ_{Δ} marks the true approximability threshold for general complex values λ. 
Our main result shows that for every λ outside of Λ_{Δ}, the problem of approximating Z_{G}(λ) on graphs G with maximum degree at most Δ is indeed NP-hard. In fact, when λ is outside of Λ_{Δ} and is not a positive real number, we give the stronger result that approximating Z_{G}(λ) is actually #P-hard. Further, on the negative real axis, when λ<−λ^{*}, we show that it is #P-hard to even decide whether Z_{G}(λ)>0, resolving in the affirmative a conjecture of Harvey, Srivastava and Vondrák. Our proof techniques are based around tools from complex analysis — specifically, the study of iterative multivariate rational maps. 
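For concreteness, Z_G(λ) can be evaluated exactly on tiny graphs by enumerating independent sets (a brute-force illustration of the quantity being approximated, not the algorithms discussed above; the function name is our own):

```python
from itertools import combinations

# Exact evaluation of the independent set polynomial
# Z_G(lam) = sum over independent sets I of G of lam^|I|,
# by enumerating all vertex subsets. Exponential time; tiny graphs only.
def independence_polynomial(n, edges, lam):
    edge_set = {frozenset(e) for e in edges}
    total = 0.0
    for k in range(n + 1):
        for subset in combinations(range(n), k):
            # subset is independent iff it spans no edge of G
            if all(frozenset(p) not in edge_set for p in combinations(subset, 2)):
                total += lam ** k
    return total

# Example: a single edge {0,1} has independent sets {}, {0}, {1},
# so Z_G(lam) = 1 + 2*lam.
```

Passing a complex `lam` works as well, which is exactly the regime the abstract studies.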

Goldberg, Paul W. 
STOC '18: "Consensus Halving Is PPA-Complete ..."
Consensus Halving Is PPA-Complete
Aris Filos-Ratsikas and Paul W. Goldberg (University of Oxford, UK)
We show that the computational problem Consensus Halving is PPA-Complete, the first PPA-Completeness result for a problem whose definition does not involve an explicit circuit. We also show that an approximate version of this problem is polynomial-time equivalent to Necklace Splitting, which establishes PPAD-hardness for Necklace Splitting and suggests that it is also PPA-Complete.
@InProceedings{STOC18p51,
author = {Aris Filos-Ratsikas and Paul W. Goldberg},
title = {Consensus Halving Is PPA-Complete},
booktitle = {Proc.\ STOC},
publisher = {ACM},
pages = {51--64},
doi = {10.1145/3188745.3188880},
year = {2018},
}


Gonczarowski, Yannai A. 
STOC '18: "Bounding the Menu-Size of ..."
Bounding the Menu-Size of Approximately Optimal Auctions via Optimal-Transport Duality
Yannai A. Gonczarowski (Hebrew University of Jerusalem, Israel; Microsoft Research, Israel) The question of the minimum menu-size for approximate (i.e., up-to-ε) Bayesian revenue maximization when selling two goods to an additive risk-neutral quasilinear buyer was introduced by Hart and Nisan [2013], who give an upper bound of O(1/ε^{4}) for this problem. Using the optimal-transport duality framework of Daskalakis, Deckelbaum, and Tzamos [2013, 2015], we derive the first lower bound for this problem — of Ω(1/∜ε), even when the values for the two goods are drawn i.i.d. from “nice” distributions, establishing how to reason about approximately optimal mechanisms via this duality framework. This bound implies, for any fixed number of goods, a tight bound of Θ(log(1/ε)) on the minimum deterministic communication complexity guaranteed to suffice for running some approximately revenue-maximizing mechanism, thereby completely resolving this problem. As a secondary result, we show that under standard economic assumptions on distributions, the above upper bound of Hart and Nisan [2013] can be strengthened to O(1/ε^{2}). 

Goyal, Rishab 
STOC '18: "Collusion Resistant Traitor ..."
Collusion Resistant Traitor Tracing from Learning with Errors
Rishab Goyal, Venkata Koppula, and Brent Waters (University of Texas at Austin, USA)
In this work we provide a traitor tracing construction with ciphertexts that grow polynomially in log(n), where n is the number of users, and prove it secure under the Learning with Errors (LWE) assumption. This is the first traitor tracing scheme with such parameters that is provably secure from a standard assumption. In addition to achieving new traitor tracing results, we believe our techniques push forward the broader area of computing on encrypted data under standard assumptions. Notably, traitor tracing is a substantially different problem from other cryptographic primitives that have seen recent progress in LWE solutions.
We achieve our results by first conceiving a novel approach to building traitor tracing that starts with a new form of Functional Encryption that we call Mixed FE. In a Mixed FE system the encryption algorithm is bimodal and works with either a public key or a master secret key. Ciphertexts encrypted using the public key can only encrypt one type of functionality. On the other hand, the secret-key encryption can be used to encode many different types of programs, but is only secure as long as the attacker sees a bounded number of such ciphertexts.
We first show how to combine Mixed FE with Attribute-Based Encryption to achieve traitor tracing. Second, we build Mixed FE systems for polynomial-sized branching programs (which correspond to the complexity class logspace) by relying on the polynomial hardness of the LWE assumption with super-polynomial modulus-to-noise ratio.
@InProceedings{STOC18p660,
author = {Rishab Goyal and Venkata Koppula and Brent Waters},
title = {Collusion Resistant Traitor Tracing from Learning with Errors},
booktitle = {Proc.\ STOC},
publisher = {ACM},
pages = {660--670},
doi = {10.1145/3188745.3188844},
year = {2018},
}


Goyal, Vipul 
STOC '18: "Non-malleable Secret Sharing ..."
Non-malleable Secret Sharing
Vipul Goyal and Ashutosh Kumar (Carnegie Mellon University, USA; University of California at Los Angeles, USA) A number of works have focused on the setting where an adversary tampers with the shares of a secret sharing scheme. This includes literature on verifiable secret sharing, algebraic manipulation detection (AMD) codes, and error-correcting or detecting codes in general. In this work, we initiate a systematic study of what we call non-malleable secret sharing. Very roughly, the guarantee we seek is the following: the adversary may potentially tamper with all of the shares, and still, either the reconstruction procedure outputs the original secret, or the original secret is “destroyed” and the reconstruction outputs a string which is completely “unrelated” to the original secret. Recent exciting work on non-malleable codes in the split-state model led to constructions which can be seen as 2-out-of-2 non-malleable secret sharing schemes. These constructions have already found a number of applications in cryptography. We investigate the natural question of constructing t-out-of-n non-malleable secret sharing schemes. Such a secret sharing scheme ensures that only a set consisting of t or more shares can reconstruct the secret and, additionally, guarantees non-malleability under an attack where potentially every share may be tampered with. Techniques used for obtaining split-state non-malleable codes (or 2-out-of-2 non-malleable secret sharing) are (in some form) based on two-source extractors and seem not to generalize to our setting. Our first result is the construction of a t-out-of-n non-malleable secret sharing scheme against an adversary who arbitrarily tampers with each of the shares independently. Our construction is unconditional and features statistical non-malleability. As our main technical result, we present a t-out-of-n non-malleable secret sharing scheme in a stronger adversarial model where an adversary may jointly tamper with multiple shares. 
Our construction is unconditional and the adversary is allowed to jointly tamper with subsets of up to (t−1) shares. We believe that the techniques introduced in our construction may be of independent interest. Inspired by the well-studied problem of perfectly secure message transmission introduced in the seminal work of Dolev et al. (J. of ACM ’93), we also initiate the study of non-malleable message transmission. Non-malleable message transmission can be seen as a natural generalization in which the goal is to ensure that the receiver either receives the original message, or the original message is essentially destroyed and the receiver receives an “unrelated” message, when the network is under the influence of an adversary who can byzantinely corrupt all the nodes in the network. As natural applications of our non-malleable secret sharing schemes, we propose constructions for non-malleable message transmission. 
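For context, the classical t-out-of-n scheme that such non-malleable constructions strengthen is Shamir secret sharing, which by itself offers no tamper resistance; a minimal sketch over a prime field (the prime and function names are illustrative choices, not from the paper):

```python
import random

# Shamir t-out-of-n secret sharing: the secret is the constant term of a
# random degree-(t-1) polynomial over GF(P); shares are evaluations.
# Any t shares reconstruct via Lagrange interpolation at x = 0.
P = 2**61 - 1  # a Mersenne prime, used as the field modulus

def share(secret, t, n):
    """Split `secret` into n shares such that any t of them reconstruct it."""
    coeffs = [secret % P] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation of the sharing polynomial at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P  # Fermat inverse
    return secret
```

Note the malleability: adding a constant to every share shifts the reconstructed secret by that constant, which is exactly the kind of "related tampering" the non-malleable schemes above rule out.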

Grandoni, Fabrizio 
STOC '18: "Improved Approximation for ..."
Improved Approximation for Tree Augmentation: Saving by Rewiring
Fabrizio Grandoni, Christos Kalaitzis, and Rico Zenklusen (University of Lugano, Switzerland; ETH Zurich, Switzerland) The Tree Augmentation Problem (TAP) is a fundamental network design problem in which we are given a tree and a set of additional edges, also called links. The task is to find a set of links, of minimum size, whose addition to the tree leads to a 2-edge-connected graph. A long line of results on TAP culminated in the previously best known approximation guarantee of 1.5, achieved by a combinatorial approach due to Kortsarz and Nutov [ACM Transactions on Algorithms 2016], and also by an SDP-based approach by Cheriyan and Gao [Algorithmica 2017]. Moreover, an elegant LP-based (1.5+є)-approximation has also been found very recently by Fiorini, Groß, Könemann, and Sanità [SODA 2018]. In this paper, we show that an approximation factor below 1.5 can be achieved, by presenting a 1.458-approximation that is based on several new techniques. By extending prior results of Adjiashvili [SODA 2017], we first present a blackbox reduction to a very structured type of instance, which played a crucial role in recent developments on the problem, and which we call k-wide. Our main contribution is a new approximation algorithm for O(1)-wide tree instances with approximation guarantee strictly below 1.458, based on one of their fundamental properties: wide trees naturally decompose into smaller subtrees with a constant number of leaves. Previous approaches in similar settings rounded each subtree independently and simply combined the obtained solutions. We show that additionally, when starting with a well-chosen LP, the combined solution can be improved through a new “rewiring” technique, showing that one can replace some pairs of used links by a single link. We can rephrase the rewiring problem as a stochastic version of a matching problem, which may be of independent interest. 
By showing that large matchings can be obtained in this problem, we obtain that a significant number of rewirings are possible, thus leading to an approximation factor below 1.5. 

STOC '18: "A (5/3 + є)-Approximation for ..."
A (5/3 + є)-Approximation for Unsplittable Flow on a Path: Placing Small Tasks into Boxes
Fabrizio Grandoni, Tobias Mömke, Andreas Wiese, and Hang Zhou (University of Lugano, Switzerland; University of Bremen, Germany; Saarland University, Germany; University of Chile, Chile; École Polytechnique, France) In the unsplittable flow on a path problem (UFP) we are given a path with edge capacities and a collection of tasks. Each task is characterized by a subpath, a profit, and a demand. Our goal is to compute a maximum-profit subset of tasks such that, for each edge e, the total demand of selected tasks that use e does not exceed the capacity of e. The current best polynomial-time approximation factor for this problem is 2+є for any constant є>0 [Anagnostopoulos et al., SODA 2014]. This is the best known factor even in the case of uniform edge capacities [Călinescu et al., IPCO 2002, TALG 2011]. These results, like most prior work, are based on a partition of tasks into large and small depending on their ratio of demand to capacity over their respective edges: these algorithms invoke (1+є)-approximations for large and small tasks separately. The known techniques do not seem to be able to combine a big fraction of large and small tasks together (apart from some special cases and quasi-polynomial-time algorithms). The main contribution of this paper is to overcome this critical barrier. Namely, we present a polynomial-time algorithm that obtains roughly all profit from the optimal large tasks plus one third of the profit from the optimal small tasks. In combination with known results, this implies a polynomial-time (5/3+є)-approximation algorithm for UFP. Our algorithm is based on two main ingredients. First, we prove that there exist certain suboptimal solutions where, roughly speaking, small tasks are packed into boxes. 
To prove that such solutions can yield high profit we introduce a horizontal slicing lemma which yields a novel geometric interpretation of certain solutions. The resulting boxed structure has polynomial complexity, hence cannot be guessed directly. Therefore, our second contribution is a dynamic program that guesses this structure (plus a packing of large and small tasks) on the fly, while losing at most one third of the profit of the remaining small tasks. 

Grospellier, Antoine 
STOC '18: "Efficient Decoding of Random ..."
Efficient Decoding of Random Errors for Quantum Expander Codes
Omar Fawzi, Antoine Grospellier, and Anthony Leverrier (ENS Lyon, France; Inria, France) We show that quantum expander codes, a constant-rate family of quantum low-density parity-check (LDPC) codes, with the quasi-linear time decoding algorithm of Leverrier, Tillich and Zémor, can correct a constant fraction of random errors with very high probability. This is the first construction of a constant-rate quantum LDPC code with an efficient decoding algorithm that can correct a linear number of random errors with a negligible failure probability. Finding codes with these properties is also motivated by Gottesman’s construction of fault-tolerant schemes with constant space overhead. In order to obtain this result, we study a notion of α-percolation: for a random subset E of vertices of a given graph, we consider the size of the largest connected α-subset of E, where X is an α-subset of E if |X ∩ E| ≥ α|X|. 

Guo, Heng 
STOC '18: "Counting Hypergraph Colourings ..."
Counting Hypergraph Colourings in the Local Lemma Regime
Heng Guo, Chao Liao, Pinyan Lu, and Chihao Zhang (University of Edinburgh, UK; Shanghai Jiao Tong University, China; Shanghai University of Finance and Economics, China; Chinese University of Hong Kong, China) We give a fully polynomial-time approximation scheme (FPTAS) to count the number of q-colorings for k-uniform hypergraphs with maximum degree Δ if k ≥ 28 and q > 315Δ^{14/(k−14)}. We also obtain a polynomial-time almost uniform sampler if q > 798Δ^{16/(k−16/3)}. These are the first approximate counting and sampling algorithms in the regime q ≪ Δ (for large Δ and k) without any additional assumptions. Our method is based on the recent work of Moitra (STOC, 2017). One important contribution of ours is to remove the dependency of k and Δ in Moitra’s approach. 

Gupta, Anupam 
STOC '18: "Metric Embedding via Shortest ..."
Metric Embedding via Shortest Path Decompositions
Ittai Abraham, Arnold Filtser, Anupam Gupta, and Ofer Neiman (VMware, USA; Ben-Gurion University of the Negev, Israel; Carnegie Mellon University, USA) We study the problem of embedding weighted graphs of pathwidth k into ℓ_{p} spaces. Our main result is an O(k^{min{1/p, 1/2}})-distortion embedding. For p=1, this is a superexponential improvement over the best previous bound of Lee and Sidiropoulos. Our distortion bound is asymptotically tight for any fixed p > 1. Our result is obtained via a novel embedding technique that is based on low-depth decompositions of a graph via shortest paths. The core new idea is that given a geodesic shortest path P, we can probabilistically embed all points into 2 dimensions with respect to P. For p>2 our embedding also implies improved distortion on bounded treewidth graphs (O((k logn)^{1/p})). For asymptotically large p, our results also imply improved distortion on graphs excluding a minor. 

Guruswami, Venkatesan 
STOC '18: "General Strong Polarization ..."
General Strong Polarization
Jarosław Błasiok, Venkatesan Guruswami, Preetum Nakkiran, Atri Rudra, and Madhu Sudan (Harvard University, USA; Carnegie Mellon University, USA; SUNY Buffalo, USA) Arikan’s exciting discovery of polar codes has provided an altogether new way to efficiently achieve Shannon capacity. Given a (constant-sized) invertible matrix M, a family of polar codes can be associated with this matrix, and its ability to approach capacity follows from the polarization of an associated [0,1]-bounded martingale, namely its convergence in the limit to either 0 or 1 with probability 1. Arikan showed appropriate polarization of the martingale associated with the matrix G_{2} = ( 1 0 ; 1 1 ) to get capacity-achieving codes. His analysis was later extended to all matrices M which satisfy an obvious necessary condition for polarization. While Arikan’s theorem does not guarantee that the codes achieve capacity at small block lengths (specifically in length which is a polynomial in 1/є where є is the difference between the capacity of a channel and the rate of the code), it turns out that a “strong” analysis of the polarization of the underlying martingale would lead to such constructions. Indeed, for the martingale associated with G_{2} such a strong polarization was shown in two independent works ([Guruswami and Xia, IEEE IT ’15] and [Hassani et al., IEEE IT ’14]), thereby resolving a major theoretical challenge associated with the efficient attainment of Shannon capacity. In this work we extend the result above to cover martingales associated with all matrices that satisfy the necessary condition for (weak) polarization. In addition to being vastly more general, our proofs of strong polarization are (in our view) also much simpler and modular. Key to our proof is a notion of local polarization that only depends on the evolution of the martingale in a single time step. We show that local polarization always implies strong polarization. We then apply relatively simple reasoning about conditional entropies to prove local polarization in very general settings. Specifically, our result shows strong polarization over all prime fields and leads to efficient capacity-achieving source codes for compressing arbitrary i.i.d. sources, and capacity-achieving channel codes for arbitrary symmetric memoryless channels. 
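The polarization martingale has a particularly concrete form for the binary erasure channel: the G_2 transform maps an erasure probability z to the pair (2z − z², z²), and following a uniformly random branch gives a [0,1]-bounded martingale. A small simulation (our own illustration, not code from the paper) exhibits the empirical polarization:

```python
import random

# Channel polarization for the binary erasure channel (BEC): one G_2 step
# splits a channel with erasure probability z into a worse channel
# (2z - z^2) and a better one (z^2). Averaging the two gives back z,
# so a random branch is a martingale; it converges to 0 or 1.
def polarize(z0, steps, rng):
    z = z0
    for _ in range(steps):
        z = 2 * z - z * z if rng.random() < 0.5 else z * z
    return z

rng = random.Random(0)
samples = [polarize(0.5, 40, rng) for _ in range(2000)]
# After 40 steps, almost every branch is pinned near 0 or near 1.
extreme = sum(1 for z in samples if z < 1e-6 or z > 1 - 1e-6)
```

"Strong" polarization, the subject of the paper, quantifies how fast the mass escapes the middle of [0,1] as the number of steps grows.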

Haddock, Jamie 
STOC '18: "The Minimum Euclidean-Norm ..."
The Minimum Euclidean-Norm Point in a Convex Polytope: Wolfe's Combinatorial Algorithm Is Exponential
Jesús A. De Loera, Jamie Haddock, and Luis Rademacher (University of California at Davis, USA)
The complexity of Philip Wolfe’s method for the minimum Euclidean-norm point problem over a convex polytope has remained unknown since he proposed the method in 1974. We present the first example for which Wolfe’s method takes exponential time. Additionally, we improve previous results to show that linear programming reduces in strongly polynomial time to the minimum norm point problem over a simplex.
@InProceedings{STOC18p545,
author = {Jesús A. De Loera and Jamie Haddock and Luis Rademacher},
title = {The Minimum Euclidean-Norm Point in a Convex Polytope: Wolfe's Combinatorial Algorithm Is Exponential},
booktitle = {Proc.\ STOC},
publisher = {ACM},
pages = {545--553},
doi = {10.1145/3188745.3188820},
year = {2018},
}


Haeupler, Bernhard 
STOC '18: "Synchronization Strings: Explicit ..."
Synchronization Strings: Explicit Constructions, Local Decoding, and Applications
Bernhard Haeupler and Amirbehshad Shahrasbi (Carnegie Mellon University, USA) This paper gives new results for synchronization strings, a powerful combinatorial object introduced by [Haeupler, Shahrasbi; STOC’17] that allows one to efficiently deal with insertions and deletions in various communication problems: We give a deterministic, linear-time synchronization string construction, improving over an O(n^{5})-time randomized construction. Independently of this work, a deterministic O(n log^{2} logn)-time construction was proposed by Cheng, Li, and Wu. We give a deterministic construction of an infinite synchronization string which outputs the first n symbols in O(n) time. Previously it was not known whether such a string was computable. Both synchronization string constructions are highly explicit, i.e., the i-th symbol can be deterministically computed in O(log i) time. This paper also introduces a generalized notion we call long-distance synchronization strings. Such strings allow for local and very fast decoding. In particular, only O(log^{3} n) time and access to logarithmically many symbols is required to decode any index. The paper also provides several applications for these improved synchronization strings: For any δ < 1 and є > 0 we provide an insdel error-correcting block code with rate 1 − δ − є which can correct any δ/3 fraction of insertion and deletion errors in O(n log^{3} n) time. This near-linear computational efficiency is surprising given that we do not even know how to compute the (edit) distance between the decoding input and output in subquadratic time. We show that local decodability implies that error-correcting codes constructed with long-distance synchronization strings can not only efficiently recover from a δ fraction of insdel errors but, similar to [Schulman, Zuckerman; Trans. Inf. Theory ’99], also from any O(δ/logn) fraction of block transpositions and block replications. 
These block corruptions allow arbitrarily long substrings to be swapped or replicated anywhere. We show that high explicitness and local decoding allow for infinite channel simulations with exponentially smaller memory and decoding time requirements. These simulations can then be used to give the first near-linear time interactive coding scheme for insdel errors, similar to the result of [Brakerski, Naor; SODA’13] for Hamming errors. 

STOC '18: "Explicit Binary Tree Codes with ..."
Explicit Binary Tree Codes with Polylogarithmic Size Alphabet
Gil Cohen, Bernhard Haeupler, and Leonard J. Schulman (Princeton University, USA; Carnegie Mellon University, USA; California Institute of Technology, USA) This paper makes progress on the problem of explicitly constructing a binary tree code with constant distance and constant alphabet size. We give an explicit binary tree code with constant distance and alphabet size poly(logn), where n is the depth of the tree. This is the first improvement over a two-decade-old construction that has an exponentially larger alphabet of size poly(n). At the core of our construction is the first explicit tree code with constant rate and constant distance, though with non-constant arity — a result of independent interest. This construction adapts the polynomial interpolation framework to the online setting. 
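The edit distance invoked in the synchronization-strings abstract above (whose subquadratic computation remains open) has a textbook quadratic dynamic program; a minimal sketch for reference:

```python
# Quadratic-time edit (Levenshtein) distance: dp[i][j] = cost of turning
# a[:i] into b[:j], kept as two rolling rows for O(n) space.
def edit_distance(a, b):
    m, n = len(a), len(b)
    prev = list(range(n + 1))  # distances from the empty prefix of a
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cur[j] = min(prev[j] + 1,                            # delete a[i-1]
                         cur[j - 1] + 1,                         # insert b[j-1]
                         prev[j - 1] + (a[i - 1] != b[j - 1]))   # substitute
        prev = cur
    return prev[n]
```

This O(mn) bound is exactly the barrier the abstract refers to: even verifying a decoder's output distance this way costs quadratic time.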

Hajiaghayi, MohammadTaghi 
STOC '18: "Fast Algorithms for Knapsack ..."
Fast Algorithms for Knapsack via Convolution and Prediction
MohammadHossein Bateni, MohammadTaghi Hajiaghayi, Saeed Seddighin, and Cliff Stein (Google, USA; University of Maryland, USA; Columbia University, USA) The knapsack problem is a fundamental problem in combinatorial optimization. It has been studied extensively from theoretical as well as practical perspectives as it is one of the most well-known NP-hard problems. The goal is to pack a knapsack of size t with the maximum value from a collection of n items with given sizes and values. Recent evidence suggests that a classic O(nt) dynamic-programming solution for the knapsack problem might be the fastest in the worst case. In fact, solving the knapsack problem was shown to be computationally equivalent to the (min, +) convolution problem, which is thought to be facing a quadratic-time barrier. This hardness is in contrast to the more famous (+, ·) convolution (generally known as polynomial multiplication), which has an O(n logn)-time solution via the Fast Fourier Transform. Our main results are algorithms with near-linear running times (in terms of the size of the knapsack and the number of items) for the knapsack problem, if either the values or sizes of items are small integers. More specifically, if item sizes are integers bounded by s_{max}, the running time of our algorithm is Õ((n+t) s_{max}). If the item values are integers bounded by v_{max}, our algorithm runs in time Õ(n + t v_{max}). Best previously known running times were O(nt), O(n^{2}), and O(n s_{max} v_{max}) (Pisinger, J. of Alg., 1999). At the core of our algorithms lies the prediction technique: roughly speaking, this new technique enables us to compute the convolution of two vectors in time Õ(n v_{max}) when an approximation of the solution within an additive error of v_{max} is available. Our results also improve the best known strongly polynomial time solutions for knapsack. 
In the limited-size setting, when the items have multiplicities, the fastest strongly polynomial time algorithms for knapsack run in time O(n^{2} s_{max}^{2}) and O(n^{3} s_{max}^{2}) for the cases of infinite and given multiplicities, respectively. Our results improve both running times by a factor of Õ(n · max{1, n/s_{max}}). 
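The classic O(nt) dynamic program that this abstract takes as its baseline can be sketched as follows (a minimal 0/1 version for illustration; the function name is our own):

```python
# Classic O(n*t) dynamic program for 0/1 knapsack: best[c] is the maximum
# value achievable with capacity exactly at most c using the items seen so far.
def knapsack(sizes, values, t):
    """Maximum total value packable into a knapsack of size t."""
    best = [0] * (t + 1)
    for s, v in zip(sizes, values):
        # iterate capacities downward so each item is used at most once
        for c in range(t, s - 1, -1):
            best[c] = max(best[c], best[c - s] + v)
    return best[t]
```

The paper's improvements replace the inner (min, +)-convolution-like update with faster convolutions when sizes or values are bounded small integers.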

Hatami, Pooya 
STOC '18: "Improved Pseudorandomness ..."
Improved Pseudorandomness for Unordered Branching Programs through Local Monotonicity
Eshan Chattopadhyay, Pooya Hatami, Omer Reingold, and Avishay Tal (Cornell University, USA; Institute for Advanced Study at Princeton, USA; University of Texas at Austin, USA; Stanford University, USA) We present an explicit pseudorandom generator with seed length Õ((logn)^{w+1}) for read-once, oblivious, width-w branching programs that can read their input bits in any order. This improves upon the work of Impagliazzo, Meka and Zuckerman (FOCS’12), who required seed length n^{1/2+o(1)}. A central ingredient in our work is the following bound that we prove on the Fourier spectrum of branching programs: for any width-w read-once, oblivious branching program B:{0,1}^{n}→ {0,1} and any k ∈ {1,…,n}, the sum of the absolute values of the level-k Fourier coefficients of B is at most O(logn)^{wk}.
This settles a conjecture posed by Reingold, Steinke and Vadhan (RANDOM’13). Our analysis crucially uses a notion of local monotonicity on the edge labeling of the branching program. We carry critical parts of our proof under the assumption of local monotonicity and show how to deduce our results for unrestricted branching programs. 

Hirvonen, Juho 
STOC '18: "New Classes of Distributed ..."
New Classes of Distributed Time Complexity
Alkida Balliu, Juho Hirvonen, Janne H. Korhonen, Tuomo Lempiäinen, Dennis Olivetti, and Jukka Suomela (Aalto University, Finland; University of Freiburg, Germany) A number of recent papers – e.g. Brandt et al. (STOC 2016), Chang et al. (FOCS 2016), Ghaffari & Su (SODA 2017), Brandt et al. (PODC 2017), and Chang & Pettie (FOCS 2017) – have advanced our understanding of one of the most fundamental questions in the theory of distributed computing: what are the possible time complexity classes of LCL problems in the LOCAL model? In essence, we have a graph problem Π in which a solution can be verified by checking all radius-O(1) neighbourhoods, and the question is what is the smallest T such that a solution can be computed so that each node chooses its own output based on its radius-T neighbourhood. Here T is the distributed time complexity of Π. The time complexity classes for deterministic algorithms in bounded-degree graphs that are known to exist by prior work are Θ(1), Θ(log^{*} n), Θ(logn), Θ(n^{1/k}), and Θ(n). It is also known that there are two gaps: one between ω(1) and o(loglog^{*} n), and another between ω(log^{*} n) and o(logn). It has been conjectured that many more gaps exist, and that the overall time hierarchy is relatively simple – indeed, this is known to be the case in restricted graph families such as cycles and grids. We show that the picture is much more diverse than previously expected. We present a general technique for engineering LCL problems with numerous different deterministic time complexities, including Θ(log^{α} n) for any α ≥ 1, 2^{Θ(log^{α} n)} for any α ≤ 1, and Θ(n^{α}) for any α < 1/2 in the high end of the complexity spectrum, and Θ(log^{α} log^{*} n) for any α ≥ 1, 2^{Θ(log^{α} log^{*} n)} for any α ≤ 1, and Θ((log^{*} n)^{α}) for any α ≤ 1 in the low end of the complexity spectrum; here α is a positive rational number. 

Hopkins, Samuel B. 
STOC '18: "Mixture Models, Robustness, ..."
Mixture Models, Robustness, and Sum of Squares Proofs
Samuel B. Hopkins and Jerry Li (Cornell University, USA; Massachusetts Institute of Technology, USA) We use the Sum of Squares method to develop new efficient algorithms for learning well-separated mixtures of Gaussians and robust mean estimation, both in high dimensions, that substantially improve upon the statistical guarantees achieved by previous efficient algorithms. Our contributions are: Mixture models with separated means: We study mixtures of poly(k) many k-dimensional distributions where the means of every pair of distributions are separated by at least k^{ε}. In the special case of spherical Gaussian mixtures, we give a k^{O(1/ε)}-time algorithm that learns the means assuming separation at least k^{ε}, for any ε > 0. This is the first algorithm to improve on greedy (“single-linkage”) and spectral clustering, breaking a long-standing barrier for efficient algorithms at separation k^{1/4}. Robust estimation: When an unknown (1−ε)-fraction of X_{1},…,X_{n} are chosen from a sub-Gaussian distribution with mean µ but the remaining points are chosen adversarially, we give an algorithm recovering µ to error ε^{1−1/t} in time k^{O(t)}, so long as sub-Gaussianity up to O(t) moments can be certified by a Sum of Squares proof. This is the first polynomial-time algorithm with guarantees approaching the information-theoretic limit for non-Gaussian distributions. Previous algorithms could not achieve error better than ε^{1/2}. As a corollary, we achieve similar results for robust covariance estimation. Both of these results are based on a unified technique. Inspired by recent algorithms of Diakonikolas et al. in robust statistics, we devise an SDP based on the Sum of Squares method for the following setting: given X_{1},…,X_{n} ∈ ℝ^{k} for large k and n = poly(k) with the promise that a subset of X_{1},…,X_{n} were sampled from a probability distribution with bounded moments, recover some information about that distribution. 
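As a toy illustration of the robust mean estimation setting (not the Sum of Squares algorithm of the paper), a one-dimensional trimmed mean already shows how discarding extreme points defeats adversarial outliers that ruin the empirical mean; the data and parameters below are made up for the demo:

```python
import random

def trimmed_mean(xs, trim_frac=0.1):
    """Drop the smallest and largest trim_frac fraction of points, average the rest."""
    xs = sorted(xs)
    k = int(len(xs) * trim_frac)
    core = xs[k:len(xs) - k]
    return sum(core) / len(core)

random.seed(0)
# 200 inliers from N(0, 1); 10 adversarial outliers placed far away.
data = [random.gauss(0.0, 1.0) for _ in range(200)] + [100.0] * 10

plain = sum(data) / len(data)   # pulled far from the true mean 0
robust = trimmed_mean(data)     # stays close to 0
```

In one dimension trimming suffices; the point of the paper is that matching guarantees in high dimensions require much heavier machinery.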

Hoy, Darrell 
STOC '18: "A Tighter Welfare Guarantee ..."
A Tighter Welfare Guarantee for First-Price Auctions
Darrell Hoy, Samuel Taggart, and Zihe Wang (Tremor Technologies, USA; Oberlin College, USA; Shanghai University of Finance and Economics, China) This paper proves that the welfare of the first-price auction in Bayes-Nash equilibrium is at least a 0.743-fraction of the welfare of the optimal mechanism, assuming agents’ values are independently distributed. The previous best bound was 1−1/e ≈ 0.63, derived using smoothness, the standard technique for reasoning about welfare of games in equilibrium. In the worst known example, the first-price auction achieves a ≈ 0.869-fraction of the optimal welfare, far better than the theoretical guarantee. Despite this large gap, it was unclear whether the 1−1/e bound was tight. We prove that it is not. Our analysis eschews smoothness, and instead uses the independence assumption on agents’ value distributions to give a more careful accounting of the welfare contribution of agents who win despite not having the highest value. 

Huang, Zhiyi 
STOC '18: "How to Match When All Vertices ..."
How to Match When All Vertices Arrive Online
Zhiyi Huang, Ning Kang, Zhihao Gavin Tang, Xiaowei Wu, Yuhao Zhang, and Xue Zhu (University of Hong Kong, China) We introduce a fully online model of maximum cardinality matching in which all vertices arrive online. On the arrival of a vertex, its incident edges to previously arrived vertices are revealed. Each vertex has a deadline that is after all its neighbors’ arrivals. If a vertex remains unmatched until its deadline, the algorithm must then irrevocably either match it to an unmatched neighbor, or leave it unmatched. The model generalizes the existing one-sided online model and is motivated by applications including ride-sharing platforms, real-estate agencies, etc. We show that the Ranking algorithm by Karp et al. (STOC 1990) is 0.5211-competitive in our fully online model for general graphs. Our analysis brings a novel charging mechanism into the randomized primal-dual technique by Devanur et al. (SODA 2013), allowing a vertex other than the two endpoints of a matched edge to share the gain. To our knowledge, this is the first analysis of Ranking that beats 0.5 on general graphs in an online matching problem, a first step towards solving the open problem by Karp et al. (STOC 1990) about the optimality of Ranking on general graphs. If the graph is bipartite, we show that the competitive ratio of Ranking is between 0.5541 and 0.5671. Finally, we prove that the fully online model is strictly harder than the previous model: no online algorithm can be better than 0.6317-competitive (note that 0.6317 < 1−1/e) in our model, even for bipartite graphs. 
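The classic one-sided Ranking algorithm of Karp et al. that the abstract builds on can be sketched in a few lines. This is the original bipartite version, not the fully online variant of the paper, and the graph below is a made-up example:

```python
import random

def ranking(offline, arrivals, rng):
    """Classic one-sided Ranking (Karp-Vazirani-Vazirani): fix a random
    permutation (rank) of the offline vertices once; each arriving online
    vertex is matched to its lowest-ranked free neighbor."""
    rank = {v: i for i, v in enumerate(rng.sample(offline, len(offline)))}
    matched = {}  # offline vertex -> online vertex
    for u, nbrs in arrivals:
        free = [v for v in nbrs if v not in matched]
        if free:
            matched[min(free, key=rank.__getitem__)] = u
    return matched

# Complete bipartite K_{3,3}: Ranking always finds a perfect matching here.
offline = ['a', 'b', 'c']
arrivals = [(1, ['a', 'b', 'c']), (2, ['a', 'b', 'c']), (3, ['a', 'b', 'c'])]
matched = ranking(offline, arrivals, random.Random(7))
```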

Huiberts, Sophie 
STOC '18: "A Friendly Smoothed Analysis ..."
A Friendly Smoothed Analysis of the Simplex Method
Daniel Dadush and Sophie Huiberts (CWI, Netherlands) Explaining the excellent practical performance of the simplex method for linear programming has been a major topic of research for over 50 years. One of the most successful frameworks for understanding the simplex method was given by Spielman and Teng (JACM ’04), who developed the notion of smoothed analysis. Starting from an arbitrary linear program with d variables and n constraints, Spielman and Teng analyzed the expected runtime over random perturbations of the LP (smoothed LP), where variance σ² Gaussian noise is added to the LP data. In particular, they gave a two-stage shadow vertex simplex algorithm which uses an expected O(n^{86} d^{55} σ^{−30}) number of simplex pivots to solve the smoothed LP. Their analysis and runtime were substantially improved by Deshpande and Spielman (FOCS ’05) and later Vershynin (SICOMP ’09). The fastest current algorithm, due to Vershynin, solves the smoothed LP using an expected O(d^{3} log^{3} n σ^{−4} + d^{9} log^{7} n) number of pivots, improving the dependence on n from polynomial to logarithmic. While the original proof of Spielman and Teng has now been substantially simplified, the resulting analyses are still quite long and complex, and the parameter dependencies are far from optimal. In this work, we make substantial progress on this front, providing an improved and simpler analysis of shadow simplex methods, where our main algorithm requires an expected O(d^{2} √log n σ^{−2} + d^{5} log^{3/2} n) number of simplex pivots. We obtain our results via an improved shadow bound, key to earlier analyses as well, combined with algorithmic techniques of Borgwardt (ZOR ’82) and Vershynin. As an added bonus, our analysis is completely modular, allowing us to obtain non-trivial bounds for perturbations beyond Gaussians, such as Laplace perturbations. 

Husfeldt, Thore 
STOC '18: "Extensor-Coding ..."
Extensor-Coding
Cornelius Brand, Holger Dell, and Thore Husfeldt (Saarland University, Germany; M2CI, Germany; Lund University, Sweden; IT University of Copenhagen, Denmark) We devise an algorithm that approximately computes the number of paths of length k in a given directed graph with n vertices up to a multiplicative error of 1 ± ε. Our algorithm runs in time ε^{−2} 4^{k}(n+m) poly(k). The algorithm is based on associating with each vertex an element in the exterior (or, Grassmann) algebra, called an extensor, and then performing computations in this algebra. This connection to exterior algebra generalizes a number of previous approaches for the longest path problem and is of independent conceptual interest. Using this approach, we also obtain a deterministic 2^{k}·poly(n) time algorithm to find a k-path in a given directed graph that is promised to have few of them. Our results and techniques generalize to the subgraph isomorphism problem when the subgraphs we are looking for have bounded pathwidth. Finally, we also obtain a randomized algorithm to detect k-multilinear terms in a multivariate polynomial given as a general algebraic circuit. To the best of our knowledge, this was previously only known for algebraic circuits not involving negative constants. 
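For intuition, the count that extensor coding approximates can be computed exactly by exhaustive search on tiny graphs. The sketch below counts simple directed paths on exactly k vertices (one common convention; the paper counts paths by edge length) and is exponential-time by design, which is exactly what the ε^{−2} 4^{k}(n+m) poly(k) algorithm avoids:

```python
def count_k_paths(adj, k):
    """Count simple directed paths on exactly k distinct vertices by
    exhaustive DFS -- the brute-force baseline, feasible only for tiny graphs."""
    def extend(path, seen):
        if len(path) == k:
            return 1
        return sum(extend(path + [w], seen | {w})
                   for w in adj[path[-1]] if w not in seen)
    return sum(extend([v], {v}) for v in adj)

# Complete digraph on 3 vertices: every ordering of distinct vertices is a path.
K3 = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
```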

Ikenmeyer, Christian 
STOC '18: "Generalized Matrix Completion ..."
Generalized Matrix Completion and Algebraic Natural Proofs
Markus Bläser, Christian Ikenmeyer, Gorav Jindal, and Vladimir Lysikov (Saarland University, Germany; Max Planck Institute for Informatics, Germany) Algebraic natural proofs were recently introduced by Forbes, Shpilka and Volk (Proc. of the 49th Annual ACM SIGACT Symposium on Theory of Computing (STOC), pages 653–664, 2017) and independently by Grochow, Kumar, Saks and Saraf (CoRR, abs/1701.01717, 2017) as an attempt to transfer Razborov and Rudich’s famous barrier result (J. Comput. Syst. Sci., 55(1): 24–35, 1997) for Boolean circuit complexity to algebraic complexity theory. Razborov and Rudich’s barrier result relies on a widely believed assumption, namely, the existence of pseudorandom generators. Unfortunately, there is no known analogous theory of pseudorandomness in the algebraic setting. Therefore, Forbes et al. use a concept called succinct hitting sets instead. This assumption is related to polynomial identity testing, but it is currently not clear how plausible this assumption is. Forbes et al. are only able to construct succinct hitting sets against rather weak models of arithmetic circuits. Generalized matrix completion is the following problem: Given a matrix with affine linear forms as entries, find an assignment to the variables in the linear forms such that the rank of the resulting matrix is minimal. We call this rank the completion rank. Computing the completion rank is an NP-hard problem. As our first main result, we prove that it is also NP-hard to determine whether a given matrix can be approximated by matrices of completion rank ≤ b. The minimum quantity b for which this is possible is called border completion rank (similar to the border rank of tensors). Naturally, algebraic natural proofs can only prove lower bounds for such border complexity measures. Furthermore, these border complexity measures play an important role in the geometric complexity theory program. 
Using our hardness result above, we can prove the following barrier: We construct a small family of matrices with affine linear forms as entries and a bound b, such that at least one of these matrices does not have an algebraic natural proof of polynomial size against all matrices of border completion rank b, unless coNP ⊆ ∃ BPP. This is an algebraic barrier result that is based on a well-established and widely believed conjecture. The complexity class ∃ BPP is known to be a subset of the complexity class MA, which is more well known in the literature. Thus ∃ BPP can be replaced by MA in the statements of all our results. With similar techniques, we can also prove that tensor rank is hard to approximate. Furthermore, we prove a similar result for the variety of matrices with permanent zero. There are no algebraic polynomial-size natural proofs for the variety of matrices with permanent zero, unless P^{#P} ⊆ ∃ BPP. On the other hand, we are able to prove that the geometric complexity theory approach initiated by Mulmuley and Sohoni (SIAM J. Comput. 31(2): 496–526, 2001) yields proofs of polynomial size for this variety, therefore overcoming the natural proofs barrier in this case.
STOC '18: "On the Complexity of Hazard-Free ..."
On the Complexity of Hazard-Free Circuits
Christian Ikenmeyer, Balagopal Komarath, Christoph Lenzen, Vladimir Lysikov, Andrey Mokhov, and Karteek Sreenivasaiah (Max Planck Institute for Informatics, Germany; Saarland University, Germany; M2CI, Germany; Newcastle University, UK; IIT Hyderabad, India)
The problem of constructing hazard-free Boolean circuits dates back to the 1940s and is an important problem in circuit design.
Our main lower-bound result unconditionally shows the existence of
functions whose circuit complexity is polynomially bounded while every
hazard-free implementation is provably of exponential size.
Previous lower bounds on the hazard-free complexity were only valid for depth-2 circuits.
The same proof method yields that every subcubic implementation of Boolean matrix multiplication must have hazards.
These results follow from a crucial structural insight:
Hazard-free complexity is a natural generalization of monotone complexity to all (not necessarily monotone) Boolean functions.
Thus, we can apply known monotone complexity lower bounds to find lower bounds on the hazard-free complexity.
We also lift these methods from the monotone setting to prove exponential hazard-free complexity lower bounds for non-monotone functions.
As our main upper-bound result we show how to efficiently convert a Boolean circuit into a bounded-bit hazard-free circuit with only a polynomially large blow-up in the number of gates.
Previously, the best known method yielded exponentially large circuits in the worst case, so our algorithm gives an exponential improvement.
As a side result we establish the NP-completeness of several hazard detection problems.
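The classic multiplexer example illustrates what a hazard is: simulate gates in Kleene's three-valued logic, where 'X' marks an input mid-transition. The minimal implementation can output 'X' even though the Boolean function is constant on that transition, while a redundant consensus gate removes the hazard. This is the standard textbook construction, not code from the paper:

```python
# Kleene three-valued logic: 'X' models a wire mid-transition.
def t_not(a):    return {0: 1, 1: 0, 'X': 'X'}[a]
def t_and(a, b): return 0 if 0 in (a, b) else (1 if (a, b) == (1, 1) else 'X')
def t_or(a, b):  return 1 if 1 in (a, b) else (0 if (a, b) == (0, 0) else 'X')

def mux_naive(s, a, b):
    # out = (s AND a) OR (NOT s AND b): minimal, but has a static hazard.
    return t_or(t_and(s, a), t_and(t_not(s), b))

def mux_hazard_free(s, a, b):
    # Adding the redundant consensus term (a AND b) removes the hazard:
    # with a = b = 1 the output stays 1 even while s is unstable.
    return t_or(mux_naive(s, a, b), t_and(a, b))
```

With a = b = 1 and the selector s flickering ('X'), `mux_naive` may glitch while `mux_hazard_free` holds a stable 1.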
@InProceedings{STOC18p878,
author = {Christian Ikenmeyer and Balagopal Komarath and Christoph Lenzen and Vladimir Lysikov and Andrey Mokhov and Karteek Sreenivasaiah},
title = {On the Complexity of Hazard-Free Circuits},
booktitle = {Proc.\ STOC},
publisher = {ACM},
pages = {878--889},
doi = {10.1145/3188745.3188912},
year = {2018},
}


Im, Sungjin 
STOC '18: "Online Load Balancing on Related ..."
Online Load Balancing on Related Machines
Sungjin Im, Nathaniel Kell, Debmalya Panigrahi, and Maryam Shadloo (University of California at Merced, USA; Duke University, USA) In this paper, we consider the problem of assigning jobs online to machines with non-uniform speeds (also called related machines) so as to optimize a given norm of the machine loads. A long line of work, starting with the seminal work of Graham in the 1960s, has led to tight competitive ratios for all ℓ_{q} norms for two scenarios: the special case of identical machines (uniform machine speeds) and the more general setting of unrelated machines (jobs have arbitrary processing times on machines). For non-uniform machine speeds, however, the only known result was a constant competitive ratio for the makespan (ℓ_{∞}) norm, via the so-called slowest-fit algorithm (Aspnes, Azar, Fiat, Plotkin, and Waarts, JACM ’97). Our first result in this paper is to obtain the first constant-competitive algorithm for scheduling on related machines for any arbitrary ℓ_{q} norm. Recent literature has further expanded the scope of this problem to vector scheduling, to capture multidimensional resource requirements in applications such as data centers. As in the scalar case, tight bounds are known for vector scheduling on identical and unrelated machines. Our second set of results is to give tight competitive ratios for vector scheduling on related machines for the makespan and all ℓ_{q} norms. No previous bounds were known, even for the makespan norm, for related machines. We employ a convex relaxation of the ℓ_{q}-norm objective and use a continuous greedy algorithm to solve this convex program online. To round the fractional solution, we then use a novel restructuring of the instance that we call machine smoothing. This is a generic tool that reduces a problem on related machines to a set of problem instances on identical machines, and we hope it will be useful in other settings with non-uniform machine speeds as well. 
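As a baseline for the related-machines setting, a minimal online greedy that assigns each arriving job to the machine minimizing its resulting load (load = assigned size divided by speed) can be sketched as follows; this only illustrates the model, not the paper's algorithm:

```python
def greedy_min_completion(speeds, jobs):
    """Online greedy for makespan on related machines: place each job on the
    machine whose load after the assignment is smallest. A simple baseline,
    not the slowest-fit algorithm or the algorithm of the paper."""
    loads = [0.0] * len(speeds)
    for p in jobs:
        i = min(range(len(speeds)), key=lambda i: loads[i] + p / speeds[i])
        loads[i] += p / speeds[i]
    return max(loads)

# Two machines of speeds 1 and 2, three unit-ish jobs of size 2 each.
makespan = greedy_min_completion([1.0, 2.0], [2.0, 2.0, 2.0])
```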

Jindal, Gorav 
STOC '18: "Generalized Matrix Completion ..."
Generalized Matrix Completion and Algebraic Natural Proofs
Markus Bläser, Christian Ikenmeyer, Gorav Jindal, and Vladimir Lysikov (Saarland University, Germany; Max Planck Institute for Informatics, Germany) Algebraic natural proofs were recently introduced by Forbes, Shpilka and Volk (Proc. of the 49th Annual ACM SIGACT Symposium on Theory of Computing (STOC), pages 653–664, 2017) and independently by Grochow, Kumar, Saks and Saraf (CoRR, abs/1701.01717, 2017) as an attempt to transfer Razborov and Rudich’s famous barrier result (J. Comput. Syst. Sci., 55(1): 24–35, 1997) for Boolean circuit complexity to algebraic complexity theory. Razborov and Rudich’s barrier result relies on a widely believed assumption, namely, the existence of pseudorandom generators. Unfortunately, there is no known analogous theory of pseudorandomness in the algebraic setting. Therefore, Forbes et al. use a concept called succinct hitting sets instead. This assumption is related to polynomial identity testing, but it is currently not clear how plausible this assumption is. Forbes et al. are only able to construct succinct hitting sets against rather weak models of arithmetic circuits. Generalized matrix completion is the following problem: Given a matrix with affine linear forms as entries, find an assignment to the variables in the linear forms such that the rank of the resulting matrix is minimal. We call this rank the completion rank. Computing the completion rank is an NP-hard problem. As our first main result, we prove that it is also NP-hard to determine whether a given matrix can be approximated by matrices of completion rank ≤ b. The minimum quantity b for which this is possible is called border completion rank (similar to the border rank of tensors). Naturally, algebraic natural proofs can only prove lower bounds for such border complexity measures. Furthermore, these border complexity measures play an important role in the geometric complexity theory program. 
Using our hardness result above, we can prove the following barrier: We construct a small family of matrices with affine linear forms as entries and a bound b, such that at least one of these matrices does not have an algebraic natural proof of polynomial size against all matrices of border completion rank b, unless coNP ⊆ ∃ BPP. This is an algebraic barrier result that is based on a well-established and widely believed conjecture. The complexity class ∃ BPP is known to be a subset of the complexity class MA, which is more well known in the literature. Thus ∃ BPP can be replaced by MA in the statements of all our results. With similar techniques, we can also prove that tensor rank is hard to approximate. Furthermore, we prove a similar result for the variety of matrices with permanent zero. There are no algebraic polynomial-size natural proofs for the variety of matrices with permanent zero, unless P^{#P} ⊆ ∃ BPP. On the other hand, we are able to prove that the geometric complexity theory approach initiated by Mulmuley and Sohoni (SIAM J. Comput. 31(2): 496–526, 2001) yields proofs of polynomial size for this variety, therefore overcoming the natural proofs barrier in this case. 
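To make the notion of completion rank concrete, the sketch below brute-forces the minimum rank of a matrix of affine linear forms over a finite grid of variable values. The true completion rank minimizes over an infinite field, so this is an illustration only, and the matrix M is a made-up example:

```python
from fractions import Fraction
from itertools import product

def rank(rows):
    """Rank over the rationals via Gaussian elimination."""
    rows = [list(map(Fraction, r)) for r in rows]
    r = 0
    for c in range(len(rows[0])):
        piv = next((i for i in range(r, len(rows)) if rows[i][c] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][c] != 0:
                f = rows[i][c] / rows[r][c]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

def completion_rank_on_grid(matrix, var_count, grid):
    """Entries are affine forms given as functions of the assignment tuple v.
    Exhaustively search assignments drawn from a finite grid (toy version)."""
    return min(rank([[e(v) for e in row] for row in matrix])
               for v in product(grid, repeat=var_count))

# M(x) = [[x, 1], [1, x]]: rank 1 exactly when x = +-1, otherwise rank 2.
M = [[lambda v: v[0], lambda v: 1],
     [lambda v: 1,    lambda v: v[0]]]
```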

Kakade, Sham 
STOC '18: "Prediction with a Short Memory ..."
Prediction with a Short Memory
Vatsal Sharan, Sham Kakade, Percy Liang, and Gregory Valiant (Stanford University, USA; University of Washington, USA) We consider the problem of predicting the next observation given a sequence of past observations, and consider the extent to which accurate prediction requires complex algorithms that explicitly leverage long-range dependencies. Perhaps surprisingly, our positive results show that for a broad class of sequences, there is an algorithm that predicts well on average, and bases its predictions only on the most recent few observations together with a set of simple summary statistics of the past observations. Specifically, we show that for any distribution over observations, if the mutual information between past observations and future observations is upper bounded by I, then a simple Markov model over the most recent I/є observations obtains expected KL error є—and hence ℓ_{1} error √є—with respect to the optimal predictor that has access to the entire past and knows the data generating distribution. For a Hidden Markov Model with n hidden states, I is bounded by log n, a quantity that does not depend on the mixing time, and we show that the trivial prediction algorithm based on the empirical frequencies of length-O(log n/є) windows of observations achieves this error, provided the length of the sequence is d^{Ω(log n/є)}, where d is the size of the observation alphabet. We also establish that this result cannot be improved upon, even for the class of HMMs, in the following two senses: First, for HMMs with n hidden states, a window length of log n/є is information-theoretically necessary to achieve expected KL error є, or ℓ_{1} error √є. Second, the d^{Θ(log n/є)} samples required to accurately estimate the Markov model when observations are drawn from an alphabet of size d are necessary for any computationally tractable learning/prediction algorithm, assuming the hardness of strongly refuting a certain class of CSPs. 
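The "trivial prediction algorithm based on empirical frequencies of windows" can be sketched directly: condition on the last L symbols and predict the most frequent successor. The sequence below is a toy example:

```python
from collections import Counter, defaultdict

def fit_window_model(seq, L):
    """Empirical next-symbol counts conditioned on the previous L symbols."""
    counts = defaultdict(Counter)
    for i in range(L, len(seq)):
        counts[tuple(seq[i - L:i])][seq[i]] += 1
    return counts

def predict(counts, context):
    """Most frequent successor of the given length-L context, or None."""
    hist = counts.get(tuple(context))
    return hist.most_common(1)[0][0] if hist else None

# Toy alternating sequence: a window of length 1 already predicts perfectly.
seq = list("abababababab")
model = fit_window_model(seq, 1)
```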

Kalai, Yael Tauman 
STOC '18: "Multi-collision Resistance: ..."
Multi-collision Resistance: A Paradigm for Keyless Hash Functions
Nir Bitansky, Yael Tauman Kalai, and Omer Paneth (Tel Aviv University, Israel; Microsoft, USA; Massachusetts Institute of Technology, USA)
We introduce a new notion of multi-collision resistance for keyless hash functions. This is a natural relaxation of collision resistance where it is hard to find multiple inputs with the same hash, in the following sense: the number of colliding inputs that a polynomial-time non-uniform adversary can find is not much larger than its advice. We discuss potential candidates for this notion and study its applications.
Assuming the existence of such hash functions, we resolve the long-standing question of the round complexity of zero-knowledge protocols: we construct a 3-message zero-knowledge argument against arbitrary polynomial-size non-uniform adversaries. We also improve the round complexity in several other central applications, including a 3-message succinct argument of knowledge for NP, a 4-message zero-knowledge proof, and a 5-message public-coin zero-knowledge argument. Our techniques can also be applied in the keyed setting, where we match the round complexity of known protocols while relaxing the underlying assumption from collision resistance to keyed multi-collision resistance.
The core technical contribution behind our results is a domain extension transformation from multi-collision-resistant hash functions for a fixed input length to ones with an arbitrary input length and a local opening property. The transformation is based on a combination of classical domain extension techniques, together with new information-theoretic tools. In particular, we define and construct a new variant of list-recoverable codes, which may be of independent interest.
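The "local opening" property mentioned above is the hallmark of a Merkle hash tree, the classical domain-extension construction: a short authentication path certifies one block against the root. The sketch below is that standard construction (with SHA-256 standing in for the hash), not the paper's transformation:

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Hash tree over a power-of-two number of blocks."""
    level = [H(b"leaf:" + x) for x in leaves]
    while len(level) > 1:
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def open_leaf(leaves, idx):
    """Local opening: the sibling hashes from the leaf up to the root."""
    level = [H(b"leaf:" + x) for x in leaves]
    path = []
    while len(level) > 1:
        path.append(level[idx ^ 1])  # sibling at this level
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        idx //= 2
    return path

def verify(root, leaf, idx, path):
    h = H(b"leaf:" + leaf)
    for sib in path:
        h = H(h + sib) if idx % 2 == 0 else H(sib + h)
        idx //= 2
    return h == root

leaves = [b"a", b"b", b"c", b"d"]
root = merkle_root(leaves)
```

The opening for one block has length logarithmic in the number of blocks, which is what makes local opening cheap.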
@InProceedings{STOC18p671,
author = {Nir Bitansky and Yael Tauman Kalai and Omer Paneth},
title = {Multi-collision Resistance: A Paradigm for Keyless Hash Functions},
booktitle = {Proc.\ STOC},
publisher = {ACM},
pages = {671--684},
doi = {10.1145/3188745.3188870},
year = {2018},
}
STOC '18: "Succinct Delegation for Low-Space ..."
Succinct Delegation for Low-Space Nondeterministic Computation
Saikrishna Badrinarayanan, Yael Tauman Kalai, Dakshita Khurana, Amit Sahai, and Daniel Wichs (University of California at Los Angeles, USA; Microsoft Research, USA; Massachusetts Institute of Technology, USA; Northeastern University, USA)
We construct a delegation scheme for verifying nondeterministic computations, with complexity proportional only to the nondeterministic space of the computation. Specifically, letting n denote the input length, we construct a delegation scheme for any language verifiable in nondeterministic time and space (T(n), S(n)) with communication complexity poly(S(n)), verifier runtime n·polylog(T(n)) + poly(S(n)), and prover runtime poly(T(n)).
Our scheme consists of only two messages and has adaptive soundness, assuming the existence of a subexponentially secure private information retrieval (PIR) scheme, which can be instantiated under standard (albeit, subexponential) cryptographic assumptions, such as the subexponential LWE assumption. Specifically, the verifier publishes a (short) public key ahead of time, and this key can be used by any prover to non-interactively prove the correctness of any adaptively chosen nondeterministic computation. Such a scheme is referred to as a non-interactive delegation scheme. Our scheme is privately verifiable, where the verifier needs the corresponding secret key in order to verify proofs.
Prior to our work, such results were known only in the Random Oracle Model, or under knowledge assumptions.
Our results yield succinct non-interactive arguments based on subexponential LWE, for many natural languages believed to be outside of P.
@InProceedings{STOC18p709,
author = {Saikrishna Badrinarayanan and Yael Tauman Kalai and Dakshita Khurana and Amit Sahai and Daniel Wichs},
title = {Succinct Delegation for Low-Space Nondeterministic Computation},
booktitle = {Proc.\ STOC},
publisher = {ACM},
pages = {709--721},
doi = {10.1145/3188745.3188924},
year = {2018},
}


Kalaitzis, Christos 
STOC '18: "Improved Approximation for ..."
Improved Approximation for Tree Augmentation: Saving by Rewiring
Fabrizio Grandoni, Christos Kalaitzis, and Rico Zenklusen (University of Lugano, Switzerland; ETH Zurich, Switzerland) The Tree Augmentation Problem (TAP) is a fundamental network design problem in which we are given a tree and a set of additional edges, also called links. The task is to find a set of links, of minimum size, whose addition to the tree leads to a 2-edge-connected graph. A long line of results on TAP culminated in the previously best known approximation guarantee of 1.5, achieved by a combinatorial approach due to Kortsarz and Nutov [ACM Transactions on Algorithms 2016], and also by an SDP-based approach by Cheriyan and Gao [Algorithmica 2017]. Moreover, an elegant LP-based (1.5+є)-approximation has also been found very recently by Fiorini, Groß, Könemann, and Sanità [SODA 2018]. In this paper, we show that an approximation factor below 1.5 can be achieved, by presenting a 1.458-approximation that is based on several new techniques. By extending prior results of Adjiashvili [SODA 2017], we first present a black-box reduction to a very structured type of instance, which played a crucial role in recent developments on the problem, and which we call k-wide. Our main contribution is a new approximation algorithm for O(1)-wide tree instances with approximation guarantee strictly below 1.458, based on one of their fundamental properties: wide trees naturally decompose into smaller subtrees with a constant number of leaves. Previous approaches in similar settings rounded each subtree independently and simply combined the obtained solutions. We show that additionally, when starting with a well-chosen LP, the combined solution can be improved through a new “rewiring” technique, showing that one can replace some pairs of used links by a single link. We can rephrase the rewiring problem as a stochastic version of a matching problem, which may be of independent interest. 
By showing that large matchings can be obtained in this problem, we show that a significant number of rewirings are possible, thus leading to an approximation factor below 1.5. 
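To make the TAP setting concrete, a brute-force solver for tiny instances can be written directly from the definition (a graph is 2-edge-connected iff it is connected and removing any single edge keeps it connected); the instance below is a made-up example:

```python
from itertools import combinations

def connected(n, edges, skip=None):
    """BFS/DFS connectivity test, optionally ignoring one edge."""
    adj = [[] for _ in range(n)]
    for e in edges:
        if e != skip:
            adj[e[0]].append(e[1]); adj[e[1]].append(e[0])
    seen, stack = {0}, [0]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w); stack.append(w)
    return len(seen) == n

def two_edge_connected(n, edges):
    return connected(n, edges) and all(connected(n, edges, skip=e) for e in edges)

def min_tap(n, tree_edges, links):
    """Smallest link set making the tree 2-edge-connected (exhaustive search,
    tiny instances only -- the problem is NP-hard in general)."""
    for k in range(len(links) + 1):
        for subset in combinations(links, k):
            if two_edge_connected(n, tree_edges + list(subset)):
                return list(subset)
    return None

# Path 0-1-2-3 with three candidate links; the single link (0,3) closes a
# cycle through every tree edge, so one link suffices.
solution = min_tap(4, [(0, 1), (1, 2), (2, 3)], [(0, 2), (1, 3), (0, 3)])
```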

Kamath, Pritish 
STOC '18: "Monotone Circuit Lower Bounds ..."
Monotone Circuit Lower Bounds from Resolution
Ankit Garg, Mika Göös, Pritish Kamath, and Dmitry Sokolov (Microsoft Research, USA; Harvard University, USA; Massachusetts Institute of Technology, USA; KTH, Sweden) For any unsatisfiable CNF formula F that is hard to refute in the Resolution proof system, we show that a gadget-composed version of F is hard to refute in any proof system whose lines are computed by efficient communication protocols—or, equivalently, that a monotone function associated with F has large monotone circuit complexity. Our result extends to monotone real circuits, which yields new lower bounds for the Cutting Planes proof system. 

Kamma, Lior 
STOC '18: "Tight Cell Probe Bounds for ..."
Tight Cell Probe Bounds for Succinct Boolean Matrix-Vector Multiplication
Diptarka Chakraborty, Lior Kamma, and Kasper Green Larsen (Charles University in Prague, Czechia; Aarhus University, Denmark) The conjectured hardness of Boolean matrix-vector multiplication has been used with great success to prove conditional lower bounds for numerous important data structure problems, see Henzinger et al. [STOC’15]. In recent work, Larsen and Williams [SODA’17] attacked the problem from the upper bound side and gave a surprising cell probe data structure (that is, we only charge for memory accesses, while computation is free). Their cell probe data structure answers queries in Õ(n^{7/4}) time and is succinct in the sense that it stores the input matrix in read-only memory, plus an additional Õ(n^{7/4}) bits on the side. In this paper, we essentially settle the cell probe complexity of succinct Boolean matrix-vector multiplication. We present a new cell probe data structure with query time Õ(n^{3/2}) storing just Õ(n^{3/2}) bits on the side. We then complement our data structure with a lower bound showing that any data structure storing r bits on the side, with n < r < n^{2}, must have query time t satisfying t · r = Ω(n^{3}). For r ≤ n, any data structure must have t = Ω(n^{2}). Since lower bounds in the cell probe model also apply to classic word-RAM data structures, the lower bounds naturally carry over. We also prove similar lower bounds for matrix-vector multiplication over F_{2}. 
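For contrast with the cell probe data structures discussed above, the trivial word-packed approach to Boolean matrix-vector multiplication (pack each row into a machine word, then one AND and one zero test per row) looks like this; Python integers play the role of machine words here:

```python
def pack_rows(matrix):
    """Pack each Boolean row into a single Python integer (bitset)."""
    return [sum(bit << j for j, bit in enumerate(row)) for row in matrix]

def bool_mv(packed_rows, x_bits):
    """y_i = OR_j (M[i][j] AND x[j]): one AND plus a zero test per row."""
    x = sum(bit << j for j, bit in enumerate(x_bits))
    return [1 if row & x else 0 for row in packed_rows]

M = [[1, 0, 0],
     [0, 1, 0],
     [0, 0, 0]]
```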

Kane, Daniel M. 
STOC '18: "List-Decodable Robust Mean ..."
List-Decodable Robust Mean Estimation and Learning Mixtures of Spherical Gaussians
Ilias Diakonikolas, Daniel M. Kane, and Alistair Stewart (University of Southern California, USA; University of California at San Diego, USA) We study the problem of list-decodable (robust) Gaussian mean estimation and the related problem of learning mixtures of separated spherical Gaussians. In the former problem, we are given a set T of points in ℝ^{n} with the promise that an α-fraction of points in T, where 0 < α < 1/2, are drawn from an unknown-mean, identity-covariance Gaussian G, and no assumptions are made about the remaining points. The goal is to output a small list of candidate vectors with the guarantee that at least one of the candidates is close to the mean of G. In the latter problem, we are given samples from a k-mixture of spherical Gaussians on ℝ^{n} and the goal is to estimate the unknown model parameters up to small accuracy. We develop a set of techniques that yield new efficient algorithms with significantly improved guarantees for these problems. Specifically, our main contributions are as follows:
List-Decodable Mean Estimation. Fix any d ∈ ℤ_{+} and 0 < α < 1/2. We design an algorithm with sample complexity O_{d}(n^{d}/α) and runtime O_{d}((n/α)^{d}) that outputs a list of O(1/α) many candidate vectors such that with high probability one of the candidates is within ℓ_{2}-distance O_{d}(α^{−1/(2d)}) from the mean of G. The only previous algorithm for this problem achieved error Õ(α^{−1/2}) under second moment conditions. For d = O(1/є), where є > 0 is a constant, our algorithm runs in polynomial time and achieves error O(α^{−є}). For d = Θ(log(1/α)), our algorithm runs in time (n/α)^{O(log(1/α))} and achieves error O(log^{3/2}(1/α)), almost matching the information-theoretically optimal bound of Θ(log^{1/2}(1/α)) that we establish. We also give a Statistical Query (SQ) lower bound suggesting that the complexity of our algorithm is qualitatively close to best possible.
Learning Mixtures of Spherical Gaussians. We give a learning algorithm for mixtures of spherical Gaussians, with unknown spherical covariances, that succeeds under significantly weaker separation assumptions compared to prior work. For the prototypical case of a uniform k-mixture of identity-covariance Gaussians we obtain the following: For any є > 0, if the pairwise separation between the means is at least Ω(k^{є}+√log(1/δ)), our algorithm learns the unknown parameters within accuracy δ with sample complexity and running time poly(n, 1/δ, (k/є)^{1/є}). Moreover, our algorithm is robust to a small dimension-independent fraction of corrupted data. The previously best known polynomial-time algorithm required separation at least k^{1/4} polylog(k/δ). Finally, our algorithm works under separation of Õ(log^{3/2}(k)+√log(1/δ)) with sample complexity and running time poly(n, 1/δ, k^{log k}). This bound is close to the information-theoretically minimum separation of Ω(√log k).
Our main technical contribution is a new technique, using degree-d multivariate polynomials, to remove outliers from high-dimensional datasets where the majority of the points are corrupted. Daniel M. Kane, Shachar Lovett, and Shay Moran (University of California at San Diego, USA; Institute for Advanced Study at Princeton, USA) We construct near-optimal linear decision trees for a variety of decision problems in combinatorics and discrete geometry. For example, for any constant k, we construct linear decision trees that solve the k-SUM problem on n elements using O(n log^{2} n) linear queries. Moreover, the queries we use are comparison queries, which compare the sums of two k-subsets; when viewed as linear queries, comparison queries are 2k-sparse and have only {−1,0,1} coefficients. We give similar constructions for sorting sumsets A+B and for solving the SUBSET-SUM problem, both with an optimal number of queries, up to polylogarithmic terms. Our constructions are based on the notion of “inference dimension”, recently introduced by the authors in the context of active classification with comparison queries. This can be viewed as another contribution to the fruitful link between machine learning and discrete geometry, which goes back to the discovery of the VC dimension. Ilias Diakonikolas, Daniel M. Kane, and Alistair Stewart (University of Southern California, USA; University of California at San Diego, USA) We study the efficient learnability of geometric concept classes — specifically, low-degree polynomial threshold functions (PTFs) and intersections of halfspaces — when a fraction of the training data is adversarially corrupted. We give the first polynomial-time PAC learning algorithms for these concept classes with dimension-independent error guarantees in the presence of nasty noise under the Gaussian distribution. In the nasty noise model, an omniscient adversary can arbitrarily corrupt a small fraction of both the unlabeled data points and their labels. 
This model generalizes well-studied noise models, including the malicious noise model and the agnostic (adversarial label noise) model. Prior to our work, the only concept class for which efficient malicious learning algorithms were known was the class of origin-centered halfspaces. At the core of our results is an efficient algorithm to approximate the low-degree Chow parameters of any bounded function in the presence of nasty noise. Our robust approximation algorithm for the Chow parameters provides near-optimal error guarantees for a range of distribution families satisfying mild concentration bounds and moment conditions. At the technical level, this algorithm employs an iterative “spectral” technique for outlier detection and removal inspired by recent work in robust unsupervised learning, which makes essential use of low-degree multivariate polynomials. Our robust learning algorithm for low-degree PTFs provides dimension-independent error guarantees for a class of tame distributions, including Gaussians and, more generally, any log-concave distribution with (approximately) known low-degree moments. For LTFs under the Gaussian distribution, using a refinement of the localization technique, we give a polynomial-time algorithm that achieves a near-optimal error of O(є), where є is the noise rate. Our robust learning algorithm for intersections of halfspaces proceeds by projecting down to an appropriate low-dimensional subspace. Its correctness makes essential use of a novel robust inverse independence lemma that is of independent interest. Clément L. Canonne, Ilias Diakonikolas, Daniel M. Kane, and Alistair Stewart (Stanford University, USA; University of Southern California, USA; University of California at San Diego, USA) We study the problem of testing *conditional independence* for discrete distributions. 
Specifically, given samples from a discrete random variable (X, Y, Z) on domain [ℓ_{1}]×[ℓ_{2}] × [n], we want to distinguish, with probability at least 2/3, between the case that X and Y are conditionally independent given Z and the case that (X, Y, Z) is є-far, in ℓ_{1}-distance, from every distribution that has this property. Conditional independence is a concept of central importance in probability and statistics, with important applications in various scientific domains. As such, the statistical task of testing conditional independence has been extensively studied in various forms within the statistics and econometrics communities for nearly a century. Perhaps surprisingly, this problem had not been previously considered in the framework of distribution property testing, and in particular no tester with *sublinear* sample complexity was known, even for the important special case that the domains of X and Y are binary. The main algorithmic result of this work is the first conditional independence tester with sublinear sample complexity. To complement our upper bound, we prove information-theoretic lower bounds establishing that the sample complexity of our algorithm is optimal, up to constant factors, for a number of settings. Specifically, for the prototypical setting when ℓ_{1}, ℓ_{2} = O(1), we pin down the sample complexity of testing conditional independence up to constant factors, giving an upper bound and a matching lower bound.
We also achieve sublinear sample complexity for the general case of arbitrary ℓ_{1}, ℓ_{2}, and n. To obtain our tester, we employ a variety of tools, including (1) a subtle weighted adaptation of the “flattening” technique, and (2) the design and analysis of an optimal (unbiased) estimator for the following statistical problem of independent interest: Given a degree-d polynomial Q: ℝ^{n} → ℝ and sample access to a distribution p over [n], estimate Q(p_{1}, …, p_{n}) up to small additive error. Obtaining tight variance analyses for specific estimators of this form has been a major technical hurdle in distribution testing. As an important contribution of this work, we develop a general theory providing tight variance bounds for *all* such estimators. 
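For contrast with the sublinear tester of Canonne et al., the naive plug-in approach estimates the distance from conditional independence directly from empirical frequencies; this requires far more samples than their tester, and is shown only as an illustration of the quantity being tested (the function name and interface are our own):

```python
from collections import Counter

def empirical_ci_distance(samples):
    """Naive plug-in estimate of how far (X, Y, Z) is from conditional
    independence: for each bin z, compare the empirical joint p(x, y | z)
    with the product of empirical conditionals p(x | z) * p(y | z) in
    l1 distance, weighted by the empirical p(z).  Illustration only --
    this plug-in estimator does not achieve sublinear sample complexity."""
    n = len(samples)
    by_z = {}
    for x, y, z in samples:
        by_z.setdefault(z, []).append((x, y))
    dist = 0.0
    for z, pairs in by_z.items():
        m = len(pairs)
        joint = Counter(pairs)
        px = Counter(x for x, _ in pairs)
        py = Counter(y for _, y in pairs)
        keys = {(x, y) for x in px for y in py}
        l1 = sum(abs(joint[k] / m - (px[k[0]] / m) * (py[k[1]] / m))
                 for k in keys)
        dist += (m / n) * l1
    return dist
```

On samples where X and Y are deterministically equal given Z, the estimate is large; on samples from a product distribution it tends to zero.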

Kang, Ning 
STOC '18: "How to Match When All Vertices ..."
How to Match When All Vertices Arrive Online
Zhiyi Huang, Ning Kang, Zhihao Gavin Tang, Xiaowei Wu, Yuhao Zhang, and Xue Zhu (University of Hong Kong, China) We introduce a fully online model of maximum cardinality matching in which all vertices arrive online. On the arrival of a vertex, its incident edges to previously-arrived vertices are revealed. Each vertex has a deadline that is after all its neighbors’ arrivals. If a vertex remains unmatched until its deadline, the algorithm must then irrevocably either match it to an unmatched neighbor, or leave it unmatched. The model generalizes the existing one-sided online model and is motivated by applications including ride-sharing platforms, real-estate agencies, etc. We show that the Ranking algorithm by Karp et al. (STOC 1990) is 0.5211-competitive in our fully online model for general graphs. Our analysis brings a novel charging mechanism into the randomized primal-dual technique by Devanur et al. (SODA 2013), allowing a vertex other than the two endpoints of a matched edge to share the gain. To our knowledge, this is the first analysis of Ranking that beats 0.5 on general graphs in an online matching problem, a first step towards solving the open problem posed by Karp et al. (STOC 1990) about the optimality of Ranking on general graphs. If the graph is bipartite, we show that the competitive ratio of Ranking is between 0.5541 and 0.5671. Finally, we prove that the fully online model is strictly harder than the previous model, as no online algorithm can be 0.6317-competitive (note that 0.6317 < 1−1/e) in our model, even for bipartite graphs. 
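The fully online model and the Ranking rule described above can be sketched as follows. This is a minimal simulation with a hypothetical interface (explicit arrival and deadline times), not the paper's formal algorithm: each vertex draws a uniform rank on arrival, and an unmatched vertex at its deadline is matched to its unmatched neighbor of smallest rank, if any.

```python
import random

def fully_online_ranking(arrivals, deadlines, edges, seed=0):
    """Sketch of Ranking in the fully online model.  `arrivals` and
    `deadlines` map vertex -> time; `edges` lists undirected edges.
    Hypothetical interface for illustration."""
    rng = random.Random(seed)
    rank, matched, matching = {}, {}, []
    neighbors = {v: set() for v in arrivals}
    for u, v in edges:
        neighbors[u].add(v)
        neighbors[v].add(u)
    # Process arrivals and deadlines in time order (arrivals first on ties).
    events = sorted([(t, 'arrive', v) for v, t in arrivals.items()] +
                    [(t, 'deadline', v) for v, t in deadlines.items()])
    for _, kind, v in events:
        if kind == 'arrive':
            rank[v] = rng.random()          # uniform random rank on arrival
        elif v not in matched:
            # at its deadline, v matches its unmatched neighbor of lowest rank
            cands = [u for u in neighbors[v] if u in rank and u not in matched]
            if cands:
                u = min(cands, key=lambda w: rank[w])
                matched[u] = v
                matched[v] = u
                matching.append((v, u))
    return matching
```

On a single edge whose endpoints arrive before the first deadline, the algorithm always matches the pair.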

Karlin, Anna R. 
STOC '18: "A Simply Exponential Upper ..."
A Simply Exponential Upper Bound on the Maximum Number of Stable Matchings
Anna R. Karlin, Shayan Oveis Gharan, and Robbie Weber (University of Washington, USA) Stable matching is a classical combinatorial problem that has been the subject of intense theoretical and empirical study since its introduction in 1962 in a seminal paper by Gale and Shapley. In this paper, we provide a new upper bound on f(n), the maximum number of stable matchings that a stable matching instance with n men and n women can have. It has been a longstanding open problem to understand the asymptotic behavior of f(n) as n→∞, first posed by Donald Knuth in the 1970s. Until now the best lower bound was approximately 2.28^{n}, and the best upper bound was 2^{n log n − O(n)}. In this paper, we show that for all n, f(n) ≤ c^{n} for some universal constant c. This matches the lower bound up to the base of the exponent. Our proof is based on a reduction to counting the number of downsets of a family of posets that we call “mixing”. The latter might be of independent interest. 
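For reference, the classical Gale-Shapley deferred-acceptance algorithm finds one stable matching of such an instance (the paper above counts how many can exist). A minimal sketch, with our own dictionary-based interface:

```python
def gale_shapley(men_prefs, women_prefs):
    """Deferred acceptance: men propose in preference order; each woman
    holds the best proposal seen so far.  Returns the man-optimal stable
    matching as a dict {man: woman}."""
    free = list(men_prefs)                     # men not yet engaged
    next_choice = {m: 0 for m in men_prefs}    # next woman to propose to
    engaged = {}                               # woman -> man
    w_rank = {w: {m: i for i, m in enumerate(prefs)}
              for w, prefs in women_prefs.items()}
    while free:
        m = free.pop()
        w = men_prefs[m][next_choice[m]]
        next_choice[m] += 1
        if w not in engaged:
            engaged[w] = m
        elif w_rank[w][m] < w_rank[w][engaged[w]]:
            free.append(engaged[w])            # w trades up; old partner freed
            engaged[w] = m
        else:
            free.append(m)                     # w rejects m
    return {m: w for w, m in engaged.items()}
```

On a 2×2 instance where preferences agree, the unique stable matching is returned; instances maximizing the number of stable matchings are far more delicate.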

Kaufman, Tali 
STOC '18: "Construction of New Local ..."
Construction of New Local Spectral High Dimensional Expanders
Tali Kaufman and Izhar Oppenheim (BarIlan University, Israel; BenGurion University of the Negev, Israel)
The study of high dimensional expanders is a vibrant emerging field. Nevertheless, the only known construction of bounded degree high dimensional expanders is based on Ramanujan complexes, whereas one dimensional bounded degree expanders are abundant.
In this work we construct new families of bounded degree high dimensional expanders obeying the local spectral expansion property, a property that implies geometric overlapping, fast mixing of high dimensional random walks, agreement testing, and agreement expansion.
The construction also yields new families of expander graphs.
The construction is quite elementary and is presented in a self-contained manner, in contrast to the highly involved construction of the Ramanujan complexes. The construction is also strongly symmetric; the symmetry of the construction could be used, for example, to obtain good symmetric LDPC codes that were previously based on Ramanujan graphs.
@InProceedings{STOC18p773,
author = {Tali Kaufman and Izhar Oppenheim},
title = {Construction of New Local Spectral High Dimensional Expanders},
booktitle = {Proc.\ STOC},
publisher = {ACM},
pages = {773--786},
doi = {10.1145/3188745.3188782},
year = {2018},
}


Kell, Nathaniel 
STOC '18: "Online Load Balancing on Related ..."
Online Load Balancing on Related Machines
Sungjin Im, Nathaniel Kell, Debmalya Panigrahi, and Maryam Shadloo (University of California at Merced, USA; Duke University, USA) In this paper, we consider the problem of assigning jobs online to machines with non-uniform speeds (also called related machines) so as to optimize a given norm of the machine loads. A long line of work, starting with the seminal work of Graham in the 1960s, has led to tight competitive ratios for all ℓ_{q} norms for two scenarios: the special case of identical machines (uniform machine speeds) and the more general setting of unrelated machines (jobs have arbitrary processing times on machines). For non-uniform machine speeds, however, the only known result was a constant competitive ratio for the makespan (ℓ_{∞}) norm, via the so-called slowest-fit algorithm (Aspnes, Azar, Fiat, Plotkin, and Waarts, JACM ’97). Our first result in this paper is to obtain the first constant-competitive algorithm for scheduling on related machines for any arbitrary ℓ_{q} norm. Recent literature has further expanded the scope of this problem to vector scheduling, to capture multidimensional resource requirements in applications such as data centers. As in the scalar case, tight bounds are known for vector scheduling on identical and unrelated machines. Our second set of results gives tight competitive ratios for vector scheduling on related machines for the makespan and all ℓ_{q} norms. No previous bounds were known, even for the makespan norm, for related machines. We employ a convex relaxation of the ℓ_{q}-norm objective and use a continuous greedy algorithm to solve this convex program online. To round the fractional solution, we then use a novel restructuring of the instance that we call machine smoothing. This is a generic tool that reduces a problem on related machines to a set of problem instances on identical machines, and we hope it will be useful in other settings with non-uniform machine speeds as well. 
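To make the objective concrete: on related machines, a job with processing requirement p adds p/s to the load of a machine with speed s, and the goal is to keep the ℓ_q norm of the load vector small. The natural online greedy baseline (assign each job where the resulting norm increase is smallest) can be sketched as follows; this is only an illustrative heuristic, not the constant-competitive algorithm of the paper:

```python
def greedy_lq_assign(jobs, speeds, q=2):
    """Illustrative greedy baseline for online load balancing on related
    machines: place each arriving job on the machine minimizing the
    resulting sum of q-th powers of the loads (equivalently, the l_q norm).
    Hypothetical helper, not the paper's algorithm."""
    loads = [0.0] * len(speeds)
    assignment = []
    for p in jobs:                      # p = processing requirement of the job
        def cost(i):
            new = loads[i] + p / speeds[i]
            return sum((new if j == i else loads[j]) ** q
                       for j in range(len(speeds)))
        i = min(range(len(speeds)), key=cost)
        loads[i] += p / speeds[i]
        assignment.append(i)
    return assignment, loads
```

Note how a fast machine (large s) attracts jobs even as its load grows, which is exactly the tension the paper's convex relaxation and machine-smoothing technique are designed to control.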

Kempa, Dominik 
STOC '18: "At the Roots of Dictionary ..."
At the Roots of Dictionary Compression: String Attractors
Dominik Kempa and Nicola Prezza (University of Helsinki, Finland; DTU, Denmark; University of Pisa, Italy) A well-known fact in the field of lossless text compression is that high-order entropy is a weak model when the input contains long repetitions. Motivated by this fact, decades of research have generated myriads of so-called dictionary compressors: algorithms able to reduce the text’s size by exploiting its repetitiveness. Lempel-Ziv 77 is one of the most successful and well-known tools of this kind, followed by straight-line programs, run-length Burrows-Wheeler transform, macro schemes, collage systems, and the compact directed acyclic word graph. In this paper, we show that these techniques are different solutions to the same, elegant, combinatorial problem: to find a small set of positions capturing all distinct substrings of the text. We call such a set a string attractor. We first show reductions between dictionary compressors and string attractors. This gives the approximation ratios of dictionary compressors with respect to the smallest string attractor and allows us to uncover new asymptotic relations between the output sizes of different dictionary compressors. We then show that the k-attractor problem — deciding whether a text has a size-t set of positions capturing all substrings of length at most k — is NP-complete for k≥ 3. This, in particular, includes the full string attractor problem. We provide several approximation techniques for the smallest k-attractor, show that the problem is APX-complete for constant k, and give strong inapproximability results. To conclude, we provide matching lower and upper bounds for the random access problem on string attractors. The upper bound is proved by showing a data structure supporting queries in optimal time. Our data structure is universal: by our reductions to string attractors, it supports random access on any dictionary-compression scheme. 
In particular, it matches the lower bound also on LZ77, straight-line programs, collage systems, and macro schemes, and therefore essentially closes (at once) the random access problem for all these compressors. 
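The defining property of a string attractor (every distinct substring has at least one occurrence crossing an attractor position) can be verified by brute force. A minimal sketch, with our own function name, intended only to make the definition concrete:

```python
def is_string_attractor(text, positions):
    """Check whether `positions` is a string attractor of `text`: every
    distinct substring must have some occurrence that contains a position
    of the set.  Brute force over all substrings, for illustration only."""
    n = len(text)
    for length in range(1, n + 1):
        for sub in {text[i:i + length] for i in range(n - length + 1)}:
            # does some occurrence of sub cross an attractor position?
            crossed = any(i <= p < i + length
                          for i in range(n - length + 1)
                          if text[i:i + length] == sub
                          for p in positions)
            if not crossed:
                return False
    return True
```

For example, {0, 1} is an attractor of "aba" (every substring has an occurrence through position 0 or 1), whereas {2} is not, since the substring "b" occurs only at position 1.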

Khot, Subhash 
STOC '18: "Towards a Proof of the 2-to-1 ..."
Towards a Proof of the 2-to-1 Games Conjecture?
Irit Dinur, Subhash Khot, Guy Kindler, Dor Minzer, and Muli Safra (Weizmann Institute of Science, Israel; New York University, USA; Hebrew University of Jerusalem, Israel; Tel Aviv University, Israel) We present a polynomial time reduction from gap-3LIN to label cover with 2-to-1 constraints. In the “yes” case the fraction of satisfied constraints is at least 1−ε, and in the “no” case we show that this fraction is at most ε, assuming a certain (new) combinatorial hypothesis on the Grassmann graph. In other words, we describe a combinatorial hypothesis that implies the 2-to-1 conjecture with imperfect completeness. The companion submitted paper [Dinur, Khot, Kindler, Minzer and Safra, STOC 2018] makes some progress towards proving this hypothesis. Our work builds on earlier work by a subset of the authors [Khot, Minzer and Safra, STOC 2017], where a slightly different hypothesis was used to obtain hardness of approximating vertex cover to within a factor of √2−ε. The most important implication of this work is (assuming the hypothesis) an NP-hardness gap of 1/2−ε vs. ε for unique games. In addition, we derive optimal NP-hardness for approximating the max-cut-gain problem, NP-hardness of coloring an almost 4-colorable graph with any constant number of colors, and the same √2−ε NP-hardness for approximate vertex cover that was already obtained based on a slightly different hypothesis. Recent progress towards proving our hypothesis [Barak, Kothari and Steurer, ECCC TR18-077], [Dinur, Khot, Kindler, Minzer and Safra, STOC 2018] directly implies some new unconditional NP-hardness results. These include new points of NP-hardness for unique games and for 2-to-1 and 2-to-2 games. More recently, the full version of our hypothesis was proven [Khot, Minzer and Safra, ECCC TR18-006]. 
Irit Dinur, Subhash Khot, Guy Kindler, Dor Minzer, and Muli Safra (Weizmann Institute of Science, Israel; New York University, USA; Hebrew University of Jerusalem, Israel; Tel Aviv University, Israel) We study the structure of non-expanding sets in the Grassmann graph. We put forth a hypothesis stating that every small set whose expansion is smaller than 1−δ must be correlated with one of a specified list of sets which are isomorphic to smaller Grassmann graphs. We develop a framework of Fourier analysis for analyzing functions over the Grassmann graph, and prove that our hypothesis holds for all sets whose expansion is below 7/8. In the companion submitted paper [Dinur, Khot, Kindler, Minzer and Safra, STOC 2018], the authors show that a linearity agreement hypothesis implies an NP-hardness gap of 1/2−є vs. є for unique games and other inapproximability results. In [Barak, Kothari and Steurer, ECCC TR18-077], the authors show that the hypothesis in this work implies the linearity agreement hypothesis of [Dinur, Khot, Kindler, Minzer and Safra, STOC 2018]. Combined with our main theorem here, this proves a version of the linearity agreement hypothesis with certain specific parameters. Short of proving the entire hypothesis, this nevertheless suffices for getting new unconditional NP-hardness gaps for label cover with 2-to-1 and unique constraints. Our Expansion Hypothesis has been subsequently proved in its full form [Khot, Minzer and Safra, ECCC TR18-006], thereby proving the agreement hypothesis of [Dinur, Khot, Kindler, Minzer and Safra, STOC 2018] and completing the proof of the 2-to-1 Games Conjecture (albeit with imperfect completeness). 

Khurana, Dakshita 
STOC '18: "Succinct Delegation for Low-Space ..."
Succinct Delegation for Low-Space Nondeterministic Computation
Saikrishna Badrinarayanan, Yael Tauman Kalai, Dakshita Khurana, Amit Sahai, and Daniel Wichs (University of California at Los Angeles, USA; Microsoft Research, USA; Massachusetts Institute of Technology, USA; Northeastern University, USA)
We construct a delegation scheme for verifying nondeterministic computations, with complexity proportional only to the nondeterministic space of the computation. Specifically, letting n denote the input length, we construct a delegation scheme for any language verifiable in nondeterministic time and space (T(n), S(n)) with communication complexity poly(S(n)), verifier runtime n·polylog(T(n)) + poly(S(n)), and prover runtime poly(T(n)).
Our scheme consists of only two messages and has adaptive soundness, assuming the existence of a subexponentially secure private information retrieval (PIR) scheme, which can be instantiated under standard (albeit subexponential) cryptographic assumptions, such as the subexponential LWE assumption. Specifically, the verifier publishes a (short) public key ahead of time, and this key can be used by any prover to non-interactively prove the correctness of any adaptively chosen nondeterministic computation. Such a scheme is referred to as a non-interactive delegation scheme. Our scheme is privately verifiable, where the verifier needs the corresponding secret key in order to verify proofs.
Prior to our work, such results were known only in the Random Oracle Model, or under knowledge assumptions.
Our results yield succinct non-interactive arguments based on subexponential LWE, for many natural languages believed to be outside of P.
@InProceedings{STOC18p709,
author = {Saikrishna Badrinarayanan and Yael Tauman Kalai and Dakshita Khurana and Amit Sahai and Daniel Wichs},
title = {Succinct Delegation for Low-Space Nondeterministic Computation},
booktitle = {Proc.\ STOC},
publisher = {ACM},
pages = {709--721},
doi = {10.1145/3188745.3188924},
year = {2018},
}


Kim, David H. K. 
STOC '18: "Almost Polynomial Hardness ..."
Almost Polynomial Hardness of Node-Disjoint Paths in Grids
Julia Chuzhoy, David H. K. Kim, and Rachit Nimavat (Toyota Technological Institute at Chicago, USA; University of Chicago, USA) In the classical Node-Disjoint Paths (NDP) problem, we are given an n-vertex graph G=(V,E), and a collection M={(s_{1},t_{1}),…,(s_{k},t_{k})} of pairs of its vertices, called source-destination, or demand, pairs. The goal is to route as many of the demand pairs as possible, where to route a pair we need to select a path connecting it, so that all selected paths are disjoint in their vertices. The best current algorithm for NDP achieves an O(√n)-approximation, while, until recently, the best negative result was a factor Ω(log^{1/2−є} n)-hardness of approximation, for any constant є, unless NP ⊆ ZPTIME(n^{poly log n}). In a recent work, the authors have shown an improved 2^{Ω(√log n)}-hardness of approximation for NDP, unless NP ⊆ DTIME(n^{O(log n)}), even if the underlying graph is a subgraph of a grid graph, and all source vertices lie on the boundary of the grid. Unfortunately, this result does not extend to grid graphs. The approximability of the NDP problem on grid graphs has remained a tantalizing open question, with the best current upper bound of Õ(n^{1/4}), and the best current lower bound of APX-hardness. In a recent work, the authors showed a 2^{Õ(√log n)}-approximation algorithm for NDP in grid graphs, if all source vertices lie on the boundary of the grid – a result that can be seen as suggesting that a subpolynomial approximation may be achievable for NDP in grids. In this paper we show that this is unlikely to be the case, and come close to resolving the approximability of NDP in general, and of NDP in grids in particular. Our main result is that NDP is 2^{Ω(log^{1−є} n)}-hard to approximate for any constant є, assuming that NP ⊈ RTIME(n^{poly log n}), and that it is n^{Ω(1/(log log n)^{2})}-hard to approximate, assuming that for some constant δ>0, NP ⊈ RTIME(2^{n^{δ}}). 
These results hold even for grid graphs and wall graphs, and extend to the closely related Edge-Disjoint Paths problem, even in wall graphs. Our hardness proof performs a reduction from the 3COL(5) problem to NDP, using a new graph partitioning problem as a proxy. Unlike the more standard approach of employing Karp reductions to prove hardness of approximation, our proof is a Cook-type reduction, where, given an input instance of 3COL(5), we produce a large number of instances of NDP, and apply an approximation algorithm for NDP to each of them. The construction of each new instance of NDP crucially depends on the solutions to the previous instances that were found by the approximation algorithm. 

Kindler, Guy 
STOC '18: "Towards a Proof of the 2-to-1 ..."
Towards a Proof of the 2-to-1 Games Conjecture?
Irit Dinur, Subhash Khot, Guy Kindler, Dor Minzer, and Muli Safra (Weizmann Institute of Science, Israel; New York University, USA; Hebrew University of Jerusalem, Israel; Tel Aviv University, Israel) We present a polynomial time reduction from gap-3LIN to label cover with 2-to-1 constraints. In the “yes” case the fraction of satisfied constraints is at least 1−ε, and in the “no” case we show that this fraction is at most ε, assuming a certain (new) combinatorial hypothesis on the Grassmann graph. In other words, we describe a combinatorial hypothesis that implies the 2-to-1 conjecture with imperfect completeness. The companion submitted paper [Dinur, Khot, Kindler, Minzer and Safra, STOC 2018] makes some progress towards proving this hypothesis. Our work builds on earlier work by a subset of the authors [Khot, Minzer and Safra, STOC 2017], where a slightly different hypothesis was used to obtain hardness of approximating vertex cover to within a factor of √2−ε. The most important implication of this work is (assuming the hypothesis) an NP-hardness gap of 1/2−ε vs. ε for unique games. In addition, we derive optimal NP-hardness for approximating the max-cut-gain problem, NP-hardness of coloring an almost 4-colorable graph with any constant number of colors, and the same √2−ε NP-hardness for approximate vertex cover that was already obtained based on a slightly different hypothesis. Recent progress towards proving our hypothesis [Barak, Kothari and Steurer, ECCC TR18-077], [Dinur, Khot, Kindler, Minzer and Safra, STOC 2018] directly implies some new unconditional NP-hardness results. These include new points of NP-hardness for unique games and for 2-to-1 and 2-to-2 games. More recently, the full version of our hypothesis was proven [Khot, Minzer and Safra, ECCC TR18-006]. 
Irit Dinur, Subhash Khot, Guy Kindler, Dor Minzer, and Muli Safra (Weizmann Institute of Science, Israel; New York University, USA; Hebrew University of Jerusalem, Israel; Tel Aviv University, Israel) We study the structure of non-expanding sets in the Grassmann graph. We put forth a hypothesis stating that every small set whose expansion is smaller than 1−δ must be correlated with one of a specified list of sets which are isomorphic to smaller Grassmann graphs. We develop a framework of Fourier analysis for analyzing functions over the Grassmann graph, and prove that our hypothesis holds for all sets whose expansion is below 7/8. In the companion submitted paper [Dinur, Khot, Kindler, Minzer and Safra, STOC 2018], the authors show that a linearity agreement hypothesis implies an NP-hardness gap of 1/2−є vs. є for unique games and other inapproximability results. In [Barak, Kothari and Steurer, ECCC TR18-077], the authors show that the hypothesis in this work implies the linearity agreement hypothesis of [Dinur, Khot, Kindler, Minzer and Safra, STOC 2018]. Combined with our main theorem here, this proves a version of the linearity agreement hypothesis with certain specific parameters. Short of proving the entire hypothesis, this nevertheless suffices for getting new unconditional NP-hardness gaps for label cover with 2-to-1 and unique constraints. Our Expansion Hypothesis has been subsequently proved in its full form [Khot, Minzer and Safra, ECCC TR18-006], thereby proving the agreement hypothesis of [Dinur, Khot, Kindler, Minzer and Safra, STOC 2018] and completing the proof of the 2-to-1 Games Conjecture (albeit with imperfect completeness). 

Kisfaludi-Bak, Sándor 
STOC '18: "A Framework for ETH-Tight ..."
A Framework for ETH-Tight Algorithms and Lower Bounds in Geometric Intersection Graphs
Mark de Berg, Hans L. Bodlaender, Sándor Kisfaludi-Bak, Dániel Marx, and Tom C. van der Zanden (Eindhoven University of Technology, Netherlands; Utrecht University, Netherlands; Hungarian Academy of Sciences, Hungary) We give an algorithmic and lower-bound framework that facilitates the construction of subexponential algorithms and matching conditional complexity bounds. It can be applied to a wide range of geometric intersection graphs (intersections of similarly sized fat objects), yielding algorithms with running time 2^{O(n^{1−1/d})} for any fixed dimension d≥ 2 for many well-known graph problems, including Independent Set, r-Dominating Set for constant r, and Steiner Tree. For most problems, we get improved running times compared to prior work; in some cases, we give the first known subexponential algorithm in geometric intersection graphs. Additionally, most of the obtained algorithms work on the graph itself, i.e., they do not require any geometric information. Our algorithmic framework is based on a weighted separator theorem and various treewidth techniques. The lower-bound framework is based on a constructive embedding of graphs into d-dimensional grids, and it allows us to derive matching 2^{Ω(n^{1−1/d})} lower bounds under the Exponential Time Hypothesis, even in the much more restricted class of d-dimensional induced grid graphs. 

Kol, Gillat 
STOC '18: "Interactive Coding over the ..."
Interactive Coding over the Noisy Broadcast Channel
Klim Efremenko, Gillat Kol, and Raghuvansh Saxena (Ben-Gurion University of the Negev, Israel; Princeton University, USA) A set of n players, each holding a private input bit, communicate over a noisy broadcast channel. Their mutual goal is for all players to learn all inputs. At each round one of the players broadcasts a bit to all the other players, and the bit received by each player is flipped with a fixed constant probability (independently for each recipient). How many rounds are needed? This problem was first suggested by El Gamal in 1984. In 1988, Gallager gave an elegant noise-resistant protocol requiring only O(n log log n) rounds. The problem was resolved in 2005 by a seminal paper of Goyal, Kindler, and Saks, proving that Gallager’s protocol is essentially optimal. We revisit the above noisy broadcast problem and show that O(n) rounds suffice. This is possible due to a relaxation of the model assumed by the previous works. We no longer demand that exactly one player broadcasts in every round, but rather allow any number of players to broadcast. However, if it is not the case that exactly one player chooses to broadcast, each of the other players gets an adversarially chosen bit. We generalize the above result and initiate the study of interactive coding over the noisy broadcast channel. We show that any interactive protocol that works over the noiseless broadcast channel can be simulated over our restrictive noisy broadcast model with constant blowup of the communication. Our results also establish that modern techniques for interactive coding can help us make progress on classical problems. Mark Braverman and Gillat Kol (Princeton University, USA) We describe a new way of compressing two-party communication protocols to get protocols with potentially smaller communication. 
We show that every communication protocol that communicates C bits and reveals I bits of information about the participants’ private inputs to an observer that watches the communication, can be simulated by a new protocol that communicates at most poly(I) · loglog(C) bits. Our result is tight up to polynomial factors, as it matches the recent work separating communication complexity from external information cost. 

Komarath, Balagopal 
STOC '18: "On the Complexity of Hazard-Free ..."
On the Complexity of Hazard-Free Circuits
Christian Ikenmeyer, Balagopal Komarath, Christoph Lenzen, Vladimir Lysikov, Andrey Mokhov, and Karteek Sreenivasaiah (Max Planck Institute for Informatics, Germany; Saarland University, Germany; M2CI, Germany; Newcastle University, UK; IIT Hyderabad, India)
The problem of constructing hazard-free Boolean circuits dates back to the 1940s and is important in circuit design.
Our main lower-bound result unconditionally shows the existence of functions whose circuit complexity is polynomially bounded while every hazard-free implementation is provably of exponential size.
Previous lower bounds on the hazard-free complexity were only valid for depth-2 circuits.
The same proof method yields that every subcubic implementation of Boolean matrix multiplication must have hazards.
These results follow from a crucial structural insight: hazard-free complexity is a natural generalization of monotone complexity to all (not necessarily monotone) Boolean functions.
Thus, we can apply known monotone complexity lower bounds to find lower bounds on the hazard-free complexity.
We also lift these methods from the monotone setting to prove exponential hazard-free complexity lower bounds for non-monotone functions.
As our main upper-bound result, we show how to efficiently convert a Boolean circuit into a bounded-bit hazard-free circuit with only a polynomially large blowup in the number of gates.
Previously, the best known method yielded exponentially large circuits in the worst case, so our algorithm gives an exponential improvement.
As a side result, we establish the NP-completeness of several hazard detection problems.
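Hazards are standardly defined via three-valued (Kleene) simulation, where a third value 'u' models an unstable input. A minimal sketch, using the textbook two-gate multiplexer as the example circuit (our own illustration, not the paper's construction): the circuit computes the mux correctly on Boolean inputs, yet with x = y = 1 and the selector unstable, the output should stay 1 but the ternary simulation yields 'u', exhibiting a hazard.

```python
# Three-valued (Kleene) gate semantics; 'u' is an unstable/unknown value.
def t_and(a, b):
    if a == '0' or b == '0': return '0'
    if a == '1' and b == '1': return '1'
    return 'u'

def t_or(a, b):
    if a == '1' or b == '1': return '1'
    if a == '0' and b == '0': return '0'
    return 'u'

def t_not(a):
    return {'0': '1', '1': '0', 'u': 'u'}[a]

def mux(s, x, y):
    """Naive 2-to-1 multiplexer (s ? y : x) built as (NOT s AND x) OR (s AND y).
    Correct on Boolean inputs, but has a hazard: mux('u', '1', '1') simulates
    to 'u' even though the output is 1 for both settings of s."""
    return t_or(t_and(t_not(s), x), t_and(s, y))
```

A hazard-free implementation would need to produce '1' on the unstable selector, which is exactly the extra requirement whose cost the paper studies.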
@InProceedings{STOC18p878,
author = {Christian Ikenmeyer and Balagopal Komarath and Christoph Lenzen and Vladimir Lysikov and Andrey Mokhov and Karteek Sreenivasaiah},
title = {On the Complexity of Hazard-Free Circuits},
booktitle = {Proc.\ STOC},
publisher = {ACM},
pages = {878--889},
doi = {10.1145/3188745.3188912},
year = {2018},
}


Koppula, Venkata 
STOC '18: "Collusion Resistant Traitor ..."
Collusion Resistant Traitor Tracing from Learning with Errors
Rishab Goyal, Venkata Koppula, and Brent Waters (University of Texas at Austin, USA)
In this work we provide a traitor tracing construction with ciphertexts that grow polynomially in log(n), where n is the number of users, and prove it secure under the Learning with Errors (LWE) assumption. This is the first traitor tracing scheme with such parameters provably secure from a standard assumption. In addition to achieving new traitor tracing results, we believe our techniques push forward the broader area of computing on encrypted data under standard assumptions. Notably, traitor tracing is a substantially different problem from other cryptographic primitives that have seen recent progress in LWE solutions.
We achieve our results by first conceiving a novel approach to building traitor tracing that starts with a new form of Functional Encryption that we call Mixed FE. In a Mixed FE system the encryption algorithm is bimodal and works with either a public key or a master secret key. Ciphertexts encrypted using the public key can encode only one type of functionality. On the other hand, secret-key encryption can be used to encode many different types of programs, but is only secure as long as the attacker sees a bounded number of such ciphertexts.
We first show how to combine Mixed FE with Attribute-Based Encryption to achieve traitor tracing. Second, we build Mixed FE systems for polynomial-sized branching programs (which correspond to the complexity class logspace) by relying on the polynomial hardness of the LWE assumption with super-polynomial modulus-to-noise ratio.
@InProceedings{STOC18p660,
author = {Rishab Goyal and Venkata Koppula and Brent Waters},
title = {Collusion Resistant Traitor Tracing from Learning with Errors},
booktitle = {Proc.\ STOC},
publisher = {ACM},
pages = {660--670},
doi = {10.1145/3188745.3188844},
year = {2018},
}


Korhonen, Janne H. 
STOC '18: "New Classes of Distributed ..."
New Classes of Distributed Time Complexity
Alkida Balliu, Juho Hirvonen, Janne H. Korhonen, Tuomo Lempiäinen, Dennis Olivetti, and Jukka Suomela (Aalto University, Finland; University of Freiburg, Germany) A number of recent papers – e.g. Brandt et al. (STOC 2016), Chang et al. (FOCS 2016), Ghaffari & Su (SODA 2017), Brandt et al. (PODC 2017), and Chang & Pettie (FOCS 2017) – have advanced our understanding of one of the most fundamental questions in the theory of distributed computing: what are the possible time complexity classes of LCL problems in the LOCAL model? In essence, we have a graph problem Π in which a solution can be verified by checking all radius-O(1) neighbourhoods, and the question is what is the smallest T such that a solution can be computed so that each node chooses its own output based on its radius-T neighbourhood. Here T is the distributed time complexity of Π. The time complexity classes for deterministic algorithms in bounded-degree graphs that are known to exist by prior work are Θ(1), Θ(log^{*} n), Θ(log n), Θ(n^{1/k}), and Θ(n). It is also known that there are two gaps: one between ω(1) and o(log log^{*} n), and another between ω(log^{*} n) and o(log n). It has been conjectured that many more gaps exist, and that the overall time hierarchy is relatively simple – indeed, this is known to be the case in restricted graph families such as cycles and grids. We show that the picture is much more diverse than previously expected. We present a general technique for engineering LCL problems with numerous different deterministic time complexities, including Θ(log^{α} n) for any α ≥ 1, 2^{Θ(log^{α} n)} for any α ≤ 1, and Θ(n^{α}) for any α < 1/2 in the high end of the complexity spectrum, and Θ(log^{α} log^{*} n) for any α ≥ 1, 2^{Θ(log^{α} log^{*} n)} for any α ≤ 1, and Θ((log^{*} n)^{α}) for any α ≤ 1 in the low end of the complexity spectrum; here α is a positive rational number. 
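As a toy illustration of the LCL framework: proper vertex coloring is a locally checkable problem, since validity is verified by inspecting each node's radius-1 neighbourhood only. The checker below is ours, not from the paper:

```python
def is_valid_coloring(adj, color):
    """Radius-1 local check for the proper-coloring LCL: every node only
    compares its own color against its immediate neighbours'."""
    return all(color[u] != color[v] for u in adj for v in adj[u])

# a 6-cycle with a proper alternating 2-coloring
cycle = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
ok = is_valid_coloring(cycle, {i: i % 2 for i in range(6)})
```

The distributed question studied in the paper is then how large the radius T must be for each node to *compute* (rather than verify) its own output.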

Kosowski, Adrian 
STOC '18: "Universal Protocols for Information ..."
Universal Protocols for Information Dissemination using Emergent Signals
Bartłomiej Dudek and Adrian Kosowski (University of Wrocław, Poland; Inria, France) We consider a population of n agents which communicate with each other in a decentralized manner, through random pairwise interactions. One or more agents in the population may act as authoritative sources of information, and the objective of the remaining agents is to obtain information from or about these source agents. We study two basic tasks: broadcasting, in which the agents are to learn the bit state of an authoritative source which is present in the population, and source detection, in which the agents are required to decide if at least one source agent is present in the population or not. We focus on designing protocols which meet two natural conditions: (1) universality, i.e., independence of population size, and (2) rapid convergence to a correct global state after a reconfiguration, such as a change in the state of a source agent. Our main positive result is to show that both of these constraints can be met. For both the broadcasting problem and the source detection problem, we obtain solutions with an expected convergence time of O(log n), from any starting configuration. The solution to broadcasting is exact, which means that all agents reach the state broadcast by the source, while the solution to source detection admits one-sided error on an ε-fraction of the population (which is unavoidable for this problem). Both protocols are easy to implement in practice and are self-stabilizing, in the sense that the stated bounds on convergence time hold starting from any possible initial configuration of the system. Our protocols exploit the properties of self-organizing oscillatory dynamics. On the hardness side, our main structural insight is to prove that any protocol which meets the constraints of universality and of rapid convergence after reconfiguration must display a form of non-stationary behavior (of which oscillatory dynamics are an example). 
We also observe that the periodicity of the oscillatory behavior of the protocol, when present, must necessarily depend on the number #X of source agents present in the population. For instance, our protocols inherently rely on the emergence of a signal passing through the population, whose period is Θ(log(n/#X)) rounds for most starting configurations. The design of phase clocks with tunable frequency may be of independent interest, notably in modeling biological networks. 
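The paper's protocols are self-stabilizing and oscillatory; as a much simpler illustration of the interaction model alone, a plain one-way epidemic already spreads a source's bit in O(log n) parallel time in expectation. All names below are ours; this is not the paper's protocol:

```python
import random

def broadcast_epidemic(n, seed=0):
    """One-way epidemic under random pairwise interactions: agent 0 is the
    source; whenever an interaction involves an informed agent, both
    participants become informed. Returns the parallel time (interactions
    divided by n) until every agent is informed."""
    rng = random.Random(seed)
    informed = [False] * n
    informed[0] = True
    interactions = 0
    while not all(informed):
        a, b = rng.sample(range(n), 2)   # a random pairwise interaction
        if informed[a] or informed[b]:
            informed[a] = informed[b] = True
        interactions += 1
    return interactions / n
```

Unlike the paper's protocols, this sketch is neither self-stabilizing nor able to recover if the source's bit changes, which is exactly the gap the emergent-signal constructions address.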

Kothari, Pravesh K. 
STOC '18: "Robust Moment Estimation and ..."
Robust Moment Estimation and Improved Clustering via Sum of Squares
Pravesh K. Kothari, Jacob Steinhardt, and David Steurer (Princeton University, USA; Institute for Advanced Study at Princeton, USA; Stanford University, USA; ETH Zurich, Switzerland) We develop efficient algorithms for estimating low-degree moments of unknown distributions in the presence of adversarial outliers and design a new family of convex relaxations for k-means clustering based on the sum-of-squares method. As an immediate corollary, for any γ > 0, we obtain an efficient algorithm for learning the means of a mixture of k arbitrary distributions in ℝ^{d} in time d^{O(1/γ)} so long as the means have separation Ω(k^{γ}). This in particular yields an algorithm for learning Gaussian mixtures with separation Ω(k^{γ}), thus partially resolving an open problem of Regev and Vijayaraghavan (2017). The guarantees of our robust estimation algorithms improve in many cases significantly over the best previous ones, obtained in recent works. We also show that the guarantees of our algorithms match information-theoretic lower bounds for the class of distributions we consider. These improved guarantees allow us to give improved algorithms for independent component analysis and learning mixtures of Gaussians in the presence of outliers. We also show a sharp upper bound on the sum-of-squares norms for moment tensors of any distribution that satisfies the Poincaré inequality. The Poincaré inequality is a central inequality in probability theory, and a large class of distributions satisfy it, including Gaussians, product distributions, strongly log-concave distributions, and any sum or uniformly continuous transformation of such distributions. As a consequence, all of the above algorithmic improvements hold for distributions satisfying the Poincaré inequality. 

STOC '18: "Sum-of-Squares Meets Nash: ..."
Sum-of-Squares Meets Nash: Lower Bounds for Finding Any Equilibrium
Pravesh K. Kothari and Ruta Mehta (Princeton University, USA; Institute for Advanced Study at Princeton, USA; University of Illinois at Urbana-Champaign, USA) Computing a Nash equilibrium (NE) in a two-player game is a central question in algorithmic game theory. The main motivation of this work is to understand the power of the sum-of-squares method in computing equilibria, both exact and approximate. Previous works in this context have focused on the hardness of approximating "best" equilibria with respect to some natural quality measure on equilibria, such as social welfare. Such results, however, do not directly relate to the complexity of the problem of finding any equilibrium. In this work, we propose a framework of roundings for the sum-of-squares algorithm (and convex relaxations in general) applicable to finding approximate/exact equilibria in two-player bimatrix games. Specifically, we define the notion of oblivious roundings with verification oracle (OV). These are algorithms that can access a solution to the degree-d SoS relaxation to construct a list of candidate (partial) solutions and invoke a verification oracle to check if a candidate in the list gives an (exact or approximate) equilibrium. This framework captures most known approximation algorithms in combinatorial optimization, including the celebrated semidefinite programming based algorithms for Max-Cut, Constraint Satisfaction Problems, and the recent works on SoS relaxations for Unique Games/Small-Set Expansion, Best Separable State, and many problems in unsupervised machine learning. Our main results are strong unconditional lower bounds in this framework. Specifically, we show that for є = Θ(1/poly(n)), there is no algorithm that uses an o(n)-degree SoS relaxation to construct a 2^{o(n)}-size list of candidates and obtain an є-approximate NE. For some constant є, we show a similar result for degree-o(log(n)) SoS relaxations and list size n^{o(log(n))}. 
Our results can be seen as an unconditional confirmation, in our restricted algorithmic framework, of the recent Exponential Time Hypothesis for PPAD. Our proof strategy involves constructing a family of games that all share a common sum-of-squares solution, but every (approximate) equilibrium of any game is far from every equilibrium of any other game in the family (in either player's strategy). Along the way, we strengthen the classical unconditional lower bound against enumerative algorithms for finding approximate equilibria due to Daskalakis and Papadimitriou, and the classical hardness of computing equilibria due to Gilboa and Zemel. 

Kothari, Robin 
STOC '18: "The Polynomial Method Strikes ..."
The Polynomial Method Strikes Back: Tight Quantum Query Bounds via Dual Polynomials
Mark Bun, Robin Kothari, and Justin Thaler (Princeton University, USA; Microsoft Research, USA; Georgetown University, USA) The approximate degree of a Boolean function f is the least degree of a real polynomial that approximates f pointwise to error at most 1/3. The approximate degree of f is known to be a lower bound on the quantum query complexity of f (Beals et al., FOCS 1998 and J. ACM 2001). We resolve or nearly resolve the approximate degree and quantum query complexities of several basic functions. Specifically, we show the following: *k-distinctness: For any constant k, the approximate degree and quantum query complexity of the k-distinctness function is Ω(n^{3/4−1/(2k)}). This is nearly tight for large k, as Belovs (FOCS 2012) has shown that for any constant k, the approximate degree and quantum query complexity of k-distinctness is O(n^{3/4−1/(2^{k+2}−4)}). *Image Size Testing: The approximate degree and quantum query complexity of testing the size of the image of a function [n] → [n] is Ω(n^{1/2}). This proves a conjecture of Ambainis et al. (SODA 2016), and it implies tight lower bounds on the approximate degree and quantum query complexity of the following natural problems. **k-junta testing: A tight Ω(k^{1/2}) lower bound for k-junta testing, answering the main open question of Ambainis et al. (SODA 2016). **Statistical Distance from Uniform: A tight Ω(n^{1/2}) lower bound for approximating the statistical distance from uniform of a distribution, answering the main question left open by Bravyi et al. (STACS 2010 and IEEE Trans. Inf. Theory 2011). **Shannon entropy: A tight Ω(n^{1/2}) lower bound for approximating Shannon entropy up to a certain additive constant, answering a question of Li and Wu (2017). *Surjectivity: The approximate degree of the Surjectivity function is Ω(n^{3/4}). The best prior lower bound was Ω(n^{2/3}). Our result matches an upper bound of Õ(n^{3/4}) due to Sherstov, which we reprove using different techniques. 
The quantum query complexity of this function is known to be Θ(n) (Beame and Machmouchi, Quantum Inf. Comput. 2012 and Sherstov, FOCS 2015). Our upper bound for Surjectivity introduces new techniques for approximating Boolean functions by lowdegree polynomials. Our lower bounds are proved by significantly refining techniques recently introduced by Bun and Thaler (FOCS 2017). 

Koucký, Michal 
STOC '18: "Simulation Beats Richness: ..."
Simulation Beats Richness: New Data-Structure Lower Bounds
Arkadev Chattopadhyay, Michal Koucký, Bruno Loff, and Sagnik Mukhopadhyay (TIFR, India; Charles University in Prague, Czechia; INESC TEC, Portugal; University of Porto, Portugal; KTH, Sweden) We develop a new technique for proving lower bounds in the setting of asymmetric communication, a model that was introduced in the famous works of Miltersen (STOC '94) and Miltersen, Nisan, Safra, and Wigderson (STOC '95). At the core of our technique is the first simulation theorem in the asymmetric setting, where Alice gets a p × n matrix x over F_{2} and Bob gets a vector y ∈ F_{2}^{n}. Alice and Bob need to evaluate f(x·y) for a Boolean function f: {0,1}^{p} → {0,1}. Our simulation theorems show that a deterministic/randomized communication protocol exists for this problem, with cost C·n for Alice and C for Bob, if and only if there exists a deterministic/randomized *parity decision tree* of cost Θ(C) for evaluating f. As applications of this technique, we obtain the following results: 1. The first strong lower bounds against randomized data-structure schemes for the Vector-Matrix-Vector product problem over F_{2}. Moreover, our method yields strong lower bounds even when the data-structure scheme has a tiny advantage over random guessing. 2. The first lower bounds against randomized data-structure schemes for two natural Boolean variants of Orthogonal Vector Counting. 3. We construct an asymmetric communication problem and obtain a deterministic lower bound for it which is provably better than any lower bound that may be obtained by the classical Richness Method of Miltersen et al. (STOC '95). This seems to be the first known limitation of the Richness Method in the context of proving deterministic lower bounds. 
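The communication problem at the core of the simulation theorem is easy to state in code: Alice's matrix and Bob's vector determine the p-bit string x·y over F_2 that is fed into the outer function f. A minimal sketch (the choice of f as parity is ours, purely for illustration):

```python
def f2_matvec(x, y):
    """x: p-by-n 0/1 matrix (list of rows), y: length-n 0/1 vector.
    Returns x·y over F_2, the p-bit string on which the outer
    function f is evaluated in the simulation theorem."""
    return [sum(xi & yi for xi, yi in zip(row, y)) % 2 for row in x]

def f_parity(z):
    """An example outer function f: {0,1}^p -> {0,1} (illustrative choice)."""
    return sum(z) % 2
```

The theorem relates the communication cost of computing f(x·y) to the cost of a parity decision tree for f, i.e., a decision tree whose queries are parities of subsets of f's input bits.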

Kozma, László 
STOC '18: "Smooth Heaps and a Dual View ..."
Smooth Heaps and a Dual View of Self-Adjusting Data Structures
László Kozma and Thatchaphol Saranurak (Eindhoven University of Technology, Netherlands; KTH, Sweden)
We present a new connection between self-adjusting binary search trees (BSTs) and heaps, two fundamental, extensively studied, and practically relevant families of data structures (Allen and Munro, 1978; Sleator and Tarjan, 1983; Fredman, Sedgewick, Sleator, and Tarjan, 1986; Wilber, 1989; Fredman, 1999; Iacono and Özkan, 2014). Roughly speaking, we map an arbitrary heap algorithm within a broad and natural model to a corresponding BST algorithm with the same cost on a dual sequence of operations (i.e., the same sequence with the roles of time and keyspace switched). This is the first general transformation between the two families of data structures.
There is a rich theory of dynamic optimality for BSTs (i.e. the theory of competitiveness
between BST algorithms). The lack of an analogous theory for heaps has been noted in the literature (e.g., Pettie, 2005 and 2008). Through our connection, we transfer all instance-specific lower bounds known for BSTs to a general model of heaps, initiating a theory of dynamic optimality for heaps.
On the algorithmic side, we obtain a new, simple and efficient heap algorithm, which we call the smooth heap. We show the smooth heap to be the heap counterpart of Greedy, the BST algorithm with the strongest proven and conjectured properties from the literature, conjectured to be instance-optimal (Lucas, 1988; Munro, 2000; Demaine et al., 2009). Assuming the optimality of Greedy, the smooth heap is also optimal within our model of heap algorithms.
Intriguingly, the smooth heap, although derived from a non-practical BST algorithm, is
simple and easy to implement (e.g., it stores no auxiliary data besides the keys and tree pointers). It can be seen as a variation on the popular pairing heap data structure, extending it with a "power-of-two-choices" type of heuristic. For the smooth heap we obtain instance-specific upper bounds, with applications in adaptive sorting, and we see it as a promising candidate for the long-standing question of a simpler alternative to Fibonacci heaps.
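For reference, the pairing heap that the smooth heap varies can be sketched in a few lines. This is the classic two-pass variant, not the smooth heap itself; the smooth heap replaces the restructuring rule:

```python
class Node:
    __slots__ = ('key', 'children')
    def __init__(self, key):
        self.key = key
        self.children = []

class PairingHeap:
    """Classic pairing heap with two-pass delete-min. Like the smooth heap,
    it stores nothing besides keys and tree pointers."""
    def __init__(self):
        self.root = None

    def _meld(self, a, b):
        # link two heap-ordered trees: the larger root becomes a child
        if a is None: return b
        if b is None: return a
        if b.key < a.key:
            a, b = b, a
        a.children.append(b)
        return a

    def insert(self, key):
        self.root = self._meld(self.root, Node(key))

    def find_min(self):
        return self.root.key

    def delete_min(self):
        key = self.root.key
        kids = self.root.children
        # two-pass pairing: meld neighbours left to right, then fold right to left
        paired = [self._meld(kids[i], kids[i + 1] if i + 1 < len(kids) else None)
                  for i in range(0, len(kids), 2)]
        root = None
        for t in reversed(paired):
            root = self._meld(root, t)
        self.root = root
        return key
```

The smooth heap's "power-of-two-choices" style heuristic changes how the orphaned subtrees are recombined after a delete-min; the interface stays the same.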
@InProceedings{STOC18p801,
author = {László Kozma and Thatchaphol Saranurak},
title = {Smooth Heaps and a Dual View of Self-Adjusting Data Structures},
booktitle = {Proc.\ STOC},
publisher = {ACM},
pages = {801--814},
doi = {10.1145/3188745.3188864},
year = {2018},
}


Krishnaswamy, Ravishankar 
STOC '18: "Constant Approximation for ..."
Constant Approximation for k-Median and k-Means with Outliers via Iterative Rounding
Ravishankar Krishnaswamy, Shi Li, and Sai Sandeep (Microsoft Research, India; SUNY Buffalo, USA) In this paper, we present a new iterative rounding framework for many clustering problems. Using this, we obtain an (α_{1} + є ≤ 7.081 + є)-approximation algorithm for k-median with outliers, greatly improving upon the large implicit constant approximation ratio of Chen. For k-means with outliers, we give an (α_{2} + є ≤ 53.002 + є)-approximation, which is the first O(1)-approximation for this problem. The iterative algorithm framework is very versatile; we show how it can be used to give α_{1}- and (α_{1} + є)-approximation algorithms for the matroid and knapsack median problems respectively, improving upon the previous best approximation ratios of 8 due to Swamy and 17.46 due to Byrka et al. The natural LP relaxation for the k-median/k-means with outliers problem has an unbounded integrality gap. In spite of this negative result, our iterative rounding framework shows that we can round an LP solution to an almost-integral solution of small cost, in which we have at most two fractionally open facilities. Thus, the LP integrality gap arises due to the gap between almost-integral and fully-integral solutions. Then, using a preprocessing procedure, we show how to convert an almost-integral solution to a fully-integral solution losing only a constant factor in the approximation ratio. By further using a sparsification technique, the additive factor loss incurred by the conversion can be reduced to any є > 0. 
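To make the objective concrete: a k-median-with-outliers solution connects each point to its nearest open center and then discards the z most expensive connections. A minimal one-dimensional cost evaluator (ours, for illustration; the paper's contribution is *finding* good centers, not evaluating a given set):

```python
def kmedian_outliers_cost(points, centers, z):
    """k-median-with-outliers objective on the real line: sum of each
    point's distance to its nearest center, after dropping the z points
    with the largest connection cost (the outliers)."""
    dists = sorted(min(abs(p - c) for c in centers) for p in points)
    return sum(dists[:len(points) - z])  # discard the z largest distances
```

For example, with points [0, 1, 2, 100], the single center 1, and z = 1, the far-away point 100 is discarded as an outlier and the cost is 0 + 1 + 1 = 2; with z = 0 the same solution would pay 101.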

Kuhn, Fabian 
STOC '18: "Deterministic Distributed ..."
Deterministic Distributed Edge-Coloring with Fewer Colors
Mohsen Ghaffari, Fabian Kuhn, Yannic Maus, and Jara Uitto (ETH Zurich, Switzerland; University of Freiburg, Germany) We present a deterministic distributed algorithm, in the LOCAL model, that computes a (1+o(1))Δ-edge-coloring in polylogarithmic time, so long as the maximum degree Δ = Ω(log n). For smaller Δ, we give a polylogarithmic-time 3Δ/2-edge-coloring. These are the first deterministic algorithms to go below the natural barrier of 2Δ−1 colors, and they improve significantly on the recent polylogarithmic-time (2Δ−1)(1+o(1))-edge-coloring of Ghaffari and Su [SODA'17] and the (2Δ−1)-edge-coloring of Fischer, Ghaffari, and Kuhn [FOCS'17], positively answering the main open question of the latter. The key technical ingredient of our algorithm is a simple and novel gradual packing of judiciously chosen near-maximum matchings, each of which becomes one of the color classes. 
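The 2Δ−1 barrier comes from a simple counting fact: every edge shares an endpoint with at most 2(Δ−1) other edges, so a sequential greedy coloring never needs more than 2Δ−1 colors. Going below that bound, deterministically and distributedly, is what the paper achieves. A sketch of the greedy baseline (names are ours):

```python
def greedy_edge_coloring(edges):
    """Sequential greedy proper edge coloring: give each edge the smallest
    color unused on edges sharing an endpoint with it. Since an edge has at
    most 2(Δ-1) neighbours, colors {0, ..., 2Δ-2} always suffice."""
    color = {}
    used = {}  # vertex -> set of colors already on its incident edges
    for (u, v) in edges:
        forbidden = used.setdefault(u, set()) | used.setdefault(v, set())
        c = next(c for c in range(len(forbidden) + 1) if c not in forbidden)
        color[(u, v)] = c
        used[u].add(c)
        used[v].add(c)
    return color
```

Note this inherently sequential scan is exactly what a LOCAL algorithm cannot do; the paper's matching-packing approach is how the color classes are built in polylogarithmic distributed time.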

Kumar, Ashutosh 
STOC '18: "Non-Malleable Secret Sharing ..."
Non-Malleable Secret Sharing
Vipul Goyal and Ashutosh Kumar (Carnegie Mellon University, USA; University of California at Los Angeles, USA) A number of works have focused on the setting where an adversary tampers with the shares of a secret sharing scheme. This includes literature on verifiable secret sharing, algebraic manipulation detection (AMD) codes, and error-correcting or -detecting codes in general. In this work, we initiate a systematic study of what we call non-malleable secret sharing. Very roughly, the guarantee we seek is the following: the adversary may potentially tamper with all of the shares, and still, either the reconstruction procedure outputs the original secret, or the original secret is "destroyed" and the reconstruction outputs a string which is completely "unrelated" to the original secret. Recent exciting work on non-malleable codes in the split-state model led to constructions which can be seen as 2-out-of-2 non-malleable secret sharing schemes. These constructions have already found a number of applications in cryptography. We investigate the natural question of constructing t-out-of-n non-malleable secret sharing schemes. Such a secret sharing scheme ensures that only a set consisting of t or more shares can reconstruct the secret and, additionally, guarantees non-malleability under an attack where potentially every share may be tampered with. Techniques used for obtaining split-state non-malleable codes (or 2-out-of-2 non-malleable secret sharing) are (in some form) based on two-source extractors and seem not to generalize to our setting. Our first result is the construction of a t-out-of-n non-malleable secret sharing scheme against an adversary who arbitrarily tampers with each of the shares independently. Our construction is unconditional and features statistical non-malleability. As our main technical result, we present a t-out-of-n non-malleable secret sharing scheme in a stronger adversarial model where an adversary may jointly tamper with multiple shares. 
Our construction is unconditional and the adversary is allowed to jointly tamper with subsets of up to (t−1) shares. We believe that the techniques introduced in our construction may be of independent interest. Inspired by the well-studied problem of perfectly secure message transmission introduced in the seminal work of Dolev et al. (J. ACM '93), we also initiate the study of non-malleable message transmission. Non-malleable message transmission can be seen as a natural generalization in which the goal is to ensure that the receiver either receives the original message, or the original message is essentially destroyed and the receiver receives an "unrelated" message, when the network is under the influence of an adversary who can byzantinely corrupt all the nodes in the network. As natural applications of our non-malleable secret sharing schemes, we propose constructions for non-malleable message transmission. 

Kutten, Shay 
STOC '18: "Approximating Generalized ..."
Approximating Generalized Network Design under (Dis)economies of Scale with Applications to Energy Efficiency
Yuval Emek, Shay Kutten, Ron Lavi, and Yangguang Shi (Technion, Israel)
In a generalized network design (GND) problem, a set of resources is assigned (non-exclusively) to multiple requests. Each request contributes its weight to the resources it uses, and the total load on a resource is then translated to the cost it incurs via a resource-specific cost function. Motivated by energy-efficiency applications, there has recently been growing interest in GND using cost functions that exhibit (dis)economies of scale ((D)oS), namely, cost functions that appear subadditive for small loads and superadditive for larger loads.
The current paper advances the existing literature on approximation algorithms for GND problems with (D)oS cost functions in various aspects: (1) while the existing results are restricted to routing requests in undirected graphs, identifying the resources with the graph's edges, the current paper presents a generic approximation framework that yields approximation results for a much wider family of requests (including various types of Steiner tree and Steiner forest requests) in both directed and undirected graphs, where the resources can be identified with either the edges or the vertices; (2) while the existing results assume that a request contributes the same weight to each resource it uses, our approximation framework allows for unrelated weights, thus providing the first nontrivial approximation for the problem of scheduling unrelated parallel machines with (D)oS cost functions; (3) while most of the existing approximation algorithms are based on convex programming, our approximation framework is fully combinatorial and runs in strongly polynomial time; (4) the family of (D)oS cost functions considered in the current paper is more general than the one considered in the existing literature, providing a more accurate abstraction for practical energy conservation scenarios; and (5) we obtain the first approximation ratio for GND with (D)oS cost functions that depends only on the parameters of the resources' technology and does not grow with the number of resources, the number of requests, or their weights. The design of our approximation framework relies heavily on Roughgarden's smoothness toolbox (JACM 2015), thus demonstrating the possible usefulness of this toolbox in the area of approximation algorithms.
@InProceedings{STOC18p598,
author = {Yuval Emek and Shay Kutten and Ron Lavi and Yangguang Shi},
title = {Approximating Generalized Network Design under (Dis)economies of Scale with Applications to Energy Efficiency},
booktitle = {Proc.\ STOC},
publisher = {ACM},
pages = {598--606},
doi = {10.1145/3188745.3188812},
year = {2018},
}


Kwok, Tsz Chiu 
STOC '18: "The Paulsen Problem, Continuous ..."
The Paulsen Problem, Continuous Operator Scaling, and Smoothed Analysis
Tsz Chiu Kwok, Lap Chi Lau, Yin Tat Lee, and Akshay Ramachandran (University of Waterloo, Canada; University of Washington, USA) The Paulsen problem is a basic open problem in operator theory: given vectors u_{1}, …, u_{n} ∈ ℝ^{d} that є-nearly satisfy Parseval's condition and the equal norm condition, are they close to a set of vectors v_{1}, …, v_{n} ∈ ℝ^{d} that exactly satisfy Parseval's condition and the equal norm condition? Given u_{1}, …, u_{n}, the squared distance (to the set of exact solutions) is defined as inf_{v} ∑_{i=1}^{n} ‖u_{i} − v_{i}‖_{2}^{2}, where the infimum is over the set of exact solutions. Previous results show that the squared distance of any є-nearly solution is at most O(poly(d,n,є)) and that there are є-nearly solutions with squared distance at least Ω(dє). The fundamental open question is whether the squared distance can be independent of the number of vectors n. We answer this question affirmatively by proving that the squared distance of any є-nearly solution is O(d^{13/2} є). Our approach is based on a continuous version of the operator scaling algorithm and consists of two parts. First, we define a dynamical system based on operator scaling and use it to prove that the squared distance of any є-nearly solution is O(d^{2} n є). Then, we show that by randomly perturbing the input vectors, the dynamical system converges faster and the squared distance of an є-nearly solution is O(d^{5/2} є) when n is large enough and є is small enough. To analyze the convergence of the dynamical system, we develop some new techniques for lower bounding the operator capacity, a concept introduced by Gurvits to analyze the operator scaling algorithm. 
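Concretely, Parseval's condition says that the outer products of the u_i sum to the identity, and the equal norm condition says that each ‖u_i‖² = d/n. The additive defect below is an illustrative proxy for the є-near conditions (the paper measures nearness differently; function and variable names are ours):

```python
def frame_defect(vectors):
    """How far u_1, ..., u_n in R^d are from an equal-norm Parseval frame,
    as the sum of two additive defects: entrywise distance of sum_i u_i u_i^T
    from the identity, plus the deviations of each ||u_i||^2 from d/n.
    Both defects are exactly 0 for an equal-norm Parseval frame."""
    n, d = len(vectors), len(vectors[0])
    # S = sum of outer products u_i u_i^T, computed entrywise
    S = [[sum(u[i] * u[j] for u in vectors) for j in range(d)] for i in range(d)]
    parseval = sum(abs(S[i][j] - (1.0 if i == j else 0.0))
                   for i in range(d) for j in range(d))
    norm = sum(abs(sum(x * x for x in u) - d / n) for u in vectors)
    return parseval + norm
```

For d = 2 and n = 4, the four vectors (±1/√2 on one coordinate each, used twice) form an equal-norm Parseval frame, so their defect vanishes.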

Kyng, Rasmus 
STOC '18: "Incomplete Nested Dissection ..."
Incomplete Nested Dissection
Rasmus Kyng, Richard Peng, Robert Schwieterman, and Peng Zhang (Simons Institute for the Theory of Computing, Berkeley, USA; Georgia Tech, USA) We present an asymptotically faster algorithm for solving linear systems in well-structured 3-dimensional truss stiffness matrices. These linear systems arise from linear elasticity problems, and can be viewed as extensions of graph Laplacians into higher dimensions. Faster solvers for the 2D variants of such systems have been studied using generalizations of tools for solving graph Laplacians [Daitch-Spielman CSC '07, Shklarski-Toledo SIMAX '08]. Given a 3-dimensional truss over n vertices which is formed from a union of k convex structures (tetrahedral meshes) with bounded aspect ratios, whose individual tetrahedrons are also in some sense well-conditioned, our algorithm solves a linear system in the associated stiffness matrix up to accuracy є in time O(k^{1/3} n^{5/3} log(1/є)). This asymptotically improves on the O(n^{2}) running time of Nested Dissection for all k ≪ n. We also give a result that improves on Nested Dissection even when we allow any aspect ratio for each of the k convex structures (but we still require well-conditioned individual tetrahedrons). In this regime, we improve on Nested Dissection for k ≪ n^{1/44}. The key idea of our algorithm is to combine nested dissection and support theory. Both of these techniques for solving linear systems are well studied, but usually separately. Our algorithm decomposes a 3-dimensional truss into separate and balanced regions with small boundaries. We then bound the spectrum of each such region separately, and utilize such bounds to obtain improved algorithms by preconditioning with partial states of separator-based Gaussian elimination. 

Łącki, Jakub 
STOC '18: "Round Compression for Parallel ..."
Round Compression for Parallel Matching Algorithms
Artur Czumaj, Jakub Łącki, Aleksander Mądry, Slobodan Mitrović, Krzysztof Onak, and Piotr Sankowski (University of Warwick, UK; Google Research, USA; Massachusetts Institute of Technology, USA; EPFL, Switzerland; IBM Research, USA; University of Warsaw, Poland) For over a decade now we have been witnessing the success of massive parallel computation (MPC) frameworks, such as MapReduce, Hadoop, Dryad, or Spark. One of the reasons for their success is the fact that these frameworks are able to accurately capture the nature of large-scale computation. In particular, compared to the classic distributed algorithms or PRAM models, these frameworks allow for much more local computation. The fundamental question that arises in this context, though, is: can we leverage this additional power to obtain even faster parallel algorithms? A prominent example here is the maximum matching problem—one of the most classic graph problems. It is well known that in the PRAM model one can compute a 2-approximate maximum matching in O(log n) rounds. However, the exact complexity of this problem in the MPC framework is still far from understood. Lattanzi et al. (SPAA 2011) showed that if each machine has n^{1+Ω(1)} memory, this problem can also be solved 2-approximately in a constant number of rounds. These techniques, as well as the approaches developed in the follow-up work, seem, though, to get stuck in a fundamental way at roughly O(log n) rounds once we enter the (at most) near-linear memory regime. It is thus entirely possible that in this regime, which captures in particular the case of sparse graph computations, the best MPC round complexity matches what one can already get in the PRAM model, without the need to take advantage of the extra local computation power. In this paper, we finally refute that possibility. That is, we break the above O(log n) round complexity bound even in the case of slightly sublinear memory per machine. 
In fact, our improvement here is almost exponential: we are able to deliver a (2+є)approximate maximum matching, for any fixed constant є>0, in O((loglogn)^{2}) rounds. To establish our result we need to deviate from the previous work in two important ways that are crucial for exploiting the power of the MPC model, as compared to the PRAM model. Firstly, we use vertex–based graph partitioning, instead of the edge–based approaches that were utilized so far. Secondly, we develop a technique of round compression. This technique enables one to take a (distributed) algorithm that computes an O(1)approximation of maximum matching in O(logn) independent PRAM phases and implement a superconstant number of these phases in only a constant number of MPC rounds. 
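For context, the classical 2-approximation guarantee mentioned above rests on maximality: any maximal matching contains at least half as many edges as a maximum matching. A minimal sequential sketch of that fact (illustrative only, not the paper's MPC algorithm):

```python
def greedy_maximal_matching(edges):
    """Greedily add edges whose endpoints are both still unmatched.

    The result is a maximal matching, which is a 2-approximation of
    the maximum matching: every edge of a maximum matching shares an
    endpoint with some chosen edge.
    """
    matched = set()
    matching = []
    for u, v in edges:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching

# A path on 4 vertices: here greedy happens to find the optimum (2 edges).
path = [(0, 1), (1, 2), (2, 3)]
m = greedy_maximal_matching(path)
```

The parallel versions compute a maximal (or near-maximal) matching in rounds rather than one edge at a time; the round-compression technique is about packing many such phases into few MPC rounds.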

Laekhanukit, Bundit 
STOC '18: "On the Parameterized Complexity ..."
On the Parameterized Complexity of Approximating Dominating Set
Karthik C. S., Bundit Laekhanukit, and Pasin Manurangsi (Weizmann Institute of Science, Israel; Max Planck Institute for Informatics, Germany; Shanghai University of Finance and Economics, China; University of California at Berkeley, USA) We study the parameterized complexity of approximating the k-Dominating Set (DomSet) problem where an integer k and a graph G on n vertices are given as input, and the goal is to find a dominating set of size at most F(k) · k whenever the graph G has a dominating set of size k. When such an algorithm runs in time T(k)·poly(n) (i.e., FPT time) for some computable function T, it is said to be an F(k)-FPT-approximation algorithm for k-DomSet. Whether such an algorithm exists is listed in the seminal book of Downey and Fellows (2013) as one of the “most infamous” open problems in Parameterized Complexity. This work gives an almost complete answer to this question by showing the non-existence of such an algorithm under W[1]≠FPT and further providing tighter running time lower bounds under stronger hypotheses. Specifically, we prove the following for all computable functions T, F and every constant ε > 0: (i) Assuming W[1]≠FPT, there is no F(k)-FPT-approximation algorithm for k-DomSet, (ii) Assuming the Exponential Time Hypothesis (ETH), there is no F(k)-approximation algorithm for k-DomSet that runs in T(k)·n^{o(k)} time, (iii) Assuming the Strong Exponential Time Hypothesis (SETH), for every integer k ≥ 2, there is no F(k)-approximation algorithm for k-DomSet that runs in T(k)·n^{k − ε} time, (iv) Assuming the k-SUM Hypothesis, for every integer k ≥ 3, there is no F(k)-approximation algorithm for k-DomSet that runs in T(k)·n^{⌈ k/2 ⌉ − ε} time. Previously, only constant-ratio FPT-approximation algorithms were ruled out under W[1]≠FPT, and (log^{1/4 − ε} k)-FPT-approximation algorithms were ruled out under ETH [Chen and Lin, FOCS 2016]. 
Recently, the non-existence of an F(k)-FPT-approximation algorithm for any function F was shown under Gap-ETH [Chalermsook et al., FOCS 2017]. Note that, to the best of our knowledge, no running time lower bound of the form n^{δ k} for any absolute constant δ > 0 was known before, even for any constant inapproximability ratio. Our results are obtained by establishing a connection between communication complexity and hardness of approximation, generalizing the ideas from a recent breakthrough work of Abboud et al. [FOCS 2017]. Specifically, we show that to prove hardness of approximation of a certain parameterized variant of the label cover problem, it suffices to devise a specific protocol for a communication problem that depends on which hypothesis we rely on. Each of these communication problems turns out to be either a well-studied problem or a variant of one; this allows us to easily apply known techniques to solve them. 
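The exhaustive search that these lower bounds say is essentially unbeatable can be made concrete; a toy pure-Python sketch (names ours, not from the paper) of the roughly n^k brute force for deciding k-DomSet:

```python
from itertools import combinations

def has_dominating_set(adj, k):
    """Exhaustive search for a dominating set of size at most k.

    adj maps each vertex to its set of neighbours.  Trying all
    candidate sets takes roughly n^k time; the results above say that
    even an F(k)-approximation cannot substantially beat this under
    SETH (and similar hypotheses).
    """
    vertices = list(adj)
    for size in range(1, k + 1):
        for cand in combinations(vertices, size):
            dominated = set(cand)
            for v in cand:
                dominated |= adj[v]
            if len(dominated) == len(vertices):
                return True
    return False

# A 4-cycle: no single vertex dominates it, but two opposite vertices do.
cycle4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
```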

Lapinskas, John 
STOC '18: "Fine-Grained Reductions from ..."
Fine-Grained Reductions from Approximate Counting to Decision
Holger Dell and John Lapinskas (Saarland University, Germany; M2CI, Germany; University of Oxford, UK) In this paper, we introduce a general framework for fine-grained reductions of approximate counting problems to their decision versions. (Thus we use an oracle that decides whether any witness exists to multiplicatively approximate the number of witnesses with minimal overhead.) This mirrors a foundational result of Sipser (STOC 1983) and Stockmeyer (SICOMP 1985) in the polynomial-time setting, and a similar result of Müller (IWPEC 2006) in the FPT setting. Using our framework, we obtain such reductions for some of the most important problems in fine-grained complexity: the Orthogonal Vectors problem, 3SUM, and the Negative-Weight Triangle problem (which is closely related to All-Pairs Shortest Paths). While all these problems have simple algorithms over which it is conjectured that no polynomial improvement is possible, our reductions would remain interesting even if these conjectures were proved; they have only polylogarithmic overhead, and can therefore be applied to subpolynomial improvements such as the n^{3}/exp(Θ(√logn))-time algorithm for the Negative-Weight Triangle problem due to Williams (STOC 2014). Our framework is also general enough to apply to versions of the problems for which more efficient algorithms are known. For example, the Orthogonal Vectors problem over GF(m)^{d} for constant m can be solved in time n·poly(d) by a result of Williams and Yu (SODA 2014); our result implies that we can approximately count the number of orthogonal pairs with essentially the same running time. We also provide a fine-grained reduction from approximate #SAT to SAT. Suppose the Strong Exponential Time Hypothesis (SETH) is false, so that for some 1<c<2 and all k there is an O(c^{n})-time algorithm for #k-SAT. Then we prove that for all k, there is an O((c+o(1))^{n})-time algorithm for approximate #k-SAT. 
In particular, our result implies that the Exponential Time Hypothesis (ETH) is equivalent to the seemingly weaker statement that there is no algorithm to approximate #3-SAT to within a factor of 1+ε in time 2^{o(n)}/ε^{2} (taking ε > 0 as part of the input). A full version of this paper containing detailed proofs is available at https://arxiv.org/abs/1707.04609. 
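For concreteness, a toy brute force for the counting version of Orthogonal Vectors (illustrative only; the paper's contribution is reducing *approximate* counting to the decision version with polylogarithmic overhead, not this exact count):

```python
def count_orthogonal_pairs(A, B):
    """Count pairs (a, b) with a in A, b in B and <a, b> = 0.

    This is the exact N^2 * D brute force over binary vectors.  The
    decision version merely asks whether this count is nonzero; the
    fine-grained reduction recovers a multiplicative approximation of
    the count from a decision oracle alone.
    """
    return sum(
        1
        for a in A
        for b in B
        if all(x * y == 0 for x, y in zip(a, b))
    )

A = [(1, 0, 1), (0, 1, 0)]
B = [(0, 1, 0), (1, 0, 0)]
# (1,0,1) is orthogonal to (0,1,0), and (0,1,0) to (1,0,0): 2 pairs.
```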

Larsen, Kasper Green 
STOC '18: "Tight Cell Probe Bounds for ..."
Tight Cell Probe Bounds for Succinct Boolean Matrix-Vector Multiplication
Diptarka Chakraborty, Lior Kamma, and Kasper Green Larsen (Charles University in Prague, Czechia; Aarhus University, Denmark) The conjectured hardness of Boolean matrix-vector multiplication has been used with great success to prove conditional lower bounds for numerous important data structure problems, see Henzinger et al. [STOC’15]. In recent work, Larsen and Williams [SODA’17] attacked the problem from the upper bound side and gave a surprising cell probe data structure (that is, we only charge for memory accesses, while computation is free). Their cell probe data structure answers queries in Õ(n^{7/4}) time and is succinct in the sense that it stores the input matrix in read-only memory, plus an additional Õ(n^{7/4}) bits on the side. In this paper, we essentially settle the cell probe complexity of succinct Boolean matrix-vector multiplication. We present a new cell probe data structure with query time Õ(n^{3/2}) storing just Õ(n^{3/2}) bits on the side. We then complement our data structure with a lower bound showing that any data structure storing r bits on the side, with n < r < n^{2}, must have query time t satisfying t·r = Ω(n^{3}). For r ≤ n, any data structure must have t = Ω(n^{2}). Since lower bounds in the cell probe model also apply to classic word-RAM data structures, the lower bounds naturally carry over. We also prove similar lower bounds for matrix-vector multiplication over F_{2}. Kasper Green Larsen, Omri Weinstein, and Huacheng Yu (Aarhus University, Denmark; Columbia University, USA; Harvard University, USA) This paper proves the first super-logarithmic lower bounds on the cell probe complexity of dynamic boolean (a.k.a. decision) data structure problems, a long-standing milestone in data structure lower bounds. We introduce a new approach and use it to prove an Ω(log^{1.5} n) lower bound on the operational time of a wide range of boolean data structure problems, most notably, on the query time of dynamic range counting over F_{2}. 
Proving an ω(lg n) lower bound for this problem was explicitly posed as one of five important open problems in the late Mihai Pǎtraşcu’s obituary. This result also implies the first ω(lg n) lower bound for the classical 2D range counting problem, one of the most fundamental data structure problems in computational geometry and spatial databases. We derive similar lower bounds for boolean versions of dynamic polynomial evaluation and 2D rectangle stabbing, and for the (non-boolean) problems of range selection and range median. Our technical centerpiece is a new way of “weakly” simulating dynamic data structures using efficient one-way communication protocols with small advantage over random guessing. This simulation involves a surprising excursion to low-degree (Chebyshev) polynomials which may be of independent interest, and offers an entirely new algorithmic angle on the “cell sampling” method of Panigrahy et al. 
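For intuition about the query being served, here is the trivial O(n²)-work computation in both of the regimes the bounds cover (a plain illustration of the problem, not the data structure):

```python
def bool_mat_vec(M, x):
    """Boolean matrix-vector product: (Mx)_i = OR_j (M[i][j] AND x[j])."""
    return [any(mij and xj for mij, xj in zip(row, x)) for row in M]

def f2_mat_vec(M, x):
    """Matrix-vector product over F_2: the OR is replaced by XOR."""
    return [sum(mij & xj for mij, xj in zip(row, x)) % 2 for row in M]

M = [[1, 0],
     [1, 1]]
x = [1, 1]
# Boolean semiring: [1 OR 0, 1 OR 1]; over F_2: [1 XOR 0, 1 XOR 1].
```

The data structure question is how much faster than this a query can be answered after preprocessing M, when only a small number of extra bits may be stored beside the read-only matrix.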

Lau, Lap Chi 
STOC '18: "The Paulsen Problem, Continuous ..."
The Paulsen Problem, Continuous Operator Scaling, and Smoothed Analysis
Tsz Chiu Kwok, Lap Chi Lau, Yin Tat Lee, and Akshay Ramachandran (University of Waterloo, Canada; University of Washington, USA) The Paulsen problem is a basic open problem in operator theory: Given vectors u_{1}, …, u_{n} ∈ ℝ^{d} that є-nearly satisfy Parseval’s condition and the equal norm condition, are they close to a set of vectors v_{1}, …, v_{n} ∈ ℝ^{d} that exactly satisfy Parseval’s condition and the equal norm condition? Given u_{1}, …, u_{n}, the squared distance (to the set of exact solutions) is defined as inf_{v} ∑_{i=1}^{n} ‖u_{i} − v_{i}‖_{2}^{2} where the infimum is over the set of exact solutions. Previous results show that the squared distance of any є-nearly solution is at most O(poly(d,n,є)) and there are є-nearly solutions with squared distance at least Ω(d є). The fundamental open question is whether the squared distance can be independent of the number of vectors n. We answer this question affirmatively by proving that the squared distance of any є-nearly solution is O(d^{13/2} є). Our approach is based on a continuous version of the operator scaling algorithm and consists of two parts. First, we define a dynamical system based on operator scaling and use it to prove that the squared distance of any є-nearly solution is O(d^{2} n є). Then, we show that by randomly perturbing the input vectors, the dynamical system will converge faster and the squared distance of an є-nearly solution is O(d^{5/2} є) when n is large enough and є is small enough. To analyze the convergence of the dynamical system, we develop some new techniques for lower-bounding the operator capacity, a concept introduced by Gurvits to analyze the operator scaling algorithm. 
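To make the two conditions concrete, a small pure-Python check (illustrative only; `frame_operator` is a helper name introduced here) of an exact equal-norm Parseval frame in ℝ²: three vectors of squared norm d/n = 2/3 at 120° apart, the so-called Mercedes-Benz frame:

```python
import math

def frame_operator(vectors):
    """Return S = sum_i u_i u_i^T for 2-dimensional vectors, as a 2x2 list."""
    S = [[0.0, 0.0], [0.0, 0.0]]
    for (x, y) in vectors:
        S[0][0] += x * x
        S[0][1] += x * y
        S[1][0] += y * x
        S[1][1] += y * y
    return S

# Mercedes-Benz frame: n = 3 equally spaced unit directions in R^2,
# scaled so that each squared norm is d/n = 2/3.
r = math.sqrt(2.0 / 3.0)
angles = [math.pi / 2 + i * 2 * math.pi / 3 for i in range(3)]
frame = [(r * math.cos(t), r * math.sin(t)) for t in angles]

S = frame_operator(frame)
# Parseval's condition: S equals the identity matrix (up to rounding).
# Equal norm condition: every ||u_i||^2 equals 2/3.
```

An є-nearly solution satisfies both checks only up to є, and the theorem bounds how far such a configuration can be from an exact one.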

Lauria, Massimo 
STOC '18: "Clique Is Hard on Average ..."
Clique Is Hard on Average for Regular Resolution
Albert Atserias, Ilario Bonacina, Susanna F. de Rezende, Massimo Lauria, Jakob Nordström, and Alexander Razborov (Universitat Politècnica de Catalunya, Spain; KTH, Sweden; Sapienza University of Rome, Italy; University of Chicago, USA; Russian Academy of Sciences, Russia) We prove that for k ≪ n^{1/4}, regular resolution requires length n^{Ω(k)} to establish that an Erdős–Rényi graph with appropriately chosen edge density does not contain a k-clique. This lower bound is optimal up to the multiplicative constant in the exponent, and also implies unconditional n^{Ω(k)} lower bounds on running time for several state-of-the-art algorithms for finding maximum cliques in graphs. 

Lavi, Ron 
STOC '18: "Approximating Generalized ..."
Approximating Generalized Network Design under (Dis)economies of Scale with Applications to Energy Efficiency
Yuval Emek, Shay Kutten, Ron Lavi, and Yangguang Shi (Technion, Israel)
In a generalized network design (GND) problem, a set of resources are assigned (non-exclusively) to multiple requests. Each request contributes its weight to the resources it uses, and the total load on a resource is then translated to the cost it incurs via a resource-specific cost function. Motivated by energy efficiency applications, there has recently been growing interest in GND using cost functions that exhibit (dis)economies of scale ((D)oS), namely, cost functions that appear subadditive for small loads and superadditive for larger loads.
The current paper advances the existing literature on approximation algorithms for GND problems with (D)oS cost functions in various aspects: (1) while the existing results are restricted to routing requests in undirected graphs, identifying the resources with the graph's edges, the current paper presents a generic approximation framework that yields approximation results for a much wider family of requests (including various types of Steiner tree and Steiner forest requests) in both directed and undirected graphs, where the resources can be identified with either the edges or the vertices; (2) while the existing results assume that a request contributes the same weight to each resource it uses, our approximation framework allows for unrelated weights, thus providing the first non-trivial approximation for the problem of scheduling unrelated parallel machines with (D)oS cost functions; (3) while most of the existing approximation algorithms are based on convex programming, our approximation framework is fully combinatorial and runs in strongly polynomial time; (4) the family of (D)oS cost functions considered in the current paper is more general than the one considered in the existing literature, providing a more accurate abstraction for practical energy conservation scenarios; and (5) we obtain the first approximation ratio for GND with (D)oS cost functions that depends only on the parameters of the resources' technology and does not grow with the number of resources, the number of requests, or their weights. The design of our approximation framework relies heavily on Roughgarden's smoothness toolbox (JACM 2015), thus demonstrating the possible usefulness of this toolbox in the area of approximation algorithms.
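One illustrative (D)oS shape from the energy-efficiency literature is a fixed startup cost plus a superlinear term; this is only an assumed example family (the class treated in the paper is more general), sketched to show the sub/superadditive switch:

```python
def cost(load, sigma=10.0, alpha=2.0):
    """(D)oS-style cost: startup cost sigma plus a superlinear load term.

    Illustrative family only; the paper's cost-function class is broader.
    """
    return 0.0 if load == 0 else sigma + load ** alpha

# Economy of scale at small loads: packing weight 1+1 onto one resource
# pays one startup cost instead of two.
small_together = cost(2)          # 10 + 4 = 14
small_apart = cost(1) + cost(1)   # 2 * 11 = 22

# Diseconomy of scale at large loads: the superlinear term dominates,
# so spreading the load is now cheaper.
big_together = cost(20)           # 10 + 400 = 410
big_apart = cost(10) + cost(10)   # 2 * 110 = 220
```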
@InProceedings{STOC18p598,
author = {Yuval Emek and Shay Kutten and Ron Lavi and Yangguang Shi},
title = {Approximating Generalized Network Design under (Dis)economies of Scale with Applications to Energy Efficiency},
booktitle = {Proc.\ STOC},
publisher = {ACM},
pages = {598--606},
doi = {10.1145/3188745.3188812},
year = {2018},
}


Lee, James R. 
STOC '18: "k-Server via Multiscale Entropic ..."
k-Server via Multiscale Entropic Regularization
Sébastien Bubeck, Michael B. Cohen, Yin Tat Lee, James R. Lee, and Aleksander Mądry (Microsoft Research, USA; Massachusetts Institute of Technology, USA; University of Washington, USA) We present an O((log k)^{2})-competitive randomized algorithm for the k-server problem on hierarchically separated trees (HSTs). This is the first o(k)-competitive randomized algorithm for which the competitive ratio is independent of the size of the underlying HST. Our algorithm is designed in the framework of online mirror descent where the mirror map is a multiscale entropy. When combined with Bartal’s static HST embedding reduction, this leads to an O((log k)^{2} log n)-competitive algorithm on any n-point metric space. We give a new dynamic HST embedding that yields an O((log k)^{3} logΔ)-competitive algorithm on any metric space where the ratio of the largest to smallest nonzero distance is at most Δ. 

Lee, Yin Tat 
STOC '18: "A Matrix Expander Chernoff ..."
A Matrix Expander Chernoff Bound
Ankit Garg, Yin Tat Lee, Zhao Song, and Nikhil Srivastava (Microsoft Research, USA; University of Washington, USA; Harvard University, USA; University of Texas at Austin, USA; University of California at Berkeley, USA)
We prove a Chernoff-type bound for sums of matrix-valued random variables sampled via a random walk on an expander, confirming a conjecture due to [Wigderson and Xiao 06]. Our proof is based on a new multi-matrix extension of the Golden–Thompson inequality which improves upon the inequality in [Sutter, Berta and Tomamichel 17], as well as an adaptation of an argument for the scalar case due to [Healy 08]. Our new multi-matrix Golden–Thompson inequality could be of independent interest. Secondarily, we also provide a generic reduction showing that any concentration inequality for vector-valued martingales implies a concentration inequality for the corresponding expander walk, with a weakening of parameters proportional to the squared mixing time.
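A scalar toy of the phenomenon (the paper's bound concerns matrix-valued functions; this only illustrates the scalar expander Chernoff setting of Gillman and Healy): averaging a function along a random walk on the complete graph, a very good expander, concentrates near the stationary mean even though the samples are correlated.

```python
import random

def expander_walk_average(n=64, steps=20000, seed=0):
    """Average f over a seeded random walk on the complete graph K_n.

    f marks half the vertices, so the stationary mean is 1/2; the
    scalar expander Chernoff bound says the ergodic average
    concentrates around it almost as fast as for i.i.d. samples.
    Toy illustration only.
    """
    rng = random.Random(seed)
    f = lambda v: 1.0 if v < n // 2 else 0.0
    v = rng.randrange(n)
    total = 0.0
    for _ in range(steps):
        # One walk step on K_n: move to a uniformly random other vertex.
        v = rng.choice([u for u in range(n) if u != v])
        total += f(v)
    return total / steps

avg = expander_walk_average()
# avg lies within a few percent of the stationary mean 1/2.
```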
@InProceedings{STOC18p1102,
author = {Ankit Garg and Yin Tat Lee and Zhao Song and Nikhil Srivastava},
title = {A Matrix Expander Chernoff Bound},
booktitle = {Proc.\ STOC},
publisher = {ACM},
pages = {1102--1114},
doi = {10.1145/3188745.3188890},
year = {2018},
}
STOC '18: "k-Server via Multiscale Entropic ..."
k-Server via Multiscale Entropic Regularization
Sébastien Bubeck, Michael B. Cohen, Yin Tat Lee, James R. Lee, and Aleksander Mądry (Microsoft Research, USA; Massachusetts Institute of Technology, USA; University of Washington, USA) We present an O((log k)^{2})-competitive randomized algorithm for the k-server problem on hierarchically separated trees (HSTs). This is the first o(k)-competitive randomized algorithm for which the competitive ratio is independent of the size of the underlying HST. Our algorithm is designed in the framework of online mirror descent where the mirror map is a multiscale entropy. When combined with Bartal’s static HST embedding reduction, this leads to an O((log k)^{2} log n)-competitive algorithm on any n-point metric space. We give a new dynamic HST embedding that yields an O((log k)^{3} logΔ)-competitive algorithm on any metric space where the ratio of the largest to smallest nonzero distance is at most Δ. Yin Tat Lee and Santosh S. Vempala (University of Washington, USA; Microsoft Research, USA; Georgia Tech, USA) We give the first rigorous proof of the convergence of Riemannian Hamiltonian Monte Carlo, a general (and practical) method for sampling Gibbs distributions. Our analysis shows that the rate of convergence is bounded in terms of natural smoothness parameters of an associated Riemannian manifold. We then apply the method with the manifold defined by the log barrier function to the problems of (1) uniformly sampling a polytope and (2) computing its volume, the latter by extending Gaussian cooling to the manifold setting. In both cases, the total number of steps needed is O^{*}(mn^{2/3}), improving the state of the art. A key ingredient of our analysis is a proof of an analog of the KLS conjecture for Gibbs distributions over manifolds. Yin Tat Lee and Santosh S. Vempala (University of Washington, USA; Microsoft Research, USA; Georgia Tech, USA) Logarithmic Sobolev inequalities are a powerful way to estimate the rate of convergence of Markov chains and to derive concentration inequalities on distributions. We prove that the log-Sobolev constant of any isotropic log-concave density in R^{n} with support of diameter D is Ω(1/D), resolving a question posed by Frieze and Kannan in 1997. This is asymptotically the best possible estimate and improves on the previous bound of Ω(1/D^{2}) by Kannan–Lovász–Montenegro. It follows that for any isotropic log-concave density, the ball walk with step size δ=Θ(1/√n) mixes in O^{*}(n^{2}D) proper steps from any starting point. This improves on the previous best bound of O^{*}(n^{2}D^{2}) and is also asymptotically tight. The new bound leads to the following refined large deviation inequality for an L-Lipschitz function g over an isotropic log-concave density p: for any t>0,
where ḡ is the median or mean of g for x∼ p; this improves on previous bounds by Paouris and by Guédon–Milman. Our main proof is based on stochastic localization together with a Stieltjes-type barrier function. Tsz Chiu Kwok, Lap Chi Lau, Yin Tat Lee, and Akshay Ramachandran (University of Waterloo, Canada; University of Washington, USA) The Paulsen problem is a basic open problem in operator theory: Given vectors u_{1}, …, u_{n} ∈ ℝ^{d} that є-nearly satisfy Parseval’s condition and the equal norm condition, are they close to a set of vectors v_{1}, …, v_{n} ∈ ℝ^{d} that exactly satisfy Parseval’s condition and the equal norm condition? Given u_{1}, …, u_{n}, the squared distance (to the set of exact solutions) is defined as inf_{v} ∑_{i=1}^{n} ‖u_{i} − v_{i}‖_{2}^{2} where the infimum is over the set of exact solutions. Previous results show that the squared distance of any є-nearly solution is at most O(poly(d,n,є)) and there are є-nearly solutions with squared distance at least Ω(d є). The fundamental open question is whether the squared distance can be independent of the number of vectors n. We answer this question affirmatively by proving that the squared distance of any є-nearly solution is O(d^{13/2} є). Our approach is based on a continuous version of the operator scaling algorithm and consists of two parts. First, we define a dynamical system based on operator scaling and use it to prove that the squared distance of any є-nearly solution is O(d^{2} n є). Then, we show that by randomly perturbing the input vectors, the dynamical system will converge faster and the squared distance of an є-nearly solution is O(d^{5/2} є) when n is large enough and є is small enough. To analyze the convergence of the dynamical system, we develop some new techniques for lower-bounding the operator capacity, a concept introduced by Gurvits to analyze the operator scaling algorithm. Sébastien Bubeck, Michael B. Cohen, Yin Tat Lee, and Yuanzhi Li (Microsoft Research, USA; Massachusetts Institute of Technology, USA; University of Washington, USA; Princeton University, USA) We consider the problem of linear regression where the ℓ_{2}^{n} norm loss (i.e., the usual least squares loss) is replaced by the ℓ_{p}^{n} norm. We show how to solve such problems up to machine precision in Õ_{p}(n^{|1/2 − 1/p|}) (dense) matrix-vector products and Õ_{p}(1) matrix inversions, or alternatively in Õ_{p}(n^{|1/2 − 1/p|}) calls to a (sparse) linear system solver. This improves the state of the art for any p∉{1,2,+∞}. Furthermore we also propose a randomized algorithm solving such problems in input sparsity time, i.e., Õ_{p}(N + poly(d)) where N is the size of the input and d is the number of variables. Such a result was only known for p=2. Finally we prove that these results lie outside the scope of the Nesterov–Nemirovski theory of interior point methods by showing that any symmetric self-concordant barrier on the ℓ_{p}^{n} unit ball has self-concordance parameter Ω(n). 

Lempiäinen, Tuomo 
STOC '18: "New Classes of Distributed ..."
New Classes of Distributed Time Complexity
Alkida Balliu, Juho Hirvonen, Janne H. Korhonen, Tuomo Lempiäinen, Dennis Olivetti, and Jukka Suomela (Aalto University, Finland; University of Freiburg, Germany) A number of recent papers – e.g. Brandt et al. (STOC 2016), Chang et al. (FOCS 2016), Ghaffari & Su (SODA 2017), Brandt et al. (PODC 2017), and Chang & Pettie (FOCS 2017) – have advanced our understanding of one of the most fundamental questions in the theory of distributed computing: what are the possible time complexity classes of LCL problems in the LOCAL model? In essence, we have a graph problem Π in which a solution can be verified by checking all radius-O(1) neighbourhoods, and the question is what is the smallest T such that a solution can be computed so that each node chooses its own output based on its radius-T neighbourhood. Here T is the distributed time complexity of Π. The time complexity classes for deterministic algorithms in bounded-degree graphs that are known to exist by prior work are Θ(1), Θ(log^{*} n), Θ(logn), Θ(n^{1/k}), and Θ(n). It is also known that there are two gaps: one between ω(1) and o(loglog^{*} n), and another between ω(log^{*} n) and o(logn). It has been conjectured that many more gaps exist, and that the overall time hierarchy is relatively simple – indeed, this is known to be the case in restricted graph families such as cycles and grids. We show that the picture is much more diverse than previously expected. We present a general technique for engineering LCL problems with numerous different deterministic time complexities, including Θ(log^{α} n) for any α ≥ 1, 2^{Θ(log^{α} n)} for any α ≤ 1, and Θ(n^{α}) for any α < 1/2 in the high end of the complexity spectrum, and Θ(log^{α} log^{*} n) for any α ≥ 1, 2^{Θ(log^{α} log^{*} n)} for any α ≤ 1, and Θ((log^{*} n)^{α}) for any α ≤ 1 in the low end of the complexity spectrum; here α is a positive rational number. 

Lenzen, Christoph 
STOC '18: "On the Complexity of Hazard-Free ..."
On the Complexity of Hazard-Free Circuits
Christian Ikenmeyer, Balagopal Komarath, Christoph Lenzen, Vladimir Lysikov, Andrey Mokhov, and Karteek Sreenivasaiah (Max Planck Institute for Informatics, Germany; Saarland University, Germany; M2CI, Germany; Newcastle University, UK; IIT Hyderabad, India)
The problem of constructing hazard-free Boolean circuits dates back to the 1940s and is an important problem in circuit design. Our main lower-bound result unconditionally shows the existence of functions whose circuit complexity is polynomially bounded while every hazard-free implementation is provably of exponential size. Previous lower bounds on the hazard-free complexity were only valid for depth-2 circuits. The same proof method yields that every subcubic implementation of Boolean matrix multiplication must have hazards. These results follow from a crucial structural insight: hazard-free complexity is a natural generalization of monotone complexity to all (not necessarily monotone) Boolean functions. Thus, we can apply known monotone complexity lower bounds to find lower bounds on the hazard-free complexity. We also lift these methods from the monotone setting to prove exponential hazard-free complexity lower bounds for non-monotone functions. As our main upper-bound result we show how to efficiently convert a Boolean circuit into a bounded-bit hazard-free circuit with only a polynomially large blow-up in the number of gates. Previously, the best known method yielded exponentially large circuits in the worst case, so our algorithm gives an exponential improvement. As a side result we establish the NP-completeness of several hazard detection problems.
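Hazards are naturally described via Kleene's three-valued semantics, with 'u' denoting an unstable wire; the classic textbook multiplexer example (a standard illustration, not code from the paper) shows a hazard and its removal by a redundant consensus term:

```python
U = "u"  # unstable / transitioning wire value

def t_not(a):
    """NOT in Kleene's three-valued logic."""
    return U if a == U else 1 - a

def t_and(a, b):
    """AND: a 0 input forces 0 even if the other wire is unstable."""
    if a == 0 or b == 0:
        return 0
    return 1 if (a == 1 and b == 1) else U

def t_or(a, b):
    """OR: a 1 input forces 1 even if the other wire is unstable."""
    if a == 1 or b == 1:
        return 1
    return 0 if (a == 0 and b == 0) else U

def mux_hazardous(s, a, b):
    """Plain multiplexer (s ? a : b).  Has a logic hazard."""
    return t_or(t_and(s, a), t_and(t_not(s), b))

def mux_hazard_free(s, a, b):
    """Adding the redundant consensus term (a AND b) removes the hazard."""
    return t_or(mux_hazardous(s, a, b), t_and(a, b))

# With a = b = 1 the output should stay 1 regardless of s, but the plain
# mux outputs 'u' while the select line is unstable: a hazard.
```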
@InProceedings{STOC18p878,
author = {Christian Ikenmeyer and Balagopal Komarath and Christoph Lenzen and Vladimir Lysikov and Andrey Mokhov and Karteek Sreenivasaiah},
title = {On the Complexity of Hazard-Free Circuits},
booktitle = {Proc.\ STOC},
publisher = {ACM},
pages = {878--889},
doi = {10.1145/3188745.3188912},
year = {2018},
}


Leung, Debbie 
STOC '18: "Capacity Approaching Coding ..."
Capacity Approaching Coding for Low Noise Interactive Quantum Communication
Debbie Leung, Ashwin Nayak, Ala Shayeghi, Dave Touchette, Penghui Yao, and Nengkun Yu (University of Waterloo, Canada; Perimeter Institute Waterloo, Canada; Nanjing University, China; University of Technology Sydney, Australia) We consider the problem of implementing two-party interactive quantum communication over noisy channels, a necessary endeavor if we wish to fully reap quantum advantages for communication. For an arbitrary protocol with n messages, designed for noiseless qudit channels (where d is arbitrary), our main result is a simulation method that fails with probability less than 2^{−Θ(nє)} and uses a qudit channel n(1 + Θ(√є)) times, of which an є fraction can be corrupted adversarially. The simulation is thus capacity-achieving to leading order, and we conjecture that it is optimal up to a constant factor in the √є term. Furthermore, the simulation is in a model that does not require pre-shared resources such as randomness or entanglement between the communicating parties. Perhaps surprisingly, this outperforms the best known overhead of 1 + O(√є loglog1/є) in the corresponding classical model, which is also conjectured to be optimal [Haeupler, FOCS’14]. Our work also improves over the best previously known quantum result, where the overhead is a non-explicit large constant [Brassard et al., FOCS’14] for low є. 

Leverrier, Anthony 
STOC '18: "Efficient Decoding of Random ..."
Efficient Decoding of Random Errors for Quantum Expander Codes
Omar Fawzi, Antoine Grospellier, and Anthony Leverrier (ENS Lyon, France; Inria, France) We show that quantum expander codes, a constant-rate family of quantum low-density parity-check (LDPC) codes, with the quasi-linear time decoding algorithm of Leverrier, Tillich and Zémor, can correct a constant fraction of random errors with very high probability. This is the first construction of a constant-rate quantum LDPC code with an efficient decoding algorithm that can correct a linear number of random errors with a negligible failure probability. Finding codes with these properties is also motivated by Gottesman’s construction of fault-tolerant schemes with constant space overhead. In order to obtain this result, we study a notion of α-percolation: for a random subset E of vertices of a given graph, we consider the size of the largest connected α-subset of E, where X is an α-subset of E if |X ∩ E| ≥ α |X|. 
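The α-subset definition can be made concrete with a tiny brute force over a toy graph (illustrative only; the function name is ours and this is not the paper's percolation analysis):

```python
from itertools import combinations

def largest_connected_alpha_subset(adj, E, alpha):
    """Size of the largest connected X with |X ∩ E| >= alpha * |X|.

    Brute force over all vertex subsets of a tiny graph, checking
    connectivity by depth-first search; purely to make the
    α-percolation definition concrete.
    """
    def connected(X):
        X = set(X)
        start = next(iter(X))
        seen, stack = {start}, [start]
        while stack:
            v = stack.pop()
            for w in adj[v]:
                if w in X and w not in seen:
                    seen.add(w)
                    stack.append(w)
        return seen == X

    best = 0
    vertices = list(adj)
    for size in range(1, len(vertices) + 1):
        for X in combinations(vertices, size):
            if len(set(X) & E) >= alpha * len(X) and connected(X):
                best = max(best, size)
    return best

# A path 0-1-2-3 with E = {0, 3}: the whole path is a valid 1/2-subset,
# but for alpha = 1 only single vertices of E qualify (0 and 3 are not
# adjacent, so no larger connected subset lies inside E).
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
```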

Li, Jason 
STOC '18: "Improved Distributed Algorithms ..."
Improved Distributed Algorithms for Exact Shortest Paths
Mohsen Ghaffari and Jason Li (ETH Zurich, Switzerland; Carnegie Mellon University, USA) Computing shortest paths is one of the central problems in the theory of distributed computing. For the last few years, substantial progress has been made on the approximate single-source shortest paths problem, culminating in an algorithm of Henzinger, Krinninger, and Nanongkai [STOC’16] which deterministically computes (1+o(1))-approximate shortest paths in Õ(D+√n) time, where D is the hop-diameter of the graph. Up to logarithmic factors, this time complexity is optimal, matching the lower bound of Elkin [STOC’04]. The question of exact shortest paths however saw no algorithmic progress for decades, until the recent breakthrough of Elkin [STOC’17], which established a sublinear-time algorithm for exact single-source shortest paths on undirected graphs. Shortly after, Huang et al. [FOCS’17] provided improved algorithms for the exact all-pairs shortest paths problem on directed graphs. In this paper, we provide an alternative single-source shortest path algorithm with complexity Õ(n^{3/4}D^{1/4}). For polylogarithmic D, this improves on Elkin’s Õ(n^{5/6}) bound and gets closer to the Ω(n^{1/2}) lower bound of Elkin [STOC’04]. For larger values of D, we present an improved variant of our algorithm which achieves complexity Õ(max{ n^{3/4+o(1)} , n^{3/4}D^{1/6}} + D ), and thus compares favorably with Elkin’s bound of Õ(max{ n^{5/6}, n^{2/3}D^{1/3}} + D ) in essentially the entire range of parameters. This algorithm also provides a qualitative improvement, because it works for the more challenging case of directed graphs (i.e., graphs where the two directions of an edge can have different weights), constituting the first sublinear-time algorithm for directed graphs. Our algorithm also extends to the case of exact r-source shortest paths, in which we provide the fastest algorithm for moderately small r and D, improving on those of Huang et al. 

Li, Jerry 
STOC '18: "Mixture Models, Robustness, ..."
Mixture Models, Robustness, and Sum of Squares Proofs
Samuel B. Hopkins and Jerry Li (Cornell University, USA; Massachusetts Institute of Technology, USA) We use the Sum of Squares method to develop new efficient algorithms for learning well-separated mixtures of Gaussians and robust mean estimation, both in high dimensions, that substantially improve upon the statistical guarantees achieved by previous efficient algorithms. Our contributions are: Mixture models with separated means: We study mixtures of poly(k)-many k-dimensional distributions where the means of every pair of distributions are separated by at least k^{ε}. In the special case of spherical Gaussian mixtures, we give a k^{O(1/ε)}-time algorithm that learns the means assuming separation at least k^{ε}, for any ε> 0. This is the first algorithm to improve on greedy (“single-linkage”) and spectral clustering, breaking a long-standing barrier for efficient algorithms at separation k^{1/4}. Robust estimation: When an unknown (1−ε)-fraction of X_{1},…,X_{n} are chosen from a sub-Gaussian distribution with mean µ but the remaining points are chosen adversarially, we give an algorithm recovering µ to error ε^{1−1/t} in time k^{O(t)}, so long as sub-Gaussianity up to O(t) moments can be certified by a Sum of Squares proof. This is the first polynomial-time algorithm with guarantees approaching the information-theoretic limit for non-Gaussian distributions. Previous algorithms could not achieve error better than ε^{1/2}. As a corollary, we achieve similar results for robust covariance estimation. Both of these results are based on a unified technique. Inspired by recent algorithms of Diakonikolas et al. in robust statistics, we devise an SDP based on the Sum of Squares method for the following setting: given X_{1},…,X_{n} ∈ ℝ^{k} for large k and n = poly(k) with the promise that a subset of X_{1},…,X_{n} were sampled from a probability distribution with bounded moments, recover some information about that distribution. 
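A one-dimensional toy of the robust estimation setting (the paper works in high dimensions with Sum of Squares certificates; this only illustrates why the naive mean fails under ε-corruption while a robust statistic barely moves):

```python
def mean(xs):
    return sum(xs) / len(xs)

def median(xs):
    ys = sorted(xs)
    n = len(ys)
    return ys[n // 2] if n % 2 else (ys[n // 2 - 1] + ys[n // 2]) / 2

# 90 inliers centred near mu = 0, plus a 10% fraction (eps = 0.1) of
# adversarial outliers planted far away.
inliers = [(-1) ** i * 0.1 * (i % 10) for i in range(90)]
outliers = [1000.0] * 10
data = inliers + outliers

# The empirical mean is dragged roughly eps * 1000 away from mu; the
# median stays put.  In one dimension the median is already robust; the
# paper's contribution is a high-dimensional analogue whose error
# improves as more moments are certified by SoS.
```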

Li, Shi 
STOC '18: "Constant Approximation for ..."
Constant Approximation for k-Median and k-Means with Outliers via Iterative Rounding
Ravishankar Krishnaswamy, Shi Li, and Sai Sandeep (Microsoft Research, India; SUNY Buffalo, USA) In this paper, we present a new iterative rounding framework for many clustering problems. Using this, we obtain an (α_{1} + є ≤ 7.081 + є)-approximation algorithm for k-median with outliers, greatly improving upon the large implicit constant approximation ratio of Chen. For k-means with outliers, we give an (α_{2} + є ≤ 53.002 + є)-approximation, which is the first O(1)-approximation for this problem. The iterative algorithm framework is very versatile; we show how it can be used to give α_{1}- and (α_{1} + є)-approximation algorithms for the matroid and knapsack median problems respectively, improving upon the previous best approximation ratios of 8 due to Swamy and 17.46 due to Byrka et al. The natural LP relaxation for the k-median/k-means with outliers problem has an unbounded integrality gap. In spite of this negative result, our iterative rounding framework shows that we can round an LP solution to an almost-integral solution of small cost, in which we have at most two fractionally open facilities. Thus, the LP integrality gap arises due to the gap between almost-integral and fully-integral solutions. Then, using a preprocessing procedure, we show how to convert an almost-integral solution to a fully-integral solution losing only a constant factor in the approximation ratio. By further using a sparsification technique, the additive factor loss incurred by the conversion can be reduced to any є > 0.
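The natural LP relaxation mentioned above (for k-median with m outliers, with facility-opening variables y_i, assignment variables x_{ij}, and coverage variables z_j) is, in its standard form from the literature, sketched here for orientation:

```latex
\begin{aligned}
\min\ & \textstyle\sum_{i \in F,\, j \in C} d(i,j)\, x_{ij} \\
\text{s.t.}\ & \textstyle\sum_{i \in F} x_{ij} = z_j \quad \forall j \in C
  && \text{(client $j$ is connected unless it is an outlier)} \\
& x_{ij} \le y_i \quad \forall i \in F,\ j \in C
  && \text{(connect only to open facilities)} \\
& \textstyle\sum_{i \in F} y_i \le k, \qquad \textstyle\sum_{j \in C} z_j \ge |C| - m \\
& 0 \le x_{ij},\, y_i,\, z_j \le 1
\end{aligned}
```

The abstract's point is that this relaxation has unbounded integrality gap, yet it still rounds to an almost-integral solution with at most two fractional facilities.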

Li, Wenzheng 
STOC '18: "An Optimal Distributed (Δ+1)-Coloring ..."
An Optimal Distributed (Δ+1)-Coloring Algorithm?
Yi-Jun Chang, Wenzheng Li, and Seth Pettie (University of Michigan, USA; Tsinghua University, China) Vertex coloring is one of the classic symmetry-breaking problems studied in distributed computing. In this paper we present a new algorithm for (Δ+1)-list coloring in the randomized LOCAL model running in O(log^{∗}n + Det_{d}(poly logn)) time, where Det_{d}(n′) is the deterministic complexity of (deg+1)-list coloring (v’s palette has size deg(v)+1) on n′-vertex graphs. This improves upon a previous randomized algorithm of Harris, Schneider, and Su (STOC 2016), with complexity O(√logΔ + loglogn + Det_{d}(poly logn)), and (when Δ is sufficiently large) is much faster than the best known deterministic algorithm of Fraigniaud, Heinrich, and Kosowski (FOCS 2016), with complexity O(√Δ log^{2.5}Δ + log^{*} n). Our algorithm appears to be optimal. It matches the Ω(log^{∗}n) randomized lower bound due to Naor (SIDMA 1991) and “sort of” matches the Ω(Det(poly logn)) randomized lower bound due to Chang, Kopelowitz, and Pettie (FOCS 2016), where Det is the deterministic complexity of (Δ+1)-list coloring. The best known upper bounds on Det_{d}(n′) and Det(n′) are both 2^{O(√log n′)}, by Panconesi and Srinivasan (Journal of Algorithms 1996), and it is quite plausible that the complexities of both problems are the same, asymptotically.
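To see why palettes of size deg(v)+1 make (deg+1)-list coloring well-posed, note that a sequential greedy pass always succeeds: each vertex has more palette colors than colored neighbors. The sketch below (ours, not the paper's distributed LOCAL-model algorithm) illustrates just this feasibility fact:

```python
def greedy_list_coloring(adj, palettes):
    """adj: dict vertex -> set of neighbors; palettes[v] has > deg(v) colors.
    A greedy pass always finds a free color, since at most deg(v) colors
    are blocked when v is processed."""
    color = {}
    for v in adj:
        used = {color[u] for u in adj[v] if u in color}
        color[v] = next(c for c in palettes[v] if c not in used)
    return color
```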

Li, Yuanzhi 
STOC '18: "Operator Scaling via Geodesically ..."
Operator Scaling via Geodesically Convex Optimization, Invariant Theory and Polynomial Identity Testing
Zeyuan AllenZhu, Ankit Garg, Yuanzhi Li, Rafael Oliveira, and Avi Wigderson (Microsoft Research, USA; Princeton University, USA; University of Toronto, Canada; Institute for Advanced Study at Princeton, USA)
We propose a new second-order method for geodesically convex optimization on the natural hyperbolic metric over positive definite matrices. We apply it to solve the operator scaling problem in time polynomial in the input size and logarithmic in the error. This is an exponential improvement over previous algorithms which were analyzed in the usual Euclidean, "commutative" metric (for which the above problem is not convex). Our method is general and applicable to other settings.
As a consequence, we solve the equivalence problem for the left-right group action underlying the operator scaling problem. This yields a deterministic polynomial-time algorithm for a new class of Polynomial Identity Testing (PIT) problems, which was the original motivation for studying operator scaling.
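The commutative special case of operator scaling is classical matrix scaling, where the simple Sinkhorn iteration (alternate row and column normalization) converges for positive matrices. The sketch below shows only that baseline for orientation; it is not the paper's geodesically convex second-order method:

```python
import numpy as np

def sinkhorn_scale(A, iters=500):
    """Alternately normalize rows and columns of a positive matrix A,
    driving it toward a doubly stochastic matrix."""
    A = np.array(A, dtype=float)
    for _ in range(iters):
        A = A / A.sum(axis=1, keepdims=True)   # row sums -> 1
        A = A / A.sum(axis=0, keepdims=True)   # column sums -> 1
    return A
```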
@InProceedings{STOC18p172,
author = {Zeyuan AllenZhu and Ankit Garg and Yuanzhi Li and Rafael Oliveira and Avi Wigderson},
title = {Operator Scaling via Geodesically Convex Optimization, Invariant Theory and Polynomial Identity Testing},
booktitle = {Proc.\ STOC},
publisher = {ACM},
pages = {172--181},
doi = {10.1145/3188745.3188942},
year = {2018},
}
STOC '18: "An Homotopy Method for l_{p} ..."
An Homotopy Method for l_{p} Regression Provably beyond Self-Concordance and in Input-Sparsity Time
Sébastien Bubeck, Michael B. Cohen, Yin Tat Lee, and Yuanzhi Li (Microsoft Research, USA; Massachusetts Institute of Technology, USA; University of Washington, USA; Princeton University, USA) We consider the problem of linear regression where the ℓ_{2}^{n} norm loss (i.e., the usual least squares loss) is replaced by the ℓ_{p}^{n} norm. We show how to solve such problems up to machine precision in Õ_{p}(n^{1/2 − 1/p}) (dense) matrix-vector products and Õ_{p}(1) matrix inversions, or alternatively in Õ_{p}(n^{1/2 − 1/p}) calls to a (sparse) linear system solver. This improves the state of the art for any p ∉ {1,2,+∞}. Furthermore, we also propose a randomized algorithm solving such problems in input-sparsity time, i.e., Õ_{p}(N + poly(d)), where N is the size of the input and d is the number of variables. Such a result was only known for p=2. Finally, we prove that these results lie outside the scope of the Nesterov-Nemirovski theory of interior point methods, by showing that any symmetric self-concordant barrier on the ℓ_{p}^{n} unit ball has self-concordance parameter Ω(n).

Liang, Percy 
STOC '18: "Prediction with a Short Memory ..."
Prediction with a Short Memory
Vatsal Sharan, Sham Kakade, Percy Liang, and Gregory Valiant (Stanford University, USA; University of Washington, USA) We consider the problem of predicting the next observation given a sequence of past observations, and consider the extent to which accurate prediction requires complex algorithms that explicitly leverage long-range dependencies. Perhaps surprisingly, our positive results show that for a broad class of sequences, there is an algorithm that predicts well on average, and bases its predictions only on the most recent few observations together with a set of simple summary statistics of the past observations. Specifically, we show that for any distribution over observations, if the mutual information between past observations and future observations is upper bounded by I, then a simple Markov model over the most recent I/є observations obtains expected KL error є—and hence ℓ_{1} error √є—with respect to the optimal predictor that has access to the entire past and knows the data-generating distribution. For a Hidden Markov Model with n hidden states, I is bounded by logn, a quantity that does not depend on the mixing time, and we show that the trivial prediction algorithm based on the empirical frequencies of length-O(logn/є) windows of observations achieves this error, provided the length of the sequence is d^{Ω(logn/є)}, where d is the size of the observation alphabet. We also establish that this result cannot be improved upon, even for the class of HMMs, in the following two senses: First, for HMMs with n hidden states, a window length of logn/є is information-theoretically necessary to achieve expected KL error є, or ℓ_{1} error √є. Second, the d^{Θ(logn/є)} samples required to accurately estimate the Markov model when observations are drawn from an alphabet of size d is necessary for any computationally tractable learning/prediction algorithm, assuming the hardness of strongly refuting a certain class of CSPs.
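The "trivial prediction algorithm" referred to above is just an order-ℓ Markov model fit by counting: estimate Pr[next symbol | last ℓ observations] from empirical window frequencies. A minimal sketch (names ours):

```python
from collections import Counter, defaultdict

def fit_window_predictor(seq, ell):
    """Estimate next-symbol probabilities conditioned on the last ell symbols,
    using empirical counts of length-(ell+1) windows of the training sequence."""
    counts = defaultdict(Counter)
    for i in range(ell, len(seq)):
        counts[tuple(seq[i - ell:i])][seq[i]] += 1

    def predict(context):
        c = counts.get(tuple(context[-ell:]))
        if not c:
            return {}                  # unseen context
        total = sum(c.values())
        return {s: n / total for s, n in c.items()}

    return predict
```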

Liao, Chao 
STOC '18: "Counting Hypergraph Colourings ..."
Counting Hypergraph Colourings in the Local Lemma Regime
Heng Guo, Chao Liao, Pinyan Lu, and Chihao Zhang (University of Edinburgh, UK; Shanghai Jiao Tong University, China; Shanghai University of Finance and Economics, China; Chinese University of Hong Kong, China) We give a fully polynomial-time approximation scheme (FPTAS) to count the number of q-colorings for k-uniform hypergraphs with maximum degree Δ if k ≥ 28 and q > 315Δ^{14/(k−14)}. We also obtain a polynomial-time almost uniform sampler if q > 798Δ^{16/(k−16/3)}. These are the first approximate counting and sampling algorithms in the regime q ≪ Δ (for large Δ and k) without any additional assumptions. Our method is based on the recent work of Moitra (STOC, 2017). One important contribution of ours is to remove the dependency of k and Δ in Moitra’s approach.

Liu, Tianren 
STOC '18: "Breaking the Circuit-Size ..."
Breaking the Circuit-Size Barrier in Secret Sharing
Tianren Liu and Vinod Vaikuntanathan (Massachusetts Institute of Technology, USA) We study secret sharing schemes for general (non-threshold) access structures. A general secret sharing scheme for n parties is associated to a monotone function F:{0,1}^{n}→{0,1}. In such a scheme, a dealer distributes shares of a secret s among n parties. Any subset of parties T ⊆ [n] should be able to put together their shares and reconstruct the secret s if F(T)=1, and should have no information about s if F(T)=0. One of the major long-standing questions in information-theoretic cryptography is to minimize the (total) size of the shares in a secret-sharing scheme for arbitrary monotone functions F. There is a large gap between lower and upper bounds for secret sharing. The best known scheme for general F has shares of size 2^{n−o(n)}, but the best lower bound is Ω(n^{2}/logn). Indeed, the exponential share size is a direct result of the fact that in all known secret-sharing schemes, the share size grows with the size of a circuit (or formula, or monotone span program) for F. Moreover, several researchers have suggested the existence of a representation-size barrier which implies that the right answer is closer to the upper bound, namely, 2^{n−o(n)}. In this work, we overcome this barrier by constructing a secret sharing scheme for any access structure with shares of size 2^{0.994n} and a linear secret sharing scheme for any access structure with shares of size 2^{0.999n}. As a contribution of independent interest, we also construct a secret sharing scheme with shares of size 2^{Õ(√n)} for 2^{\binom{n}{n/2}} monotone access structures, out of a total of 2^{\binom{n}{n/2}·(1+O(logn/n))} of them. Our construction builds on recent works that construct better protocols for the conditional disclosure of secrets (CDS) problem.
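For orientation, the classic construction whose share size scales with the representation of F (here, the list of minimal authorized sets, exactly the dependence the paper breaks) can be sketched as follows; the names and interface are ours, not the paper's:

```python
import secrets

P = 2**61 - 1  # work modulo a large prime

def deal(s, n, min_authorized):
    """For each minimal authorized set T, additively split s among T's members.
    Each party's share size grows with the number of minimal sets containing it."""
    shares = {i: {} for i in range(n)}
    for j, T in enumerate(min_authorized):
        parts = [secrets.randbelow(P) for _ in T[:-1]]
        parts.append((s - sum(parts)) % P)       # parts sum to s mod P
        for i, part in zip(T, parts):
            shares[i][j] = part
    return shares

def reconstruct(shares, min_authorized, coalition):
    """Any coalition containing some minimal authorized set recovers s."""
    for j, T in enumerate(min_authorized):
        if set(T) <= set(coalition):
            return sum(shares[i][j] for i in T) % P
    return None  # coalition is unauthorized
```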

Liu, Zhengyang 
STOC '18: "Distribution-Free Junta Testing ..."
Distribution-Free Junta Testing
Zhengyang Liu, Xi Chen, Rocco A. Servedio, Ying Sheng, and Jinyu Xie (Shanghai Jiao Tong University, China; Columbia University, USA) We study the problem of testing whether an unknown n-variable Boolean function is a k-junta in the distribution-free property testing model, where the distance between functions is measured with respect to an arbitrary and unknown probability distribution over {0,1}^{n}. Our first main result is that distribution-free k-junta testing can be performed, with one-sided error, by an adaptive algorithm that uses Õ(k^{2})/є queries (independent of n). Complementing this, our second main result is a lower bound showing that any non-adaptive distribution-free k-junta testing algorithm must make Ω(2^{k/3}) queries even to test to accuracy є=1/3. These bounds establish that while the optimal query complexity of non-adaptive k-junta testing is 2^{Θ(k)}, for adaptive testing it is poly(k), and thus show that adaptivity provides an exponential improvement in the distribution-free query complexity of testing juntas.
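As a reference point for what is being tested: f is a k-junta iff it has at most k relevant variables, i.e., variables whose flip can change the output. The exhaustive check below (ours; exponential in n, unlike the sublinear-query testers above) makes the definition concrete:

```python
from itertools import product

def relevant_variables(f, n):
    """Return the set of indices i such that flipping bit i changes f on some
    input; f is a k-junta iff this set has size at most k."""
    rel = set()
    for x in product((0, 1), repeat=n):
        for i in range(n):
            if i in rel:
                continue
            y = list(x)
            y[i] ^= 1
            if f(x) != f(tuple(y)):
                rel.add(i)
    return rel
```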

Lkhamsuren, Luvsandondov 
STOC '18: "Holiest Minimum-Cost Paths ..."
Holiest Minimum-Cost Paths and Flows in Surface Graphs
Jeff Erickson, Kyle Fox, and Luvsandondov Lkhamsuren (University of Illinois at Urbana-Champaign, USA; University of Texas at Dallas, USA; Airbnb, USA) Let G be an edge-weighted directed graph with n vertices embedded on an orientable surface of genus g. We describe a simple deterministic lexicographic perturbation scheme that guarantees uniqueness of minimum-cost flows and shortest paths in G. The perturbations take O(gn) time to compute. We use our perturbation scheme in a black-box manner to derive a deterministic O(n loglogn)-time algorithm for minimum cut in directed edge-weighted planar graphs and a deterministic O(g^{2} n logn)-time preprocessing scheme for the multiple-source shortest paths problem of computing a shortest path oracle for all vertices lying on a common face of a surface-embedded graph. The latter result yields faster deterministic near-linear time algorithms for a variety of problems in constant-genus surface-embedded graphs. Finally, we open the black box in order to generalize a recent linear-time algorithm for multiple-source shortest paths in unweighted undirected planar graphs to work on arbitrary orientable surfaces. Our algorithm runs in O(g^{2} n logg) time in this setting, and it can be used to give improved linear-time algorithms for several problems in unweighted undirected surface-embedded graphs of constant genus, including the computation of minimum cuts, shortest topologically nontrivial cycles, and minimum homology bases.
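The idea of a lexicographic perturbation can be sketched generically (this toy version is ours and is not the paper's O(gn) surface-specific scheme): give edge i an extra symbolic weight ε^{i+1} for an infinitesimal ε, so every path gets a distinct perturbed length and shortest paths become unique, while real-valued comparisons are unchanged.

```python
import heapq

def perturbed_dijkstra(n, edges, src):
    """Dijkstra where a distance is the tuple (real_weight, uses of eps^1, ...,
    uses of eps^m), compared lexicographically; real-weight ties are broken
    consistently by the symbolic part, making shortest paths unique."""
    m = len(edges)
    adj = [[] for _ in range(n)]
    for idx, (u, v, w) in enumerate(edges):
        adj[u].append((v, w, idx))
    dist = [None] * n
    pq = [((0,) * (m + 1), src)]
    while pq:
        d, u = heapq.heappop(pq)
        if dist[u] is not None:
            continue                  # lazy deletion of stale queue entries
        dist[u] = d
        for v, w, idx in adj[u]:
            if dist[v] is None:
                nd = list(d)
                nd[0] += w            # real part
                nd[1 + idx] += 1      # symbolic eps^(idx+1) part
                heapq.heappush(pq, (tuple(nd), v))
    return dist
```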

Lo, Irene 
STOC '18 Keynote: "Dynamic Matching in School ..."
Dynamic Matching in School Choice: Efficient Seat Reassignment after Late Cancellations (Invited Talk)
Irene Lo (Columbia University, USA)
In the school choice market, where scarce public school seats are assigned to students, a key issue is how to reassign seats that are vacated after an initial round of centralized assignment. Every year around 10% of students assigned a seat in the NYC public high school system eventually do not use it, and their vacated seats can be reassigned. Practical solutions to the reassignment problem must be simple to implement, truthful, and efficient. I propose and axiomatically justify a class of reassignment mechanisms, the Permuted Lottery Deferred Acceptance (PLDA) mechanisms, which generalize the commonly used Deferred Acceptance (DA) school choice mechanism to a two-round setting and retain its desirable incentive and efficiency properties. I also provide guidance to school districts as to how to choose the appropriate mechanism in this class for their setting.
Centralized admissions are typically conducted in a single round using Deferred Acceptance, with a lottery used to break ties in each school’s prioritization of students. Our proposed PLDA mechanisms reassign vacated seats using a second round of DA with a lottery based on a suitable permutation of the first-round lottery numbers. I demonstrate that under a natural order condition on aggregate student demand for schools, the second-round tie-breaking lottery can be correlated arbitrarily with that of the first round without affecting allocative welfare. I also show how the identifying characteristic of PLDA mechanisms, their permutation, can be chosen to control reallocation.
In current practice, seats vacated after the initial round are reassigned using decentralized waitlists that create significant student movement after the start of the school year, which is costly for both students and schools. I show that reversing the lottery order between rounds minimizes reassignment among all PLDA mechanisms, allowing us to alleviate costly student movement between schools without affecting the efficiency of the final allocation. In a setting without school priorities, I also characterize PLDA mechanisms as the class of mechanisms that provide students with a guarantee at their first-round assignment, respect school priorities, and are strategyproof, constrained Pareto efficient, and satisfy some mild symmetry properties. Finally, I provide simulations of the performance of different PLDA mechanisms in the presence of school priorities. All simulated PLDAs have similar allocative efficiency, while the PLDA based on reversing the tie-breaking lottery between rounds minimizes the number of reassigned students. These results support our theoretical findings. This is based on joint work with Itai Feigenbaum, Yash Kanoria, and Jay Sethuraman.
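The single-round mechanism that PLDA generalizes, student-proposing Deferred Acceptance, can be sketched in a few lines (interface and names are ours; lottery tie-breaking would be folded into the priority ranks):

```python
def deferred_acceptance(prefs, priority, capacity):
    """Student-proposing DA (Gale-Shapley). prefs[s]: schools in preference
    order; priority[c][s]: school c's rank of student s (lower = higher
    priority); capacity[c]: number of seats at school c."""
    next_choice = {s: 0 for s in prefs}
    held = {c: [] for c in capacity}
    free = list(prefs)
    while free:
        s = free.pop()
        if next_choice[s] == len(prefs[s]):
            continue                     # s has exhausted their list
        c = prefs[s][next_choice[s]]
        next_choice[s] += 1
        held[c].append(s)
        held[c].sort(key=lambda t: priority[c][t])
        if len(held[c]) > capacity[c]:
            free.append(held[c].pop())   # reject the lowest-priority holder
    return {c: sorted(ss) for c, ss in held.items()}
```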
@InProceedings{STOC18p1,
author = {Irene Lo},
title = {Dynamic Matching in School Choice: Efficient Seat Reassignment after Late Cancellations (Invited Talk)},
booktitle = {Proc.\ STOC},
publisher = {ACM},
pages = {1--1},
doi = {10.1145/3188745.3232193},
year = {2018},
}


Loff, Bruno 
STOC '18: "Simulation Beats Richness: ..."
Simulation Beats Richness: New Data-Structure Lower Bounds
Arkadev Chattopadhyay, Michal Koucký, Bruno Loff, and Sagnik Mukhopadhyay (TIFR, India; Charles University in Prague, Czechia; INESC TEC, Portugal; University of Porto, Portugal; KTH, Sweden) We develop a new technique for proving lower bounds in the setting of asymmetric communication, a model that was introduced in the famous works of Miltersen (STOC ’94) and Miltersen, Nisan, Safra and Wigderson (STOC ’95). At the core of our technique is the first simulation theorem in the asymmetric setting, where Alice gets a p × n matrix x over F_{2} and Bob gets a vector y ∈ F_{2}^{n}. Alice and Bob need to evaluate f(x · y) for a Boolean function f: {0,1}^{p} → {0,1}. Our simulation theorems show that a deterministic/randomized communication protocol exists for this problem, with cost C · n for Alice and C for Bob, if and only if there exists a deterministic/randomized *parity decision tree* of cost Θ(C) for evaluating f. As applications of this technique, we obtain the following results: 1. The first strong lower bounds against randomized data-structure schemes for the Vector-Matrix-Vector product problem over F_{2}. Moreover, our method yields strong lower bounds even when the data-structure scheme has a tiny advantage over random guessing. 2. The first lower bounds against randomized data-structure schemes for two natural Boolean variants of Orthogonal Vector Counting. 3. We construct an asymmetric communication problem and obtain a deterministic lower bound for it which is provably better than any lower bound that may be obtained by the classical Richness Method of Miltersen et al. (STOC ’95). This seems to be the first known limitation of the Richness Method in the context of proving deterministic lower bounds.
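A parity decision tree, the model the simulation theorem connects communication cost to, queries parities of subsets of the input bits at each internal node. A toy evaluator (ours) makes the object concrete:

```python
def eval_parity_tree(tree, z):
    """tree: either a leaf 0/1, or (S, left, right) where S is a tuple of bit
    indices; follow left if the parity of the bits in S is 0, right if it is 1."""
    while isinstance(tree, tuple):
        S, left, right = tree
        tree = right if sum(z[i] for i in S) % 2 else left
    return tree

# A depth-2 parity decision tree computing AND(z0, z1):
AND_TREE = ((0,), 0, ((1,), 0, 1))
```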

Lovett, Shachar 
STOC '18: "The Gram-Schmidt Walk: A Cure ..."
The Gram-Schmidt Walk: A Cure for the Banaszczyk Blues
Nikhil Bansal, Daniel Dadush, Shashwat Garg, and Shachar Lovett (Eindhoven University of Technology, Netherlands; CWI, Netherlands; University of California at San Diego, USA) An important result in discrepancy due to Banaszczyk states that for any set of n vectors in ℝ^{m} of ℓ_{2} norm at most 1 and any convex body K in ℝ^{m} of Gaussian measure at least half, there exists a ±1 combination of these vectors which lies in 5K. This result implies the best known bounds for several problems in discrepancy. Banaszczyk’s proof of this result is non-constructive, and an open problem has been to give an efficient algorithm to find such a ±1 combination of the vectors. In this paper, we resolve this question and give an efficient randomized algorithm to find a ±1 combination of the vectors which lies in cK for c>0 an absolute constant. This leads to new efficient algorithms for several problems in discrepancy theory.
STOC '18: "Near-Optimal Linear Decision Trees ..."
Near-Optimal Linear Decision Trees for k-SUM and Related Problems
Daniel M. Kane, Shachar Lovett, and Shay Moran (University of California at San Diego, USA; Institute for Advanced Study at Princeton, USA) We construct near-optimal linear decision trees for a variety of decision problems in combinatorics and discrete geometry. For example, for any constant k, we construct linear decision trees that solve the k-SUM problem on n elements using O(n log^{2} n) linear queries. Moreover, the queries we use are comparison queries, which compare the sums of two k-subsets; when viewed as linear queries, comparison queries are 2k-sparse and have only {−1,0,1} coefficients. We give similar constructions for sorting sumsets A+B and for solving the SUBSET-SUM problem, both with an optimal number of queries, up to polylogarithmic terms. Our constructions are based on the notion of “inference dimension”, recently introduced by the authors in the context of active classification with comparison queries.
This can be viewed as another contribution to the fruitful link between machine learning and discrete geometry, which goes back to the discovery of the VC dimension. 

Lu, Pinyan 
STOC '18: "Counting Hypergraph Colourings ..."
Counting Hypergraph Colourings in the Local Lemma Regime
Heng Guo, Chao Liao, Pinyan Lu, and Chihao Zhang (University of Edinburgh, UK; Shanghai Jiao Tong University, China; Shanghai University of Finance and Economics, China; Chinese University of Hong Kong, China) We give a fully polynomial-time approximation scheme (FPTAS) to count the number of q-colorings for k-uniform hypergraphs with maximum degree Δ if k ≥ 28 and q > 315Δ^{14/(k−14)}. We also obtain a polynomial-time almost uniform sampler if q > 798Δ^{16/(k−16/3)}. These are the first approximate counting and sampling algorithms in the regime q ≪ Δ (for large Δ and k) without any additional assumptions. Our method is based on the recent work of Moitra (STOC, 2017). One important contribution of ours is to remove the dependency of k and Δ in Moitra’s approach.

Lykouris, Thodoris 
STOC '18: "Stochastic Bandits Robust ..."
Stochastic Bandits Robust to Adversarial Corruptions
Thodoris Lykouris, Vahab Mirrokni, and Renato Paes Leme (Cornell University, USA; Google Research, USA) We introduce a new model of stochastic bandits with adversarial corruptions which aims to capture settings where most of the input follows a stochastic pattern but some fraction of it can be adversarially changed to trick the algorithm, e.g., click fraud, fake reviews, and email spam. The goal of this model is to encourage the design of bandit algorithms that (i) work well in mixed adversarial and stochastic models, and (ii) whose performance deteriorates gracefully as we move from fully stochastic to fully adversarial models. In our model, the rewards for all arms are initially drawn from a distribution and are then altered by an adaptive adversary. We provide a simple algorithm whose performance gracefully degrades with the total corruption the adversary injected into the data, measured by the sum across rounds of the biggest alteration the adversary made in the data in that round; this total corruption is denoted by C. Our algorithm retains the optimal guarantee (up to a logarithmic term) if the input is stochastic, and its performance degrades linearly with the amount of corruption C, while crucially being agnostic to it. We also provide a lower bound showing that this linear degradation is necessary if the algorithm achieves optimal performance in the stochastic setting (the lower bound works even for a known amount of corruption, a special case in which our algorithm achieves optimal performance without the extra logarithm).
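The corruption budget C used above has a simple operational definition; a sketch (ours) of how it would be measured against a log of pre- and post-corruption rewards:

```python
def total_corruption(stochastic, observed):
    """C = sum over rounds of the largest per-arm alteration in that round.
    stochastic[t][a] / observed[t][a]: reward of arm a at round t before /
    after the adversary's changes."""
    return sum(
        max(abs(o - s) for s, o in zip(sto_round, obs_round))
        for sto_round, obs_round in zip(stochastic, observed)
    )
```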

Lysikov, Vladimir 
STOC '18: "Generalized Matrix Completion ..."
Generalized Matrix Completion and Algebraic Natural Proofs
Markus Bläser, Christian Ikenmeyer, Gorav Jindal, and Vladimir Lysikov (Saarland University, Germany; Max Planck Institute for Informatics, Germany) Algebraic natural proofs were recently introduced by Forbes, Shpilka and Volk (Proc. of the 49th Annual ACM SIGACT Symposium on Theory of Computing (STOC), pages 653–664, 2017) and independently by Grochow, Kumar, Saks and Saraf (CoRR, abs/1701.01717, 2017) as an attempt to transfer Razborov and Rudich’s famous barrier result (J. Comput. Syst. Sci., 55(1): 24–35, 1997) for Boolean circuit complexity to algebraic complexity theory. Razborov and Rudich’s barrier result relies on a widely believed assumption, namely, the existence of pseudorandom generators. Unfortunately, there is no known analogous theory of pseudorandomness in the algebraic setting. Therefore, Forbes et al. use a concept called succinct hitting sets instead. This assumption is related to polynomial identity testing, but it is currently not clear how plausible this assumption is. Forbes et al. are only able to construct succinct hitting sets against rather weak models of arithmetic circuits. Generalized matrix completion is the following problem: given a matrix with affine linear forms as entries, find an assignment to the variables in the linear forms such that the rank of the resulting matrix is minimal. We call this rank the completion rank. Computing the completion rank is an NP-hard problem. As our first main result, we prove that it is also NP-hard to determine whether a given matrix can be approximated by matrices of completion rank ≤ b. The minimum quantity b for which this is possible is called the border completion rank (similar to the border rank of tensors). Naturally, algebraic natural proofs can only prove lower bounds for such border complexity measures. Furthermore, these border complexity measures play an important role in the geometric complexity theory program.
Using our hardness result above, we can prove the following barrier: we construct a small family of matrices with affine linear forms as entries and a bound b, such that at least one of these matrices does not have an algebraic natural proof of polynomial size against all matrices of border completion rank b, unless coNP ⊆ ∃BPP. This is an algebraic barrier result that is based on a well-established and widely believed conjecture. The complexity class ∃BPP is known to be a subset of the more well-known complexity class MA. Thus ∃BPP can be replaced by MA in the statements of all our results. With similar techniques, we can also prove that tensor rank is hard to approximate. Furthermore, we prove a similar result for the variety of matrices with permanent zero. There are no algebraic polynomial-size natural proofs for the variety of matrices with permanent zero, unless P^{#P} ⊆ ∃BPP. On the other hand, we are able to prove that the geometric complexity theory approach initiated by Mulmuley and Sohoni (SIAM J. Comput. 31(2): 496–526, 2001) yields proofs of polynomial size for this variety, therefore overcoming the natural proofs barrier in this case.
STOC '18: "On the Complexity of Hazard-Free ..."
On the Complexity of Hazard-Free Circuits
Christian Ikenmeyer, Balagopal Komarath, Christoph Lenzen, Vladimir Lysikov, Andrey Mokhov, and Karteek Sreenivasaiah (Max Planck Institute for Informatics, Germany; Saarland University, Germany; M2CI, Germany; Newcastle University, UK; IIT Hyderabad, India)
The problem of constructing hazard-free Boolean circuits dates back to the 1940s and is an important problem in circuit design. Our main lower-bound result unconditionally shows the existence of functions whose circuit complexity is polynomially bounded while every hazard-free implementation is provably of exponential size. Previous lower bounds on the hazard-free complexity were only valid for depth-2 circuits. The same proof method yields that every subcubic implementation of Boolean matrix multiplication must have hazards. These results follow from a crucial structural insight: hazard-free complexity is a natural generalization of monotone complexity to all (not necessarily monotone) Boolean functions. Thus, we can apply known monotone complexity lower bounds to find lower bounds on the hazard-free complexity. We also lift these methods from the monotone setting to prove exponential hazard-free complexity lower bounds for non-monotone functions. As our main upper-bound result, we show how to efficiently convert a Boolean circuit into a bounded-bit hazard-free circuit with only a polynomially large blowup in the number of gates. Previously, the best known method yielded exponentially large circuits in the worst case, so our algorithm gives an exponential improvement. As a side result, we establish the NP-completeness of several hazard detection problems.
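Hazards have a crisp semantic definition via Kleene's three-valued logic, with 'u' marking an unstable wire: a circuit is hazard-free if its ternary evaluation outputs a definite 0/1 whenever all 0/1-resolutions of the u-inputs agree. A sketch (ours) exhibiting the textbook hazardous multiplexer:

```python
def t_not(a):
    return a if a == 'u' else 1 - a

def t_and(a, b):
    if a == 0 or b == 0:
        return 0
    return 1 if (a, b) == (1, 1) else 'u'

def t_or(a, b):
    if a == 1 or b == 1:
        return 1
    return 0 if (a, b) == (0, 0) else 'u'

def mux(s, a, b):
    """out = (s AND a) OR (NOT s AND b): selects a when s=1, b when s=0."""
    return t_or(t_and(s, a), t_and(t_not(s), b))
```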
@InProceedings{STOC18p878,
author = {Christian Ikenmeyer and Balagopal Komarath and Christoph Lenzen and Vladimir Lysikov and Andrey Mokhov and Karteek Sreenivasaiah},
title = {On the Complexity of HazardFree Circuits},
booktitle = {Proc.\ STOC},
publisher = {ACM},
pages = {878--889},
doi = {10.1145/3188745.3188912},
year = {2018},
}


Ma, Tengyu 
STOC '18 Keynote: "Generalization and Equilibrium ..."
Generalization and Equilibrium in Generative Adversarial Nets (GANs) (Invited Talk)
Tengyu Ma (Stanford University, USA)
Generative Adversarial Networks (GANs) have become one of the dominant methods for fitting generative models to complicated real-life data, and have even found unusual uses such as designing good cryptographic primitives. In this talk, we will first introduce the basics of GANs and then discuss the fundamental statistical question about GANs — assuming the training can succeed with polynomial samples, can we have any statistical guarantees for the estimated distributions? In work with Arora, Ge, Liang, and Zhang, we suggested a dilemma: powerful discriminators cause overfitting, whereas weak discriminators cannot detect mode collapse. Such a conundrum may be solved or alleviated by designing a discriminator class with strong distinguishing power against the particular generator class (instead of against all possible generators).
@InProceedings{STOC18p2,
author = {Tengyu Ma},
title = {Generalization and Equilibrium in Generative Adversarial Nets (GANs) (Invited Talk)},
booktitle = {Proc.\ STOC},
publisher = {ACM},
pages = {2--2},
doi = {10.1145/3188745.3232194},
year = {2018},
}


Mądry, Aleksander 
STOC '18: "k-Server via Multiscale Entropic ..."
k-Server via Multiscale Entropic Regularization
Sébastien Bubeck, Michael B. Cohen, Yin Tat Lee, James R. Lee, and Aleksander Mądry (Microsoft Research, USA; Massachusetts Institute of Technology, USA; University of Washington, USA) We present an O((logk)^{2})-competitive randomized algorithm for the k-server problem on hierarchically separated trees (HSTs). This is the first o(k)-competitive randomized algorithm for which the competitive ratio is independent of the size of the underlying HST. Our algorithm is designed in the framework of online mirror descent, where the mirror map is a multiscale entropy. When combined with Bartal’s static HST embedding reduction, this leads to an O((logk)^{2} logn)-competitive algorithm on any n-point metric space. We give a new dynamic HST embedding that yields an O((logk)^{3} logΔ)-competitive algorithm on any metric space where the ratio of the largest to smallest nonzero distance is at most Δ.
STOC '18: "Round Compression for Parallel ..."
Round Compression for Parallel Matching Algorithms
Artur Czumaj, Jakub Łącki, Aleksander Mądry, Slobodan Mitrović, Krzysztof Onak, and Piotr Sankowski (University of Warwick, UK; Google Research, USA; Massachusetts Institute of Technology, USA; EPFL, Switzerland; IBM Research, USA; University of Warsaw, Poland) For over a decade now we have been witnessing the success of massive parallel computation (MPC) frameworks, such as MapReduce, Hadoop, Dryad, or Spark. One of the reasons for their success is the fact that these frameworks are able to accurately capture the nature of large-scale computation. In particular, compared to the classic distributed algorithms or PRAM models, these frameworks allow for much more local computation. The fundamental question that arises in this context is though: can we leverage this additional power to obtain even faster parallel algorithms? A prominent example here is the maximum matching problem—one of the most classic graph problems. It is well known that in the PRAM model one can compute a 2-approximate maximum matching in O(logn) rounds.
However, the exact complexity of this problem in the MPC framework is still far from understood. Lattanzi et al. (SPAA 2011) showed that if each machine has n^{1+Ω(1)} memory, this problem can also be solved 2-approximately in a constant number of rounds. These techniques, as well as the approaches developed in the follow-up work, seem though to get stuck in a fundamental way at roughly O(logn) rounds once we enter the (at most) near-linear memory regime. It is thus entirely possible that in this regime, which captures in particular the case of sparse graph computations, the best MPC round complexity matches what one can already get in the PRAM model, without the need to take advantage of the extra local computation power. In this paper, we finally refute that possibility. That is, we break the above O(logn) round complexity bound even in the case of slightly sublinear memory per machine. In fact, our improvement here is almost exponential: we are able to deliver a (2+є)-approximate maximum matching, for any fixed constant є>0, in O((loglogn)^{2}) rounds. To establish our result we need to deviate from the previous work in two important ways that are crucial for exploiting the power of the MPC model, as compared to the PRAM model. Firstly, we use vertex-based graph partitioning, instead of the edge-based approaches that were utilized so far. Secondly, we develop a technique of round compression. This technique enables one to take a (distributed) algorithm that computes an O(1)-approximation of maximum matching in O(logn) independent PRAM phases and implement a super-constant number of these phases in only a constant number of MPC rounds.
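The 2-approximate matching baseline mentioned above is, in the sequential world, just the greedy maximal matching (sketch ours; the paper's contribution is compressing O(logn) distributed phases into O(1) MPC rounds, which this snippet does not attempt):

```python
def greedy_maximal_matching(edges):
    """Scan edges once, adding any edge whose endpoints are both unmatched.
    The result is maximal, hence a 2-approximation to maximum matching."""
    matched, M = set(), []
    for u, v in edges:
        if u not in matched and v not in matched:
            M.append((u, v))
            matched.update((u, v))
    return M
```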

Mahabadi, Sepideh 
STOC '18: "Nonlinear Dimension Reduction ..."
Nonlinear Dimension Reduction via Outer Bi-Lipschitz Extensions
Sepideh Mahabadi, Konstantin Makarychev, Yury Makarychev, and Ilya Razenshteyn (Columbia University, USA; Northwestern University, USA; Toyota Technological Institute at Chicago, USA; Microsoft Research, USA) We introduce and study the notion of *an outer bi-Lipschitz extension* of a map between Euclidean spaces. The notion is a natural analogue of the notion of *a Lipschitz extension* of a Lipschitz map. We show that for every map f there exists an outer bi-Lipschitz extension f′ whose distortion is greater than that of f by at most a constant factor. This result can be seen as a counterpart of the classic Kirszbraun theorem for outer bi-Lipschitz extensions. We also study outer bi-Lipschitz extensions of near-isometric maps and show upper and lower bounds for them. Then, we present applications of our results to prioritized and terminal dimension reduction problems, described next. We prove a *prioritized* variant of the Johnson–Lindenstrauss lemma: given a set of points X ⊂ ℝ^{d} of size N and a permutation ("priority ranking") of X, there exists an embedding f of X into ℝ^{O(log N)} with distortion O(log log N) such that the point of rank j has only O(log^{3+ε} j) nonzero coordinates; more specifically, all but the first O(log^{3+ε} j) coordinates are equal to 0, and the distortion of f restricted to the first j points (according to the ranking) is at most O(log log j). This result makes progress toward answering an open question of Elkin, Filtser, and Neiman about prioritized dimension reductions. We also prove that given a set X of N points in ℝ^{d}, there exists a *terminal* dimension reduction embedding of ℝ^{d} into ℝ^{d′}, where d′ = O(log N/ε^{4}), which preserves the distances x−y between points x ∈ X and y ∈ ℝ^{d}, up to a multiplicative factor of 1 ± ε. This improves a recent result of Elkin, Filtser, and Neiman. The dimension reductions that we obtain are nonlinear, and this nonlinearity is necessary. 
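For intuition, the classical linear Johnson–Lindenstrauss map that this result builds on can be illustrated with a plain Gaussian projection (a minimal sketch only; the dimensions and tolerance below are illustrative, and this is not the paper's nonlinear prioritized or terminal embedding):

```python
import numpy as np

def jl_project(points, target_dim, seed=0):
    """Embed points into target_dim dimensions with a random Gaussian map,
    scaled so that squared Euclidean norms are preserved in expectation."""
    rng = np.random.default_rng(seed)
    d = points.shape[1]
    G = rng.normal(0.0, 1.0 / np.sqrt(target_dim), size=(target_dim, d))
    return points @ G.T

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 1000))   # N = 20 points in R^1000
Y = jl_project(X, 64)             # target dimension 64, illustrative only

# Multiplicative distortion over all pairs of points.
ratios = [np.linalg.norm(Y[i] - Y[j]) / np.linalg.norm(X[i] - X[j])
          for i in range(20) for j in range(i + 1, 20)]
print(round(min(ratios), 2), round(max(ratios), 2))
```

With the seed fixed, all pairwise distance ratios stay well inside a constant band, which is the behavior the prioritized and terminal variants refine.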

Makarychev, Konstantin 
STOC '18: "Nonlinear Dimension Reduction ..."
Nonlinear Dimension Reduction via Outer Bi-Lipschitz Extensions
Sepideh Mahabadi, Konstantin Makarychev, Yury Makarychev, and Ilya Razenshteyn (Columbia University, USA; Northwestern University, USA; Toyota Technological Institute at Chicago, USA; Microsoft Research, USA) We introduce and study the notion of *an outer bi-Lipschitz extension* of a map between Euclidean spaces. The notion is a natural analogue of the notion of *a Lipschitz extension* of a Lipschitz map. We show that for every map f there exists an outer bi-Lipschitz extension f′ whose distortion is greater than that of f by at most a constant factor. This result can be seen as a counterpart of the classic Kirszbraun theorem for outer bi-Lipschitz extensions. We also study outer bi-Lipschitz extensions of near-isometric maps and show upper and lower bounds for them. Then, we present applications of our results to prioritized and terminal dimension reduction problems, described next. We prove a *prioritized* variant of the Johnson–Lindenstrauss lemma: given a set of points X ⊂ ℝ^{d} of size N and a permutation ("priority ranking") of X, there exists an embedding f of X into ℝ^{O(log N)} with distortion O(log log N) such that the point of rank j has only O(log^{3+ε} j) nonzero coordinates; more specifically, all but the first O(log^{3+ε} j) coordinates are equal to 0, and the distortion of f restricted to the first j points (according to the ranking) is at most O(log log j). This result makes progress toward answering an open question of Elkin, Filtser, and Neiman about prioritized dimension reductions. We also prove that given a set X of N points in ℝ^{d}, there exists a *terminal* dimension reduction embedding of ℝ^{d} into ℝ^{d′}, where d′ = O(log N/ε^{4}), which preserves the distances x−y between points x ∈ X and y ∈ ℝ^{d}, up to a multiplicative factor of 1 ± ε. This improves a recent result of Elkin, Filtser, and Neiman. The dimension reductions that we obtain are nonlinear, and this nonlinearity is necessary. 

Makarychev, Yury 
STOC '18: "Nonlinear Dimension Reduction ..."
Nonlinear Dimension Reduction via Outer Bi-Lipschitz Extensions
Sepideh Mahabadi, Konstantin Makarychev, Yury Makarychev, and Ilya Razenshteyn (Columbia University, USA; Northwestern University, USA; Toyota Technological Institute at Chicago, USA; Microsoft Research, USA) We introduce and study the notion of *an outer bi-Lipschitz extension* of a map between Euclidean spaces. The notion is a natural analogue of the notion of *a Lipschitz extension* of a Lipschitz map. We show that for every map f there exists an outer bi-Lipschitz extension f′ whose distortion is greater than that of f by at most a constant factor. This result can be seen as a counterpart of the classic Kirszbraun theorem for outer bi-Lipschitz extensions. We also study outer bi-Lipschitz extensions of near-isometric maps and show upper and lower bounds for them. Then, we present applications of our results to prioritized and terminal dimension reduction problems, described next. We prove a *prioritized* variant of the Johnson–Lindenstrauss lemma: given a set of points X ⊂ ℝ^{d} of size N and a permutation ("priority ranking") of X, there exists an embedding f of X into ℝ^{O(log N)} with distortion O(log log N) such that the point of rank j has only O(log^{3+ε} j) nonzero coordinates; more specifically, all but the first O(log^{3+ε} j) coordinates are equal to 0, and the distortion of f restricted to the first j points (according to the ranking) is at most O(log log j). This result makes progress toward answering an open question of Elkin, Filtser, and Neiman about prioritized dimension reductions. We also prove that given a set X of N points in ℝ^{d}, there exists a *terminal* dimension reduction embedding of ℝ^{d} into ℝ^{d′}, where d′ = O(log N/ε^{4}), which preserves the distances x−y between points x ∈ X and y ∈ ℝ^{d}, up to a multiplicative factor of 1 ± ε. This improves a recent result of Elkin, Filtser, and Neiman. The dimension reductions that we obtain are nonlinear, and this nonlinearity is necessary. 

Manurangsi, Pasin 
STOC '18: "On the Parameterized Complexity ..."
On the Parameterized Complexity of Approximating Dominating Set
Karthik C. S., Bundit Laekhanukit, and Pasin Manurangsi (Weizmann Institute of Science, Israel; Max Planck Institute for Informatics, Germany; Shanghai University of Finance and Economics, China; University of California at Berkeley, USA) We study the parameterized complexity of approximating the k-Dominating Set (DomSet) problem, where an integer k and a graph G on n vertices are given as input, and the goal is to find a dominating set of size at most F(k) · k whenever the graph G has a dominating set of size k. When such an algorithm runs in time T(k) · poly(n) (i.e., FPT time) for some computable function T, it is said to be an F(k)-FPT-approximation algorithm for k-DomSet. Whether such an algorithm exists is listed in the seminal book of Downey and Fellows (2013) as one of the "most infamous" open problems in parameterized complexity. This work gives an almost complete answer to this question by showing the nonexistence of such an algorithm under W[1] ≠ FPT and further providing tighter running-time lower bounds under stronger hypotheses. Specifically, we prove the following for every computable functions T, F and every constant ε > 0: (i) Assuming W[1] ≠ FPT, there is no F(k)-FPT-approximation algorithm for k-DomSet. (ii) Assuming the Exponential Time Hypothesis (ETH), there is no F(k)-approximation algorithm for k-DomSet that runs in T(k) · n^{o(k)} time. (iii) Assuming the Strong Exponential Time Hypothesis (SETH), for every integer k ≥ 2, there is no F(k)-approximation algorithm for k-DomSet that runs in T(k) · n^{k − ε} time. (iv) Assuming the k-SUM Hypothesis, for every integer k ≥ 3, there is no F(k)-approximation algorithm for k-DomSet that runs in T(k) · n^{⌈ k/2 ⌉ − ε} time. Previously, only constant-ratio FPT-approximation algorithms were ruled out under W[1] ≠ FPT, and (log^{1/4 − ε} k)-FPT-approximation algorithms were ruled out under ETH [Chen and Lin, FOCS 2016]. 
Recently, the nonexistence of an F(k)-FPT-approximation algorithm for any function F was shown under Gap-ETH [Chalermsook et al., FOCS 2017]. Note that, to the best of our knowledge, no running-time lower bound of the form n^{δ k} for any absolute constant δ > 0 was known before, even for any constant approximation ratio. Our results are obtained by establishing a connection between communication complexity and hardness of approximation, generalizing the ideas from a recent breakthrough work of Abboud et al. [FOCS 2017]. Specifically, we show that to prove hardness of approximation of a certain parameterized variant of the label cover problem, it suffices to devise a specific protocol for a communication problem that depends on which hypothesis we rely on. Each of these communication problems turns out to be either a well-studied problem or a variant of one; this allows us to easily apply known techniques to solve them. 

Marx, Dániel 
STOC '18: "A Framework for ETH-Tight ..."
A Framework for ETH-Tight Algorithms and Lower Bounds in Geometric Intersection Graphs
Mark de Berg, Hans L. Bodlaender, Sándor Kisfaludi-Bak, Dániel Marx, and Tom C. van der Zanden (Eindhoven University of Technology, Netherlands; Utrecht University, Netherlands; Hungarian Academy of Sciences, Hungary) We give an algorithmic and lower-bound framework that facilitates the construction of subexponential algorithms and matching conditional complexity bounds. It can be applied to a wide range of geometric intersection graphs (intersections of similarly sized fat objects), yielding algorithms with running time 2^{O(n^{1−1/d})} for any fixed dimension d ≥ 2 for many well-known graph problems, including Independent Set, r-Dominating Set for constant r, and Steiner Tree. For most problems, we get improved running times compared to prior work; in some cases, we give the first known subexponential algorithm in geometric intersection graphs. Additionally, most of the obtained algorithms work on the graph itself, i.e., do not require any geometric information. Our algorithmic framework is based on a weighted separator theorem and various treewidth techniques. The lower-bound framework is based on a constructive embedding of graphs into d-dimensional grids, and it allows us to derive matching 2^{Ω(n^{1−1/d})} lower bounds under the Exponential Time Hypothesis, even in the much more restricted class of d-dimensional induced grid graphs. 

Maus, Yannic 
STOC '18: "Deterministic Distributed ..."
Deterministic Distributed Edge-Coloring with Fewer Colors
Mohsen Ghaffari, Fabian Kuhn, Yannic Maus, and Jara Uitto (ETH Zurich, Switzerland; University of Freiburg, Germany) We present a deterministic distributed algorithm, in the LOCAL model, that computes a (1+o(1))Δ-edge-coloring in polylogarithmic time, so long as the maximum degree Δ = Ω(log n). For smaller Δ, we give a polylogarithmic-time 3Δ/2-edge-coloring. These are the first deterministic algorithms to go below the natural barrier of 2Δ−1 colors, and they improve significantly on the recent polylogarithmic-time (2Δ−1)·(1+o(1))-edge-coloring of Ghaffari and Su [SODA'17] and the (2Δ−1)-edge-coloring of Fischer, Ghaffari, and Kuhn [FOCS'17], positively answering the main open question of the latter. The key technical ingredient of our algorithm is a simple and novel gradual packing of judiciously chosen near-maximum matchings, each of which becomes one of the color classes. 
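The 2Δ−1 barrier mentioned above is the bound achieved by the trivial sequential algorithm: each edge shares an endpoint with at most 2Δ−2 other edges, so greedily assigning the smallest free color never needs more than 2Δ−1 colors. A minimal sequential sketch of that baseline (illustrative only, not the paper's distributed algorithm):

```python
from itertools import combinations

def greedy_edge_coloring(edges):
    """Give each edge the smallest color not used by any edge sharing an endpoint."""
    color = {}
    for u, v in edges:
        used = {color[f] for f in color if u in f or v in f}
        c = 0
        while c in used:
            c += 1
        color[(u, v)] = c
    return color

# Complete graph K5: maximum degree Delta = 4, so at most 2*4 - 1 = 7 colors.
edges = list(combinations(range(5), 2))
coloring = greedy_edge_coloring(edges)
assert max(coloring.values()) + 1 <= 7          # never exceeds 2*Delta - 1
for e, f in combinations(edges, 2):             # proper: adjacent edges differ
    if set(e) & set(f):
        assert coloring[e] != coloring[f]
print(max(coloring.values()) + 1, "colors used")
```

The paper's contribution is precisely to beat this 2Δ−1 bound deterministically and distributedly.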

Mehr, Mehran 
STOC '18: "Fast Fencing ..."
Fast Fencing
Mikkel Abrahamsen, Anna Adamaszek, Karl Bringmann, Vincent Cohen-Addad, Mehran Mehr, Eva Rotenberg, Alan Roytman, and Mikkel Thorup (University of Copenhagen, Denmark; Max Planck Institute for Informatics, Germany; Sorbonne University, France; UPMC, France; CNRS, France; LIP6, France; Eindhoven University of Technology, Netherlands; DTU, Denmark) We consider the very natural "fence enclosure" problems studied by Capoyleas, Rote, and Woeginger and by Arkin, Khuller, and Mitchell in the early 1990s. Given a set S of n points in the plane, we aim to find a set of closed curves such that (1) each point is enclosed by a curve and (2) the total length of the curves is minimized. We consider two main variants. In the first variant, we pay a unit cost per curve in addition to the total length of the curves. An equivalent formulation of this version is that we have to enclose n unit disks, paying only the total length of the enclosing curves. In the other variant, we are allowed to use at most k closed curves and pay no cost per curve. For the variant with at most k closed curves, we present an algorithm that is polynomial in both n and k. For the variant with unit cost per curve, or unit disks, we present a near-linear time algorithm. Capoyleas, Rote, and Woeginger solved the problem with at most k curves in n^{O(k)} time. Arkin, Khuller, and Mitchell used this to solve the unit-cost-per-curve version in exponential time. At the time, they conjectured that the problem with k curves is NP-hard for general k. Our polynomial-time algorithm refutes this conjecture unless P = NP. 
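For intuition about the base case: with a single curve (k = 1) and no per-curve cost, the shortest fence enclosing all points is the boundary of their convex hull. A minimal monotone-chain sketch of that case (illustrative only; the paper's algorithms handle general k and the per-curve cost):

```python
def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in counterclockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def fence_length(points):
    """Perimeter of the convex hull = optimal single-fence length."""
    hull = convex_hull(points)
    return sum(((hull[i][0]-hull[i-1][0])**2 + (hull[i][1]-hull[i-1][1])**2) ** 0.5
               for i in range(len(hull)))

# A unit square with an interior point: the optimal single fence has length 4.
pts = [(0, 0), (1, 0), (1, 1), (0, 1), (0.5, 0.5)]
print(fence_length(pts))  # 4.0
```

The difficulty the paper addresses is deciding how to partition points among up to k fences, not computing any single fence.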

Mehta, Ruta 
STOC '18: "Sum-of-Squares Meets Nash: ..."
Sum-of-Squares Meets Nash: Lower Bounds for Finding Any Equilibrium
Pravesh K. Kothari and Ruta Mehta (Princeton University, USA; Institute for Advanced Study at Princeton, USA; University of Illinois at Urbana-Champaign, USA) Computing a Nash equilibrium (NE) in a two-player game is a central question in algorithmic game theory. The main motivation of this work is to understand the power of the sum-of-squares method in computing equilibria, both exact and approximate. Previous works in this context have focused on the hardness of approximating "best" equilibria with respect to some natural quality measure on equilibria, such as social welfare. Such results, however, do not directly relate to the complexity of the problem of finding any equilibrium. In this work, we propose a framework of roundings for the sum-of-squares algorithm (and convex relaxations in general) applicable to finding approximate/exact equilibria in two-player bimatrix games. Specifically, we define the notion of oblivious roundings with verification oracle (OV). These are algorithms that can access a solution to the degree-d SoS relaxation to construct a list of candidate (partial) solutions and invoke a verification oracle to check if a candidate in the list gives an (exact or approximate) equilibrium. This framework captures most known approximation algorithms in combinatorial optimization, including the celebrated semidefinite-programming-based algorithms for Max-Cut and Constraint Satisfaction Problems, and the recent works on SoS relaxations for Unique Games/Small-Set Expansion, Best Separable State, and many problems in unsupervised machine learning. Our main results are strong unconditional lower bounds in this framework. Specifically, we show that for є = Θ(1/poly(n)), there is no algorithm that uses a degree-o(n) SoS relaxation to construct a 2^{o(n)}-size list of candidates and obtain an є-approximate NE. For some constant є, we show a similar result for degree-o(log(n)) SoS relaxations and list size n^{o(log(n))}. 
Our results can be seen as an unconditional confirmation, in our restricted algorithmic framework, of the recent Exponential Time Hypothesis for PPAD. Our proof strategy involves constructing a family of games that all share a common sum-of-squares solution, but every (approximate) equilibrium of any game is far from every equilibrium of any other game in the family (in either player's strategy). Along the way, we strengthen the classical unconditional lower bound against enumerative algorithms for finding approximate equilibria due to Daskalakis and Papadimitriou and the classical hardness of computing equilibria due to Gilboa and Zemel. 

Miltzow, Tillmann 
STOC '18: "The Art Gallery Problem Is ..."
The Art Gallery Problem Is ∃ℝ-Complete
Mikkel Abrahamsen, Anna Adamaszek, and Tillmann Miltzow (University of Copenhagen, Denmark; Université Libre de Bruxelles, Belgium) We prove that the art gallery problem is equivalent under polynomial-time reductions to deciding whether a system of polynomial equations over the real numbers has a solution. The art gallery problem is a classic problem in computational geometry, introduced in 1973 by Victor Klee. Given a simple polygon P and an integer k, the goal is to decide if there exists a set G of k guards within P such that every point p ∈ P is seen by at least one guard g ∈ G. Each guard corresponds to a point in the polygon P, and we say that a guard g sees a point p if the line segment pg is contained in P. The art gallery problem has stimulated extensive research in geometry and in algorithms. However, the complexity status of the art gallery problem had not been resolved. It has long been known that the problem is NP-hard, but no one had been able to show that it lies in NP. Recently, the computational geometry community became more aware of the complexity class ∃ℝ, which had been studied earlier by other communities. The class ∃ℝ consists of problems that can be reduced in polynomial time to the problem of deciding whether a system of polynomial equations with integer coefficients and any number of real variables has a solution. It is easy to see that NP ⊆ ∃ℝ. We prove that the art gallery problem is ∃ℝ-complete, implying that (1) any system of polynomial equations over the real numbers can be encoded as an instance of the art gallery problem, and (2) the art gallery problem is not in the complexity class NP unless NP = ∃ℝ. As an illustration of our techniques, we show that for every compact semialgebraic set S ⊂ [0,1]^{2} there exists a polygon with rational coordinates that enforces one of the guards to be at a position p ∈ S in any optimal guarding. 
As a corollary of our construction, we prove that for any real algebraic number α there is an instance of the art gallery problem where one of the coordinates of the guards equals α in any guard set of minimum cardinality. This rules out many natural geometric approaches to the problem, as it shows that any approach based on constructing a finite set of candidate points for placing guards has to include points with coordinates that are roots of polynomials of arbitrary degree. In the ∃ℝ-hardness proof for the art gallery problem we introduce a new ∃ℝ-complete problem, ETR-INV. We believe that this problem is of independent interest, as it can be used to obtain ∃ℝ-hardness proofs for other problems. In particular, ETR-INV has very recently been used to prove ∃ℝ-hardness of other geometric problems. 

Minzer, Dor 
STOC '18: "Towards a Proof of the 2-to-1 ..."
Towards a Proof of the 2-to-1 Games Conjecture?
Irit Dinur, Subhash Khot, Guy Kindler, Dor Minzer, and Muli Safra (Weizmann Institute of Science, Israel; New York University, USA; Hebrew University of Jerusalem, Israel; Tel Aviv University, Israel) We present a polynomial-time reduction from gap-3LIN to label cover with 2-to-1 constraints. In the "yes" case the fraction of satisfied constraints is at least 1−ε, and in the "no" case we show that this fraction is at most ε, assuming a certain (new) combinatorial hypothesis on the Grassmann graph. In other words, we describe a combinatorial hypothesis that implies the 2-to-1 conjecture with imperfect completeness. The companion paper [Dinur, Khot, Kindler, Minzer and Safra, STOC 2018] makes some progress towards proving this hypothesis. Our work builds on earlier work by a subset of the authors [Khot, Minzer and Safra, STOC 2017], where a slightly different hypothesis was used to obtain hardness of approximating vertex cover to within a factor of √2−ε. The most important implication of this work is (assuming the hypothesis) an NP-hardness gap of 1/2−ε vs. ε for unique games. In addition, we derive optimal NP-hardness for approximating the max-cut-gain problem, NP-hardness of coloring an almost 4-colorable graph with any constant number of colors, and the same √2−ε NP-hardness for approximate vertex cover that was already obtained based on a slightly different hypothesis. Recent progress towards proving our hypothesis [Barak, Kothari and Steurer, ECCC TR18-077], [Dinur, Khot, Kindler, Minzer and Safra, STOC 2018] directly implies some new unconditional NP-hardness results. These include new points of NP-hardness for unique games and for 2-to-1 and 2-to-2 games. More recently, the full version of our hypothesis was proven [Khot, Minzer and Safra, ECCC TR18-006]. 
Irit Dinur, Subhash Khot, Guy Kindler, Dor Minzer, and Muli Safra (Weizmann Institute of Science, Israel; New York University, USA; Hebrew University of Jerusalem, Israel; Tel Aviv University, Israel) We study the structure of non-expanding sets in the Grassmann graph. We put forth a hypothesis stating that every small set whose expansion is smaller than 1−δ must be correlated with one of a specified list of sets which are isomorphic to smaller Grassmann graphs. We develop a framework of Fourier analysis for analyzing functions over the Grassmann graph, and prove that our hypothesis holds for all sets whose expansion is below 7/8. In the companion paper [Dinur, Khot, Kindler, Minzer and Safra, STOC 2018], the authors show that a linearity agreement hypothesis implies an NP-hardness gap of 1/2−ε vs. ε for unique games and other inapproximability results. In [Barak, Kothari and Steurer, ECCC TR18-077], the authors show that the hypothesis in this work implies the linearity agreement hypothesis of [Dinur, Khot, Kindler, Minzer and Safra, STOC 2018]. Combined with our main theorem here, this proves a version of the linearity agreement hypothesis with certain specific parameters. Short of proving the entire hypothesis, this nevertheless suffices for obtaining new unconditional NP-hardness gaps for label cover with 2-to-1 and unique constraints. Our Expansion Hypothesis has subsequently been proved in its full form [Khot, Minzer and Safra, ECCC TR18-006], thereby proving the agreement hypothesis of [Dinur, Khot, Kindler, Minzer and Safra, STOC 2018] and completing the proof of the 2-to-1 Games Conjecture (albeit with imperfect completeness). 

Mirrokni, Vahab 
STOC '18: "Stochastic Bandits Robust ..."
Stochastic Bandits Robust to Adversarial Corruptions
Thodoris Lykouris, Vahab Mirrokni, and Renato Paes Leme (Cornell University, USA; Google Research, USA) We introduce a new model of stochastic bandits with adversarial corruptions, which aims to capture settings where most of the input follows a stochastic pattern but some fraction of it can be adversarially changed to trick the algorithm, e.g., click fraud, fake reviews, and email spam. The goal of this model is to encourage the design of bandit algorithms that (i) work well in mixed adversarial and stochastic models, and (ii) whose performance deteriorates gracefully as we move from fully stochastic to fully adversarial models. In our model, the rewards for all arms are initially drawn from a distribution and are then altered by an adaptive adversary. We provide a simple algorithm whose performance gracefully degrades with the total corruption the adversary injected into the data, measured by the sum over rounds of the biggest alteration the adversary made to the data in that round; this total corruption is denoted by C. Our algorithm retains the optimal guarantee (up to a logarithmic term) if the input is stochastic, and its performance degrades linearly with the amount of corruption C, while crucially being agnostic to it. We also provide a lower bound showing that this linear degradation is necessary if the algorithm achieves optimal performance in the stochastic setting (the lower bound holds even for a known amount of corruption, a special case in which our algorithm achieves optimal performance without the extra logarithm). 
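In the purely stochastic case the classical UCB1 index policy already achieves the optimal logarithmic-regret guarantee the abstract alludes to; the paper's question is what survives under corruption. A minimal sketch of standard UCB1 on Bernoulli arms (illustrative only, not the paper's corruption-robust algorithm; the arm means below are invented for the demo):

```python
import math
import random

def ucb1(means, rounds, seed=0):
    """Run the classical UCB1 index policy on Bernoulli arms with the given means."""
    rng = random.Random(seed)
    k = len(means)
    pulls, total = [0] * k, [0.0] * k
    for t in range(1, rounds + 1):
        if t <= k:
            i = t - 1                      # initialization: pull each arm once
        else:                              # pick arm maximizing mean + confidence bonus
            i = max(range(k),
                    key=lambda a: total[a] / pulls[a]
                    + math.sqrt(2 * math.log(t) / pulls[a]))
        reward = 1.0 if rng.random() < means[i] else 0.0
        pulls[i] += 1
        total[i] += reward
    return pulls

pulls = ucb1([0.9, 0.5, 0.1], rounds=3000)
print(pulls)   # the 0.9 arm accumulates the vast majority of pulls
```

An adaptive adversary corrupting rewards can make exactly this kind of index policy commit to a bad arm, which is what motivates the paper's robust variant.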

Mitrović, Slobodan 
STOC '18: "Round Compression for Parallel ..."
Round Compression for Parallel Matching Algorithms
Artur Czumaj, Jakub Łącki, Aleksander Mądry, Slobodan Mitrović, Krzysztof Onak, and Piotr Sankowski (University of Warwick, UK; Google Research, USA; Massachusetts Institute of Technology, USA; EPFL, Switzerland; IBM Research, USA; University of Warsaw, Poland) For over a decade now we have been witnessing the success of massive parallel computation (MPC) frameworks, such as MapReduce, Hadoop, Dryad, or Spark. One of the reasons for their success is that these frameworks accurately capture the nature of large-scale computation. In particular, compared to the classic distributed algorithms or PRAM models, these frameworks allow for much more local computation. The fundamental question that arises in this context is: can we leverage this additional power to obtain even faster parallel algorithms? A prominent example here is the maximum matching problem, one of the most classic graph problems. It is well known that in the PRAM model one can compute a 2-approximate maximum matching in O(log n) rounds. However, the exact complexity of this problem in the MPC framework is still far from understood. Lattanzi et al. (SPAA 2011) showed that if each machine has n^{1+Ω(1)} memory, this problem can also be solved 2-approximately in a constant number of rounds. These techniques, as well as the approaches developed in the follow-up work, seem, though, to get stuck in a fundamental way at roughly O(log n) rounds once we enter the (at most) near-linear memory regime. It is thus entirely possible that in this regime, which captures in particular the case of sparse graph computations, the best MPC round complexity matches what one can already get in the PRAM model, without the need to take advantage of the extra local computation power. In this paper, we finally refute that possibility. That is, we break the above O(log n) round complexity bound even in the case of slightly sublinear memory per machine. 
In fact, our improvement here is almost exponential: we are able to deliver a (2+є)-approximate maximum matching, for any fixed constant є>0, in O((log log n)^{2}) rounds. To establish our result we need to deviate from the previous work in two important ways that are crucial for exploiting the power of the MPC model, as compared to the PRAM model. Firstly, we use vertex-based graph partitioning, instead of the edge-based approaches that were utilized so far. Secondly, we develop a technique of round compression. This technique enables one to take a (distributed) algorithm that computes an O(1)-approximation of maximum matching in O(log n) independent PRAM phases and implement a super-constant number of these phases in only a constant number of MPC rounds. 
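For reference, the 2-approximation baseline both models share is elementary: any maximal matching is a 2-approximate maximum matching, since each chosen edge can block at most two edges of an optimal matching. A sequential sketch of that baseline (illustrative only; the paper's contribution is achieving this guarantee in O((log log n)^{2}) MPC rounds):

```python
def greedy_maximal_matching(edges):
    """Scan edges once; keep an edge iff both endpoints are still free.
    The result is maximal, hence a 2-approximate maximum matching."""
    matched = set()
    matching = []
    for u, v in edges:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching

# Path 0-1-2-3: greedy picks (0,1), skips (1,2), picks (2,3).
print(greedy_maximal_matching([(0, 1), (1, 2), (2, 3)]))  # [(0, 1), (2, 3)]
```

The sequential scan is inherently O(n); round compression is about simulating many phases of a distributed analogue of this idea in a constant number of MPC rounds.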

Mömke, Tobias 
STOC '18: "A (5/3 + ε)-Approximation ..."
A (5/3 + ε)-Approximation for Unsplittable Flow on a Path: Placing Small Tasks into Boxes
Fabrizio Grandoni, Tobias Mömke, Andreas Wiese, and Hang Zhou (University of Lugano, Switzerland; University of Bremen, Germany; Saarland University, Germany; University of Chile, Chile; École Polytechnique, France) In the unsplittable flow on a path problem (UFP) we are given a path with edge capacities and a collection of tasks. Each task is characterized by a subpath, a profit, and a demand. Our goal is to compute a maximum-profit subset of tasks such that, for each edge e, the total demand of the selected tasks that use e does not exceed the capacity of e. The current best polynomial-time approximation factor for this problem is 2+є for any constant є>0 [Anagnostopoulos et al., SODA 2014]. This is the best known factor even in the case of uniform edge capacities [Călinescu et al., IPCO 2002, TALG 2011]. These results, like most prior work, are based on a partition of tasks into large and small depending on their ratio of demand to capacity over their respective edges: these algorithms invoke (1+є)-approximations for large and small tasks separately. The known techniques do not seem able to combine a big fraction of large and small tasks together (apart from some special cases and quasi-polynomial-time algorithms). The main contribution of this paper is to overcome this critical barrier. Namely, we present a polynomial-time algorithm that obtains roughly all profit from the optimal large tasks plus one third of the profit from the optimal small tasks. In combination with known results, this implies a polynomial-time (5/3+є)-approximation algorithm for UFP. Our algorithm is based on two main ingredients. First, we prove that there exist certain suboptimal solutions where, roughly speaking, small tasks are packed into boxes. To prove that such solutions can yield high profit we introduce a horizontal slicing lemma which yields a novel geometric interpretation of certain solutions. The resulting boxed structure has polynomial complexity, hence cannot be guessed directly. 
Therefore, our second contribution is a dynamic program that guesses this structure (plus a packing of large and small tasks) on the fly, while losing at most one third of the profit of the remaining small tasks. 

Mokhov, Andrey 
STOC '18: "On the Complexity of Hazard-Free ..."
On the Complexity of Hazard-Free Circuits
Christian Ikenmeyer, Balagopal Komarath, Christoph Lenzen, Vladimir Lysikov, Andrey Mokhov, and Karteek Sreenivasaiah (Max Planck Institute for Informatics, Germany; Saarland University, Germany; M2CI, Germany; Newcastle University, UK; IIT Hyderabad, India)
The problem of constructing hazard-free Boolean circuits dates back to the 1940s and is an important problem in circuit design. Our main lower-bound result unconditionally shows the existence of functions whose circuit complexity is polynomially bounded while every hazard-free implementation is provably of exponential size. Previous lower bounds on hazard-free complexity were only valid for depth-2 circuits. The same proof method yields that every subcubic implementation of Boolean matrix multiplication must have hazards. These results follow from a crucial structural insight: hazard-free complexity is a natural generalization of monotone complexity to all (not necessarily monotone) Boolean functions. Thus, we can apply known monotone complexity lower bounds to find lower bounds on hazard-free complexity. We also lift these methods from the monotone setting to prove exponential hazard-free complexity lower bounds for non-monotone functions. As our main upper-bound result we show how to efficiently convert a Boolean circuit into a bounded-bit hazard-free circuit with only a polynomially large blow-up in the number of gates. Previously, the best known method yielded exponentially large circuits in the worst case, so our algorithm gives an exponential improvement. As a side result we establish the NP-completeness of several hazard detection problems.
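Hazards are commonly exhibited via three-valued (Kleene) simulation, where 'U' marks an unstable wire: a circuit has a hazard on an input if the simulation outputs 'U' even though every stable completion of the input yields the same value. The classic multiplexer example below is a textbook illustration (not taken from the paper):

```python
# Kleene three-valued logic over {0, 1, 'U'} ('U' = unstable/unknown).
def NOT(a):
    return {0: 1, 1: 0}.get(a, 'U')

def AND(a, b):
    if a == 0 or b == 0:
        return 0
    return 1 if (a, b) == (1, 1) else 'U'

def OR(a, b):
    if a == 1 or b == 1:
        return 1
    return 0 if (a, b) == (0, 0) else 'U'

def mux(s, a, b):
    """Standard 2-way multiplexer: (s AND b) OR (NOT s AND a)."""
    return OR(AND(s, b), AND(NOT(s), a))

# Stable select lines behave as expected ...
assert mux(0, 1, 1) == 1 and mux(1, 1, 1) == 1
# ... but an unstable select with a = b = 1 yields 'U', although both stable
# values of s give output 1: this implementation has a logic hazard.
print(mux('U', 1, 1))  # prints U
```

Adding the redundant term (a AND b) to the multiplexer is the standard way to remove this particular hazard, at the cost of extra gates; the paper's lower bounds show that in general this cost can be exponential.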
@InProceedings{STOC18p878,
author = {Christian Ikenmeyer and Balagopal Komarath and Christoph Lenzen and Vladimir Lysikov and Andrey Mokhov and Karteek Sreenivasaiah},
title = {On the Complexity of Hazard-Free Circuits},
booktitle = {Proc.\ STOC},
publisher = {ACM},
pages = {878--889},
doi = {10.1145/3188745.3188912},
year = {2018},
}


Moran, Shay 
STOC '18: "Near-Optimal Linear Decision ..."
Near-Optimal Linear Decision Trees for k-SUM and Related Problems
Daniel M. Kane, Shachar Lovett, and Shay Moran (University of California at San Diego, USA; Institute for Advanced Study at Princeton, USA) We construct near-optimal linear decision trees for a variety of decision problems in combinatorics and discrete geometry. For example, for any constant k, we construct linear decision trees that solve the k-SUM problem on n elements using O(n log^{2} n) linear queries. Moreover, the queries we use are comparison queries, which compare the sums of two k-subsets; when viewed as linear queries, comparison queries are 2k-sparse and have only {−1,0,1} coefficients. We give similar constructions for sorting sumsets A+B and for solving the SUBSET-SUM problem, both with an optimal number of queries, up to polylogarithmic terms. Our constructions are based on the notion of "inference dimension", recently introduced by the authors in the context of active classification with comparison queries. This can be viewed as another contribution to the fruitful link between machine learning and discrete geometry, which goes back to the discovery of the VC dimension. 
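The flavor of comparison-based solutions is easiest to see for k = 2: after sorting, a two-pointer scan decides 2-SUM using only order comparisons on pairwise sums. A minimal sketch of that special case (illustrative only; the paper's decision trees handle general k with O(n log^{2} n) queries):

```python
def two_sum_decision(xs, target=0):
    """Decide whether some pair xs[i] + xs[j] (i < j) equals target,
    using only order comparisons involving pairwise sums."""
    xs = sorted(xs)
    i, j = 0, len(xs) - 1
    while i < j:
        s = xs[i] + xs[j]
        if s == target:
            return True
        if s < target:       # comparison query on a pairwise sum
            i += 1
        else:
            j -= 1
    return False

print(two_sum_decision([3, -1, 8, -3, 4]))  # True (3 + (-3) = 0)
print(two_sum_decision([1, 2, 4]))          # False
```

Each branch depends only on the sign of a {−1,0,1}-sparse linear form in the inputs, which is exactly the kind of query the abstract describes.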

Mütze, Torsten 
STOC '18: "Sparse Kneser Graphs Are Hamiltonian ..."
Sparse Kneser Graphs Are Hamiltonian
Torsten Mütze, Jerri Nummenpalo, and Bartosz Walczak (TU Berlin, Germany; ETH Zurich, Switzerland; Jagiellonian University, Poland) For integers k≥1 and n≥2k+1, the Kneser graph K(n,k) is the graph whose vertices are the k-element subsets of {1,…,n} and whose edges connect pairs of subsets that are disjoint. The Kneser graphs of the form K(2k+1,k) are also known as the odd graphs. We settle an old problem due to Meredith, Lloyd, and Biggs from the 1970s, proving that for every k≥3, the odd graph K(2k+1,k) has a Hamilton cycle. This and a known conditional result due to Johnson imply that all Kneser graphs of the form K(2k+2^{a},k) with k≥3 and a≥0 have a Hamilton cycle. We also prove that K(2k+1,k) has at least 2^{2k−6} distinct Hamilton cycles for k≥6. Our proofs are based on a reduction of the Hamiltonicity problem in the odd graph to the problem of finding a spanning tree in a suitably defined hypergraph on Dyck words. 
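The definition of K(n,k) above translates directly into code; this small sketch (purely illustrative, unrelated to the proof technique) builds the graph from k-subsets and disjointness:

```python
from itertools import combinations

def kneser_graph(n, k):
    """Build the Kneser graph K(n, k): vertices are the k-element
    subsets of {1, ..., n}; edges join pairs of disjoint subsets."""
    verts = [frozenset(c) for c in combinations(range(1, n + 1), k)]
    edges = [(i, j)
             for i in range(len(verts))
             for j in range(i + 1, len(verts))
             if verts[i].isdisjoint(verts[j])]
    return verts, edges
```

Note that K(5,2) built this way is the Petersen graph (10 vertices, 15 edges, 3-regular), which is famously non-Hamiltonian; this is why the theorem starts at k≥3 rather than k=2.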

Mukhopadhyay, Sagnik 
STOC '18: "Simulation Beats Richness: ..."
Simulation Beats Richness: New Data-Structure Lower Bounds
Arkadev Chattopadhyay, Michal Koucký, Bruno Loff, and Sagnik Mukhopadhyay (TIFR, India; Charles University in Prague, Czechia; INESC TEC, Portugal; University of Porto, Portugal; KTH, Sweden) We develop a new technique for proving lower bounds in the setting of asymmetric communication, a model that was introduced in the famous works of Miltersen (STOC ’94) and Miltersen, Nisan, Safra, and Wigderson (STOC ’95). At the core of our technique is the first simulation theorem in the asymmetric setting, where Alice gets a p × n matrix x over F_{2} and Bob gets a vector y ∈ F_{2}^{n}. Alice and Bob need to evaluate f(x·y) for a Boolean function f: {0,1}^{p} → {0,1}. Our simulation theorems show that a deterministic/randomized communication protocol exists for this problem, with cost C·n for Alice and C for Bob, if and only if there exists a deterministic/randomized *parity decision tree* of cost Θ(C) for evaluating f. As applications of this technique, we obtain the following results: 1. The first strong lower bounds against randomized data-structure schemes for the Vector-Matrix-Vector product problem over F_{2}. Moreover, our method yields strong lower bounds even when the data-structure scheme has tiny advantage over random guessing. 2. The first lower bounds against randomized data-structure schemes for two natural Boolean variants of Orthogonal Vector Counting. 3. We construct an asymmetric communication problem and obtain a deterministic lower bound for it which is provably better than any lower bound that may be obtained by the classical Richness Method of Miltersen et al. (STOC ’95). This seems to be the first known limitation of the Richness Method in the context of proving deterministic lower bounds. 
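The composed problem at the heart of the simulation theorem, f(x·y) over F_{2}, is easy to state in code; this sketch just defines the function the two parties must compute (the choice of f = parity is a hypothetical example, not one singled out by the paper):

```python
def matvec_f2(x, y):
    """Multiply a p x n 0/1 matrix x by a length-n 0/1 vector y over F_2,
    i.e. each output bit is an inner product mod 2."""
    return tuple(sum(a & b for a, b in zip(row, y)) % 2 for row in x)

def evaluate(f, x, y):
    """The asymmetric communication problem from the abstract: Alice holds
    the matrix x, Bob holds the vector y, and together they must output
    f(x . y) for a Boolean f: {0,1}^p -> {0,1}."""
    return f(matvec_f2(x, y))

# Hypothetical instance of f: parity of the p output bits.
parity = lambda bits: sum(bits) % 2
```

For instance, with x = ((1,0),(1,1)) and y = (1,1), the product x·y over F_{2} is (1,0).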

Murray, Cody 
STOC '18: "Circuit Lower Bounds for Nondeterministic ..."
Circuit Lower Bounds for Nondeterministic Quasi-Polytime: An Easy Witness Lemma for NP and NQP
Cody Murray and Ryan Williams (Massachusetts Institute of Technology, USA) We prove that if every problem in NP has n^{k}-size circuits for a fixed constant k, then for every NP-verifier and every yes-instance x of length n for that verifier, the verifier’s search space has an n^{O(k^{3})}-size witness circuit: a witness for x that can be encoded with a circuit of only n^{O(k^{3})} size. An analogous statement is proved for nondeterministic quasi-polynomial time, i.e., NQP = NTIME[n^{log^{O(1)} n}]. This significantly extends the Easy Witness Lemma of Impagliazzo, Kabanets, and Wigderson [JCSS ’02], which only held for larger nondeterministic classes such as NEXP. As a consequence, the connections between circuit-analysis algorithms and circuit lower bounds can be considerably sharpened: algorithms for approximately counting satisfying assignments to given circuits which improve over exhaustive search can imply circuit lower bounds for functions in NQP or even NP. To illustrate, applying known algorithms for satisfiability of ACC ∘ THR circuits [R. Williams, STOC 2014], we conclude that for every fixed k, NQP does not have n^{log^{k} n}-size ACC ∘ THR circuits. 

Nakkiran, Preetum 
STOC '18: "General Strong Polarization ..."
General Strong Polarization
Jarosław Błasiok, Venkatesan Guruswami, Preetum Nakkiran, Atri Rudra, and Madhu Sudan (Harvard University, USA; Carnegie Mellon University, USA; SUNY Buffalo, USA) Arikan’s exciting discovery of polar codes has provided an altogether new way to efficiently achieve Shannon capacity. Given a (constant-sized) invertible matrix M, a family of polar codes can be associated with this matrix, and its ability to approach capacity follows from the polarization of an associated [0,1]-bounded martingale, namely its convergence in the limit to either 0 or 1 with probability 1. Arikan showed appropriate polarization of the martingale associated with the matrix G_{2} = ( 1 0 ; 1 1 ) to get capacity-achieving codes. His analysis was later extended to all matrices M which satisfy an obvious necessary condition for polarization. While Arikan’s theorem does not guarantee that the codes achieve capacity at small blocklengths (specifically in length which is a polynomial in 1/ε where ε is the difference between the capacity of a channel and the rate of the code), it turns out that a “strong” analysis of the polarization of the underlying martingale would lead to such constructions. Indeed for the martingale associated with G_{2} such a strong polarization was shown in two independent works ([Guruswami and Xia, IEEE IT ’15] and [Hassani et al., IEEE IT ’14]), thereby resolving a major theoretical challenge associated with the efficient attainment of Shannon capacity. In this work we extend the result above to cover martingales associated with all matrices that satisfy the necessary condition for (weak) polarization. In addition to being vastly more general, our proofs of strong polarization are (in our view) also much simpler and modular. Key to our proof is a notion of local polarization that only depends on the evolution of the martingale in a single time step. We show that local polarization always implies strong polarization. We then apply relatively simple reasoning about conditional entropies to prove local polarization in very general settings. Specifically, our result shows strong polarization over all prime fields and leads to efficient capacity-achieving source codes for compressing arbitrary i.i.d. sources, and capacity-achieving channel codes for arbitrary symmetric memoryless channels. 

Naor, Assaf 
STOC '18: "Data-Dependent Hashing via ..."
Data-Dependent Hashing via Nonlinear Spectral Gaps
Alexandr Andoni, Assaf Naor, Aleksandar Nikolov, Ilya Razenshteyn, and Erik Waingarten (Columbia University, USA; Princeton University, USA; University of Toronto, Canada; Microsoft Research, USA) We establish a generic reduction from _nonlinear spectral gaps_ of metric spaces to data-dependent Locality-Sensitive Hashing, yielding a new approach to the high-dimensional Approximate Near Neighbor Search problem (ANN) under various distance functions. Using this reduction, we obtain the following results: * For _general_ d-dimensional normed spaces and n-point datasets, we obtain a _cell-probe_ ANN data structure with approximation O(log d/ε^{2}), space d^{O(1)} n^{1+ε}, and d^{O(1)} n^{ε} cell probes per query, for any ε>0. No nontrivial approximation was known before in this generality other than the O(√d) bound which follows from embedding a general norm into ℓ_{2}. * For ℓ_{p} and Schatten-p norms, we improve the data structure further, to obtain approximation O(p) and sublinear query _time_. For ℓ_{p}, this improves upon the previous best approximation 2^{O(p)} (which required space polynomial, as opposed to near-linear, in n). For the Schatten-p norm, no nontrivial ANN data structure was known before this work. Previous approaches to the ANN problem either exploit the low dimensionality of a metric, requiring space exponential in the dimension, or circumvent the curse of dimensionality by embedding a metric into a “tractable” space, such as ℓ_{1}. Our new generic reduction proceeds differently from both of these approaches, using a novel partitioning method. 

Nayak, Ashwin 
STOC '18: "Capacity Approaching Coding ..."
Capacity Approaching Coding for Low Noise Interactive Quantum Communication
Debbie Leung, Ashwin Nayak, Ala Shayeghi, Dave Touchette, Penghui Yao, and Nengkun Yu (University of Waterloo, Canada; Perimeter Institute Waterloo, Canada; Nanjing University, China; University of Technology Sydney, Australia) We consider the problem of implementing two-party interactive quantum communication over noisy channels, a necessary endeavor if we wish to fully reap quantum advantages for communication. For an arbitrary protocol with n messages, designed for noiseless qudit channels (where d is arbitrary), our main result is a simulation method that fails with probability less than 2^{−Θ(nε)} and uses a qudit channel n(1 + Θ(√ε)) times, of which an ε fraction can be corrupted adversarially. The simulation is thus capacity-achieving to leading order, and we conjecture that it is optimal up to a constant factor in the √ε term. Furthermore, the simulation is in a model that does not require pre-shared resources such as randomness or entanglement between the communicating parties. Perhaps surprisingly, this outperforms the best known overhead of 1 + O(√ε log log 1/ε) in the corresponding classical model, which is also conjectured to be optimal [Haeupler, FOCS ’14]. Our work also improves over the best previously known quantum result, where the overhead is a non-explicit large constant [Brassard et al., FOCS ’14], for low ε. 

Nederlof, Jesper 
STOC '18: "More Consequences of Falsifying ..."
More Consequences of Falsifying SETH and the Orthogonal Vectors Conjecture
Amir Abboud, Karl Bringmann, Holger Dell, and Jesper Nederlof (IBM Research, USA; Max Planck Institute for Informatics, Germany; Saarland University, Germany; M2CI, Germany; Eindhoven University of Technology, Netherlands) The Strong Exponential Time Hypothesis and the OV-conjecture are two popular hardness assumptions used to prove a plethora of lower bounds, especially in the realm of polynomial-time algorithms. The OV-conjecture in moderate dimension states that there is no ε>0 for which an O(N^{2−ε}) poly(D) time algorithm can decide whether a given set of N D-dimensional binary vectors contains a pair of orthogonal vectors. We strengthen the evidence for these hardness assumptions. In particular, we show that if the OV-conjecture fails, then two problems for which we are far from obtaining even tiny improvements over exhaustive search would have surprisingly fast algorithms. If the OV-conjecture is false, then there is a fixed ε>0 such that: – For all d and all large enough k, there is a randomized algorithm that takes O(n^{(1−ε)k}) time to solve the Zero-Weight-k-Clique and Min-Weight-k-Clique problems on d-hypergraphs with n vertices. As a consequence, the OV-conjecture is implied by the Weighted Clique conjecture. – For all c, the satisfiability of sparse TC^{1} circuits on n inputs (that is, circuits with cn wires, depth c log n, and negation, AND, OR, and threshold gates) can be computed in time O((2−ε)^{n}). 
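For concreteness, the exhaustive-search baseline that the OV-conjecture concerns can be sketched in a few lines (an illustration of the problem statement, not an algorithm from the paper):

```python
from itertools import combinations

def has_orthogonal_pair(vectors):
    """Decide Orthogonal Vectors by brute force in O(N^2 * D) time:
    two 0/1 vectors are orthogonal iff no coordinate is 1 in both.
    The OV-conjecture asserts this cannot be improved to
    O(N^{2-eps}) poly(D) for any eps > 0."""
    return any(all(a & b == 0 for a, b in zip(u, v))
               for u, v in combinations(vectors, 2))
```

For example, (1,0,1) and (0,1,0) are orthogonal, while (1,1) and (1,0) share a 1 in the first coordinate.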

Neiman, Ofer 
STOC '18: "Metric Embedding via Shortest ..."
Metric Embedding via Shortest Path Decompositions
Ittai Abraham, Arnold Filtser, Anupam Gupta, and Ofer Neiman (VMware, USA; Ben-Gurion University of the Negev, Israel; Carnegie Mellon University, USA) We study the problem of embedding weighted graphs of pathwidth k into ℓ_{p} spaces. Our main result is an O(k^{min{1/p, 1/2}})-distortion embedding. For p=1, this is a super-exponential improvement over the best previous bound of Lee and Sidiropoulos. Our distortion bound is asymptotically tight for any fixed p>1. Our result is obtained via a novel embedding technique that is based on low-depth decompositions of a graph via shortest paths. The core new idea is that given a geodesic shortest path P, we can probabilistically embed all points into 2 dimensions with respect to P. For p>2 our embedding also implies improved distortion on bounded treewidth graphs (O((k log n)^{1/p})). For asymptotically large p, our results also imply improved distortion on graphs excluding a minor. 

Neuen, Daniel 
STOC '18: "An Exponential Lower Bound ..."
An Exponential Lower Bound for Individualization-Refinement Algorithms for Graph Isomorphism
Daniel Neuen and Pascal Schweitzer (RWTH Aachen University, Germany; TU Kaiserslautern, Germany) The individualization-refinement paradigm provides a strong toolbox for testing isomorphism of two graphs, and indeed, the currently fastest implementations of isomorphism solvers all follow this approach. While these solvers are fast in practice, from a theoretical point of view, no general lower bounds concerning the worst-case complexity of these tools are known. In fact, it is an open question what the running time of individualization-refinement algorithms is. For all we know some of the algorithms could have polynomial running time. In this work we give a negative answer to this question and construct a family of graphs on which algorithms based on the individualization-refinement paradigm require exponential time. Contrary to a previous construction of Miyazaki, which only applies to a specific implementation within the individualization-refinement framework, our construction is immune to changing the cell selector, the refinement operator, the invariant that is used, or adding various heuristic invariants to the algorithm. In fact, our graphs also provide exponential lower bounds in the case when the k-dimensional Weisfeiler-Leman algorithm is used to replace the 1-dimensional Weisfeiler-Leman algorithm (often called color refinement) that is normally used. Finally, the arguments even work when the entire automorphism group of the inputs is initially provided to the algorithm. The arguments apply to isomorphism testing algorithms as well as canonization algorithms within the framework. 
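The refinement operator at the center of the paradigm, color refinement (1-dimensional Weisfeiler-Leman), can be sketched generically as follows; this is a textbook version for illustration, not the optimized implementations inside actual solvers:

```python
def color_refinement(adj):
    """1-dimensional Weisfeiler-Leman ('color refinement'): repeatedly
    refine vertex colors by the multiset of neighbor colors until the
    color partition stabilizes. adj maps each vertex to its neighbor list."""
    colors = {v: 0 for v in adj}
    while True:
        # Signature of v: its current color plus the sorted multiset of
        # neighbor colors; vertices with equal signatures stay together.
        sig = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
               for v in adj}
        palette = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        new = {v: palette[sig[v]] for v in adj}
        # Refinement only ever splits classes, so an unchanged class
        # count means the partition is stable.
        if len(set(new.values())) == len(set(colors.values())):
            return new
        colors = new
```

On a path with three vertices the two endpoints end up in one color class and the middle vertex in another, while on any regular graph all vertices keep a single color, which is exactly why individualization steps are needed on top of refinement.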

Nikolov, Aleksandar 
STOC '18: "Data-Dependent Hashing via ..."
Data-Dependent Hashing via Nonlinear Spectral Gaps
Alexandr Andoni, Assaf Naor, Aleksandar Nikolov, Ilya Razenshteyn, and Erik Waingarten (Columbia University, USA; Princeton University, USA; University of Toronto, Canada; Microsoft Research, USA) We establish a generic reduction from _nonlinear spectral gaps_ of metric spaces to data-dependent Locality-Sensitive Hashing, yielding a new approach to the high-dimensional Approximate Near Neighbor Search problem (ANN) under various distance functions. Using this reduction, we obtain the following results: * For _general_ d-dimensional normed spaces and n-point datasets, we obtain a _cell-probe_ ANN data structure with approximation O(log d/ε^{2}), space d^{O(1)} n^{1+ε}, and d^{O(1)} n^{ε} cell probes per query, for any ε>0. No nontrivial approximation was known before in this generality other than the O(√d) bound which follows from embedding a general norm into ℓ_{2}. * For ℓ_{p} and Schatten-p norms, we improve the data structure further, to obtain approximation O(p) and sublinear query _time_. For ℓ_{p}, this improves upon the previous best approximation 2^{O(p)} (which required space polynomial, as opposed to near-linear, in n). For the Schatten-p norm, no nontrivial ANN data structure was known before this work. Previous approaches to the ANN problem either exploit the low dimensionality of a metric, requiring space exponential in the dimension, or circumvent the curse of dimensionality by embedding a metric into a “tractable” space, such as ℓ_{1}. Our new generic reduction proceeds differently from both of these approaches, using a novel partitioning method. 