STOC 2019 – Author Index 
Aaronson, Scott 
STOC '19: "Gentle Measurement of Quantum ..."
Gentle Measurement of Quantum States and Differential Privacy
Scott Aaronson and Guy N. Rothblum (University of Texas at Austin, USA; Weizmann Institute of Science, Israel) In differential privacy (DP), we want to query a database about n users, in a way that “leaks at most ε about any individual user,” even conditioned on any outcome of the query. Meanwhile, in gentle measurement, we want to measure n quantum states, in a way that “damages the states by at most α,” even conditioned on any outcome of the measurement. In both cases, we can achieve the goal by techniques like deliberately adding noise to the outcome before returning it. This paper proves a new and general connection between the two subjects. Specifically, we show that on products of n quantum states, any measurement that is α-gentle for small α is also O(α)-DP, and any product measurement that is ε-DP is also O(ε√n)-gentle. Illustrating the power of this connection, we apply it to the recently studied problem of shadow tomography. Given an unknown d-dimensional quantum state ρ, as well as known two-outcome measurements E_{1},…,E_{m}, shadow tomography asks us to estimate Pr[E_{i} accepts ρ], for every i∈[m], by measuring few copies of ρ. Using our connection theorem, together with a quantum analog of the so-called private multiplicative weights algorithm of Hardt and Rothblum, we give a protocol to solve this problem using order (log m)^{2}(log d)^{2} copies of ρ, compared to Aaronson’s previous bound of O((log m)^{4}(log d)). Our protocol has the advantages of being online (that is, the E_{i}’s are processed one at a time), gentle, and conceptually simple. Other applications of our connection include new lower bounds for shadow tomography from lower bounds on DP, and a result on the safe use of estimation algorithms as subroutines inside larger quantum algorithms. @InProceedings{STOC19p322, author = {Scott Aaronson and Guy N. Rothblum}, title = {Gentle Measurement of Quantum States and Differential Privacy}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {322--333}, doi = {10.1145/3313276.3316378}, year = {2019}, }
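As an illustrative aside (not from the paper): the private multiplicative weights algorithm of Hardt and Rothblum referenced above builds on the standard multiplicative-weights update. The sketch below shows only the plain classical update step, with hypothetical names; it is neither the private nor the quantum variant.

```python
def mw_update(weights, losses, eta=0.5):
    """One classical multiplicative-weights step: down-weight each expert i
    by a factor (1 - eta) ** loss_i, then renormalize to a distribution."""
    new = [w * (1 - eta) ** l for w, l in zip(weights, losses)]
    total = sum(new)
    return [w / total for w in new]
```

Starting from the uniform distribution, an expert that incurs loss is exponentially down-weighted relative to one that does not.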

Abboud, Amir 
STOC '19: "Dynamic Set Cover: Improved ..."
Dynamic Set Cover: Improved Algorithms and Lower Bounds
Amir Abboud, Raghavendra Addanki, Fabrizio Grandoni, Debmalya Panigrahi, and Barna Saha (IBM Research, USA; University of Massachusetts at Amherst, USA; IDSIA, Switzerland; Duke University, USA) We give new upper and lower bounds for the dynamic set cover problem. First, we give a (1+є)f-approximation for fully dynamic set cover in O(f^{2} log n/є^{5}) (amortized) update time, for any є > 0, where f is the maximum number of sets that an element belongs to. In the decremental setting, the update time can be improved to O(f^{2}/є^{5}), while still obtaining a (1+є)f-approximation. These are the first algorithms that obtain an approximation factor linear in f for dynamic set cover, thereby almost matching the best bounds known in the offline setting and improving upon the previous best approximation of O(f^{2}) in the dynamic setting. To complement our upper bounds, we also show that a linear dependence of the update time on f is necessary unless we can tolerate much worse approximation factors. Using the recent distributed PCP framework, we show that any dynamic set cover algorithm that has an amortized update time of O(f^{1−є}) must have an approximation factor that is Ω(n^{δ}) for some constant δ>0 under the Strong Exponential Time Hypothesis. @InProceedings{STOC19p114, author = {Amir Abboud and Raghavendra Addanki and Fabrizio Grandoni and Debmalya Panigrahi and Barna Saha}, title = {Dynamic Set Cover: Improved Algorithms and Lower Bounds}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {114--125}, doi = {10.1145/3313276.3316376}, year = {2019}, }
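As an illustrative aside (not from the paper): the classic static f-approximation for set cover, which the dynamic (1+є)f bounds above nearly match, simply picks every set containing some uncovered element. A minimal sketch, assuming every element appears in at least one set:

```python
def f_approx_set_cover(universe, sets):
    """Classic static f-approximation: repeatedly take an uncovered element
    and add ALL sets containing it. The cover has size at most f * OPT,
    where f is the maximum number of sets any element belongs to."""
    uncovered = set(universe)
    cover = []
    while uncovered:
        e = next(iter(uncovered))  # any uncovered element
        for i, s in enumerate(sets):
            if e in s and i not in cover:
                cover.append(i)
                uncovered -= s
    return cover
```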

Addanki, Raghavendra 
STOC '19: "Dynamic Set Cover: Improved ..."
Dynamic Set Cover: Improved Algorithms and Lower Bounds
Amir Abboud, Raghavendra Addanki, Fabrizio Grandoni, Debmalya Panigrahi, and Barna Saha (IBM Research, USA; University of Massachusetts at Amherst, USA; IDSIA, Switzerland; Duke University, USA) We give new upper and lower bounds for the dynamic set cover problem. First, we give a (1+є)f-approximation for fully dynamic set cover in O(f^{2} log n/є^{5}) (amortized) update time, for any є > 0, where f is the maximum number of sets that an element belongs to. In the decremental setting, the update time can be improved to O(f^{2}/є^{5}), while still obtaining a (1+є)f-approximation. These are the first algorithms that obtain an approximation factor linear in f for dynamic set cover, thereby almost matching the best bounds known in the offline setting and improving upon the previous best approximation of O(f^{2}) in the dynamic setting. To complement our upper bounds, we also show that a linear dependence of the update time on f is necessary unless we can tolerate much worse approximation factors. Using the recent distributed PCP framework, we show that any dynamic set cover algorithm that has an amortized update time of O(f^{1−є}) must have an approximation factor that is Ω(n^{δ}) for some constant δ>0 under the Strong Exponential Time Hypothesis. @InProceedings{STOC19p114, author = {Amir Abboud and Raghavendra Addanki and Fabrizio Grandoni and Debmalya Panigrahi and Barna Saha}, title = {Dynamic Set Cover: Improved Algorithms and Lower Bounds}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {114--125}, doi = {10.1145/3313276.3316376}, year = {2019}, }

Alistarh, Dan 
STOC '19: "Why Extension-Based Proofs ..."
Why Extension-Based Proofs Fail
Dan Alistarh, James Aspnes, Faith Ellen, Rati Gelashvili, and Leqi Zhu (IST Austria, Austria; Yale University, USA; University of Toronto, Canada) It is impossible to deterministically solve wait-free consensus in an asynchronous system. The classic proof uses a valency argument, which constructs an infinite execution by repeatedly extending a finite execution. We introduce extension-based proofs, a class of impossibility proofs that are modelled as an interaction between a prover and a protocol and that include valency arguments. Using proofs based on combinatorial topology, it has been shown that it is impossible to deterministically solve k-set agreement among n > k ≥ 2 processes in a wait-free manner. However, it was unknown whether proofs based on simpler techniques were possible. We show that this impossibility result cannot be obtained by an extension-based proof and, hence, extension-based proofs are limited in power. @InProceedings{STOC19p986, author = {Dan Alistarh and James Aspnes and Faith Ellen and Rati Gelashvili and Leqi Zhu}, title = {Why Extension-Based Proofs Fail}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {986--996}, doi = {10.1145/3313276.3316407}, year = {2019}, }

Alon, Noga 
STOC '19: "Private PAC Learning Implies ..."
Private PAC Learning Implies Finite Littlestone Dimension
Noga Alon, Roi Livni, Maryanthe Malliaris, and Shay Moran (Princeton University, USA; Tel Aviv University, Israel; University of Chicago, USA) We show that every approximately differentially private learning algorithm (possibly improper) for a class H with Littlestone dimension d requires Ω(log^{*}(d)) examples. As a corollary it follows that the class of thresholds over ℕ cannot be learned in a private manner; this resolves open questions due to [Bun et al., 2015] and [Feldman and Xiao, 2015]. We leave as an open question whether every class with a finite Littlestone dimension can be learned by an approximately differentially private algorithm. @InProceedings{STOC19p852, author = {Noga Alon and Roi Livni and Maryanthe Malliaris and Shay Moran}, title = {Private PAC Learning Implies Finite Littlestone Dimension}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {852--860}, doi = {10.1145/3313276.3316312}, year = {2019}, }

Alrabiah, Omar 
STOC '19: "An Exponential Lower Bound ..."
An Exponential Lower Bound on the Sub-Packetization of MSR Codes
Omar Alrabiah and Venkatesan Guruswami (Carnegie Mellon University, USA) An (n,k,ℓ)-vector MDS code is an F-linear subspace of (F^{ℓ})^{n} (for some field F) of dimension kℓ, such that any k (vector) symbols of the codeword suffice to determine the remaining r=n−k (vector) symbols. The length ℓ of each codeword symbol is called the sub-packetization of the code. Such a code is called minimum storage regenerating (MSR) if any single symbol of a codeword can be recovered by downloading ℓ/r field elements (which is known to be the least possible) from each of the other symbols. MSR codes are attractive for use in distributed storage systems, and by now a variety of ingenious constructions of MSR codes are available. However, they all suffer from exponentially large sub-packetization ℓ ≳ r^{k/r}. Our main result is an almost tight lower bound showing that for an MSR code, one must have ℓ ≥ exp(Ω(k/r)). Previously, a lower bound of ≈ exp(√(k/r)), and a tight lower bound for a restricted class of “optimal access” MSR codes, were known. Our work settles a central open question concerning MSR codes that has received much attention. Furthermore, our proof is really short, hinging on one key definition that is somewhat inspired by Galois theory. @InProceedings{STOC19p979, author = {Omar Alrabiah and Venkatesan Guruswami}, title = {An Exponential Lower Bound on the Sub-Packetization of MSR Codes}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {979--985}, doi = {10.1145/3313276.3316387}, year = {2019}, }

Anari, Nima 
STOC '19: "Log-Concave Polynomials II: ..."
Log-Concave Polynomials II: High-Dimensional Walks and an FPRAS for Counting Bases of a Matroid
Nima Anari, Kuikui Liu, Shayan Oveis Gharan, and Cynthia Vinzant (Stanford University, USA; University of Washington, USA; North Carolina State University, USA) We design an FPRAS to count the number of bases of any matroid given by an independent set oracle, and to estimate the partition function of the random cluster model of any matroid in the regime where 0<q<1. Consequently, we can sample random spanning forests in a graph and estimate the reliability polynomial of any matroid. We also prove the thirty-year-old conjecture of Mihail and Vazirani that the bases exchange graph of any matroid has edge expansion at least 1. Our algorithm and proof build on the recent results of Dinur, Kaufman, Mass and Oppenheim, who show that a high-dimensional walk on a weighted simplicial complex mixes rapidly if, for every link of the complex, the corresponding localized random walk on the 1-skeleton is a strong spectral expander. One of our key observations is that a weighted simplicial complex X is a 0-local spectral expander if and only if a naturally associated generating polynomial p_{X} is strongly log-concave. More generally, to every pure simplicial complex X with positive weights on its maximal faces, we can associate a multi-affine homogeneous polynomial p_{X} such that the eigenvalues of the localized random walks on X correspond to the eigenvalues of the Hessian of derivatives of p_{X}. @InProceedings{STOC19p1, author = {Nima Anari and Kuikui Liu and Shayan Oveis Gharan and Cynthia Vinzant}, title = {Log-Concave Polynomials II: High-Dimensional Walks and an FPRAS for Counting Bases of a Matroid}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1--12}, doi = {10.1145/3313276.3316385}, year = {2019}, }
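As an illustrative aside (not from the paper): for the graphic matroid, whose bases are the spanning trees of a graph, the bases-exchange walk behind the Mihail–Vazirani conjecture can be simulated directly. The sketch below is my own minimal down-up step (drop a random tree edge, add a random edge that reconnects the tree), not the paper's FPRAS.

```python
import random

def bases_exchange_step(tree, edges, rng):
    """One down-up step on spanning trees (bases of the graphic matroid):
    drop a uniformly random tree edge, then add a uniformly random edge
    crossing the resulting cut (possibly the same edge; a lazy step)."""
    tree = set(tree)
    e = rng.choice(sorted(tree))
    tree.remove(e)
    # flood-fill the component containing one endpoint of the dropped edge
    adj = {}
    for (u, v) in tree:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    comp, frontier = {e[0]}, [e[0]]
    while frontier:
        u = frontier.pop()
        for w in adj.get(u, ()):
            if w not in comp:
                comp.add(w)
                frontier.append(w)
    # any edge crossing the cut restores a spanning tree
    crossing = [f for f in edges if (f[0] in comp) != (f[1] in comp)]
    tree.add(rng.choice(crossing))
    return frozenset(tree)
```

Every state reached by this walk is again a spanning tree, since removing a tree edge splits the vertices into two components and any crossing edge reconnects them.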

Arora, Atul Singh 
STOC '19: "Quantum Weak Coin Flipping ..."
Quantum Weak Coin Flipping
Atul Singh Arora, Jérémie Roland, and Stephan Weis (Université libre de Bruxelles, Belgium) We investigate weak coin flipping, a fundamental cryptographic primitive where two distrustful parties need to remotely establish a shared random bit. A cheating player can try to bias the output bit towards a preferred value. For weak coin flipping the players have known opposite preferred values. A weak coin-flipping protocol has a bias є if neither player can force the outcome towards their preferred value with probability more than 1/2+є. While it is known that all classical protocols have є=1/2, Mochon showed in 2007 that weak coin flipping can be achieved quantumly with arbitrarily small bias (near perfect), but the previously best known explicit protocol has bias 1/6 (also due to Mochon, 2005). We propose a framework to construct new explicit protocols achieving biases below 1/6. In particular, we construct explicit unitaries for protocols with bias down to 1/10. To go lower, we introduce what we call the Elliptic Monotone Align (EMA) algorithm which, together with the framework, allows us to construct protocols with arbitrarily small biases. @InProceedings{STOC19p205, author = {Atul Singh Arora and Jérémie Roland and Stephan Weis}, title = {Quantum Weak Coin Flipping}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {205--216}, doi = {10.1145/3313276.3316306}, year = {2019}, }

Aspnes, James 
STOC '19: "Why Extension-Based Proofs ..."
Why Extension-Based Proofs Fail
Dan Alistarh, James Aspnes, Faith Ellen, Rati Gelashvili, and Leqi Zhu (IST Austria, Austria; Yale University, USA; University of Toronto, Canada) It is impossible to deterministically solve wait-free consensus in an asynchronous system. The classic proof uses a valency argument, which constructs an infinite execution by repeatedly extending a finite execution. We introduce extension-based proofs, a class of impossibility proofs that are modelled as an interaction between a prover and a protocol and that include valency arguments. Using proofs based on combinatorial topology, it has been shown that it is impossible to deterministically solve k-set agreement among n > k ≥ 2 processes in a wait-free manner. However, it was unknown whether proofs based on simpler techniques were possible. We show that this impossibility result cannot be obtained by an extension-based proof and, hence, extension-based proofs are limited in power. @InProceedings{STOC19p986, author = {Dan Alistarh and James Aspnes and Faith Ellen and Rati Gelashvili and Leqi Zhu}, title = {Why Extension-Based Proofs Fail}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {986--996}, doi = {10.1145/3313276.3316407}, year = {2019}, }

Assadi, Sepehr 
STOC '19: "Polynomial Pass Lower Bounds ..."
Polynomial Pass Lower Bounds for Graph Streaming Algorithms
Sepehr Assadi, Yu Chen, and Sanjeev Khanna (Princeton University, USA; University of Pennsylvania, USA) We present new lower bounds that show that a polynomial number of passes are necessary for solving some fundamental graph problems in the streaming model of computation. For instance, we show that any streaming algorithm that finds a weighted minimum s-t cut in an n-vertex undirected graph requires n^{2−o(1)} space unless it makes n^{Ω(1)} passes over the stream. To prove our lower bounds, we introduce and analyze a new four-player communication problem that we refer to as the hidden-pointer chasing problem. This is a problem in the spirit of the standard pointer chasing problem, with the key difference that the pointers in this problem are hidden from the players, and finding each one of them requires solving another communication problem, namely the set intersection problem. Our lower bounds for graph problems are then obtained by reductions from the hidden-pointer chasing problem. Our hidden-pointer chasing problem appears flexible enough to find other applications and is therefore interesting in its own right. To showcase this, we further present an interesting application of this problem beyond streaming algorithms. Using a reduction from hidden-pointer chasing, we prove that any algorithm for submodular function minimization needs to make n^{2−o(1)} value queries to the function unless it has a polynomial degree of adaptivity. @InProceedings{STOC19p265, author = {Sepehr Assadi and Yu Chen and Sanjeev Khanna}, title = {Polynomial Pass Lower Bounds for Graph Streaming Algorithms}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {265--276}, doi = {10.1145/3313276.3316361}, year = {2019}, }
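As an illustrative aside (not from the paper): in the standard pointer chasing problem underlying the hidden-pointer variant above, Alice and Bob each hold a pointer function and must follow k alternating pointers. A minimal centralized sketch of the chased quantity:

```python
def chase(fa, fb, k, start=0):
    """k rounds of standard pointer chasing: starting from `start`,
    alternately follow Alice's pointer function fa and Bob's fb.
    In the hidden-pointer variant, each pointer is additionally hidden
    behind a set intersection instance."""
    v = start
    for i in range(k):
        v = fa[v] if i % 2 == 0 else fb[v]
    return v
```

The communication difficulty comes from neither party seeing the other's function, so each of the k hops naively costs a round of interaction.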

Avron, Haim 
STOC '19: "A Universal Sampling Method ..."
A Universal Sampling Method for Reconstructing Signals with Simple Fourier Transforms
Haim Avron, Michael Kapralov, Cameron Musco, Christopher Musco, Ameya Velingker, and Amir Zandieh (Tel Aviv University, Israel; EPFL, Switzerland; Microsoft Research, USA; Princeton University, USA; Google Research, USA) Reconstructing continuous signals based on a small number of discrete samples is a fundamental problem across science and engineering. We are often interested in signals with “simple” Fourier structure, e.g., those involving frequencies within a bounded range, a small number of frequencies, or a few blocks of frequencies, i.e., bandlimited, sparse, and multiband signals, respectively. More broadly, any prior knowledge on a signal's Fourier power spectrum can constrain its complexity. Intuitively, signals with more highly constrained Fourier structure require fewer samples to reconstruct. We formalize this intuition by showing that, roughly, a continuous signal from a given class can be approximately reconstructed using a number of samples proportional to the statistical dimension of the allowed power spectrum of that class. We prove that, in nearly all settings, this natural measure tightly characterizes the sample complexity of signal reconstruction. Surprisingly, we also show that, up to log factors, a universal non-uniform sampling strategy can achieve this optimal complexity for any class of signals. We present an efficient and general algorithm for recovering a signal from the samples taken. For bandlimited and sparse signals, our method matches the state of the art, while providing the first computationally and sample-efficient solution to a broader range of problems, including multiband signal reconstruction and Gaussian process regression tasks in one dimension. Our work is based on a novel connection between randomized linear algebra and the problem of reconstructing signals with constrained Fourier structure. We extend tools based on statistical leverage score sampling and column-based matrix reconstruction to the approximation of continuous linear operators that arise in the signal reconstruction problem. We believe these extensions are of independent interest and serve as a foundation for tackling a broad range of continuous-time problems using randomized methods. @InProceedings{STOC19p1051, author = {Haim Avron and Michael Kapralov and Cameron Musco and Christopher Musco and Ameya Velingker and Amir Zandieh}, title = {A Universal Sampling Method for Reconstructing Signals with Simple Fourier Transforms}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1051--1063}, doi = {10.1145/3313276.3316363}, year = {2019}, }

Babai, László 
STOC '19: "Canonical Form for Graphs ..."
Canonical Form for Graphs in Quasipolynomial Time: Preliminary Report
László Babai (University of Chicago, USA) We outline how to turn the author's quasipolynomial-time graph isomorphism test into a construction of a canonical form within the same time bound. The proof involves a nontrivial modification of the central symmetry-breaking tool, the construction of a canonical relational structure of logarithmic arity on the ideal domain, based on local certificates. @InProceedings{STOC19p1237, author = {László Babai}, title = {Canonical Form for Graphs in Quasipolynomial Time: Preliminary Report}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1237--1246}, doi = {10.1145/3313276.3316356}, year = {2019}, }

Babichenko, Yakov 
STOC '19: "The Communication Complexity ..."
The Communication Complexity of Local Search
Yakov Babichenko, Shahar Dobzinski, and Noam Nisan (Technion, Israel; Weizmann Institute of Science, Israel; Hebrew University of Jerusalem, Israel) We study a communication variant of local search. There is some fixed, commonly known graph G. Alice holds f_{A} and Bob holds f_{B}; both are functions that specify a value for each vertex. The goal is to find a local maximum of f_{A}+f_{B} with respect to G, i.e., a vertex v for which (f_{A}+f_{B})(v) ≥ (f_{A}+f_{B})(u) for each neighbor u of v. Our main result is that finding a local maximum requires polynomial (in the number of vertices) bits of communication. The result holds for the following families of graphs: three-dimensional grids, hypercubes, odd graphs, and degree-4 graphs. Moreover, we prove an optimal communication bound of Ω(√N) for the hypercube and for the constant-dimensional grid, where N is the number of vertices in the graph. We provide applications of our main result in two domains, exact potential games and combinatorial auctions. Each one of the results demonstrates an exponential separation between the nondeterministic communication complexity and the randomized communication complexity of a total search problem. First, we show that finding a pure Nash equilibrium in 2-player N-action exact potential games requires poly(N) communication. We also show that finding a pure Nash equilibrium in n-player 2-action exact potential games requires exp(n) communication. The second domain that we consider is combinatorial auctions, in which we prove that finding a local maximum in combinatorial auctions requires exponential (in the number of items) communication even when the valuations are submodular. @InProceedings{STOC19p650, author = {Yakov Babichenko and Shahar Dobzinski and Noam Nisan}, title = {The Communication Complexity of Local Search}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {650--661}, doi = {10.1145/3313276.3316354}, year = {2019}, }
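As an illustrative aside (not from the paper): with all values in one place, a local maximum of f_A + f_B is easy to find by hill climbing; the paper's point is that when Alice and Bob hold f_A and f_B separately, simulating such a search costs polynomial communication. A minimal centralized sketch:

```python
def local_maximum(graph, f):
    """Hill-climb to a local maximum of f on `graph`: a vertex v with
    f[v] >= f[u] for every neighbor u of v. `graph` maps each vertex
    to its neighbor list; `f` maps each vertex to its value."""
    v = next(iter(graph))
    while True:
        better = [u for u in graph[v] if f[u] > f[v]]
        if not better:
            return v
        v = max(better, key=lambda u: f[u])
```

Here one would take f = {v: fA[v] + fB[v] for v in graph} to model the combined objective.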

Bădescu, Costin 
STOC '19: "Quantum State Certification ..."
Quantum State Certification
Costin Bădescu, Ryan O'Donnell, and John Wright (Carnegie Mellon University, USA; Massachusetts Institute of Technology, USA) We consider the problem of quantum state certification, where one is given n copies of an unknown d-dimensional quantum mixed state ρ, and one wants to test whether ρ is equal to some known mixed state σ or else is є-far from σ. The goal is to use notably fewer copies than the Ω(d^{2}) needed for full tomography on ρ (i.e., density estimation). We give two robust state certification algorithms: one with respect to fidelity using n = O(d/є) copies, and one with respect to trace distance using n = O(d/є^{2}) copies. The latter algorithm also applies when σ is unknown as well. These copy complexities are optimal up to constant factors. @InProceedings{STOC19p503, author = {Costin Bădescu and Ryan O'Donnell and John Wright}, title = {Quantum State Certification}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {503--514}, doi = {10.1145/3313276.3316344}, year = {2019}, }

Balkanski, Eric 
STOC '19: "An Optimal Approximation for ..."
An Optimal Approximation for Submodular Maximization under a Matroid Constraint in the Adaptive Complexity Model
Eric Balkanski, Aviad Rubinstein, and Yaron Singer (Harvard University, USA; Stanford University, USA) In this paper we study submodular maximization under a matroid constraint in the adaptive complexity model. This model was recently introduced in the context of submodular optimization to quantify the information-theoretic complexity of black-box optimization in a parallel computation model. Informally, the adaptivity of an algorithm is the number of sequential rounds it makes when each round can execute polynomially many function evaluations in parallel. Since submodular optimization is regularly applied on large datasets, we seek algorithms with low adaptivity to enable speedups via parallelization. Consequently, a recent line of work has been devoted to designing constant-factor approximation algorithms for maximizing submodular functions under various constraints in the adaptive complexity model. Despite the burst of work on submodular maximization in the adaptive complexity model, the fundamental problem of maximizing a monotone submodular function under a matroid constraint has remained elusive. In particular, all known techniques fail for this problem and there are no known constant-factor approximation algorithms whose adaptivity is sublinear in the rank of the matroid k or, in the worst case, sublinear in the size of the ground set n. In this paper we present an approximation algorithm for the problem of maximizing a monotone submodular function under a matroid constraint in the adaptive complexity model. The approximation guarantee of the algorithm is arbitrarily close to the optimal 1−1/e and it has near-optimal adaptivity of O(log(n) log(k)). This result is obtained using a novel technique of adaptive sequencing which departs from previous techniques for submodular maximization in the adaptive complexity model. In addition to our main result we show how to use this technique to design other approximation algorithms with strong approximation guarantees and polylogarithmic adaptivity. @InProceedings{STOC19p66, author = {Eric Balkanski and Aviad Rubinstein and Yaron Singer}, title = {An Optimal Approximation for Submodular Maximization under a Matroid Constraint in the Adaptive Complexity Model}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {66--77}, doi = {10.1145/3313276.3316304}, year = {2019}, }
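As an illustrative aside (not from the paper): the baseline that adaptive-complexity work parallelizes is the classic sequential greedy, shown below for the special case of a cardinality (uniform matroid) constraint, with a toy coverage function of my own; the paper's adaptive-sequencing algorithm is quite different and handles general matroids.

```python
def greedy_submodular(ground, f, k):
    """Classic sequential greedy for max f(S) s.t. |S| <= k: repeatedly
    add the element of largest marginal gain. For monotone submodular f
    this is a (1 - 1/e)-approximation, but its k rounds are inherently
    sequential (adaptive) -- the bottleneck low-adaptivity work removes."""
    S = set()
    for _ in range(k):
        best = max((x for x in ground if x not in S),
                   key=lambda x: f(S | {x}) - f(S))
        S.add(best)
    return S
```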

Bansal, Nikhil 
STOC '19: "On a Generalization of Iterated ..."
On a Generalization of Iterated and Randomized Rounding
Nikhil Bansal (CWI, Netherlands; Eindhoven University of Technology, Netherlands) We give a general method for rounding linear programs that combines the commonly used iterated rounding and randomized rounding techniques. In particular, we show that whenever iterated rounding can be applied to a problem with some slack, there is a randomized procedure that returns an integral solution that satisfies the guarantees of iterated rounding and also has concentration properties. We use this to give new results for several classic problems where iterated rounding has been useful. @InProceedings{STOC19p1125, author = {Nikhil Bansal}, title = {On a Generalization of Iterated and Randomized Rounding}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1125--1135}, doi = {10.1145/3313276.3316313}, year = {2019}, }
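As an illustrative aside (not from the paper): one of the two ingredients being combined, plain randomized rounding, is simply independent coordinate-wise rounding of a fractional LP solution. A minimal sketch of that baseline step only:

```python
import random

def randomized_round(x, seed=0):
    """Round a fractional LP solution coordinate-wise: set y_i = 1 with
    probability x_i, independently. Each linear constraint is preserved
    in expectation, and sums of many coordinates concentrate
    (Chernoff-type bounds) -- the property iterated rounding alone lacks."""
    rng = random.Random(seed)
    return [1 if rng.random() < xi else 0 for xi in x]
```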

Becchetti, Luca 
STOC '19: "Oblivious Dimension Reduction ..."
Oblivious Dimension Reduction for k-Means: Beyond Subspaces and the Johnson-Lindenstrauss Lemma
Luca Becchetti, Marc Bury, Vincent Cohen-Addad, Fabrizio Grandoni, and Chris Schwiegelshohn (Sapienza University of Rome, Italy; Zalando, Switzerland; CNRS, France; IDSIA, Switzerland) We show that for n points in d-dimensional Euclidean space, a data-oblivious random projection of the columns onto m ∈ O((log k + log log n) ε^{−6} log(1/ε)) dimensions is sufficient to approximate the cost of all k-means clusterings up to a multiplicative (1±ε) factor. The previous best upper bounds on m are O(log n · ε^{−2}), given by a direct application of the Johnson-Lindenstrauss Lemma, and O(k ε^{−2}), given by [Cohen et al., STOC’15]. @InProceedings{STOC19p1039, author = {Luca Becchetti and Marc Bury and Vincent Cohen-Addad and Fabrizio Grandoni and Chris Schwiegelshohn}, title = {Oblivious Dimension Reduction for <i>k</i>-Means: Beyond Subspaces and the Johnson-Lindenstrauss Lemma}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1039--1050}, doi = {10.1145/3313276.3316318}, year = {2019}, }
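As an illustrative aside (not from the paper): a data-oblivious projection of the Johnson-Lindenstrauss kind, together with the k-means cost of a fixed partition, can be sketched in a few lines. This sketch only shows the objects involved; it does not verify the (1±ε) guarantee or use the paper's smaller target dimension.

```python
import math
import random

def random_projection(points, m, seed=0):
    """Data-oblivious sketch: project d-dimensional points onto m random
    Gaussian directions scaled by 1/sqrt(m) (Johnson-Lindenstrauss style).
    The paper shows a target dimension much smaller than JL's O(log n)
    suffices when only k-means costs must be preserved."""
    rng = random.Random(seed)
    d = len(points[0])
    R = [[rng.gauss(0, 1) / math.sqrt(m) for _ in range(d)] for _ in range(m)]
    return [[sum(r[j] * p[j] for j in range(d)) for r in R] for p in points]

def clustering_cost(points, assignment, k):
    """k-means cost of a fixed partition: sum of squared distances of each
    point to the centroid of its assigned cluster."""
    cost = 0.0
    for c in range(k):
        cluster = [p for p, a in zip(points, assignment) if a == c]
        if not cluster:
            continue
        d = len(cluster[0])
        centroid = [sum(p[j] for p in cluster) / len(cluster) for j in range(d)]
        cost += sum((p[j] - centroid[j]) ** 2 for p in cluster for j in range(d))
    return cost
```

The theorem above says that, with the right m, `clustering_cost` evaluated on the projected points is within (1±ε) of its value on the originals, simultaneously for every partition.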

Bekos, Michael 
STOC '19: "Planar Graphs of Bounded Degree ..."
Planar Graphs of Bounded Degree Have Bounded Queue Number
Michael Bekos, Henry Förster, Martin Gronemann, Tamara Mchedlidze, Fabrizio Montecchiani, Chrysanthi Raftopoulou, and Torsten Ueckerdt (University of Tübingen, Germany; University of Cologne, Germany; KIT, Germany; University of Perugia, Italy; National Technical University of Athens, Greece) A queue layout of a graph consists of a linear order of its vertices and a partition of its edges into queues, so that no two independent edges of the same queue are nested. The queue number of a graph is the minimum number of queues required by any of its queue layouts. A long-standing conjecture by Heath, Leighton and Rosenberg states that the queue number of planar graphs is bounded. This conjecture has been partially settled in the positive for several subfamilies of planar graphs (most of which have bounded treewidth). In this paper, we make a further important step towards settling this conjecture. We prove that planar graphs of bounded degree (which may have unbounded treewidth) have bounded queue number. A notable implication of this result is that every planar graph of bounded degree admits a three-dimensional straight-line grid drawing in linear volume. Further implications are that every planar graph of bounded degree has bounded track number, and that every k-planar graph (i.e., every graph that can be drawn in the plane with at most k crossings per edge) of bounded degree has bounded queue number. @InProceedings{STOC19p176, author = {Michael Bekos and Henry Förster and Martin Gronemann and Tamara Mchedlidze and Fabrizio Montecchiani and Chrysanthi Raftopoulou and Torsten Ueckerdt}, title = {Planar Graphs of Bounded Degree Have Bounded Queue Number}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {176--184}, doi = {10.1145/3313276.3316324}, year = {2019}, }
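As an illustrative aside (not from the paper): the no-nesting condition that defines a single queue can be checked directly from the vertex order. A minimal sketch, assuming edges are given as vertex pairs:

```python
def is_single_queue(order, edges):
    """Check the queue condition for one queue: with vertices laid out in
    `order`, no edge may be strictly nested inside another. Edges sharing
    an endpoint never violate the strict inequalities, matching the rule
    that only *independent* edges are forbidden from nesting."""
    pos = {v: i for i, v in enumerate(order)}
    spans = [tuple(sorted((pos[u], pos[v]))) for (u, v) in edges]
    return not any(a < c and d < b          # (c,d) strictly inside (a,b)
                   for (a, b) in spans for (c, d) in spans)
```

Note that crossing edges are allowed in a queue (unlike in a stack/book embedding, where crossings are forbidden and nestings allowed).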

Bender, Michael A. 
STOC '19: "Achieving Optimal Backlog ..."
Achieving Optimal Backlog in Multiprocessor Cup Games
Michael A. Bender, Martín Farach-Colton, and William Kuszmaul (Stony Brook University, USA; Rutgers University, USA; Massachusetts Institute of Technology, USA) Many problems in processor scheduling, deamortization, and buffer management can be modeled as single- and multi-processor cup games. At the beginning of the single-processor n-cup game, all cups are empty. In each step of the game, a filler distributes 1−є units of water among the cups, and then an emptier selects a cup and removes up to 1 unit of water from it. The goal of the emptier is to minimize the amount of water in the fullest cup, also known as the backlog. The greedy algorithm (i.e., empty from the fullest cup) is known to achieve backlog O(log n), and no deterministic algorithm can do better. We show that the performance of the greedy algorithm can be exponentially improved with a small amount of randomization: after each step, and for any k ≥ Ω(log є^{−1}), the emptier achieves backlog at most O(k) with probability at least 1 − O(2^{−2^{k}}). We call our algorithm the smoothed greedy algorithm because it follows from a smoothed analysis of the (standard) greedy algorithm. In each step of the p-processor n-cup game, the filler distributes p(1−є) units of water among the cups, with no cup receiving more than 1−δ units of water, and then the emptier selects p cups and removes 1 unit of water from each. Proving nontrivial bounds on the backlog for the multi-processor cup game has remained open for decades. We present a simple analysis of the greedy algorithm for the multi-processor cup game, establishing a backlog of O(є^{−1} log n), as long as δ > 1/poly(n). Turning to randomized algorithms, we find that the backlog drops to constant. Specifically, we show that if є and δ satisfy reasonable constraints, then there exists an algorithm that bounds the backlog after a given step by 3 with probability at least 1 − O(exp(−Ω(є^{2} p))). We prove that our results are asymptotically optimal for constant є, in the sense that no algorithms can achieve better bounds, up to constant factors in the backlog and in p. Moreover, we prove robustness results, demonstrating that our randomized algorithms continue to behave well even when placed in bad starting states. @InProceedings{STOC19p1148, author = {Michael A. Bender and Martín Farach-Colton and William Kuszmaul}, title = {Achieving Optimal Backlog in Multiprocessor Cup Games}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1148--1157}, doi = {10.1145/3313276.3316342}, year = {2019}, }
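As an illustrative aside (not from the paper): the single-processor cup game is easy to simulate. The sketch below uses a random filler purely for illustration (the paper's guarantees hold against an adversarial filler, and its smoothed greedy algorithm randomizes the emptier, not the filler).

```python
import random

def cup_game(n, steps, eps=0.1, seed=0):
    """Simulate the single-processor n-cup game with a *random* filler and
    the greedy emptier. Each step: the filler spreads 1 - eps units across
    the cups (random split), then the emptier removes up to 1 unit from
    the fullest cup. Returns the largest backlog observed."""
    rng = random.Random(seed)
    cups = [0.0] * n
    worst = 0.0
    for _ in range(steps):
        weights = [rng.random() for _ in range(n)]
        total = sum(weights)
        for i in range(n):
            cups[i] += (1 - eps) * weights[i] / total
        fullest = max(range(n), key=lambda j: cups[j])
        cups[fullest] = max(0.0, cups[fullest] - 1.0)
        worst = max(worst, max(cups))
    return worst
```

Against this benign filler the backlog stays far below the adversarial O(log n) bound that greedy guarantees.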

Bernstein, Aaron 
STOC '19: "Distributed Exact Weighted ..."
Distributed Exact Weighted All-Pairs Shortest Paths in Near-Linear Time
Aaron Bernstein and Danupon Nanongkai (Rutgers University, USA; KTH, Sweden) In the distributed all-pairs shortest paths problem (APSP), every node in the weighted undirected distributed network (the CONGEST model) needs to know the distance from every other node using the least number of communication rounds (typically called time complexity). The problem admits a (1+o(1))-approximation Θ(n)-time algorithm and a nearly-tight Ω(n) lower bound [Nanongkai, STOC’14; Lenzen and Patt-Shamir, PODC’15]. For the exact case, Elkin [STOC’17] presented an O(n^{5/3} log^{2/3} n) time bound, which was later improved to Õ(n^{5/4}) in [Huang, Nanongkai, Saranurak, FOCS’17]. It was shown that any superlinear lower bound (in n) requires a new technique [Censor-Hillel, Khoury, Paz, DISC’17], but otherwise it remained widely open whether there exists an Õ(n)-time algorithm for the exact case, which would match the best possible approximation algorithm. This paper resolves this question positively: we present a randomized (Las Vegas) Õ(n)-time algorithm, matching the lower bound up to polylogarithmic factors. Like the previous Õ(n^{5/4}) bound, our result works for directed graphs with zero (and even negative) edge weights. In addition to the improved running time, our algorithm works in a more general setting than that required by the previous Õ(n^{5/4}) bound; in our setting (i) the communication is only along edge directions (as opposed to bidirectional), and (ii) edge weights are arbitrary (as opposed to integers in {1, 2, …, poly(n)}). The previously best algorithm for this more difficult setting required Õ(n^{3/2}) time [Agarwal and Ramachandran, arXiv’18] (this can be improved to Õ(n^{4/3}) if one allows bidirectional communication). Our algorithm is extremely simple and relies on a new technique called Random Filtered Broadcast.
Given any sets of nodes A,B ⊆ V and assuming that every b ∈ B knows all distances from nodes in A, and every node v ∈ V knows all distances from nodes in B, we want every v ∈ V to know DistThrough_{B}(a,v) = min_{b∈ B} dist(a,b) + dist(b,v) for every a ∈ A. Previous works typically solve this problem by broadcasting all knowledge of every b ∈ B, causing superlinear edge congestion and time. We show a randomized algorithm that can reduce edge congestion and thus solve this problem in Õ(n) expected time. @InProceedings{STOC19p334, author = {Aaron Bernstein and Danupon Nanongkai}, title = {Distributed Exact Weighted All-Pairs Shortest Paths in Near-Linear Time}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {334-342}, doi = {10.1145/3313276.3316326}, year = {2019}, } Publisher's Version Info STOC '19: "Decremental Strongly-Connected ..." Decremental Strongly-Connected Components and Single-Source Reachability in Near-Linear Time Aaron Bernstein, Maximilian Probst, and Christian Wulff-Nilsen (Rutgers University, USA; University of Copenhagen, Denmark) Computing the Strongly-Connected Components (SCCs) in a graph G=(V,E) is known to take only O(m+n) time using an algorithm by Tarjan from 1972 [SICOMP 72], where m = |E| and n = |V|. For fully-dynamic graphs, conditional lower bounds provide evidence that the update time cannot be improved by polynomial factors over recomputing the SCCs from scratch after every update. Nevertheless, substantial progress has been made to find algorithms with fast update time for decremental graphs, i.e., graphs that undergo edge deletions. In this paper, we present the first algorithm for general decremental graphs that maintains the SCCs in total update time Õ(m), thus only a polylogarithmic factor from the optimal running time. Previously such a result was only known for the special case of planar graphs [Italiano et al., STOC 17].
Our result should be compared to the formerly best algorithm for general graphs achieving Õ(m√n) total update time by Chechik et al. [FOCS 16], which improved upon a breakthrough result leading to O(mn^{0.9 + o(1)}) total update time by Henzinger, Krinninger, and Nanongkai [STOC 14, ICALP 15]; these results in turn improved upon the longstanding bound of O(mn) by Roditty and Zwick [STOC 04]. All of the above results also apply to the decremental Single-Source Reachability (SSR) problem, which can be reduced to decrementally maintaining SCCs. A bound of O(mn) total update time for decremental SSR was established already in 1981 by Even and Shiloach [JACM 81]. @InProceedings{STOC19p365, author = {Aaron Bernstein and Maximilian Probst and Christian Wulff-Nilsen}, title = {Decremental Strongly-Connected Components and Single-Source Reachability in Near-Linear Time}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {365-376}, doi = {10.1145/3313276.3316335}, year = {2019}, } Publisher's Version
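The quantity DistThrough_{B}(a,v) targeted by Random Filtered Broadcast is easy to state centrally; the sketch below computes it directly from the known distance tables. The function name is illustrative, and the paper's actual contribution, achieving this distributively with low edge congestion, is not modeled here.

```python
def dist_through(dist_ab, dist_bv, A, B, V):
    """Centralized reference computation of
    DistThrough_B(a, v) = min over b in B of dist(a, b) + dist(b, v).

    dist_ab[(a, b)] holds the A-to-B distances (known at nodes of B);
    dist_bv[(b, v)] holds the B-to-V distances (known at every node v).
    """
    inf = float("inf")
    return {
        (a, v): min((dist_ab[(a, b)] + dist_bv[(b, v)] for b in B), default=inf)
        for a in A
        for v in V
    }
```

For example, with A = {0}, B = {1, 2}, V = {3} and distances dist(0,1)=1, dist(0,2)=5, dist(1,3)=4, dist(2,3)=1, the minimum over B is 1+4 = 5 through b = 1.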

Beyhaghi, Hedyeh 
STOC '19: "Optimal (and Benchmark-Optimal) ..."
Optimal (and Benchmark-Optimal) Competition Complexity for Additive Buyers over Independent Items
Hedyeh Beyhaghi and S. Matthew Weinberg (Cornell University, USA; Princeton University, USA) The Competition Complexity of an auction setting refers to the number of additional bidders necessary in order for the (deterministic, prior-independent, dominant strategy truthful) Vickrey-Clarke-Groves mechanism to achieve greater revenue than the (randomized, prior-dependent, Bayesian-truthful) optimal mechanism without the additional bidders. We prove that the competition complexity of n bidders with additive valuations over m independent items is at most n(ln(1+m/n)+2), and also at most 9√(nm). When n ≤ m, the first bound is optimal up to constant factors, even when the items are i.i.d. and regular. When n ≥ m, the second bound is optimal for the benchmark introduced by Eden et al. up to constant factors, even when the items are i.i.d. and regular. We further show that, while the Eden et al. benchmark is not necessarily tight in the n ≥ m regime, the competition complexity of n bidders with additive valuations over even 2 i.i.d. regular items is indeed ω(1). Our main technical contribution is a reduction from analyzing the Eden et al. benchmark to proving stochastic dominance of certain random variables. @InProceedings{STOC19p686, author = {Hedyeh Beyhaghi and S. Matthew Weinberg}, title = {Optimal (and Benchmark-Optimal) Competition Complexity for Additive Buyers over Independent Items}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {686-696}, doi = {10.1145/3313276.3316405}, year = {2019}, } Publisher's Version

Bitansky, Nir 
STOC '19: "Weak Zero-Knowledge Beyond ..."
Weak Zero-Knowledge Beyond the Black-Box Barrier
Nir Bitansky, Dakshita Khurana, and Omer Paneth (Tel Aviv University, Israel; Microsoft Research, USA; University of Illinois at Urbana-Champaign, USA; Massachusetts Institute of Technology, USA) The round complexity of zero-knowledge protocols is a longstanding open question, yet to be settled under standard assumptions. So far, the question has appeared equally challenging for relaxations such as weak zero-knowledge and witness hiding. Protocols satisfying these relaxed notions under standard assumptions have at least four messages, just like full-fledged zero-knowledge. The difficulty in improving round complexity stems from a fundamental barrier: none of these notions can be achieved in three messages via reductions (or simulators) that treat the verifier as a black box. We introduce a new non-black-box technique and use it to obtain the first protocols that cross this barrier under standard assumptions. We obtain weak zero-knowledge for NP in two messages, assuming the existence of quasi-polynomially secure fully homomorphic encryption and other standard primitives (known based on the quasi-polynomial hardness of Learning with Errors), and sub-exponentially secure one-way functions. We also obtain weak zero-knowledge for NP in three messages under standard polynomial assumptions (following for example from fully homomorphic encryption and factoring). We also give, under polynomial assumptions, a two-message witness-hiding protocol for any language in NP that has a witness encryption scheme. This protocol is publicly verifiable. Our technique is based on a new homomorphic trapdoor paradigm, which can be seen as a non-black-box analog of the classic Feige-Lapidot-Shamir trapdoor paradigm. @InProceedings{STOC19p1091, author = {Nir Bitansky and Dakshita Khurana and Omer Paneth}, title = {Weak Zero-Knowledge Beyond the Black-Box Barrier}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1091-1102}, doi = {10.1145/3313276.3316382}, year = {2019}, } Publisher's Version

Bohdanowicz, Thomas C. 
STOC '19: "Good Approximate Quantum LDPC ..."
Good Approximate Quantum LDPC Codes from Spacetime Circuit Hamiltonians
Thomas C. Bohdanowicz, Elizabeth Crosson, Chinmay Nirkhe, and Henry Yuen (California Institute of Technology, USA; University of New Mexico, USA; University of California at Berkeley, USA; University of Toronto, Canada) We study approximate quantum low-density parity-check (QLDPC) codes, which are approximate quantum error-correcting codes specified as the ground space of a frustration-free local Hamiltonian, whose terms do not necessarily commute. Such codes generalize stabilizer QLDPC codes, which are exact quantum error-correcting codes with sparse, low-weight stabilizer generators (i.e., each stabilizer generator acts on a few qubits, and each qubit participates in a few stabilizer generators). Our investigation is motivated by an important question in Hamiltonian complexity and quantum coding theory: do stabilizer QLDPC codes with constant rate, linear distance, and constant-weight stabilizers exist? We show that obtaining such optimal scaling of parameters (modulo polylogarithmic corrections) is possible if we go beyond stabilizer codes: we prove the existence of a family of [[N,k,d,ε]] approximate QLDPC codes that encode k = Ω(N) logical qubits into N physical qubits with distance d = Ω(N) and approximation infidelity ε = 1/polylog(N). The code space is stabilized by a set of 10-local non-commuting projectors, with each physical qubit only participating in polylog(N) projectors. We prove the existence of an efficient encoding map and show that the spectral gap of the code Hamiltonian scales as Ω(N^{−3.09}). We also show that arbitrary Pauli errors can be locally detected by circuits of polylogarithmic depth. Our family of approximate QLDPC codes is based on applying a recent connection between circuit Hamiltonians and approximate quantum codes (Nirkhe et al., ICALP 2018) to a result showing that random Clifford circuits of polylogarithmic depth yield asymptotically good quantum codes (Brown and Fawzi, ISIT 2013).
Then, in order to obtain a code with sparse checks and strong detection of local errors, we use a spacetime circuit-to-Hamiltonian construction in order to take advantage of the parallelism of the Brown-Fawzi circuits. Because of this, we call our codes spacetime codes. The analysis of the spectral gap of the code Hamiltonian is the main technical contribution of this work. We show that for any depth-D quantum circuit on n qubits there is an associated spacetime circuit-to-Hamiltonian construction with spectral gap Ω(n^{−3.09} D^{−2} log^{−6}(n)). To lower bound this gap we use a Markov chain decomposition method to divide the state space of partially completed circuit configurations into overlapping subsets corresponding to uniform circuit segments of depth log n, which are based on bitonic sorting circuits. We use the combinatorial properties of these circuit configurations to show rapid mixing between the subsets, and within the subsets we develop a novel isomorphism between the local update Markov chain on bitonic circuit configurations and the edge-flip Markov chain on equal-area dyadic tilings, whose mixing time was recently shown to be polynomial (Cannon, Levin, and Stauffer, RANDOM 2017). Previous lower bounds on the spectral gap of spacetime circuit Hamiltonians have all been based on a connection to exactly solvable quantum spin chains and applied only to 1+1 dimensional nearest-neighbor quantum circuits with at least linear depth. @InProceedings{STOC19p481, author = {Thomas C. Bohdanowicz and Elizabeth Crosson and Chinmay Nirkhe and Henry Yuen}, title = {Good Approximate Quantum LDPC Codes from Spacetime Circuit Hamiltonians}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {481-490}, doi = {10.1145/3313276.3316384}, year = {2019}, } Publisher's Version

Boroujeni, Mahdi 
STOC '19: "1+ε Approximation ..."
1+ε Approximation of Tree Edit Distance in Quadratic Time
Mahdi Boroujeni, Mohammad Ghodsi, MohammadTaghi Hajiaghayi, and Saeed Seddighin (Sharif University of Technology, Iran; Institute for Research in Fundamental Sciences, Iran; University of Maryland, USA) Edit distance is one of the most fundamental problems in computer science. Tree edit distance is a natural generalization of edit distance to ordered rooted trees. Such a generalization extends the applications of edit distance to areas such as computational biology, structured data analysis (e.g., XML), image analysis, and compiler optimization. Perhaps the most notable application of tree edit distance is in the analysis of RNA molecules in computational biology, where the secondary structure of RNA is typically represented as a rooted tree. The best-known solution for tree edit distance runs in cubic time. Recently, Bringmann et al. showed that an O(n^{2.99}) algorithm for weighted tree edit distance is unlikely, by proving a conditional lower bound on the computational complexity of tree edit distance. This shows a substantial gap between the computational complexity of tree edit distance and that of edit distance, for which a simple dynamic program solves the problem in quadratic time. In this work, we give the first nontrivial approximation algorithms for tree edit distance. Our main result is a quadratic time approximation scheme for tree edit distance that approximates the solution within a factor of 1+є for any constant є > 0. @InProceedings{STOC19p709, author = {Mahdi Boroujeni and Mohammad Ghodsi and MohammadTaghi Hajiaghayi and Saeed Seddighin}, title = {1+<i>ε</i> Approximation of Tree Edit Distance in Quadratic Time}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {709-720}, doi = {10.1145/3313276.3316388}, year = {2019}, } Publisher's Version
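For contrast with the cubic tree case, the simple quadratic dynamic program for string edit distance mentioned in the abstract can be sketched as follows (this is the standard textbook algorithm, not code from the paper):

```python
def edit_distance(s, t):
    """Classic O(|s| * |t|) dynamic program for string edit distance:
    the quadratic baseline the abstract contrasts with (cubic-time)
    tree edit distance."""
    m, n = len(s), len(t)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i  # delete all of s[:i]
    for j in range(n + 1):
        dp[0][j] = j  # insert all of t[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0 if s[i - 1] == t[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # delete s[i-1]
                           dp[i][j - 1] + 1,        # insert t[j-1]
                           dp[i - 1][j - 1] + sub)  # match / substitute
    return dp[m][n]

print(edit_distance("kitten", "sitting"))  # → 3
```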

Brakensiek, Joshua 
STOC '19: "Bridging between 0/1 and Linear ..."
Bridging between 0/1 and Linear Programming via Random Walks
Joshua Brakensiek and Venkatesan Guruswami (Stanford University, USA; Carnegie Mellon University, USA) Under the Strong Exponential Time Hypothesis, an integer linear program with n Boolean-valued variables and m equations cannot be solved in c^{n} time for any constant c < 2. If the domain of the variables is relaxed to [0,1], the associated linear program can of course be solved in polynomial time. In this work, we give a natural algorithmic bridging between these extremes of 0/1 and linear programming. Specifically, for any subset (finite union of intervals) E ⊂ [0,1] containing {0,1}, we give a random-walk based algorithm with runtime O_{E}((2−measure(E))^{n} poly(n,m)) that finds a solution in E^{n} to any n-variable linear program with m constraints that is feasible over {0,1}^{n}. Note that as E expands from {0,1} to [0,1], the runtime improves smoothly from 2^{n} to polynomial. Taking E = [0,1/k) ∪ (1−1/k,1] in our result yields as a corollary a randomized (2−2/k)^{n} poly(n) time algorithm for k-SAT. While our approach has some high-level resemblance to Schöning’s beautiful algorithm, our general algorithm is based on a more sophisticated random walk that incorporates several new ingredients, such as a multiplicative potential to measure progress, a judicious choice of starting distribution, and a time-varying distribution for the evolution of the random walk that is itself computed via an LP at each step (a solution to which is guaranteed based on the minimax theorem).
Plugging the LP algorithm into our earlier polymorphic framework yields fast exponential algorithms for any CSP (like k-SAT, 1-in-3-SAT, NAE k-SAT) that admits so-called “threshold partial polymorphisms.” @InProceedings{STOC19p568, author = {Joshua Brakensiek and Venkatesan Guruswami}, title = {Bridging between 0/1 and Linear Programming via Random Walks}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {568-577}, doi = {10.1145/3313276.3316347}, year = {2019}, } Publisher's Version STOC '19: "CSPs with Global Modular Constraints: ..." CSPs with Global Modular Constraints: Algorithms and Hardness via Polynomial Representations Joshua Brakensiek, Sivakanth Gopi, and Venkatesan Guruswami (Stanford University, USA; Microsoft Research, USA; Carnegie Mellon University, USA) We study the complexity of Boolean constraint satisfaction problems (CSPs) when the assignment must have Hamming weight in some congruence class modulo M, for various choices of the modulus M. Due to the known classification of tractable Boolean CSPs, this mainly reduces to the study of three cases: 2-SAT, HORN-SAT, and LIN-2 (linear equations mod 2). We classify the moduli M for which these respective problems are polynomial time solvable, and when they are not (assuming the ETH). Our study reveals that this modular constraint lends a surprising richness to these classic, well-studied problems, with interesting broader connections to complexity theory and coding theory. The HORN-SAT case is connected to the covering complexity of polynomials representing the NAND function mod M. The LIN-2 case is tied to the sparsity of polynomials representing the OR function mod M, which in turn has connections to modular weight distribution properties of linear codes and locally decodable codes.
In both cases, the analysis of our algorithm as well as the hardness reduction rely on these polynomial representations, highlighting an interesting algebraic common ground between hard cases for our algorithms and the gadgets which show hardness. These new complexity measures of polynomial representations merit further study. The inspiration for our study comes from a recent work by Nägele, Sudakov, and Zenklusen on submodular minimization with a global congruence constraint. Our algorithm for HORN-SAT has strong similarities to their algorithm, and in particular identical kinds of set systems arise in both cases. Our connection to polynomial representations leads to a simpler analysis of such set systems, and also sheds light on (but does not resolve) the complexity of submodular minimization with a congruency requirement modulo a composite M. @InProceedings{STOC19p590, author = {Joshua Brakensiek and Sivakanth Gopi and Venkatesan Guruswami}, title = {CSPs with Global Modular Constraints: Algorithms and Hardness via Polynomial Representations}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {590-601}, doi = {10.1145/3313276.3316401}, year = {2019}, } Publisher's Version
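For reference, the classical Schöning-style random walk that the first abstract compares against can be sketched as follows. This is the classical algorithm for 3-SAT, not the paper's LP-guided walk; names and retry counts are illustrative.

```python
import random

def schoening_3sat(clauses, n, rng, tries=200):
    """Schöning-style random walk for 3-SAT.

    clauses: iterable of clauses; a literal +i means x_i, -i means NOT x_i.
    Returns a satisfying assignment {var: bool}, or None after all tries.
    """
    def sat(lit, a):
        return a[abs(lit)] if lit > 0 else not a[abs(lit)]

    for _ in range(tries):
        # Start from a uniformly random assignment ...
        a = {i: rng.random() < 0.5 for i in range(1, n + 1)}
        # ... and walk for O(n) steps.
        for _ in range(3 * n):
            unsat = [c for c in clauses if not any(sat(l, a) for l in c)]
            if not unsat:
                return a
            # Flip a uniformly random variable of a random unsatisfied clause.
            lit = rng.choice(rng.choice(unsat))
            a[abs(lit)] = not a[abs(lit)]
    return None
```

Each restart succeeds with probability roughly (3/4)^{n}, giving the well-known (4/3)^{n} poly(n) expected running time; the k-SAT corollary in the abstract generalizes this flavor of bound to (2−2/k)^{n} poly(n).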

Bresler, Guy 
STOC '19: "Learning Restricted Boltzmann ..."
Learning Restricted Boltzmann Machines via Influence Maximization
Guy Bresler, Frederic Koehler, and Ankur Moitra (Massachusetts Institute of Technology, USA) Graphical models are a rich language for describing high-dimensional distributions in terms of their dependence structure. While there are algorithms with provable guarantees for learning undirected graphical models in a variety of settings, there has been much less progress in the important scenario when there are latent variables. Here we study Restricted Boltzmann Machines (or RBMs), which are a popular model with wide-ranging applications in dimensionality reduction, collaborative filtering, topic modeling, feature extraction, and deep learning. The main message of our paper is a strong dichotomy in the feasibility of learning RBMs, depending on the nature of the interactions between variables: ferromagnetic models can be learned efficiently, while general models cannot. In particular, we give a simple greedy algorithm based on influence maximization to learn ferromagnetic RBMs with bounded degree. In fact, we learn a description of the distribution on the observed variables as a Markov Random Field. Our analysis is based on tools from mathematical physics that were developed to show the concavity of magnetization. Our algorithm extends straightforwardly to general ferromagnetic Ising models with latent variables. Conversely, we show that even for a constant number of latent variables with constant degree, without ferromagneticity the problem is as hard as sparse parity with noise. This hardness result is based on a sharp and surprising characterization of the representational power of bounded degree RBMs: the distribution on their observed variables can simulate any bounded order MRF. This result is of independent interest, since RBMs are the building blocks of deep belief networks.
@InProceedings{STOC19p828, author = {Guy Bresler and Frederic Koehler and Ankur Moitra}, title = {Learning Restricted Boltzmann Machines via Influence Maximization}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {828-839}, doi = {10.1145/3313276.3316372}, year = {2019}, } Publisher's Version

Bringmann, Karl 
STOC '19: "Approximating APSP without ..."
Approximating APSP without Scaling: Equivalence of Approximate Min-Plus and Exact Min-Max
Karl Bringmann, Marvin Künnemann, and Karol Węgrzycki (Max Planck Institute for Informatics, Germany; University of Warsaw, Poland) Zwick’s (1+ε)-approximation algorithm for the All Pairs Shortest Path (APSP) problem runs in time Õ(n^{ω}/ε · log W), where ω ≤ 2.373 is the exponent of matrix multiplication and W denotes the largest weight. This can be used to approximate several graph characteristics including the diameter, radius, median, minimum-weight triangle, and minimum-weight cycle in the same time bound. Since Zwick’s algorithm uses the scaling technique, it has a factor log W in the running time. In this paper, we study whether APSP and related problems admit approximation schemes avoiding the scaling technique. That is, the number of arithmetic operations should be independent of W; this is called strongly polynomial. Our main results are as follows. (1) We design approximation schemes in strongly polynomial time O(n^{ω}/ε · polylog(n/ε)) for APSP on undirected graphs as well as for the graph characteristics diameter, radius, median, minimum-weight triangle, and minimum-weight cycle on directed or undirected graphs. (2) For APSP on directed graphs we design an approximation scheme in strongly polynomial time O(n^{ω + 3/2} ε^{−1} polylog(n/ε)). This is significantly faster than the best exact algorithm. (3) We explain why our approximation scheme for APSP on directed graphs has a worse exponent than ω: Any improvement over our exponent ω + 3/2 would improve the best known algorithm for Min-Max Product. In fact, we prove that approximating directed APSP and exactly computing the Min-Max Product are equivalent. Our techniques yield a framework for approximation problems over the (min,+)-semiring that can be applied more generally. In particular, we obtain the first strongly polynomial approximation scheme for Min-Plus Convolution in strongly subquadratic time, and we prove an equivalence of approximate Min-Plus Convolution and exact Min-Max Convolution.
@InProceedings{STOC19p943, author = {Karl Bringmann and Marvin Künnemann and Karol Węgrzycki}, title = {Approximating APSP without Scaling: Equivalence of Approximate Min-Plus and Exact Min-Max}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {943-954}, doi = {10.1145/3313276.3316373}, year = {2019}, } Publisher's Version

Bubeck, Sébastien 
STOC '19: "Competitively Chasing Convex ..."
Competitively Chasing Convex Bodies
Sébastien Bubeck, Yin Tat Lee, Yuanzhi Li, and Mark Sellke (Microsoft Research, USA; University of Washington, USA; Stanford University, USA) Let F be a family of sets in some metric space. In the F-chasing problem, an online algorithm observes a request sequence of sets in F and responds (online) by giving a sequence of points in these sets. The movement cost is the distance between consecutive such points. The competitive ratio is the worst case ratio (over request sequences) between the total movement of the online algorithm and the smallest movement one could have achieved by knowing in advance the request sequence. The family F is said to be chaseable if there exists an online algorithm with finite competitive ratio. In 1991, Linial and Friedman conjectured that the family of convex sets in Euclidean space is chaseable. We prove this conjecture. @InProceedings{STOC19p861, author = {Sébastien Bubeck and Yin Tat Lee and Yuanzhi Li and Mark Sellke}, title = {Competitively Chasing Convex Bodies}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {861-868}, doi = {10.1145/3313276.3316314}, year = {2019}, } Publisher's Version

Bulín, Jakub 
STOC '19: "Algebraic Approach to Promise ..."
Algebraic Approach to Promise Constraint Satisfaction
Jakub Bulín, Andrei Krokhin, and Jakub Opršal (Charles University in Prague, Czechia; University of Durham, UK) The complexity and approximability of the constraint satisfaction problem (CSP) have been actively studied over the last 20 years. A new version of the CSP, the promise CSP (PCSP), has recently been proposed, motivated by open questions about the approximability of variants of satisfiability and graph colouring. The PCSP significantly extends the standard decision CSP. The complexity of CSPs with a fixed constraint language on a finite domain has recently been fully classified, greatly guided by the algebraic approach, which uses polymorphisms — high-dimensional symmetries of solution spaces — to analyse the complexity of problems. The corresponding classification for PCSPs is wide open and includes some longstanding open questions, such as the complexity of approximate graph colouring, as special cases. The basic algebraic approach to PCSP was initiated by Brakensiek and Guruswami, and in this paper we significantly extend it and lift it from concrete properties of polymorphisms to their abstract properties. We introduce a new class of problems that can be viewed as algebraic versions of the (Gap) Label Cover problem, and show that every PCSP with a fixed constraint language is equivalent to a problem of this form. This allows us to identify a “measure of symmetry” that is well suited for comparing and relating the complexity of different PCSPs via the algebraic approach. We demonstrate how our theory can be applied by improving the state of the art in approximate graph colouring: we show that, for any k ≥ 3, it is NP-hard to find a (2k−1)-colouring of a given k-colourable graph. @InProceedings{STOC19p602, author = {Jakub Bulín and Andrei Krokhin and Jakub Opršal}, title = {Algebraic Approach to Promise Constraint Satisfaction}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {602-613}, doi = {10.1145/3313276.3316300}, year = {2019}, } Publisher's Version

Bury, Marc 
STOC '19: "Oblivious Dimension Reduction ..."
Oblivious Dimension Reduction for k-Means: Beyond Subspaces and the Johnson-Lindenstrauss Lemma
Luca Becchetti, Marc Bury, Vincent Cohen-Addad, Fabrizio Grandoni, and Chris Schwiegelshohn (Sapienza University of Rome, Italy; Zalando, Switzerland; CNRS, France; IDSIA, Switzerland) We show that for n points in d-dimensional Euclidean space, a data-oblivious random projection of the columns onto m ∈ O((log k + log log n) ε^{−6} log(1/ε)) dimensions is sufficient to approximate the cost of all k-means clusterings up to a multiplicative (1±ε) factor. The previous-best upper bounds on m are O(log n · ε^{−2}), given by a direct application of the Johnson-Lindenstrauss Lemma, and O(k ε^{−2}), given by [Cohen et al., STOC’15]. @InProceedings{STOC19p1039, author = {Luca Becchetti and Marc Bury and Vincent Cohen-Addad and Fabrizio Grandoni and Chris Schwiegelshohn}, title = {Oblivious Dimension Reduction for <i>k</i>-Means: Beyond Subspaces and the Johnson-Lindenstrauss Lemma}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1039-1050}, doi = {10.1145/3313276.3316318}, year = {2019}, } Publisher's Version
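The statement can be checked empirically for a fixed clustering: project with a data-oblivious Gaussian sketch and compare the k-means cost before and after. This is a minimal sketch; the target dimension and data below are illustrative and do not follow the paper's bound on m.

```python
import math, random

def kmeans_cost(points, assignment, k):
    """Sum of squared distances from each point to its cluster's centroid."""
    d = len(points[0])
    cost = 0.0
    for c in range(k):
        members = [p for p, a in zip(points, assignment) if a == c]
        if not members:
            continue
        centroid = [sum(p[j] for p in members) / len(members) for j in range(d)]
        cost += sum((p[j] - centroid[j]) ** 2 for p in members for j in range(d))
    return cost

def project(points, m, rng):
    """Data-oblivious Gaussian random projection to m dimensions,
    scaled by 1/sqrt(m) so squared norms are preserved in expectation."""
    d = len(points[0])
    G = [[rng.gauss(0, 1) / math.sqrt(m) for _ in range(d)] for _ in range(m)]
    return [[sum(G[i][j] * p[j] for j in range(d)) for i in range(m)] for p in points]

rng = random.Random(1)
pts = [[rng.gauss(0, 1) for _ in range(100)] for _ in range(60)]
assign = [i % 2 for i in range(60)]
orig = kmeans_cost(pts, assign, 2)
proj = kmeans_cost(project(pts, 40, rng), assign, 2)
# With high probability the two costs agree up to a small multiplicative factor.
```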

Canetti, Ran 
STOC '19: "Fiat-Shamir: From Practice ..."
Fiat-Shamir: From Practice to Theory
Ran Canetti, Yilei Chen, Justin Holmgren, Alex Lombardi, Guy N. Rothblum, Ron D. Rothblum, and Daniel Wichs (Boston University, USA; Tel Aviv University, Israel; Visa Research, USA; Princeton University, USA; Massachusetts Institute of Technology, USA; Weizmann Institute of Science, Israel; Technion, Israel; Northeastern University, USA) We give new instantiations of the Fiat-Shamir transform using explicit, efficiently computable hash functions. We improve over prior work by reducing the security of these protocols to qualitatively simpler and weaker computational hardness assumptions. As a consequence of our framework, we obtain the following concrete results. 1) There exists a succinct publicly verifiable non-interactive argument system for log-space uniform computations, under the assumption that any one of a broad class of fully homomorphic encryption (FHE) schemes has almost optimal security against polynomial-time adversaries. The class includes all FHE schemes in the literature that are based on the learning with errors (LWE) problem. 2) There exists a non-interactive zero-knowledge argument system for NP in the common reference string model, under either of the following two assumptions: (i) Almost optimal hardness of search-LWE against polynomial-time adversaries, or (ii) The existence of a circular-secure FHE scheme with a standard (polynomial time, negligible advantage) level of security. 3) The classic quadratic residuosity protocol of [Goldwasser, Micali, and Rackoff, SICOMP ’89] is not zero knowledge when repeated in parallel, under any of the hardness assumptions above. @InProceedings{STOC19p1082, author = {Ran Canetti and Yilei Chen and Justin Holmgren and Alex Lombardi and Guy N. Rothblum and Ron D. Rothblum and Daniel Wichs}, title = {Fiat-Shamir: From Practice to Theory}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1082-1090}, doi = {10.1145/3313276.3316380}, year = {2019}, } Publisher's Version

Canonne, Clément L. 
STOC '19: "The Structure of Optimal Private ..."
The Structure of Optimal Private Tests for Simple Hypotheses
Clément L. Canonne, Gautam Kamath, Audra McMillan, Adam Smith, and Jonathan Ullman (Stanford University, USA; Simons Institute for the Theory of Computing, Berkeley, USA; Boston University, USA; Northeastern University, USA) Hypothesis testing plays a central role in statistical inference, and is used in many settings where privacy concerns are paramount. This work answers a basic question about privately testing simple hypotheses: given two distributions P and Q, and a privacy level ε, how many i.i.d. samples are needed to distinguish P from Q subject to ε-differential privacy, and what sort of tests have optimal sample complexity? Specifically, we characterize this sample complexity up to constant factors in terms of the structure of P and Q and the privacy level ε, and show that this sample complexity is achieved by a certain randomized and clamped variant of the log-likelihood ratio test. Our result is an analogue of the classical Neyman-Pearson lemma in the setting of private hypothesis testing. We also give an application of our result to private change-point detection. Our characterization applies more generally to hypothesis tests satisfying essentially any notion of algorithmic stability, which is known to imply strong generalization bounds in adaptive data analysis, and thus our results have applications even when privacy is not a primary concern. @InProceedings{STOC19p310, author = {Clément L. Canonne and Gautam Kamath and Audra McMillan and Adam Smith and Jonathan Ullman}, title = {The Structure of Optimal Private Tests for Simple Hypotheses}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {310-321}, doi = {10.1145/3313276.3316336}, year = {2019}, } Publisher's Version
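A noisy, clamped log-likelihood-ratio test in the spirit of the abstract can be sketched as follows. The exact clamping thresholds and noise calibration in the paper differ; this sketch only illustrates why clamping bounds each sample's influence, which in turn lets Laplace noise make the decision ε-differentially private.

```python
import math, random

def private_llr_test(samples, p, q, eps, clamp, rng):
    """Illustrative eps-DP test deciding between distributions p and q
    over a finite domain (dicts mapping outcome -> probability).

    Each per-sample log-likelihood ratio is clamped to [-clamp, clamp],
    so changing one sample moves the statistic by at most 2*clamp; adding
    Laplace noise with scale 2*clamp/eps then gives eps-DP.
    Returns "P" or "Q".
    """
    stat = 0.0
    for x in samples:
        llr = math.log(p[x]) - math.log(q[x])
        stat += max(-clamp, min(clamp, llr))
    # Laplace noise via inverse-CDF sampling; sensitivity of stat is 2*clamp.
    u = rng.random() - 0.5
    noise = -(2 * clamp / eps) * math.copysign(math.log(1 - 2 * abs(u)), u)
    return "P" if stat + noise > 0 else "Q"
```

With many samples strongly favoring P, the clamped statistic dominates the noise and the test answers "P" with overwhelming probability.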

Capalbo, Michael 
STOC '19: "Explicit 𝑁-Vertex Graphs ..."
Explicit 𝑁-Vertex Graphs with Maximum Degree 𝐾 and Diameter [1+𝑜(1)]log_{𝐾−1} 𝑁 for Each 𝐾−1 a Prime Power
Michael Capalbo (Center for Computing Sciences, USA) Here we first present the solution of a long-standing open question: the explicit construction of an infinite family of N-vertex cubic graphs that have diameter [1+o(1)]log_{2} N. We then extend the techniques to construct, for each K of the form 2^{s}+1 or K=p^{s}+1, s an integer and p a prime, an infinite family of K-regular graphs on N vertices with diameter [1+o(1)]log_{K−1} N. @InProceedings{STOC19p1191, author = {Michael Capalbo}, title = {Explicit 𝑁-Vertex Graphs with Maximum Degree 𝐾 and Diameter [1+𝑜(1)]log<sub>𝐾−1</sub> 𝑁 for Each 𝐾−1 a Prime Power}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1191--1202}, doi = {10.1145/3313276.3316399}, year = {2019}, } Publisher's Version 

Chakrabarty, Deeparnab 
STOC '19: "Approximation Algorithms for ..."
Approximation Algorithms for Minimum Norm and Ordered Optimization Problems
Deeparnab Chakrabarty and Chaitanya Swamy (Dartmouth College, USA; University of Waterloo, Canada) In many optimization problems, a feasible solution induces a multidimensional cost vector. For example, in load-balancing a schedule induces a load vector across the machines. In k-clustering, opening k facilities induces an assignment cost vector across the clients. Typically, one seeks a solution which either minimizes the sum or the max of this vector, and these problems (makespan minimization, k-median, and k-center) are classic NP-hard problems which have been extensively studied. In this paper we consider the minimum-norm optimization problem. Given an arbitrary monotone, symmetric norm, the problem asks to find a solution which minimizes the norm of the induced cost vector. Such norms are versatile and include ℓ_{p} norms, Top-ℓ norms (sum of the ℓ largest coordinates in absolute value), and ordered norms (nonnegative linear combinations of Top-ℓ norms); consequently, the minimum-norm problem models a wide variety of problems under one umbrella. We give a general framework to tackle the minimum-norm problem, and illustrate its efficacy in the unrelated-machine load-balancing and k-clustering settings. Our concrete results are the following. (a) We give constant-factor approximation algorithms for the minimum-norm load-balancing problem on unrelated machines, and the minimum-norm k-clustering problem. To our knowledge, our results constitute the first constant-factor approximations for such a general suite of objectives. (b) For load balancing on unrelated machines, we give a (2+ε)-approximation for ordered load balancing (i.e., min-norm load-balancing under an ordered norm). (c) For k-clustering, we give a (5+ε)-approximation for the ordered k-median problem, which significantly improves upon the previous-best constant-factor approximation (Chakrabarty and Swamy (ICALP 2018); Byrka, Sornat, and Spoerhase (STOC 2018)). 
(d) Our techniques also imply O(1)-approximations to the instance-wise best simultaneous approximation factor for unrelated-machine load-balancing and k-clustering. To our knowledge, these are the first positive simultaneous approximation results in these settings. At a technical level, one of our chief insights is that minimum-norm optimization can be reduced to a special case that we call min-max ordered optimization. Both the reduction, and the task of devising algorithms for the latter problem, require a sparsification idea that we develop, which is of interest for ordered optimization problems. The main ingredient in solving min-max ordered optimization is a deterministic, oblivious rounding procedure (that we devise) for suitable LP relaxations of the load-balancing and k-clustering problems; this may be of independent interest. @InProceedings{STOC19p126, author = {Deeparnab Chakrabarty and Chaitanya Swamy}, title = {Approximation Algorithms for Minimum Norm and Ordered Optimization Problems}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {126--137}, doi = {10.1145/3313276.3316322}, year = {2019}, } Publisher's Version 
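For concreteness, the Top-ℓ and ordered norms defined in the abstract are straightforward to evaluate on a given cost vector; a minimal sketch (the function names are ours):

```python
def top_ell_norm(costs, ell):
    # Top-ℓ norm: sum of the ℓ largest coordinates in absolute value.
    return sum(sorted((abs(c) for c in costs), reverse=True)[:ell])

def ordered_norm(costs, weights):
    # Ordered norm: apply nonnegative, non-increasing weights to the
    # coordinates of the cost vector sorted in non-increasing order
    # (equivalently, a nonnegative combination of Top-ℓ norms).
    sorted_costs = sorted((abs(c) for c in costs), reverse=True)
    return sum(w * c for w, c in zip(weights, sorted_costs))
```

Setting weights = (1, …, 1, 0, …, 0) with ℓ ones recovers the Top-ℓ norm, and all-ones weights recover the ℓ_{1} norm, which is the sense in which these norms unify the sum and max objectives.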

Charalampopoulos, Panagiotis 
STOC '19: "Almost Optimal Distance Oracles ..."
Almost Optimal Distance Oracles for Planar Graphs
Panagiotis Charalampopoulos, Paweł Gawrychowski, Shay Mozes, and Oren Weimann (King's College London, UK; University of Wrocław, Poland; IDC Herzliya, Israel; University of Haifa, Israel) We present new tradeoffs between space and query time for exact distance oracles in directed weighted planar graphs. These tradeoffs are almost optimal in the sense that they are within polylogarithmic, subpolynomial or arbitrarily small polynomial factors from the naïve linear-space, constant-query-time lower bound. These tradeoffs include: (i) an oracle with space O(n^{1+є}) and query time Õ(1) for any constant є>0, (ii) an oracle with space Õ(n) and query time O(n^{є}) for any constant є>0, and (iii) an oracle with space n^{1+o(1)} and query time n^{o(1)}. @InProceedings{STOC19p138, author = {Panagiotis Charalampopoulos and Paweł Gawrychowski and Shay Mozes and Oren Weimann}, title = {Almost Optimal Distance Oracles for Planar Graphs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {138--151}, doi = {10.1145/3313276.3316316}, year = {2019}, } Publisher's Version 

Charikar, Moses 
STOC '19: "Efficient Profile Maximum ..."
Efficient Profile Maximum Likelihood for Universal Symmetric Property Estimation
Moses Charikar, Kirankumar Shiragur, and Aaron Sidford (Stanford University, USA) Estimating symmetric properties of a distribution, e.g., support size, coverage, entropy, and distance to uniformity, is among the most fundamental problems in algorithmic statistics. While these properties have been studied extensively and separate optimal estimators have been produced, in striking recent work Acharya et al. provided a single estimator that is competitive for each. They showed that the value of the property on the distribution that approximately maximizes profile likelihood (PML), i.e., the probability of the observed frequency of frequencies, is sample-competitive with respect to a broad class of estimators. Unfortunately, prior to this work, there was no known polynomial-time algorithm to compute such an approximation or use PML to obtain a universal plug-in estimator. In this paper we provide an algorithm that, given n samples from a distribution, computes an approximate PML distribution up to a multiplicative error of exp(n^{2/3} poly log(n)) in nearly linear time. Generalizing work of Acharya et al. we show that our algorithm yields a universal plug-in estimator that is competitive with a broad range of estimators up to accuracy є = Ω(n^{−0.166}). Further, we provide efficient polynomial-time algorithms for computing a d-dimensional generalization of PML (for constant d) that allows for universal plug-in estimation of symmetric relationships between distributions. @InProceedings{STOC19p780, author = {Moses Charikar and Kirankumar Shiragur and Aaron Sidford}, title = {Efficient Profile Maximum Likelihood for Universal Symmetric Property Estimation}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {780--791}, doi = {10.1145/3313276.3316398}, year = {2019}, } Publisher's Version 

Chattopadhyay, Arkadev 
STOC '19: "The Log-Approximate-Rank Conjecture ..."
The Log-Approximate-Rank Conjecture Is False
Arkadev Chattopadhyay, Nikhil S. Mande, and Suhail Sherif (TIFR, India; Georgetown University, USA) We construct a simple and total XOR function F on 2n variables that has only O(√n) spectral norm, O(n^{2}) approximate rank and O(n^{2.5}) approximate nonnegative rank. We show it has polynomially large randomized bounded-error communication complexity of Ω(√n). This yields the first exponential gap between the logarithm of the approximate rank and randomized communication complexity for total functions. Thus F witnesses a refutation of the Log-Approximate-Rank Conjecture (LARC), which was posed by Lee and Shraibman as a very natural analogue for randomized communication of the still unresolved Log-Rank Conjecture for deterministic communication. The best known previous gap for any total function between the two measures is a recent 4th-power separation by Göös, Jayram, Pitassi and Watson. Additionally, our function F refutes Grolmusz’s Conjecture and a variant of the Log-Approximate-Nonnegative-Rank Conjecture, suggested recently by Kol, Moran, Shpilka and Yehudayoff, both of which are implied by the LARC. The complement of F has exponentially large approximate nonnegative rank. This answers a question of Lee and of Kol et al., showing that approximate nonnegative rank can be exponentially larger than approximate rank. The function F also falsifies a conjecture about parity measures of Boolean functions made by Tsang, Wong, Xie and Zhang. The latter conjecture implied the Log-Rank Conjecture for XOR functions. We are pleased to note that shortly after we published our results two independent groups of researchers, Anshu, Boddu and Touchette, and Sinha and de Wolf, used our function F to prove that the Quantum-Log-Rank Conjecture is also false by showing that F has Ω(n^{1/6}) quantum communication complexity. @InProceedings{STOC19p42, author = {Arkadev Chattopadhyay and Nikhil S. Mande and Suhail Sherif}, title = {The Log-Approximate-Rank Conjecture Is False}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {42--53}, doi = {10.1145/3313276.3316353}, year = {2019}, } Publisher's Version 

Chekuri, Chandra 
STOC '19: "Parallelizing Greedy for Submodular ..."
Parallelizing Greedy for Submodular Set Function Maximization in Matroids and Beyond
Chandra Chekuri and Kent Quanrud (University of Illinois at Urbana-Champaign, USA) We consider parallel, or low-adaptivity, algorithms for submodular function maximization. This line of work was recently initiated by Balkanski and Singer and has already led to several interesting results on the cardinality constraint and explicit packing constraints. An important open problem is the classical setting of matroid constraint, which has been instrumental for developments in submodular function maximization. In this paper we develop a general strategy to parallelize the well-studied greedy algorithm and use it to obtain a randomized (1/2 − є)-approximation in O(log^{2}(n)/є^{2}) rounds of adaptivity. We rely on this algorithm, and an elegant amplification approach due to Badanidiyuru and Vondrák, to obtain a fractional solution that yields a near-optimal randomized (1 − 1/e − є)-approximation in O(log^{2}(n)/є^{3}) rounds of adaptivity. For nonnegative functions we obtain a (3−2√2 − є)-approximation and a fractional solution that yields a (1/e − є)-approximation. Our approach for parallelizing greedy yields approximations for intersections of matroids and matchoids, and the approximation ratios are comparable to those known for sequential greedy. @InProceedings{STOC19p78, author = {Chandra Chekuri and Kent Quanrud}, title = {Parallelizing Greedy for Submodular Set Function Maximization in Matroids and Beyond}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {78--89}, doi = {10.1145/3313276.3316406}, year = {2019}, } Publisher's Version 
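The sequential greedy that the paper parallelizes can be sketched for a cardinality constraint (a special case of a matroid constraint). This is the classical inherently sequential baseline, not the paper's low-adaptivity algorithm: each of the k iterations depends on the previous one, which is exactly the adaptivity the paper reduces to polylogarithmic.

```python
def greedy_submodular(ground, f, k):
    # Classical sequential greedy: repeatedly add the element with the
    # largest marginal gain f(S + e) - f(S), up to k elements.
    S = set()
    for _ in range(k):
        best = max((e for e in ground if e not in S),
                   key=lambda e: f(S | {e}) - f(S), default=None)
        if best is None or f(S | {best}) - f(S) <= 0:
            break  # no element with positive marginal gain remains
        S.add(best)
    return S
```

For a monotone submodular f, such as the coverage function below, this gives the classical (1 − 1/e)-approximation under a cardinality constraint.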

Chen, Lijie 
STOC '19: "Bootstrapping Results for ..."
Bootstrapping Results for Threshold Circuits “Just Beyond” Known Lower Bounds
Lijie Chen and Roei Tell (Massachusetts Institute of Technology, USA; Weizmann Institute of Science, Israel) The best known lower bounds for the circuit class TC^{0} are only slightly superlinear. Similarly, the best known algorithm for derandomization of this class is an algorithm for quantified derandomization (i.e., a weak type of derandomization) of circuits of slightly superlinear size. In this paper we show that even very mild quantitative improvements of either of the two foregoing results would already imply superpolynomial lower bounds for TC^{0}. Specifically: (1) If for every c>1 and sufficiently large d∈ℕ it holds that n-bit TC^{0} circuits of depth d require n^{1+c^{−d}} wires to compute certain NC^{1}-complete functions, then TC^{0}≠NC^{1}. In fact, even lower bounds for TC^{0} circuits of size n^{1+c^{−d}} against these functions when c>1 is fixed and sufficiently small would yield lower bounds for polynomial-sized circuits. Lower bounds of the form n^{1+c^{−d}} against these functions are already known, but for a fixed c≈2.41 that is too large to yield new lower bounds via our results. (2) If there exists a deterministic algorithm that gets as input an n-bit TC^{0} circuit of depth d and n^{1+(1.61)^{−d}} wires, runs in time 2^{n^{o(1)}}, and distinguishes circuits that accept at most B(n)=2^{n^{1−(1.61)^{−d}}} inputs from circuits that reject at most B(n) inputs, then NEXP⊈TC^{0}. An algorithm for this “quantified derandomization” task is already known, but it works only when the number of wires is n^{1+c^{−d}}, for c>30, and with a smaller B(n)≈2^{n^{1−(30/c)^{d}}}. Intuitively, the “take-away” message from our work is that the gap between currently known results and results that would suffice to get superpolynomial lower bounds for TC^{0} boils down to the precise constant c>1 in the bound n^{1+c^{−d}} on the number of wires. 
Our results improve previous results of Allender and Koucký (2010) and of the second author (2018), respectively, whose hypotheses referred to circuits with n^{1+c/d} wires (rather than n^{1+c^{−d}} wires). We also prove results similar to the two results above for other circuit classes (i.e., ACC^{0} and CC^{0}). @InProceedings{STOC19p34, author = {Lijie Chen and Roei Tell}, title = {Bootstrapping Results for Threshold Circuits “Just Beyond” Known Lower Bounds}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {34--41}, doi = {10.1145/3313276.3316333}, year = {2019}, } Publisher's Version 

Chen, Lin 
STOC '19: "Unconstrained Submodular Maximization ..."
Unconstrained Submodular Maximization with Constant Adaptive Complexity
Lin Chen, Moran Feldman, and Amin Karbasi (Yale University, USA; Open University of Israel, Israel) In this paper, we consider the unconstrained submodular maximization problem. We propose the first algorithm for this problem that achieves a tight (1/2−ε)-approximation guarantee using Õ(ε^{−1}) adaptive rounds and a linear number of function evaluations. No previously known algorithm for this problem achieves an approximation ratio better than 1/3 using less than Ω(n) rounds of adaptivity, where n is the size of the ground set. Moreover, our algorithm easily extends to the maximization of a nonnegative continuous DR-submodular function subject to a box constraint, and achieves a tight (1/2−ε)-approximation guarantee for this problem while keeping the same adaptive and query complexities. @InProceedings{STOC19p102, author = {Lin Chen and Moran Feldman and Amin Karbasi}, title = {Unconstrained Submodular Maximization with Constant Adaptive Complexity}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {102--113}, doi = {10.1145/3313276.3316327}, year = {2019}, } Publisher's Version 
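For context, the classical sequential baseline for unconstrained submodular maximization is the deterministic double-greedy of Buchbinder, Feldman, Naor, and Schwartz, which gives a 1/3-approximation in n adaptive rounds (its randomized variant gives 1/2); the abstract's point is that all prior algorithms of this quality need Ω(n) rounds. A minimal sketch of the deterministic version:

```python
def double_greedy(ground, f):
    # Deterministic double greedy: scan the ground set once, growing X
    # from the empty set and shrinking Y from the full set; X == Y at
    # the end. Requires f nonnegative submodular.
    X, Y = set(), set(ground)
    for e in ground:
        a = f(X | {e}) - f(X)      # marginal gain of adding e to X
        b = f(Y - {e}) - f(Y)      # marginal gain of removing e from Y
        if a >= b:
            X.add(e)
        else:
            Y.discard(e)
    return X
```

Note the loop is one long chain of adaptive rounds, one per element, which is precisely the dependence the paper's algorithm collapses to Õ(ε^{−1}) rounds.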

Chen, Sitan 
STOC '19: "Beyond the Low-Degree Algorithm: ..."
Beyond the Low-Degree Algorithm: Mixtures of Subcubes and Their Applications
Sitan Chen and Ankur Moitra (Massachusetts Institute of Technology, USA) We introduce the problem of learning mixtures of k subcubes over {0,1}^{n}, which contains many classic learning theory problems as a special case (and is itself a special case of others). We give a surprising n^{O(log k)}-time learning algorithm based on higher-order multilinear moments. It is not possible to learn the parameters because the same distribution can be represented by quite different models. Instead, we develop a framework for reasoning about how multilinear moments can pinpoint essential features of the mixture, like the number of components. We also give applications of our algorithm to learning decision trees with stochastic transitions (which also capture interesting scenarios where the transitions are deterministic but there are latent variables). Using our algorithm for learning mixtures of subcubes, we can approximate the Bayes optimal classifier within additive error є on k-leaf decision trees with at most s stochastic transitions on any root-to-leaf path in n^{O(s + log k)}·poly(1/є) time. In this stochastic setting, the classic n^{O(log k)}·poly(1/є)-time algorithms of Rivest, Blum, and Ehrenfeucht-Haussler for learning decision trees with zero stochastic transitions break down because they are fundamentally Occam algorithms. The low-degree algorithm of Linial-Mansour-Nisan is able to get a constant-factor approximation to the optimal error (again within an additive є) and runs in time n^{O(s + log(k/є))}. The quasipolynomial dependence on 1/є is inherent to the low-degree approach because the degree needs to grow as the target accuracy decreases, which is undesirable when є is small. 
In contrast, as we will show, mixtures of k subcubes are uniquely determined by their moments of order 2 log k and hence provide a useful abstraction for simultaneously achieving the polynomial dependence on 1/є of the classic Occam algorithms for decision trees and the flexibility of the low-degree algorithm in being able to accommodate stochastic transitions. Using our multilinear moment techniques, we also give the first improved upper and lower bounds since the work of Feldman-O'Donnell-Servedio for the related but harder problem of learning mixtures of binary product distributions. @InProceedings{STOC19p869, author = {Sitan Chen and Ankur Moitra}, title = {Beyond the Low-Degree Algorithm: Mixtures of Subcubes and Their Applications}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {869--880}, doi = {10.1145/3313276.3316375}, year = {2019}, } Publisher's Version 

Chen, Xi 
STOC '19: "Testing Unateness Nearly Optimally ..."
Testing Unateness Nearly Optimally
Xi Chen and Erik Waingarten (Columbia University, USA) We present an Õ(n^{2/3}/є^{2})-query algorithm that tests whether an unknown Boolean function f∶{0,1}^{n}→ {0,1} is unate (i.e., every variable is either nondecreasing or nonincreasing) or є-far from unate. The upper bound is nearly optimal given the Ω(n^{2/3}) lower bound of Chen, Waingarten and Xie (2017). The algorithm builds on a novel use of the binary search procedure and its analysis over long random paths. @InProceedings{STOC19p547, author = {Xi Chen and Erik Waingarten}, title = {Testing Unateness Nearly Optimally}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {547--558}, doi = {10.1145/3313276.3316351}, year = {2019}, } Publisher's Version 

Chen, Yilei 
STOC '19: "Fiat-Shamir: From Practice ..."
Fiat-Shamir: From Practice to Theory
Ran Canetti, Yilei Chen, Justin Holmgren, Alex Lombardi, Guy N. Rothblum, Ron D. Rothblum, and Daniel Wichs (Boston University, USA; Tel Aviv University, Israel; Visa Research, USA; Princeton University, USA; Massachusetts Institute of Technology, USA; Weizmann Institute of Science, Israel; Technion, Israel; Northeastern University, USA) We give new instantiations of the Fiat-Shamir transform using explicit, efficiently computable hash functions. We improve over prior work by reducing the security of these protocols to qualitatively simpler and weaker computational hardness assumptions. As a consequence of our framework, we obtain the following concrete results. 1) There exists a succinct publicly verifiable non-interactive argument system for log-space uniform computations, under the assumption that any one of a broad class of fully homomorphic encryption (FHE) schemes has almost optimal security against polynomial-time adversaries. The class includes all FHE schemes in the literature that are based on the learning with errors (LWE) problem. 2) There exists a non-interactive zero-knowledge argument system for NP in the common reference string model, under either of the following two assumptions: (i) Almost optimal hardness of search-LWE against polynomial-time adversaries, or (ii) The existence of a circular-secure FHE scheme with a standard (polynomial time, negligible advantage) level of security. 3) The classic quadratic residuosity protocol of [Goldwasser, Micali, and Rackoff, SICOMP ’89] is not zero knowledge when repeated in parallel, under any of the hardness assumptions above. @InProceedings{STOC19p1082, author = {Ran Canetti and Yilei Chen and Justin Holmgren and Alex Lombardi and Guy N. Rothblum and Ron D. Rothblum and Daniel Wichs}, title = {Fiat-Shamir: From Practice to Theory}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1082--1090}, doi = {10.1145/3313276.3316380}, year = {2019}, } Publisher's Version 

Chen, Yu 
STOC '19: "Polynomial Pass Lower Bounds ..."
Polynomial Pass Lower Bounds for Graph Streaming Algorithms
Sepehr Assadi, Yu Chen, and Sanjeev Khanna (Princeton University, USA; University of Pennsylvania, USA) We present new lower bounds that show that a polynomial number of passes are necessary for solving some fundamental graph problems in the streaming model of computation. For instance, we show that any streaming algorithm that finds a weighted minimum s-t cut in an n-vertex undirected graph requires n^{2−o(1)} space unless it makes n^{Ω(1)} passes over the stream. To prove our lower bounds, we introduce and analyze a new four-player communication problem that we refer to as the hidden-pointer chasing problem. This is a problem in the spirit of the standard pointer chasing problem, with the key difference that the pointers in this problem are hidden from the players, and finding each one of them requires solving another communication problem, namely the set intersection problem. Our lower bounds for graph problems are then obtained by reductions from the hidden-pointer chasing problem. Our hidden-pointer chasing problem appears flexible enough to find other applications and is therefore interesting in its own right. To showcase this, we further present an interesting application of this problem beyond streaming algorithms. Using a reduction from hidden-pointer chasing, we prove that any algorithm for submodular function minimization needs to make n^{2−o(1)} value queries to the function unless it has a polynomial degree of adaptivity. @InProceedings{STOC19p265, author = {Sepehr Assadi and Yu Chen and Sanjeev Khanna}, title = {Polynomial Pass Lower Bounds for Graph Streaming Algorithms}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {265--276}, doi = {10.1145/3313276.3316361}, year = {2019}, } Publisher's Version 

Choudhuri, Arka Rai 
STOC '19: "Finding a Nash Equilibrium ..."
Finding a Nash Equilibrium Is No Easier Than Breaking Fiat-Shamir
Arka Rai Choudhuri, Pavel Hubáček, Chethan Kamath, Krzysztof Pietrzak, Alon Rosen, and Guy N. Rothblum (Johns Hopkins University, USA; Charles University in Prague, Czechia; IST Austria, Austria; IDC Herzliya, Israel; Weizmann Institute of Science, Israel) The Fiat-Shamir heuristic transforms a public-coin interactive proof into a non-interactive argument, by replacing the verifier with a cryptographic hash function that is applied to the protocol’s transcript. Constructing hash functions for which this transformation is sound is a central and long-standing open question in cryptography. We show that solving the END-OF-METERED-LINE problem is no easier than breaking the soundness of the Fiat-Shamir transformation when applied to the sumcheck protocol. In particular, if the transformed protocol is sound, then any hard problem in #P gives rise to a hard distribution in the class CLS, which is contained in PPAD. Our result opens up the possibility of sampling moderately sized games for which it is hard to find a Nash equilibrium, by reducing the inversion of appropriately chosen one-way functions to #SAT. Our main technical contribution is a stateful incrementally verifiable procedure that, given a SAT instance over n variables, counts the number of satisfying assignments. This is accomplished via an exponential sequence of small steps, each computable in time poly(n). Incremental verifiability means that each intermediate state includes a sumcheck-based proof of its correctness, and the proof can be updated and verified in time poly(n). @InProceedings{STOC19p1103, author = {Arka Rai Choudhuri and Pavel Hubáček and Chethan Kamath and Krzysztof Pietrzak and Alon Rosen and Guy N. Rothblum}, title = {Finding a Nash Equilibrium Is No Easier Than Breaking Fiat-Shamir}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1103--1114}, doi = {10.1145/3313276.3316400}, year = {2019}, } Publisher's Version 

Chuzhoy, Julia 
STOC '19: "A New Algorithm for Decremental ..."
A New Algorithm for Decremental Single-Source Shortest Paths with Applications to Vertex-Capacitated Flow and Cut Problems
Julia Chuzhoy and Sanjeev Khanna (Toyota Technological Institute at Chicago, USA; University of Pennsylvania, USA) We study the vertex-decremental Single-Source Shortest Paths (SSSP) problem: given an undirected graph G=(V,E) with lengths ℓ(e)≥ 1 on its edges that undergoes vertex deletions, and a source vertex s, we need to support (approximate) shortest-path queries in G: given a vertex v, return a path connecting s to v, whose length is at most (1+є) times the length of the shortest such path, where є is a given accuracy parameter. The problem has many applications, for example to flow and cut problems in vertex-capacitated graphs. Decremental SSSP is a fundamental problem in dynamic algorithms that has been studied extensively, especially in the more standard edge-decremental setting, where the input graph G undergoes edge deletions. The classical algorithm of Even and Shiloach supports exact shortest-path queries in O(mn) total update time. A series of recent results have improved this bound to O(m^{1+o(1)} log L), where L is the largest length of any edge. However, these improved results are randomized algorithms that assume an oblivious adversary. To go beyond the oblivious-adversary restriction, recently, Bernstein, and Bernstein and Chechik, designed deterministic algorithms for the problem, with total update time Õ(n^{2} log L), that by definition work against an adaptive adversary. Unfortunately, their algorithms introduce a new limitation, namely, they can only return the approximate length of a shortest path, and not the path itself. Many applications of the decremental SSSP problem, including the ones considered in this paper, crucially require both that the algorithm returns the approximate shortest paths themselves and not just their lengths, and that it works against an adaptive adversary. 
Our main result is a randomized algorithm for vertex-decremental SSSP with total expected update time O(n^{2+o(1)} log L), that responds to each shortest-path query in Õ(n log L) time in expectation, returning a (1+є)-approximate shortest path. The algorithm works against an adaptive adversary. The main technical ingredient of our algorithm is an Õ(|E(G)|+n^{1+o(1)})-time algorithm to compute a core decomposition of a given dense graph G, which allows us to compute short paths between pairs of query vertices in G efficiently. We use our result for vertex-decremental SSSP to obtain (1+є)-approximation algorithms for maximum s-t flow and minimum s-t cut in vertex-capacitated graphs, in expected time n^{2+o(1)}, and an O(log^{4}n)-approximation algorithm for the vertex version of the sparsest cut problem with expected running time n^{2+o(1)}. These results improve upon the previous best known algorithms for these problems in the regime where m = ω(n^{1.5+o(1)}). @InProceedings{STOC19p389, author = {Julia Chuzhoy and Sanjeev Khanna}, title = {A New Algorithm for Decremental Single-Source Shortest Paths with Applications to Vertex-Capacitated Flow and Cut Problems}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {389--400}, doi = {10.1145/3313276.3316320}, year = {2019}, } Publisher's Version 

Coester, Christian 
STOC '19: "The Online 𝑘-Taxi Problem ..."
The Online 𝑘-Taxi Problem
Christian Coester and Elias Koutsoupias (University of Oxford, UK) We consider the online k-taxi problem, a generalization of the k-server problem, in which k taxis serve a sequence of requests in a metric space. A request consists of two points s and t, representing a passenger that wants to be carried by a taxi from s to t. The goal is to serve all requests while minimizing the total distance traveled by all taxis. The problem comes in two flavors, called the easy and the hard k-taxi problem: in the easy k-taxi problem, the cost is defined as the total distance traveled by the taxis; in the hard k-taxi problem, the cost is only the distance of empty runs. The hard k-taxi problem is substantially more difficult than the easy version, with an at least exponential deterministic competitive ratio, Ω(2^{k}), admitting a reduction from the layered graph traversal problem. In contrast, the easy k-taxi problem has exactly the same competitive ratio as the k-server problem. We focus mainly on the hard version. For hierarchically separated trees (HSTs), we present a memoryless randomized algorithm with competitive ratio 2^{k}−1 against adaptive online adversaries, and provide two matching lower bounds: for arbitrary algorithms against adaptive adversaries and for memoryless algorithms against oblivious adversaries. Due to well-known HST embedding techniques, the algorithm implies a randomized O(2^{k} log n)-competitive algorithm for arbitrary n-point metrics. This is the first competitive algorithm for the hard k-taxi problem for general finite metric spaces and general k. For the special case of k=2, we obtain a precise answer of 9 for the competitive ratio in general metrics. With an algorithm based on growing, shrinking and shifting regions, we show that one can achieve a constant competitive ratio also for the hard 3-taxi problem on the line (abstracting the scheduling of three elevators). 
@InProceedings{STOC19p1136, author = {Christian Coester and Elias Koutsoupias}, title = {The Online 𝑘-Taxi Problem}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1136--1147}, doi = {10.1145/3313276.3316370}, year = {2019}, } Publisher's Version 

Cohen, Michael B. 
STOC '19: "Solving Linear Programs in ..."
Solving Linear Programs in the Current Matrix Multiplication Time
Michael B. Cohen, Yin Tat Lee, and Zhao Song (Massachusetts Institute of Technology, USA; University of Washington, USA; Microsoft Research, USA; University of Texas at Austin, USA) This paper shows how to solve linear programs of the form min_{Ax=b,x≥0} c^{⊤}x with n variables in time O^{*}((n^{ω}+n^{2.5−α/2}+n^{2+1/6}) log(n/δ)), where ω is the exponent of matrix multiplication, α is the dual exponent of matrix multiplication, and δ is the relative accuracy. For the current values ω∼2.37 and α∼0.31, our algorithm takes O^{*}(n^{ω} log(n/δ)) time. When ω = 2, our algorithm takes O^{*}(n^{2+1/6} log(n/δ)) time. Our algorithm utilizes several new concepts that we believe may be of independent interest: (1) We define a stochastic central path method. (2) We show how to maintain a projection matrix √W A^{⊤}(AWA^{⊤})^{−1}A√W in subquadratic time under ℓ_{2} multiplicative changes in the diagonal matrix W. @InProceedings{STOC19p938, author = {Michael B. Cohen and Yin Tat Lee and Zhao Song}, title = {Solving Linear Programs in the Current Matrix Multiplication Time}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {938--942}, doi = {10.1145/3313276.3316303}, year = {2019}, } Publisher's Version 

Cohen-Addad, Vincent 
STOC '19: "Oblivious Dimension Reduction ..."
Oblivious Dimension Reduction for k-Means: Beyond Subspaces and the Johnson-Lindenstrauss Lemma
Luca Becchetti, Marc Bury, Vincent Cohen-Addad, Fabrizio Grandoni, and Chris Schwiegelshohn (Sapienza University of Rome, Italy; Zalando, Switzerland; CNRS, France; IDSIA, Switzerland) We show that for n points in d-dimensional Euclidean space, a data-oblivious random projection of the columns onto m ∈ O((log k + log log n)ε^{−6} log(1/ε)) dimensions is sufficient to approximate the cost of all k-means clusterings up to a multiplicative (1±ε) factor. The previous-best upper bounds on m are O(log n · ε^{−2}), given by a direct application of the Johnson-Lindenstrauss Lemma, and O(k ε^{−2}), given by [Cohen et al., STOC ’15]. @InProceedings{STOC19p1039, author = {Luca Becchetti and Marc Bury and Vincent Cohen-Addad and Fabrizio Grandoni and Chris Schwiegelshohn}, title = {Oblivious Dimension Reduction for <i>k</i>-Means: Beyond Subspaces and the Johnson-Lindenstrauss Lemma}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1039--1050}, doi = {10.1145/3313276.3316318}, year = {2019}, } Publisher's Version 
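The projection in question is data-oblivious: the random matrix is drawn without looking at the points, and only its target dimension m matters. A minimal sketch of such a projection with plain Gaussian entries (the paper's contribution is the bound on how small m can be while preserving all k-means costs, not this routine itself):

```python
import random

def random_projection(points, m, seed=0):
    # Data-oblivious projection: multiply each point by a random m x d
    # Gaussian matrix scaled by 1/sqrt(m), drawn independently of the data.
    rng = random.Random(seed)
    d = len(points[0])
    G = [[rng.gauss(0.0, 1.0) for _ in range(d)] for _ in range(m)]
    scale = 1.0 / m ** 0.5
    return [[scale * sum(g[j] * p[j] for j in range(d)) for g in G]
            for p in points]
```

Because the matrix is independent of the data, the same projection can be fixed before the point set is seen, which is what "oblivious" buys over data-dependent sketches.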

Conte, Alessio 
STOC '19: "New Polynomial Delay Bounds ..."
New Polynomial Delay Bounds for Maximal Subgraph Enumeration by Proximity Search
Alessio Conte and Takeaki Uno (National Institute of Informatics, Japan; University of Pisa, Italy) In this paper we propose polynomial delay algorithms for several maximal subgraph listing problems, by means of a seemingly novel technique that we call proximity search. Our result involves modeling the space of solutions as an implicit directed graph called the “solution graph”, a method common to other enumeration paradigms such as reverse search. Such methods, however, can become inefficient because this graph may have vertices of high (potentially exponential) degree. The novelty of our algorithm consists in providing a technique for generating better solution graphs, reducing the out-degree of their vertices with respect to existing approaches, and proving that the graph remains strongly connected. Applying this technique, we obtain polynomial delay listing algorithms for several problems for which output-sensitive results were, to the best of our knowledge, not known. These include Maximal Bipartite Subgraphs, Maximal k-Degenerate Subgraphs (for bounded k), Maximal Induced Chordal Subgraphs, and Maximal Induced Trees. We present these algorithms, and give insight on how this general technique can be applied to other problems. @InProceedings{STOC19p1179, author = {Alessio Conte and Takeaki Uno}, title = {New Polynomial Delay Bounds for Maximal Subgraph Enumeration by Proximity Search}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1179--1190}, doi = {10.1145/3313276.3316402}, year = {2019}, }
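The solution-graph idea can be sketched concretely: start from one maximal solution, and from each solution generate neighbors by forcing in an element, dropping its conflicts, and greedily re-completing. The demo below uses maximal independent sets as a stand-in problem (not one of the paper's), and the neighbor rule is a generic illustration, not the paper's tailored constructions.

```python
def enumerate_maximal(adj):
    """Traverse an implicit 'solution graph' whose nodes are maximal
    solutions, in the spirit of proximity search / reverse search.
    adj[v] is the set of neighbors of vertex v."""
    n = len(adj)

    def complete(s):
        # Greedily extend s to a maximal independent set.
        s = set(s)
        for v in range(n):
            if v not in s and not (adj[v] & s):
                s.add(v)
        return frozenset(s)

    start = complete(set())
    seen, stack = {start}, [start]
    while stack:
        sol = stack.pop()
        for v in range(n):
            if v in sol:
                continue
            # Neighbor rule: force v in, drop its conflicts, re-complete.
            nxt = complete((sol - adj[v]) | {v})
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

# Path 0-1-2-3: the maximal independent sets are {0,2}, {0,3}, {1,3}.
path_adj = [{1}, {0, 2}, {1, 3}, {2}]
solutions = enumerate_maximal(path_adj)
```

The paper's contribution is designing such neighbor rules so that each solution has only polynomially many neighbors while the solution graph stays strongly connected, which yields polynomial delay.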

Crosson, Elizabeth 
STOC '19: "Good Approximate Quantum LDPC ..."
Good Approximate Quantum LDPC Codes from Spacetime Circuit Hamiltonians
Thomas C. Bohdanowicz, Elizabeth Crosson, Chinmay Nirkhe, and Henry Yuen (California Institute of Technology, USA; University of New Mexico, USA; University of California at Berkeley, USA; University of Toronto, Canada) We study approximate quantum low-density parity-check (QLDPC) codes, which are approximate quantum error-correcting codes specified as the ground space of a frustration-free local Hamiltonian, whose terms do not necessarily commute. Such codes generalize stabilizer QLDPC codes, which are exact quantum error-correcting codes with sparse, low-weight stabilizer generators (i.e. each stabilizer generator acts on a few qubits, and each qubit participates in a few stabilizer generators). Our investigation is motivated by an important question in Hamiltonian complexity and quantum coding theory: do stabilizer QLDPC codes with constant rate, linear distance, and constant-weight stabilizers exist? We show that obtaining such optimal scaling of parameters (modulo polylogarithmic corrections) is possible if we go beyond stabilizer codes: we prove the existence of a family of [[N,k,d,ε]] approximate QLDPC codes that encode k = Ω(N) logical qubits into N physical qubits with distance d = Ω(N) and approximation infidelity ε = 1/(N). The code space is stabilized by a set of 10-local non-commuting projectors, with each physical qubit only participating in N projectors. We prove the existence of an efficient encoding map and show that the spectral gap of the code Hamiltonian scales as Ω(N^{−3.09}). We also show that arbitrary Pauli errors can be locally detected by circuits of polylogarithmic depth. Our family of approximate QLDPC codes is based on applying a recent connection between circuit Hamiltonians and approximate quantum codes (Nirkhe et al., ICALP 2018) to a result showing that random Clifford circuits of polylogarithmic depth yield asymptotically good quantum codes (Brown and Fawzi, ISIT 2013). 
Then, in order to obtain a code with sparse checks and strong detection of local errors, we use a spacetime circuit-to-Hamiltonian construction in order to take advantage of the parallelism of the Brown-Fawzi circuits. Because of this, we call our codes spacetime codes. The analysis of the spectral gap of the code Hamiltonian is the main technical contribution of this work. We show that for any depth-D quantum circuit on n qubits there is an associated spacetime circuit-to-Hamiltonian construction with spectral gap Ω(n^{−3.09} D^{−2} log^{−6}(n)). To lower bound this gap we use a Markov chain decomposition method to divide the state space of partially completed circuit configurations into overlapping subsets corresponding to uniform circuit segments of depth log n, which are based on bitonic sorting circuits. We use the combinatorial properties of these circuit configurations to show rapid mixing between the subsets, and within the subsets we develop a novel isomorphism between the local update Markov chain on bitonic circuit configurations and the edge-flip Markov chain on equal-area dyadic tilings, whose mixing time was recently shown to be polynomial (Cannon, Levin, and Stauffer, RANDOM 2017). Previous lower bounds on the spectral gap of spacetime circuit Hamiltonians have all been based on a connection to exactly solvable quantum spin chains and applied only to 1+1-dimensional nearest-neighbor quantum circuits with at least linear depth. @InProceedings{STOC19p481, author = {Thomas C. Bohdanowicz and Elizabeth Crosson and Chinmay Nirkhe and Henry Yuen}, title = {Good Approximate Quantum LDPC Codes from Spacetime Circuit Hamiltonians}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {481--490}, doi = {10.1145/3313276.3316384}, year = {2019}, }

Czerwiński, Wojciech 
STOC '19: "The Reachability Problem for ..."
The Reachability Problem for Petri Nets Is Not Elementary
Wojciech Czerwiński, Sławomir Lasota, Ranko Lazić, Jérôme Leroux, and Filip Mazowiecki (University of Warsaw, Poland; University of Warwick, UK; CNRS, France; University of Bordeaux, France) Petri nets, also known as vector addition systems, are a long established model of concurrency with extensive applications in modelling and analysis of hardware, software and database systems, as well as chemical, biological and business processes. The central algorithmic problem for Petri nets is reachability: whether from the given initial configuration there exists a sequence of valid execution steps that reaches the given final configuration. The complexity of the problem has remained unsettled since the 1960s, and it is one of the most prominent open questions in the theory of verification. Decidability was proved by Mayr in his seminal STOC 1981 work, and the currently best published upper bound is the non-primitive-recursive (Ackermannian) bound of Leroux and Schmitz from LICS 2019. We establish a non-elementary lower bound, i.e. that the reachability problem needs a tower of exponentials of time and space. Until this work, the best lower bound had been exponential space, due to Lipton in 1976. The new lower bound is a major breakthrough for several reasons. Firstly, it shows that the reachability problem is much harder than the coverability (i.e., state reachability) problem, which is also ubiquitous but has been known to be complete for exponential space since the late 1970s. Secondly, it implies that a plethora of problems from formal languages, logic, concurrent systems, process calculi and other areas, that are known to admit reductions from the Petri nets reachability problem, are also not elementary. Thirdly, it makes obsolete the currently best lower bounds for the reachability problems for two key extensions of Petri nets: with branching and with a pushdown stack. 
At the heart of our proof is a novel gadget, the so-called factorial amplifier, which, assuming availability of counters that are zero-testable and bounded by k, is guaranteed to produce arbitrarily large pairs of values whose ratio is exactly the factorial of k. We also develop a novel construction that uses arbitrarily large pairs of values with ratio R to provide zero-testable counters that are bounded by R. Repeatedly composing the factorial amplifier with itself by means of this construction then enables us to compute, in linear time, Petri nets that simulate Minsky machines whose counters are bounded by a tower of exponentials, which yields the non-elementary lower bound. By refining this scheme further, we in fact establish hardness for h-exponential space already for Petri nets with h + 13 counters. @InProceedings{STOC19p24, author = {Wojciech Czerwiński and Sławomir Lasota and Ranko Lazić and Jérôme Leroux and Filip Mazowiecki}, title = {The Reachability Problem for Petri Nets Is Not Elementary}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {24--33}, doi = {10.1145/3313276.3316369}, year = {2019}, }
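A vector addition system and its reachability question can be made concrete in a few lines. The sketch below explores configurations by breadth-first search under an explicit bound on counter values; the paper's result is precisely that no elementary bound can replace this brute force in general. The example system and bound are ours.

```python
from collections import deque

def vas_reachable(transitions, start, target, bound):
    """Breadth-first search of a vector addition system's configuration
    space. A transition t fires from conf if conf + t stays componentwise
    >= 0. The explicit `bound` keeps the search finite; the paper shows no
    elementary bound suffices in general."""
    start, target = tuple(start), tuple(target)
    seen, queue = {start}, deque([start])
    while queue:
        conf = queue.popleft()
        if conf == target:
            return True
        for t in transitions:
            nxt = tuple(c + d for c, d in zip(conf, t))
            if all(0 <= c <= bound for c in nxt) and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Two counters; the single transition moves a token from counter 0 to 1.
ts = [(-1, 1)]
r1 = vas_reachable(ts, (3, 0), (0, 3), bound=5)   # reachable
r2 = vas_reachable(ts, (0, 3), (3, 0), bound=5)   # not reachable
```

Note the asymmetry: tokens can only flow one way here, so the reverse instance is unreachable, which the search correctly reports.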

Dadush, Daniel 
STOC '19: "On Approximating the Covering ..."
On Approximating the Covering Radius and Finding Dense Lattice Subspaces
Daniel Dadush (CWI, Netherlands) In this work, we give a novel algorithm for computing dense lattice subspaces, a conjecturally tight characterization of the lattice covering radius, and provide a bound on the slicing constant of lattice Voronoi cells. Our work is motivated by the pursuit of faster algorithms for integer programming, for which we give a conditional speedup based on the recent resolution of the ℓ_{2} Kannan-Lovász conjecture. Through these results, we hope to motivate further study of the interplay between the recently developed reverse Minkowski theory, lattice algorithms and convex geometry. On the algorithmic side, our main contribution is a 2^{O(n)}-time algorithm for computing an O(C_{η}(n))-approximate sublattice of minimum normalized determinant on any n-dimensional lattice, where C_{η}(n) = O(log n) is the reverse Minkowski constant in dimension n. Our method for finding dense lattice subspaces is surprisingly simple: we iteratively descend to a random codimension-1 subspace chosen to be the orthogonal space to a discrete Gaussian sample from the dual lattice. Applying this algorithm within a “filtration reduction” scheme, we further show how to compute an O(C_{η}(n))-approximate canonical filtration of any lattice, which corresponds to a canonical way of decomposing a lattice into dense blocks. As a primary application, we get the first 2^{O(n)}-time algorithm for computing a sparse lattice projection whose “volume radius” provides a lower bound on the lattice covering radius that is tight within an O(log^{2.5} n) factor. This provides an efficient algorithmic version of the ℓ_{2} Kannan-Lovász conjecture, which was recently resolved by Regev and Stephens-Davidowitz (STOC 2017). On the structural side, we prove a new lower bound on the covering radius which combines volumetric lower bounds across a chain of lattice projections. 
Assuming Bourgain’s slicing conjecture restricted to Voronoi cells of stable lattices, our lower bound implies (somewhat surprisingly) that the problem of approximating the lattice covering radius to within a constant factor is in coNP. Complementing this result, we show that the slicing constant of any n-dimensional Voronoi cell is bounded by O(C_{KL,2}(n)) = O(log^{1.5} n), the ℓ_{2} Kannan-Lovász constant, which complements the O(log n) bound of Regev and Stephens-Davidowitz for stable Voronoi cells. @InProceedings{STOC19p1021, author = {Daniel Dadush}, title = {On Approximating the Covering Radius and Finding Dense Lattice Subspaces}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1021--1026}, doi = {10.1145/3313276.3316397}, year = {2019}, }

Daga, Mohit 
STOC '19: "Distributed Edge Connectivity ..."
Distributed Edge Connectivity in Sublinear Time
Mohit Daga, Monika Henzinger, Danupon Nanongkai, and Thatchaphol Saranurak (KTH, Sweden; University of Vienna, Austria; Toyota Technological Institute at Chicago, USA) We present the first sublinear-time algorithm that computes the edge connectivity λ of a distributed message-passing network (the CONGEST model) exactly, as long as the network contains no parallel edges. Our algorithm takes Õ(n^{1−1/353}D^{1/353}+n^{1−1/706}) time to compute λ and a cut of cardinality λ with high probability, where n and D are the number of nodes and the diameter of the network, respectively, and Õ hides polylogarithmic factors. This running time is sublinear in n (i.e. Õ(n^{1−є})) whenever D is. Previous sublinear-time distributed algorithms can solve this problem either (i) exactly only when λ=O(n^{1/8−є}) [Thurimella, PODC’95; Pritchard and Thurimella, ACM Trans. Algorithms’11; Nanongkai and Su, DISC’14] or (ii) approximately [Ghaffari and Kuhn, DISC’13; Nanongkai and Su, DISC’14]. To achieve this we develop and combine several new techniques. First, we design the first distributed algorithm that can compute a k-edge connectivity certificate for any k=O(n^{1−є}) in time Õ(√nk+D). The previous sublinear-time algorithm can do so only when k=o(√n) [Thurimella, PODC’95]. In fact, our algorithm can be turned into the first parallel algorithm with polylogarithmic depth and near-linear work. Previous near-linear-work algorithms are essentially sequential, and previous polylogarithmic-depth algorithms require Ω(mk) work in the worst case (e.g. [Karger and Motwani, STOC’93]). 
Second, we show that by combining the recent distributed expander decomposition technique of [Chang, Pettie, and Zhang, SODA’19] with techniques from the sequential deterministic edge connectivity algorithm of [Kawarabayashi and Thorup, STOC’15], we can decompose the network into a sublinear number of clusters with small average diameter and without any min-cut separating a cluster (except the “trivial” ones). This leads to a simplification of the Kawarabayashi-Thorup framework (except that we are randomized while they are deterministic), which might make the framework more useful in other models of computation. Finally, by extending the tree packing technique from [Karger, STOC’96], we can find the minimum cut in time proportional to the number of components. As a byproduct of this technique, we obtain an Õ(n)-time algorithm for computing exact minimum cut for weighted graphs. @InProceedings{STOC19p343, author = {Mohit Daga and Monika Henzinger and Danupon Nanongkai and Thatchaphol Saranurak}, title = {Distributed Edge Connectivity in Sublinear Time}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {343--354}, doi = {10.1145/3313276.3316346}, year = {2019}, }
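For reference, the quantity being computed is simple to state: λ is the fewest edges whose removal disconnects the graph. A brute-force sequential sketch (exponential time, tiny graphs only, all names ours) makes the definition concrete; the paper's whole point is computing this distributedly in sublinear time.

```python
from itertools import combinations

def connected(n, edges):
    """DFS connectivity check on vertices 0..n-1."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, stack = {0}, [0]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == n

def edge_connectivity(n, edges):
    """Smallest number of edges whose removal disconnects the graph.
    Brute force over edge subsets: definitional, not the paper's algorithm."""
    for k in range(len(edges) + 1):
        for cut in combinations(edges, k):
            if not connected(n, [e for e in edges if e not in cut]):
                return k
    return len(edges)

k4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
lam = edge_connectivity(4, k4)   # K4 is 3-edge-connected
```

Removing the three edges incident to any one vertex of K4 isolates it, and no two edge removals disconnect it, so λ = 3.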

Dalirrooyfard, Mina 
STOC '19: "Graph Pattern Detection: Hardness ..."
Graph Pattern Detection: Hardness for All Induced Patterns and Faster Non-induced Cycles
Mina Dalirrooyfard, Thuy Duong Vuong, and Virginia Vassilevska Williams (Massachusetts Institute of Technology, USA) We consider the pattern detection problem in graphs: given a constant-size pattern graph H and a host graph G, determine whether G contains a subgraph isomorphic to H. We present the following new improved upper and lower bounds: We prove that if a pattern H contains a k-clique subgraph, then detecting whether an n-node host graph contains a not necessarily induced copy of H requires at least the time for detecting whether an n-node graph contains a k-clique. The previous result of this nature required that H contains a k-clique which is disjoint from all other k-cliques of H. We show that if the famous Hadwiger conjecture from graph theory is true, then detecting whether an n-node host graph contains a not necessarily induced copy of a pattern with chromatic number t requires at least the time for detecting whether an n-node graph contains a t-clique. This implies that: (a) under Hadwiger’s conjecture, for every k-node pattern H, finding an induced copy of H requires at least the time of √k-clique detection, and size ω(n^{√k/4}) for any constant-depth circuit, and (b) unconditionally, detecting an induced copy of a random G(k,p) pattern w.h.p. requires at least the time of Θ(k/log k)-clique detection, and hence also at least size n^{Ω(k/log k)} for circuits of constant depth. We show that for every k, there exists a k-node pattern that contains a (k−1)-clique and that can be detected as an induced subgraph in n-node graphs in the best known running time for (k−1)-clique detection. Previously such a result was only known for infinitely many k. Finally, we consider the case when the pattern is a directed cycle on k nodes, and we would like to detect whether a directed m-edge graph G contains a k-Cycle as a not necessarily induced subgraph. 
We resolve a 14-year-old conjecture of [Yuster and Zwick, SODA’04] on the complexity of k-Cycle detection by giving a tight analysis of their k-Cycle algorithm. Our analysis improves the best bounds for k-Cycle detection in directed graphs for all k>5. @InProceedings{STOC19p1167, author = {Mina Dalirrooyfard and Thuy Duong Vuong and Virginia Vassilevska Williams}, title = {Graph Pattern Detection: Hardness for All Induced Patterns and Faster Non-induced Cycles}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1167--1178}, doi = {10.1145/3313276.3316329}, year = {2019}, }
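The k-Cycle problem itself is easy to state and brute-force. The sketch below detects a simple directed cycle of length exactly k by DFS from each vertex; it is exponential in k and is only a specification of the problem, not the Yuster-Zwick algorithm the paper analyzes. The example graph is ours.

```python
def has_k_cycle(adj, k):
    """Detect a simple directed cycle of length exactly k by depth-first
    search from each start vertex. adj[v] is the set of out-neighbors."""
    n = len(adj)

    def dfs(start, v, depth, used):
        if depth == k:
            return start in adj[v]   # close the cycle back to start
        return any(w not in used and dfs(start, w, depth + 1, used | {w})
                   for w in adj[v])

    return any(dfs(s, s, 1, {s}) for s in range(n))

# Directed 4-cycle 0->1->2->3->0 plus the chord 2->0.
g = [{1}, {2}, {3, 0}, {0}]
found4 = has_k_cycle(g, 4)   # 0->1->2->3->0
found3 = has_k_cycle(g, 3)   # 0->1->2->0
found2 = has_k_cycle(g, 2)   # no 2-cycle exists
```

`depth` counts vertices on the current path, so the final edge check closes a cycle of length exactly k.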

Daskalakis, Constantinos 
STOC '19: "Regression from Dependent ..."
Regression from Dependent Observations
Constantinos Daskalakis, Nishanth Dikkala, and Ioannis Panageas (Massachusetts Institute of Technology, USA; Singapore University of Technology and Design, Singapore) The standard linear and logistic regression models assume that the response variables are independent but share the same linear relationship to their corresponding vectors of covariates. The assumption that the response variables are independent is, however, too strong. In many applications, these responses are collected on nodes of a network, or some spatial or temporal domain, and are dependent. Examples abound in financial and meteorological applications, and dependencies naturally arise in social networks through peer effects. Regression with dependent responses has thus received a lot of attention in the Statistics and Economics literature, but there are no strong consistency results unless multiple independent samples of the vectors of dependent responses can be collected from these models. We present computationally and statistically efficient methods for linear and logistic regression models when the response variables are dependent on a network. Given one sample from a networked linear or logistic regression model and under mild assumptions, we prove strong consistency results for recovering the vector of coefficients and the strength of the dependencies, matching the rates of standard regression under independent observations. We use projected gradient descent on the negative log-likelihood, or negative log-pseudolikelihood, and establish their strong convexity and consistency using concentration of measure for dependent random variables. @InProceedings{STOC19p881, author = {Constantinos Daskalakis and Nishanth Dikkala and Ioannis Panageas}, title = {Regression from Dependent Observations}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {881--889}, doi = {10.1145/3313276.3316362}, year = {2019}, }
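The workhorse here, projected gradient descent, is easy to sketch in isolation. The toy below minimizes a plain least-squares loss over a norm ball; the paper instead runs the same scheme on a (pseudo-)log-likelihood with dependent responses, so everything concrete in this snippet (the loss, data, step size) is an illustrative stand-in.

```python
import math

def pgd_least_squares(X, y, radius, steps=500, lr=0.05):
    """Projected gradient descent for min ||Xw - y||^2 / n subject to
    ||w||_2 <= radius: gradient step, then project back onto the ball."""
    n, d = len(X), len(X[0])
    w = [0.0] * d
    for _ in range(steps):
        # Gradient of the averaged squared loss: (2/n) X^T (Xw - y).
        resid = [sum(xi * wi for xi, wi in zip(row, w)) - yi
                 for row, yi in zip(X, y)]
        grad = [2.0 / n * sum(X[i][j] * resid[i] for i in range(n))
                for j in range(d)]
        w = [wi - lr * g for wi, g in zip(w, grad)]
        # Projection step: pull w back onto the feasible ball.
        norm = math.sqrt(sum(wi * wi for wi in w))
        if norm > radius:
            w = [wi * radius / norm for wi in w]
    return w

X = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
y = [1.0, 2.0, 3.0]          # consistent with w* = (1, 2)
w = pgd_least_squares(X, y, radius=10.0)
```

Because the loss is strongly convex here, the iterates contract geometrically toward w* = (1, 2); the paper's technical work is establishing such strong convexity for the dependent-response likelihoods.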

Diakonikolas, Ilias 
STOC '19: "Degree-𝑑 Chow Parameters ..."
Degree-𝑑 Chow Parameters Robustly Determine Degree-𝑑 PTFs (and Algorithmic Applications)
Ilias Diakonikolas and Daniel M. Kane (University of Southern California, USA; University of California at San Diego, USA) The degree-d Chow parameters of a Boolean function are its Fourier coefficients of degree at most d. It is well-known that degree-d Chow parameters uniquely characterize degree-d polynomial threshold functions (PTFs) within the space of all bounded functions. In this paper, we prove a robust version of this theorem: for any Boolean degree-d PTF f and any bounded function g, if the degree-d Chow parameters of f are close to the degree-d Chow parameters of g in ℓ_{2}-norm, then f is close to g in ℓ_{1}-distance. Notably, our bound relating the two distances is independent of the dimension. That is, we show that Boolean degree-d PTFs are robustly identifiable from their degree-d Chow parameters. No nontrivial bound was previously known for d > 1. Our robust identifiability result gives the following algorithmic applications: First, we show that Boolean degree-d PTFs can be efficiently approximately reconstructed from approximations to their degree-d Chow parameters. This immediately implies that degree-d PTFs are efficiently learnable in the uniform distribution d-RFA model. As a byproduct of our approach, we also obtain the first low integer-weight approximations of degree-d PTFs, for d > 1. As our second application, our robust identifiability result gives the first efficient algorithm, with dimension-independent error guarantees, for malicious learning of Boolean degree-d PTFs under the uniform distribution. The proof of our robust identifiability result involves several new technical ingredients, including the following structural result for degree-d multivariate polynomials with very poor anti-concentration: if p is a degree-d polynomial where p(x) is very close to 0 on a large number of points in { ± 1 }^{n}, then there exists a degree-d hypersurface that passes exactly through almost all of these points. 
We leverage this structural result to show that if the degree-d Chow distance between f and g is small, then we can find many degree-d polynomials that vanish on their disagreement region, and in particular enough of them to force the ℓ_{1}-distance between f and g to be small as well. To implement this proof strategy, we require additional technical ideas. In particular, in the d=2 case we show that for any large vector space of degree-2 polynomials with a large number of common zeroes, there exists a linear function that vanishes on almost all of these zeroes. The degree-d generalization of this statement is significantly more complex, and can be viewed as an effective version of Hilbert’s Basis Theorem for our setting. @InProceedings{STOC19p804, author = {Ilias Diakonikolas and Daniel M. Kane}, title = {Degree-𝑑 Chow Parameters Robustly Determine Degree-𝑑 PTFs (and Algorithmic Applications)}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {804--815}, doi = {10.1145/3313276.3316301}, year = {2019}, }
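Chow parameters are concrete enough to compute by exhaustive enumeration for small n. The sketch below evaluates the degree-1 Chow parameters (the coefficients the d=1 case of the theorem is about) for the 3-variable majority function; the helper names are ours.

```python
from itertools import product

def chow_parameters(f, n):
    """Degree-1 Chow parameters of f : {-1,1}^n -> {-1,1}: the constant
    Fourier coefficient E[f(x)] and hat f({i}) = E[f(x) x_i]."""
    pts = list(product([-1, 1], repeat=n))
    const = sum(f(x) for x in pts) / len(pts)
    coords = [sum(f(x) * x[i] for x in pts) / len(pts) for i in range(n)]
    return const, coords

maj3 = lambda x: 1 if sum(x) > 0 else -1
c0, c = chow_parameters(maj3, 3)   # balanced, symmetric in its inputs
```

Majority is balanced (c0 = 0) and symmetric, so all three degree-1 coefficients coincide (each equals 1/2); the classical d=1 theorem says these four numbers pin down the function among all bounded functions.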

Dikkala, Nishanth 
STOC '19: "Regression from Dependent ..."
Regression from Dependent Observations
Constantinos Daskalakis, Nishanth Dikkala, and Ioannis Panageas (Massachusetts Institute of Technology, USA; Singapore University of Technology and Design, Singapore) The standard linear and logistic regression models assume that the response variables are independent, but share the same linear relationship to their corresponding vectors of covariates. The assumption that the response variables are independent is, however, too strong. In many applications, these responses are collected on nodes of a network, or some spatial or temporal domain, and are dependent. Examples abound in financial and meteorological applications, and dependencies naturally arise in social networks through peer effects. Regression with dependent responses has thus received a lot of attention in the Statistics and Economics literature, but there are no strong consistency results unless multiple independent samples of the vectors of dependent responses can be collected from these models. We present computationally and statistically efficient methods for linear and logistic regression models when the response variables are dependent on a network. Given one sample from a networked linear or logistic regression model and under mild assumptions, we prove strong consistency results for recovering the vector of coefficients and the strength of the dependencies, recovering the rates of standard regression under independent observations. We use projected gradient descent on the negative loglikelihood, or negative logpseudolikelihood, and establish their strong convexity and consistency using concentration of measure for dependent random variables. @InProceedings{STOC19p881, author = {Constantinos Daskalakis and Nishanth Dikkala and Ioannis Panageas}, title = {Regression from Dependent Observations}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {881889}, doi = {10.1145/3313276.3316362}, year = {2019}, } Publisher's Version 

Ding, Jian 
STOC '19: "Capacity Lower Bound for the ..."
Capacity Lower Bound for the Ising Perceptron
Jian Ding and Nike Sun (University of Pennsylvania, USA; Massachusetts Institute of Technology, USA) We consider the Ising perceptron with Gaussian disorder, which is equivalent to the discrete cube {−1,+1}^{N} intersected by M random halfspaces. The perceptron’s capacity is the largest integer M_{N} for which the intersection is nonempty. It is conjectured by Krauth and Mézard (1989) that the (random) ratio M_{N}/N converges in probability to an explicit constant α_{⋆} ≐ 0.83. Kim and Roche (1998) proved the existence of a positive constant γ such that γ ≤ M_{N}/N ≤ 1−γ with high probability; see also Talagrand (1999). In this paper we show that the Krauth–Mézard constant α_{⋆} is a lower bound with positive probability, under the condition that an explicit univariate function S(λ) is maximized at λ=0. Our proof is an application of the second moment method to a certain slice of perceptron configurations, as selected by the so-called TAP (Thouless, Anderson, and Palmer, 1977) or AMP (approximate message passing) iteration, whose scaling limit has been characterized by Bayati and Montanari (2011) and Bolthausen (2012). For verifying the condition on S(λ) we outline one approach, which is implemented in the current version using (non-rigorous) numerical integration packages. In a future version of this paper we intend to complete the verification by implementing a rigorous numerical method. @InProceedings{STOC19p816, author = {Jian Ding and Nike Sun}, title = {Capacity Lower Bound for the Ising Perceptron}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {816--827}, doi = {10.1145/3313276.3316383}, year = {2019}, }
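The model itself is elementary to simulate at toy scale: intersect the cube {−1,+1}^N with random Gaussian halfspaces and watch the surviving set shrink. The exhaustive count below (names and parameters ours) is only a picture of the object, nowhere near the second-moment machinery of the paper.

```python
import random
from itertools import product

def surviving_count(N, halfspaces):
    """Count points x in {-1,+1}^N with g . x >= 0 for every constraint
    vector g. Exhaustive over all 2^N points, so N must stay tiny."""
    count = 0
    for x in product([-1, 1], repeat=N):
        if all(sum(gi * xi for gi, xi in zip(g, x)) >= 0 for g in halfspaces):
            count += 1
    return count

rng = random.Random(0)
N = 10
gs = [[rng.gauss(0, 1) for _ in range(N)] for _ in range(8)]
# Adding constraints can only shrink the surviving set.
counts = [surviving_count(N, gs[:m]) for m in range(0, 9)]
```

The capacity M_N is the last M before the count hits zero; the conjecture concerns the in-probability limit of M_N/N as N grows, far beyond what this toy can probe.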

Dobzinski, Shahar 
STOC '19: "The Communication Complexity ..."
The Communication Complexity of Local Search
Yakov Babichenko, Shahar Dobzinski, and Noam Nisan (Technion, Israel; Weizmann Institute of Science, Israel; Hebrew University of Jerusalem, Israel) We study a communication variant of local search. There is some fixed, commonly known graph G. Alice holds f_{A} and Bob holds f_{B}; both are functions that specify a value for each vertex. The goal is to find a local maximum of f_{A}+f_{B} with respect to G, i.e., a vertex v for which (f_{A}+f_{B})(v) ≥ (f_{A}+f_{B})(u) for each neighbor u of v. Our main result is that finding a local maximum requires polynomial (in the number of vertices) bits of communication. The result holds for the following families of graphs: three-dimensional grids, hypercubes, odd graphs, and degree-4 graphs. Moreover, we prove an optimal communication bound of Ω(√N) for the hypercube, and for a constant-dimension grid, where N is the number of vertices in the graph. We provide applications of our main result in two domains, exact potential games and combinatorial auctions. Each one of the results demonstrates an exponential separation between the nondeterministic communication complexity and the randomized communication complexity of a total search problem. First, we show that finding a pure Nash equilibrium in 2-player N-action exact potential games requires poly(N) communication. We also show that finding a pure Nash equilibrium in n-player 2-action exact potential games requires exp(n) communication. The second domain that we consider is combinatorial auctions, in which we prove that finding a local maximum in combinatorial auctions requires exponential (in the number of items) communication even when the valuations are submodular. @InProceedings{STOC19p650, author = {Yakov Babichenko and Shahar Dobzinski and Noam Nisan}, title = {The Communication Complexity of Local Search}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {650--661}, doi = {10.1145/3313276.3316354}, year = {2019}, }
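The search problem being lower-bounded is just steepest ascent on f_A + f_B over a fixed graph. A centralized sketch (names and the tiny example ours) shows what the two parties are jointly computing; the paper's point is that splitting f_A and f_B between them forces polynomial communication.

```python
def local_max(adjacency, f_a, f_b, start=0):
    """Steepest-ascent local search for a local maximum of f_A + f_B.
    Centralized: a distributed version would pay communication for every
    value comparison across the Alice/Bob split."""
    f = [a + b for a, b in zip(f_a, f_b)]
    v = start
    while True:
        better = [u for u in adjacency[v] if f[u] > f[v]]
        if not better:
            return v          # no improving neighbor: local maximum
        v = max(better, key=lambda u: f[u])

# 4-cycle 0-1-2-3-0 with f_A + f_B = [1, 2, 5, 1].
adj = [[1, 3], [0, 2], [1, 3], [0, 2]]
fa = [1, 0, 2, 0]
fb = [0, 2, 3, 1]
v = local_max(adj, fa, fb)
```

Each ascent step improves the potential f_A + f_B, so the walk terminates; the connection to exact potential games in the abstract is that pure Nash equilibria are exactly such local maxima of a potential.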

Dudek, Bartłomiej 
STOC '19: "Computing Quartet Distance ..."
Computing Quartet Distance Is Equivalent to Counting 4-Cycles
Bartłomiej Dudek and Paweł Gawrychowski (University of Wrocław, Poland) The quartet distance is a measure of similarity used to compare two unrooted phylogenetic trees on the same set of n leaves, defined as the number of subsets of four leaves related by a different topology in both trees. After a series of previous results, Brodal et al. [SODA 2013] presented an algorithm that computes this number in O(nd log n) time, where d is the maximum degree of a node. For the related triplet distance between rooted phylogenetic trees, the same authors were able to design an O(n log n) time algorithm, that is, with running time independent of d. This raises the question of achieving such complexity for computing the quartet distance, or at least improving the dependency on d. Our main contribution is a two-way reduction establishing that the complexity of computing the quartet distance between two trees on n leaves is the same, up to polylogarithmic factors, as the complexity of counting 4-cycles in an undirected simple graph with m edges. The latter problem has been extensively studied, and the fastest known algorithm by Vassilevska Williams [SODA 2015] works in O(m^{1.48}) time. In fact, even for the seemingly simpler problem of detecting a 4-cycle, the best known algorithm works in O(m^{4/3}) time, and a conjecture of Yuster and Zwick implies that this might be optimal. In particular, an almost-linear time for computing the quartet distance would imply a surprisingly efficient algorithm for counting 4-cycles. In the other direction, by plugging in the state-of-the-art algorithms for counting 4-cycles, our reduction allows us to significantly decrease the complexity of computing the quartet distance. For trees with unbounded degrees we obtain an O(n^{1.48}) time algorithm, which is a substantial improvement on the previous bound of O(n^{2} log n). 
For trees with degrees bounded by d, by analysing the reduction more carefully, we are able to obtain an Õ(nd^{0.77}) time algorithm, which is again a nontrivial improvement on the previous bound of O(nd log n). @InProceedings{STOC19p733, author = {Bartłomiej Dudek and Paweł Gawrychowski}, title = {Computing Quartet Distance Is Equivalent to Counting 4-Cycles}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {733--743}, doi = {10.1145/3313276.3316390}, year = {2019}, }
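Counting 4-cycles has a classical codegree formula that a short sketch can demonstrate: every 4-cycle contributes its two diagonal pairs, and each pair (u, v) with c common neighbors contributes C(c, 2) four-cycles, so the sum over all pairs double-counts each cycle. This quadratic-time baseline (names ours) is what the O(m^{1.48}) algorithms cited in the abstract accelerate.

```python
from itertools import combinations

def count_4cycles(adj):
    """Count 4-cycles: sum C(codeg(u,v), 2) over all vertex pairs, then
    halve, since each 4-cycle is counted once per diagonal pair."""
    n = len(adj)
    total = 0
    for u, v in combinations(range(n), 2):
        c = len(adj[u] & adj[v])   # common neighbors of u and v
        total += c * (c - 1) // 2
    return total // 2

k4 = [{1, 2, 3}, {0, 2, 3}, {0, 1, 3}, {0, 1, 2}]   # complete graph K4
c4 = [{1, 3}, {0, 2}, {1, 3}, {2, 0}]               # the 4-cycle itself
n_k4, n_c4 = count_4cycles(k4), count_4cycles(c4)
```

Sanity checks: K4 contains exactly three 4-cycles (one per perfect matching of diagonals) and C4 contains exactly one.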

Durfee, David 
STOC '19: "Fully Dynamic Spectral Vertex ..."
Fully Dynamic Spectral Vertex Sparsifiers and Applications
David Durfee, Yu Gao, Gramoz Goranci, and Richard Peng (Georgia Tech, USA; University of Vienna, Austria) We study dynamic algorithms for maintaining spectral vertex sparsifiers of graphs with respect to a set of terminals T of our choice. Such objects preserve pairwise resistances, solutions to systems of linear equations, and energy of electrical flows between the terminals in T. We give a data structure that supports insertions and deletions of edges, and terminal additions, all in sublinear time. We then show the applicability of our result to the following problems. (1) A data structure for dynamically maintaining solutions to Laplacian systems L x = b, where L is the graph Laplacian matrix and b is a demand vector. For a bounded-degree, unweighted graph, we support modifications to both L and b while providing access to є-approximations to the energy of routing an electrical flow with demand b, as well as query access to entries of a vector x such that ∥x−L^{†} b ∥_{L} ≤ є ∥L^{†} b ∥_{L}, in Õ(n^{11/12}є^{−5}) expected amortized update and query time. (2) A data structure for maintaining fully dynamic All-Pairs Effective Resistance. For an intermixed sequence of edge insertions, deletions, and resistance queries, our data structure returns a (1 ± є)-approximation to all the resistance queries against an oblivious adversary with high probability. Its expected amortized update and query times are Õ(min(m^{3/4},n^{5/6} є^{−2}) є^{−4}) on an unweighted graph, and Õ(n^{5/6}є^{−6}) on weighted graphs. The key ingredients in these results are (1) the interpretation of the Schur complement as a sum of random walks, (2) a suitable choice of terminals based on the behavior of these random walks to make sure that the majority of walks are local, even when the graph itself is highly connected, and (3) maintenance of these local walks and numerical solutions using data structures. 
These results together represent the first data structures for maintaining key primitives from the Laplacian paradigm for graph algorithms in sublinear time without assumptions on the underlying graph topologies. The importance of routines such as effective resistance, electrical flows, and Laplacian solvers in the static setting makes us optimistic that some of our components can provide new building blocks for dynamic graph algorithms. @InProceedings{STOC19p914, author = {David Durfee and Yu Gao and Gramoz Goranci and Richard Peng}, title = {Fully Dynamic Spectral Vertex Sparsifiers and Applications}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {914--925}, doi = {10.1145/3313276.3316379}, year = {2019}, } Publisher's Version 
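The pairwise effective resistances such a sparsifier preserves can be computed exactly on a small static graph by solving a grounded Laplacian system. A stdlib-only sketch for context (the function name and exact rational solver are illustrative, not the paper's sublinear data structure):

```python
from fractions import Fraction

def effective_resistance(n, edges, u, v):
    """Exact effective resistance between u and v in an unweighted graph,
    by solving the grounded Laplacian system L' x = e_u with vertex v grounded."""
    idx = [w for w in range(n) if w != v]          # drop row/column of v
    pos = {w: i for i, w in enumerate(idx)}
    m = len(idx)
    L = [[Fraction(0)] * m for _ in range(m)]
    for a, b in edges:
        for x in (a, b):
            if x != v:
                L[pos[x]][pos[x]] += 1
        if a != v and b != v:
            L[pos[a]][pos[b]] -= 1
            L[pos[b]][pos[a]] -= 1
    rhs = [Fraction(0)] * m
    rhs[pos[u]] = Fraction(1)      # inject one unit of current at u, extract at v
    # Gauss-Jordan elimination with exact rational arithmetic.
    for col in range(m):
        piv = next(r for r in range(col, m) if L[r][col] != 0)
        L[col], L[piv] = L[piv], L[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(m):
            if r != col and L[r][col] != 0:
                f = L[r][col] / L[col][col]
                for c in range(col, m):
                    L[r][c] -= f * L[col][c]
                rhs[r] -= f * rhs[col]
    # The potential at u equals the effective resistance R(u, v).
    return rhs[pos[u]] / L[pos[u]][pos[u]]
```

On a 3-vertex path the resistance between the endpoints is 2 (two unit resistors in series); on a triangle it is 2/3 (one resistor in parallel with two in series).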

Dvir, Zeev 
STOC '19: "Static Data Structure Lower ..."
Static Data Structure Lower Bounds Imply Rigidity
Zeev Dvir, Alexander Golovnev, and Omri Weinstein (Princeton University, USA; Harvard University, USA; Columbia University, USA) We show that static data structure lower bounds in the group (linear) model imply semi-explicit lower bounds on matrix rigidity. In particular, we prove that an explicit lower bound of t ≥ ω(log^{2} n) on the cell-probe complexity of linear data structures in the group model, even against arbitrarily small linear space (s = (1+ε)n), would already imply a semi-explicit (P^{NP}) construction of rigid matrices with significantly better parameters than the current state of the art (Alon, Panigrahy and Yekhanin, 2009). Our results further assert that polynomial (t ≥ n^{δ}) data structure lower bounds against near-optimal space would imply superlinear circuit lower bounds for log-depth linear circuits (a four-decade open question). In the succinct space regime (s = n+o(n)), we show that any improvement on current cell-probe lower bounds in the linear model would also imply new rigidity bounds. Our results rely on a new connection between the “inner” and “outer” dimensions of a matrix (Paturi and Pudlák, 2006), and on a new reduction from worst-case to average-case rigidity, which is of independent interest. @InProceedings{STOC19p967, author = {Zeev Dvir and Alexander Golovnev and Omri Weinstein}, title = {Static Data Structure Lower Bounds Imply Rigidity}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {967--978}, doi = {10.1145/3313276.3316348}, year = {2019}, } Publisher's Version Info 

Ellen, Faith 
STOC '19: "Why ExtensionBased Proofs ..."
Why Extension-Based Proofs Fail
Dan Alistarh, James Aspnes, Faith Ellen, Rati Gelashvili, and Leqi Zhu (IST Austria, Austria; Yale University, USA; University of Toronto, Canada) It is impossible to deterministically solve wait-free consensus in an asynchronous system. The classic proof uses a valency argument, which constructs an infinite execution by repeatedly extending a finite execution. We introduce extension-based proofs, a class of impossibility proofs that are modelled as an interaction between a prover and a protocol and that include valency arguments. Using proofs based on combinatorial topology, it has been shown that it is impossible to deterministically solve k-set agreement among n > k ≥ 2 processes in a wait-free manner. However, it was unknown whether proofs based on simpler techniques were possible. We show that this impossibility result cannot be obtained by an extension-based proof and, hence, extension-based proofs are limited in power. @InProceedings{STOC19p986, author = {Dan Alistarh and James Aspnes and Faith Ellen and Rati Gelashvili and Leqi Zhu}, title = {Why Extension-Based Proofs Fail}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {986--996}, doi = {10.1145/3313276.3316407}, year = {2019}, } Publisher's Version 

Ene, Alina 
STOC '19: "Submodular Maximization with ..."
Submodular Maximization with Matroid and Packing Constraints in Parallel
Alina Ene, Huy L. Nguyễn, and Adrian Vladu (Boston University, USA; Northeastern University, USA) We consider the problem of maximizing the multilinear extension of a submodular function subject to a single matroid constraint or multiple packing constraints with a small number of adaptive rounds of evaluation queries. We obtain the first algorithms with low adaptivity for submodular maximization with a matroid constraint. Our algorithms achieve a (1−1/e−є)-approximation for monotone functions and a (1/e−є)-approximation for non-monotone functions, which nearly matches the best guarantees known in the fully adaptive setting. The number of rounds of adaptivity is O(log^{2} n/є^{3}), which is an exponential speedup over the existing algorithms. We obtain the first parallel algorithm for non-monotone submodular maximization subject to packing constraints. Our algorithm achieves a (1/e−є)-approximation using O(log(n/є) log(1/є) log(n+m)/ є^{2}) parallel rounds, which is again an exponential speedup in parallel time over the existing algorithms. For monotone functions, we obtain a (1−1/e−є)-approximation in O(log(n/є)logm/є^{2}) parallel rounds. The number of parallel rounds of our algorithm matches that of the state-of-the-art algorithm for solving packing LPs with a linear objective (Mahoney et al., 2016). Our results apply more generally to the problem of maximizing a diminishing-returns submodular (DR-submodular) function. @InProceedings{STOC19p90, author = {Alina Ene and Huy L. Nguyễn and Adrian Vladu}, title = {Submodular Maximization with Matroid and Packing Constraints in Parallel}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {90--101}, doi = {10.1145/3313276.3316389}, year = {2019}, } Publisher's Version 
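For intuition about the objective, the classic fully adaptive sequential greedy achieves a (1−1/e)-approximation for monotone submodular maximization under a cardinality (uniform-matroid) constraint; its n rounds of adaptivity are exactly what the paper's algorithms avoid. A sketch using coverage, a canonical monotone submodular function (function name illustrative):

```python
def greedy_max_cover(universe_sets, k):
    """Classic fully adaptive greedy under a cardinality constraint: pick,
    k times, the set with the largest marginal coverage gain.  For monotone
    submodular objectives this guarantees a (1 - 1/e)-approximation."""
    chosen, covered = [], set()
    for _ in range(k):
        best = max(range(len(universe_sets)),
                   key=lambda i: len(universe_sets[i] - covered))
        if not universe_sets[best] - covered:
            break  # no set adds new elements; stop early
        chosen.append(best)
        covered |= universe_sets[best]
    return chosen, covered
```

Each of the k iterations depends on the previous choices, so the adaptive round complexity is Θ(k), in contrast to the polylogarithmic round counts in the abstract.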

Farach-Colton, Martín 
STOC '19: "Achieving Optimal Backlog ..."
Achieving Optimal Backlog in Multiprocessor Cup Games
Michael A. Bender, Martín Farach-Colton, and William Kuszmaul (Stony Brook University, USA; Rutgers University, USA; Massachusetts Institute of Technology, USA) Many problems in processor scheduling, deamortization, and buffer management can be modeled as single- and multi-processor cup games. At the beginning of the single-processor n-cup game, all cups are empty. In each step of the game, a filler distributes 1−є units of water among the cups, and then an emptier selects a cup and removes up to 1 unit of water from it. The goal of the emptier is to minimize the amount of water in the fullest cup, also known as the backlog. The greedy algorithm (i.e., empty from the fullest cup) is known to achieve backlog O(logn), and no deterministic algorithm can do better. We show that the performance of the greedy algorithm can be exponentially improved with a small amount of randomization: After each step and for any k ≥ Ω(logє^{−1}), the emptier achieves backlog at most O(k) with probability at least 1 − O(2^{−2k}). We call our algorithm the smoothed greedy algorithm because it follows from a smoothed analysis of the (standard) greedy algorithm. In each step of the p-processor n-cup game, the filler distributes p(1−є) units of water among the cups, with no cup receiving more than 1−δ units of water, and then the emptier selects p cups and removes 1 unit of water from each. Proving nontrivial bounds on the backlog for the multiprocessor cup game has remained open for decades. We present a simple analysis of the greedy algorithm for the multiprocessor cup game, establishing a backlog of O(є^{−1} logn), as long as δ > 1/poly(n). Turning to randomized algorithms, we find that the backlog drops to constant. Specifically, we show that if є and δ satisfy reasonable constraints, then there exists an algorithm that bounds the backlog after a given step by 3 with probability at least 1 − O(exp(−Ω(є^{2} p))). 
We prove that our results are asymptotically optimal for constant є, in the sense that no algorithm can achieve better bounds, up to constant factors in the backlog and in p. Moreover, we prove robustness results, demonstrating that our randomized algorithms continue to behave well even when placed in bad starting states. @InProceedings{STOC19p1148, author = {Michael A. Bender and Martín Farach-Colton and William Kuszmaul}, title = {Achieving Optimal Backlog in Multiprocessor Cup Games}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1148--1157}, doi = {10.1145/3313276.3316342}, year = {2019}, } Publisher's Version 
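A minimal simulation of the single-processor cup game with the greedy emptier may help fix the model. The random filler below is purely illustrative (the paper's bounds hold against adversarial fillers), and the function name is hypothetical:

```python
import random

def greedy_backlog(n, eps, steps, rng):
    """Simulate the single-processor n-cup game: a (random, illustrative)
    filler spreads 1 - eps units of water per step, and the greedy emptier
    removes up to 1 unit from the fullest cup.  Returns the final backlog."""
    cups = [0.0] * n
    for _ in range(steps):
        # Filler: dump nonnegative random amounts summing to 1 - eps.
        weights = [rng.random() for _ in range(n)]
        total = sum(weights)
        for i in range(n):
            cups[i] += (1 - eps) * weights[i] / total
        # Greedy emptier: remove up to 1 unit from the fullest cup.
        fullest = max(range(n), key=lambda i: cups[i])
        cups[fullest] = max(0.0, cups[fullest] - 1.0)
    return max(cups)
```

Against an adversarial filler the greedy emptier's backlog can be driven up to Θ(logn); against this benign random filler it stays far smaller.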

Farhadi, Alireza 
STOC '19: "Lower Bounds for External ..."
Lower Bounds for External Memory Integer Sorting via Network Coding
Alireza Farhadi, MohammadTaghi Hajiaghayi, Kasper Green Larsen, and Elaine Shi (University of Maryland, USA; Aarhus University, Denmark; Cornell University, USA) Sorting extremely large datasets is a frequently occurring task in practice. These datasets are usually much larger than the computer’s main memory; thus external memory sorting algorithms, first introduced by Aggarwal and Vitter (1988), are often used. The complexity of comparison-based external memory sorting has been understood for decades by now; however, the situation remains elusive if we assume the keys to be sorted are integers. In internal memory, one can sort a set of n integer keys of Θ(lgn) bits each in O(n) time using the classic Radix Sort algorithm; however, in external memory, there are no faster integer sorting algorithms known than the simple comparison-based ones. Whether such algorithms exist has remained a central open problem in external memory algorithms for more than three decades. In this paper, we present a tight conditional lower bound on the complexity of external memory sorting of integers. Our lower bound is based on a famous conjecture in network coding by Li and Li (2004), who conjectured that network coding cannot help anything beyond the standard multicommodity flow rate in undirected graphs. The only previous work connecting the Li and Li conjecture to lower bounds for algorithms is due to Adler et al. (2006). Adler et al. indeed obtain relatively simple lower bounds for oblivious algorithms (where the memory access pattern is fixed and independent of the input data). Unfortunately, obliviousness is a strong limitation, especially for integer sorting: we show that the Li and Li conjecture implies an Ω(n logn) lower bound for internal memory oblivious sorting when the keys are Θ(lgn) bits. This is in sharp contrast to the classic (non-oblivious) Radix Sort algorithm. 
Indeed, going beyond obliviousness is highly nontrivial; we need to introduce several new methods and involved techniques, which are of independent interest, to obtain our tight lower bound for external memory integer sorting. @InProceedings{STOC19p997, author = {Alireza Farhadi and MohammadTaghi Hajiaghayi and Kasper Green Larsen and Elaine Shi}, title = {Lower Bounds for External Memory Integer Sorting via Network Coding}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {997--1008}, doi = {10.1145/3313276.3316337}, year = {2019}, } Publisher's Version 
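The O(n)-time internal-memory baseline the abstract contrasts against is classic LSD radix sort. A sketch for nonnegative integers (illustrative of the internal-memory algorithm only, not the external memory model):

```python
def radix_sort(keys):
    """LSD radix sort on nonnegative integers, 8 bits (base 256) per pass.
    With O(log n)-bit keys this takes O(1) passes of O(n) work, i.e. O(n)
    total -- crucially, its memory accesses depend on the input data,
    so it is *not* an oblivious algorithm."""
    if not keys:
        return []
    shift, max_key = 0, max(keys)
    while shift == 0 or (max_key >> shift):
        buckets = [[] for _ in range(256)]
        for k in keys:                        # stable distribution by current byte
            buckets[(k >> shift) & 0xFF].append(k)
        keys = [k for bucket in buckets for k in bucket]
        shift += 8
    return keys
```

Stability of each per-byte pass is what makes the least-significant-digit order correct overall.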

Feldman, Moran 
STOC '19: "Unconstrained Submodular Maximization ..."
Unconstrained Submodular Maximization with Constant Adaptive Complexity
Lin Chen, Moran Feldman, and Amin Karbasi (Yale University, USA; Open University of Israel, Israel) In this paper, we consider the unconstrained submodular maximization problem. We propose the first algorithm for this problem that achieves a tight (1/2−ε)-approximation guarantee using Õ(ε^{−1}) adaptive rounds and a linear number of function evaluations. No previously known algorithm for this problem achieves an approximation ratio better than 1/3 using less than Ω(n) rounds of adaptivity, where n is the size of the ground set. Moreover, our algorithm easily extends to the maximization of a nonnegative continuous DR-submodular function subject to a box constraint, and achieves a tight (1/2−ε)-approximation guarantee for this problem while keeping the same adaptive and query complexities. @InProceedings{STOC19p102, author = {Lin Chen and Moran Feldman and Amin Karbasi}, title = {Unconstrained Submodular Maximization with Constant Adaptive Complexity}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {102--113}, doi = {10.1145/3313276.3316327}, year = {2019}, } Publisher's Version 
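For context, the classic randomized double greedy of Buchbinder et al. attains the same 1/2 guarantee in expectation but processes elements one at a time, i.e. Ω(n) adaptive rounds, which is the baseline the paper's Õ(ε^{−1})-round algorithm improves on. A sketch (function names illustrative):

```python
def double_greedy(ground, f, rng):
    """Randomized double greedy for unconstrained nonnegative submodular
    maximization: maintain X (growing) and Y (shrinking); for each element,
    add it to X or drop it from Y with probability proportional to the
    (clipped) marginal gains.  Gives a 1/2-approximation in expectation."""
    X, Y = set(), set(ground)
    for e in ground:
        a = f(X | {e}) - f(X)      # gain of adding e to X
        b = f(Y - {e}) - f(Y)      # gain of removing e from Y
        a, b = max(a, 0), max(b, 0)
        if a + b == 0 or rng.random() < a / (a + b):
            X.add(e)
        else:
            Y.remove(e)
    return X  # X == Y at the end
```

The cut function of a graph is a standard nonnegative, non-monotone submodular test objective for this routine.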

Feng, Weiming 
STOC '19: "Dynamic Sampling from Graphical ..."
Dynamic Sampling from Graphical Models
Weiming Feng, Nisheeth K. Vishnoi, and Yitong Yin (Nanjing University, China; Yale University, USA) In this paper, we study the problem of sampling from a graphical model when the model itself is changing dynamically with time. This problem derives its interest from a variety of inference, learning, and sampling settings in machine learning, computer vision, statistical physics, and theoretical computer science. While the problem of sampling from a static graphical model has received considerable attention, theoretical works for its dynamic variants have been largely lacking. The main contribution of this paper is an algorithm that can sample dynamically from a broad class of graphical models over discrete random variables. Our algorithm is parallel and Las Vegas: it knows when to stop and it outputs samples from the exact distribution. We also provide sufficient conditions under which this algorithm runs in time proportional to the size of the update, on general graphical models as well as well-studied specific spin systems. In particular we obtain, for the Ising model (ferromagnetic or antiferromagnetic) and for the hard-core model, the first dynamic sampling algorithms that can handle both edge and vertex updates (addition, deletion, change of functions), both efficient within regimes that are close to the respective uniqueness regimes, beyond which, even for static and approximate sampling, no local algorithms were known or the problem itself is intractable. Our dynamic sampling algorithm relies on a local resampling algorithm and a new ``equilibrium'' property that is shown to be satisfied by our algorithm at each step and enables us to prove its correctness. This equilibrium property is robust enough to guarantee the correctness of our algorithm, helps us improve bounds on fast convergence on specific models, and should be of independent interest. @InProceedings{STOC19p1070, author = {Weiming Feng and Nisheeth K. Vishnoi and Yitong Yin}, title = {Dynamic Sampling from Graphical Models}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1070--1081}, doi = {10.1145/3313276.3316365}, year = {2019}, } Publisher's Version 
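The static building block behind local resampling can be illustrated with a single-site Glauber update for the Ising model: resample one spin from its conditional distribution given its neighbors. A sketch assuming ±1 spins and adjacency lists (names illustrative; this is the textbook static dynamics, not the paper's dynamic algorithm):

```python
import math
import random

def glauber_step(spins, adj, beta, rng):
    """One single-site Glauber update for the Ising model with inverse
    temperature beta: pick a uniformly random vertex and resample its spin
    from P(s_v = +1 | neighbors) = 1 / (1 + exp(-2 * beta * local_field))."""
    v = rng.randrange(len(spins))
    field = sum(spins[u] for u in adj[v])          # sum of neighboring spins
    p_plus = 1.0 / (1.0 + math.exp(-2.0 * beta * field))
    spins[v] = 1 if rng.random() < p_plus else -1
    return spins
```

beta > 0 is ferromagnetic (aligning) and beta < 0 antiferromagnetic; in the uniqueness regime this chain mixes rapidly toward the Gibbs distribution.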

Filos-Ratsikas, Aris 
STOC '19: "The Complexity of Splitting ..."
The Complexity of Splitting Necklaces and Bisecting Ham Sandwiches
Aris Filos-Ratsikas and Paul W. Goldberg (EPFL, Switzerland; University of Oxford, UK) We resolve the computational complexity of two problems known as Necklace Splitting and Discrete Ham Sandwich, showing that they are PPA-complete. For Necklace Splitting, this result is specific to the important special case in which two thieves share the necklace. We do this via a PPA-completeness result for an approximate version of the Consensus Halving problem, strengthening our recent result that the problem is PPA-complete for inverse-exponential precision. At the heart of our construction is a smooth embedding of the high-dimensional Möbius strip in the Consensus Halving problem. These results settle the status of PPA as a class that captures the complexity of “natural” problems whose definitions do not incorporate a circuit. @InProceedings{STOC19p638, author = {Aris Filos-Ratsikas and Paul W. Goldberg}, title = {The Complexity of Splitting Necklaces and Bisecting Ham Sandwiches}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {638--649}, doi = {10.1145/3313276.3316334}, year = {2019}, } Publisher's Version 
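The discrete Necklace Splitting problem itself is easy to state in code. A brute-force (exponential-time) search for the fewest cuts for two thieves, purely to make the object concrete (the classical necklace theorem guarantees k cuts suffice for k colors, while the paper shows finding them is PPA-complete):

```python
from itertools import combinations

def min_cuts_two_thieves(necklace):
    """Brute-force two-thief Necklace Splitting: find the fewest cut positions
    so that giving thief A every other segment hands each thief exactly half
    the beads of every color.  Assumes each color appears an even number of times."""
    n = len(necklace)
    colors = set(necklace)
    target = {c: necklace.count(c) // 2 for c in colors}
    assert all(necklace.count(c) % 2 == 0 for c in colors)
    for num_cuts in range(n):
        for cuts in combinations(range(1, n), num_cuts):
            bounds = [0, *cuts, n]
            share = {c: 0 for c in colors}
            for i in range(len(bounds) - 1):
                if i % 2 == 0:  # thief A takes segments 0, 2, 4, ...
                    for bead in necklace[bounds[i]:bounds[i + 1]]:
                        share[bead] += 1
            if share == target:
                return num_cuts
    return None
```

For the necklace [0, 0, 1, 1] no single cut works, so two cuts are needed; the interleaved [0, 1, 0, 1] splits with one cut in the middle.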

Fitzsimons, Joseph 
STOC '19: "Quantum Proof Systems for ..."
Quantum Proof Systems for Iterated Exponential Time, and Beyond
Joseph Fitzsimons, Zhengfeng Ji, Thomas Vidick, and Henry Yuen (Horizon Quantum Computing, Singapore; University of Technology Sydney, Australia; California Institute of Technology, USA; University of Toronto, Canada) We show that any language solvable in nondeterministic time exp(exp(⋯exp(n))), where the number of iterated exponentials is an arbitrary function R(n), can be decided by a multiprover interactive proof system with a classical polynomial-time verifier and a constant number of quantum entangled provers, with completeness 1 and soundness 1 − exp(−C exp(⋯exp(n))), where the number of iterated exponentials is R(n)−1 and C>0 is a universal constant. The result was previously known for R=1 and R=2; we obtain it for any time-constructible function R. The result is based on a compression technique for interactive proof systems with entangled provers that significantly simplifies and strengthens a protocol compression result of Ji (STOC’17). As a separate consequence of this technique we obtain a different proof of Slofstra’s recent result on the uncomputability of the entangled value of multiprover games (Forum of Mathematics, Pi 2019). Finally, we show that even minor improvements to our compression result would yield remarkable consequences in computational complexity theory and the foundations of quantum mechanics: first, it would imply that the class MIP* contains all computable languages; second, it would provide a negative resolution to a multipartite version of Tsirelson’s problem on the relation between the commuting operator and tensor product models for quantum correlations. @InProceedings{STOC19p473, author = {Joseph Fitzsimons and Zhengfeng Ji and Thomas Vidick and Henry Yuen}, title = {Quantum Proof Systems for Iterated Exponential Time, and Beyond}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {473--480}, doi = {10.1145/3313276.3316343}, year = {2019}, } Publisher's Version Info 

Förster, Henry 
STOC '19: "Planar Graphs of Bounded Degree ..."
Planar Graphs of Bounded Degree Have Bounded Queue Number
Michael Bekos, Henry Förster, Martin Gronemann, Tamara Mchedlidze, Fabrizio Montecchiani, Chrysanthi Raftopoulou, and Torsten Ueckerdt (University of Tübingen, Germany; University of Cologne, Germany; KIT, Germany; University of Perugia, Italy; National Technical University of Athens, Greece) A queue layout of a graph consists of a linear order of its vertices and a partition of its edges into queues, so that no two independent edges of the same queue are nested. The queue number of a graph is the minimum number of queues required by any of its queue layouts. A longstanding conjecture by Heath, Leighton and Rosenberg states that the queue number of planar graphs is bounded. This conjecture has been partially settled in the positive for several subfamilies of planar graphs (most of which have bounded treewidth). In this paper, we make a further important step towards settling this conjecture. We prove that planar graphs of bounded degree (which may have unbounded treewidth) have bounded queue number. A notable implication of this result is that every planar graph of bounded degree admits a three-dimensional straight-line grid drawing in linear volume. Further implications are that every planar graph of bounded degree has bounded track number, and that every k-planar graph (i.e., every graph that can be drawn in the plane with at most k crossings per edge) of bounded degree has bounded queue number. @InProceedings{STOC19p176, author = {Michael Bekos and Henry Förster and Martin Gronemann and Tamara Mchedlidze and Fabrizio Montecchiani and Chrysanthi Raftopoulou and Torsten Ueckerdt}, title = {Planar Graphs of Bounded Degree Have Bounded Queue Number}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {176--184}, doi = {10.1145/3313276.3316324}, year = {2019}, } Publisher's Version 
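The no-nesting condition that defines a queue can be checked directly. A small validity checker for a proposed queue layout (function name illustrative):

```python
def is_valid_queue_layout(order, queues):
    """Check a queue layout: given a vertex order and a partition of edges
    into queues, verify no two edges in the same queue are nested, i.e. no
    pair (a, d), (b, c) in one queue with positions a < b < c < d."""
    pos = {v: i for i, v in enumerate(order)}
    for queue in queues:
        ends = [tuple(sorted((pos[u], pos[v]))) for u, v in queue]
        for a, d in ends:
            for b, c in ends:
                if a < b and c < d:   # (b, c) strictly nested inside (a, d)
                    return False
    return True
```

Crossing edges (a rainbow-free pattern) may share a queue, whereas nested edges must be separated into different queues; this is the dual of the crossing-free condition for stack (book) layouts.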

Forster, Sebastian 
STOC '19: "Dynamic LowStretch Trees ..."
Dynamic Low-Stretch Trees via Dynamic Low-Diameter Decompositions
Sebastian Forster and Gramoz Goranci (University of Salzburg, Austria; University of Vienna, Austria) Spanning trees of low average stretch on the non-tree edges, as introduced by Alon et al. [SICOMP 1995], are a natural graph-theoretic object. In recent years, they have found significant applications in solvers for symmetric diagonally dominant (SDD) linear systems. In this work, we provide the first dynamic algorithm for maintaining such trees under edge insertions and deletions to the input graph. Our algorithm has update time n^{1/2 + o(1)} and the average stretch of the maintained tree is n^{o(1)}, which matches the stretch in the seminal result of Alon et al. Similar to Alon et al., our dynamic low-stretch tree algorithm employs a dynamic hierarchy of low-diameter decompositions (LDDs). As a major building block we use a dynamic LDD that we obtain by adapting the random-shift clustering of Miller et al. [SPAA 2013] to the dynamic setting. The major technical challenge in our approach is to control the propagation of updates within our hierarchy of LDDs: each update to one level of the hierarchy could potentially induce several insertions and deletions to the next level of the hierarchy. We achieve this goal by a sophisticated amortization approach. In particular, we give a bound on the number of changes made to the LDD per update to the input graph that is significantly better than the trivial bound implied by the update time. We believe that the dynamic random-shift clustering might be useful for independent applications. One of these applications is the dynamic spanner problem. By combining the random-shift clustering with the recent spanner construction of Elkin and Neiman [SODA 2017], we obtain a fully dynamic algorithm for maintaining a spanner of stretch 2k − 1 and size O(n^{1 + 1/k} logn) with amortized update time O(k log^{2} n) for any integer 2 ≤ k ≤ logn. Compared to the state of the art in this regime [Baswana et al., TALG 2012], we improve upon the size of the spanner and the update time by a factor of k. @InProceedings{STOC19p377, author = {Sebastian Forster and Gramoz Goranci}, title = {Dynamic Low-Stretch Trees via Dynamic Low-Diameter Decompositions}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {377--388}, doi = {10.1145/3313276.3316381}, year = {2019}, } Publisher's Version 
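The quantity being maintained, average stretch, can be computed directly on a static example. A sketch for unweighted graphs, where the stretch of a non-tree edge is the length of the tree path between its endpoints (function names illustrative):

```python
from collections import deque

def average_stretch(n, edges, tree_edges):
    """Average stretch of a spanning tree over the non-tree edges of an
    unweighted graph: for each non-tree edge (u, v), its stretch is the
    number of tree edges on the unique tree path from u to v."""
    adj = [[] for _ in range(n)]
    for u, v in tree_edges:
        adj[u].append(v)
        adj[v].append(u)

    def tree_dist(src, dst):                 # BFS restricted to tree edges
        dist = {src: 0}
        queue = deque([src])
        while queue:
            x = queue.popleft()
            if x == dst:
                return dist[x]
            for y in adj[x]:
                if y not in dist:
                    dist[y] = dist[x] + 1
                    queue.append(y)
        raise ValueError("tree is not spanning")

    non_tree = [e for e in edges
                if e not in tree_edges and (e[1], e[0]) not in tree_edges]
    if not non_tree:
        return 0.0
    return sum(tree_dist(u, v) for u, v in non_tree) / len(non_tree)
```

On the 4-cycle with a Hamiltonian-path tree, the single non-tree edge closes a path of length 3, so the average stretch is 3.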

Ganesh, Arun 
STOC '19: "Optimal Sequence Length Requirements ..."
Optimal Sequence Length Requirements for Phylogenetic Tree Reconstruction with Indels
Arun Ganesh and Qiuyi (Richard) Zhang (University of California at Berkeley, USA) We consider the phylogenetic tree reconstruction problem with insertions and deletions (indels). Phylogenetic algorithms proceed under a model where sequences evolve down the model tree, and given sequences at the leaves, the problem is to reconstruct the model tree with high probability. Traditionally, sequences mutate by substitution-only processes, although some recent work considers evolutionary processes with insertions and deletions. In this paper, we improve on previous work by giving a reconstruction algorithm that simultaneously has O(poly logn) sequence length and tolerates constant indel probabilities on each edge. Our recursively-reconstructed distance-based technique provably outputs the model tree when the model tree has O(poly logn) diameter and discretized branch lengths, allowing for the probability of insertion and deletion to be nonuniform and asymmetric on each edge. Our polylogarithmic sequence length bounds improve significantly over previous polynomial sequence length bounds and match sequence length bounds in the substitution-only models of phylogenetic evolution, thereby challenging the idea that many global misalignments caused by insertions and deletions when p_{indel} is large are a fundamental obstruction to reconstruction with short sequences. We build upon a signature scheme for sequences, introduced by Daskalakis and Roch, that is robust to insertions and deletions. Our main contribution is to show that an averaging procedure gives an accurate reconstruction of signatures for ancestors, even while the explicit ancestral sequences cannot be reconstructed due to misalignments. 
Because these signatures are not as sensitive to indels, we can bound the noise that arises from indel-induced shifts and provide a novel analysis that provably reconstructs the model tree with O(poly logn) sequence length as long as the rate of mutation is less than the well-known Kesten-Stigum threshold. The upper bound on the rate of mutation is optimal, as beyond this threshold an information-theoretic lower bound of Ω(poly(n)) on the sequence length requirement exists. @InProceedings{STOC19p721, author = {Arun Ganesh and Qiuyi (Richard) Zhang}, title = {Optimal Sequence Length Requirements for Phylogenetic Tree Reconstruction with Indels}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {721--732}, doi = {10.1145/3313276.3316345}, year = {2019}, } Publisher's Version 

Gao, Yu 
STOC '19: "Fully Dynamic Spectral Vertex ..."
Fully Dynamic Spectral Vertex Sparsifiers and Applications
David Durfee, Yu Gao, Gramoz Goranci, and Richard Peng (Georgia Tech, USA; University of Vienna, Austria) We study dynamic algorithms for maintaining spectral vertex sparsifiers of graphs with respect to a set of terminals T of our choice. Such objects preserve pairwise resistances, solutions to systems of linear equations, and energy of electrical flows between the terminals in T. We give a data structure that supports insertions and deletions of edges, and terminal additions, all in sublinear time. We then show the applicability of our result to the following problems. (1) A data structure for dynamically maintaining solutions to Laplacian systems L x = b, where L is the graph Laplacian matrix and b is a demand vector. For a bounded-degree, unweighted graph, we support modifications to both L and b while providing access to є-approximations to the energy of routing an electrical flow with demand b, as well as query access to entries of a vector x such that ∥x−L^{†} b ∥_{L} ≤ є ∥L^{†} b ∥_{L} in Õ(n^{11/12}є^{−5}) expected amortized update and query time. (2) A data structure for maintaining fully dynamic All-Pairs Effective Resistance. For an intermixed sequence of edge insertions, deletions, and resistance queries, our data structure returns a (1 ± є)-approximation to all the resistance queries against an oblivious adversary with high probability. Its expected amortized update and query times are Õ(min(m^{3/4},n^{5/6} є^{−2}) є^{−4}) on an unweighted graph, and Õ(n^{5/6}є^{−6}) on weighted graphs. The key ingredients in these results are (1) the interpretation of the Schur complement as a sum of random walks, (2) a suitable choice of terminals based on the behavior of these random walks to make sure that the majority of walks are local, even when the graph itself is highly connected, and (3) maintenance of these local walks and numerical solutions using data structures. 
These results together represent the first data structures for maintaining key primitives from the Laplacian paradigm for graph algorithms in sublinear time without assumptions on the underlying graph topologies. The importance of routines such as effective resistance, electrical flows, and Laplacian solvers in the static setting makes us optimistic that some of our components can provide new building blocks for dynamic graph algorithms. @InProceedings{STOC19p914, author = {David Durfee and Yu Gao and Gramoz Goranci and Richard Peng}, title = {Fully Dynamic Spectral Vertex Sparsifiers and Applications}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {914--925}, doi = {10.1145/3313276.3316379}, year = {2019}, } Publisher's Version 

Garg, Jugal 
STOC '19: "A Strongly Polynomial Algorithm ..."
A Strongly Polynomial Algorithm for Linear Exchange Markets
Jugal Garg and László A. Végh (University of Illinois at Urbana-Champaign, USA; London School of Economics and Political Science, UK) We present a strongly polynomial algorithm for computing an equilibrium in Arrow-Debreu exchange markets with linear utilities. Our algorithm is based on a variant of the weakly polynomial Duan-Mehlhorn (DM) algorithm. We use the DM algorithm as a subroutine to identify revealed edges, i.e., pairs of agents and goods that must correspond to best bang-per-buck transactions in every equilibrium solution. Every time a new revealed edge is found, we use another subroutine that decides if there is an optimal solution using the current set of revealed edges, or if none exists, finds the solution that approximately minimizes the violation of the demand and supply constraints. This task can be reduced to solving a linear program (LP). Even though we are unable to solve this LP in strongly polynomial time, we show that it can be approximated by a simpler LP with two variables per inequality that is solvable in strongly polynomial time. @InProceedings{STOC19p54, author = {Jugal Garg and László A. Végh}, title = {A Strongly Polynomial Algorithm for Linear Exchange Markets}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {54--65}, doi = {10.1145/3313276.3316340}, year = {2019}, } Publisher's Version 

Gawrychowski, Paweł 
STOC '19: "Almost Optimal Distance Oracles ..."
Almost Optimal Distance Oracles for Planar Graphs
Panagiotis Charalampopoulos, Paweł Gawrychowski, Shay Mozes, and Oren Weimann (King's College London, UK; University of Wrocław, Poland; IDC Herzliya, Israel; University of Haifa, Israel) We present new tradeoffs between space and query time for exact distance oracles in directed weighted planar graphs. These tradeoffs are almost optimal in the sense that they are within polylogarithmic, subpolynomial or arbitrarily small polynomial factors from the naïve linear-space, constant-query-time lower bound. These tradeoffs include: (i) an oracle with space O(n^{1+є}) and query time Õ(1) for any constant є>0, (ii) an oracle with space Õ(n) and query time O(n^{є}) for any constant є>0, and (iii) an oracle with space n^{1+o(1)} and query time n^{o(1)}. @InProceedings{STOC19p138, author = {Panagiotis Charalampopoulos and Paweł Gawrychowski and Shay Mozes and Oren Weimann}, title = {Almost Optimal Distance Oracles for Planar Graphs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {138--151}, doi = {10.1145/3313276.3316316}, year = {2019}, } Publisher's Version STOC '19: "Computing Quartet Distance ..." Computing Quartet Distance Is Equivalent to Counting 4-Cycles Bartłomiej Dudek and Paweł Gawrychowski (University of Wrocław, Poland) The quartet distance is a measure of similarity used to compare two unrooted phylogenetic trees on the same set of n leaves, defined as the number of subsets of four leaves related by a different topology in both trees. After a series of previous results, Brodal et al. [SODA 2013] presented an algorithm that computes this number in O(ndlogn) time, where d is the maximum degree of a node. For the related triplet distance between rooted phylogenetic trees, the same authors were able to design an O(nlogn) time algorithm, that is, with running time independent of d. This raises the question of achieving such complexity for computing the quartet distance, or at least improving the dependency on d. 
Our main contribution is a two-way reduction establishing that the complexity of computing the quartet distance between two trees on n leaves is the same, up to polylogarithmic factors, as the complexity of counting 4-cycles in an undirected simple graph with m edges. The latter problem has been extensively studied, and the fastest known algorithm by Vassilevska Williams [SODA 2015] works in O(m^{1.48}) time. In fact, even for the seemingly simpler problem of detecting a 4-cycle, the best known algorithm works in O(m^{4/3}) time, and a conjecture of Yuster and Zwick implies that this might be optimal. In particular, an almost-linear time algorithm for computing the quartet distance would imply a surprisingly efficient algorithm for counting 4-cycles. In the other direction, by plugging in the state-of-the-art algorithms for counting 4-cycles, our reduction allows us to significantly decrease the complexity of computing the quartet distance. For trees with unbounded degrees we obtain an O(n^{1.48}) time algorithm, which is a substantial improvement on the previous bound of O(n^{2}logn). For trees with degrees bounded by d, by analysing the reduction more carefully, we are able to obtain an Õ(nd^{0.77}) time algorithm, which is again a nontrivial improvement on the previous bound of O(ndlogn). @InProceedings{STOC19p733, author = {Bartłomiej Dudek and Paweł Gawrychowski}, title = {Computing Quartet Distance Is Equivalent to Counting 4-Cycles}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {733--743}, doi = {10.1145/3313276.3316390}, year = {2019}, } Publisher's Version 
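The graph side of the reduction has a simple common-neighbor baseline: every 4-cycle u-a-v-b-u is determined by a pair {u, v} of opposite vertices together with a pair {a, b} of their common neighbors. A sketch (quadratic in the number of vertices, far from the O(m^{1.48}) state of the art, but enough to make the counting problem concrete):

```python
from itertools import combinations

def count_4cycles(n, edges):
    """Count 4-cycles in a simple undirected graph by summing
    C(codeg(u, v), 2) over vertex pairs; each 4-cycle is counted exactly
    twice, once per diagonal pair of opposite vertices."""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    total = 0
    for u, v in combinations(range(n), 2):
        codeg = len(adj[u] & adj[v])
        total += codeg * (codeg - 1) // 2
    return total // 2
```

The plain 4-cycle has one such cycle; K4 has three (each perfect matching of its vertices into diagonals yields one).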

Gelashvili, Rati 
STOC '19: "Why Extension-Based Proofs ..."
Why Extension-Based Proofs Fail
Dan Alistarh, James Aspnes, Faith Ellen, Rati Gelashvili, and Leqi Zhu (IST Austria, Austria; Yale University, USA; University of Toronto, Canada) It is impossible to deterministically solve wait-free consensus in an asynchronous system. The classic proof uses a valency argument, which constructs an infinite execution by repeatedly extending a finite execution. We introduce extension-based proofs, a class of impossibility proofs that are modelled as an interaction between a prover and a protocol and that include valency arguments. Using proofs based on combinatorial topology, it has been shown that it is impossible to deterministically solve k-set agreement among n > k ≥ 2 processes in a wait-free manner. However, it was unknown whether proofs based on simpler techniques were possible. We show that this impossibility result cannot be obtained by an extension-based proof and, hence, extension-based proofs are limited in power. @InProceedings{STOC19p986, author = {Dan Alistarh and James Aspnes and Faith Ellen and Rati Gelashvili and Leqi Zhu}, title = {Why Extension-Based Proofs Fail}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {986--996}, doi = {10.1145/3313276.3316407}, year = {2019}, } Publisher's Version

Gharan, Shayan Oveis 
STOC '19: "Log-Concave Polynomials II: ..."
Log-Concave Polynomials II: High-Dimensional Walks and an FPRAS for Counting Bases of a Matroid
Nima Anari, Kuikui Liu, Shayan Oveis Gharan, and Cynthia Vinzant (Stanford University, USA; University of Washington, USA; North Carolina State University, USA) We design an FPRAS to count the number of bases of any matroid given by an independent set oracle, and to estimate the partition function of the random cluster model of any matroid in the regime where 0<q<1. Consequently, we can sample random spanning forests in a graph and estimate the reliability polynomial of any matroid. We also prove the thirty-year-old conjecture of Mihail and Vazirani that the bases exchange graph of any matroid has edge expansion at least 1. Our algorithm and proof build on the recent results of Dinur, Kaufman, Mass, and Oppenheim, who show that a high-dimensional walk on a weighted simplicial complex mixes rapidly if, for every link of the complex, the corresponding localized random walk on the 1-skeleton is a strong spectral expander. One of our key observations is that a weighted simplicial complex X is a 0-local spectral expander if and only if a naturally associated generating polynomial p_{X} is strongly log-concave. More generally, to every pure simplicial complex X with positive weights on its maximal faces, we can associate a multi-affine homogeneous polynomial p_{X} such that the eigenvalues of the localized random walks on X correspond to the eigenvalues of the Hessian of derivatives of p_{X}. @InProceedings{STOC19p1, author = {Nima Anari and Kuikui Liu and Shayan Oveis Gharan and Cynthia Vinzant}, title = {Log-Concave Polynomials II: High-Dimensional Walks and an FPRAS for Counting Bases of a Matroid}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1--12}, doi = {10.1145/3313276.3316385}, year = {2019}, } Publisher's Version
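For the special case of graphic matroids, whose bases are the spanning trees of a graph, one step of the bases-exchange walk that the Mihail–Vazirani conjecture concerns can be sketched directly: drop a random tree edge and reinsert a uniformly random edge crossing the resulting cut. This toy step is an illustration only, not the paper's FPRAS, which works with a general independent set oracle:

```python
import random

def exchange_step(edges, tree, rng):
    """One bases-exchange step on spanning trees (graphic matroid).

    edges: list of undirected edges (u, v); tree: set of tree edges
    (a subset of edges). Drop a random tree edge, then add a uniformly
    random edge crossing the cut it leaves (possibly the same edge).
    """
    e = rng.choice(sorted(tree))
    tree = tree - {e}
    # Find the forest component containing one endpoint of the cut.
    comp, stack = {e[0]}, [e[0]]
    while stack:
        x = stack.pop()
        for u, v in tree:
            for a, b in ((u, v), (v, u)):
                if a == x and b not in comp:
                    comp.add(b)
                    stack.append(b)
    crossing = [f for f in edges if (f[0] in comp) != (f[1] in comp)]
    return tree | {rng.choice(crossing)}
```

By construction every step returns another spanning tree, so the walk stays on the bases of the matroid.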

Ghodsi, Mohammad 
STOC '19: "1+ε Approximation ..."
1+ε Approximation of Tree Edit Distance in Quadratic Time
Mahdi Boroujeni, Mohammad Ghodsi, MohammadTaghi Hajiaghayi, and Saeed Seddighin (Sharif University of Technology, Iran; Institute for Research in Fundamental Sciences, Iran; University of Maryland, USA) Edit distance is one of the most fundamental problems in computer science. Tree edit distance is a natural generalization of edit distance to ordered rooted trees. Such a generalization extends the applications of edit distance to areas such as computational biology, structured data analysis (e.g., XML), image analysis, and compiler optimization. Perhaps the most notable application of tree edit distance is in the analysis of RNA molecules in computational biology, where the secondary structure of RNA is typically represented as a rooted tree. The best-known solution for tree edit distance runs in cubic time. Recently, Bringmann et al. showed that an O(n^{2.99}) algorithm for weighted tree edit distance is unlikely by proving a conditional lower bound on the computational complexity of tree edit distance. This shows a substantial gap between the computational complexity of tree edit distance and that of edit distance, for which a simple dynamic program solves the problem in quadratic time. In this work, we give the first nontrivial approximation algorithms for tree edit distance. Our main result is a quadratic-time approximation scheme for tree edit distance that approximates the solution within a factor of 1+є for any constant є > 0. @InProceedings{STOC19p709, author = {Mahdi Boroujeni and Mohammad Ghodsi and MohammadTaghi Hajiaghayi and Saeed Seddighin}, title = {1+<i>ε</i> Approximation of Tree Edit Distance in Quadratic Time}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {709--720}, doi = {10.1145/3313276.3316388}, year = {2019}, } Publisher's Version
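For contrast, the simple quadratic-time dynamic program for string edit distance mentioned in the abstract can be sketched in its standard rolling-array form:

```python
def edit_distance(a, b):
    """Classic O(|a|*|b|) dynamic program for string edit distance."""
    m, n = len(a), len(b)
    # dp[j] holds the distance between a[:i] and b[:j] as i advances.
    dp = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,                      # delete a[i-1]
                        dp[j - 1] + 1,                  # insert b[j-1]
                        prev + (a[i - 1] != b[j - 1]))  # substitute
            prev = cur
    return dp[n]
```

The tree generalization replaces characters by nodes of ordered rooted trees, which is what makes the problem substantially harder.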

Gilyén, András 
STOC '19: "Quantum Singular Value Transformation ..."
Quantum Singular Value Transformation and Beyond: Exponential Improvements for Quantum Matrix Arithmetics
András Gilyén, Yuan Su, Guang Hao Low, and Nathan Wiebe (CWI, Netherlands; University of Amsterdam, Netherlands; University of Maryland, USA; Microsoft Research, USA) An n-qubit quantum circuit performs a unitary operation on an exponentially large, 2^{n}-dimensional, Hilbert space, which is a major source of quantum speedups. We develop a new “quantum singular value transformation” algorithm that can directly harness the advantages of exponential dimensionality by applying polynomial transformations to the singular values of a block of a unitary operator. The transformations are realized by quantum circuits with a very simple structure, typically using only a constant number of ancilla qubits, leading to optimal algorithms with appealing constant factors. We show that our framework allows describing many quantum algorithms on a high level, and enables remarkably concise proofs for many prominent quantum algorithms, ranging from optimal Hamiltonian simulation to various quantum machine learning applications. We also devise a new singular vector transformation algorithm, describe how to exponentially improve the complexity of implementing fractional queries to unitaries with a gapped spectrum, and show how to efficiently implement principal component regression. Finally, we also prove a quantum lower bound on spectral transformations. @InProceedings{STOC19p193, author = {András Gilyén and Yuan Su and Guang Hao Low and Nathan Wiebe}, title = {Quantum Singular Value Transformation and Beyond: Exponential Improvements for Quantum Matrix Arithmetics}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {193--204}, doi = {10.1145/3313276.3316366}, year = {2019}, } Publisher's Version

Gishboliner, Lior 
STOC '19: "Testing Graphs against an ..."
Testing Graphs against an Unknown Distribution
Lior Gishboliner and Asaf Shapira (Tel Aviv University, Israel) The classical model of graph property testing, introduced by Goldreich, Goldwasser, and Ron, assumes that the algorithm can obtain uniformly distributed vertices from the input graph. Goldreich introduced a more general model, called the Vertex-Distribution-Free model (or VDF for short), in which the testing algorithm obtains vertices drawn from an arbitrary and unknown distribution. The main motivation for this investigation is that it can allow one to give different weight/importance to different parts of the input graph, as well as handle situations where one cannot obtain uniformly selected vertices from the input. Goldreich proved that any property which is testable in this model must (essentially) be hereditary, and that several hereditary properties can indeed be tested in this model. He further asked which properties are testable in this model. In this paper we completely solve Goldreich’s problem by giving a precise characterization of the graph properties that are testable in the VDF model. Somewhat surprisingly, this characterization takes the following clean form: say that a graph property P is extendable if, given any graph G satisfying P, one can add one more vertex to G and connect it to some of the vertices of G in a way that the resulting graph satisfies P. Then a property P is testable in the VDF model if and only if P is hereditary and extendable. @InProceedings{STOC19p535, author = {Lior Gishboliner and Asaf Shapira}, title = {Testing Graphs against an Unknown Distribution}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {535--546}, doi = {10.1145/3313276.3316308}, year = {2019}, } Publisher's Version

Goldberg, Paul W. 
STOC '19: "The Complexity of Splitting ..."
The Complexity of Splitting Necklaces and Bisecting Ham Sandwiches
Aris Filos-Ratsikas and Paul W. Goldberg (EPFL, Switzerland; University of Oxford, UK) We resolve the computational complexity of two problems known as Necklace Splitting and Discrete Ham Sandwich, showing that they are PPA-complete. For Necklace Splitting, this result is specific to the important special case in which two thieves share the necklace. We do this via a PPA-completeness result for an approximate version of the Consensus Halving problem, strengthening our recent result that the problem is PPA-complete for inverse-exponential precision. At the heart of our construction is a smooth embedding of the high-dimensional Möbius strip in the Consensus Halving problem. These results settle the status of PPA as a class that captures the complexity of “natural” problems whose definitions do not incorporate a circuit. @InProceedings{STOC19p638, author = {Aris Filos-Ratsikas and Paul W. Goldberg}, title = {The Complexity of Splitting Necklaces and Bisecting Ham Sandwiches}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {638--649}, doi = {10.1145/3313276.3316334}, year = {2019}, } Publisher's Version

Goldreich, Oded 
STOC '19: "Testing Graphs in Vertex-Distribution-Free ..."
Testing Graphs in Vertex-Distribution-Free Models
Oded Goldreich (Weizmann Institute of Science, Israel) Prior studies of testing graph properties presume that the tester can obtain uniformly distributed vertices in the tested graph (in addition to obtaining answers to some type of graph-queries). Here we envision settings in which it is only feasible to obtain random vertices drawn according to an arbitrary distribution (and, in addition, obtain answers to the usual graph-queries). We initiate a study of testing graph properties in such settings, while adapting the definition of distance between graphs so that it reflects the different probability weight of different vertices. Hence, the distance to the property represents the relative importance of the “part of the graph” that violates the property. We consider such “vertex-distribution-free” (VDF) versions of the two most-studied models of testing graph properties (i.e., the dense graph model and the bounded-degree model). In both cases, we show that VDF testing within complexity that is independent of the distribution on the vertex-set (of the tested graph) is possible only if the same property can be tested in the standard model with one-sided error and size-independent complexity. We also show that this necessary condition is not sufficient; yet, we present size-independent VDF testers for many of the natural properties that satisfy the necessary condition. @InProceedings{STOC19p527, author = {Oded Goldreich}, title = {Testing Graphs in Vertex-Distribution-Free Models}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {527--534}, doi = {10.1145/3313276.3316302}, year = {2019}, } Publisher's Version

Golovnev, Alexander 
STOC '19: "Static Data Structure Lower ..."
Static Data Structure Lower Bounds Imply Rigidity
Zeev Dvir, Alexander Golovnev, and Omri Weinstein (Princeton University, USA; Harvard University, USA; Columbia University, USA) We show that static data structure lower bounds in the group (linear) model imply semi-explicit lower bounds on matrix rigidity. In particular, we prove that an explicit lower bound of t ≥ ω(log^{2} n) on the cell-probe complexity of linear data structures in the group model, even against arbitrarily small linear space (s = (1+є)n), would already imply a semi-explicit (P^{NP}) construction of rigid matrices with significantly better parameters than the current state of the art (Alon, Panigrahy and Yekhanin, 2009). Our results further assert that polynomial (t ≥ n^{δ}) data structure lower bounds against near-optimal space would imply superlinear circuit lower bounds for log-depth linear circuits (a four-decade open question). In the succinct space regime (s = n+o(n)), we show that any improvement on current cell-probe lower bounds in the linear model would also imply new rigidity bounds. Our results rely on a new connection between the “inner” and “outer” dimensions of a matrix (Paturi and Pudlák, 2006), and on a new reduction from worst-case to average-case rigidity, which is of independent interest. @InProceedings{STOC19p967, author = {Zeev Dvir and Alexander Golovnev and Omri Weinstein}, title = {Static Data Structure Lower Bounds Imply Rigidity}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {967--978}, doi = {10.1145/3313276.3316348}, year = {2019}, } Publisher's Version Info

Gopi, Sivakanth 
STOC '19: "CSPs with Global Modular Constraints: ..."
CSPs with Global Modular Constraints: Algorithms and Hardness via Polynomial Representations
Joshua Brakensiek, Sivakanth Gopi, and Venkatesan Guruswami (Stanford University, USA; Microsoft Research, USA; Carnegie Mellon University, USA) We study the complexity of Boolean constraint satisfaction problems (CSPs) when the assignment must have Hamming weight in some congruence class modulo M, for various choices of the modulus M. Due to the known classification of tractable Boolean CSPs, this mainly reduces to the study of three cases: 2-SAT, HORN-SAT, and LIN-2 (linear equations mod 2). We classify the moduli M for which these respective problems are polynomial-time solvable, and when they are not (assuming the ETH). Our study reveals that this modular constraint lends a surprising richness to these classic, well-studied problems, with interesting broader connections to complexity theory and coding theory. The HORN-SAT case is connected to the covering complexity of polynomials representing the NAND function mod M. The LIN-2 case is tied to the sparsity of polynomials representing the OR function mod M, which in turn has connections to modular weight distribution properties of linear codes and locally decodable codes. In both cases, the analysis of our algorithm as well as the hardness reduction rely on these polynomial representations, highlighting an interesting algebraic common ground between hard cases for our algorithms and the gadgets which show hardness. These new complexity measures of polynomial representations merit further study. The inspiration for our study comes from a recent work by Nägele, Sudakov, and Zenklusen on submodular minimization with a global congruence constraint. Our algorithm for HORN-SAT has strong similarities to their algorithm, and in particular identical kinds of set systems arise in both cases. Our connection to polynomial representations leads to a simpler analysis of such set systems, and also sheds light on (but does not resolve) the complexity of submodular minimization with a congruency requirement modulo a composite M.
@InProceedings{STOC19p590, author = {Joshua Brakensiek and Sivakanth Gopi and Venkatesan Guruswami}, title = {CSPs with Global Modular Constraints: Algorithms and Hardness via Polynomial Representations}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {590--601}, doi = {10.1145/3313276.3316401}, year = {2019}, } Publisher's Version

Goranci, Gramoz 
STOC '19: "Fully Dynamic Spectral Vertex ..."
Fully Dynamic Spectral Vertex Sparsifiers and Applications
David Durfee, Yu Gao, Gramoz Goranci, and Richard Peng (Georgia Tech, USA; University of Vienna, Austria) We study dynamic algorithms for maintaining spectral vertex sparsifiers of graphs with respect to a set of terminals T of our choice. Such objects preserve pairwise resistances, solutions to systems of linear equations, and energy of electrical flows between the terminals in T. We give a data structure that supports insertions and deletions of edges, and terminal additions, all in sublinear time. We then show the applicability of our result to the following problems. (1) A data structure for dynamically maintaining solutions to Laplacian systems L x = b, where L is the graph Laplacian matrix and b is a demand vector. For a bounded-degree, unweighted graph, we support modifications to both L and b while providing access to є-approximations to the energy of routing an electrical flow with demand b, as well as query access to entries of a vector x such that ∥x−L^{†} b ∥_{L} ≤ є ∥L^{†} b ∥_{L}, in Õ(n^{11/12}є^{−5}) expected amortized update and query time. (2) A data structure for maintaining fully dynamic All-Pairs Effective Resistance. For an intermixed sequence of edge insertions, deletions, and resistance queries, our data structure returns a (1 ± є)-approximation to all the resistance queries against an oblivious adversary with high probability. Its expected amortized update and query times are Õ(min(m^{3/4},n^{5/6} є^{−2}) є^{−4}) on an unweighted graph, and Õ(n^{5/6}є^{−6}) on weighted graphs. The key ingredients in these results are (1) the interpretation of the Schur complement as a sum of random walks, (2) a suitable choice of terminals, based on the behavior of these random walks, to make sure that the majority of walks are local even when the graph itself is highly connected, and (3) maintenance of these local walks and numerical solutions using data structures.
These results together represent the first data structures for maintaining key primitives from the Laplacian paradigm for graph algorithms in sublinear time, without assumptions on the underlying graph topologies. The importance of routines such as effective resistance, electrical flows, and Laplacian solvers in the static setting makes us optimistic that some of our components can provide new building blocks for dynamic graph algorithms. @InProceedings{STOC19p914, author = {David Durfee and Yu Gao and Gramoz Goranci and Richard Peng}, title = {Fully Dynamic Spectral Vertex Sparsifiers and Applications}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {914--925}, doi = {10.1145/3313276.3316379}, year = {2019}, } Publisher's Version STOC '19: "Dynamic Low-Stretch Trees ..." Dynamic Low-Stretch Trees via Dynamic Low-Diameter Decompositions Sebastian Forster and Gramoz Goranci (University of Salzburg, Austria; University of Vienna, Austria) Spanning trees of low average stretch on the non-tree edges, as introduced by Alon et al. [SICOMP 1995], are a natural graph-theoretic object. In recent years, they have found significant applications in solvers for symmetric diagonally dominant (SDD) linear systems. In this work, we provide the first dynamic algorithm for maintaining such trees under edge insertions and deletions to the input graph. Our algorithm has update time n^{1/2 + o(1)} and the average stretch of the maintained tree is n^{o(1)}, which matches the stretch in the seminal result of Alon et al. Similar to Alon et al., our dynamic low-stretch tree algorithm employs a dynamic hierarchy of low-diameter decompositions (LDDs). As a major building block we use a dynamic LDD that we obtain by adapting the random-shift clustering of Miller et al. [SPAA 2013] to the dynamic setting.
The major technical challenge in our approach is to control the propagation of updates within our hierarchy of LDDs: each update to one level of the hierarchy could potentially induce several insertions and deletions to the next level of the hierarchy. We achieve this goal by a sophisticated amortization approach. In particular, we give a bound on the number of changes made to the LDD per update to the input graph that is significantly better than the trivial bound implied by the update time. We believe that the dynamic random-shift clustering might be useful for independent applications. One of these applications is the dynamic spanner problem. By combining the random-shift clustering with the recent spanner construction of Elkin and Neiman [SODA 2017], we obtain a fully dynamic algorithm for maintaining a spanner of stretch 2k − 1 and size O(n^{1 + 1/k} log n) with amortized update time O(k log^{2} n) for any integer 2 ≤ k ≤ log n. Compared to the state of the art in this regime, Baswana et al. [TALG 2012], we improve upon the size of the spanner and the update time by a factor of k. @InProceedings{STOC19p377, author = {Sebastian Forster and Gramoz Goranci}, title = {Dynamic Low-Stretch Trees via Dynamic Low-Diameter Decompositions}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {377--388}, doi = {10.1145/3313276.3316381}, year = {2019}, } Publisher's Version
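For a static tree, the quantity these dynamic algorithms maintain, the average stretch over non-tree edges, can be computed directly; a minimal sketch for an unweighted graph (illustration only, not the paper's dynamic data structure):

```python
from collections import deque

def average_stretch(n, edges, tree_edges):
    """Average stretch of the non-tree edges of a spanning tree.

    The stretch of a non-tree edge (u, v) is the length of the u-v
    path in the tree; n vertices, unit edge lengths.
    """
    adj = {v: [] for v in range(n)}
    for u, v in tree_edges:
        adj[u].append(v)
        adj[v].append(u)

    def tree_dist(s, t):
        # BFS in the tree; the unique s-t path length is the distance.
        dist, q = {s: 0}, deque([s])
        while q:
            x = q.popleft()
            if x == t:
                return dist[x]
            for y in adj[x]:
                if y not in dist:
                    dist[y] = dist[x] + 1
                    q.append(y)

    non_tree = [e for e in edges if e not in tree_edges]
    return sum(tree_dist(u, v) for u, v in non_tree) / len(non_tree)
```

For example, on a 4-cycle with the path 0-1-2-3 as the spanning tree, the single non-tree edge has stretch 3.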

Goyal, Navin 
STOC '19: "Non-Gaussian Component Analysis ..."
Non-Gaussian Component Analysis using Entropy Methods
Navin Goyal and Abhishek Shetty (Microsoft Research, India) Non-Gaussian component analysis (NGCA) is a problem in multidimensional data analysis which, since its formulation in 2006, has attracted considerable attention in statistics and machine learning. In this problem, we have a random variable X in n-dimensional Euclidean space. There is an unknown subspace Γ of the n-dimensional Euclidean space such that the orthogonal projection of X onto Γ is standard multidimensional Gaussian and the orthogonal projection of X onto Γ^{⊥}, the orthogonal complement of Γ, is non-Gaussian, in the sense that all its one-dimensional marginals are different from the Gaussian in a certain metric defined in terms of moments. The NGCA problem is to approximate the non-Gaussian subspace Γ^{⊥} given samples of X. Vectors in Γ^{⊥} correspond to ‘interesting’ directions, whereas vectors in Γ correspond to the directions where data is very noisy. The most interesting application of the NGCA model is the case when the magnitude of the noise is comparable to that of the true signal, a setting in which traditional noise reduction techniques such as PCA don’t apply directly. NGCA is also related to dimension reduction and to other data analysis problems such as ICA. NGCA-like problems have been studied in statistics for a long time using techniques such as projection pursuit. We give an algorithm that takes polynomial time in the dimension n and has an inverse polynomial dependence on the error parameter measuring the angle distance between the non-Gaussian subspace and the subspace output by the algorithm. Our algorithm is based on relative entropy as the contrast function and fits under the projection pursuit framework. The techniques we develop for analyzing our algorithm may be of use for other related problems.
@InProceedings{STOC19p840, author = {Navin Goyal and Abhishek Shetty}, title = {Non-Gaussian Component Analysis using Entropy Methods}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {840--851}, doi = {10.1145/3313276.3316309}, year = {2019}, } Publisher's Version

Grandoni, Fabrizio 
STOC '19: "Oblivious Dimension Reduction ..."
Oblivious Dimension Reduction for k-Means: Beyond Subspaces and the Johnson-Lindenstrauss Lemma
Luca Becchetti, Marc Bury, Vincent Cohen-Addad, Fabrizio Grandoni, and Chris Schwiegelshohn (Sapienza University of Rome, Italy; Zalando, Switzerland; CNRS, France; IDSIA, Switzerland) We show that for n points in d-dimensional Euclidean space, a data-oblivious random projection of the columns onto m ∈ O((log k + log log n)ε^{−6} log 1/ε) dimensions is sufficient to approximate the cost of all k-means clusterings up to a multiplicative (1±ε) factor. The previous-best upper bounds on m are O(log n · ε^{−2}), given by a direct application of the Johnson-Lindenstrauss Lemma, and O(kε^{−2}), given by [Cohen et al., STOC’15]. @InProceedings{STOC19p1039, author = {Luca Becchetti and Marc Bury and Vincent Cohen-Addad and Fabrizio Grandoni and Chris Schwiegelshohn}, title = {Oblivious Dimension Reduction for <i>k</i>-Means: Beyond Subspaces and the Johnson-Lindenstrauss Lemma}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1039--1050}, doi = {10.1145/3313276.3316318}, year = {2019}, } Publisher's Version STOC '19: "O(log² k / ..." O(log² k / log log k)-Approximation Algorithm for Directed Steiner Tree: A Tight Quasi-Polynomial-Time Algorithm Fabrizio Grandoni, Bundit Laekhanukit, and Shi Li (IDSIA, Switzerland; Shanghai University of Finance and Economics, China; SUNY Buffalo, USA) In the Directed Steiner Tree (DST) problem we are given an n-vertex directed edge-weighted graph, a root r, and a collection of k terminal nodes. Our goal is to find a minimum-cost subgraph that contains a directed path from r to every terminal. We present an O(log² k / log log k)-approximation algorithm for DST that runs in quasi-polynomial time, i.e., in time n^{polylog(k)}. Under standard complexity assumptions, we show the matching lower bound of Ω(log² k / log log k) for the class of quasi-polynomial-time algorithms, meaning that our approximation ratio is asymptotically the best possible.
This is the first improvement on the DST problem since the classical quasi-polynomial-time O(log³ k)-approximation algorithm by Charikar et al. [SODA’98 & J. Algorithms’99]. (That paper erroneously claims an O(log² k) approximation due to a mistake in prior work.) Our approach is based on two main ingredients. First, we derive an approximation-preserving reduction to the Group Steiner Tree on Trees with Dependency Constraint (GSTTD) problem. Compared to the classic Group Steiner Tree on Trees problem, in GSTTD we are additionally given some dependency constraints among the nodes in the output tree that must be satisfied. The GSTTD instance has quasi-polynomial size and logarithmic height. We remark that, in contrast, Zelikovsky’s height-reduction theorem [Algorithmica’97], used in all prior work on DST, achieves a reduction to a tree instance of the related Group Steiner Tree (GST) problem of similar height, but loses a logarithmic factor in the approximation ratio. Our second ingredient is an LP-rounding algorithm to approximately solve GSTTD instances, which is inspired by the framework developed by [Rothvoß, Preprint’11; Friggstad et al., IPCO’14]. We consider a Sherali-Adams lifting of a proper LP relaxation of GSTTD. Our rounding algorithm proceeds level by level from the root to the leaves, rounding and conditioning each time on a proper subset of label variables. The limited height of the tree and the small number of labels on root-to-leaf paths guarantee that a small enough (namely, polylogarithmic) number of Sherali-Adams lifting levels is sufficient to condition up to the leaves. We believe that our basic strategy of combining label-based reductions with a round-and-condition type of LP-rounding over hierarchies might find applications to other related problems.
@InProceedings{STOC19p253, author = {Fabrizio Grandoni and Bundit Laekhanukit and Shi Li}, title = {<i>O</i>(log² <i>k</i> / log log <i>k</i>)-Approximation Algorithm for Directed Steiner Tree: A Tight Quasi-Polynomial-Time Algorithm}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {253--264}, doi = {10.1145/3313276.3316349}, year = {2019}, } Publisher's Version STOC '19: "Dynamic Set Cover: Improved ..." Dynamic Set Cover: Improved Algorithms and Lower Bounds Amir Abboud, Raghavendra Addanki, Fabrizio Grandoni, Debmalya Panigrahi, and Barna Saha (IBM Research, USA; University of Massachusetts at Amherst, USA; IDSIA, Switzerland; Duke University, USA) We give new upper and lower bounds for the dynamic set cover problem. First, we give a (1+є)f-approximation for fully dynamic set cover in O(f^{2} log n/є^{5}) (amortized) update time, for any є > 0, where f is the maximum number of sets that an element belongs to. In the decremental setting, the update time can be improved to O(f^{2}/є^{5}), while still obtaining a (1+є)f-approximation. These are the first algorithms that obtain an approximation factor linear in f for dynamic set cover, thereby almost matching the best bounds known in the offline setting and improving upon the previous best approximation of O(f^{2}) in the dynamic setting. To complement our upper bounds, we also show that a linear dependence of the update time on f is necessary unless we can tolerate much worse approximation factors. Using the recent distributed PCP framework, we show that any dynamic set cover algorithm that has an amortized update time of O(f^{1−є}) must have an approximation factor that is Ω(n^{δ}) for some constant δ>0 under the Strong Exponential Time Hypothesis.
@InProceedings{STOC19p114, author = {Amir Abboud and Raghavendra Addanki and Fabrizio Grandoni and Debmalya Panigrahi and Barna Saha}, title = {Dynamic Set Cover: Improved Algorithms and Lower Bounds}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {114--125}, doi = {10.1145/3313276.3316376}, year = {2019}, } Publisher's Version
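The static f-approximation benchmark that these dynamic bounds are measured against can be sketched with the classic rule: pick any uncovered element and take every set containing it; since an optimum must cover that element with one of those at most f sets, the output is within a factor f of optimal. A static-only sketch (the paper's contribution, maintaining such a guarantee under updates, is not attempted here):

```python
def f_approx_set_cover(universe, sets):
    """Static f-approximation for set cover.

    Repeatedly pop an uncovered element and add every set that
    contains it; each round is "paid for" by one element that any
    optimal solution must also cover with one of those <= f sets.
    """
    uncovered = set(universe)
    chosen = set()
    while uncovered:
        e = uncovered.pop()
        hitting = [i for i, s in enumerate(sets) if e in s]
        chosen.update(hitting)
        for i in hitting:
            uncovered -= sets[i]
    return sorted(chosen)
```

The returned indices always cover every coverable element, using at most f times the optimal number of sets.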

Gronemann, Martin 
STOC '19: "Planar Graphs of Bounded Degree ..."
Planar Graphs of Bounded Degree Have Bounded Queue Number
Michael Bekos, Henry Förster, Martin Gronemann, Tamara Mchedlidze, Fabrizio Montecchiani, Chrysanthi Raftopoulou, and Torsten Ueckerdt (University of Tübingen, Germany; University of Cologne, Germany; KIT, Germany; University of Perugia, Italy; National Technical University of Athens, Greece) A queue layout of a graph consists of a linear order of its vertices and a partition of its edges into queues, so that no two independent edges of the same queue are nested. The queue number of a graph is the minimum number of queues required by any of its queue layouts. A long-standing conjecture by Heath, Leighton, and Rosenberg states that the queue number of planar graphs is bounded. This conjecture has been partially settled in the positive for several subfamilies of planar graphs (most of which have bounded treewidth). In this paper, we make a further important step towards settling this conjecture. We prove that planar graphs of bounded degree (which may have unbounded treewidth) have bounded queue number. A notable implication of this result is that every planar graph of bounded degree admits a three-dimensional straight-line grid drawing in linear volume. Further implications are that every planar graph of bounded degree has bounded track number, and that every k-planar graph (i.e., every graph that can be drawn in the plane with at most k crossings per edge) of bounded degree has bounded queue number. @InProceedings{STOC19p176, author = {Michael Bekos and Henry Förster and Martin Gronemann and Tamara Mchedlidze and Fabrizio Montecchiani and Chrysanthi Raftopoulou and Torsten Ueckerdt}, title = {Planar Graphs of Bounded Degree Have Bounded Queue Number}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {176--184}, doi = {10.1145/3313276.3316324}, year = {2019}, } Publisher's Version
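The defining non-nesting condition of a queue layout is straightforward to verify for a concrete layout; a minimal checker sketch:

```python
from itertools import combinations

def is_queue_layout(order, queues):
    """Check that no two independent edges in the same queue nest.

    order: list of vertices giving the linear order; queues: list of
    edge lists. Edges with positions a < c < d < b are nested.
    """
    pos = {v: i for i, v in enumerate(order)}
    for q in queues:
        for (u1, v1), (u2, v2) in combinations(q, 2):
            a, b = sorted((pos[u1], pos[v1]))
            c, d = sorted((pos[u2], pos[v2]))
            if (a < c and d < b) or (c < a and b < d):
                return False
    return True
```

Crossing edges in the same queue are allowed (only nesting is forbidden), which is what distinguishes queues from stacks in book-embedding terminology.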

Guo, Chenghao 
STOC '19: "Settling the Sample Complexity ..."
Settling the Sample Complexity of Single-Parameter Revenue Maximization
Chenghao Guo, Zhiyi Huang, and Xinzhi Zhang (Tsinghua University, China; University of Hong Kong, China) This paper settles the sample complexity of single-parameter revenue maximization by showing matching upper and lower bounds, up to a polylogarithmic factor, for all families of value distributions that have been considered in the literature. The upper bounds are unified under a novel framework, which builds on the strong revenue monotonicity by Devanur, Huang, and Psomas (STOC 2016) and an information-theoretic argument. This is fundamentally different from the previous approaches, which rely on either constructing an є-net of the mechanism space, explicitly or implicitly via statistical learning theory, or learning an approximately accurate version of the virtual values. To our knowledge, this is the first time information-theoretic arguments are used to show sample complexity upper bounds, instead of lower bounds. Our lower bounds are also unified under a meta construction of hard instances. @InProceedings{STOC19p662, author = {Chenghao Guo and Zhiyi Huang and Xinzhi Zhang}, title = {Settling the Sample Complexity of Single-Parameter Revenue Maximization}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {662--673}, doi = {10.1145/3313276.3316325}, year = {2019}, } Publisher's Version Info
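In the simplest single-bidder instance of this setting, the empirical-revenue-maximization idea studied in the sample-complexity literature amounts to choosing the reserve price that maximizes revenue on the samples; a toy sketch (the paper's framework, via strong revenue monotonicity, is more general than this):

```python
def empirical_reserve(samples):
    """Best posted price against the empirical value distribution.

    The revenue of reserve r is r times the fraction of sampled
    values that are at least r; some sampled value is optimal, so
    it suffices to search over the samples themselves.
    """
    n = len(samples)

    def revenue(r):
        return r * sum(v >= r for v in samples) / n

    return max(samples, key=revenue)
```

How many samples are needed for such empirical estimates to be near-optimal against the true distribution is precisely the sample-complexity question the paper settles.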

Gupta, Anupam 
STOC '19: "The Number of Minimum k-Cuts: ..."
The Number of Minimum k-Cuts: Improving the Karger-Stein Bound
Anupam Gupta, Euiwoong Lee, and Jason Li (Carnegie Mellon University, USA; New York University, USA) Given an edge-weighted graph, how many minimum k-cuts can it have? This is a fundamental question in the intersection of algorithms, extremal combinatorics, and graph theory. It is particularly interesting in that the best known bounds are algorithmic: they stem from algorithms that compute the minimum k-cut. In 1994, Karger and Stein obtained a randomized contraction algorithm that finds a minimum k-cut in O(n^{(2−o(1))k}) time. It can also enumerate all such k-cuts in the same running time, establishing a corresponding extremal bound of O(n^{(2−o(1))k}). Since then, the algorithmic side of the minimum k-cut problem has seen much progress, leading to a deterministic algorithm based on a tree packing result of Thorup, which enumerates all minimum k-cuts in the same asymptotic running time, and gives an alternate proof of the O(n^{(2−o(1))k}) bound. However, beating the Karger–Stein bound, even for computing a single minimum k-cut, has remained out of reach. In this paper, we give an algorithm to enumerate all minimum k-cuts in O(n^{(1.981+o(1))k}) time, breaking the algorithmic and extremal barriers for enumerating minimum k-cuts. To obtain our result, we combine ideas from both the Karger–Stein and Thorup results, and draw a novel connection between minimum k-cut and extremal set theory. In particular, we give and use tighter bounds on the size of set systems with bounded dual VC-dimension, which may be of independent interest. @InProceedings{STOC19p229, author = {Anupam Gupta and Euiwoong Lee and Jason Li}, title = {The Number of Minimum <i>k</i>-Cuts: Improving the Karger-Stein Bound}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {229--240}, doi = {10.1145/3313276.3316395}, year = {2019}, } Publisher's Version 
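The randomized contraction idea behind the Karger–Stein bound can be illustrated, for the basic case k=2 on unweighted graphs, by a sketch of Karger's contraction algorithm. This is textbook background to the abstract, not the paper's new algorithm; all names are ours.

```python
import random

def karger_min_cut(edge_list, n, trials=300, seed=0):
    """Monte-Carlo sketch of Karger's random contraction (k=2, unweighted):
    repeatedly contract random edges until two super-vertices remain, and
    keep the smallest cut seen over many independent trials."""
    rng = random.Random(seed)
    best = float('inf')
    for _ in range(trials):
        parent = list(range(n))  # union-find over the n vertices

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path halving
                x = parent[x]
            return x

        edges = list(edge_list)
        rng.shuffle(edges)  # contracting in shuffled order = random contraction
        components = n
        for u, v in edges:
            if components == 2:
                break
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
                components -= 1
        # Edges whose endpoints ended in different super-vertices form the cut.
        cut = sum(1 for u, v in edge_list if find(u) != find(v))
        best = min(best, cut)
    return best
```

On two triangles joined by a single bridge edge, the minimum cut is 1, and enough trials find it with overwhelming probability.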

Guruswami, Venkatesan 
STOC '19: "Bridging between 0/1 and Linear ..."
Bridging between 0/1 and Linear Programming via Random Walks
Joshua Brakensiek and Venkatesan Guruswami (Stanford University, USA; Carnegie Mellon University, USA) Under the Strong Exponential Time Hypothesis, an integer linear program with n Boolean-valued variables and m equations cannot be solved in c^{n} time for any constant c < 2. If the domain of the variables is relaxed to [0,1], the associated linear program can of course be solved in polynomial time. In this work, we give a natural algorithmic bridging between these extremes of 0/1 and linear programming. Specifically, for any subset (finite union of intervals) E ⊂ [0,1] containing {0,1}, we give a random-walk-based algorithm with runtime O_{E}((2−measure(E))^{n} poly(n,m)) that finds a solution in E^{n} to any n-variable linear program with m constraints that is feasible over {0,1}^{n}. Note that as E expands from {0,1} to [0,1], the runtime improves smoothly from 2^{n} to polynomial. Taking E = [0,1/k) ∪ (1−1/k,1] in our result yields as a corollary a randomized (2−2/k)^{n} poly(n) time algorithm for k-SAT. While our approach bears some high-level resemblance to Schöning’s beautiful algorithm, our general algorithm is based on a more sophisticated random walk that incorporates several new ingredients, such as a multiplicative potential to measure progress, a judicious choice of starting distribution, and a time-varying distribution for the evolution of the random walk that is itself computed via an LP at each step (a solution to which is guaranteed based on the minimax theorem). 
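For contrast with the LP-guided walk described above, here is a minimal sketch of Schöning's classic random walk for k-SAT, which the abstract uses as a reference point. This is the well-known baseline, not the paper's algorithm; clauses use DIMACS-style signed-integer literals and all names are ours.

```python
import random

def schoening_walk(clauses, n, tries=200, seed=1):
    """Schöning-style random walk: start from a uniform random assignment
    and repeatedly flip a random variable of some unsatisfied clause.
    `clauses` is a list of tuples of nonzero ints; literal l means
    variable |l| set to (l > 0)."""
    rng = random.Random(seed)

    def satisfied(clause, assign):
        return any(assign[abs(l) - 1] == (l > 0) for l in clause)

    for _ in range(tries):
        assign = [rng.random() < 0.5 for _ in range(n)]  # fresh random restart
        for _ in range(3 * n):  # Schöning uses O(n) flips per restart
            unsat = [c for c in clauses if not satisfied(c, assign)]
            if not unsat:
                return assign
            lit = rng.choice(rng.choice(unsat))  # random literal of a random unsat clause
            assign[abs(lit) - 1] = not assign[abs(lit) - 1]
        if all(satisfied(c, assign) for c in clauses):
            return assign
    return None  # satisfying assignment not found (formula may be unsatisfiable)
```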
Plugging the LP algorithm into our earlier polymorphic framework yields fast exponential algorithms for any CSP (like k-SAT, 1-in-3-SAT, NAE k-SAT) that admits so-called “threshold partial polymorphisms.” @InProceedings{STOC19p568, author = {Joshua Brakensiek and Venkatesan Guruswami}, title = {Bridging between 0/1 and Linear Programming via Random Walks}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {568--577}, doi = {10.1145/3313276.3316347}, year = {2019}, } Publisher's Version STOC '19: "An Exponential Lower Bound ..." An Exponential Lower Bound on the Sub-Packetization of MSR Codes Omar Alrabiah and Venkatesan Guruswami (Carnegie Mellon University, USA) An (n,k,ℓ)-vector MDS code is an F-linear subspace of (F^{ℓ})^{n} (for some field F) of dimension kℓ, such that any k (vector) symbols of the codeword suffice to determine the remaining r=n−k (vector) symbols. The length ℓ of each codeword symbol is called the sub-packetization of the code. Such a code is called minimum storage regenerating (MSR) if any single symbol of a codeword can be recovered by downloading ℓ/r field elements (which is known to be the least possible) from each of the other symbols. MSR codes are attractive for use in distributed storage systems, and by now a variety of ingenious constructions of MSR codes are available. However, they all suffer from exponentially large sub-packetization ℓ ≳ r^{k/r}. Our main result is an almost tight lower bound showing that for an MSR code, one must have ℓ ≥ exp(Ω(k/r)). Previously, a lower bound of ≈ exp(√(k/r)), and a tight lower bound for a restricted class of “optimal access” MSR codes, were known. Our work settles a central open question concerning MSR codes that has received much attention. Further, our proof is really short, hinging on one key definition that is somewhat inspired by Galois theory. 
@InProceedings{STOC19p979, author = {Omar Alrabiah and Venkatesan Guruswami}, title = {An Exponential Lower Bound on the Sub-Packetization of MSR Codes}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {979--985}, doi = {10.1145/3313276.3316387}, year = {2019}, } Publisher's Version STOC '19: "CSPs with Global Modular Constraints: ..." CSPs with Global Modular Constraints: Algorithms and Hardness via Polynomial Representations Joshua Brakensiek, Sivakanth Gopi, and Venkatesan Guruswami (Stanford University, USA; Microsoft Research, USA; Carnegie Mellon University, USA) We study the complexity of Boolean constraint satisfaction problems (CSPs) when the assignment must have Hamming weight in some congruence class modulo M, for various choices of the modulus M. Due to the known classification of tractable Boolean CSPs, this mainly reduces to the study of three cases: 2-SAT, HORN-SAT, and LIN-2 (linear equations mod 2). We classify the moduli M for which these respective problems are polynomial-time solvable, and when they are not (assuming the ETH). Our study reveals that this modular constraint lends a surprising richness to these classic, well-studied problems, with interesting broader connections to complexity theory and coding theory. The HORN-SAT case is connected to the covering complexity of polynomials representing the NAND function mod M. The LIN-2 case is tied to the sparsity of polynomials representing the OR function mod M, which in turn has connections to modular weight distribution properties of linear codes and locally decodable codes. In both cases, the analysis of our algorithm as well as the hardness reduction rely on these polynomial representations, highlighting an interesting algebraic common ground between hard cases for our algorithms and the gadgets which show hardness. These new complexity measures of polynomial representations merit further study. 
The inspiration for our study comes from a recent work by Nägele, Sudakov, and Zenklusen on submodular minimization with a global congruence constraint. Our algorithm for HORN-SAT has strong similarities to their algorithm, and in particular identical kinds of set systems arise in both cases. Our connection to polynomial representations leads to a simpler analysis of such set systems, and also sheds light on (but does not resolve) the complexity of submodular minimization with a congruency requirement modulo a composite M. @InProceedings{STOC19p590, author = {Joshua Brakensiek and Sivakanth Gopi and Venkatesan Guruswami}, title = {CSPs with Global Modular Constraints: Algorithms and Hardness via Polynomial Representations}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {590--601}, doi = {10.1145/3313276.3316401}, year = {2019}, } Publisher's Version 

Hadar, Uri 
STOC '19: "Communication Complexity of ..."
Communication Complexity of Estimating Correlations
Uri Hadar, Jingbo Liu, Yury Polyanskiy, and Ofer Shayevitz (Tel Aviv University, Israel; Massachusetts Institute of Technology, USA) We characterize the communication complexity of the following distributed estimation problem. Alice and Bob observe infinitely many i.i.d. copies of ρ-correlated unit-variance (Gaussian or ±1 binary) random variables, with unknown ρ ∈ [−1,1]. By interactively exchanging k bits, Bob wants to produce an estimate ρ̂ of ρ. We show that the best possible performance (optimized over the interaction protocol Π and the estimator ρ̂) satisfies inf_{Π,ρ̂} sup_{ρ} E[(ρ̂−ρ)^{2}] = k^{−1}(1/(2 ln 2) + o(1)). Curiously, the number of samples in our achievability scheme is exponential in k; by contrast, a naive scheme exchanging k samples achieves the same Ω(1/k) rate but with a suboptimal prefactor. Our protocol achieving optimal performance is one-way (non-interactive). We also prove the Ω(1/k) bound even when ρ is restricted to any small open subinterval of [−1,1] (i.e., a local minimax lower bound). Our proof techniques rely on symmetric strong data-processing inequalities and various tensorization techniques from information-theoretic interactive common-randomness extraction. Our results also imply an Ω(n) lower bound on the information complexity of the Gap-Hamming problem, for which we show a direct information-theoretic proof. @InProceedings{STOC19p792, author = {Uri Hadar and Jingbo Liu and Yury Polyanskiy and Ofer Shayevitz}, title = {Communication Complexity of Estimating Correlations}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {792--803}, doi = {10.1145/3313276.3316332}, year = {2019}, } Publisher's Version 
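The naive scheme exchanging k samples that the abstract contrasts with can be simulated for the ±1 binary case: Alice sends k of her bits, Bob averages products with his own correlated bits, and the squared error decays like O(1/k) (with the suboptimal prefactor the abstract mentions). A toy simulation, with all names our own:

```python
import random

def naive_correlation_protocol(rho, k, seed=0):
    """Simulate the naive k-bit scheme: Alice communicates k of her +/-1
    samples; Bob multiplies each by his own rho-correlated sample and
    averages.  E[a*b] = rho, so the estimate has O(1/k) squared error."""
    rng = random.Random(seed)
    total = 0
    for _ in range(k):
        a = 1 if rng.random() < 0.5 else -1
        # Bob's bit agrees with Alice's with probability (1 + rho) / 2.
        b = a if rng.random() < (1 + rho) / 2 else -a
        total += a * b
    return total / k
```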

Haeupler, Bernhard 
STOC '19: "Near-Linear Time Insertion-Deletion ..."
Near-Linear Time Insertion-Deletion Codes and (1+ε)-Approximating Edit Distance via Indexing
Bernhard Haeupler, Aviad Rubinstein, and Amirbehshad Shahrasbi (Carnegie Mellon University, USA; Stanford University, USA) We introduce fast-decodable indexing schemes for edit distance which can be used to speed up edit distance computations to near-linear time if one of the strings is indexed by an indexing string I. In particular, for every length n and every ε > 0, one can in near-linear time construct a string I ∈ Σ′^{n} with |Σ′| = O_{ε}(1), such that indexing any string S ∈ Σ^{n}, symbol by symbol, with I results in a string S′ ∈ Σ″^{n}, where Σ″ = Σ × Σ′, for which edit distance computations are easy; i.e., one can compute a (1+ε)-approximation of the edit distance between S′ and any other string in O(n log n) time. Our indexing schemes can be used to improve the decoding complexity of state-of-the-art error-correcting codes for insertions and deletions. In particular, they lead to near-linear time decoding algorithms for the insertion-deletion codes of [Haeupler, Shahrasbi; STOC ‘17] and faster decoding algorithms for the list-decodable insertion-deletion codes of [Haeupler, Shahrasbi, Sudan; ICALP ‘18]. Interestingly, the latter codes are a crucial ingredient in the construction of fast-decodable indexing schemes. @InProceedings{STOC19p697, author = {Bernhard Haeupler and Aviad Rubinstein and Amirbehshad Shahrasbi}, title = {Near-Linear Time Insertion-Deletion Codes and (1+<i>ε</i>)-Approximating Edit Distance via Indexing}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {697--708}, doi = {10.1145/3313276.3316371}, year = {2019}, } Publisher's Version 

Hajiaghayi, MohammadTaghi 
STOC '19: "Lower Bounds for External ..."
Lower Bounds for External Memory Integer Sorting via Network Coding
Alireza Farhadi, MohammadTaghi Hajiaghayi, Kasper Green Larsen, and Elaine Shi (University of Maryland, USA; Aarhus University, Denmark; Cornell University, USA) Sorting extremely large datasets is a frequently occurring task in practice. These datasets are usually much larger than the computer’s main memory; thus external memory sorting algorithms, first introduced by Aggarwal and Vitter (1988), are often used. The complexity of comparison-based external memory sorting has been understood for decades by now; however, the situation remains elusive if we assume the keys to be sorted are integers. In internal memory, one can sort a set of n integer keys of Θ(lg n) bits each in O(n) time using the classic Radix Sort algorithm; however, in external memory, no integer sorting algorithms faster than the simple comparison-based ones are known. Whether such algorithms exist has remained a central open problem in external memory algorithms for more than three decades. In this paper, we present a tight conditional lower bound on the complexity of external memory sorting of integers. Our lower bound is based on a famous conjecture in network coding by Li and Li (2004), who conjectured that network coding cannot help anything beyond the standard multicommodity flow rate in undirected graphs. The only previous work connecting the Li and Li conjecture to lower bounds for algorithms is due to Adler et al. (2006). Adler et al. indeed obtain relatively simple lower bounds for oblivious algorithms (where the memory access pattern is fixed and independent of the input data). Unfortunately, obliviousness is a strong limitation, especially for integer sorting: we show that the Li and Li conjecture implies an Ω(n log n) lower bound for internal memory oblivious sorting when the keys are Θ(lg n) bits. This is in sharp contrast to the classic (non-oblivious) Radix Sort algorithm. 
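For reference, the classic O(n) internal-memory Radix Sort that the abstract contrasts with can be sketched as a sequence of LSD counting-sort passes over fixed-width, non-negative integer keys (an illustrative sketch; the bucket-list form below trades the usual counting array for clarity):

```python
def radix_sort(keys, bits=32, radix_bits=8):
    """LSD radix sort for non-negative integers of at most `bits` bits:
    one stable bucketing pass per `radix_bits`-wide digit, so the number
    of passes is constant for Theta(lg n)-bit keys."""
    mask = (1 << radix_bits) - 1
    for shift in range(0, bits, radix_bits):
        buckets = [[] for _ in range(1 << radix_bits)]
        for x in keys:
            buckets[(x >> shift) & mask].append(x)  # stable per-digit bucketing
        keys = [x for b in buckets for x in b]
    return keys
```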
Indeed, going beyond obliviousness is highly nontrivial; we need to introduce several new methods and involved techniques, which are of their own interest, to obtain our tight lower bound for external memory integer sorting. @InProceedings{STOC19p997, author = {Alireza Farhadi and MohammadTaghi Hajiaghayi and Kasper Green Larsen and Elaine Shi}, title = {Lower Bounds for External Memory Integer Sorting via Network Coding}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {997--1008}, doi = {10.1145/3313276.3316337}, year = {2019}, } Publisher's Version STOC '19: "1+ε Approximation ..." 1+ε Approximation of Tree Edit Distance in Quadratic Time Mahdi Boroujeni, Mohammad Ghodsi, MohammadTaghi Hajiaghayi, and Saeed Seddighin (Sharif University of Technology, Iran; Institute for Research in Fundamental Sciences, Iran; University of Maryland, USA) Edit distance is one of the most fundamental problems in computer science. Tree edit distance is a natural generalization of edit distance to ordered rooted trees. Such a generalization extends the applications of edit distance to areas such as computational biology, structured data analysis (e.g., XML), image analysis, and compiler optimization. Perhaps the most notable application of tree edit distance is in the analysis of RNA molecules in computational biology, where the secondary structure of RNA is typically represented as a rooted tree. The best-known solution for tree edit distance runs in cubic time. Recently, Bringmann et al. showed that an O(n^{2.99}) algorithm for weighted tree edit distance is unlikely, by proving a conditional lower bound on the computational complexity of tree edit distance. This shows a substantial gap between the computational complexity of tree edit distance and that of edit distance, for which a simple dynamic program solves the problem in quadratic time. In this work, we give the first nontrivial approximation algorithms for tree edit distance. 
Our main result is a quadratic time approximation scheme for tree edit distance that approximates the solution within a factor of 1+ε for any constant ε > 0. @InProceedings{STOC19p709, author = {Mahdi Boroujeni and Mohammad Ghodsi and MohammadTaghi Hajiaghayi and Saeed Seddighin}, title = {1+<i>ε</i> Approximation of Tree Edit Distance in Quadratic Time}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {709--720}, doi = {10.1145/3313276.3316388}, year = {2019}, } Publisher's Version 
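The "simple dynamic program" for string edit distance that the abstract contrasts with the cubic tree case is the classic quadratic Wagner–Fischer recurrence. A sketch with a rolling row (background material, not the paper's tree algorithm):

```python
def edit_distance(s, t):
    """Quadratic-time string edit distance (insert/delete/substitute cost 1),
    keeping only one DP row plus the diagonal cell."""
    m, n = len(s), len(t)
    dp = list(range(n + 1))  # distances from empty prefix of s to prefixes of t
    for i in range(1, m + 1):
        prev_diag, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,                           # delete s[i-1]
                        dp[j - 1] + 1,                       # insert t[j-1]
                        prev_diag + (s[i - 1] != t[j - 1]))  # substitute/match
            prev_diag = cur
    return dp[n]
```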

Hansen, Thomas Dueholm 
STOC '19: "Faster k-SAT Algorithms ..."
Faster k-SAT Algorithms using Biased-PPSZ
Thomas Dueholm Hansen, Haim Kaplan, Or Zamir, and Uri Zwick (University of Copenhagen, Denmark; Tel Aviv University, Israel) The PPSZ algorithm, due to Paturi, Pudlák, Saks, and Zane, is currently the fastest known algorithm for the k-SAT problem, for every k > 3. For 3-SAT, a tiny improvement over PPSZ was obtained by Hertli. We introduce a biased version of the PPSZ algorithm using which we obtain an improvement over PPSZ for every k ≥ 3. For k = 3 we also improve on Hertli’s result and get a much more noticeable improvement over PPSZ, though still relatively small. In particular, for Unique 3-SAT, we improve the current bound from 1.308^{n} to 1.307^{n}. @InProceedings{STOC19p578, author = {Thomas Dueholm Hansen and Haim Kaplan and Or Zamir and Uri Zwick}, title = {Faster <i>k</i>-SAT Algorithms using Biased-PPSZ}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {578--589}, doi = {10.1145/3313276.3316359}, year = {2019}, } Publisher's Version 
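For background, the core PPSZ idea (guess variables in a uniformly random order, letting forced values propagate) can be caricatured as follows. This stripped-down sketch uses only unit propagation where full PPSZ uses bounded-width resolution, and says nothing about the biasing of the paper; all names are ours.

```python
import random

def ppsz_round(clauses, n, rng):
    """One round of a stripped-down PPSZ: process variables 1..n in random
    order; a variable forced by a current unit clause gets that value,
    otherwise it is set uniformly at random.  Returns None on contradiction."""
    clauses = [set(c) for c in clauses]
    assign = {}
    for v in rng.sample(range(1, n + 1), n):  # random permutation of variables
        units = {next(iter(c)) for c in clauses if len(c) == 1}
        if v in units:
            val = True
        elif -v in units:
            val = False
        else:
            val = rng.random() < 0.5
        assign[v] = val
        lit = v if val else -v
        remaining = []
        for c in clauses:
            if lit in c:
                continue        # clause satisfied; drop it
            c = c - {-lit}      # falsified literal disappears
            if not c:
                return None     # empty clause: contradiction, restart
            remaining.append(c)
        clauses = remaining
    return assign

def ppsz(clauses, n, tries=500, seed=0):
    rng = random.Random(seed)
    for _ in range(tries):
        assign = ppsz_round(clauses, n, rng)
        if assign is not None:
            return assign
    return None
```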

He, Kun 
STOC '19: "Quantum Lovász Local Lemma: ..."
Quantum Lovász Local Lemma: Shearer’s Bound Is Tight
Kun He, Qian Li, Xiaoming Sun, and Jiapeng Zhang (Institute of Computing Technology at Chinese Academy of Sciences, China; University of Chinese Academy of Sciences, China; Shenzhen Institute of Computing Sciences, China; Shenzhen University, China; University of California at San Diego, USA) The Lovász Local Lemma (LLL) is a very powerful tool in combinatorics and probability theory for showing that all “bad” events can be avoided under some “weakly dependent” condition. Over the last decades, the algorithmic aspect of the LLL has also attracted much attention in theoretical computer science. A tight criterion under which the abstract version of the LLL (ALLL) holds was given by Shearer. It turns out that Shearer’s bound is generally not tight for the variable version of the LLL (VLLL). Recently, Ambainis et al. introduced a quantum version of the LLL (QLLL), which was then shown to be powerful for the quantum satisfiability problem. In this paper, we prove that Shearer’s bound is tight for the QLLL, i.e., the relative dimension of the smallest satisfying subspace is completely characterized by the independent set polynomial, affirming a conjecture proposed by Sattath et al. Our result also shows the tightness of Gilyén and Sattath’s algorithm, and implies that the lattice gas partition function fully characterizes quantum satisfiability for almost all Hamiltonians with large enough qudits. The commuting LLL (CLLL), the LLL for commuting local Hamiltonians, which are widely studied in the literature, is also investigated here. We prove that the tight regions of the CLLL and the QLLL are different in general. This result might imply that it is possible to design an algorithm for the CLLL which is still efficient beyond Shearer’s bound. In applications of LLLs, the symmetric cases are the most common, i.e., the events all have the same probability and the Hamiltonians all have the same relative dimension. We give the first lower bound on the gap between the symmetric VLLL and Shearer’s bound. 
Our result can be viewed as a quantitative study of the separation between quantum and classical constraint satisfaction problems. Additionally, we obtain similar results for the symmetric CLLL. As an application, we give lower bounds on the critical thresholds of the VLLL and the CLLL for several of the most common lattices. @InProceedings{STOC19p461, author = {Kun He and Qian Li and Xiaoming Sun and Jiapeng Zhang}, title = {Quantum Lovász Local Lemma: Shearer’s Bound Is Tight}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {461--472}, doi = {10.1145/3313276.3316392}, year = {2019}, } Publisher's Version 
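Shearer's criterion, which the abstract proves tight for the QLLL, is stated via the independent set polynomial of the dependency graph: for equal event probability p, one needs Σ_{S independent} (−p)^{|S|} > 0 (on every induced subgraph). A brute-force evaluation for toy dependency graphs, purely our own illustration (the positivity check over induced subgraphs is left to the caller):

```python
from itertools import combinations

def independence_poly_at(graph, p):
    """Evaluate the independent-set polynomial of a dependency graph at -p,
    i.e. the sum over independent sets S of (-p)^|S|.  `graph` maps each
    vertex to its neighbour set.  Exponential time; toy sizes only."""
    verts = sorted(graph)
    total = 0.0
    for r in range(len(verts) + 1):
        for S in combinations(verts, r):
            # S is independent iff no two of its vertices are adjacent.
            if all(v not in graph[u] for u, v in combinations(S, 2)):
                total += (-p) ** r
    return total
```

For a triangle dependency graph the polynomial is 1 − 3p (the only independent sets are the empty set and singletons), so Shearer's condition fails beyond p = 1/3.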

Helmuth, Tyler 
STOC '19: "Algorithmic Pirogov-Sinai ..."
Algorithmic Pirogov-Sinai Theory
Tyler Helmuth, Will Perkins, and Guus Regts (University of Bristol, UK; University of Illinois at Chicago, USA; University of Amsterdam, Netherlands) We develop an efficient algorithmic approach for approximate counting and sampling in the low-temperature regime of a broad class of statistical physics models on finite subsets of the lattice ℤ^{d} and on the torus (ℤ/nℤ)^{d}. Our approach is based on combining contour representations from Pirogov–Sinai theory with Barvinok’s approach to approximate counting using truncated Taylor series. Some consequences of our main results include an FPTAS for approximating the partition function of the hard-core model at sufficiently high fugacity on subsets of ℤ^{d} with appropriate boundary conditions, and an efficient sampling algorithm for the ferromagnetic Potts model on the discrete torus (ℤ/nℤ)^{d} at sufficiently low temperature. @InProceedings{STOC19p1009, author = {Tyler Helmuth and Will Perkins and Guus Regts}, title = {Algorithmic Pirogov-Sinai Theory}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1009--1020}, doi = {10.1145/3313276.3316305}, year = {2019}, } Publisher's Version 

Henzinger, Monika 
STOC '19: "Distributed Edge Connectivity ..."
Distributed Edge Connectivity in Sublinear Time
Mohit Daga, Monika Henzinger, Danupon Nanongkai, and Thatchaphol Saranurak (KTH, Sweden; University of Vienna, Austria; Toyota Technological Institute at Chicago, USA) We present the first sublinear-time algorithm that computes the edge connectivity λ of a distributed message-passing network exactly in the CONGEST model, as long as the network contains no parallel edges. Our algorithm takes Õ(n^{1−1/353}D^{1/353}+n^{1−1/706}) time to compute λ and a cut of cardinality λ with high probability, where n and D are the number of nodes and the diameter of the network, respectively, and Õ hides polylogarithmic factors. This running time is sublinear in n (i.e., Õ(n^{1−ε})) whenever D is. Previous sublinear-time distributed algorithms can solve this problem either (i) exactly only when λ=O(n^{1/8−ε}) [Thurimella PODC’95; Pritchard, Thurimella, ACM Trans. Algorithms’11; Nanongkai, Su, DISC’14] or (ii) approximately [Ghaffari, Kuhn, DISC’13; Nanongkai, Su, DISC’14]. To achieve this we develop and combine several new techniques. First, we design the first distributed algorithm that can compute a k-edge-connectivity certificate for any k=O(n^{1−ε}) in time Õ(√nk+D). The previous sublinear-time algorithm can do so only when k=o(√n) [Thurimella PODC’95]. In fact, our algorithm can be turned into the first parallel algorithm with polylogarithmic depth and near-linear work. Previous near-linear work algorithms are essentially sequential, and previous polylogarithmic-depth algorithms require Ω(mk) work in the worst case (e.g. [Karger, Motwani, STOC’93]). 
Second, we show that by combining the recent distributed expander decomposition technique of [Chang, Pettie, Zhang, SODA’19] with techniques from the sequential deterministic edge connectivity algorithm of [Kawarabayashi, Thorup, STOC’15], we can decompose the network into a sublinear number of clusters with small average diameter and without any min-cut separating a cluster (except the “trivial” ones). This leads to a simplification of the Kawarabayashi–Thorup framework (except that we are randomized while they are deterministic). This might make this framework more useful in other models of computation. Finally, by extending the tree packing technique from [Karger STOC’96], we can find the minimum cut in time proportional to the number of components. As a byproduct of this technique, we obtain an Õ(n)-time algorithm for computing the exact minimum cut of weighted graphs. @InProceedings{STOC19p343, author = {Mohit Daga and Monika Henzinger and Danupon Nanongkai and Thatchaphol Saranurak}, title = {Distributed Edge Connectivity in Sublinear Time}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {343--354}, doi = {10.1145/3313276.3316346}, year = {2019}, } Publisher's Version 

Holmgren, Justin 
STOC '19: "Fiat-Shamir: From Practice ..."
Fiat-Shamir: From Practice to Theory
Ran Canetti, Yilei Chen, Justin Holmgren, Alex Lombardi, Guy N. Rothblum, Ron D. Rothblum, and Daniel Wichs (Boston University, USA; Tel Aviv University, Israel; Visa Research, USA; Princeton University, USA; Massachusetts Institute of Technology, USA; Weizmann Institute of Science, Israel; Technion, Israel; Northeastern University, USA) We give new instantiations of the Fiat-Shamir transform using explicit, efficiently computable hash functions. We improve over prior work by reducing the security of these protocols to qualitatively simpler and weaker computational hardness assumptions. As a consequence of our framework, we obtain the following concrete results. 1) There exists a succinct publicly verifiable non-interactive argument system for log-space uniform computations, under the assumption that any one of a broad class of fully homomorphic encryption (FHE) schemes has almost optimal security against polynomial-time adversaries. The class includes all FHE schemes in the literature that are based on the learning with errors (LWE) problem. 2) There exists a non-interactive zero-knowledge argument system for NP in the common reference string model, under either of the following two assumptions: (i) almost optimal hardness of search-LWE against polynomial-time adversaries, or (ii) the existence of a circular-secure FHE scheme with a standard (polynomial time, negligible advantage) level of security. 3) The classic quadratic residuosity protocol of [Goldwasser, Micali, and Rackoff, SICOMP ’89] is not zero knowledge when repeated in parallel, under any of the hardness assumptions above. @InProceedings{STOC19p1082, author = {Ran Canetti and Yilei Chen and Justin Holmgren and Alex Lombardi and Guy N. Rothblum and Ron D. Rothblum and Daniel Wichs}, title = {Fiat-Shamir: From Practice to Theory}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1082--1090}, doi = {10.1145/3313276.3316380}, year = {2019}, } Publisher's Version STOC '19: "The Parallel Repetition of ..." 
The Parallel Repetition of Non-signaling Games: Counterexamples and Dichotomy Justin Holmgren and Lisa Yang (Princeton University, USA; Massachusetts Institute of Technology, USA) Non-signaling games are an important object of study in the theory of computation, for their role both in quantum information and in (classical) cryptography. In this work, we study the behavior of these games under parallel repetition. We show that, unlike the situation both for classical games and for two-player non-signaling games, there are k-player non-signaling games (for k ≥ 3) whose values do not tend to 0 with sufficient parallel repetition. In fact, parallel repetition sometimes does not decrease their value whatsoever. We show that in general, every game’s non-signaling value under parallel repetition is either lower bounded by a positive constant or decreases exponentially with the number of repetitions. Furthermore, exponential decrease occurs if and only if the game’s sub-non-signaling value (Lancien and Winter, CJTCS ’16) is less than 1. @InProceedings{STOC19p185, author = {Justin Holmgren and Lisa Yang}, title = {The Parallel Repetition of Non-signaling Games: Counterexamples and Dichotomy}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {185--192}, doi = {10.1145/3313276.3316367}, year = {2019}, } Publisher's Version 

Huang, Zhiyi 
STOC '19: "Settling the Sample Complexity ..."
Settling the Sample Complexity of Single-Parameter Revenue Maximization
Chenghao Guo, Zhiyi Huang, and Xinzhi Zhang (Tsinghua University, China; University of Hong Kong, China) This paper settles the sample complexity of single-parameter revenue maximization by showing matching upper and lower bounds, up to a polylogarithmic factor, for all families of value distributions that have been considered in the literature. The upper bounds are unified under a novel framework, which builds on the strong revenue monotonicity of Devanur, Huang, and Psomas (STOC 2016) and an information-theoretic argument. This is fundamentally different from the previous approaches, which rely on either constructing an ε-net of the mechanism space, explicitly or implicitly via statistical learning theory, or learning an approximately accurate version of the virtual values. To our knowledge, this is the first time information-theoretic arguments have been used to show sample complexity upper bounds, instead of lower bounds. Our lower bounds are also unified under a meta construction of hard instances. @InProceedings{STOC19p662, author = {Chenghao Guo and Zhiyi Huang and Xinzhi Zhang}, title = {Settling the Sample Complexity of Single-Parameter Revenue Maximization}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {662--673}, doi = {10.1145/3313276.3316325}, year = {2019}, } Publisher's Version Info 

Hubáček, Pavel 
STOC '19: "Finding a Nash Equilibrium ..."
Finding a Nash Equilibrium Is No Easier Than Breaking Fiat-Shamir
Arka Rai Choudhuri, Pavel Hubáček, Chethan Kamath, Krzysztof Pietrzak, Alon Rosen, and Guy N. Rothblum (Johns Hopkins University, USA; Charles University in Prague, Czechia; IST Austria, Austria; IDC Herzliya, Israel; Weizmann Institute of Science, Israel) The Fiat-Shamir heuristic transforms a public-coin interactive proof into a non-interactive argument, by replacing the verifier with a cryptographic hash function that is applied to the protocol’s transcript. Constructing hash functions for which this transformation is sound is a central and long-standing open question in cryptography. We show that solving the END-OF-METERED-LINE problem is no easier than breaking the soundness of the Fiat-Shamir transformation when applied to the sumcheck protocol. In particular, if the transformed protocol is sound, then any hard problem in #P gives rise to a hard distribution in the class CLS, which is contained in PPAD. Our result opens up the possibility of sampling moderately sized games for which it is hard to find a Nash equilibrium, by reducing the inversion of appropriately chosen one-way functions to #SAT. Our main technical contribution is a stateful incrementally verifiable procedure that, given a SAT instance over n variables, counts the number of satisfying assignments. This is accomplished via an exponential sequence of small steps, each computable in time poly(n). Incremental verifiability means that each intermediate state includes a sumcheck-based proof of its correctness, and the proof can be updated and verified in time poly(n). @InProceedings{STOC19p1103, author = {Arka Rai Choudhuri and Pavel Hubáček and Chethan Kamath and Krzysztof Pietrzak and Alon Rosen and Guy N. Rothblum}, title = {Finding a Nash Equilibrium Is No Easier Than Breaking Fiat-Shamir}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1103--1114}, doi = {10.1145/3313276.3316400}, year = {2019}, } Publisher's Version 
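The Fiat-Shamir heuristic described in the abstract replaces each public-coin challenge with a hash of the transcript so far, so the prover can compute all challenges alone and the verifier can recompute them. A minimal mechanical sketch (the choice of SHA-256 and all names are ours, not from the paper):

```python
import hashlib

def fiat_shamir_challenges(messages, bits=128):
    """Derive one challenge per prover message by hashing the running
    transcript.  Deterministic: prover and verifier obtain identical
    challenges, removing the interaction."""
    transcript = b''
    challenges = []
    for msg in messages:
        transcript += msg  # append this round's prover message
        digest = hashlib.sha256(transcript).digest()
        challenges.append(int.from_bytes(digest, 'big') % (1 << bits))
    return challenges
```

Note that round i's challenge depends only on messages 1..i, so earlier challenges are unchanged by later messages, exactly as in the interactive protocol.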

Jain, Vishesh 
STOC '19: "Mean-Field Approximation, ..."
Mean-Field Approximation, Convex Hierarchies, and the Optimality of Correlation Rounding: A Unified Perspective
Vishesh Jain, Frederic Koehler, and Andrej Risteski (Massachusetts Institute of Technology, USA) The free energy is a key quantity of interest in Ising models, but unfortunately, computing it in general is computationally intractable. Two popular (variational) approximation schemes for estimating the free energy of general Ising models (in particular, even in regimes where correlation decay does not hold) are: (i) the mean-field approximation with roots in statistical physics, which estimates the free energy from below, and (ii) hierarchies of convex relaxations with roots in theoretical computer science, which estimate the free energy from above. We show, surprisingly, that the tight regime for both methods to compute the free energy to leading order is identical. More precisely, we show that the mean-field approximation to the free energy is within O((n‖J‖_{F})^{2/3}) of the true free energy, where ‖J‖_{F} denotes the Frobenius norm of the interaction matrix of the Ising model. This simultaneously subsumes both the breakthrough work of Basak and Mukherjee, who showed the tight result that the mean-field approximation is within o(n) whenever ‖J‖_{F} = o(√n), as well as the work of Jain, Koehler, and Mossel, who gave the previously best known non-asymptotic bound of O((n‖J‖_{F})^{2/3} log^{1/3}(n‖J‖_{F})). We give a simple, algorithmic proof of this result using a convex relaxation proposed by Risteski based on the Sherali-Adams hierarchy, automatically giving subexponential-time approximation schemes for the free energy in this entire regime. Our algorithmic result is tight under Gap-ETH. We furthermore combine our techniques with spin glass theory to prove (in a strong sense) the optimality of correlation rounding, refuting a recent conjecture of Allen, O’Donnell, and Zhou. Finally, we give the tight generalization of all of these results to k-MRFs, capturing as a special case previous work on approximating MAX k-CSP. 
@InProceedings{STOC19p1226, author = {Vishesh Jain and Frederic Koehler and Andrej Risteski}, title = {Mean-Field Approximation, Convex Hierarchies, and the Optimality of Correlation Rounding: A Unified Perspective}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1226--1236}, doi = {10.1145/3313276.3316299}, year = {2019}, } Publisher's Version
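The variational setup behind (i) can be made concrete on a toy instance. A minimal sketch, assuming the convention F = log Σ_x exp(Σ_{i<j} J_ij x_i x_j) over x ∈ {±1}^n; this is the classical naive mean-field fixed-point iteration, not the paper's Sherali-Adams-based relaxation, and the matrix J is made up for illustration:

```python
import itertools
import math

def exact_free_energy(J):
    """Brute-force F = log sum_{x in {-1,+1}^n} exp(sum_{i<j} J_ij x_i x_j)."""
    n = len(J)
    return math.log(sum(
        math.exp(sum(J[i][j] * x[i] * x[j] for i in range(n) for j in range(i + 1, n)))
        for x in itertools.product((-1, 1), repeat=n)))

def mean_field_lower_bound(J, iters=200):
    """Coordinate-ascent mean field: iterate m_i <- tanh(sum_j J_ij m_j), then
    report E[H] + entropy of the product measure.  By the Gibbs variational
    principle this is a lower bound on F for ANY m in [-1, 1]^n."""
    n = len(J)
    m = [0.5] * n  # break the m = 0 symmetry
    for _ in range(iters):
        for i in range(n):
            m[i] = math.tanh(sum(J[i][j] * m[j] for j in range(n) if j != i))
    def h(p):  # binary entropy in nats
        return 0.0 if p <= 0.0 or p >= 1.0 else -p * math.log(p) - (1 - p) * math.log(1 - p)
    energy = sum(J[i][j] * m[i] * m[j] for i in range(n) for j in range(i + 1, n))
    return energy + sum(h((1 + mi) / 2) for mi in m)

J = [[0.0, 0.3, 0.2, 0.0],
     [0.3, 0.0, 0.1, 0.4],
     [0.2, 0.1, 0.0, 0.2],
     [0.0, 0.4, 0.2, 0.0]]
assert mean_field_lower_bound(J) <= exact_free_energy(J)  # "from below"
```

The assertion holds for any fixed point, which is why mean field "estimates the free energy from below" in the abstract's dichotomy.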

Ji, Zhengfeng 
STOC '19: "Quantum Proof Systems for ..."
Quantum Proof Systems for Iterated Exponential Time, and Beyond
Joseph Fitzsimons, Zhengfeng Ji, Thomas Vidick, and Henry Yuen (Horizon Quantum Computing, Singapore; University of Technology Sydney, Australia; California Institute of Technology, USA; University of Toronto, Canada) We show that any language solvable in nondeterministic time exp(exp(⋯exp(n))), where the number of iterated exponentials is an arbitrary function R(n), can be decided by a multiprover interactive proof system with a classical polynomial-time verifier and a constant number of quantum entangled provers, with completeness 1 and soundness 1 − exp(−C exp(⋯exp(n))), where the number of iterated exponentials is R(n)−1 and C>0 is a universal constant. The result was previously known for R=1 and R=2; we obtain it for any time-constructible function R. The result is based on a compression technique for interactive proof systems with entangled provers that significantly simplifies and strengthens a protocol compression result of Ji (STOC'17). As a separate consequence of this technique we obtain a different proof of Slofstra's recent result on the uncomputability of the entangled value of multiprover games (Forum of Mathematics, Pi 2019). Finally, we show that even minor improvements to our compression result would yield remarkable consequences in computational complexity theory and the foundations of quantum mechanics: first, it would imply that the class MIP* contains all computable languages; second, it would provide a negative resolution to a multipartite version of Tsirelson's problem on the relation between the commuting operator and tensor product models for quantum correlations. @InProceedings{STOC19p473, author = {Joseph Fitzsimons and Zhengfeng Ji and Thomas Vidick and Henry Yuen}, title = {Quantum Proof Systems for Iterated Exponential Time, and Beyond}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {473--480}, doi = {10.1145/3313276.3316343}, year = {2019}, } Publisher's Version Info

Jin, Yaonan 
STOC '19: "Tight Approximation Ratio ..."
Tight Approximation Ratio of Anonymous Pricing
Yaonan Jin, Pinyan Lu, Qi Qi, Zhihao Gavin Tang, and Tao Xiao (Columbia University, USA; Shanghai University of Finance and Economics, China; Hong Kong University of Science and Technology, China; Shanghai Jiao Tong University, China) This paper considers two canonical Bayesian mechanism design settings. In the single-item setting, the tight approximation ratio of Anonymous Pricing is obtained: (1) compared to Myerson Auction, Anonymous Pricing always generates at least a 1/2.62-fraction of the revenue; (2) there is a matching lower-bound instance. In the unit-demand single-buyer setting, the tight approximation ratio between the simplest deterministic mechanism and the optimal deterministic mechanism is attained: in terms of revenue, (1) Uniform Pricing admits a 2.62-approximation to Item Pricing; (2) a matching lower-bound instance is also presented. These results answer two open questions asked by Alaei et al. (FOCS'15) and Cai and Daskalakis (GEB'15). As an implication, in the single-item setting, the approximation ratio of Second-Price Auction with Anonymous Reserve (Hartline and Roughgarden EC'09) is improved to 2.62, which breaks the best known upper bound of e ≈ 2.72. @InProceedings{STOC19p674, author = {Yaonan Jin and Pinyan Lu and Qi Qi and Zhihao Gavin Tang and Tao Xiao}, title = {Tight Approximation Ratio of Anonymous Pricing}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {674--685}, doi = {10.1145/3313276.3316331}, year = {2019}, } Publisher's Version

Kalai, Yael Tauman 
STOC '19: "How to Delegate Computations ..."
How to Delegate Computations Publicly
Yael Tauman Kalai, Omer Paneth, and Lisa Yang (Microsoft Research, USA; Massachusetts Institute of Technology, USA) We construct a delegation scheme for all polynomial-time computations. Our scheme is publicly verifiable and completely non-interactive in the common reference string (CRS) model. Our scheme is based on an efficiently falsifiable decisional assumption on groups with bilinear maps. Prior to this work, publicly verifiable non-interactive delegation schemes were only known under knowledge assumptions (or in the Random Oracle model) or under non-standard assumptions related to obfuscation or multilinear maps. We obtain our result in two steps. First, we construct a scheme with a long CRS (polynomial in the running time of the computation) by following the blueprint of Paneth and Rothblum (TCC 2017). Then we bootstrap this scheme to obtain a short CRS. Our bootstrapping theorem exploits the fact that our scheme can securely delegate certain nondeterministic computations. @InProceedings{STOC19p1115, author = {Yael Tauman Kalai and Omer Paneth and Lisa Yang}, title = {How to Delegate Computations Publicly}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1115--1124}, doi = {10.1145/3313276.3316411}, year = {2019}, } Publisher's Version

Kamath, Chethan 
STOC '19: "Finding a Nash Equilibrium ..."
Finding a Nash Equilibrium Is No Easier Than Breaking Fiat-Shamir
Arka Rai Choudhuri, Pavel Hubáček, Chethan Kamath, Krzysztof Pietrzak, Alon Rosen, and Guy N. Rothblum (Johns Hopkins University, USA; Charles University in Prague, Czechia; IST Austria, Austria; IDC Herzliya, Israel; Weizmann Institute of Science, Israel) The Fiat-Shamir heuristic transforms a public-coin interactive proof into a non-interactive argument, by replacing the verifier with a cryptographic hash function that is applied to the protocol's transcript. Constructing hash functions for which this transformation is sound is a central and long-standing open question in cryptography. We show that solving the END-OF-METERED-LINE problem is no easier than breaking the soundness of the Fiat-Shamir transformation when applied to the sumcheck protocol. In particular, if the transformed protocol is sound, then any hard problem in #P gives rise to a hard distribution in the class CLS, which is contained in PPAD. Our result opens up the possibility of sampling moderately-sized games for which it is hard to find a Nash equilibrium, by reducing the inversion of appropriately chosen one-way functions to #SAT. Our main technical contribution is a stateful incrementally verifiable procedure that, given a SAT instance over n variables, counts the number of satisfying assignments. This is accomplished via an exponential sequence of small steps, each computable in time poly(n). Incremental verifiability means that each intermediate state includes a sumcheck-based proof of its correctness, and the proof can be updated and verified in time poly(n). @InProceedings{STOC19p1103, author = {Arka Rai Choudhuri and Pavel Hubáček and Chethan Kamath and Krzysztof Pietrzak and Alon Rosen and Guy N. Rothblum}, title = {Finding a Nash Equilibrium Is No Easier Than Breaking Fiat-Shamir}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1103--1114}, doi = {10.1145/3313276.3316400}, year = {2019}, } Publisher's Version
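The Fiat-Shamir transformation itself is mechanical to state: each public-coin verifier challenge is replaced by a hash of the transcript so far, making the protocol non-interactive. A toy sketch; the round messages, prime modulus, and transcript encoding are illustrative placeholders, not an actual sumcheck instantiation:

```python
import hashlib

def fs_challenge(transcript: bytes, modulus: int) -> int:
    """Derive the verifier's 'random' challenge by hashing the transcript."""
    return int.from_bytes(hashlib.sha256(transcript).digest(), "big") % modulus

p = 2**61 - 1  # a Mersenne prime modulus, chosen only for the toy example
transcript = b"claim:sum=42"
challenges = []
for round_msg in [b"g1(X)", b"g2(X)", b"g3(X)"]:  # placeholder prover messages
    transcript += b"|" + round_msg
    r = fs_challenge(transcript, p)  # replaces the verifier's coin flips
    challenges.append(r)
    transcript += b"|" + str(r).encode()
```

Soundness of the resulting non-interactive argument is exactly what the paper connects to the hardness of END-OF-METERED-LINE.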

Kamath, Gautam 
STOC '19: "The Structure of Optimal Private ..."
The Structure of Optimal Private Tests for Simple Hypotheses
Clément L. Canonne, Gautam Kamath, Audra McMillan, Adam Smith, and Jonathan Ullman (Stanford University, USA; Simons Institute for the Theory of Computing Berkeley, USA; Boston University, USA; Northeastern University, USA) Hypothesis testing plays a central role in statistical inference, and is used in many settings where privacy concerns are paramount. This work answers a basic question about privately testing simple hypotheses: given two distributions P and Q, and a privacy level ε, how many i.i.d. samples are needed to distinguish P from Q subject to ε-differential privacy, and what sort of tests have optimal sample complexity? Specifically, we characterize this sample complexity up to constant factors in terms of the structure of P and Q and the privacy level ε, and show that this sample complexity is achieved by a certain randomized and clamped variant of the log-likelihood ratio test. Our result is an analogue of the classical Neyman-Pearson lemma in the setting of private hypothesis testing. We also give an application of our result to private change-point detection. Our characterization applies more generally to hypothesis tests satisfying essentially any notion of algorithmic stability, which is known to imply strong generalization bounds in adaptive data analysis, and thus our results have applications even when privacy is not a primary concern. @InProceedings{STOC19p310, author = {Clément L. Canonne and Gautam Kamath and Audra McMillan and Adam Smith and Jonathan Ullman}, title = {The Structure of Optimal Private Tests for Simple Hypotheses}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {310--321}, doi = {10.1145/3313276.3316336}, year = {2019}, } Publisher's Version
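The shape of the optimal test (clamp each per-sample log-likelihood ratio, sum, add noise, threshold) can be sketched as follows; the clamp level, Laplace calibration, and threshold here are illustrative assumptions rather than the paper's tuned parameters:

```python
import math
import random

def private_llr_test(samples, P, Q, eps, clamp=1.0, threshold=0.0):
    """Clamp each log-likelihood ratio to [-clamp, clamp], sum, add Laplace
    noise, and threshold.  Changing one sample shifts the clamped sum by at
    most 2*clamp, so Laplace noise of scale 2*clamp/eps gives eps-DP."""
    stat = sum(max(-clamp, min(clamp, math.log(P[x] / Q[x]))) for x in samples)
    u = random.random() - 0.5  # inverse-CDF sampling of Laplace(2*clamp/eps)
    noise = -(2 * clamp / eps) * math.copysign(math.log(1 - 2 * abs(u)), u)
    return stat + noise > threshold  # True favours P, False favours Q

random.seed(0)
P = {0: 0.8, 1: 0.2}  # toy distributions over {0, 1}
Q = {0: 0.2, 1: 0.8}
assert private_llr_test([0] * 180 + [1] * 20, P, Q, eps=1.0) is True   # data ~ P
assert private_llr_test([1] * 180 + [0] * 20, P, Q, eps=1.0) is False  # data ~ Q
```

Clamping bounds the sensitivity of the statistic, which is what makes a simple Laplace mechanism suffice; the paper's contribution is showing a variant of exactly this template is sample-optimal.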

Kane, Daniel M. 
STOC '19: "Degree-𝑑 Chow Parameters ..."
Degree-𝑑 Chow Parameters Robustly Determine Degree-𝑑 PTFs (and Algorithmic Applications)
Ilias Diakonikolas and Daniel M. Kane (University of Southern California, USA; University of California at San Diego, USA) The degree-d Chow parameters of a Boolean function are its degree at most d Fourier coefficients. It is well-known that degree-d Chow parameters uniquely characterize degree-d polynomial threshold functions (PTFs) within the space of all bounded functions. In this paper, we prove a robust version of this theorem: For any Boolean degree-d PTF f and any bounded function g, if the degree-d Chow parameters of f are close to the degree-d Chow parameters of g in ℓ_{2}-norm, then f is close to g in ℓ_{1}-distance. Notably, our bound relating the two distances is independent of the dimension. That is, we show that Boolean degree-d PTFs are robustly identifiable from their degree-d Chow parameters. No nontrivial bound was previously known for d > 1. Our robust identifiability result gives the following algorithmic applications: First, we show that Boolean degree-d PTFs can be efficiently approximately reconstructed from approximations to their degree-d Chow parameters. This immediately implies that degree-d PTFs are efficiently learnable in the uniform distribution d-RFA model. As a byproduct of our approach, we also obtain the first low integer-weight approximations of degree-d PTFs, for d > 1. As our second application, our robust identifiability result gives the first efficient algorithm, with dimension-independent error guarantees, for malicious learning of Boolean degree-d PTFs under the uniform distribution. The proof of our robust identifiability result involves several new technical ingredients, including the following structural result for degree-d multivariate polynomials with very poor anti-concentration: If p is a degree-d polynomial where p(x) is very close to 0 on a large number of points in { ± 1 }^{n}, then there exists a degree-d hypersurface that exactly passes through almost all of these points.
We leverage this structural result to show that if the degree-d Chow distance between f and g is small, then we can find many degree-d polynomials that vanish on their disagreement region, and in particular enough of them to force the ℓ_{1}-distance between f and g to also be small. To implement this proof strategy, we require additional technical ideas. In particular, in the d=2 case we show that for any large vector space of degree-2 polynomials with a large number of common zeroes, there exists a linear function that vanishes on almost all of these zeroes. The generalization of this statement to general degree d is significantly more complex, and can be viewed as an effective version of Hilbert's Basis Theorem for our setting. @InProceedings{STOC19p804, author = {Ilias Diakonikolas and Daniel M. Kane}, title = {Degree-𝑑 Chow Parameters Robustly Determine Degree-𝑑 PTFs (and Algorithmic Applications)}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {804--815}, doi = {10.1145/3313276.3316301}, year = {2019}, } Publisher's Version

Kaplan, Haim 
STOC '19: "Faster k-SAT Algorithms ..."
Faster k-SAT Algorithms using Biased-PPSZ
Thomas Dueholm Hansen, Haim Kaplan, Or Zamir, and Uri Zwick (University of Copenhagen, Denmark; Tel Aviv University, Israel) The PPSZ algorithm, due to Paturi, Pudlák, Saks, and Zane, is currently the fastest known algorithm for the k-SAT problem, for every k > 3. For 3-SAT, a tiny improvement over PPSZ was obtained by Hertli. We introduce a biased version of the PPSZ algorithm using which we obtain an improvement over PPSZ for every k ≥ 3. For k=3 we also improve on Hertli's result and get a much more noticeable improvement over PPSZ, though still relatively small. In particular, for Unique 3-SAT, we improve the current bound from 1.308^{n} to 1.307^{n}. @InProceedings{STOC19p578, author = {Thomas Dueholm Hansen and Haim Kaplan and Or Zamir and Uri Zwick}, title = {Faster <i>k</i>-SAT Algorithms using Biased-PPSZ}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {578--589}, doi = {10.1145/3313276.3316359}, year = {2019}, } Publisher's Version
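The (unbiased) PPSZ skeleton that the paper biases can be sketched: pick a uniformly random variable order, set each variable by unit propagation when a clause forces it and by a fair coin otherwise, and repeat until a satisfying assignment is found. This sketch uses plain unit propagation (PPSZ proper strengthens it with bounded-width resolution, and the paper's improvement comes from biasing the coin); the formula is a made-up toy instance:

```python
import random

def forced_value(clauses, assignment, var):
    """Return the value forced for `var` by a clause whose other literals are
    all falsified (plain unit propagation), or None if nothing forces it."""
    for clause in clauses:
        if any(assignment.get(abs(l)) == (l > 0) for l in clause):
            continue  # clause already satisfied
        unassigned = [l for l in clause if abs(l) not in assignment]
        if len(unassigned) == 1 and abs(unassigned[0]) == var:
            return unassigned[0] > 0
    return None

def ppsz_round(clauses, n, rng):
    assignment = {}
    for var in rng.sample(range(1, n + 1), n):  # uniformly random order
        forced = forced_value(clauses, assignment, var)
        assignment[var] = forced if forced is not None else (rng.random() < 0.5)
    if all(any(assignment[abs(l)] == (l > 0) for l in c) for c in clauses):
        return assignment
    return None

# (x1 v x2) ^ (~x1 v x3) ^ (~x2 v ~x3) ^ (x1 v ~x3)
clauses = [[1, 2], [-1, 3], [-2, -3], [1, -3]]
rng = random.Random(2019)
solution = None
while solution is None:
    solution = ppsz_round(clauses, 3, rng)
assert all(any(solution[abs(l)] == (l > 0) for l in c) for c in clauses)
```

The running-time analysis of PPSZ bounds the expected number of coin flips per round; the biased variant tilts those coins using structural information about the formula.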

Kapralov, Michael 
STOC '19: "A Universal Sampling Method ..."
A Universal Sampling Method for Reconstructing Signals with Simple Fourier Transforms
Haim Avron, Michael Kapralov, Cameron Musco, Christopher Musco, Ameya Velingker, and Amir Zandieh (Tel Aviv University, Israel; EPFL, Switzerland; Microsoft Research, USA; Princeton University, USA; Google Research, USA) Reconstructing continuous signals based on a small number of discrete samples is a fundamental problem across science and engineering. We are often interested in signals with "simple" Fourier structure (e.g., those involving frequencies within a bounded range, a small number of frequencies, or a few blocks of frequencies, i.e., bandlimited, sparse, and multiband signals, respectively). More broadly, any prior knowledge on a signal's Fourier power spectrum can constrain its complexity. Intuitively, signals with more highly constrained Fourier structure require fewer samples to reconstruct. We formalize this intuition by showing that, roughly, a continuous signal from a given class can be approximately reconstructed using a number of samples proportional to the statistical dimension of the allowed power spectrum of that class. We prove that, in nearly all settings, this natural measure tightly characterizes the sample complexity of signal reconstruction. Surprisingly, we also show that, up to log factors, a universal non-uniform sampling strategy can achieve this optimal complexity for any class of signals. We present an efficient and general algorithm for recovering a signal from the samples taken. For bandlimited and sparse signals, our method matches the state-of-the-art, while providing the first computationally and sample efficient solution to a broader range of problems, including multiband signal reconstruction and Gaussian process regression tasks in one dimension. Our work is based on a novel connection between randomized linear algebra and the problem of reconstructing signals with constrained Fourier structure.
We extend tools based on statistical leverage score sampling and column-based matrix reconstruction to the approximation of continuous linear operators that arise in the signal reconstruction problem. We believe these extensions are of independent interest and serve as a foundation for tackling a broad range of continuous time problems using randomized methods. @InProceedings{STOC19p1051, author = {Haim Avron and Michael Kapralov and Cameron Musco and Christopher Musco and Ameya Velingker and Amir Zandieh}, title = {A Universal Sampling Method for Reconstructing Signals with Simple Fourier Transforms}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1051--1063}, doi = {10.1145/3313276.3316363}, year = {2019}, } Publisher's Version STOC '19: "An Optimal Space Lower Bound ..." An Optimal Space Lower Bound for Approximating MAX-CUT Michael Kapralov and Dmitry Krachun (EPFL, Switzerland; University of Geneva, Switzerland) We consider the problem of estimating the value of MAX-CUT in a graph in the streaming model of computation. At one extreme, there is a trivial 2-approximation for this problem that uses only O(log n) space, namely, count the number of edges and output half of this value as the estimate for the size of the MAX-CUT. At the other extreme, for any fixed є > 0, if one allows Õ(n) space, a (1+є)-approximate solution to the MAX-CUT value can be obtained by storing an Õ(n)-size sparsifier that essentially preserves the MAX-CUT value. Our main result is that any (randomized) single-pass streaming algorithm that breaks the 2-approximation barrier requires Ω(n) space, thus resolving the space complexity of any nontrivial approximations of the MAX-CUT value to within polylogarithmic factors in the single-pass streaming model. We achieve the result by presenting a tight analysis of the Implicit Hidden Partition Problem introduced by Kapralov et al. [SODA'17] for an arbitrarily large number of players.
In this problem a number of players receive random matchings of Ω(n) size together with random bits on the edges, and their task is to determine whether the bits correspond to parities of some hidden bipartition, or are just uniformly random. Unlike all previous Fourier-analytic communication lower bounds, our analysis does not directly use bounds on the ℓ_{2} norm of Fourier coefficients of a typical message at any given weight level that follow from hypercontractivity. Instead, we use the fact that the graphs received by the players are sparse (matchings) to obtain strong upper bounds on the ℓ_{1} norm of the Fourier coefficients of the messages of individual players using their special structure, and then argue, using the convolution theorem, that similar strong bounds on the ℓ_{1} norm are essentially preserved (up to an exponential loss in the number of players) once messages of different players are combined. We feel that our main technique is likely of independent interest. @InProceedings{STOC19p277, author = {Michael Kapralov and Dmitry Krachun}, title = {An Optimal Space Lower Bound for Approximating MAX-CUT}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {277--288}, doi = {10.1145/3313276.3316364}, year = {2019}, } Publisher's Version
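The "trivial" end of the trade-off in the MAX-CUT abstract is worth seeing concretely: counting edges and reporting half is a 2-approximation because m/2 ≤ MAX-CUT ≤ m (a uniformly random bipartition cuts half the edges in expectation, so the maximum cut is at least m/2). A sketch with a brute-force check on a toy graph:

```python
def streaming_maxcut_estimate(edge_stream):
    """O(log n)-space estimator from the abstract: count the edges, output half."""
    m = sum(1 for _ in edge_stream)
    return m / 2

def exact_maxcut(n, edges):
    """Brute force over all 2^n bipartitions (only to verify the guarantee)."""
    return max(sum(((mask >> u) ^ (mask >> v)) & 1 for u, v in edges)
               for mask in range(1 << n))

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]  # a 4-cycle plus one chord
est = streaming_maxcut_estimate(iter(edges))
opt = exact_maxcut(4, edges)
assert est <= opt <= 2 * est  # the 2-approximation guarantee
```

The paper's lower bound says that beating this factor of 2 in a single pass requires Ω(n) space, so this near-trivial counter is essentially space-optimal among sublinear-space algorithms.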

Karbasi, Amin 
STOC '19: "Unconstrained Submodular Maximization ..."
Unconstrained Submodular Maximization with Constant Adaptive Complexity
Lin Chen, Moran Feldman, and Amin Karbasi (Yale University, USA; Open University of Israel, Israel) In this paper, we consider the unconstrained submodular maximization problem. We propose the first algorithm for this problem that achieves a tight (1/2−ε)-approximation guarantee using Õ(ε^{−1}) adaptive rounds and a linear number of function evaluations. No previously known algorithm for this problem achieves an approximation ratio better than 1/3 using less than Ω(n) rounds of adaptivity, where n is the size of the ground set. Moreover, our algorithm easily extends to the maximization of a non-negative continuous DR-submodular function subject to a box constraint, and achieves a tight (1/2−ε)-approximation guarantee for this problem while keeping the same adaptive and query complexities. @InProceedings{STOC19p102, author = {Lin Chen and Moran Feldman and Amin Karbasi}, title = {Unconstrained Submodular Maximization with Constant Adaptive Complexity}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {102--113}, doi = {10.1145/3313276.3316327}, year = {2019}, } Publisher's Version
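For context, a standard sequential baseline here is the double-greedy method of Buchbinder, Feldman, Naor, and Schwartz (its deterministic variant gives 1/3, its randomized variant 1/2): one element is decided per round, so its adaptivity grows linearly with the ground set, which is exactly the bottleneck the paper removes. A minimal sketch on a graph cut function, a standard non-negative non-monotone submodular example (the graph is made up for illustration):

```python
import itertools

def cut(S, edges):
    """Number of edges crossing (S, complement): non-monotone submodular."""
    return sum((u in S) != (v in S) for u, v in edges)

def double_greedy(n, f):
    # Deterministic double greedy: scan elements once; add u to X if its
    # marginal gain there beats the gain of dropping it from Y.  Note the
    # n strictly sequential rounds of adaptivity.
    X, Y = set(), set(range(n))
    for u in range(n):
        a = f(X | {u}) - f(X)  # gain of adding u to the growing set
        b = f(Y - {u}) - f(Y)  # gain of removing u from the shrinking set
        if a >= b:
            X.add(u)
        else:
            Y.discard(u)
    return X  # X == Y at the end

edges = [(0, 1), (0, 2), (1, 3), (2, 3), (1, 2)]
f = lambda S: cut(S, edges)
S = double_greedy(4, f)
opt = max(f(set(T)) for k in range(5) for T in itertools.combinations(range(4), k))
assert f(S) >= opt / 3  # the deterministic 1/3 guarantee
```

Each iteration depends on the previous one, so this method needs n adaptive rounds; the paper's algorithm reaches (1/2−ε) with only Õ(ε^{−1}) rounds.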

Kawarabayashi, Ken-ichi
STOC '19: "Polylogarithmic Approximation ..."
Polylogarithmic Approximation for Euler Genus on Bounded Degree Graphs
Ken-ichi Kawarabayashi and Anastasios Sidiropoulos (National Institute of Informatics, Japan; University of Illinois at Chicago, USA) Computing the Euler genus of a graph is a fundamental problem in algorithmic graph theory. It has been shown to be NP-hard by [Thomassen '89, Thomassen '97], even for cubic graphs, and a linear-time fixed-parameter algorithm has been obtained by [Mohar '99]. Despite extensive study, the approximability of the Euler genus remains wide open. While the existence of an O(1)-approximation is not ruled out, the currently best-known upper bound is an O(n^{1−α})-approximation, for some universal constant α>0 [Kawarabayashi and Sidiropoulos 2017]. We present an O(log^{2.5} n)-approximation polynomial-time algorithm for this problem on graphs of bounded degree. Prior to our work, the best known result on graphs of bounded degree was an n^{Ω(1)}-approximation [Chekuri and Sidiropoulos 2013]. As an immediate corollary, we also obtain improved approximation algorithms for the crossing number problem and for the minimum vertex planarization problem, on graphs of bounded degree. Specifically, we obtain a polynomial-time O(Δ^{2} log^{3.5} n)-approximation algorithm for the minimum vertex planarization problem, on graphs of maximum degree Δ. Moreover, we obtain an algorithm which, given a graph of crossing number k, computes a drawing with at most k^{2} log^{O(1)} n crossings in polynomial time. This also implies an n^{1/2} log^{O(1)} n-approximation polynomial-time algorithm. The previously best-known result is a polynomial-time algorithm that computes a drawing with k^{10} log^{O(1)} n crossings, which implies an n^{9/10} log^{O(1)} n-approximation algorithm [Chuzhoy 2011]. @InProceedings{STOC19p164, author = {Ken-ichi Kawarabayashi and Anastasios Sidiropoulos}, title = {Polylogarithmic Approximation for Euler Genus on Bounded Degree Graphs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {164--175}, doi = {10.1145/3313276.3316409}, year = {2019}, } Publisher's Version

Kayal, Neeraj 
STOC '19: "Reconstruction of Non-degenerate ..."
Reconstruction of Non-degenerate Homogeneous Depth Three Circuits
Neeraj Kayal and Chandan Saha (Microsoft Research, India; Indian Institute of Science, India) A homogeneous depth three circuit C computes a polynomial f = T_{1} + T_{2} + ... + T_{s}, where each T_{i} is a product of d linear forms in n variables over some underlying field F. Given black-box access to f, can we efficiently reconstruct (i.e., properly learn) a homogeneous depth three circuit computing f? Learning various subclasses of circuits is natural and interesting from both theoretical and practical standpoints, and in particular, properly learning homogeneous depth three circuits efficiently is stated as an open problem in a work by Klivans and Shpilka (COLT 2003) and is well-studied. Unfortunately, there is a substantial amount of evidence to show that this is a hard problem in the worst case. We give a (randomized) poly(n,d,s)-time algorithm to reconstruct non-degenerate homogeneous depth three circuits for n = Ω(d^{2}) (with some additional mild requirements on s and the characteristic of F). We call a circuit C non-degenerate if the dimension of the partial derivative space of f equals the sum of the dimensions of the partial derivative spaces of the terms T_{1}, T_{2}, …, T_{s}. In this sense, the terms are “independent” of each other in a non-degenerate circuit. A random homogeneous depth three circuit (where the coefficients of the linear forms are chosen according to the uniform distribution or any other reasonable distribution) is almost surely non-degenerate. In comparison, previous learning algorithms for this circuit class were either improper (with an exponential dependence on d), or they only worked for s < n (with a doubly exponential dependence of the running time on s). The main contribution of this work is to formulate the following paradigm for efficiently handling addition gates and to successfully implement it for the class of homogeneous depth three circuits.
The problem of finding the children of an addition gate with large fan-in s is first reduced to the problem of decomposing a suitable vector space U into a (direct) sum of simpler subspaces U_{1}, U_{2}, …, U_{s}. One then constructs a suitable space of operators S consisting of linear maps acting on U such that analyzing the simultaneous global structure of S enables us to efficiently decompose U. In our case, we exploit the structure of the set of low-rank matrices in S and of the invariant subspaces of U induced by S. We feel that this paradigm is novel and powerful: it should lead to efficient reconstruction of many other subclasses of circuits for which the efficient reconstruction problem had hitherto looked unapproachable because of the presence of large fan-in addition gates. @InProceedings{STOC19p413, author = {Neeraj Kayal and Chandan Saha}, title = {Reconstruction of Non-degenerate Homogeneous Depth Three Circuits}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {413--424}, doi = {10.1145/3313276.3316360}, year = {2019}, } Publisher's Version

Kempa, Dominik 
STOC '19: "String Synchronizing Sets: ..."
String Synchronizing Sets: Sublinear-Time BWT Construction and Optimal LCE Data Structure
Dominik Kempa and Tomasz Kociumaka (University of Warwick, UK; University of Warsaw, Poland; Bar-Ilan University, Israel) The Burrows–Wheeler transform (BWT) is an invertible text transformation that, given a text T of length n, permutes its symbols according to the lexicographic order of suffixes of T. The BWT is one of the most heavily studied algorithms in data compression with numerous applications in indexing, sequence analysis, and bioinformatics. Its construction is a bottleneck in many scenarios, and settling the complexity of this task is one of the most important unsolved problems in sequence analysis that has remained open for 25 years. Given a binary string of length n, occupying O(n/log n) machine words, the BWT construction algorithm due to Hon et al. (SIAM J. Comput., 2009) runs in O(n) time and O(n/log n) space. Recent advancements (Belazzougui, STOC 2014, and Munro et al., SODA 2017) focus on removing the alphabet-size dependency in the time complexity, but they still require Ω(n) time. Despite the clearly suboptimal running time, the existing techniques appear to have reached their limits. In this paper, we propose the first algorithm that breaks the O(n)-time barrier for BWT construction. Given a binary string of length n, our procedure builds the Burrows–Wheeler transform in O(n/√log n) time and O(n/log n) space. We complement this result with a conditional lower bound proving that any further progress in the time complexity of BWT construction would yield faster algorithms for the very well studied problem of counting inversions: it would improve the state-of-the-art O(m√log m)-time solution by Chan and Pătraşcu (SODA 2010). Our algorithm is based on a novel concept of string synchronizing sets, which is of independent interest.
As one of the applications, we show that this technique lets us design a data structure of the optimal size O(n/log n) that answers Longest Common Extension queries (LCE queries) in O(1) time and, furthermore, can be deterministically constructed in the optimal O(n/log n) time. @InProceedings{STOC19p756, author = {Dominik Kempa and Tomasz Kociumaka}, title = {String Synchronizing Sets: Sublinear-Time BWT Construction and Optimal LCE Data Structure}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {756--767}, doi = {10.1145/3313276.3316368}, year = {2019}, } Publisher's Version
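For reference, the transform whose construction time is at stake can be stated in a few lines. A naive construction via sorted rotations (quadratic work, with a '$' sentinel appended; this is the textbook baseline, to be contrasted with the paper's O(n/√log n)-time algorithm):

```python
def bwt(text: str, sentinel: str = "$") -> str:
    """Naive BWT: sort all rotations of text+sentinel, take the last column."""
    s = text + sentinel  # sentinel is unique and lexicographically smallest
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)

def inverse_bwt(last: str, sentinel: str = "$") -> str:
    """Invert by rebuilding the rotation table one column per pass."""
    table = [""] * len(last)
    for _ in range(len(last)):
        table = sorted(last[i] + table[i] for i in range(len(last)))
    row = next(r for r in table if r.endswith(sentinel))
    return row[:-1]

assert bwt("banana") == "annb$aa"
assert inverse_bwt(bwt("banana")) == "banana"
```

The last column clusters equal characters from similar contexts, which is why the BWT is so useful for compression and indexing; the paper's contribution is computing exactly this output in sublinear time on a packed binary string.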

Khanna, Sanjeev 
STOC '19: "Polynomial Pass Lower Bounds ..."
Polynomial Pass Lower Bounds for Graph Streaming Algorithms
Sepehr Assadi, Yu Chen, and Sanjeev Khanna (Princeton University, USA; University of Pennsylvania, USA) We present new lower bounds that show that a polynomial number of passes are necessary for solving some fundamental graph problems in the streaming model of computation. For instance, we show that any streaming algorithm that finds a weighted minimum s-t cut in an n-vertex undirected graph requires n^{2−o(1)} space unless it makes n^{Ω(1)} passes over the stream. To prove our lower bounds, we introduce and analyze a new four-player communication problem that we refer to as the hidden-pointer chasing problem. This is a problem in the spirit of the standard pointer chasing problem, with the key difference that the pointers in this problem are hidden from the players, and finding each one of them requires solving another communication problem, namely the set intersection problem. Our lower bounds for graph problems are then obtained by reductions from the hidden-pointer chasing problem. Our hidden-pointer chasing problem appears flexible enough to find other applications and is therefore interesting in its own right. To showcase this, we further present an interesting application of this problem beyond streaming algorithms. Using a reduction from hidden-pointer chasing, we prove that any algorithm for submodular function minimization needs to make n^{2−o(1)} value queries to the function unless it has a polynomial degree of adaptivity. @InProceedings{STOC19p265, author = {Sepehr Assadi and Yu Chen and Sanjeev Khanna}, title = {Polynomial Pass Lower Bounds for Graph Streaming Algorithms}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {265--276}, doi = {10.1145/3313276.3316361}, year = {2019}, } Publisher's Version STOC '19: "A New Algorithm for Decremental ..."
A New Algorithm for Decremental Single-Source Shortest Paths with Applications to Vertex-Capacitated Flow and Cut Problems Julia Chuzhoy and Sanjeev Khanna (Toyota Technological Institute at Chicago, USA; University of Pennsylvania, USA) We study the vertex-decremental Single-Source Shortest Paths (SSSP) problem: given an undirected graph G=(V,E) with lengths ℓ(e)≥ 1 on its edges that undergoes vertex deletions, and a source vertex s, we need to support (approximate) shortest-path queries in G: given a vertex v, return a path connecting s to v, whose length is at most (1+є) times the length of the shortest such path, where є is a given accuracy parameter. The problem has many applications, for example to flow and cut problems in vertex-capacitated graphs. Decremental SSSP is a fundamental problem in dynamic algorithms that has been studied extensively, especially in the more standard edge-decremental setting, where the input graph G undergoes edge deletions. The classical algorithm of Even and Shiloach supports exact shortest-path queries in O(mn) total update time. A series of recent results have improved this bound to O(m^{1+o(1)}log L), where L is the largest length of any edge. However, these improved results are randomized algorithms that assume an oblivious adversary. To go beyond the oblivious adversary restriction, recently, Bernstein, and Bernstein and Chechik designed deterministic algorithms for the problem, with total update time Õ(n^{2}log L), that by definition work against an adaptive adversary. Unfortunately, their algorithms introduce a new limitation, namely, they can only return the approximate length of a shortest path, and not the path itself. Many applications of the decremental SSSP problem, including the ones considered in this paper, crucially require both that the algorithm returns the approximate shortest paths themselves and not just their lengths, and that it works against an adaptive adversary.
Our main result is a randomized algorithm for vertex-decremental SSSP with total expected update time O(n^{2+o(1)}log L), that responds to each shortest-path query in Õ(n log L) time in expectation, returning a (1+є)-approximate shortest path. The algorithm works against an adaptive adversary. The main technical ingredient of our algorithm is an Õ(|E(G)| + n^{1+o(1)})-time algorithm to compute a core decomposition of a given dense graph G, which allows us to compute short paths between pairs of query vertices in G efficiently. We use our result for vertex-decremental SSSP to obtain (1+є)-approximation algorithms for maximum s-t flow and minimum s-t cut in vertex-capacitated graphs, in expected time n^{2+o(1)}, and an O(log^{4} n)-approximation algorithm for the vertex version of the sparsest cut problem with expected running time n^{2+o(1)}. These results improve upon the previous best known algorithms for these problems in the regime where m = ω(n^{1.5+o(1)}). @InProceedings{STOC19p389, author = {Julia Chuzhoy and Sanjeev Khanna}, title = {A New Algorithm for Decremental Single-Source Shortest Paths with Applications to Vertex-Capacitated Flow and Cut Problems}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {389--400}, doi = {10.1145/3313276.3316320}, year = {2019}, } Publisher's Version

Khurana, Dakshita 
STOC '19: "Weak Zero-Knowledge Beyond ..."
Weak Zero-Knowledge Beyond the Black-Box Barrier
Nir Bitansky, Dakshita Khurana, and Omer Paneth (Tel Aviv University, Israel; Microsoft Research, USA; University of Illinois at Urbana-Champaign, USA; Massachusetts Institute of Technology, USA) The round complexity of zero-knowledge protocols is a longstanding open question, yet to be settled under standard assumptions. So far, the question has appeared equally challenging for relaxations such as weak zero-knowledge and witness hiding. Protocols satisfying these relaxed notions under standard assumptions have at least four messages, just like full-fledged zero-knowledge. The difficulty in improving round complexity stems from a fundamental barrier: none of these notions can be achieved in three messages via reductions (or simulators) that treat the verifier as a black box. We introduce a new non-black-box technique and use it to obtain the first protocols that cross this barrier under standard assumptions. We obtain weak zero-knowledge for NP in two messages, assuming the existence of quasipolynomially-secure fully homomorphic encryption and other standard primitives (known based on the quasipolynomial hardness of Learning with Errors), and subexponentially-secure one-way functions. We also obtain weak zero-knowledge for NP in three messages under standard polynomial assumptions (following, for example, from fully homomorphic encryption and factoring). We also give, under polynomial assumptions, a two-message witness-hiding protocol for any language L ∈ NP that has a witness encryption scheme. This protocol is publicly verifiable. Our technique is based on a new homomorphic trapdoor paradigm, which can be seen as a non-black-box analog of the classic Feige-Lapidot-Shamir trapdoor paradigm. @InProceedings{STOC19p1091, author = {Nir Bitansky and Dakshita Khurana and Omer Paneth}, title = {Weak Zero-Knowledge Beyond the Black-Box Barrier}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1091--1102}, doi = {10.1145/3313276.3316382}, year = {2019}, } Publisher's Version 

Kociumaka, Tomasz 
STOC '19: "String Synchronizing Sets: ..."
String Synchronizing Sets: Sublinear-Time BWT Construction and Optimal LCE Data Structure
Dominik Kempa and Tomasz Kociumaka (University of Warwick, UK; University of Warsaw, Poland; Bar-Ilan University, Israel) The Burrows–Wheeler transform (BWT) is an invertible text transformation that, given a text T of length n, permutes its symbols according to the lexicographic order of the suffixes of T. The BWT is one of the most heavily studied algorithms in data compression, with numerous applications in indexing, sequence analysis, and bioinformatics. Its construction is a bottleneck in many scenarios, and settling the complexity of this task is one of the most important unsolved problems in sequence analysis, having remained open for 25 years. Given a binary string of length n, occupying O(n/log n) machine words, the BWT construction algorithm due to Hon et al. (SIAM J. Comput., 2009) runs in O(n) time and O(n/log n) space. Recent advancements (Belazzougui, STOC 2014, and Munro et al., SODA 2017) focus on removing the alphabet-size dependency in the time complexity, but they still require Ω(n) time. Despite the clearly suboptimal running time, the existing techniques appear to have reached their limits. In this paper, we propose the first algorithm that breaks the O(n)-time barrier for BWT construction. Given a binary string of length n, our procedure builds the Burrows–Wheeler transform in O(n/√log n) time and O(n/log n) space. We complement this result with a conditional lower bound proving that any further progress in the time complexity of BWT construction would yield faster algorithms for the very well studied problem of counting inversions: it would improve the state-of-the-art O(m√log m)-time solution by Chan and Pătraşcu (SODA 2010). Our algorithm is based on a novel concept of string synchronizing sets, which is of independent interest. 
As one of the applications, we show that this technique lets us design a data structure of the optimal size O(n/log n) that answers Longest Common Extension queries (LCE queries) in O(1) time and, furthermore, can be deterministically constructed in the optimal O(n/log n) time. @InProceedings{STOC19p756, author = {Dominik Kempa and Tomasz Kociumaka}, title = {String Synchronizing Sets: Sublinear-Time BWT Construction and Optimal LCE Data Structure}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {756--767}, doi = {10.1145/3313276.3316368}, year = {2019}, } Publisher's Version 
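The object the abstract is about can be pinned down with a naive quadratic construction — far from the paper's O(n/√log n)-time algorithm, but it shows exactly what is being built. The sentinel symbol and function names below are illustrative choices, not from the paper:

```python
def bwt(text: str) -> str:
    """Naive O(n^2 log n) Burrows-Wheeler transform via sorted rotations.

    A sentinel '$' (lexicographically smallest) is appended so the
    transform is invertible.
    """
    s = text + "$"
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    # The BWT is the last column of the sorted rotation matrix.
    return "".join(rot[-1] for rot in rotations)


def inverse_bwt(last: str) -> str:
    """Invert the BWT by repeatedly prepending and re-sorting columns."""
    table = [""] * len(last)
    for _ in range(len(last)):
        table = sorted(last[i] + table[i] for i in range(len(last)))
    # The rotation ending with the sentinel is the original text.
    row = next(r for r in table if r.endswith("$"))
    return row[:-1]
```

For example, `bwt("banana")` yields `"annb$aa"`, and `inverse_bwt` recovers `"banana"` from it.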

Koehler, Frederic 
STOC '19: "Mean-Field Approximation, ..."
Mean-Field Approximation, Convex Hierarchies, and the Optimality of Correlation Rounding: A Unified Perspective
Vishesh Jain, Frederic Koehler, and Andrej Risteski (Massachusetts Institute of Technology, USA) The free energy is a key quantity of interest in Ising models, but unfortunately, computing it in general is computationally intractable. Two popular (variational) approximation schemes for estimating the free energy of general Ising models (in particular, even in regimes where correlation decay does not hold) are: (i) the mean-field approximation with roots in statistical physics, which estimates the free energy from below, and (ii) hierarchies of convex relaxations with roots in theoretical computer science, which estimate the free energy from above. We show, surprisingly, that the tight regime for both methods to compute the free energy to leading order is identical. More precisely, we show that the mean-field approximation to the free energy is within O((nJ_{F})^{2/3}) of the true free energy, where J_{F} denotes the Frobenius norm of the interaction matrix of the Ising model. This simultaneously subsumes both the breakthrough work of Basak and Mukherjee, who showed the tight result that the mean-field approximation is within o(n) whenever J_{F} = o(√n), as well as the work of Jain, Koehler, and Mossel, who gave the previously best known non-asymptotic bound of O((nJ_{F})^{2/3} log^{1/3}(nJ_{F})). We give a simple, algorithmic proof of this result using a convex relaxation proposed by Risteski based on the Sherali-Adams hierarchy, automatically giving subexponential-time approximation schemes for the free energy in this entire regime. Our algorithmic result is tight under Gap-ETH. We furthermore combine our techniques with spin glass theory to prove (in a strong sense) the optimality of correlation rounding, refuting a recent conjecture of Allen, O’Donnell, and Zhou. Finally, we give the tight generalization of all of these results to k-MRFs, capturing as a special case previous work on approximating MAX-k-CSP. 
@InProceedings{STOC19p1226, author = {Vishesh Jain and Frederic Koehler and Andrej Risteski}, title = {Mean-Field Approximation, Convex Hierarchies, and the Optimality of Correlation Rounding: A Unified Perspective}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1226--1236}, doi = {10.1145/3313276.3316299}, year = {2019}, } Publisher's Version STOC '19: "Learning Restricted Boltzmann ..." Learning Restricted Boltzmann Machines via Influence Maximization Guy Bresler, Frederic Koehler, and Ankur Moitra (Massachusetts Institute of Technology, USA) Graphical models are a rich language for describing high-dimensional distributions in terms of their dependence structure. While there are algorithms with provable guarantees for learning undirected graphical models in a variety of settings, there has been much less progress in the important scenario when there are latent variables. Here we study Restricted Boltzmann Machines (or RBMs), which are a popular model with wide-ranging applications in dimensionality reduction, collaborative filtering, topic modeling, feature extraction, and deep learning. The main message of our paper is a strong dichotomy in the feasibility of learning RBMs, depending on the nature of the interactions between variables: ferromagnetic models can be learned efficiently, while general models cannot. In particular, we give a simple greedy algorithm based on influence maximization to learn ferromagnetic RBMs with bounded degree. In fact, we learn a description of the distribution on the observed variables as a Markov Random Field. Our analysis is based on tools from mathematical physics that were developed to show the concavity of magnetization. Our algorithm extends straightforwardly to general ferromagnetic Ising models with latent variables. Conversely, we show that even for a constant number of latent variables with constant degree, without ferromagneticity the problem is as hard as sparse parity with noise. 
This hardness result is based on a sharp and surprising characterization of the representational power of bounded-degree RBMs: the distribution on their observed variables can simulate any bounded-order MRF. This result is of independent interest since RBMs are the building blocks of deep belief networks. @InProceedings{STOC19p828, author = {Guy Bresler and Frederic Koehler and Ankur Moitra}, title = {Learning Restricted Boltzmann Machines via Influence Maximization}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {828--839}, doi = {10.1145/3313276.3316372}, year = {2019}, } Publisher's Version 

Kothari, Robin 
STOC '19: "Exponential Separation between ..."
Exponential Separation between Shallow Quantum Circuits and Unbounded Fan-In Shallow Classical Circuits
Adam Bene Watts, Robin Kothari, Luke Schaeffer, and Avishay Tal (Massachusetts Institute of Technology, USA; Microsoft Research, USA; Stanford University, USA) Recently, Bravyi, Gosset, and König (Science, 2018) exhibited a search problem called the 2D Hidden Linear Function (2D HLF) problem that can be solved exactly by a constant-depth quantum circuit using bounded fan-in gates (or QNC^0 circuits), but cannot be solved by any constant-depth classical circuit using bounded fan-in AND, OR, and NOT gates (or NC^0 circuits). In other words, they exhibited a search problem in QNC^0 that is not in NC^0. We strengthen their result by proving that the 2D HLF problem is not contained in AC^0, the class of classical, polynomial-size, constant-depth circuits over the gate set of unbounded fan-in AND and OR gates, and NOT gates. We also supplement this worst-case lower bound with an average-case result: there exists a simple distribution under which any AC^0 circuit (even of nearly exponential size) has exponentially small correlation with the 2D HLF problem. Our results are shown by constructing a new problem in QNC^0, which we call the Parity Halving Problem, which is easier to work with. We prove our AC^0 lower bounds for this problem, and then show that it reduces to the 2D HLF problem. @InProceedings{STOC19p515, author = {Adam Bene Watts and Robin Kothari and Luke Schaeffer and Avishay Tal}, title = {Exponential Separation between Shallow Quantum Circuits and Unbounded Fan-In Shallow Classical Circuits}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {515--526}, doi = {10.1145/3313276.3316404}, year = {2019}, } Publisher's Version 

Koutsoupias, Elias 
STOC '19: "The Online 𝑘-Taxi Problem ..."
The Online 𝑘-Taxi Problem
Christian Coester and Elias Koutsoupias (University of Oxford, UK) We consider the online k-taxi problem, a generalization of the k-server problem, in which k taxis serve a sequence of requests in a metric space. A request consists of two points s and t, representing a passenger that wants to be carried by a taxi from s to t. The goal is to serve all requests while minimizing the total distance traveled by all taxis. The problem comes in two flavors, called the easy and the hard k-taxi problem: in the easy k-taxi problem, the cost is defined as the total distance traveled by the taxis; in the hard k-taxi problem, the cost is only the distance of empty runs. The hard k-taxi problem is substantially more difficult than the easy version, with an at least exponential deterministic competitive ratio, Ω(2^{k}), admitting a reduction from the layered graph traversal problem. In contrast, the easy k-taxi problem has exactly the same competitive ratio as the k-server problem. We focus mainly on the hard version. For hierarchically separated trees (HSTs), we present a memoryless randomized algorithm with competitive ratio 2^{k}−1 against adaptive online adversaries and provide two matching lower bounds: for arbitrary algorithms against adaptive adversaries and for memoryless algorithms against oblivious adversaries. Due to well-known HST embedding techniques, the algorithm implies a randomized O(2^{k} log n)-competitive algorithm for arbitrary n-point metrics. This is the first competitive algorithm for the hard k-taxi problem for general finite metric spaces and general k. For the special case of k=2, we obtain a precise answer of 9 for the competitive ratio in general metrics. With an algorithm based on growing, shrinking, and shifting regions, we show that one can achieve a constant competitive ratio also for the hard 3-taxi problem on the line (abstracting the scheduling of three elevators). 
@InProceedings{STOC19p1136, author = {Christian Coester and Elias Koutsoupias}, title = {The Online 𝑘-Taxi Problem}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1136--1147}, doi = {10.1145/3313276.3316370}, year = {2019}, } Publisher's Version 
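The distinction between the two cost models can be made concrete with a toy simulator. The greedy nearest-taxi policy below is a hypothetical baseline (not one of the paper's algorithms), used only to show how the easy cost (all movement) and the hard cost (empty runs only) diverge on the same request sequence:

```python
def serve_requests(taxi_positions, requests):
    """Greedy nearest-taxi simulator on the line, illustrating the two
    cost models of the k-taxi problem.

    Each request (s, t) is served by the taxi currently closest to s.
    The 'easy' cost counts all movement; the 'hard' cost counts only
    the empty run to s, not the occupied s -> t leg.
    """
    taxis = list(taxi_positions)
    easy_cost = hard_cost = 0
    for s, t in requests:
        # Pick the taxi with the shortest empty run to the pickup point.
        i = min(range(len(taxis)), key=lambda j: abs(taxis[j] - s))
        empty_run = abs(taxis[i] - s)  # counted in both models
        occupied = abs(t - s)          # counted only in the easy model
        easy_cost += empty_run + occupied
        hard_cost += empty_run
        taxis[i] = t                   # the taxi ends at the drop-off
    return easy_cost, hard_cost
```

With taxis at 0 and 10 and requests (2, 5) then (9, 0), the easy cost is 15 while the hard cost is only 3 — the occupied legs dominate, which is exactly why the hard variant gives the adversary so much more power.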

Krachun, Dmitry 
STOC '19: "An Optimal Space Lower Bound ..."
An Optimal Space Lower Bound for Approximating MAX-CUT
Michael Kapralov and Dmitry Krachun (EPFL, Switzerland; University of Geneva, Switzerland) We consider the problem of estimating the value of MAX-CUT in a graph in the streaming model of computation. At one extreme, there is a trivial 2-approximation for this problem that uses only O(log n) space: count the number of edges and output half of this value as the estimate for the size of the MAX-CUT. At the other extreme, for any fixed є > 0, if one allows Õ(n) space, a (1+є)-approximate solution to the MAX-CUT value can be obtained by storing an Õ(n)-size sparsifier that essentially preserves the MAX-CUT value. Our main result is that any (randomized) single-pass streaming algorithm that breaks the 2-approximation barrier requires Ω(n) space, thus resolving the space complexity of any nontrivial approximation of the MAX-CUT value to within polylogarithmic factors in the single-pass streaming model. We achieve the result by presenting a tight analysis of the Implicit Hidden Partition Problem introduced by Kapralov et al. [SODA’17] for an arbitrarily large number of players. In this problem a number of players receive random matchings of Ω(n) size together with random bits on the edges, and their task is to determine whether the bits correspond to parities of some hidden bipartition, or are just uniformly random. Unlike all previous Fourier-analytic communication lower bounds, our analysis does not directly use bounds on the ℓ_{2} norm of Fourier coefficients of a typical message at any given weight level that follow from hypercontractivity. 
Instead, we use the fact that the graphs received by the players are sparse (matchings) to obtain strong upper bounds on the ℓ_{1} norm of the Fourier coefficients of the messages of individual players using their special structure, and then argue, using the convolution theorem, that similar strong bounds on the ℓ_{1} norm are essentially preserved (up to an exponential loss in the number of players) once the messages of different players are combined. We feel that our main technique is likely of independent interest. @InProceedings{STOC19p277, author = {Michael Kapralov and Dmitry Krachun}, title = {An Optimal Space Lower Bound for Approximating MAX-CUT}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {277--288}, doi = {10.1145/3313276.3316364}, year = {2019}, } Publisher's Version 
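The trivial 2-approximation mentioned at the start of the abstract is easy to state in code: the streaming estimator stores nothing but a counter. The brute-force comparator is only for sanity-checking on toy graphs; both function names are illustrative:

```python
from itertools import product


def maxcut_stream_estimate(edge_stream):
    """The trivial O(log n)-space streaming 2-approximation: count the
    edges and report m/2.

    A uniformly random bipartition cuts each edge with probability 1/2,
    so MAX-CUT >= m/2, while trivially MAX-CUT <= m; hence m/2 is
    within a factor 2 of the optimum.
    """
    m = sum(1 for _ in edge_stream)  # only a counter is stored
    return m / 2


def maxcut_exact(n, edges):
    """Brute-force MAX-CUT on a small n-vertex graph, for comparison."""
    return max(
        sum(1 for u, v in edges if side[u] != side[v])
        for side in product((0, 1), repeat=n)
    )
```

On a triangle the estimator returns 1.5 while the true MAX-CUT is 2, so the guarantee m/2 ≤ MAX-CUT ≤ m holds; the paper shows that doing any better in one pass requires Ω(n) space.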

Krokhin, Andrei 
STOC '19: "Algebraic Approach to Promise ..."
Algebraic Approach to Promise Constraint Satisfaction
Jakub Bulín, Andrei Krokhin, and Jakub Opršal (Charles University in Prague, Czechia; University of Durham, UK) The complexity and approximability of the constraint satisfaction problem (CSP) has been actively studied over the last 20 years. A new version of the CSP, the promise CSP (PCSP), has recently been proposed, motivated by open questions about the approximability of variants of satisfiability and graph colouring. The PCSP significantly extends the standard decision CSP. The complexity of CSPs with a fixed constraint language on a finite domain has recently been fully classified, greatly guided by the algebraic approach, which uses polymorphisms — high-dimensional symmetries of solution spaces — to analyse the complexity of problems. The corresponding classification for PCSPs is wide open and includes some long-standing open questions, such as the complexity of approximate graph colouring, as special cases. The basic algebraic approach to PCSP was initiated by Brakensiek and Guruswami, and in this paper we significantly extend it and lift it from concrete properties of polymorphisms to their abstract properties. We introduce a new class of problems that can be viewed as algebraic versions of the (Gap) Label Cover problem, and show that every PCSP with a fixed constraint language is equivalent to a problem of this form. This allows us to identify a “measure of symmetry” that is well suited for comparing and relating the complexity of different PCSPs via the algebraic approach. We demonstrate how our theory can be applied by improving the state of the art in approximate graph colouring: we show that, for any k ≥ 3, it is NP-hard to find a (2k−1)-colouring of a given k-colourable graph. @InProceedings{STOC19p602, author = {Jakub Bulín and Andrei Krokhin and Jakub Opršal}, title = {Algebraic Approach to Promise Constraint Satisfaction}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {602--613}, doi = {10.1145/3313276.3316300}, year = {2019}, } Publisher's Version 

Kumar, Akash 
STOC '19: "Random Walks and Forbidden ..."
Random Walks and Forbidden Minors II: A poly(d ε⁻¹)-Query Tester for Minor-Closed Properties of Bounded Degree Graphs
Akash Kumar, C. Seshadhri, and Andrew Stolman (Purdue University, USA; University of California at Santa Cruz, USA) Let G be a graph with n vertices and maximum degree d. Fix some minor-closed property P (such as planarity). We say that G is ε-far from P if one has to remove ε dn edges to make it have P. The problem of property testing P was introduced in the seminal work of Benjamini-Schramm-Shapira (STOC 2008), which gave a tester with query complexity triply exponential in ε^{−1}. Levi-Ron (TALG 2015) have given the best tester to date, with a quasi-polynomial (in ε^{−1}) query complexity. It is an open problem to get property testers whose query complexity is poly(dε^{−1}), even for planarity. In this paper, we resolve this open question. For any minor-closed property, we give a tester with query complexity d · poly(ε^{−1}). The previous line of work on (independent of n, two-sided) testers is primarily combinatorial. Our work, on the other hand, employs techniques from spectral graph theory. This paper is a continuation of recent work of the authors (FOCS 2018) analyzing random walk algorithms that find forbidden minors. @InProceedings{STOC19p559, author = {Akash Kumar and C. Seshadhri and Andrew Stolman}, title = {Random Walks and Forbidden Minors II: A poly(<i>d ε</i>⁻¹)-Query Tester for Minor-Closed Properties of Bounded Degree Graphs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {559--567}, doi = {10.1145/3313276.3316330}, year = {2019}, } Publisher's Version 

Künnemann, Marvin 
STOC '19: "Approximating APSP without ..."
Approximating APSP without Scaling: Equivalence of Approximate Min-Plus and Exact Min-Max
Karl Bringmann, Marvin Künnemann, and Karol Węgrzycki (Max Planck Institute for Informatics, Germany; University of Warsaw, Poland) Zwick’s (1+ε)-approximation algorithm for the All Pairs Shortest Path (APSP) problem runs in time Õ(n^{ω}/ε log W), where ω ≤ 2.373 is the exponent of matrix multiplication and W denotes the largest weight. This can be used to approximate several graph characteristics, including the diameter, radius, median, minimum-weight triangle, and minimum-weight cycle, in the same time bound. Since Zwick’s algorithm uses the scaling technique, it has a factor log W in the running time. In this paper, we study whether APSP and related problems admit approximation schemes avoiding the scaling technique. That is, the number of arithmetic operations should be independent of W; this is called strongly polynomial. Our main results are as follows. (1) We design approximation schemes in strongly polynomial time O(n^{ω}/ε polylog(n/ε)) for APSP on undirected graphs as well as for the graph characteristics diameter, radius, median, minimum-weight triangle, and minimum-weight cycle on directed or undirected graphs. (2) For APSP on directed graphs we design an approximation scheme in strongly polynomial time O(n^{ω + 3/2} ε^{−1} polylog(n/ε)). This is significantly faster than the best exact algorithm. (3) We explain why our approximation scheme for APSP on directed graphs has a worse exponent than ω: any improvement over our exponent ω + 3/2 would improve the best known algorithm for Min-Max Product. In fact, we prove that approximating directed APSP and exactly computing the Min-Max Product are equivalent. Our techniques yield a framework for approximation problems over the (min,+)-semiring that can be applied more generally. In particular, we obtain the first strongly polynomial approximation scheme for Min-Plus Convolution in strongly subquadratic time, and we prove an equivalence of approximate Min-Plus Convolution and exact Min-Max Convolution. 
@InProceedings{STOC19p943, author = {Karl Bringmann and Marvin Künnemann and Karol Węgrzycki}, title = {Approximating APSP without Scaling: Equivalence of Approximate Min-Plus and Exact Min-Max}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {943--954}, doi = {10.1145/3313276.3316373}, year = {2019}, } Publisher's Version 
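The two matrix products whose equivalence the paper establishes — approximate (min,+) versus exact (min,max) — can be written down directly in their naive cubic form (the fast algorithms in the paper are far more involved; this only fixes the definitions):

```python
def min_plus(A, B):
    """Exact (min,+) product: C[i][j] = min_k A[i][k] + B[k][j].
    Iterating this product on a weight matrix yields APSP distances."""
    n = len(A)
    return [[min(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]


def min_max(A, B):
    """Exact (min,max) product: C[i][j] = min_k max(A[i][k], B[k][j]).
    This computes bottleneck path costs rather than path lengths."""
    n = len(A)
    return [[min(max(A[i][k], B[k][j]) for k in range(n)) for j in range(n)]
            for i in range(n)]
```

On the weight matrix of the 3-cycle 0 → 1 → 2 with weights 4 and 3, squaring under (min,+) gives distance 7 from 0 to 2, while squaring under (min,max) gives the bottleneck value 4 for the same pair — the two semirings measure genuinely different quantities, which is what makes their equivalence (approximate vs. exact) surprising.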

Kuszmaul, William 
STOC '19: "Achieving Optimal Backlog ..."
Achieving Optimal Backlog in Multiprocessor Cup Games
Michael A. Bender, Martín Farach-Colton, and William Kuszmaul (Stony Brook University, USA; Rutgers University, USA; Massachusetts Institute of Technology, USA) Many problems in processor scheduling, deamortization, and buffer management can be modeled as single- and multi-processor cup games. At the beginning of the single-processor n-cup game, all cups are empty. In each step of the game, a filler distributes 1−є units of water among the cups, and then an emptier selects a cup and removes up to 1 unit of water from it. The goal of the emptier is to minimize the amount of water in the fullest cup, also known as the backlog. The greedy algorithm (i.e., empty from the fullest cup) is known to achieve backlog O(log n), and no deterministic algorithm can do better. We show that the performance of the greedy algorithm can be exponentially improved with a small amount of randomization: after each step and for any k ≥ Ω(log є^{−1}), the emptier achieves backlog at most O(k) with probability at least 1 − O(2^{−2^{k}}). We call our algorithm the smoothed greedy algorithm because it follows from a smoothed analysis of the (standard) greedy algorithm. In each step of the p-processor n-cup game, the filler distributes p(1−є) units of water among the cups, with no cup receiving more than 1−δ units of water, and then the emptier selects p cups and removes 1 unit of water from each. Proving nontrivial bounds on the backlog for the multiprocessor cup game has remained open for decades. We present a simple analysis of the greedy algorithm for the multiprocessor cup game, establishing a backlog of O(є^{−1} log n), as long as δ > 1/poly(n). Turning to randomized algorithms, we find that the backlog drops to constant. Specifically, we show that if є and δ satisfy reasonable constraints, then there exists an algorithm that bounds the backlog after a given step by 3 with probability at least 1 − O(exp(−Ω(є^{2} p))). 
We prove that our results are asymptotically optimal for constant є, in the sense that no algorithm can achieve better bounds, up to constant factors in the backlog and in p. Moreover, we prove robustness results, demonstrating that our randomized algorithms continue to behave well even when placed in bad starting states. @InProceedings{STOC19p1148, author = {Michael A. Bender and Martín Farach-Colton and William Kuszmaul}, title = {Achieving Optimal Backlog in Multiprocessor Cup Games}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1148--1157}, doi = {10.1145/3313276.3316342}, year = {2019}, } Publisher's Version 
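The single-processor game with the greedy emptier can be sketched in a few lines. The uniform filler below is an illustrative (and deliberately non-adversarial) choice, and the function names are assumptions — this is the standard greedy emptier from the abstract, not the paper's smoothed algorithm:

```python
def cup_game(n, eps, steps, filler):
    """Single-processor cup game with the greedy emptier: each step the
    filler distributes 1 - eps units among n cups, then the emptier
    removes up to 1 unit from the fullest cup. Returns the backlog
    (water level of the fullest cup) after the last step."""
    cups = [0.0] * n
    for _ in range(steps):
        fill = filler(cups)  # nonnegative amounts summing to 1 - eps
        assert abs(sum(fill) - (1 - eps)) < 1e-9
        cups = [c + f for c, f in zip(cups, fill)]
        i = max(range(n), key=cups.__getitem__)  # greedy: fullest cup
        cups[i] = max(0.0, cups[i] - 1.0)
    return max(cups)


def uniform_filler(cups, eps=0.1):
    """A simple benign filler: spread the water evenly over all cups."""
    n = len(cups)
    return [(1 - eps) / n] * n
```

Against this benign filler the backlog stays below 1; the interesting regime, analyzed in the paper, is an adversarial filler that concentrates water to force the O(log n) worst case for deterministic greedy.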

Kyng, Rasmus 
STOC '19: "Flows in Almost Linear Time ..."
Flows in Almost Linear Time via Adaptive Preconditioning
Rasmus Kyng, Richard Peng, Sushant Sachdeva, and Di Wang (Harvard University, USA; Georgia Tech, USA; Microsoft Research, USA; University of Toronto, Canada) We present algorithms for solving a large class of flow and regression problems on unit-weighted graphs to (1 + 1/poly(n)) accuracy in almost-linear time. These problems include ℓ_{p}-norm minimizing flow for large p (p ∈ [ω(1), o(log^{2/3} n)]), and their duals, ℓ_{p}-norm semi-supervised learning for p close to 1. As p tends to infinity, p-norm flow and its dual tend to max-flow and min-cut respectively. Using this connection and our algorithms, we give an alternate approach for approximating undirected max-flow, and the first almost-linear time approximations of discretizations of total variation minimization objectives. Our framework is inspired by the routing-based solver for Laplacian linear systems by Spielman and Teng (STOC ’04, SIMAX ’14), and is based on several new tools we develop, including adaptive nonlinear preconditioning, tree-routings, and (ultra-)sparsification for mixed ℓ_{2} and ℓ_{p} norm objectives. @InProceedings{STOC19p902, author = {Rasmus Kyng and Richard Peng and Sushant Sachdeva and Di Wang}, title = {Flows in Almost Linear Time via Adaptive Preconditioning}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {902--913}, doi = {10.1145/3313276.3316410}, year = {2019}, } Publisher's Version 

Laekhanukit, Bundit 
STOC '19: "O(log² k / ..."
O(log² k / log log k)-Approximation Algorithm for Directed Steiner Tree: A Tight Quasi-Polynomial-Time Algorithm
Fabrizio Grandoni, Bundit Laekhanukit, and Shi Li (IDSIA, Switzerland; Shanghai University of Finance and Economics, China; SUNY Buffalo, USA) In the Directed Steiner Tree (DST) problem we are given an n-vertex directed edge-weighted graph, a root r, and a collection of k terminal nodes. Our goal is to find a minimum-cost subgraph that contains a directed path from r to every terminal. We present an O(log^2 k / log log k)-approximation algorithm for DST that runs in quasi-polynomial time, i.e., in time n^{polylog(k)}. Under standard complexity assumptions, we show the matching lower bound of Ω(log^2 k / log log k) for the class of quasi-polynomial-time algorithms, meaning that our approximation ratio is asymptotically the best possible. This is the first improvement on the DST problem since the classical quasi-polynomial-time O(log^3 k)-approximation algorithm by Charikar et al. [SODA’98 & J. Algorithms’99]. (The paper erroneously claims an O(log^2 k) approximation due to a mistake in prior work.) Our approach is based on two main ingredients. First, we derive an approximation-preserving reduction to the Group Steiner Tree on Trees with Dependency Constraint (GSTTD) problem. Compared to the classic Group Steiner Tree on Trees problem, in GSTTD we are additionally given some dependency constraints among the nodes in the output tree that must be satisfied. The GSTTD instance has quasi-polynomial size and logarithmic height. We remark that, in contrast, Zelikovsky’s height-reduction theorem [Algorithmica’97], used in all prior work on DST, achieves a reduction to a tree instance of the related Group Steiner Tree (GST) problem of similar height, however losing a logarithmic factor in the approximation ratio. Our second ingredient is an LP-rounding algorithm to approximately solve GSTTD instances, inspired by the framework developed by [Rothvoß, Preprint’11; Friggstad et al., IPCO’14]. We consider a Sherali-Adams lifting of a proper LP relaxation of GSTTD. 
Our rounding algorithm proceeds level by level from the root to the leaves, rounding and conditioning each time on a proper subset of label variables. The limited height of the tree and the small number of labels on root-to-leaf paths guarantee that a small enough (namely, polylogarithmic) number of Sherali-Adams lifting levels is sufficient to condition up to the leaves. We believe that our basic strategy of combining label-based reductions with a round-and-condition type of LP-rounding over hierarchies might find applications to other related problems. @InProceedings{STOC19p253, author = {Fabrizio Grandoni and Bundit Laekhanukit and Shi Li}, title = {<i>O</i>(log² <i>k</i> / log log <i>k</i>)-Approximation Algorithm for Directed Steiner Tree: A Tight Quasi-Polynomial-Time Algorithm}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {253--264}, doi = {10.1145/3313276.3316349}, year = {2019}, } Publisher's Version 

Larsen, Kasper Green 
STOC '19: "Lower Bounds for External ..."
Lower Bounds for External Memory Integer Sorting via Network Coding
Alireza Farhadi, MohammadTaghi Hajiaghayi, Kasper Green Larsen, and Elaine Shi (University of Maryland, USA; Aarhus University, Denmark; Cornell University, USA) Sorting extremely large datasets is a frequently occurring task in practice. These datasets are usually much larger than the computer’s main memory; thus external memory sorting algorithms, first introduced by Aggarwal and Vitter (1988), are often used. The complexity of comparison-based external memory sorting has been understood for decades by now; however, the situation remains elusive if we assume the keys to be sorted are integers. In internal memory, one can sort a set of n integer keys of Θ(lg n) bits each in O(n) time using the classic Radix Sort algorithm; however, in external memory, no integer sorting algorithms faster than the simple comparison-based ones are known. Whether such algorithms exist has remained a central open problem in external memory algorithms for more than three decades. In this paper, we present a tight conditional lower bound on the complexity of external memory sorting of integers. Our lower bound is based on a famous conjecture in network coding by Li and Li (2004), who conjectured that network coding cannot help anything beyond the standard multicommodity flow rate in undirected graphs. The only previous work connecting the Li and Li conjecture to lower bounds for algorithms is due to Adler et al. (2006). Adler et al. indeed obtain relatively simple lower bounds for oblivious algorithms (where the memory access pattern is fixed and independent of the input data). Unfortunately, obliviousness is a strong limitation, especially for integer sorting: we show that the Li and Li conjecture implies an Ω(n log n) lower bound for internal memory oblivious sorting when the keys are Θ(lg n) bits. This is in sharp contrast to the classic (non-oblivious) Radix Sort algorithm. 
Indeed, going beyond obliviousness is highly nontrivial; we need to introduce several new methods and involved techniques, which are of their own interest, to obtain our tight lower bound for external memory integer sorting. @InProceedings{STOC19p997, author = {Alireza Farhadi and MohammadTaghi Hajiaghayi and Kasper Green Larsen and Elaine Shi}, title = {Lower Bounds for External Memory Integer Sorting via Network Coding}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {997--1008}, doi = {10.1145/3313276.3316337}, year = {2019}, } Publisher's Version 
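The internal-memory contrast the abstract draws — linear-time Radix Sort for Θ(lg n)-bit keys — looks like this in its textbook LSD form (parameter choices here are illustrative defaults):

```python
def radix_sort(keys, bits=32, radix_bits=8):
    """LSD radix sort for nonnegative integer keys: O(n * bits/radix_bits)
    time, i.e., linear time when bits = O(lg n) and radix_bits = Θ(lg n).

    Stability of each bucket pass is what makes the least-significant-
    digit-first order correct.
    """
    mask = (1 << radix_bits) - 1
    for shift in range(0, bits, radix_bits):
        buckets = [[] for _ in range(1 << radix_bits)]
        for k in keys:
            buckets[(k >> shift) & mask].append(k)
        # Stable concatenation: earlier-bucket keys keep their order.
        keys = [k for bucket in buckets for k in bucket]
    return keys
```

Note that this relies on O(1)-time random access to the buckets, which is exactly what external memory lacks — hence the decades-old gap the paper addresses.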

Lasota, Sławomir 
STOC '19: "The Reachability Problem for ..."
The Reachability Problem for Petri Nets Is Not Elementary
Wojciech Czerwiński, Sławomir Lasota, Ranko Lazić, Jérôme Leroux, and Filip Mazowiecki (University of Warsaw, Poland; University of Warwick, UK; CNRS, France; University of Bordeaux, France) Petri nets, also known as vector addition systems, are a long-established model of concurrency with extensive applications in modelling and analysis of hardware, software, and database systems, as well as chemical, biological, and business processes. The central algorithmic problem for Petri nets is reachability: whether from the given initial configuration there exists a sequence of valid execution steps that reaches the given final configuration. The complexity of the problem has remained unsettled since the 1960s, and it is one of the most prominent open questions in the theory of verification. Decidability was proved by Mayr in his seminal STOC 1981 work, and the currently best published upper bound, due to Leroux and Schmitz (LICS 2019), is non-primitive recursive (Ackermannian). We establish a non-elementary lower bound: the reachability problem needs a tower of exponentials of time and space. Until this work, the best lower bound had been exponential space, due to Lipton in 1976. The new lower bound is a major breakthrough for several reasons. Firstly, it shows that the reachability problem is much harder than the coverability (i.e., state reachability) problem, which is also ubiquitous but has been known to be complete for exponential space since the late 1970s. Secondly, it implies that a plethora of problems from formal languages, logic, concurrent systems, process calculi, and other areas, which are known to admit reductions from the Petri nets reachability problem, are also not elementary. Thirdly, it makes obsolete the currently best lower bounds for the reachability problems for two key extensions of Petri nets: with branching and with a pushdown stack. 
At the heart of our proof is a novel gadget, the so-called factorial amplifier, which, assuming the availability of counters that are zero-testable and bounded by k, is guaranteed to produce arbitrarily large pairs of values whose ratio is exactly the factorial of k. We also develop a novel construction that uses arbitrarily large pairs of values with ratio R to provide zero-testable counters that are bounded by R. Repeatedly composing the factorial amplifier with itself by means of this construction then enables us to compute in linear time Petri nets that simulate Minsky machines whose counters are bounded by a tower of exponentials, which yields the non-elementary lower bound. By refining this scheme further, we in fact establish hardness for h-exponential space already for Petri nets with h + 13 counters. @InProceedings{STOC19p24, author = {Wojciech Czerwiński and Sławomir Lasota and Ranko Lazić and Jérôme Leroux and Filip Mazowiecki}, title = {The Reachability Problem for Petri Nets Is Not Elementary}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {24--33}, doi = {10.1145/3313276.3316369}, year = {2019}, } Publisher's Version 
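The reachability question above can be made concrete with a toy vector-addition-system search. The following Python sketch (illustrative only; all names are ours, and it explores configurations exhaustively up to a step bound, which the paper shows cannot scale in general) checks reachability by breadth-first search over configurations:

```python
from collections import deque

def vas_reachable(init, target, transitions, max_steps=10):
    """Breadth-first search for reachability in a vector addition system
    (a counter-style view of Petri nets): transition t may fire from
    configuration c iff c + t is componentwise nonnegative.  Exhaustive
    only up to max_steps; a toy illustration, since the paper shows the
    general problem needs a tower of exponentials of time and space."""
    init, target = tuple(init), tuple(target)
    frontier, seen = deque([init]), {init}
    for _ in range(max_steps):
        next_frontier = deque()
        while frontier:
            c = frontier.popleft()
            if c == target:
                return True
            for t in transitions:
                c2 = tuple(x + d for x, d in zip(c, t))
                if all(x >= 0 for x in c2) and c2 not in seen:
                    seen.add(c2)
                    next_frontier.append(c2)
        frontier = next_frontier
    return target in seen
```

On the two-counter system with transitions (−1,+1) and (+1,−1), configuration (0,2) is reachable from (2,0) but (1,2) is not, since the counter sum is invariant.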

Lazić, Ranko 
STOC '19: "The Reachability Problem for ..."
The Reachability Problem for Petri Nets Is Not Elementary
Wojciech Czerwiński, Sławomir Lasota, Ranko Lazić, Jérôme Leroux, and Filip Mazowiecki (University of Warsaw, Poland; University of Warwick, UK; CNRS, France; University of Bordeaux, France) Petri nets, also known as vector addition systems, are a long-established model of concurrency with extensive applications in modelling and analysis of hardware, software and database systems, as well as chemical, biological and business processes. The central algorithmic problem for Petri nets is reachability: whether from the given initial configuration there exists a sequence of valid execution steps that reaches the given final configuration. The complexity of the problem has remained unsettled since the 1960s, and it is one of the most prominent open questions in the theory of verification. Decidability was proved by Mayr in his seminal STOC 1981 work, and the currently best published upper bound, due to Leroux and Schmitz from LICS 2019, is non-primitive recursive (Ackermannian). We establish a non-elementary lower bound, i.e., that the reachability problem needs a tower of exponentials of time and space. Until this work, the best lower bound had been exponential space, due to Lipton in 1976. The new lower bound is a major breakthrough for several reasons. Firstly, it shows that the reachability problem is much harder than the coverability (i.e., state reachability) problem, which is also ubiquitous but has been known to be complete for exponential space since the late 1970s. Secondly, it implies that a plethora of problems from formal languages, logic, concurrent systems, process calculi and other areas that are known to admit reductions from the Petri nets reachability problem are also not elementary. Thirdly, it makes obsolete the currently best lower bounds for the reachability problems for two key extensions of Petri nets: with branching and with a pushdown stack. 
At the heart of our proof is a novel gadget, the so-called factorial amplifier, which, assuming the availability of counters that are zero-testable and bounded by k, is guaranteed to produce arbitrarily large pairs of values whose ratio is exactly the factorial of k. We also develop a novel construction that uses arbitrarily large pairs of values with ratio R to provide zero-testable counters that are bounded by R. Repeatedly composing the factorial amplifier with itself by means of this construction then enables us to compute in linear time Petri nets that simulate Minsky machines whose counters are bounded by a tower of exponentials, which yields the non-elementary lower bound. By refining this scheme further, we in fact establish hardness for h-exponential space already for Petri nets with h + 13 counters. @InProceedings{STOC19p24, author = {Wojciech Czerwiński and Sławomir Lasota and Ranko Lazić and Jérôme Leroux and Filip Mazowiecki}, title = {The Reachability Problem for Petri Nets Is Not Elementary}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {24--33}, doi = {10.1145/3313276.3316369}, year = {2019}, } Publisher's Version 

Lee, Euiwoong 
STOC '19: "The Number of Minimum k-Cuts: ..."
The Number of Minimum k-Cuts: Improving the Karger-Stein Bound
Anupam Gupta, Euiwoong Lee, and Jason Li (Carnegie Mellon University, USA; New York University, USA) Given an edge-weighted graph, how many minimum k-cuts can it have? This is a fundamental question at the intersection of algorithms, extremal combinatorics, and graph theory. It is particularly interesting in that the best known bounds are algorithmic: they stem from algorithms that compute the minimum k-cut. In 1994, Karger and Stein obtained a randomized contraction algorithm that finds a minimum k-cut in O(n^{(2−o(1))k}) time. It can also enumerate all such k-cuts in the same running time, establishing a corresponding extremal bound of O(n^{(2−o(1))k}). Since then, the algorithmic side of the minimum k-cut problem has seen much progress, leading to a deterministic algorithm based on a tree packing result of Thorup, which enumerates all minimum k-cuts in the same asymptotic running time, and gives an alternate proof of the O(n^{(2−o(1))k}) bound. However, beating the Karger–Stein bound, even for computing a single minimum k-cut, has remained out of reach. In this paper, we give an algorithm to enumerate all minimum k-cuts in O(n^{(1.981+o(1))k}) time, breaking the algorithmic and extremal barriers for enumerating minimum k-cuts. To obtain our result, we combine ideas from both the Karger–Stein and Thorup results, and draw a novel connection between minimum k-cut and extremal set theory. In particular, we give and use tighter bounds on the size of set systems with bounded dual VC-dimension, which may be of independent interest. @InProceedings{STOC19p229, author = {Anupam Gupta and Euiwoong Lee and Jason Li}, title = {The Number of Minimum <i>k</i>-Cuts: Improving the Karger-Stein Bound}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {229--240}, doi = {10.1145/3313276.3316395}, year = {2019}, } Publisher's Version 
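The contraction paradigm behind the Karger–Stein bound can be illustrated for the basic k=2 minimum cut. This Python sketch (our own toy illustration of random contraction, not the paper's improved enumeration algorithm) repeatedly contracts random edges until two super-nodes remain:

```python
import random

def contract_min_cut(edges, n, trials=200, seed=0):
    """Karger-style random contraction for the (k=2) minimum cut:
    contract uniformly random edges until two super-nodes remain; the
    surviving crossing edges form a candidate cut.  Repeating many
    trials makes the true minimum cut likely to survive some trial.
    Illustrates only the contraction paradigm, not the improved bound."""
    rng = random.Random(seed)
    best = len(edges)
    for _ in range(trials):
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path compression
                x = parent[x]
            return x
        work = list(edges)
        components = n
        while components > 2:
            u, v = work[rng.randrange(len(work))]
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
                components -= 1
            # drop edges that became self-loops under the contraction
            work = [(a, b) for (a, b) in work if find(a) != find(b)]
        best = min(best, len(work))
    return best
```

On two triangles joined by a single bridge edge, the sketch recovers the minimum cut of size 1 with overwhelming probability over the 200 trials.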

Lee, Yin Tat 
STOC '19: "Solving Linear Programs in ..."
Solving Linear Programs in the Current Matrix Multiplication Time
Michael B. Cohen, Yin Tat Lee, and Zhao Song (Massachusetts Institute of Technology, USA; University of Washington, USA; Microsoft Research, USA; University of Texas at Austin, USA) This paper shows how to solve linear programs of the form min_{Ax=b,x≥0} c^{⊤}x with n variables in time O^{*}((n^{ω}+n^{2.5−α/2}+n^{2+1/6}) log(n/δ)), where ω is the exponent of matrix multiplication, α is the dual exponent of matrix multiplication, and δ is the relative accuracy. For the current values of ω∼2.37 and α∼0.31, our algorithm takes O^{*}(n^{ω} log(n/δ)) time. When ω = 2, our algorithm takes O^{*}(n^{2+1/6} log(n/δ)) time. Our algorithm utilizes several new concepts that we believe may be of independent interest: (1) We define a stochastic central path method. (2) We show how to maintain a projection matrix √W A^{⊤}(AWA^{⊤})^{−1}A√W in subquadratic time under ℓ_{2} multiplicative changes in the diagonal matrix W. @InProceedings{STOC19p938, author = {Michael B. Cohen and Yin Tat Lee and Zhao Song}, title = {Solving Linear Programs in the Current Matrix Multiplication Time}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {938--942}, doi = {10.1145/3313276.3316303}, year = {2019}, } Publisher's Version STOC '19: "Competitively Chasing Convex ..." Competitively Chasing Convex Bodies Sébastien Bubeck, Yin Tat Lee, Yuanzhi Li, and Mark Sellke (Microsoft Research, USA; University of Washington, USA; Stanford University, USA) Let F be a family of sets in some metric space. In the F-chasing problem, an online algorithm observes a request sequence of sets in F and responds (online) by giving a sequence of points in these sets. The movement cost is the distance between consecutive such points. The competitive ratio is the worst-case ratio (over request sequences) between the total movement of the online algorithm and the smallest movement one could have achieved by knowing in advance the request sequence. 
The family F is said to be chaseable if there exists an online algorithm with finite competitive ratio. In 1991, Linial and Friedman conjectured that the family of convex sets in Euclidean space is chaseable. We prove this conjecture. @InProceedings{STOC19p861, author = {Sébastien Bubeck and Yin Tat Lee and Yuanzhi Li and Mark Sellke}, title = {Competitively Chasing Convex Bodies}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {861--868}, doi = {10.1145/3313276.3316314}, year = {2019}, } Publisher's Version 
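A toy instance of the chasing setup is easy to simulate in one dimension, where greedily projecting onto each requested set happens to work; the point of the paper is that the general convex case in R^n is far harder. A hypothetical Python sketch (names are ours):

```python
def greedy_chase(intervals, start=0.0):
    """Greedy chasing of convex sets on the real line: move to the
    nearest point of each requested interval [lo, hi] and accumulate
    the movement cost.  A toy instance of the F-chasing setup; greedy
    projection is known to fail in higher dimensions, which is why a
    competitive algorithm for convex bodies in R^n is nontrivial."""
    pos, cost, path = start, 0.0, []
    for lo, hi in intervals:
        nxt = min(max(pos, lo), hi)   # projection of pos onto [lo, hi]
        cost += abs(nxt - pos)
        pos = nxt
        path.append(pos)
    return cost, path
```

Starting at 0 and chasing [2,3], then [0,1], then [0,5] moves to 2, back to 1, and then stays, for a total movement of 3.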

Leroux, Jérôme 
STOC '19: "The Reachability Problem for ..."
The Reachability Problem for Petri Nets Is Not Elementary
Wojciech Czerwiński, Sławomir Lasota, Ranko Lazić, Jérôme Leroux, and Filip Mazowiecki (University of Warsaw, Poland; University of Warwick, UK; CNRS, France; University of Bordeaux, France) Petri nets, also known as vector addition systems, are a long-established model of concurrency with extensive applications in modelling and analysis of hardware, software and database systems, as well as chemical, biological and business processes. The central algorithmic problem for Petri nets is reachability: whether from the given initial configuration there exists a sequence of valid execution steps that reaches the given final configuration. The complexity of the problem has remained unsettled since the 1960s, and it is one of the most prominent open questions in the theory of verification. Decidability was proved by Mayr in his seminal STOC 1981 work, and the currently best published upper bound, due to Leroux and Schmitz from LICS 2019, is non-primitive recursive (Ackermannian). We establish a non-elementary lower bound, i.e., that the reachability problem needs a tower of exponentials of time and space. Until this work, the best lower bound had been exponential space, due to Lipton in 1976. The new lower bound is a major breakthrough for several reasons. Firstly, it shows that the reachability problem is much harder than the coverability (i.e., state reachability) problem, which is also ubiquitous but has been known to be complete for exponential space since the late 1970s. Secondly, it implies that a plethora of problems from formal languages, logic, concurrent systems, process calculi and other areas that are known to admit reductions from the Petri nets reachability problem are also not elementary. Thirdly, it makes obsolete the currently best lower bounds for the reachability problems for two key extensions of Petri nets: with branching and with a pushdown stack. 
At the heart of our proof is a novel gadget, the so-called factorial amplifier, which, assuming the availability of counters that are zero-testable and bounded by k, is guaranteed to produce arbitrarily large pairs of values whose ratio is exactly the factorial of k. We also develop a novel construction that uses arbitrarily large pairs of values with ratio R to provide zero-testable counters that are bounded by R. Repeatedly composing the factorial amplifier with itself by means of this construction then enables us to compute in linear time Petri nets that simulate Minsky machines whose counters are bounded by a tower of exponentials, which yields the non-elementary lower bound. By refining this scheme further, we in fact establish hardness for h-exponential space already for Petri nets with h + 13 counters. @InProceedings{STOC19p24, author = {Wojciech Czerwiński and Sławomir Lasota and Ranko Lazić and Jérôme Leroux and Filip Mazowiecki}, title = {The Reachability Problem for Petri Nets Is Not Elementary}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {24--33}, doi = {10.1145/3313276.3316369}, year = {2019}, } Publisher's Version 

Li, Jason 
STOC '19: "Planar Diameter via Metric ..."
Planar Diameter via Metric Compression
Jason Li and Merav Parter (Carnegie Mellon University, USA; Weizmann Institute of Science, Israel) We develop a new approach for distributed distance computation in planar graphs that is based on a variant of the metric compression problem recently introduced by Abboud et al. [SODA’18]. In our variant of the Planar Graph Metric Compression Problem, one is given an n-vertex planar graph G=(V,E), a set S ⊆ V of source terminals lying on a single face, and a subset of target terminals T ⊆ V. The goal is to compactly encode the S × T distances. One of our key technical contributions is in providing a compression scheme that encodes all S × T distances using O(|S|·D+|T|) bits, for unweighted graphs with diameter D. This significantly improves the state of the art of O(|S|·2^{D}+|T|·D) bits. We also consider an approximate version of the problem for weighted graphs, where the goal is to encode a (1+є)-approximation of the S × T distances, for a given input parameter є ∈ (0,1]. Here, our compression scheme uses O(|S|/є+|T|) bits. In addition, we describe how these compression schemes can be computed in near-linear time. At the heart of this compact compression scheme lies a VC-dimension-type argument on planar graphs, using the well-known Sauer’s lemma. This efficient compression scheme leads to several improvements and simplifications in the setting of diameter computation, most notably in the distributed setting: There is an O(D^{5})-round randomized distributed algorithm for computing the diameter in planar graphs, w.h.p. There is an O(D^{3}+D^{2}(log n/є))-round randomized distributed algorithm for computing a (1+є)-approximation of the diameter in weighted planar graphs with unweighted diameter D, w.h.p. No sublinear-round algorithms were known for these problems before. These distributed constructions are based on a new recursive graph decomposition that preserves the (unweighted) diameter of each of the subgraphs up to a logarithmic term. 
Using this decomposition, we also get an exact SSSP tree computation within O(D^{2}) rounds. @InProceedings{STOC19p152, author = {Jason Li and Merav Parter}, title = {Planar Diameter via Metric Compression}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {152--163}, doi = {10.1145/3313276.3316358}, year = {2019}, } Publisher's Version STOC '19: "The Number of Minimum k-Cuts: ..." The Number of Minimum k-Cuts: Improving the Karger-Stein Bound Anupam Gupta, Euiwoong Lee, and Jason Li (Carnegie Mellon University, USA; New York University, USA) Given an edge-weighted graph, how many minimum k-cuts can it have? This is a fundamental question at the intersection of algorithms, extremal combinatorics, and graph theory. It is particularly interesting in that the best known bounds are algorithmic: they stem from algorithms that compute the minimum k-cut. In 1994, Karger and Stein obtained a randomized contraction algorithm that finds a minimum k-cut in O(n^{(2−o(1))k}) time. It can also enumerate all such k-cuts in the same running time, establishing a corresponding extremal bound of O(n^{(2−o(1))k}). Since then, the algorithmic side of the minimum k-cut problem has seen much progress, leading to a deterministic algorithm based on a tree packing result of Thorup, which enumerates all minimum k-cuts in the same asymptotic running time, and gives an alternate proof of the O(n^{(2−o(1))k}) bound. However, beating the Karger–Stein bound, even for computing a single minimum k-cut, has remained out of reach. In this paper, we give an algorithm to enumerate all minimum k-cuts in O(n^{(1.981+o(1))k}) time, breaking the algorithmic and extremal barriers for enumerating minimum k-cuts. To obtain our result, we combine ideas from both the Karger–Stein and Thorup results, and draw a novel connection between minimum k-cut and extremal set theory. In particular, we give and use tighter bounds on the size of set systems with bounded dual VC-dimension, which may be of independent interest. 
@InProceedings{STOC19p229, author = {Anupam Gupta and Euiwoong Lee and Jason Li}, title = {The Number of Minimum <i>k</i>-Cuts: Improving the Karger-Stein Bound}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {229--240}, doi = {10.1145/3313276.3316395}, year = {2019}, } Publisher's Version 

Li, Qian 
STOC '19: "Quantum Lovász Local Lemma: ..."
Quantum Lovász Local Lemma: Shearer’s Bound Is Tight
Kun He, Qian Li, Xiaoming Sun, and Jiapeng Zhang (Institute of Computing Technology at Chinese Academy of Sciences, China; University of Chinese Academy of Sciences, China; Shenzhen Institute of Computing Sciences, China; Shenzhen University, China; University of California at San Diego, USA) The Lovász Local Lemma (LLL) is a very powerful tool in combinatorics and probability theory for showing that all “bad” events can be avoided under some “weakly dependent” condition. Over the last decades, the algorithmic aspects of the LLL have also attracted much attention in theoretical computer science. A tight criterion under which the abstract version of the LLL (ALLL) holds was given by Shearer. It turns out that Shearer’s bound is generally not tight for the variable version of the LLL (VLLL). Recently, Ambainis et al. introduced a quantum version of the LLL (QLLL), which was then shown to be powerful for the quantum satisfiability problem. In this paper, we prove that Shearer’s bound is tight for the QLLL, i.e., the relative dimension of the smallest satisfying subspace is completely characterized by the independent set polynomial, affirming a conjecture proposed by Sattath et al. Our result also shows the tightness of Gilyén and Sattath’s algorithm, and implies that the lattice gas partition function fully characterizes quantum satisfiability for almost all Hamiltonians with large enough qudits. The commuting LLL (CLLL), the LLL for commuting local Hamiltonians, which are widely studied in the literature, is also investigated here. We prove that the tight regions of the CLLL and the QLLL are different in general. This result might imply that it is possible to design an algorithm for the CLLL which is still efficient beyond Shearer’s bound. In applications of LLLs, the symmetric cases are the most common, i.e., the events have the same probability and the Hamiltonians have the same relative dimension. We give the first lower bound on the gap between the symmetric VLLL and Shearer’s bound. 
Our result can be viewed as a quantitative study of the separation between quantum and classical constraint satisfaction problems. Additionally, we obtain similar results for the symmetric CLLL. As an application, we give lower bounds on the critical thresholds of the VLLL and the CLLL for several of the most common lattices. @InProceedings{STOC19p461, author = {Kun He and Qian Li and Xiaoming Sun and Jiapeng Zhang}, title = {Quantum Lovász Local Lemma: Shearer’s Bound Is Tight}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {461--472}, doi = {10.1145/3313276.3316392}, year = {2019}, } Publisher's Version 
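Shearer's criterion itself can be checked by brute force on tiny dependency graphs: a probability vector p lies in Shearer's region iff the independence polynomial, evaluated at −p, is strictly positive on every induced subgraph. A Python sketch (our illustration of the classical criterion only, not of the quantum machinery; names are ours):

```python
from itertools import combinations

def independence_poly(n, edges, x):
    """Evaluate the multivariate independence polynomial
    sum over independent sets I of prod_{i in I} x_i, by enumeration."""
    adj = {frozenset(ed) for ed in edges}
    total = 0.0
    for r in range(n + 1):
        for I in combinations(range(n), r):
            if all(frozenset((u, v)) not in adj
                   for u, v in combinations(I, 2)):
                term = 1.0
                for i in I:
                    term *= x[i]
                total += term
    return total

def in_shearer_region(n, edges, p):
    """Shearer's criterion (the bound the paper proves tight for QLLL):
    p is in the region iff the independence polynomial at -p is
    strictly positive on every induced subgraph of the dependency graph."""
    for r in range(n + 1):
        for S in combinations(range(n), r):
            sub = [ed for ed in edges if ed[0] in S and ed[1] in S]
            idx = {v: i for i, v in enumerate(S)}
            sub = [(idx[a], idx[b]) for a, b in sub]
            if independence_poly(len(S), sub, [-p[v] for v in S]) <= 0:
                return False
    return True
```

For a dependency graph consisting of a single edge, the region reduces to p_0 + p_1 < 1, since the full-graph polynomial is 1 − p_0 − p_1.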

Li, Shi 
STOC '19: "O(log² k / ..."
O(log² k / log log k)-Approximation Algorithm for Directed Steiner Tree: A Tight Quasi-Polynomial-Time Algorithm
Fabrizio Grandoni, Bundit Laekhanukit, and Shi Li (IDSIA, Switzerland; Shanghai University of Finance and Economics, China; SUNY Buffalo, USA) In the Directed Steiner Tree (DST) problem we are given an n-vertex directed edge-weighted graph, a root r, and a collection of k terminal nodes. Our goal is to find a minimum-cost subgraph that contains a directed path from r to every terminal. We present an O(log^{2} k/log log k)-approximation algorithm for DST that runs in quasi-polynomial time, i.e., in time n^{polylog(k)}. Under standard complexity assumptions, we show the matching lower bound of Ω(log^{2} k/log log k) for the class of quasi-polynomial-time algorithms, meaning that our approximation ratio is asymptotically the best possible. This is the first improvement on the DST problem since the classical quasi-polynomial-time O(log^{3} k)-approximation algorithm by Charikar et al. [SODA’98 & J. Algorithms’99]. (The paper erroneously claims an O(log^{2} k) approximation due to a mistake in prior work.) Our approach is based on two main ingredients. First, we derive an approximation-preserving reduction to the Group Steiner Tree on Trees with Dependency Constraint (GSTTD) problem. Compared to the classic Group Steiner Tree on Trees problem, in GSTTD we are additionally given some dependency constraints among the nodes in the output tree that must be satisfied. The GSTTD instance has quasi-polynomial size and logarithmic height. We remark that, in contrast, Zelikovsky’s height-reduction theorem [Algorithmica’97], used in all prior work on DST, achieves a reduction to a tree instance of the related Group Steiner Tree (GST) problem of similar height, however losing a logarithmic factor in the approximation ratio. Our second ingredient is an LP-rounding algorithm to approximately solve GSTTD instances, which is inspired by the framework developed by [Rothvoß, Preprint’11; Friggstad et al., IPCO’14]. We consider a Sherali-Adams lifting of a proper LP relaxation of GSTTD. 
Our rounding algorithm proceeds level by level from the root to the leaves, rounding and conditioning each time on a proper subset of label variables. The limited height of the tree and the small number of labels on root-to-leaf paths guarantee that a small enough (namely, polylogarithmic) number of Sherali-Adams lifting levels is sufficient to condition up to the leaves. We believe that our basic strategy of combining label-based reductions with a round-and-condition type of LP-rounding over hierarchies might find applications to other related problems. @InProceedings{STOC19p253, author = {Fabrizio Grandoni and Bundit Laekhanukit and Shi Li}, title = {<i>O</i>(log² <i>k</i> / log log <i>k</i>)-Approximation Algorithm for Directed Steiner Tree: A Tight Quasi-Polynomial-Time Algorithm}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {253--264}, doi = {10.1145/3313276.3316349}, year = {2019}, } Publisher's Version 

Li, Yuanzhi 
STOC '19: "Competitively Chasing Convex ..."
Competitively Chasing Convex Bodies
Sébastien Bubeck, Yin Tat Lee, Yuanzhi Li, and Mark Sellke (Microsoft Research, USA; University of Washington, USA; Stanford University, USA) Let F be a family of sets in some metric space. In the F-chasing problem, an online algorithm observes a request sequence of sets in F and responds (online) by giving a sequence of points in these sets. The movement cost is the distance between consecutive such points. The competitive ratio is the worst-case ratio (over request sequences) between the total movement of the online algorithm and the smallest movement one could have achieved by knowing in advance the request sequence. The family F is said to be chaseable if there exists an online algorithm with finite competitive ratio. In 1991, Linial and Friedman conjectured that the family of convex sets in Euclidean space is chaseable. We prove this conjecture. @InProceedings{STOC19p861, author = {Sébastien Bubeck and Yin Tat Lee and Yuanzhi Li and Mark Sellke}, title = {Competitively Chasing Convex Bodies}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {861--868}, doi = {10.1145/3313276.3316314}, year = {2019}, } Publisher's Version 

Limaye, Nutan 
STOC '19: "A Fixed-Depth Size-Hierarchy ..."
A Fixed-Depth Size-Hierarchy Theorem for AC^{0}[⊕] via the Coin Problem
Nutan Limaye, Karteek Sreenivasaiah, Srikanth Srinivasan, Utkarsh Tripathi, and S. Venkitesh (IIT Bombay, India; IIT Hyderabad, India) In this work we prove the first fixed-depth size-hierarchy theorem for uniform AC^{0}[⊕]. In particular, we show that for any fixed d, the classes C_{d,k} of functions that have uniform AC^{0}[⊕] formulas of depth d and size n^{k} form an infinite hierarchy. We show this by exhibiting the first class of explicit functions where we have nearly (up to a polynomial factor) matching upper and lower bounds for the class of AC^{0}[⊕] formulas. The explicit functions are derived from the δ-Coin Problem, which is the computational problem of distinguishing between coins that are heads with probability (1+δ)/2 or (1−δ)/2, where δ is a parameter tending to 0. We study the complexity of this problem and make progress on both the upper bound and lower bound fronts. Upper bounds. For any constant d ≥ 2, we show that there are explicit monotone AC^{0} formulas (i.e., made up of AND and OR gates only) solving the δ-coin problem that have depth d, size exp(O(d(1/δ)^{1/(d−1)})), and sample complexity (i.e., number of inputs) poly(1/δ). This matches previous upper bounds of O’Donnell and Wimmer (ICALP 2007) and Amano (ICALP 2009) in terms of size (which is optimal) and improves the sample complexity from exp(O(d(1/δ)^{1/(d−1)})) to poly(1/δ). Lower bounds. We show that the above upper bounds are nearly tight (in terms of size) even for the significantly stronger model of AC^{0}[⊕] formulas (which are also allowed NOT and Parity gates): formally, we show that any AC^{0}[⊕] formula solving the δ-coin problem must have size exp(Ω(d(1/δ)^{1/(d−1)})). This strengthens a result of Shaltiel and Viola (SICOMP 2010), who prove an exp(Ω((1/δ)^{1/(d+2)})) lower bound for AC^{0}[⊕], and a lower bound of exp(Ω((1/δ)^{1/(d−1)})) shown by Cohen, Ganor and Raz (APPROX-RANDOM 2014) for the class AC^{0}. 
The upper bound is a derandomization involving an application of Janson’s inequality and classical combinatorial designs. The lower bound involves proving an optimal degree lower bound for polynomials over F_{2} solving the δ-coin problem. @InProceedings{STOC19p442, author = {Nutan Limaye and Karteek Sreenivasaiah and Srikanth Srinivasan and Utkarsh Tripathi and S. Venkitesh}, title = {A Fixed-Depth Size-Hierarchy Theorem for AC<sup>0</sup>[⊕] via the Coin Problem}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {442--453}, doi = {10.1145/3313276.3316339}, year = {2019}, } Publisher's Version 
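The δ-coin problem is easy to state computationally: with enough samples, a simple majority vote distinguishes the two coins, and poly(1/δ) samples suffice. The following Python sketch (a brute-force baseline of ours; it does not model the small constant-depth formulas that are the paper's subject) estimates the success probability of majority:

```python
import random

def majority_guess(samples):
    """Guess which coin generated the 0/1 samples: heads-probability
    (1+delta)/2 (guess +1) vs (1-delta)/2 (guess -1), by majority vote."""
    return 1 if sum(samples) * 2 > len(samples) else -1

def success_rate(delta, m, trials=2000, seed=1):
    """Monte Carlo estimate of majority's success probability on the
    delta-coin problem with m samples per trial (m odd avoids ties)."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(trials):
        sign = rng.choice([1, -1])
        bias = (1 + sign * delta) / 2
        samples = [1 if rng.random() < bias else 0 for _ in range(m)]
        ok += (majority_guess(samples) == sign)
    return ok / trials
```

By a Chernoff bound, roughly 1/δ² samples already push the success probability close to 1, which is why the interesting question is the formula size rather than the sample count.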

Linhares, André 
STOC '19: "Approximation Algorithms for ..."
Approximation Algorithms for Distributionally-Robust Stochastic Optimization with Black-Box Distributions
André Linhares and Chaitanya Swamy (University of Waterloo, Canada) Two-stage stochastic optimization is a widely used framework for modeling uncertainty, where we have a probability distribution over possible realizations of the data, called scenarios, and decisions are taken in two stages: we make first-stage decisions knowing only the underlying distribution and before a scenario is realized, and may take additional second-stage recourse actions after a scenario is realized. The goal is typically to minimize the total expected cost. A common criticism levied at this model is that the underlying probability distribution is itself often imprecise! To address this, an approach that is quite versatile and has gained popularity in the stochastic-optimization literature is the distributionally robust 2-stage model: given a collection D of probability distributions, our goal now is to minimize the maximum expected total cost with respect to a distribution in D. We provide a framework for designing approximation algorithms in such settings when the collection D is a ball around a central distribution and the central distribution is accessed only via a sampling black box. We first show that one can utilize the sample average approximation (SAA) method—solve the distributionally robust problem with an empirical estimate of the central distribution—to reduce the problem to the case where the central distribution has polynomial-size support. Complementing this, we show how to approximately solve a fractional relaxation of the SAA (i.e., polynomial-scenario central-distribution) problem. Unlike in 2-stage stochastic or robust optimization, this turns out to be quite challenging. 
We utilize the ellipsoid method in conjunction with several new ideas to show that this problem can be approximately solved provided that we have an (approximation) algorithm for a certain max-min problem that is akin to, and generalizes, the k-max-min problem—find the worst-case scenario consisting of at most k elements—encountered in 2-stage robust optimization. We obtain such a procedure for various discrete-optimization problems; by complementing this via LP-rounding algorithms that provide local (i.e., per-scenario) approximation guarantees, we obtain the first approximation algorithms for the distributionally robust versions of a variety of discrete-optimization problems including set cover, vertex cover, edge cover, facility location, and Steiner tree, with guarantees that are, except for set cover, within O(1) factors of the guarantees known for the deterministic version of the problem. @InProceedings{STOC19p768, author = {André Linhares and Chaitanya Swamy}, title = {Approximation Algorithms for Distributionally-Robust Stochastic Optimization with Black-Box Distributions}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {768--779}, doi = {10.1145/3313276.3316391}, year = {2019}, } Publisher's Version 
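For the special case of an L1 (total-variation-style) ball around an explicit empirical distribution with known scenario costs, the inner maximization over distributions has a simple greedy solution. A hypothetical Python sketch (our simplification of the "ball around a central distribution" setting; the paper handles black-box central distributions and full two-stage problems):

```python
def worst_case_expectation(costs, probs, radius):
    """Worst-case expected cost over all distributions q with
    ||q - probs||_1 <= radius: the adversary moves up to radius/2
    probability mass from the cheapest scenarios onto the single
    costliest one, which is optimal for an L1 ball since total mass
    added equals total mass removed."""
    order = sorted(range(len(costs)), key=lambda i: costs[i])
    q = list(probs)
    budget = radius / 2              # mass the adversary may relocate
    top = order[-1]                  # costliest scenario
    for i in order[:-1]:
        take = min(q[i], budget)
        q[i] -= take
        q[top] += take
        budget -= take
        if budget <= 0:
            break
    return sum(c * p for c, p in zip(costs, q))
```

With radius 0 this is the ordinary SAA expectation; growing the radius interpolates toward the worst single scenario, which is the robustness/conservatism trade-off the model captures.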

Liu, Jingbo 
STOC '19: "Communication Complexity of ..."
Communication Complexity of Estimating Correlations
Uri Hadar, Jingbo Liu, Yury Polyanskiy, and Ofer Shayevitz (Tel Aviv University, Israel; Massachusetts Institute of Technology, USA) We characterize the communication complexity of the following distributed estimation problem. Alice and Bob observe infinitely many iid copies of ρ-correlated unit-variance (Gaussian or ±1 binary) random variables, with unknown ρ ∈ [−1,1]. By interactively exchanging k bits, Bob wants to produce an estimate ρ̂ of ρ. We show that the best possible performance (optimized over interaction protocol Π and estimator ρ̂) satisfies inf_{Π,ρ̂} sup_{ρ} E[(ρ̂−ρ)^{2}] = k^{−1}(1/2 ln2 + o(1)). Curiously, the number of samples in our achievability scheme is exponential in k; by contrast, a naive scheme exchanging k samples achieves the same Ω(1/k) rate but with a suboptimal prefactor. Our protocol achieving optimal performance is one-way (non-interactive). We also prove the Ω(1/k) bound even when ρ is restricted to any small open subinterval of [−1,1] (i.e., a local minimax lower bound). Our proof techniques rely on symmetric strong data-processing inequalities and various tensorization techniques from information-theoretic interactive common-randomness extraction. Our results also imply an Ω(n) lower bound on the information complexity of the Gap-Hamming problem, for which we show a direct information-theoretic proof. @InProceedings{STOC19p792, author = {Uri Hadar and Jingbo Liu and Yury Polyanskiy and Ofer Shayevitz}, title = {Communication Complexity of Estimating Correlations}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {792--803}, doi = {10.1145/3313276.3316332}, year = {2019}, } Publisher's Version 
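The naive scheme mentioned in the abstract is easy to simulate for ±1 variables: Alice sends k of her samples, and Bob averages the products with his own correlated samples, giving mean squared error on the order of 1/k. A Python sketch (our illustration; function names are ours):

```python
import random

def naive_protocol_mse(rho, k, trials=4000, seed=2):
    """Simulate the naive one-way protocol: Alice sends k of her +/-1
    samples (k bits); Bob multiplies each with his correlated sample
    and averages to estimate rho.  Since E[ab] = rho for this pairing,
    the estimator is unbiased with variance (1 - rho^2)/k, i.e. the
    O(1/k) rate, though with the suboptimal prefactor the paper notes."""
    rng = random.Random(seed)
    se = 0.0
    for _ in range(trials):
        prods = []
        for _ in range(k):
            a = rng.choice([1, -1])
            # b agrees with a with probability (1 + rho) / 2
            b = a if rng.random() < (1 + rho) / 2 else -a
            prods.append(a * b)
        est = sum(prods) / k
        se += (est - rho) ** 2
    return se / trials
```

For ρ = 0.3 and k = 50, the simulated mean squared error concentrates near (1 − 0.09)/50 ≈ 0.018, and quadrupling k roughly quarters the error.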

Liu, Jingcheng 
STOC '19: "Private Selection from Private ..."
Private Selection from Private Candidates
Jingcheng Liu and Kunal Talwar (University of California at Berkeley, USA; Google Brain, USA) Differentially private algorithms often need to select the best amongst many candidate options. Classical works on this selection problem require that the candidates’ goodness, measured as a real-valued score function, does not change by much when one person’s data changes. In many applications such as hyperparameter optimization, this stability assumption is much too strong. In this work, we consider the selection problem under a much weaker stability assumption on the candidates, namely that the score functions are differentially private. Under this assumption, we present algorithms that are near-optimal along the three relevant dimensions: privacy, utility and computational efficiency. Our result can be seen as a generalization of the exponential mechanism and its existing generalizations. We also develop an online version of our algorithm, which can be seen as a generalization of the sparse vector technique to this weaker stability assumption. We show how our results imply better algorithms for hyperparameter selection in differentially private machine learning, as well as for adaptive data analysis. @InProceedings{STOC19p298, author = {Jingcheng Liu and Kunal Talwar}, title = {Private Selection from Private Candidates}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {298--309}, doi = {10.1145/3313276.3316377}, year = {2019}, } Publisher's Version 
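The classical baseline that this work generalizes, the exponential mechanism, fits in a few lines: select candidate i with probability proportional to exp(ε·score_i/(2·sensitivity)). A Python sketch (the standard mechanism, not the paper's new algorithm; names are ours):

```python
import math
import random

def exponential_mechanism(scores, epsilon, sensitivity=1.0, rng=None):
    """Classical exponential mechanism: pick index i with probability
    proportional to exp(epsilon * scores[i] / (2 * sensitivity)).
    Assumes each score has sensitivity at most `sensitivity` -- the
    bounded-sensitivity assumption that the paper replaces with the
    weaker assumption that the score functions are themselves DP."""
    rng = rng or random.Random(0)
    mx = max(scores)  # subtract the max for numerical stability
    weights = [math.exp(epsilon * (s - mx) / (2 * sensitivity))
               for s in scores]
    r = rng.random() * sum(weights)
    for i, w in enumerate(weights):
        r -= w
        if r <= 0:
            return i
    return len(scores) - 1
```

With large ε the mechanism almost always returns the argmax; as ε shrinks, the output distribution flattens toward uniform, which is the privacy/utility trade-off.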

Liu, Kuikui 
STOC '19: "LogConcave Polynomials II: ..."
Log-Concave Polynomials II: High-Dimensional Walks and an FPRAS for Counting Bases of a Matroid
Nima Anari, Kuikui Liu, Shayan Oveis Gharan, and Cynthia Vinzant (Stanford University, USA; University of Washington, USA; North Carolina State University, USA) We design an FPRAS to count the number of bases of any matroid given by an independent set oracle, and to estimate the partition function of the random cluster model of any matroid in the regime where 0<q<1. Consequently, we can sample random spanning forests in a graph and estimate the reliability polynomial of any matroid. We also prove the thirty-year-old conjecture of Mihail and Vazirani that the bases exchange graph of any matroid has edge expansion at least 1. Our algorithm and proof build on the recent results of Dinur, Kaufman, Mass and Oppenheim, who show that a high-dimensional walk on a weighted simplicial complex mixes rapidly if, for every link of the complex, the corresponding localized random walk on the 1-skeleton is a strong spectral expander. One of our key observations is that a weighted simplicial complex X is a 0-local spectral expander if and only if a naturally associated generating polynomial p_{X} is strongly log-concave. More generally, to every pure simplicial complex X with positive weights on its maximal faces, we can associate a multi-affine homogeneous polynomial p_{X} such that the eigenvalues of the localized random walks on X correspond to the eigenvalues of the Hessian of derivatives of p_{X}. @InProceedings{STOC19p1, author = {Nima Anari and Kuikui Liu and Shayan Oveis Gharan and Cynthia Vinzant}, title = {Log-Concave Polynomials II: High-Dimensional Walks and an FPRAS for Counting Bases of a Matroid}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1--12}, doi = {10.1145/3313276.3316385}, year = {2019}, }

Livni, Roi 
STOC '19: "Private PAC Learning Implies ..."
Private PAC Learning Implies Finite Littlestone Dimension
Noga Alon, Roi Livni, Maryanthe Malliaris, and Shay Moran (Princeton University, USA; Tel Aviv University, Israel; University of Chicago, USA) We show that every approximately differentially private learning algorithm (possibly improper) for a class H with Littlestone dimension d requires Ω(log^{*}(d)) examples. As a corollary, it follows that the class of thresholds over ℕ cannot be learned in a private manner; this resolves open questions due to [Bun et al. 2015] and [Feldman and Xiao, 2015]. We leave as an open question whether every class with a finite Littlestone dimension can be learned by an approximately differentially private algorithm. @InProceedings{STOC19p852, author = {Noga Alon and Roi Livni and Maryanthe Malliaris and Shay Moran}, title = {Private PAC Learning Implies Finite Littlestone Dimension}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {852--860}, doi = {10.1145/3313276.3316312}, year = {2019}, }
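To give a sense of how slowly the Ω(log^{*}(d)) sample bound in the abstract above grows, here is a minimal Python sketch of the iterated logarithm (our own illustration; the function name is not from the paper):

```python
import math

def log_star(x):
    # Iterated logarithm: the number of times log2 must be applied
    # before the value drops to at most 1. Grows extremely slowly:
    # log*(2^65536) is only 5.
    n = 0
    while x > 1:
        x = math.log2(x)
        n += 1
    return n
```

Even astronomically large Littlestone dimensions d yield single-digit values of log*(d), which is what makes the lower bound compatible with learnability while still ruling out privately learning thresholds over the infinite domain ℕ.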

Lombardi, Alex 
STOC '19: "FiatShamir: From Practice ..."
Fiat-Shamir: From Practice to Theory
Ran Canetti, Yilei Chen, Justin Holmgren, Alex Lombardi, Guy N. Rothblum, Ron D. Rothblum, and Daniel Wichs (Boston University, USA; Tel Aviv University, Israel; Visa Research, USA; Princeton University, USA; Massachusetts Institute of Technology, USA; Weizmann Institute of Science, Israel; Technion, Israel; Northeastern University, USA) We give new instantiations of the Fiat-Shamir transform using explicit, efficiently computable hash functions. We improve over prior work by reducing the security of these protocols to qualitatively simpler and weaker computational hardness assumptions. As a consequence of our framework, we obtain the following concrete results. 1) There exists a succinct publicly verifiable non-interactive argument system for log-space uniform computations, under the assumption that any one of a broad class of fully homomorphic encryption (FHE) schemes has almost optimal security against polynomial-time adversaries. The class includes all FHE schemes in the literature that are based on the learning with errors (LWE) problem. 2) There exists a non-interactive zero-knowledge argument system for NP in the common reference string model, under either of the following two assumptions: (i) almost optimal hardness of search-LWE against polynomial-time adversaries, or (ii) the existence of a circular-secure FHE scheme with a standard (polynomial time, negligible advantage) level of security. 3) The classic quadratic residuosity protocol of [Goldwasser, Micali, and Rackoff, SICOMP ’89] is not zero knowledge when repeated in parallel, under any of the hardness assumptions above. @InProceedings{STOC19p1082, author = {Ran Canetti and Yilei Chen and Justin Holmgren and Alex Lombardi and Guy N. Rothblum and Ron D. Rothblum and Daniel Wichs}, title = {Fiat-Shamir: From Practice to Theory}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1082--1090}, doi = {10.1145/3313276.3316380}, year = {2019}, }

Lovett, Shachar 
STOC '19: "DNF Sparsification beyond ..."
DNF Sparsification beyond Sunflowers
Shachar Lovett and Jiapeng Zhang (University of California at San Diego, USA) There are two natural complexity measures associated with DNFs: their size, which is the number of clauses; and their width, which is the maximal number of variables in a clause. It is a folklore result that DNFs of small size can be approximated by DNFs of small width (logarithmic in the size). The other direction is much less clear. Gopalan, Meka and Reingold [Computational Complexity 2013] showed that the other direction – DNF sparsification – holds as well: any DNF of width w can be approximated to within error ε by a DNF of size (w log(1/ε))^{O(w)}. Our main interest in this work is the dependence on the width w. The same dependence of w^{w} appears in several other open problems in combinatorics and complexity, such as the Erdős–Rado sunflower conjecture and Mansour’s conjecture. In fact, there are deep connections between these three problems. Our main result is DNF compression with an improved dependence on the width, which overcomes the w^{w} barrier. Concretely, we show that any DNF of width w can be approximated to within error ε by a DNF of size (1/ε)^{O(w)}. The proof centers around a new object which we call the DNF index function. Given a DNF, the DNF index function outputs for an input the first clause that it satisfies (if one exists). Our proof has two parts: a combinatorial part, where we exhibit a switching lemma for the DNF index function; and an analytic part, where we use the switching lemma to bound the noise sensitivity of the DNF index function, and then use it to obtain our DNF compression result. @InProceedings{STOC19p454, author = {Shachar Lovett and Jiapeng Zhang}, title = {DNF Sparsification beyond Sunflowers}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {454--460}, doi = {10.1145/3313276.3316323}, year = {2019}, }
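The DNF index function described above is easy to state concretely. Here is a minimal Python sketch of it (our own illustration of the definition; the clause encoding is ours):

```python
# A clause is a dict {variable_index: required_boolean}; a DNF is an
# ordered list of clauses.
def dnf_index(dnf, x):
    # DNF index function: return the index of the first clause that
    # assignment x satisfies, or None if no clause is satisfied.
    for i, clause in enumerate(dnf):
        if all(x[v] == want for v, want in clause.items()):
            return i
    return None

# Width-2 DNF over 3 variables: (x0 AND NOT x1) OR (x1 AND x2).
dnf = [{0: True, 1: False}, {1: True, 2: True}]
```

Note that the index function carries strictly more information than the DNF's Boolean value (which is simply whether the index is not None); the paper's switching lemma and noise-sensitivity bound are proved for this richer object.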

Low, Guang Hao 
STOC '19: "Quantum Singular Value Transformation ..."
Quantum Singular Value Transformation and Beyond: Exponential Improvements for Quantum Matrix Arithmetics
András Gilyén, Yuan Su, Guang Hao Low, and Nathan Wiebe (CWI, Netherlands; University of Amsterdam, Netherlands; University of Maryland, USA; Microsoft Research, USA) An n-qubit quantum circuit performs a unitary operation on an exponentially large, 2^{n}-dimensional, Hilbert space, which is a major source of quantum speedups. We develop a new “quantum singular value transformation” algorithm that can directly harness the advantages of exponential dimensionality by applying polynomial transformations to the singular values of a block of a unitary operator. The transformations are realized by quantum circuits with a very simple structure – typically using only a constant number of ancilla qubits – leading to optimal algorithms with appealing constant factors. We show that our framework allows describing many quantum algorithms on a high level, and enables remarkably concise proofs for many prominent quantum algorithms, ranging from optimal Hamiltonian simulation to various quantum machine learning applications. We also devise a new singular vector transformation algorithm, describe how to exponentially improve the complexity of implementing fractional queries to unitaries with a gapped spectrum, and show how to efficiently implement principal component regression. Finally, we also prove a quantum lower bound on spectral transformations. @InProceedings{STOC19p193, author = {András Gilyén and Yuan Su and Guang Hao Low and Nathan Wiebe}, title = {Quantum Singular Value Transformation and Beyond: Exponential Improvements for Quantum Matrix Arithmetics}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {193--204}, doi = {10.1145/3313276.3316366}, year = {2019}, } STOC '19: "Hamiltonian Simulation with ..."
Hamiltonian Simulation with Nearly Optimal Dependence on Spectral Norm Guang Hao Low (Microsoft Research, USA) We present a quantum algorithm for approximating the real time evolution e^{−iHt} of an arbitrary d-sparse Hamiltonian to error є, given black-box access to the positions and b-bit values of its nonzero matrix entries. The query complexity of our algorithm is O((t√d‖H‖_{1→2})^{1+o(1)}/є^{o(1)}) with respect to the largest Euclidean row norm ‖H‖_{1→2}, which is shown to be optimal up to subpolynomial factors through a matching lower bound, and it uses a factor O(b) more gates. This provides a polynomial speedup in sparsity for the common case where the spectral norm is known, and generalizes previous approaches which achieve optimal scaling, but with respect to more restrictive parameters. By exploiting knowledge of the spectral norm, our algorithm solves the black-box unitary implementation problem – O(d^{1/2+o(1)}) queries suffice to approximate any d-sparse unitary in the black-box setting, which matches the quantum search lower bound of Ω(√d) queries and improves upon prior art [Berry and Childs, QIP 2010] of Õ(d^{2/3}) queries. Combined with known techniques, we also solve systems of sparse linear equations with condition number κ using O((κ√d)^{1+o(1)}/є^{o(1)}) queries, which is a quadratic improvement in sparsity. @InProceedings{STOC19p491, author = {Guang Hao Low}, title = {Hamiltonian Simulation with Nearly Optimal Dependence on Spectral Norm}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {491--502}, doi = {10.1145/3313276.3316386}, year = {2019}, }

Lu, Pinyan 
STOC '19: "Tight Approximation Ratio ..."
Tight Approximation Ratio of Anonymous Pricing
Yaonan Jin, Pinyan Lu, Qi Qi, Zhihao Gavin Tang, and Tao Xiao (Columbia University, USA; Shanghai University of Finance and Economics, China; Hong Kong University of Science and Technology, China; Shanghai Jiao Tong University, China) This paper considers two canonical Bayesian mechanism design settings. In the single-item setting, the tight approximation ratio of Anonymous Pricing is obtained: (1) compared to Myerson Auction, Anonymous Pricing always generates at least a 1/2.62-fraction of the revenue; (2) there is a matching lower-bound instance. In the unit-demand single-buyer setting, the tight approximation ratio between the simplest deterministic mechanism and the optimal deterministic mechanism is attained: in terms of revenue, (1) Uniform Pricing admits a 2.62-approximation to Item Pricing; (2) a matching lower-bound instance is also presented. These results answer two open questions asked by Alaei et al. (FOCS’15) and Cai and Daskalakis (GEB’15). As an implication, in the single-item setting, the approximation ratio of Second-Price Auction with Anonymous Reserve (Hartline and Roughgarden, EC’09) is improved to 2.62, which breaks the best known upper bound of e ≈ 2.72. @InProceedings{STOC19p674, author = {Yaonan Jin and Pinyan Lu and Qi Qi and Zhihao Gavin Tang and Tao Xiao}, title = {Tight Approximation Ratio of Anonymous Pricing}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {674--685}, doi = {10.1145/3313276.3316331}, year = {2019}, }

Makarychev, Konstantin 
STOC '19: "Performance of JohnsonLindenstrauss ..."
Performance of Johnson-Lindenstrauss Transform for k-Means and k-Medians Clustering
Konstantin Makarychev, Yury Makarychev, and Ilya Razenshteyn (Northwestern University, USA; Toyota Technological Institute at Chicago, USA; Microsoft Research, USA) Consider an instance of Euclidean k-means or k-medians clustering. We show that the cost of the optimal solution is preserved up to a factor of (1+ε) under a projection onto a random O(log(k/ε)/ε^{2})-dimensional subspace. Further, the cost of every clustering is preserved within (1+ε). More generally, our result applies to any dimension reduction map satisfying a mild sub-Gaussian-tail condition. Our bound on the dimension is nearly optimal. Additionally, our result applies to Euclidean k-clustering with the distances raised to the p-th power for any constant p. For k-means, our result resolves an open problem posed by Cohen, Elder, Musco, Musco, and Persu (STOC 2015); for k-medians, it answers a question raised by Kannan. @InProceedings{STOC19p1027, author = {Konstantin Makarychev and Yury Makarychev and Ilya Razenshteyn}, title = {Performance of Johnson-Lindenstrauss Transform for <i>k</i>-Means and <i>k</i>-Medians Clustering}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1027--1038}, doi = {10.1145/3313276.3316350}, year = {2019}, }
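The phenomenon in the abstract above can be observed numerically: project points with a random Gaussian map and the cost of a fixed clustering barely changes. Here is a minimal pure-Python sketch (our own illustration, not the paper's construction or proof; the dimension choice is ad hoc rather than the theorem's O(log(k/ε)/ε²)):

```python
import math
import random

def random_projection(points, d_new, rng):
    # Random Gaussian map scaled by 1/sqrt(d_new), which preserves
    # squared Euclidean distances in expectation.
    d_old = len(points[0])
    G = [[rng.gauss(0, 1) / math.sqrt(d_new) for _ in range(d_old)]
         for _ in range(d_new)]
    return [[sum(g[j] * p[j] for j in range(d_old)) for g in G] for p in points]

def kmeans_cost(points, assignment, k):
    # Cost of a fixed clustering: sum of squared distances to cluster means.
    cost = 0.0
    d = len(points[0])
    for c in range(k):
        cluster = [p for p, a in zip(points, assignment) if a == c]
        if not cluster:
            continue
        mean = [sum(p[j] for p in cluster) / len(cluster) for j in range(d)]
        cost += sum(sum((p[j] - mean[j]) ** 2 for j in range(d)) for p in cluster)
    return cost

rng = random.Random(2)
pts = [[rng.gauss(0, 1) for _ in range(50)] for _ in range(40)]
assign = [i % 2 for i in range(40)]
orig = kmeans_cost(pts, assign, 2)
proj = kmeans_cost(random_projection(pts, 25, rng), assign, 2)
ratio = proj / orig
```

Because cluster means are linear in the points, projecting commutes with taking means, so the projected cost is a randomly distorted version of the original; the theorem quantifies the distortion as (1+ε) for every clustering simultaneously.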

Makarychev, Yury 
STOC '19: "Performance of JohnsonLindenstrauss ..."
Performance of Johnson-Lindenstrauss Transform for k-Means and k-Medians Clustering
Konstantin Makarychev, Yury Makarychev, and Ilya Razenshteyn (Northwestern University, USA; Toyota Technological Institute at Chicago, USA; Microsoft Research, USA) Consider an instance of Euclidean k-means or k-medians clustering. We show that the cost of the optimal solution is preserved up to a factor of (1+ε) under a projection onto a random O(log(k/ε)/ε^{2})-dimensional subspace. Further, the cost of every clustering is preserved within (1+ε). More generally, our result applies to any dimension reduction map satisfying a mild sub-Gaussian-tail condition. Our bound on the dimension is nearly optimal. Additionally, our result applies to Euclidean k-clustering with the distances raised to the p-th power for any constant p. For k-means, our result resolves an open problem posed by Cohen, Elder, Musco, Musco, and Persu (STOC 2015); for k-medians, it answers a question raised by Kannan. @InProceedings{STOC19p1027, author = {Konstantin Makarychev and Yury Makarychev and Ilya Razenshteyn}, title = {Performance of Johnson-Lindenstrauss Transform for <i>k</i>-Means and <i>k</i>-Medians Clustering}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1027--1038}, doi = {10.1145/3313276.3316350}, year = {2019}, }

Malliaris, Maryanthe 
STOC '19: "Private PAC Learning Implies ..."
Private PAC Learning Implies Finite Littlestone Dimension
Noga Alon, Roi Livni, Maryanthe Malliaris, and Shay Moran (Princeton University, USA; Tel Aviv University, Israel; University of Chicago, USA) We show that every approximately differentially private learning algorithm (possibly improper) for a class H with Littlestone dimension d requires Ω(log^{*}(d)) examples. As a corollary, it follows that the class of thresholds over ℕ cannot be learned in a private manner; this resolves open questions due to [Bun et al. 2015] and [Feldman and Xiao, 2015]. We leave as an open question whether every class with a finite Littlestone dimension can be learned by an approximately differentially private algorithm. @InProceedings{STOC19p852, author = {Noga Alon and Roi Livni and Maryanthe Malliaris and Shay Moran}, title = {Private PAC Learning Implies Finite Littlestone Dimension}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {852--860}, doi = {10.1145/3313276.3316312}, year = {2019}, }

Mande, Nikhil S. 
STOC '19: "The LogApproximateRank Conjecture ..."
The Log-Approximate-Rank Conjecture Is False
Arkadev Chattopadhyay, Nikhil S. Mande, and Suhail Sherif (TIFR, India; Georgetown University, USA) We construct a simple and total XOR function F on 2n variables that has only O(√n) spectral norm, O(n^{2}) approximate rank and O(n^{2.5}) approximate nonnegative rank. We show it has polynomially large randomized bounded-error communication complexity of Ω(√n). This yields the first exponential gap between the logarithm of the approximate rank and randomized communication complexity for total functions. Thus F witnesses a refutation of the Log-Approximate-Rank Conjecture (LARC), which was posed by Lee and Shraibman as a very natural analogue for randomized communication of the still unresolved Log-Rank Conjecture for deterministic communication. The best known previous gap for any total function between the two measures is a recent 4th-power separation by Göös, Jayram, Pitassi and Watson. Additionally, our function F refutes Grolmusz’s Conjecture and a variant of the Log-Approximate-Nonnegative-Rank Conjecture, suggested recently by Kol, Moran, Shpilka and Yehudayoff, both of which are implied by the LARC. The complement of F has exponentially large approximate nonnegative rank. This answers a question of Lee and Kol et al., showing that approximate nonnegative rank can be exponentially larger than approximate rank. The function F also falsifies a conjecture about parity measures of Boolean functions made by Tsang, Wong, Xie and Zhang. The latter conjecture implied the Log-Rank Conjecture for XOR functions. We are pleased to note that shortly after we published our results, two independent groups of researchers, Anshu, Boddu and Touchette, and Sinha and de Wolf, used our function F to prove that the Quantum-Log-Rank Conjecture is also false by showing that F has Ω(n^{1/6}) quantum communication complexity. @InProceedings{STOC19p42, author = {Arkadev Chattopadhyay and Nikhil S. Mande and Suhail Sherif}, title = {The Log-Approximate-Rank Conjecture Is False}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {42--53}, doi = {10.1145/3313276.3316353}, year = {2019}, }

Mazowiecki, Filip 
STOC '19: "The Reachability Problem for ..."
The Reachability Problem for Petri Nets Is Not Elementary
Wojciech Czerwiński, Sławomir Lasota, Ranko Lazić, Jérôme Leroux, and Filip Mazowiecki (University of Warsaw, Poland; University of Warwick, UK; CNRS, France; University of Bordeaux, France) Petri nets, also known as vector addition systems, are a long-established model of concurrency with extensive applications in modelling and analysis of hardware, software and database systems, as well as chemical, biological and business processes. The central algorithmic problem for Petri nets is reachability: whether from the given initial configuration there exists a sequence of valid execution steps that reaches the given final configuration. The complexity of the problem has remained unsettled since the 1960s, and it is one of the most prominent open questions in the theory of verification. Decidability was proved by Mayr in his seminal STOC 1981 work, and the currently best published upper bound is the non-primitive-recursive, Ackermannian bound of Leroux and Schmitz from LICS 2019. We establish a non-elementary lower bound, i.e. that the reachability problem needs a tower of exponentials of time and space. Until this work, the best lower bound had been exponential space, due to Lipton in 1976. The new lower bound is a major breakthrough for several reasons. Firstly, it shows that the reachability problem is much harder than the coverability (i.e., state reachability) problem, which is also ubiquitous but has been known to be complete for exponential space since the late 1970s. Secondly, it implies that a plethora of problems from formal languages, logic, concurrent systems, process calculi and other areas, which are known to admit reductions from the Petri nets reachability problem, are also not elementary. Thirdly, it makes obsolete the currently best lower bounds for the reachability problems for two key extensions of Petri nets: with branching and with a pushdown stack.
At the heart of our proof is a novel gadget, the so-called factorial amplifier, which, assuming availability of counters that are zero-testable and bounded by k, guarantees to produce arbitrarily large pairs of values whose ratio is exactly the factorial of k. We also develop a novel construction that uses arbitrarily large pairs of values with ratio R to provide zero-testable counters that are bounded by R. Repeatedly composing the factorial amplifier with itself by means of this construction then enables us to compute in linear time Petri nets that simulate Minsky machines whose counters are bounded by a tower of exponentials, which yields the non-elementary lower bound. By refining this scheme further, we in fact establish hardness for h-exponential space already for Petri nets with h + 13 counters. @InProceedings{STOC19p24, author = {Wojciech Czerwiński and Sławomir Lasota and Ranko Lazić and Jérôme Leroux and Filip Mazowiecki}, title = {The Reachability Problem for Petri Nets Is Not Elementary}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {24--33}, doi = {10.1145/3313276.3316369}, year = {2019}, }
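For readers unfamiliar with the model, here is a minimal Python sketch (our own illustration, not from the paper) that brute-forces reachability in a tiny vector addition system by bounded breadth-first search; the paper's result is precisely that no algorithm for the general, unbounded problem can run in elementary time or space:

```python
from collections import deque

def vas_reachable(transitions, start, target, bound):
    # BFS over configurations of a vector addition system, exploring only
    # configurations whose coordinates stay in [0, bound]. Counters must
    # remain non-negative, as in Petri net semantics.
    seen = {start}
    queue = deque([start])
    while queue:
        cfg = queue.popleft()
        if cfg == target:
            return True
        for t in transitions:
            nxt = tuple(c + d for c, d in zip(cfg, t))
            if all(0 <= c <= bound for c in nxt) and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Two counters, one transition: move a token from counter 0 to counter 1.
ts = [(-1, 1)]
```

The artificial coordinate bound is what makes this search terminate; the difficulty of the real problem is that reachable configurations can be unboundedly (indeed, non-elementarily) large.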

Mchedlidze, Tamara 
STOC '19: "Planar Graphs of Bounded Degree ..."
Planar Graphs of Bounded Degree Have Bounded Queue Number
Michael Bekos, Henry Förster, Martin Gronemann, Tamara Mchedlidze, Fabrizio Montecchiani, Chrysanthi Raftopoulou, and Torsten Ueckerdt (University of Tübingen, Germany; University of Cologne, Germany; KIT, Germany; University of Perugia, Italy; National Technical University of Athens, Greece) A queue layout of a graph consists of a linear order of its vertices and a partition of its edges into queues, so that no two independent edges of the same queue are nested. The queue number of a graph is the minimum number of queues required by any of its queue layouts. A long-standing conjecture by Heath, Leighton and Rosenberg states that the queue number of planar graphs is bounded. This conjecture has been partially settled in the positive for several subfamilies of planar graphs (most of which have bounded treewidth). In this paper, we make a further important step towards settling this conjecture. We prove that planar graphs of bounded degree (which may have unbounded treewidth) have bounded queue number. A notable implication of this result is that every planar graph of bounded degree admits a three-dimensional straight-line grid drawing in linear volume. Further implications are that every planar graph of bounded degree has bounded track number, and that every k-planar graph (i.e., every graph that can be drawn in the plane with at most k crossings per edge) of bounded degree has bounded queue number. @InProceedings{STOC19p176, author = {Michael Bekos and Henry Förster and Martin Gronemann and Tamara Mchedlidze and Fabrizio Montecchiani and Chrysanthi Raftopoulou and Torsten Ueckerdt}, title = {Planar Graphs of Bounded Degree Have Bounded Queue Number}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {176--184}, doi = {10.1145/3313276.3316324}, year = {2019}, }
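The no-nesting condition that defines a single queue is simple to check directly. Here is a minimal Python sketch of that check (our own illustration of the definition, not code from the paper):

```python
def is_valid_queue(order, edges):
    # A set of edges forms one queue with respect to the vertex order iff
    # no edge is strictly nested inside another.
    pos = {v: i for i, v in enumerate(order)}
    spans = [tuple(sorted((pos[u], pos[v]))) for u, v in edges]
    for a in spans:
        for b in spans:
            if a[0] < b[0] and b[1] < a[1]:  # b strictly nested inside a
                return False
    return True
```

For the order 0,1,2,3, the crossing edges (0,2) and (1,3) may share a queue, while the nested pair (0,3) and (1,2) may not; the queue number counts how many such queues are needed to cover all edges.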

McKay, Dylan M. 
STOC '19: "Weak Lower Bounds on ResourceBounded ..."
Weak Lower Bounds on Resource-Bounded Compression Imply Strong Separations of Complexity Classes
Dylan M. McKay, Cody D. Murray, and R. Ryan Williams (Massachusetts Institute of Technology, USA; University of California at Berkeley, USA) The Minimum Circuit Size Problem (MCSP) asks to determine the minimum size of a circuit computing a given truth table. MCSP is a natural and powerful string compression problem using bounded-size circuits. Recently, Oliveira and Santhanam [FOCS 2018] and Oliveira, Pich, and Santhanam [ECCC 2018] demonstrated a “hardness magnification” phenomenon for MCSP in restricted settings. Letting MCSP[s(n)] be the problem of deciding if a truth table of length 2^{n} has circuit complexity at most s(n), they proved that small (fixed-polynomial) average-case circuit/formula lower bounds for MCSP[2^{√n}], or lower bounds for approximating MCSP[2^{o(n)}], would imply major separations such as NP ⊄ BPP and NP ⊄ P/poly. We strengthen their results in several directions, obtaining magnification results from worst-case lower bounds on exactly computing the search version of generalizations of MCSP[s(n)], which also extend to time-bounded Kolmogorov complexity. In particular, we show that search-MCSP[s(n)] (where we must output an s(n)-size circuit when it exists) admits extremely efficient AC^{0} circuits and streaming algorithms using Σ_{3} SAT oracle gates of small fan-in (related to the size s(n) we want to test). For A : {0,1}^{⋆} → {0,1}, let search-MCSP^{A}[s(n)] be the problem: given a truth table T of size N=2^{n}, output a Boolean circuit for T of size at most s(n) with AND, OR, NOT, and A-oracle gates (or report that no such circuit exists). Some consequences of our results are: (1) For reasonable s(n) ≥ n and A ∈ PH, if search-MCSP^{A}[s(n)] does not have a 1-pass deterministic poly(s(n))-space streaming algorithm with poly(s(n)) update time, then P ≠ NP.
For example, proving that it is impossible to synthesize SAT-oracle circuits of size 2^{n/log^{⋆} n} with a streaming algorithm on truth tables of length N=2^{n} using N^{ε} update time and N^{ε} space on length-N inputs (for some ε > 0) would already separate P and NP. Note that some extremely simple functions, such as EQUALITY of two strings, already satisfy such lower bounds. (2) If search-MCSP[n^{c}] lacks Õ(N)-size, Õ(1)-depth circuits for some c ≥ 1, then NP ⊄ P/poly. (3) If search-MCSP[s(n)] does not have N · poly(s(n))-size, O(logN)-depth circuits, then NP ⊄ NC^{1}. Note it is known that MCSP[2^{√n}] does not have formulas of N^{1.99} size [Hirahara and Santhanam, CCC 2017]. (4) If there is an ε > 0 such that for all c ≥ 1, search-MCSP[2^{n/c}] does not have N^{1+ε}-size, O(1/ε)-depth ACC^{0} circuits, then NP ⊄ ACC^{0}. Thus the amplification results of Allender and Koucký [JACM 2010] can extend to problems in NP and beyond. Furthermore, if we substitute ⊕P-, PP-, PSPACE-, or EXP-complete problems for the oracle A, we obtain separations for those corresponding complexity classes instead of NP. Analogues of the above results hold for time-bounded Kolmogorov complexity as well. @InProceedings{STOC19p1215, author = {Dylan M. McKay and Cody D. Murray and R. Ryan Williams}, title = {Weak Lower Bounds on Resource-Bounded Compression Imply Strong Separations of Complexity Classes}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1215--1225}, doi = {10.1145/3313276.3316396}, year = {2019}, }

McMillan, Audra 
STOC '19: "The Structure of Optimal Private ..."
The Structure of Optimal Private Tests for Simple Hypotheses
Clément L. Canonne, Gautam Kamath, Audra McMillan, Adam Smith, and Jonathan Ullman (Stanford University, USA; Simons Institute for the Theory of Computing, Berkeley, USA; Boston University, USA; Northeastern University, USA) Hypothesis testing plays a central role in statistical inference, and is used in many settings where privacy concerns are paramount. This work answers a basic question about privately testing simple hypotheses: given two distributions P and Q, and a privacy level ε, how many i.i.d. samples are needed to distinguish P from Q subject to ε-differential privacy, and what sort of tests have optimal sample complexity? Specifically, we characterize this sample complexity up to constant factors in terms of the structure of P and Q and the privacy level ε, and show that this sample complexity is achieved by a certain randomized and clamped variant of the log-likelihood ratio test. Our result is an analogue of the classical Neyman-Pearson lemma in the setting of private hypothesis testing. We also give an application of our result to private change-point detection. Our characterization applies more generally to hypothesis tests satisfying essentially any notion of algorithmic stability, which is known to imply strong generalization bounds in adaptive data analysis, and thus our results have applications even when privacy is not a primary concern. @InProceedings{STOC19p310, author = {Clément L. Canonne and Gautam Kamath and Audra McMillan and Adam Smith and Jonathan Ullman}, title = {The Structure of Optimal Private Tests for Simple Hypotheses}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {310--321}, doi = {10.1145/3313276.3316336}, year = {2019}, }
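To convey the flavor of a clamped, noised log-likelihood ratio test, here is a minimal Python sketch (our own simplified illustration using a Laplace-noise threshold test, not the paper's exact mechanism or its randomized clamping):

```python
import math
import random

def clamp(v, lo, hi):
    return max(lo, min(hi, v))

def private_llr_test(samples, p, q, epsilon, rng, clamp_bound=1.0):
    # Clamping bounds each sample's influence on the statistic by
    # 2*clamp_bound, so adding Laplace(2*clamp_bound/epsilon) noise makes
    # this threshold test epsilon-differentially private (sketch).
    stat = sum(clamp(math.log(p[x] / q[x]), -clamp_bound, clamp_bound)
               for x in samples)
    # Laplace noise via inverse-CDF sampling.
    u = rng.random() - 0.5
    noise = -(2 * clamp_bound / epsilon) * math.copysign(
        math.log(1 - 2 * abs(u)), u)
    return stat + noise > 0  # True => decide in favor of P

p = {0: 0.8, 1: 0.2}
q = {0: 0.2, 1: 0.8}
rng = random.Random(3)
samples_from_p = [0 if rng.random() < 0.8 else 1 for _ in range(200)]
decision = private_llr_test(samples_from_p, p, q, epsilon=1.0, rng=rng)
```

Without clamping, a single sample with an extreme likelihood ratio could dominate the statistic, forcing unboundedly large noise; the paper shows a variant of this clamp-and-noise idea is sample-optimal for every pair (P, Q).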

Meka, Raghu 
STOC '19: "Pseudorandom Generators for ..."
Pseudorandom Generators for Width-3 Branching Programs
Raghu Meka, Omer Reingold, and Avishay Tal (University of California at Los Angeles, USA; Stanford University, USA) We construct pseudorandom generators of seed length Õ(log(n) · log(1/є)) that є-fool ordered read-once branching programs (ROBPs) of width 3 and length n. For unordered ROBPs, we construct pseudorandom generators with seed length Õ(log(n) · poly(1/є)). This is the first improvement for pseudorandom generators fooling width-3 ROBPs since the work of Nisan [Combinatorica, 1992]. Our constructions are based on the “iterated milder restrictions” approach of Gopalan et al. [FOCS, 2012] (which further extends the Ajtai-Wigderson framework [FOCS, 1985]), combined with the INW generator [STOC, 1994] at the last step (as analyzed by Braverman et al. [SICOMP, 2014]). For the unordered case, we combine iterated milder restrictions with the generator of Chattopadhyay et al. [CCC, 2018]. Two conceptual ideas that play an important role in our analysis are: (1) a relabeling technique allowing us to analyze a relabeled version of the given branching program, which turns out to be much easier; (2) treating the number of colliding layers in a branching program as a progress measure and showing that it reduces significantly under pseudorandom restrictions. In addition, we achieve nearly optimal seed length Õ(log(n/є)) for the classes of: (1) read-once polynomials on n variables, (2) locally-monotone ROBPs of length n and width 3 (generalizing read-once CNFs and DNFs), and (3) constant-width ROBPs of length n having a layer of width 2 in every consecutive polylog(n) layers. @InProceedings{STOC19p626, author = {Raghu Meka and Omer Reingold and Avishay Tal}, title = {Pseudorandom Generators for Width-3 Branching Programs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {626--637}, doi = {10.1145/3313276.3316319}, year = {2019}, }
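For readers new to the model, here is a minimal Python sketch of evaluating an ordered read-once branching program (our own illustration of the model; for brevity the example program has width 2, computing parity, rather than the width 3 treated in the paper):

```python
def eval_robp(layers, x):
    # Evaluate an ordered ROBP: layers[i] maps the current state to a
    # pair (next_state_if_0, next_state_if_1) and reads bit x[i].
    # Accept iff the final state is 1.
    state = 0
    for layer, bit in zip(layers, x):
        state = layer[state][1 if bit else 0]
    return state == 1

# Width-2 ROBP of length 3 computing parity: each layer flips the state
# on input bit 1 and keeps it on input bit 0.
parity_layers = [[(0, 1), (1, 0)]] * 3
```

A pseudorandom generator є-fools this class if, for every such program, the acceptance probability under the generator's output differs from the truly uniform case by at most є; the seed length is what the paper improves.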

Moitra, Ankur 
STOC '19: "Spectral Methods from Tensor ..."
Spectral Methods from Tensor Networks
Ankur Moitra and Alexander S. Wein (Massachusetts Institute of Technology, USA; New York University, USA) A tensor network is a diagram that specifies a way to “multiply” a collection of tensors together to produce another tensor (or matrix). Many existing algorithms for tensor problems (such as tensor decomposition and tensor PCA), although they are not presented this way, can be viewed as spectral methods on matrices built from simple tensor networks. In this work we leverage the full power of this abstraction to design new algorithms for certain continuous tensor decomposition problems. An important and challenging family of tensor problems comes from orbit recovery, a class of inference problems involving group actions (inspired by applications such as cryo-electron microscopy). Orbit recovery problems over finite groups can often be solved via standard tensor methods. However, for infinite groups, no general algorithms are known. We give a new spectral algorithm based on tensor networks for one such problem: continuous multi-reference alignment over the infinite group SO(2). Our algorithm extends to the more general heterogeneous case. @InProceedings{STOC19p926, author = {Ankur Moitra and Alexander S. Wein}, title = {Spectral Methods from Tensor Networks}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {926--937}, doi = {10.1145/3313276.3316357}, year = {2019}, } STOC '19: "Beyond the Low-Degree Algorithm: ..." Beyond the Low-Degree Algorithm: Mixtures of Subcubes and Their Applications Sitan Chen and Ankur Moitra (Massachusetts Institute of Technology, USA) We introduce the problem of learning mixtures of k subcubes over {0,1}^{n}, which contains many classic learning theory problems as a special case (and is itself a special case of others). We give a surprising n^{O(log k)}-time learning algorithm based on higher-order multilinear moments.
It is not possible to learn the parameters because the same distribution can be represented by quite different models. Instead, we develop a framework for reasoning about how multilinear moments can pinpoint essential features of the mixture, like the number of components. We also give applications of our algorithm to learning decision trees with stochastic transitions (which also capture interesting scenarios where the transitions are deterministic but there are latent variables). Using our algorithm for learning mixtures of subcubes, we can approximate the Bayes optimal classifier within additive error є on k-leaf decision trees with at most s stochastic transitions on any root-to-leaf path in n^{O(s + log k)}·poly(1/є) time. In this stochastic setting, the classic n^{O(log k)}·poly(1/є)-time algorithms of Rivest, Blum, and Ehrenfeucht-Haussler for learning decision trees with zero stochastic transitions break down because they are fundamentally Occam algorithms. The low-degree algorithm of Linial-Mansour-Nisan is able to get a constant-factor approximation to the optimal error (again within an additive є) and runs in time n^{O(s + log(k/є))}. The quasipolynomial dependence on 1/є is inherent to the low-degree approach because the degree needs to grow as the target accuracy decreases, which is undesirable when є is small. In contrast, as we will show, mixtures of k subcubes are uniquely determined by their moments of order 2 log k and hence provide a useful abstraction for simultaneously achieving the polynomial dependence on 1/є of the classic Occam algorithms for decision trees and the flexibility of the low-degree algorithm in being able to accommodate stochastic transitions. Using our multilinear moment techniques, we also give the first improved upper and lower bounds since the work of Feldman-O'Donnell-Servedio for the related but harder problem of learning mixtures of binary product distributions. 
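The mixture-of-subcubes model itself is easy to state concretely. Below is a minimal Python sketch of sampling from such a mixture, with hypothetical patterns and mixing weights; it illustrates only the distribution class, not the paper's moment-based learning algorithm.

```python
import random

def sample_mixture_of_subcubes(patterns, weights, n, rng=random):
    """Draw one sample from a mixture of subcubes over {0,1}^n.

    Each component is a subcube given as a string over {'0','1','*'}:
    fixed coordinates are copied, '*' coordinates are uniform bits.
    (Illustrative model only; not the paper's algorithm.)
    """
    # Pick a component with probability proportional to its mixing weight.
    pattern = rng.choices(patterns, weights=weights, k=1)[0]
    assert len(pattern) == n
    return [int(c) if c in "01" else rng.randint(0, 1) for c in pattern]

# A mixture of k=2 subcubes over {0,1}^4: first two bits fixed per component.
patterns = ["11**", "00**"]
sample = sample_mixture_of_subcubes(patterns, [0.5, 0.5], 4)
print(sample)
```

Any sample necessarily agrees with one component on its fixed coordinates, which is what the multilinear moments of the distribution pick up on.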
@InProceedings{STOC19p869, author = {Sitan Chen and Ankur Moitra}, title = {Beyond the Low-Degree Algorithm: Mixtures of Subcubes and Their Applications}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {869--880}, doi = {10.1145/3313276.3316375}, year = {2019}, } Publisher's Version STOC '19: "Learning Restricted Boltzmann ..." Learning Restricted Boltzmann Machines via Influence Maximization Guy Bresler, Frederic Koehler, and Ankur Moitra (Massachusetts Institute of Technology, USA) Graphical models are a rich language for describing high-dimensional distributions in terms of their dependence structure. While there are algorithms with provable guarantees for learning undirected graphical models in a variety of settings, there has been much less progress in the important scenario when there are latent variables. Here we study Restricted Boltzmann Machines (or RBMs), which are a popular model with wide-ranging applications in dimensionality reduction, collaborative filtering, topic modeling, feature extraction and deep learning. The main message of our paper is a strong dichotomy in the feasibility of learning RBMs, depending on the nature of the interactions between variables: ferromagnetic models can be learned efficiently, while general models cannot. In particular, we give a simple greedy algorithm based on influence maximization to learn ferromagnetic RBMs with bounded degree. In fact, we learn a description of the distribution on the observed variables as a Markov Random Field. Our analysis is based on tools from mathematical physics that were developed to show the concavity of magnetization. Our algorithm extends straightforwardly to general ferromagnetic Ising models with latent variables. Conversely, we show that even for a constant number of latent variables with constant degree, without ferromagneticity the problem is as hard as sparse parity with noise. 
This hardness result is based on a sharp and surprising characterization of the representational power of bounded-degree RBMs: the distribution on their observed variables can simulate any bounded-order MRF. This result is of independent interest since RBMs are the building blocks of deep belief networks. @InProceedings{STOC19p828, author = {Guy Bresler and Frederic Koehler and Ankur Moitra}, title = {Learning Restricted Boltzmann Machines via Influence Maximization}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {828--839}, doi = {10.1145/3313276.3316372}, year = {2019}, } Publisher's Version 

Montecchiani, Fabrizio 
STOC '19: "Planar Graphs of Bounded Degree ..."
Planar Graphs of Bounded Degree Have Bounded Queue Number
Michael Bekos, Henry Förster, Martin Gronemann, Tamara Mchedlidze, Fabrizio Montecchiani, Chrysanthi Raftopoulou, and Torsten Ueckerdt (University of Tübingen, Germany; University of Cologne, Germany; KIT, Germany; University of Perugia, Italy; National Technical University of Athens, Greece) A queue layout of a graph consists of a linear order of its vertices and a partition of its edges into queues, so that no two independent edges of the same queue are nested. The queue number of a graph is the minimum number of queues required by any of its queue layouts. A long-standing conjecture by Heath, Leighton and Rosenberg states that the queue number of planar graphs is bounded. This conjecture has been partially settled in the positive for several subfamilies of planar graphs (most of which have bounded treewidth). In this paper, we make a further important step towards settling this conjecture. We prove that planar graphs of bounded degree (which may have unbounded treewidth) have bounded queue number. A notable implication of this result is that every planar graph of bounded degree admits a three-dimensional straight-line grid drawing in linear volume. Further implications are that every planar graph of bounded degree has bounded track number, and that every k-planar graph (i.e., every graph that can be drawn in the plane with at most k crossings per edge) of bounded degree has bounded queue number. @InProceedings{STOC19p176, author = {Michael Bekos and Henry Förster and Martin Gronemann and Tamara Mchedlidze and Fabrizio Montecchiani and Chrysanthi Raftopoulou and Torsten Ueckerdt}, title = {Planar Graphs of Bounded Degree Have Bounded Queue Number}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {176--184}, doi = {10.1145/3313276.3316324}, year = {2019}, } Publisher's Version 
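The nesting condition in the queue-layout definition above can be checked mechanically. A small Python sketch of such a checker (a hypothetical helper, assuming vertices are hashable and edges are given as pairs):

```python
from itertools import combinations

def is_valid_queue(order, edges):
    """Check that no two independent edges are nested under the given
    vertex order, i.e. that the edges could form a single queue.
    (Illustrative checker for the definition only.)
    """
    pos = {v: i for i, v in enumerate(order)}
    # Normalize each edge to (left position, right position).
    norm = [tuple(sorted((pos[u], pos[v]))) for u, v in edges]
    for (a, b), (c, d) in combinations(norm, 2):
        # Independent edges (no shared endpoint) must not nest.
        if len({a, b, c, d}) == 4 and (a < c < d < b or c < a < b < d):
            return False
    return True

# Crossing edges may share a queue; nested edges may not.
print(is_valid_queue([0, 1, 2, 3], [(0, 2), (1, 3)]))  # True
print(is_valid_queue([0, 1, 2, 3], [(0, 3), (1, 2)]))  # False
```

The contrast with stack (book) layouts is exactly the example above: a queue forbids nesting but allows crossings, while a stack forbids crossings but allows nesting.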

Moran, Shay 
STOC '19: "Private PAC Learning Implies ..."
Private PAC Learning Implies Finite Littlestone Dimension
Noga Alon, Roi Livni, Maryanthe Malliaris, and Shay Moran (Princeton University, USA; Tel Aviv University, Israel; University of Chicago, USA) We show that every approximately differentially private learning algorithm (possibly improper) for a class H with Littlestone dimension d requires Ω(log^{*}(d)) examples. As a corollary it follows that the class of thresholds over ℕ cannot be learned in a private manner; this resolves open questions due to [Bun et al. 2015] and [Feldman and Xiao, 2015]. We leave as an open question whether every class with a finite Littlestone dimension can be learned by an approximately differentially private algorithm. @InProceedings{STOC19p852, author = {Noga Alon and Roi Livni and Maryanthe Malliaris and Shay Moran}, title = {Private PAC Learning Implies Finite Littlestone Dimension}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {852--860}, doi = {10.1145/3313276.3316312}, year = {2019}, } Publisher's Version 

Mozes, Shay 
STOC '19: "Almost Optimal Distance Oracles ..."
Almost Optimal Distance Oracles for Planar Graphs
Panagiotis Charalampopoulos, Paweł Gawrychowski, Shay Mozes, and Oren Weimann (King's College London, UK; University of Wrocław, Poland; IDC Herzliya, Israel; University of Haifa, Israel) We present new tradeoffs between space and query-time for exact distance oracles in directed weighted planar graphs. These tradeoffs are almost optimal in the sense that they are within polylogarithmic, subpolynomial or arbitrarily small polynomial factors from the naïve linear-space, constant-query-time lower bound. These tradeoffs include: (i) an oracle with space O(n^{1+є}) and query-time Õ(1) for any constant є>0, (ii) an oracle with space Õ(n) and query-time O(n^{є}) for any constant є>0, and (iii) an oracle with space n^{1+o(1)} and query-time n^{o(1)}. @InProceedings{STOC19p138, author = {Panagiotis Charalampopoulos and Paweł Gawrychowski and Shay Mozes and Oren Weimann}, title = {Almost Optimal Distance Oracles for Planar Graphs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {138--151}, doi = {10.1145/3313276.3316316}, year = {2019}, } Publisher's Version 

Murray, Cody D. 
STOC '19: "Weak Lower Bounds on Resource-Bounded ..."
Weak Lower Bounds on Resource-Bounded Compression Imply Strong Separations of Complexity Classes
Dylan M. McKay, Cody D. Murray, and R. Ryan Williams (Massachusetts Institute of Technology, USA; University of California at Berkeley, USA) The Minimum Circuit Size Problem (MCSP) asks to determine the minimum size of a circuit computing a given truth table. MCSP is a natural and powerful string compression problem using bounded-size circuits. Recently, Oliveira and Santhanam [FOCS 2018] and Oliveira, Pich, and Santhanam [ECCC 2018] demonstrated a “hardness magnification” phenomenon for MCSP in restricted settings. Letting MCSP[s(n)] be the problem of deciding if a truth table of length 2^{n} has circuit complexity at most s(n), they proved that small (fixed-polynomial) average-case circuit/formula lower bounds for MCSP[2^{√n}], or lower bounds for approximating MCSP[2^{o(n)}], would imply major separations such as NP ⊄ BPP and NP ⊄ P/poly. We strengthen their results in several directions, obtaining magnification results from worst-case lower bounds on exactly computing the search version of generalizations of MCSP[s(n)], which also extend to time-bounded Kolmogorov complexity. In particular, we show that search-MCSP[s(n)] (where we must output an s(n)-size circuit when it exists) admits extremely efficient AC^{0} circuits and streaming algorithms using Σ_{3} SAT oracle gates of small fan-in (related to the size s(n) we want to test). For A : {0,1}^{⋆} → {0,1}, let search-MCSP^{A}[s(n)] be the problem: Given a truth table T of size N=2^{n}, output a Boolean circuit for T of size at most s(n) with AND, OR, NOT, and A-oracle gates (or report that no such circuit exists). Some consequences of our results are: (1) For reasonable s(n) ≥ n and A ∈ PH, if search-MCSP^{A}[s(n)] does not have a 1-pass deterministic poly(s(n))-space streaming algorithm with poly(s(n)) update time, then P ≠ NP. 
For example, proving that it is impossible to synthesize SAT-oracle circuits of size 2^{n/log⋆ n} with a streaming algorithm on truth tables of length N=2^{n} using N^{ε} update time and N^{ε} space on length-N inputs (for some ε > 0) would already separate P and NP. Note that some extremely simple functions, such as EQUALITY of two strings, already satisfy such lower bounds. (2) If search-MCSP[n^{c}] lacks Õ(N)-size, Õ(1)-depth circuits for some c ≥ 1, then NP ⊄ P/poly. (3) If search-MCSP[s(n)] does not have N · poly(s(n))-size, O(log N)-depth circuits, then NP ⊄ NC^{1}. Note it is known that MCSP[2^{√n}] does not have formulas of N^{1.99} size [Hirahara and Santhanam, CCC 2017]. (4) If there is an ε > 0 such that for all c ≥ 1, search-MCSP[2^{n/c}] does not have N^{1+ε}-size, O(1/ε)-depth ACC^{0} circuits, then NP ⊄ ACC^{0}. Thus the amplification results of Allender and Koucký [JACM 2010] can extend to problems in NP and beyond. Furthermore, if we substitute ⊕P, PP, PSPACE, or EXP-complete problems for the oracle A, we obtain separations for those corresponding complexity classes instead of NP. Analogues of the above results hold for time-bounded Kolmogorov complexity as well. @InProceedings{STOC19p1215, author = {Dylan M. McKay and Cody D. Murray and R. Ryan Williams}, title = {Weak Lower Bounds on Resource-Bounded Compression Imply Strong Separations of Complexity Classes}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1215--1225}, doi = {10.1145/3313276.3316396}, year = {2019}, } Publisher's Version 

Musco, Cameron 
STOC '19: "A Universal Sampling Method ..."
A Universal Sampling Method for Reconstructing Signals with Simple Fourier Transforms
Haim Avron, Michael Kapralov, Cameron Musco, Christopher Musco, Ameya Velingker, and Amir Zandieh (Tel Aviv University, Israel; EPFL, Switzerland; Microsoft Research, USA; Princeton University, USA; Google Research, USA) Reconstructing continuous signals based on a small number of discrete samples is a fundamental problem across science and engineering. We are often interested in signals with "simple" Fourier structure, e.g., those involving frequencies within a bounded range, a small number of frequencies, or a few blocks of frequencies (i.e., bandlimited, sparse, and multiband signals, respectively). More broadly, any prior knowledge on a signal's Fourier power spectrum can constrain its complexity. Intuitively, signals with more highly constrained Fourier structure require fewer samples to reconstruct. We formalize this intuition by showing that, roughly, a continuous signal from a given class can be approximately reconstructed using a number of samples proportional to the statistical dimension of the allowed power spectrum of that class. We prove that, in nearly all settings, this natural measure tightly characterizes the sample complexity of signal reconstruction. Surprisingly, we also show that, up to log factors, a universal non-uniform sampling strategy can achieve this optimal complexity for any class of signals. We present an efficient and general algorithm for recovering a signal from the samples taken. For bandlimited and sparse signals, our method matches the state of the art, while providing the first computationally and sample efficient solution to a broader range of problems, including multiband signal reconstruction and Gaussian process regression tasks in one dimension. Our work is based on a novel connection between randomized linear algebra and the problem of reconstructing signals with constrained Fourier structure. 
We extend tools based on statistical leverage score sampling and column-based matrix reconstruction to the approximation of continuous linear operators that arise in the signal reconstruction problem. We believe these extensions are of independent interest and serve as a foundation for tackling a broad range of continuous-time problems using randomized methods. @InProceedings{STOC19p1051, author = {Haim Avron and Michael Kapralov and Cameron Musco and Christopher Musco and Ameya Velingker and Amir Zandieh}, title = {A Universal Sampling Method for Reconstructing Signals with Simple Fourier Transforms}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1051--1063}, doi = {10.1145/3313276.3316363}, year = {2019}, } Publisher's Version 

Musco, Christopher 
STOC '19: "A Universal Sampling Method ..."
A Universal Sampling Method for Reconstructing Signals with Simple Fourier Transforms
Haim Avron, Michael Kapralov, Cameron Musco, Christopher Musco, Ameya Velingker, and Amir Zandieh (Tel Aviv University, Israel; EPFL, Switzerland; Microsoft Research, USA; Princeton University, USA; Google Research, USA) Reconstructing continuous signals based on a small number of discrete samples is a fundamental problem across science and engineering. We are often interested in signals with "simple" Fourier structure, e.g., those involving frequencies within a bounded range, a small number of frequencies, or a few blocks of frequencies (i.e., bandlimited, sparse, and multiband signals, respectively). More broadly, any prior knowledge on a signal's Fourier power spectrum can constrain its complexity. Intuitively, signals with more highly constrained Fourier structure require fewer samples to reconstruct. We formalize this intuition by showing that, roughly, a continuous signal from a given class can be approximately reconstructed using a number of samples proportional to the statistical dimension of the allowed power spectrum of that class. We prove that, in nearly all settings, this natural measure tightly characterizes the sample complexity of signal reconstruction. Surprisingly, we also show that, up to log factors, a universal non-uniform sampling strategy can achieve this optimal complexity for any class of signals. We present an efficient and general algorithm for recovering a signal from the samples taken. For bandlimited and sparse signals, our method matches the state of the art, while providing the first computationally and sample efficient solution to a broader range of problems, including multiband signal reconstruction and Gaussian process regression tasks in one dimension. Our work is based on a novel connection between randomized linear algebra and the problem of reconstructing signals with constrained Fourier structure. 
We extend tools based on statistical leverage score sampling and column-based matrix reconstruction to the approximation of continuous linear operators that arise in the signal reconstruction problem. We believe these extensions are of independent interest and serve as a foundation for tackling a broad range of continuous-time problems using randomized methods. @InProceedings{STOC19p1051, author = {Haim Avron and Michael Kapralov and Cameron Musco and Christopher Musco and Ameya Velingker and Amir Zandieh}, title = {A Universal Sampling Method for Reconstructing Signals with Simple Fourier Transforms}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1051--1063}, doi = {10.1145/3313276.3316363}, year = {2019}, } Publisher's Version 

Nakos, Vasileios 
STOC '19: "Stronger L2/L2 Compressed ..."
Stronger L2/L2 Compressed Sensing; Without Iterating
Vasileios Nakos and Zhao Song (Harvard University, USA; University of Texas at Austin, USA) We consider the extensively studied problem of ℓ_{2}/ℓ_{2} compressed sensing. The main contribution of our work is an improvement over [Gilbert, Li, Porat and Strauss, STOC 2010] with faster decoding time and significantly smaller column sparsity, answering two open questions of the aforementioned work. Previous work on sublinear-time compressed sensing employed an iterative procedure, recovering the heavy coordinates in phases. We completely depart from that framework, and give the first sublinear-time ℓ_{2}/ℓ_{2} scheme which achieves the optimal number of measurements without iterating; this new approach is the key step to our progress. Towards that, we satisfy the ℓ_{2}/ℓ_{2} guarantee by exploiting the heaviness of coordinates in a way that was not exploited in previous work. Via our techniques we obtain improved results for various sparse recovery tasks, and indicate possible further applications to problems in the field, to which the aforementioned iterative procedure creates significant obstructions. @InProceedings{STOC19p289, author = {Vasileios Nakos and Zhao Song}, title = {Stronger L2/L2 Compressed Sensing; Without Iterating}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {289--297}, doi = {10.1145/3313276.3316355}, year = {2019}, } Publisher's Version 

Nanongkai, Danupon 
STOC '19: "Distributed Edge Connectivity ..."
Distributed Edge Connectivity in Sublinear Time
Mohit Daga, Monika Henzinger, Danupon Nanongkai, and Thatchaphol Saranurak (KTH, Sweden; University of Vienna, Austria; Toyota Technological Institute at Chicago, USA) We present the first sublinear-time algorithm that can compute the edge connectivity λ of a network exactly on distributed message-passing networks (the CONGEST model), as long as the network contains no parallel edges. Our algorithm takes Õ(n^{1−1/353}D^{1/353}+n^{1−1/706}) time to compute λ and a cut of cardinality λ with high probability, where n and D are the number of nodes and the diameter of the network, respectively, and Õ hides polylogarithmic factors. This running time is sublinear in n (i.e. Õ(n^{1−є})) whenever D is. Previous sublinear-time distributed algorithms can solve this problem either (i) exactly only when λ=O(n^{1/8−є}) [Thurimella PODC’95; Pritchard, Thurimella, ACM Trans. Algorithms’11; Nanongkai, Su, DISC’14] or (ii) approximately [Ghaffari, Kuhn, DISC’13; Nanongkai, Su, DISC’14]. To achieve this we develop and combine several new techniques. First, we design the first distributed algorithm that can compute a k-edge connectivity certificate for any k=O(n^{1−є}) in time Õ(√nk+D). The previous sublinear-time algorithm can do so only when k=o(√n) [Thurimella PODC’95]. In fact, our algorithm can be turned into the first parallel algorithm with polylogarithmic depth and near-linear work. Previous near-linear work algorithms are essentially sequential and previous polylogarithmic-depth algorithms require Ω(mk) work in the worst case (e.g. [Karger, Motwani, STOC’93]). 
Second, we show that by combining the recent distributed expander decomposition technique of [Chang, Pettie, Zhang, SODA’19] with techniques from the sequential deterministic edge connectivity algorithm of [Kawarabayashi, Thorup, STOC’15], we can decompose the network into a sublinear number of clusters with small average diameter and without any min-cut separating a cluster (except the “trivial” ones). This leads to a simplification of the Kawarabayashi-Thorup framework (except that we are randomized while they are deterministic). This might make this framework more useful in other models of computation. Finally, by extending the tree packing technique from [Karger STOC’96], we can find the minimum cut in time proportional to the number of components. As a byproduct of this technique, we obtain an Õ(n)-time algorithm for computing exact minimum cut for weighted graphs. @InProceedings{STOC19p343, author = {Mohit Daga and Monika Henzinger and Danupon Nanongkai and Thatchaphol Saranurak}, title = {Distributed Edge Connectivity in Sublinear Time}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {343--354}, doi = {10.1145/3313276.3316346}, year = {2019}, } Publisher's Version STOC '19: "Distributed Exact Weighted ..." Distributed Exact Weighted All-Pairs Shortest Paths in Near-Linear Time Aaron Bernstein and Danupon Nanongkai (Rutgers University, USA; KTH, Sweden) In the distributed all-pairs shortest paths problem (APSP), every node in the weighted undirected distributed network (the CONGEST model) needs to know the distance from every other node using the least number of communication rounds (typically called time complexity). The problem admits a (1+o(1))-approximation Θ(n)-time algorithm and a nearly-tight Ω(n) lower bound [Nanongkai, STOC’14; Lenzen and Patt-Shamir PODC’15]. For the exact case, Elkin [STOC’17] presented an O(n^{5/3} log^{2/3} n) time bound, which was later improved to Õ(n^{5/4}) in [Huang, Nanongkai, Saranurak FOCS’17]. 
It was shown that any superlinear lower bound (in n) requires a new technique [Censor-Hillel, Khoury, Paz, DISC’17], but otherwise it remained widely open whether there exists an Õ(n)-time algorithm for the exact case, which would match the best possible approximation algorithm. This paper resolves this question positively: we present a randomized (Las Vegas) Õ(n)-time algorithm, matching the lower bound up to polylogarithmic factors. Like the previous Õ(n^{5/4}) bound, our result works for directed graphs with zero (and even negative) edge weights. In addition to the improved running time, our algorithm works in a more general setting than that required by the previous Õ(n^{5/4}) bound; in our setting (i) the communication is only along edge directions (as opposed to bidirectional), and (ii) edge weights are arbitrary (as opposed to integers in {1, 2, ..., poly(n)}). The previously best algorithm for this more difficult setting required Õ(n^{3/2}) time [Agarwal and Ramachandran, ArXiv’18] (this can be improved to Õ(n^{4/3}) if one allows bidirectional communication). Our algorithm is extremely simple and relies on a new technique called Random Filtered Broadcast. Given any sets of nodes A,B ⊆ V, and assuming that every b ∈ B knows all distances from nodes in A, and every node v ∈ V knows all distances from nodes in B, we want every v ∈ V to know DistThrough_{B}(a,v) = min_{b∈B} dist(a,b) + dist(b,v) for every a ∈ A. Previous works typically solve this problem by broadcasting all knowledge of every b ∈ B, causing superlinear edge congestion and time. We show a randomized algorithm that can reduce edge congestion and thus solve this problem in Õ(n) expected time. 
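The quantity DistThrough_B defined above can be written out directly. A centralized Python sketch with hypothetical distance tables follows; the paper's contribution is computing this in the CONGEST model with low edge congestion, which this illustration does not attempt.

```python
def dist_through(dist_ab, dist_bv, A, B, V):
    """Compute DistThrough_B(a, v) = min_{b in B} dist(a, b) + dist(b, v)
    for every a in A and v in V, given the distance tables that the
    abstract assumes are already known. (Centralized illustration only.)
    """
    return {
        (a, v): min(dist_ab[(a, b)] + dist_bv[(b, v)] for b in B)
        for a in A
        for v in V
    }

# Tiny example with hypothetical weights: A = {a}, B = {b1, b2}, V = {v}.
dist_ab = {('a', 'b1'): 2, ('a', 'b2'): 5}
dist_bv = {('b1', 'v'): 4, ('b2', 'v'): 1}
print(dist_through(dist_ab, dist_bv, ['a'], ['b1', 'b2'], ['v']))
# → {('a', 'v'): 6}
```

Computing this naively in a network requires each b ∈ B to broadcast its whole table, which is exactly the superlinear congestion the Random Filtered Broadcast technique avoids.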
@InProceedings{STOC19p334, author = {Aaron Bernstein and Danupon Nanongkai}, title = {Distributed Exact Weighted All-Pairs Shortest Paths in Near-Linear Time}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {334--342}, doi = {10.1145/3313276.3316326}, year = {2019}, } Publisher's Version Info STOC '19: "Breaking Quadratic Time for ..." Breaking Quadratic Time for Small Vertex Connectivity and an Approximation Scheme Danupon Nanongkai, Thatchaphol Saranurak, and Sorrachai Yingchareonthawornchai (KTH, Sweden; Toyota Technological Institute at Chicago, USA; Michigan State University, USA; Aalto University, Finland) Vertex connectivity is a classic, extensively studied problem. Given an integer k, its goal is to decide if an n-node m-edge graph can be disconnected by removing k vertices. Although a linear-time algorithm has been postulated since 1974 [Aho, Hopcroft and Ullman], and despite its sibling problem of edge connectivity being resolved over two decades ago [Karger STOC’96], so far no vertex connectivity algorithms are faster than O(n^{2}) time even for k=4 and m=O(n). In the simplest case where m=O(n) and k=O(1), the O(n^{2}) bound dates five decades back to [Kleitman IEEE Trans. Circuit Theory’69]. For higher m, O(m) time is known for k ≤ 3 [Tarjan FOCS’71; Hopcroft, Tarjan SICOMP’73], and the first O(n^{2})-time bound is from [Kanevsky, Ramachandran, FOCS’87] for k=4 and from [Nagamochi, Ibaraki, Algorithmica’92] for k=O(1). For general k and m, the best bound is Õ(min(kn^{2}, n^{ω}+nk^{ω})) [Henzinger, Rao, Gabow FOCS’96; Linial, Lovász, Wigderson FOCS’86] where Õ hides polylogarithmic terms and ω<2.38 is the matrix multiplication exponent. In this paper, we present a randomized Monte Carlo algorithm with Õ(m+k^{7/3}n^{4/3}) time for any k=O(√n). This gives the first subquadratic time bound for any 4 ≤ k ≤ o(n^{2/7}) (subquadratic time refers to O(m)+o(n^{2}) time) and improves all above classic bounds for all k ≤ n^{0.44}. 
We also present a new randomized Monte Carlo (1+є)-approximation algorithm that is strictly faster than Henzinger’s previous 2-approximation algorithm [J. Algorithms’97] and all previous exact algorithms. The story is the same for the directed case, where our exact Õ(min{km^{2/3}n, km^{4/3}})-time algorithm for any k = O(√n) and our (1+є)-approximation algorithm improve classic bounds for small and large k, respectively. Additionally, our algorithm is the first approximation algorithm for directed graphs. The key to our results is to avoid computing single-source connectivity, which was needed by all previous exact algorithms and is not known to admit o(n^{2}) time. Instead, we design the first local algorithm for computing vertex connectivity; without reading the whole graph, our algorithm can find a separator of size at most k or certify that there is no separator of size at most k “near” a given seed node. @InProceedings{STOC19p241, author = {Danupon Nanongkai and Thatchaphol Saranurak and Sorrachai Yingchareonthawornchai}, title = {Breaking Quadratic Time for Small Vertex Connectivity and an Approximation Scheme}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {241--252}, doi = {10.1145/3313276.3316394}, year = {2019}, } Publisher's Version 

Narayanan, Shyam 
STOC '19: "Optimal Terminal Dimensionality ..."
Optimal Terminal Dimensionality Reduction in Euclidean Space
Shyam Narayanan and Jelani Nelson (Harvard University, USA) Let ε ∈ (0,1) and let X ⊂ ℝ^{d} be arbitrary with |X| = n > 1. The Johnson-Lindenstrauss lemma states that there exists f : X → ℝ^{m} with m = O(ε^{−2} log n) such that ∀ x ∈ X ∀ y ∈ X, ‖x−y‖_{2} ≤ ‖f(x)−f(y)‖_{2} ≤ (1+ε)‖x−y‖_{2}. We show that a strictly stronger version of this statement holds, answering one of the main open questions posed by Mahabadi et al. in STOC 2018: “∀ y ∈ X” in the above statement may be replaced with “∀ y ∈ ℝ^{d}”, so that f not only preserves distances within X, but also distances to X from the rest of space. Previously this stronger version was only known with the worse bound m = O(ε^{−4} log n). Our proof is via a tighter analysis of (a specific instantiation of) the embedding recipe of Mahabadi et al. @InProceedings{STOC19p1064, author = {Shyam Narayanan and Jelani Nelson}, title = {Optimal Terminal Dimensionality Reduction in Euclidean Space}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1064--1069}, doi = {10.1145/3313276.3316307}, year = {2019}, } Publisher's Version 
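The ordinary Johnson-Lindenstrauss guarantee that this paper strengthens can be demonstrated empirically with the classic random Gaussian construction. The Python sketch below illustrates the standard lemma only, not the paper's terminal embedding, which is a different and more careful construction.

```python
import math
import random

def gaussian_jl(points, m, seed=0):
    """Embed points from R^d into R^m via a random Gaussian matrix
    scaled by 1/sqrt(m), the textbook JL construction."""
    rng = random.Random(seed)
    d = len(points[0])
    G = [[rng.gauss(0, 1) / math.sqrt(m) for _ in range(d)] for _ in range(m)]
    return [[sum(g[j] * x[j] for j in range(d)) for g in G] for x in points]

def l2(x, y):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

# Embed a few random points from R^50 into R^500 and compare distances.
rng = random.Random(1)
X = [[rng.gauss(0, 1) for _ in range(50)] for _ in range(4)]
Y = gaussian_jl(X, 500)
for i in range(4):
    for j in range(i + 1, 4):
        # Each ratio is close to 1 with high probability.
        print(f"{l2(Y[i], Y[j]) / l2(X[i], X[j]):.3f}")
```

The paper's point is that a single map can be made to preserve distances not just within X but from X to every point of ℝ^{d}, with the same m = O(ε^{−2} log n) target dimension.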

Nelson, Jelani 
STOC '19: "Optimal Terminal Dimensionality ..."
Optimal Terminal Dimensionality Reduction in Euclidean Space
Shyam Narayanan and Jelani Nelson (Harvard University, USA) Let ε ∈ (0,1) and let X ⊂ ℝ^{d} be arbitrary with |X| = n > 1. The Johnson-Lindenstrauss lemma states that there exists f : X → ℝ^{m} with m = O(ε^{−2} log n) such that ∀ x ∈ X ∀ y ∈ X, ‖x−y‖_{2} ≤ ‖f(x)−f(y)‖_{2} ≤ (1+ε)‖x−y‖_{2}. We show that a strictly stronger version of this statement holds, answering one of the main open questions posed by Mahabadi et al. in STOC 2018: “∀ y ∈ X” in the above statement may be replaced with “∀ y ∈ ℝ^{d}”, so that f not only preserves distances within X, but also distances to X from the rest of space. Previously this stronger version was only known with the worse bound m = O(ε^{−4} log n). Our proof is via a tighter analysis of (a specific instantiation of) the embedding recipe of Mahabadi et al. @InProceedings{STOC19p1064, author = {Shyam Narayanan and Jelani Nelson}, title = {Optimal Terminal Dimensionality Reduction in Euclidean Space}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1064--1069}, doi = {10.1145/3313276.3316307}, year = {2019}, } Publisher's Version 

Nguyễn, Huy L. 
STOC '19: "Submodular Maximization with ..."
Submodular Maximization with Matroid and Packing Constraints in Parallel
Alina Ene, Huy L. Nguyễn, and Adrian Vladu (Boston University, USA; Northeastern University, USA) We consider the problem of maximizing the multilinear extension of a submodular function subject to a single matroid constraint or multiple packing constraints with a small number of adaptive rounds of evaluation queries. We obtain the first algorithms with low adaptivity for submodular maximization with a matroid constraint. Our algorithms achieve a (1−1/e−є)-approximation for monotone functions and a (1/e−є)-approximation for non-monotone functions, which nearly matches the best guarantees known in the fully adaptive setting. The number of rounds of adaptivity is O(log^{2} n/є^{3}), which is an exponential speedup over the existing algorithms. We obtain the first parallel algorithm for non-monotone submodular maximization subject to packing constraints. Our algorithm achieves a (1/e−є)-approximation using O(log(n/є) log(1/є) log(n+m)/є^{2}) parallel rounds, which is again an exponential speedup in parallel time over the existing algorithms. For monotone functions, we obtain a (1−1/e−є)-approximation in O(log(n/є) log m/є^{2}) parallel rounds. The number of parallel rounds of our algorithm matches that of the state-of-the-art algorithm for solving packing LPs with a linear objective (Mahoney et al., 2016). Our results apply more generally to the problem of maximizing a diminishing-returns submodular (DR-submodular) function. @InProceedings{STOC19p90, author = {Alina Ene and Huy L. Nguyễn and Adrian Vladu}, title = {Submodular Maximization with Matroid and Packing Constraints in Parallel}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {90--101}, doi = {10.1145/3313276.3316389}, year = {2019}, } Publisher's Version 

Nirkhe, Chinmay 
STOC '19: "Good Approximate Quantum LDPC ..."
Good Approximate Quantum LDPC Codes from Spacetime Circuit Hamiltonians
Thomas C. Bohdanowicz, Elizabeth Crosson, Chinmay Nirkhe, and Henry Yuen (California Institute of Technology, USA; University of New Mexico, USA; University of California at Berkeley, USA; University of Toronto, Canada) We study approximate quantum low-density parity-check (QLDPC) codes, which are approximate quantum error-correcting codes specified as the ground space of a frustration-free local Hamiltonian, whose terms do not necessarily commute. Such codes generalize stabilizer QLDPC codes, which are exact quantum error-correcting codes with sparse, low-weight stabilizer generators (i.e. each stabilizer generator acts on a few qubits, and each qubit participates in a few stabilizer generators). Our investigation is motivated by an important question in Hamiltonian complexity and quantum coding theory: do stabilizer QLDPC codes with constant rate, linear distance, and constant-weight stabilizers exist? We show that obtaining such optimal scaling of parameters (modulo polylogarithmic corrections) is possible if we go beyond stabilizer codes: we prove the existence of a family of [[N,k,d,ε]] approximate QLDPC codes that encode k = Ω(N) logical qubits into N physical qubits with distance d = Ω(N) and approximation infidelity ε = 1/polylog(N). The code space is stabilized by a set of 10-local non-commuting projectors, with each physical qubit only participating in polylog(N) projectors. We prove the existence of an efficient encoding map and show that the spectral gap of the code Hamiltonian scales as Ω(N^{−3.09}). We also show that arbitrary Pauli errors can be locally detected by circuits of polylogarithmic depth. Our family of approximate QLDPC codes is based on applying a recent connection between circuit Hamiltonians and approximate quantum codes (Nirkhe et al., ICALP 2018) to a result showing that random Clifford circuits of polylogarithmic depth yield asymptotically good quantum codes (Brown and Fawzi, ISIT 2013). 
Then, in order to obtain a code with sparse checks and strong detection of local errors, we use a spacetime circuit-to-Hamiltonian construction in order to take advantage of the parallelism of the Brown-Fawzi circuits. Because of this, we call our codes spacetime codes. The analysis of the spectral gap of the code Hamiltonian is the main technical contribution of this work. We show that for any depth-D quantum circuit on n qubits there is an associated spacetime circuit-to-Hamiltonian construction with spectral gap Ω(n^{−3.09} D^{−2} log^{−6}(n)). To lower bound this gap we use a Markov chain decomposition method to divide the state space of partially completed circuit configurations into overlapping subsets corresponding to uniform circuit segments of depth log n, which are based on bitonic sorting circuits. We use the combinatorial properties of these circuit configurations to show rapid mixing between the subsets, and within the subsets we develop a novel isomorphism between the local update Markov chain on bitonic circuit configurations and the edge-flip Markov chain on equal-area dyadic tilings, whose mixing time was recently shown to be polynomial (Cannon, Levin, and Stauffer, RANDOM 2017). Previous lower bounds on the spectral gap of spacetime circuit Hamiltonians have all been based on a connection to exactly solvable quantum spin chains and applied only to 1+1 dimensional nearest-neighbor quantum circuits with at least linear depth. @InProceedings{STOC19p481, author = {Thomas C. Bohdanowicz and Elizabeth Crosson and Chinmay Nirkhe and Henry Yuen}, title = {Good Approximate Quantum LDPC Codes from Spacetime Circuit Hamiltonians}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {481--490}, doi = {10.1145/3313276.3316384}, year = {2019}, } Publisher's Version
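The bitonic sorting circuits underlying the depth-log n segments are highly uniform comparator networks; the standard iterative construction, included here only to illustrate that uniformity:

```python
def bitonic_comparators(n):
    """Comparator network (i, j, ascending) of a bitonic sorter for
    n = 2^k wires; depth O(log^2 n) with a very regular layout."""
    comps = []
    k = 2
    while k <= n:
        j = k // 2
        while j >= 1:
            for i in range(n):
                partner = i ^ j
                if partner > i:
                    comps.append((i, partner, (i & k) == 0))
            j //= 2
        k *= 2
    return comps

def apply_network(comps, values):
    v = list(values)
    for i, j, ascending in comps:
        out_of_order = v[i] > v[j] if ascending else v[i] < v[j]
        if out_of_order:
            v[i], v[j] = v[j], v[i]
    return v

out = apply_network(bitonic_comparators(8), [5, 1, 4, 2, 8, 0, 9, 3])
```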

Nisan, Noam 
STOC '19: "The Communication Complexity ..."
The Communication Complexity of Local Search
Yakov Babichenko, Shahar Dobzinski, and Noam Nisan (Technion, Israel; Weizmann Institute of Science, Israel; Hebrew University of Jerusalem, Israel) We study a communication variant of local search. There is some fixed, commonly known graph G. Alice holds f_{A} and Bob holds f_{B}, both of which are functions that specify a value for each vertex. The goal is to find a local maximum of f_{A}+f_{B} with respect to G, i.e., a vertex v for which (f_{A}+f_{B})(v) ≥ (f_{A}+f_{B})(u) for each neighbor u of v. Our main result is that finding a local maximum requires polynomial (in the number of vertices) bits of communication. The result holds for the following families of graphs: three-dimensional grids, hypercubes, odd graphs, and degree-4 graphs. Moreover, we prove an optimal communication bound of Ω(√N) for the hypercube, and for a constant-dimension grid, where N is the number of vertices in the graph. We provide applications of our main result in two domains, exact potential games and combinatorial auctions. Each one of the results demonstrates an exponential separation between the nondeterministic communication complexity and the randomized communication complexity of a total search problem. First, we show that finding a pure Nash equilibrium in 2-player N-action exact potential games requires poly(N) communication. We also show that finding a pure Nash equilibrium in n-player 2-action exact potential games requires exp(n) communication. The second domain that we consider is combinatorial auctions, in which we prove that finding a local maximum in combinatorial auctions requires exponential (in the number of items) communication even when the valuations are submodular. @InProceedings{STOC19p650, author = {Yakov Babichenko and Shahar Dobzinski and Noam Nisan}, title = {The Communication Complexity of Local Search}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {650--661}, doi = {10.1145/3313276.3316354}, year = {2019}, } Publisher's Version
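Without communication limits the search problem itself is easy; a minimal steepest-ascent sketch on a toy hypercube instance (the specific f_A and f_B below are illustrative):

```python
from itertools import product

def local_maximum(neighbors, f, start):
    """Steepest-ascent local search: move to the best strictly better
    neighbor until none exists.  The paper shows that when f = f_A + f_B
    is split between two parties, any protocol needs poly(N) bits."""
    v = start
    while True:
        better = [u for u in neighbors[v] if f(u) > f(v)]
        if not better:
            return v
        v = max(better, key=f)

# toy instance: the 4-dimensional hypercube
n = 4
vertices = list(product([0, 1], repeat=n))
neighbors = {v: [v[:i] + (1 - v[i],) + v[i + 1:] for i in range(n)]
             for v in vertices}
f_A = lambda v: sum(v)     # Alice's private values (illustrative)
f_B = lambda v: -2 * v[0]  # Bob's private values (illustrative)
f = lambda v: f_A(v) + f_B(v)
peak = local_maximum(neighbors, f, start=(0, 0, 0, 0))
```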

O'Donnell, Ryan 
STOC '19: "Quantum State Certification ..."
Quantum State Certification
Costin Bădescu, Ryan O'Donnell, and John Wright (Carnegie Mellon University, USA; Massachusetts Institute of Technology, USA) We consider the problem of quantum state certification, where one is given n copies of an unknown d-dimensional quantum mixed state ρ, and one wants to test whether ρ is equal to some known mixed state σ or else is є-far from σ. The goal is to use notably fewer copies than the Ω(d^{2}) needed for full tomography on ρ (i.e., density estimation). We give two robust state certification algorithms: one with respect to fidelity using n = O(d/є) copies, and one with respect to trace distance using n = O(d/є^{2}) copies. The latter algorithm also applies when σ is unknown as well. These copy complexities are optimal up to constant factors. @InProceedings{STOC19p503, author = {Costin Bădescu and Ryan O'Donnell and John Wright}, title = {Quantum State Certification}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {503--514}, doi = {10.1145/3313276.3316344}, year = {2019}, } Publisher's Version STOC '19: "Fooling Polytopes ..." Fooling Polytopes Ryan O'Donnell, Rocco A. Servedio, and Li-Yang Tan (Carnegie Mellon University, USA; Columbia University, USA; Stanford University, USA) We give a pseudorandom generator that fools m-facet polytopes over {0,1}^{n} with seed length polylog(m) · log(n). The previous best seed length had superlinear dependence on m. An immediate consequence is a deterministic quasipolynomial-time algorithm for approximating the number of solutions to any {0,1}-integer program. @InProceedings{STOC19p614, author = {Ryan O'Donnell and Rocco A. Servedio and Li-Yang Tan}, title = {Fooling Polytopes}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {614--625}, doi = {10.1145/3313276.3316321}, year = {2019}, } Publisher's Version

Opršal, Jakub 
STOC '19: "Algebraic Approach to Promise ..."
Algebraic Approach to Promise Constraint Satisfaction
Jakub Bulín, Andrei Krokhin, and Jakub Opršal (Charles University in Prague, Czechia; University of Durham, UK) The complexity and approximability of the constraint satisfaction problem (CSP) has been actively studied over the last 20 years. A new version of the CSP, the promise CSP (PCSP), has recently been proposed, motivated by open questions about the approximability of variants of satisfiability and graph colouring. The PCSP significantly extends the standard decision CSP. The complexity of CSPs with a fixed constraint language on a finite domain has recently been fully classified, greatly guided by the algebraic approach, which uses polymorphisms — high-dimensional symmetries of solution spaces — to analyse the complexity of problems. The corresponding classification for PCSPs is wide open and includes some longstanding open questions, such as the complexity of approximate graph colouring, as special cases. The basic algebraic approach to PCSP was initiated by Brakensiek and Guruswami, and in this paper we significantly extend it and lift it from concrete properties of polymorphisms to their abstract properties. We introduce a new class of problems that can be viewed as algebraic versions of the (Gap) Label Cover problem, and show that every PCSP with a fixed constraint language is equivalent to a problem of this form. This allows us to identify a “measure of symmetry” that is well suited for comparing and relating the complexity of different PCSPs via the algebraic approach. We demonstrate how our theory can be applied by improving the state of the art in approximate graph colouring: we show that, for any k≥ 3, it is NP-hard to find a (2k−1)-colouring of a given k-colourable graph. @InProceedings{STOC19p602, author = {Jakub Bulín and Andrei Krokhin and Jakub Opršal}, title = {Algebraic Approach to Promise Constraint Satisfaction}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {602--613}, doi = {10.1145/3313276.3316300}, year = {2019}, } Publisher's Version

Pach, János 
STOC '19: "Planar Point Sets Determine ..."
Planar Point Sets Determine Many Pairwise Crossing Segments
János Pach, Natan Rubin, and Gábor Tardos (EPFL, Switzerland; Renyi Institute, Hungary; Ben-Gurion University of the Negev, Israel; Central European University, Hungary) We show that any set of n points in general position in the plane determines n^{1−o(1)} pairwise crossing segments. The best previously known lower bound, Ω(√n), was proved more than 25 years ago by Aronov, Erdős, Goddard, Kleitman, Klugerman, Pach, and Schulman. Our proof is fully constructive, and extends to dense geometric graphs. @InProceedings{STOC19p1158, author = {János Pach and Natan Rubin and Gábor Tardos}, title = {Planar Point Sets Determine Many Pairwise Crossing Segments}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1158--1166}, doi = {10.1145/3313276.3316328}, year = {2019}, } Publisher's Version

Panageas, Ioannis 
STOC '19: "Regression from Dependent ..."
Regression from Dependent Observations
Constantinos Daskalakis, Nishanth Dikkala, and Ioannis Panageas (Massachusetts Institute of Technology, USA; Singapore University of Technology and Design, Singapore) The standard linear and logistic regression models assume that the response variables are independent, but share the same linear relationship to their corresponding vectors of covariates. The assumption that the response variables are independent is, however, too strong. In many applications, these responses are collected on nodes of a network, or some spatial or temporal domain, and are dependent. Examples abound in financial and meteorological applications, and dependencies naturally arise in social networks through peer effects. Regression with dependent responses has thus received a lot of attention in the Statistics and Economics literature, but there are no strong consistency results unless multiple independent samples of the vectors of dependent responses can be collected from these models. We present computationally and statistically efficient methods for linear and logistic regression models when the response variables are dependent on a network. Given one sample from a networked linear or logistic regression model and under mild assumptions, we prove strong consistency results for recovering the vector of coefficients and the strength of the dependencies, recovering the rates of standard regression under independent observations. We use projected gradient descent on the negative log-likelihood, or negative log-pseudolikelihood, and establish their strong convexity and consistency using concentration of measure for dependent random variables. @InProceedings{STOC19p881, author = {Constantinos Daskalakis and Nishanth Dikkala and Ioannis Panageas}, title = {Regression from Dependent Observations}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {881--889}, doi = {10.1145/3313276.3316362}, year = {2019}, } Publisher's Version

Paneth, Omer 
STOC '19: "How to Delegate Computations ..."
How to Delegate Computations Publicly
Yael Tauman Kalai, Omer Paneth, and Lisa Yang (Microsoft Research, USA; Massachusetts Institute of Technology, USA) We construct a delegation scheme for all polynomial time computations. Our scheme is publicly verifiable and completely non-interactive in the common reference string (CRS) model. Our scheme is based on an efficiently falsifiable decisional assumption on groups with bilinear maps. Prior to this work, publicly verifiable non-interactive delegation schemes were only known under knowledge assumptions (or in the Random Oracle model) or under non-standard assumptions related to obfuscation or multilinear maps. We obtain our result in two steps. First, we construct a scheme with a long CRS (polynomial in the running time of the computation) by following the blueprint of Paneth and Rothblum (TCC 2017). Then we bootstrap this scheme to obtain a short CRS. Our bootstrapping theorem exploits the fact that our scheme can securely delegate certain non-deterministic computations. @InProceedings{STOC19p1115, author = {Yael Tauman Kalai and Omer Paneth and Lisa Yang}, title = {How to Delegate Computations Publicly}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1115--1124}, doi = {10.1145/3313276.3316411}, year = {2019}, } Publisher's Version STOC '19: "Weak Zero-Knowledge Beyond ..." Weak Zero-Knowledge Beyond the Black-Box Barrier Nir Bitansky, Dakshita Khurana, and Omer Paneth (Tel Aviv University, Israel; Microsoft Research, USA; University of Illinois at Urbana-Champaign, USA; Massachusetts Institute of Technology, USA) The round complexity of zero-knowledge protocols is a longstanding open question, yet to be settled under standard assumptions. So far, the question has appeared equally challenging for relaxations such as weak zero-knowledge and witness hiding. Protocols satisfying these relaxed notions under standard assumptions have at least four messages, just like full-fledged zero-knowledge. 
The difficulty in improving round complexity stems from a fundamental barrier: none of these notions can be achieved in three messages via reductions (or simulators) that treat the verifier as a black box. We introduce a new non-black-box technique and use it to obtain the first protocols that cross this barrier under standard assumptions. We obtain weak zero-knowledge for NP in two messages, assuming the existence of quasi-polynomially secure fully homomorphic encryption and other standard primitives (known based on the quasi-polynomial hardness of Learning with Errors), and sub-exponentially secure one-way functions. We also obtain weak zero-knowledge for NP in three messages under standard polynomial assumptions (following for example from fully homomorphic encryption and factoring). We also give, under polynomial assumptions, a two-message witness-hiding protocol for any language L ∈ NP that has a witness encryption scheme. This protocol is publicly verifiable. Our technique is based on a new homomorphic trapdoor paradigm, which can be seen as a non-black-box analog of the classic Feige-Lapidot-Shamir trapdoor paradigm. @InProceedings{STOC19p1091, author = {Nir Bitansky and Dakshita Khurana and Omer Paneth}, title = {Weak Zero-Knowledge Beyond the Black-Box Barrier}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1091--1102}, doi = {10.1145/3313276.3316382}, year = {2019}, } Publisher's Version

Panigrahi, Debmalya 
STOC '19: "Dynamic Set Cover: Improved ..."
Dynamic Set Cover: Improved Algorithms and Lower Bounds
Amir Abboud, Raghavendra Addanki, Fabrizio Grandoni, Debmalya Panigrahi, and Barna Saha (IBM Research, USA; University of Massachusetts at Amherst, USA; IDSIA, Switzerland; Duke University, USA) We give new upper and lower bounds for the dynamic set cover problem. First, we give a (1+є)f-approximation for fully dynamic set cover in O(f^{2}log n/є^{5}) (amortized) update time, for any є > 0, where f is the maximum number of sets that an element belongs to. In the decremental setting, the update time can be improved to O(f^{2}/є^{5}), while still obtaining a (1+є)f-approximation. These are the first algorithms that obtain an approximation factor linear in f for dynamic set cover, thereby almost matching the best bounds known in the offline setting and improving upon the previous best approximation of O(f^{2}) in the dynamic setting. To complement our upper bounds, we also show that a linear dependence of the update time on f is necessary unless we can tolerate much worse approximation factors. Using the recent distributed PCP framework, we show that any dynamic set cover algorithm that has an amortized update time of O(f^{1−є}) must have an approximation factor that is Ω(n^{δ}) for some constant δ>0 under the Strong Exponential Time Hypothesis. @InProceedings{STOC19p114, author = {Amir Abboud and Raghavendra Addanki and Fabrizio Grandoni and Debmalya Panigrahi and Barna Saha}, title = {Dynamic Set Cover: Improved Algorithms and Lower Bounds}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {114--125}, doi = {10.1145/3313276.3316376}, year = {2019}, } Publisher's Version
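In the static setting, the f-approximation benchmark comes from the classic rule of taking every set that contains some still-uncovered element; a minimal sketch on a toy instance (not the paper's dynamic data structure):

```python
def f_approx_set_cover(sets, universe):
    """Classic static f-approximation, where f is the maximum number of
    sets any element belongs to: scan elements, and for each uncovered
    one add every set containing it (at most f sets per element)."""
    cover, covered = [], set()
    for x in universe:
        if x not in covered:
            for name, s in sets.items():
                if x in s and name not in cover:
                    cover.append(name)
                    covered |= s
    return cover

# toy instance with f = 2 (each element lies in exactly two sets)
sets = {"s1": {1, 2}, "s2": {2, 3}, "s3": {3, 4}, "s4": {4, 1}}
cover = f_approx_set_cover(sets, universe=[1, 2, 3, 4])
```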

Parter, Merav 
STOC '19: "Planar Diameter via Metric ..."
Planar Diameter via Metric Compression
Jason Li and Merav Parter (Carnegie Mellon University, USA; Weizmann Institute of Science, Israel) We develop a new approach for distributed distance computation in planar graphs that is based on a variant of the metric compression problem recently introduced by Abboud et al. [SODA’18]. In our variant of the Planar Graph Metric Compression Problem, one is given an n-vertex planar graph G=(V,E), a set S ⊆ V of source terminals lying on a single face, and a subset of target terminals T ⊆ V. The goal is to compactly encode the S × T distances. One of our key technical contributions is in providing a compression scheme that encodes all S × T distances using O(|S|·poly(D)+|T|) bits, for unweighted graphs with diameter D. This significantly improves the state of the art of O(|S|·2^{D}+|T|·D) bits. We also consider an approximate version of the problem for weighted graphs, where the goal is to encode a (1+є) approximation of the S × T distances, for a given input parameter є ∈ (0,1]. Here, our compression scheme uses O((|S|/є)+|T|) bits. In addition, we describe how these compression schemes can be computed in near-linear time. At the heart of this compact compression scheme lies a VC-dimension-type argument on planar graphs, using the well-known Sauer’s lemma. This efficient compression scheme leads to several improvements and simplifications in the setting of diameter computation, most notably in the distributed setting: There is an O(D^{5})-round randomized distributed algorithm for computing the diameter in planar graphs, w.h.p. There is an (O(D^{3})+D^{2}(log n/є))-round randomized distributed algorithm for computing a (1+є) approximation of the diameter in weighted planar graphs, with unweighted diameter D, w.h.p. No sublinear-round algorithms were known for these problems before. These distributed constructions are based on a new recursive graph decomposition that preserves the (unweighted) diameter of each of the subgraphs up to a logarithmic term. 
Using this decomposition, we also get an exact SSSP tree computation within O(D^{2}) rounds. @InProceedings{STOC19p152, author = {Jason Li and Merav Parter}, title = {Planar Diameter via Metric Compression}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {152--163}, doi = {10.1145/3313276.3316358}, year = {2019}, } Publisher's Version

Peng, Richard 
STOC '19: "Fully Dynamic Spectral Vertex ..."
Fully Dynamic Spectral Vertex Sparsifiers and Applications
David Durfee, Yu Gao, Gramoz Goranci, and Richard Peng (Georgia Tech, USA; University of Vienna, Austria) We study dynamic algorithms for maintaining spectral vertex sparsifiers of graphs with respect to a set of terminals T of our choice. Such objects preserve pairwise resistances, solutions to systems of linear equations, and energy of electrical flows between the terminals in T. We give a data structure that supports insertions and deletions of edges, and terminal additions, all in sublinear time. We then show the applicability of our result to the following problems. (1) A data structure for dynamically maintaining solutions to Laplacian systems L x = b, where L is the graph Laplacian matrix and b is a demand vector. For a bounded-degree, unweighted graph, we support modifications to both L and b while providing access to є-approximations to the energy of routing an electrical flow with demand b, as well as query access to entries of a vector x such that ∥x−L^{†} b ∥_{L} ≤ є ∥L^{†} b ∥_{L} in Õ(n^{11/12}є^{−5}) expected amortized update and query time. (2) A data structure for maintaining fully dynamic All-Pairs Effective Resistance. For an intermixed sequence of edge insertions, deletions, and resistance queries, our data structure returns a (1 ± є)-approximation to all the resistance queries against an oblivious adversary with high probability. Its expected amortized update and query times are Õ(min(m^{3/4},n^{5/6} є^{−2}) є^{−4}) on an unweighted graph, and Õ(n^{5/6}є^{−6}) on weighted graphs. The key ingredients in these results are (1) the interpretation of the Schur complement as a sum of random walks, (2) a suitable choice of terminals based on the behavior of these random walks to make sure that the majority of walks are local, even when the graph itself is highly connected, and (3) maintenance of these local walks and numerical solutions using data structures. 
These results together represent the first data structures for maintaining key primitives from the Laplacian paradigm for graph algorithms in sublinear time without assumptions on the underlying graph topologies. The importance of routines such as effective resistance, electrical flows, and Laplacian solvers in the static setting makes us optimistic that some of our components can provide new building blocks for dynamic graph algorithms. @InProceedings{STOC19p914, author = {David Durfee and Yu Gao and Gramoz Goranci and Richard Peng}, title = {Fully Dynamic Spectral Vertex Sparsifiers and Applications}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {914--925}, doi = {10.1145/3313276.3316379}, year = {2019}, } Publisher's Version STOC '19: "Flows in Almost Linear Time ..." Flows in Almost Linear Time via Adaptive Preconditioning Rasmus Kyng, Richard Peng, Sushant Sachdeva, and Di Wang (Harvard University, USA; Georgia Tech, USA; Microsoft Research, USA; University of Toronto, Canada) We present algorithms for solving a large class of flow and regression problems on unit-weighted graphs to (1 + 1/poly(n)) accuracy in almost-linear time. These problems include ℓ_{p}-norm minimizing flow for p large (p ∈ [ω(1), o(log^{2/3} n)]), and their duals, ℓ_{p}-norm semi-supervised learning for p close to 1. As p tends to infinity, p-norm flow and its dual tend to max-flow and min-cut respectively. Using this connection and our algorithms, we give an alternate approach for approximating undirected max-flow, and the first almost-linear time approximations of discretizations of total variation minimization objectives. Our framework is inspired by the routing-based solver for Laplacian linear systems by Spielman and Teng (STOC ’04, SIMAX ’14), and is based on several new tools we develop, including adaptive nonlinear preconditioning, tree routings, and (ultra)sparsification for mixed ℓ_{2} and ℓ_{p} norm objectives. 
@InProceedings{STOC19p902, author = {Rasmus Kyng and Richard Peng and Sushant Sachdeva and Di Wang}, title = {Flows in Almost Linear Time via Adaptive Preconditioning}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {902--913}, doi = {10.1145/3313276.3316410}, year = {2019}, } Publisher's Version 
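The effective-resistance queries maintained by the first data structure above have a textbook static definition via the Laplacian pseudoinverse; a dense O(n^3) numpy sketch, shown only for contrast with the sublinear dynamic bounds:

```python
import numpy as np

def effective_resistance(n, edges, u, v):
    """R(u, v) = (e_u - e_v)^T L^+ (e_u - e_v) for an unweighted graph,
    computed naively from the Laplacian pseudoinverse; the dynamic data
    structure answers (1 +/- eps)-approximations in sublinear time."""
    L = np.zeros((n, n))
    for a, b in edges:
        L[a, a] += 1.0
        L[b, b] += 1.0
        L[a, b] -= 1.0
        L[b, a] -= 1.0
    chi = np.zeros(n)
    chi[u], chi[v] = 1.0, -1.0
    return float(chi @ np.linalg.pinv(L) @ chi)

# sanity checks from circuit theory: series edges add, and a triangle
# gives 1 * 2 / (1 + 2) = 2/3 between two adjacent vertices
R_path = effective_resistance(3, [(0, 1), (1, 2)], 0, 2)
R_tri = effective_resistance(3, [(0, 1), (1, 2), (0, 2)], 0, 1)
```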

Perkins, Will 
STOC '19: "Algorithmic PirogovSinai ..."
Algorithmic Pirogov-Sinai Theory
Tyler Helmuth, Will Perkins, and Guus Regts (University of Bristol, UK; University of Illinois at Chicago, USA; University of Amsterdam, Netherlands) We develop an efficient algorithmic approach for approximate counting and sampling in the low-temperature regime of a broad class of statistical physics models on finite subsets of the lattice ℤ^{d} and on the torus (ℤ/nℤ)^{d}. Our approach is based on combining contour representations from Pirogov–Sinai theory with Barvinok’s approach to approximate counting using truncated Taylor series. Some consequences of our main results include an FPTAS for approximating the partition function of the hard-core model at sufficiently high fugacity on subsets of ℤ^{d} with appropriate boundary conditions, and an efficient sampling algorithm for the ferromagnetic Potts model on the discrete torus (ℤ/nℤ)^{d} at sufficiently low temperature. @InProceedings{STOC19p1009, author = {Tyler Helmuth and Will Perkins and Guus Regts}, title = {Algorithmic Pirogov-Sinai Theory}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1009--1020}, doi = {10.1145/3313276.3316305}, year = {2019}, } Publisher's Version

Pietrzak, Krzysztof 
STOC '19: "Finding a Nash Equilibrium ..."
Finding a Nash Equilibrium Is No Easier Than Breaking Fiat-Shamir
Arka Rai Choudhuri, Pavel Hubáček, Chethan Kamath, Krzysztof Pietrzak, Alon Rosen, and Guy N. Rothblum (Johns Hopkins University, USA; Charles University in Prague, Czechia; IST Austria, Austria; IDC Herzliya, Israel; Weizmann Institute of Science, Israel) The Fiat-Shamir heuristic transforms a public-coin interactive proof into a non-interactive argument, by replacing the verifier with a cryptographic hash function that is applied to the protocol’s transcript. Constructing hash functions for which this transformation is sound is a central and longstanding open question in cryptography. We show that solving the END-OF-METERED-LINE problem is no easier than breaking the soundness of the Fiat-Shamir transformation when applied to the sumcheck protocol. In particular, if the transformed protocol is sound, then any hard problem in #P gives rise to a hard distribution in the class CLS, which is contained in PPAD. Our result opens up the possibility of sampling moderately sized games for which it is hard to find a Nash equilibrium, by reducing the inversion of appropriately chosen one-way functions to #SAT. Our main technical contribution is a stateful incrementally verifiable procedure that, given a SAT instance over n variables, counts the number of satisfying assignments. This is accomplished via an exponential sequence of small steps, each computable in time poly(n). Incremental verifiability means that each intermediate state includes a sumcheck-based proof of its correctness, and the proof can be updated and verified in time poly(n). @InProceedings{STOC19p1103, author = {Arka Rai Choudhuri and Pavel Hubáček and Chethan Kamath and Krzysztof Pietrzak and Alon Rosen and Guy N. Rothblum}, title = {Finding a Nash Equilibrium Is No Easier Than Breaking Fiat-Shamir}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1103--1114}, doi = {10.1145/3313276.3316400}, year = {2019}, } Publisher's Version

Polyanskiy, Yury 
STOC '19: "Communication Complexity of ..."
Communication Complexity of Estimating Correlations
Uri Hadar, Jingbo Liu, Yury Polyanskiy, and Ofer Shayevitz (Tel Aviv University, Israel; Massachusetts Institute of Technology, USA) We characterize the communication complexity of the following distributed estimation problem. Alice and Bob observe infinitely many iid copies of ρ-correlated unit-variance (Gaussian or ±1 binary) random variables, with unknown ρ∈[−1,1]. By interactively exchanging k bits, Bob wants to produce an estimate ρ̂ of ρ. We show that the best possible performance (optimized over interaction protocol Π and estimator ρ̂) satisfies inf_{Π,ρ̂} sup_{ρ} E[(ρ̂−ρ)^{2}] = k^{−1}(1/(2 ln 2) + o(1)). Curiously, the number of samples in our achievability scheme is exponential in k; by contrast, a naive scheme exchanging k samples achieves the same Ω(1/k) rate but with a suboptimal prefactor. Our protocol achieving optimal performance is one-way (non-interactive). We also prove the Ω(1/k) bound even when ρ is restricted to any small open subinterval of [−1,1] (i.e. a local minimax lower bound). Our proof techniques rely on symmetric strong data-processing inequalities and various tensorization techniques from information-theoretic interactive common-randomness extraction. Our results also imply an Ω(n) lower bound on the information complexity of the Gap-Hamming problem, for which we show a direct information-theoretic proof. @InProceedings{STOC19p792, author = {Uri Hadar and Jingbo Liu and Yury Polyanskiy and Ofer Shayevitz}, title = {Communication Complexity of Estimating Correlations}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {792--803}, doi = {10.1145/3313276.3316332}, year = {2019}, } Publisher's Version
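The naive k-sample scheme mentioned in the abstract can be sketched for the ±1 binary case (function name and constants are illustrative):

```python
import numpy as np

def naive_protocol(rho, k, seed=0):
    """Naive one-way scheme: Alice sends Bob her k binary samples
    (k bits of communication); Bob estimates rho from the empirical
    agreement frequency.  This attains the Theta(1/k) squared-error
    rate, but with a worse prefactor than the optimal protocol."""
    rng = np.random.default_rng(seed)
    x = rng.choice([-1, 1], size=k)          # Alice's samples
    flip = rng.random(k) < (1 - rho) / 2     # y_i = x_i w.p. (1+rho)/2
    y = np.where(flip, -x, x)                # Bob's rho-correlated samples
    matches = np.mean(x == y)                # Bob compares against Alice's bits
    return 2 * matches - 1                   # unbiased estimate of rho

est = naive_protocol(rho=0.6, k=20000)
```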

Potechin, Aaron 
STOC '19: "On the Approximation Resistance ..."
On the Approximation Resistance of Balanced Linear Threshold Functions
Aaron Potechin (University of Chicago, USA) In this paper, we show that there exists a balanced linear threshold function (LTF) which is unique-games hard to approximate, refuting a conjecture of Austrin, Benabbas, and Magen. We also show that the almost monarchy predicate P(x) = sign((k−4)x_{1} + ∑_{i=2}^{k}x_{i}) is approximable for sufficiently large k. @InProceedings{STOC19p430, author = {Aaron Potechin}, title = {On the Approximation Resistance of Balanced Linear Threshold Functions}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {430--441}, doi = {10.1145/3313276.3316374}, year = {2019}, } Publisher's Version

Probst, Maximilian 
STOC '19: "Decremental StronglyConnected ..."
Decremental Strongly-Connected Components and Single-Source Reachability in Near-Linear Time
Aaron Bernstein, Maximilian Probst, and Christian Wulff-Nilsen (Rutgers University, USA; University of Copenhagen, Denmark) Computing the Strongly-Connected Components (SCCs) in a graph G=(V,E) is known to take only O(m+n) time using an algorithm by Tarjan from 1972 [SICOMP ’72], where m = |E|, n = |V|. For fully dynamic graphs, conditional lower bounds provide evidence that the update time cannot be improved by polynomial factors over recomputing the SCCs from scratch after every update. Nevertheless, substantial progress has been made to find algorithms with fast update time for decremental graphs, i.e. graphs that undergo edge deletions. In this paper, we present the first algorithm for general decremental graphs that maintains the SCCs in total update time Õ(m), thus only a polylogarithmic factor from the optimal running time. Previously such a result was only known for the special case of planar graphs [Italiano et al., STOC ’17]. Our result should be compared to the formerly best algorithm for general graphs achieving Õ(m√n) total update time by Chechik et al. [FOCS ’16], which improved upon a breakthrough result leading to O(mn^{0.9 + o(1)}) total update time by Henzinger, Krinninger and Nanongkai [STOC ’14, ICALP ’15]; these results in turn improved upon the longstanding bound of O(mn) by Roditty and Zwick [STOC ’04]. All of the above results also apply to the decremental Single-Source Reachability (SSR) problem, which can be reduced to decrementally maintaining SCCs. A bound of O(mn) total update time for decremental SSR was established already in 1981 by Even and Shiloach [JACM ’81]. @InProceedings{STOC19p365, author = {Aaron Bernstein and Maximilian Probst and Christian Wulff-Nilsen}, title = {Decremental Strongly-Connected Components and Single-Source Reachability in Near-Linear Time}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {365--376}, doi = {10.1145/3313276.3316335}, year = {2019}, } Publisher's Version
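For reference, the static O(m+n) baseline that the decremental algorithm avoids recomputing after every deletion can be realized by Kosaraju's two-pass DFS (a simpler alternative to the cited single-pass Tarjan algorithm):

```python
def sccs(graph):
    """Kosaraju's two-pass O(m + n) static SCC computation: record a
    DFS finishing order on G, then DFS the reversed graph in reverse
    finishing order; each second-pass tree is one SCC."""
    rev = {v: [] for v in graph}
    for u in graph:
        for w in graph[u]:
            rev[w].append(u)
    seen = set()

    def dfs_postorder(adj, root, out):
        stack = [(root, 0)]  # (vertex, next-child index)
        while stack:
            node, i = stack.pop()
            if i == 0:
                if node in seen:
                    continue
                seen.add(node)
            if i < len(adj[node]):
                stack.append((node, i + 1))
                if adj[node][i] not in seen:
                    stack.append((adj[node][i], 0))
            else:
                out.append(node)  # post-order position

    order = []
    for v in graph:
        dfs_postorder(graph, v, order)
    seen.clear()
    comps = []
    for v in reversed(order):
        if v not in seen:
            comp = []
            dfs_postorder(rev, v, comp)
            comps.append(sorted(comp))
    return comps

# two SCCs: the cycle 1->2->3->1 and the cycle 4<->5
g = {1: [2], 2: [3], 3: [1, 4], 4: [5], 5: [4]}
comps = sccs(g)
```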

Qi, Qi 
STOC '19: "Tight Approximation Ratio ..."
Tight Approximation Ratio of Anonymous Pricing
Yaonan Jin, Pinyan Lu, Qi Qi, Zhihao Gavin Tang, and Tao Xiao (Columbia University, USA; Shanghai University of Finance and Economics, China; Hong Kong University of Science and Technology, China; Shanghai Jiao Tong University, China) This paper considers two canonical Bayesian mechanism design settings. In the single-item setting, the tight approximation ratio of Anonymous Pricing is obtained: (1) compared to Myerson Auction, Anonymous Pricing always generates at least a 1/2.62-fraction of the revenue; (2) there is a matching lower-bound instance. In the unit-demand single-buyer setting, the tight approximation ratio between the simplest deterministic mechanism and the optimal deterministic mechanism is attained: in terms of revenue, (1) Uniform Pricing admits a 2.62-approximation to Item Pricing; (2) a matching lower-bound instance is also presented. These results answer two open questions asked by Alaei et al. (FOCS’15) and Cai and Daskalakis (GEB’15). As an implication, in the single-item setting, the approximation ratio of Second-Price Auction with Anonymous Reserve (Hartline and Roughgarden, EC’09) is improved to 2.62, which breaks the best known upper bound of e ≈ 2.72. @InProceedings{STOC19p674, author = {Yaonan Jin and Pinyan Lu and Qi Qi and Zhihao Gavin Tang and Tao Xiao}, title = {Tight Approximation Ratio of Anonymous Pricing}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {674-685}, doi = {10.1145/3313276.3316331}, year = {2019}, } Publisher's Version 

Quanrud, Kent 
STOC '19: "Parallelizing Greedy for Submodular ..."
Parallelizing Greedy for Submodular Set Function Maximization in Matroids and Beyond
Chandra Chekuri and Kent Quanrud (University of Illinois at Urbana-Champaign, USA) We consider parallel, or low-adaptivity, algorithms for submodular function maximization. This line of work was recently initiated by Balkanski and Singer and has already led to several interesting results on the cardinality constraint and explicit packing constraints. An important open problem is the classical setting of matroid constraint, which has been instrumental for developments in submodular function maximization. In this paper we develop a general strategy to parallelize the well-studied greedy algorithm and use it to obtain a randomized (1/2 − є)-approximation in O(log^{2}(n)/є^{2}) rounds of adaptivity. We rely on this algorithm, and an elegant amplification approach due to Badanidiyuru and Vondrák, to obtain a fractional solution that yields a near-optimal randomized (1 − 1/e − є)-approximation in O(log^{2}(n)/є^{3}) rounds of adaptivity. For nonnegative functions we obtain a (3−2√2 − є)-approximation and a fractional solution that yields a (1/e − є)-approximation. Our approach for parallelizing greedy yields approximations for intersections of matroids and matchoids, and the approximation ratios are comparable to those known for sequential greedy. @InProceedings{STOC19p78, author = {Chandra Chekuri and Kent Quanrud}, title = {Parallelizing Greedy for Submodular Set Function Maximization in Matroids and Beyond}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {78-89}, doi = {10.1145/3313276.3316406}, year = {2019}, } Publisher's Version 
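For orientation (not from the paper), the sequential greedy being parallelized can be sketched for a toy coverage function under a cardinality constraint, i.e., a uniform matroid:

```python
def greedy_max_coverage(sets, k):
    """Sequential greedy for monotone submodular maximization under a
    cardinality constraint (a uniform matroid): k times, pick the set with
    the largest marginal gain.  Guarantees a (1 - 1/e)-approximation for
    cardinality constraints and 1/2 for general matroids."""
    covered, chosen = set(), []
    for _ in range(k):
        best = max(sets, key=lambda i: len(sets[i] - covered))
        if not sets[best] - covered:   # no marginal gain left anywhere
            break
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered

sets = {0: {1, 2, 3}, 1: {3, 4}, 2: {4, 5, 6, 7}}
chosen, covered = greedy_max_coverage(sets, 2)
print(chosen, len(covered))  # -> [2, 0] 7
```

The paper's contribution is achieving comparable guarantees while making only polylogarithmically many such sequential decision rounds.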

Raftopoulou, Chrysanthi 
STOC '19: "Planar Graphs of Bounded Degree ..."
Planar Graphs of Bounded Degree Have Bounded Queue Number
Michael Bekos, Henry Förster, Martin Gronemann, Tamara Mchedlidze, Fabrizio Montecchiani, Chrysanthi Raftopoulou, and Torsten Ueckerdt (University of Tübingen, Germany; University of Cologne, Germany; KIT, Germany; University of Perugia, Italy; National Technical University of Athens, Greece) A queue layout of a graph consists of a linear order of its vertices and a partition of its edges into queues, so that no two independent edges of the same queue are nested. The queue number of a graph is the minimum number of queues required by any of its queue layouts. A long-standing conjecture by Heath, Leighton and Rosenberg states that the queue number of planar graphs is bounded. This conjecture has been partially settled in the positive for several subfamilies of planar graphs (most of which have bounded treewidth). In this paper, we make a further important step towards settling this conjecture. We prove that planar graphs of bounded degree (which may have unbounded treewidth) have bounded queue number. A notable implication of this result is that every planar graph of bounded degree admits a three-dimensional straight-line grid drawing in linear volume. Further implications are that every planar graph of bounded degree has bounded track number, and that every k-planar graph (i.e., every graph that can be drawn in the plane with at most k crossings per edge) of bounded degree has bounded queue number. @InProceedings{STOC19p176, author = {Michael Bekos and Henry Förster and Martin Gronemann and Tamara Mchedlidze and Fabrizio Montecchiani and Chrysanthi Raftopoulou and Torsten Ueckerdt}, title = {Planar Graphs of Bounded Degree Have Bounded Queue Number}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {176-184}, doi = {10.1145/3313276.3316324}, year = {2019}, } Publisher's Version 

Raz, Ran 
STOC '19: "Oracle Separation of BQP and ..."
Oracle Separation of BQP and PH
Ran Raz and Avishay Tal (Princeton University, USA; Stanford University, USA) We present a distribution D over inputs in {−1,1}^{2N}, such that: (1) There exists a quantum algorithm that makes one (quantum) query to the input, and runs in time O(log N), that distinguishes between D and the uniform distribution with advantage Ω(1/log N). (2) No Boolean circuit of quasipolynomial size and constant depth distinguishes between D and the uniform distribution with advantage better than polylog(N)/√N. By well-known reductions, this gives a separation of the classes PromiseBQP and PromisePH in the black-box model and implies an oracle O relative to which BQP^{O} ⊈ PH^{O}. @InProceedings{STOC19p13, author = {Ran Raz and Avishay Tal}, title = {Oracle Separation of BQP and PH}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {13-23}, doi = {10.1145/3313276.3316315}, year = {2019}, } Publisher's Version Info 

Razenshteyn, Ilya 
STOC '19: "Performance of Johnson-Lindenstrauss ..."
Performance of Johnson-Lindenstrauss Transform for k-Means and k-Medians Clustering
Konstantin Makarychev, Yury Makarychev, and Ilya Razenshteyn (Northwestern University, USA; Toyota Technological Institute at Chicago, USA; Microsoft Research, USA) Consider an instance of Euclidean k-means or k-medians clustering. We show that the cost of the optimal solution is preserved up to a factor of (1+ε) under a projection onto a random O(log(k/ε)/ε^{2})-dimensional subspace. Further, the cost of every clustering is preserved within (1+ε). More generally, our result applies to any dimension reduction map satisfying a mild sub-Gaussian-tail condition. Our bound on the dimension is nearly optimal. Additionally, our result applies to Euclidean k-clustering with the distances raised to the p-th power for any constant p. For k-means, our result resolves an open problem posed by Cohen, Elder, Musco, Musco, and Persu (STOC 2015); for k-medians, it answers a question raised by Kannan. @InProceedings{STOC19p1027, author = {Konstantin Makarychev and Yury Makarychev and Ilya Razenshteyn}, title = {Performance of Johnson-Lindenstrauss Transform for <i>k</i>-Means and <i>k</i>-Medians Clustering}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1027-1038}, doi = {10.1145/3313276.3316350}, year = {2019}, } Publisher's Version 
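As an illustrative sketch (not the paper's construction), a standard Johnson-Lindenstrauss map is multiplication by a random Gaussian matrix; on a toy instance with two well-separated clusters the optimal k-means cost survives the projection:

```python
import random, math

def jl_project(points, d_new, seed=0):
    """Johnson-Lindenstrauss-style map: multiply by a random Gaussian
    matrix with entries N(0, 1/d_new), projecting to d_new dimensions."""
    rng = random.Random(seed)
    d_old = len(points[0])
    R = [[rng.gauss(0, 1) / math.sqrt(d_new) for _ in range(d_old)]
         for _ in range(d_new)]
    return [[sum(row[j] * p[j] for j in range(d_old)) for row in R]
            for p in points]

def kmeans_cost(points, centers):
    """k-means objective: total squared distance to the nearest center."""
    return sum(min(sum((p[j] - c[j]) ** 2 for j in range(len(p)))
                   for c in centers) for p in points)

# Two well-separated clusters in 50 dimensions, each sitting on its center.
pts = [[0.0] * 50 for _ in range(5)] + [[10.0] * 50 for _ in range(5)]
centers = [[0.0] * 50, [10.0] * 50]

low = jl_project(pts, 10)
low_centers = jl_project(centers, 10)
# Here the optimal cost is 0 and stays 0; in general the theorem says it
# is preserved up to (1+eps) at target dimension O(log(k/eps)/eps^2).
print(kmeans_cost(pts, centers), kmeans_cost(low, low_centers))  # -> 0.0 0.0
```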

Regts, Guus 
STOC '19: "Algorithmic Pirogov-Sinai ..."
Algorithmic Pirogov-Sinai Theory
Tyler Helmuth, Will Perkins, and Guus Regts (University of Bristol, UK; University of Illinois at Chicago, USA; University of Amsterdam, Netherlands) We develop an efficient algorithmic approach for approximate counting and sampling in the low-temperature regime of a broad class of statistical physics models on finite subsets of the lattice ℤ^{d} and on the torus (ℤ/nℤ)^{d}. Our approach is based on combining contour representations from Pirogov–Sinai theory with Barvinok’s approach to approximate counting using truncated Taylor series. Some consequences of our main results include an FPTAS for approximating the partition function of the hard-core model at sufficiently high fugacity on subsets of ℤ^{d} with appropriate boundary conditions, and an efficient sampling algorithm for the ferromagnetic Potts model on the discrete torus (ℤ/nℤ)^{d} at sufficiently low temperature. @InProceedings{STOC19p1009, author = {Tyler Helmuth and Will Perkins and Guus Regts}, title = {Algorithmic Pirogov-Sinai Theory}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1009-1020}, doi = {10.1145/3313276.3316305}, year = {2019}, } Publisher's Version 

Reingold, Omer 
STOC '19: "Pseudorandom Generators for ..."
Pseudorandom Generators for Width-3 Branching Programs
Raghu Meka, Omer Reingold, and Avishay Tal (University of California at Los Angeles, USA; Stanford University, USA) We construct pseudorandom generators of seed length Õ(log(n) · log(1/є)) that є-fool ordered read-once branching programs (ROBPs) of width 3 and length n. For unordered ROBPs, we construct pseudorandom generators with seed length Õ(log(n) · poly(1/є)). This is the first improvement for pseudorandom generators fooling width-3 ROBPs since the work of Nisan [Combinatorica, 1992]. Our constructions are based on the “iterated milder restrictions” approach of Gopalan et al. [FOCS, 2012] (which further extends the Ajtai-Wigderson framework [FOCS, 1985]), combined with the INW generator [STOC, 1994] at the last step (as analyzed by Braverman et al. [SICOMP, 2014]). For the unordered case, we combine iterated milder restrictions with the generator of Chattopadhyay et al. [CCC, 2018]. Two conceptual ideas that play an important role in our analysis are: (1) A relabeling technique allowing us to analyze a relabeled version of the given branching program, which turns out to be much easier. (2) Treating the number of colliding layers in a branching program as a progress measure and showing that it reduces significantly under pseudorandom restrictions. In addition, we achieve nearly optimal seed length Õ(log(n/є)) for the classes of: (1) read-once polynomials on n variables, (2) locally-monotone ROBPs of length n and width 3 (generalizing read-once CNFs and DNFs), and (3) constant-width ROBPs of length n having a layer of width 2 in every consecutive polylog(n) layers. @InProceedings{STOC19p626, author = {Raghu Meka and Omer Reingold and Avishay Tal}, title = {Pseudorandom Generators for Width-3 Branching Programs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {626-637}, doi = {10.1145/3313276.3316319}, year = {2019}, } Publisher's Version Info 

Risteski, Andrej 
STOC '19: "Mean-Field Approximation, ..."
Mean-Field Approximation, Convex Hierarchies, and the Optimality of Correlation Rounding: A Unified Perspective
Vishesh Jain, Frederic Koehler, and Andrej Risteski (Massachusetts Institute of Technology, USA) The free energy is a key quantity of interest in Ising models, but unfortunately, computing it in general is computationally intractable. Two popular (variational) approximation schemes for estimating the free energy of general Ising models (in particular, even in regimes where correlation decay does not hold) are: (i) the mean-field approximation with roots in statistical physics, which estimates the free energy from below, and (ii) hierarchies of convex relaxations with roots in theoretical computer science, which estimate the free energy from above. We show, surprisingly, that the tight regime for both methods to compute the free energy to leading order is identical. More precisely, we show that the mean-field approximation to the free energy is within O((n‖J‖_{F})^{2/3}) of the true free energy, where ‖J‖_{F} denotes the Frobenius norm of the interaction matrix of the Ising model. This simultaneously subsumes both the breakthrough work of Basak and Mukherjee, who showed the tight result that the mean-field approximation is within o(n) whenever ‖J‖_{F} = o(√n), as well as the work of Jain, Koehler, and Mossel, who gave the previously best known non-asymptotic bound of O((n‖J‖_{F})^{2/3}log^{1/3}(n‖J‖_{F})). We give a simple, algorithmic proof of this result using a convex relaxation proposed by Risteski based on the Sherali-Adams hierarchy, automatically giving sub-exponential time approximation schemes for the free energy in this entire regime. Our algorithmic result is tight under Gap-ETH. We furthermore combine our techniques with spin glass theory to prove (in a strong sense) the optimality of correlation rounding, refuting a recent conjecture of Allen, O’Donnell, and Zhou. Finally, we give the tight generalization of all of these results to k-MRFs, capturing as a special case previous work on approximating MAX-k-CSP. 
@InProceedings{STOC19p1226, author = {Vishesh Jain and Frederic Koehler and Andrej Risteski}, title = {Mean-Field Approximation, Convex Hierarchies, and the Optimality of Correlation Rounding: A Unified Perspective}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1226-1236}, doi = {10.1145/3313276.3316299}, year = {2019}, } Publisher's Version 

Roland, Jérémie 
STOC '19: "Quantum Weak Coin Flipping ..."
Quantum Weak Coin Flipping
Atul Singh Arora, Jérémie Roland, and Stephan Weis (Université libre de Bruxelles, Belgium) We investigate weak coin flipping, a fundamental cryptographic primitive where two distrustful parties need to remotely establish a shared random bit. A cheating player can try to bias the output bit towards a preferred value. For weak coin flipping the players have known opposite preferred values. A weak coin-flipping protocol has a bias є if neither player can force the outcome towards their preferred value with probability more than 1/2+є. While it is known that all classical protocols have є=1/2, Mochon showed in 2007 that quantumly, weak coin flipping can be achieved with arbitrarily small bias (near perfect), but the formerly best known explicit protocol has bias 1/6 (also due to Mochon, 2005). We propose a framework to construct new explicit protocols achieving biases below 1/6. In particular, we construct explicit unitaries for protocols with bias down to 1/10. To go lower, we introduce what we call the Elliptic Monotone Align (EMA) algorithm which, together with the framework, allows us to construct protocols with arbitrarily small biases. @InProceedings{STOC19p205, author = {Atul Singh Arora and Jérémie Roland and Stephan Weis}, title = {Quantum Weak Coin Flipping}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {205-216}, doi = {10.1145/3313276.3316306}, year = {2019}, } Publisher's Version Info 

Rosen, Alon 
STOC '19: "Finding a Nash Equilibrium ..."
Finding a Nash Equilibrium Is No Easier Than Breaking Fiat-Shamir
Arka Rai Choudhuri, Pavel Hubáček, Chethan Kamath, Krzysztof Pietrzak, Alon Rosen, and Guy N. Rothblum (Johns Hopkins University, USA; Charles University in Prague, Czechia; IST Austria, Austria; IDC Herzliya, Israel; Weizmann Institute of Science, Israel) The Fiat-Shamir heuristic transforms a public-coin interactive proof into a non-interactive argument, by replacing the verifier with a cryptographic hash function that is applied to the protocol’s transcript. Constructing hash functions for which this transformation is sound is a central and long-standing open question in cryptography. We show that solving the END-OF-METERED-LINE problem is no easier than breaking the soundness of the Fiat-Shamir transformation when applied to the sumcheck protocol. In particular, if the transformed protocol is sound, then any hard problem in #P gives rise to a hard distribution in the class CLS, which is contained in PPAD. Our result opens up the possibility of sampling moderately-sized games for which it is hard to find a Nash equilibrium, by reducing the inversion of appropriately chosen one-way functions to #SAT. Our main technical contribution is a stateful incrementally verifiable procedure that, given a SAT instance over n variables, counts the number of satisfying assignments. This is accomplished via an exponential sequence of small steps, each computable in time poly(n). Incremental verifiability means that each intermediate state includes a sumcheck-based proof of its correctness, and the proof can be updated and verified in time poly(n). @InProceedings{STOC19p1103, author = {Arka Rai Choudhuri and Pavel Hubáček and Chethan Kamath and Krzysztof Pietrzak and Alon Rosen and Guy N. Rothblum}, title = {Finding a Nash Equilibrium Is No Easier Than Breaking Fiat-Shamir}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1103-1114}, doi = {10.1145/3313276.3316400}, year = {2019}, } Publisher's Version 
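To make the Fiat-Shamir heuristic concrete, here is a toy sketch (deliberately tiny, insecure parameters; not the paper's sumcheck setting) applying it to Schnorr's identification protocol, with the verifier's random challenge replaced by a hash of the transcript:

```python
import hashlib

# Toy group: g = 2 has prime order q = 11 modulo p = 23 (2^11 = 2048 ≡ 1 mod 23).
p, q, g = 23, 11, 2

def H(*vals):
    """Fiat-Shamir challenge: hash the transcript in place of the verifier."""
    data = b"|".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove(x, r):
    """Non-interactive proof of knowledge of x with y = g^x mod p.
    (r would normally be sampled at random; it is a parameter here
    only to keep the example deterministic.)"""
    y = pow(g, x, p)
    t = pow(g, r, p)            # prover's commitment
    c = H(g, y, t)              # challenge = hash of the transcript so far
    s = (r + c * x) % q         # response
    return y, t, s

def verify(y, t, s):
    c = H(g, y, t)              # recompute the same challenge
    return pow(g, s, p) == (t * pow(y, c, p)) % p

y, t, s = prove(x=7, r=4)
print(verify(y, t, s))  # -> True
```

Soundness of the transformation hinges entirely on the hash behaving "like" a random verifier, which is exactly the assumption the paper connects to the hardness of finding Nash equilibria.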

Rothblum, Guy N. 
STOC '19: "Gentle Measurement of Quantum ..."
Gentle Measurement of Quantum States and Differential Privacy
Scott Aaronson and Guy N. Rothblum (University of Texas at Austin, USA; Weizmann Institute of Science, Israel) In differential privacy (DP), we want to query a database about n users, in a way that “leaks at most ε about any individual user,” even conditioned on any outcome of the query. Meanwhile, in gentle measurement, we want to measure n quantum states, in a way that “damages the states by at most α,” even conditioned on any outcome of the measurement. In both cases, we can achieve the goal by techniques like deliberately adding noise to the outcome before returning it. This paper proves a new and general connection between the two subjects. Specifically, we show that on products of n quantum states, any measurement that is α-gentle for small α is also O(α)-DP, and any product measurement that is ε-DP is also O(ε√n)-gentle. Illustrating the power of this connection, we apply it to the recently studied problem of shadow tomography. Given an unknown d-dimensional quantum state ρ, as well as known two-outcome measurements E_{1},…,E_{m}, shadow tomography asks us to estimate Pr[E_{i} accepts ρ], for every i ∈ [m], by measuring few copies of ρ. Using our connection theorem, together with a quantum analog of the so-called private multiplicative weights algorithm of Hardt and Rothblum, we give a protocol to solve this problem using order (log m)^{2}(log d)^{2} copies of ρ, compared to Aaronson’s previous bound of O((log m)^{4}(log d)). Our protocol has the advantages of being online (that is, the E_{i}’s are processed one at a time), gentle, and conceptually simple. Other applications of our connection include new lower bounds for shadow tomography from lower bounds on DP, and a result on the safe use of estimation algorithms as subroutines inside larger quantum algorithms. @InProceedings{STOC19p322, author = {Scott Aaronson and Guy N. 
Rothblum}, title = {Gentle Measurement of Quantum States and Differential Privacy}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {322-333}, doi = {10.1145/3313276.3316378}, year = {2019}, } Publisher's Version Info STOC '19: "Fiat-Shamir: From Practice ..." Fiat-Shamir: From Practice to Theory Ran Canetti, Yilei Chen, Justin Holmgren, Alex Lombardi, Guy N. Rothblum, Ron D. Rothblum, and Daniel Wichs (Boston University, USA; Tel Aviv University, Israel; Visa Research, USA; Princeton University, USA; Massachusetts Institute of Technology, USA; Weizmann Institute of Science, Israel; Technion, Israel; Northeastern University, USA) We give new instantiations of the Fiat-Shamir transform using explicit, efficiently computable hash functions. We improve over prior work by reducing the security of these protocols to qualitatively simpler and weaker computational hardness assumptions. As a consequence of our framework, we obtain the following concrete results. 1) There exists a succinct publicly verifiable non-interactive argument system for log-space uniform computations, under the assumption that any one of a broad class of fully homomorphic encryption (FHE) schemes has almost optimal security against polynomial-time adversaries. The class includes all FHE schemes in the literature that are based on the learning with errors (LWE) problem. 2) There exists a non-interactive zero-knowledge argument system for NP in the common reference string model, under either of the following two assumptions: (i) Almost optimal hardness of search-LWE against polynomial-time adversaries, or (ii) The existence of a circular-secure FHE scheme with a standard (polynomial time, negligible advantage) level of security. 3) The classic quadratic residuosity protocol of [Goldwasser, Micali, and Rackoff, SICOMP ’89] is not zero knowledge when repeated in parallel, under any of the hardness assumptions above. 
@InProceedings{STOC19p1082, author = {Ran Canetti and Yilei Chen and Justin Holmgren and Alex Lombardi and Guy N. Rothblum and Ron D. Rothblum and Daniel Wichs}, title = {Fiat-Shamir: From Practice to Theory}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1082-1090}, doi = {10.1145/3313276.3316380}, year = {2019}, } Publisher's Version STOC '19: "Finding a Nash Equilibrium ..." Finding a Nash Equilibrium Is No Easier Than Breaking Fiat-Shamir Arka Rai Choudhuri, Pavel Hubáček, Chethan Kamath, Krzysztof Pietrzak, Alon Rosen, and Guy N. Rothblum (Johns Hopkins University, USA; Charles University in Prague, Czechia; IST Austria, Austria; IDC Herzliya, Israel; Weizmann Institute of Science, Israel) The Fiat-Shamir heuristic transforms a public-coin interactive proof into a non-interactive argument, by replacing the verifier with a cryptographic hash function that is applied to the protocol’s transcript. Constructing hash functions for which this transformation is sound is a central and long-standing open question in cryptography. We show that solving the END-OF-METERED-LINE problem is no easier than breaking the soundness of the Fiat-Shamir transformation when applied to the sumcheck protocol. In particular, if the transformed protocol is sound, then any hard problem in #P gives rise to a hard distribution in the class CLS, which is contained in PPAD. Our result opens up the possibility of sampling moderately-sized games for which it is hard to find a Nash equilibrium, by reducing the inversion of appropriately chosen one-way functions to #SAT. Our main technical contribution is a stateful incrementally verifiable procedure that, given a SAT instance over n variables, counts the number of satisfying assignments. This is accomplished via an exponential sequence of small steps, each computable in time poly(n). 
Incremental verifiability means that each intermediate state includes a sumcheck-based proof of its correctness, and the proof can be updated and verified in time poly(n). @InProceedings{STOC19p1103, author = {Arka Rai Choudhuri and Pavel Hubáček and Chethan Kamath and Krzysztof Pietrzak and Alon Rosen and Guy N. Rothblum}, title = {Finding a Nash Equilibrium Is No Easier Than Breaking Fiat-Shamir}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1103-1114}, doi = {10.1145/3313276.3316400}, year = {2019}, } Publisher's Version 

Rothblum, Ron D. 
STOC '19: "Fiat-Shamir: From Practice ..."
Fiat-Shamir: From Practice to Theory
Ran Canetti, Yilei Chen, Justin Holmgren, Alex Lombardi, Guy N. Rothblum, Ron D. Rothblum, and Daniel Wichs (Boston University, USA; Tel Aviv University, Israel; Visa Research, USA; Princeton University, USA; Massachusetts Institute of Technology, USA; Weizmann Institute of Science, Israel; Technion, Israel; Northeastern University, USA) We give new instantiations of the Fiat-Shamir transform using explicit, efficiently computable hash functions. We improve over prior work by reducing the security of these protocols to qualitatively simpler and weaker computational hardness assumptions. As a consequence of our framework, we obtain the following concrete results. 1) There exists a succinct publicly verifiable non-interactive argument system for log-space uniform computations, under the assumption that any one of a broad class of fully homomorphic encryption (FHE) schemes has almost optimal security against polynomial-time adversaries. The class includes all FHE schemes in the literature that are based on the learning with errors (LWE) problem. 2) There exists a non-interactive zero-knowledge argument system for NP in the common reference string model, under either of the following two assumptions: (i) Almost optimal hardness of search-LWE against polynomial-time adversaries, or (ii) The existence of a circular-secure FHE scheme with a standard (polynomial time, negligible advantage) level of security. 3) The classic quadratic residuosity protocol of [Goldwasser, Micali, and Rackoff, SICOMP ’89] is not zero knowledge when repeated in parallel, under any of the hardness assumptions above. @InProceedings{STOC19p1082, author = {Ran Canetti and Yilei Chen and Justin Holmgren and Alex Lombardi and Guy N. Rothblum and Ron D. Rothblum and Daniel Wichs}, title = {Fiat-Shamir: From Practice to Theory}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1082-1090}, doi = {10.1145/3313276.3316380}, year = {2019}, } Publisher's Version 

Rubin, Natan 
STOC '19: "Planar Point Sets Determine ..."
Planar Point Sets Determine Many Pairwise Crossing Segments
János Pach, Natan Rubin, and Gábor Tardos (EPFL, Switzerland; Renyi Institute, Hungary; Ben-Gurion University of the Negev, Israel; Central European University, Hungary) We show that any set of n points in general position in the plane determines n^{1−o(1)} pairwise crossing segments. The best previously known lower bound, Ω(√n), was proved more than 25 years ago by Aronov, Erdős, Goddard, Kleitman, Klugerman, Pach, and Schulman. Our proof is fully constructive, and extends to dense geometric graphs. @InProceedings{STOC19p1158, author = {János Pach and Natan Rubin and Gábor Tardos}, title = {Planar Point Sets Determine Many Pairwise Crossing Segments}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1158-1166}, doi = {10.1145/3313276.3316328}, year = {2019}, } Publisher's Version 
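The basic predicate underlying "pairwise crossing" is the standard orientation test from computational geometry (illustrative only, not part of the paper):

```python
def cross(o, a, b):
    """Sign of the cross product (a-o) x (b-o): orientation of the triple."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def segments_cross(p1, p2, p3, p4):
    """True iff segments p1p2 and p3p4 properly cross, i.e. each segment's
    endpoints lie strictly on opposite sides of the other's supporting line
    (sufficient for points in general position, as in the abstract)."""
    d1 = cross(p3, p4, p1)
    d2 = cross(p3, p4, p2)
    d3 = cross(p1, p2, p3)
    d4 = cross(p1, p2, p4)
    return (d1 > 0) != (d2 > 0) and (d3 > 0) != (d4 > 0)

print(segments_cross((0, 0), (2, 2), (0, 2), (2, 0)))  # -> True
print(segments_cross((0, 0), (1, 1), (2, 2), (3, 3)))  # -> False
```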

Rubinstein, Aviad 
STOC '19: "Near-Linear Time Insertion-Deletion ..."
Near-Linear Time Insertion-Deletion Codes and (1+ε)-Approximating Edit Distance via Indexing
Bernhard Haeupler, Aviad Rubinstein, and Amirbehshad Shahrasbi (Carnegie Mellon University, USA; Stanford University, USA) We introduce fast-decodable indexing schemes for edit distance which can be used to speed up edit distance computations to near-linear time if one of the strings is indexed by an indexing string I. In particular, for every length n and every ε > 0, one can in near-linear time construct a string I ∈ Σ′^{n} with |Σ′| = O_{ε}(1), such that, indexing any string S ∈ Σ^{n}, symbol-by-symbol, with I results in a string S′ ∈ Σ″^{n} where Σ″ = Σ × Σ′ for which edit distance computations are easy, i.e., one can compute a (1+ε)-approximation of the edit distance between S′ and any other string in O(n (log n)) time. Our indexing schemes can be used to improve the decoding complexity of state-of-the-art error correcting codes for insertions and deletions. In particular, they lead to near-linear time decoding algorithms for the insertion-deletion codes of [Haeupler, Shahrasbi; STOC ‘17] and faster decoding algorithms for list-decodable insertion-deletion codes of [Haeupler, Shahrasbi, Sudan; ICALP ‘18]. Interestingly, the latter codes are a crucial ingredient in the construction of fast-decodable indexing schemes. @InProceedings{STOC19p697, author = {Bernhard Haeupler and Aviad Rubinstein and Amirbehshad Shahrasbi}, title = {Near-Linear Time Insertion-Deletion Codes and (1+<i>ε</i>)-Approximating Edit Distance via Indexing}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {697-708}, doi = {10.1145/3313276.3316371}, year = {2019}, } Publisher's Version STOC '19: "An Optimal Approximation for ..." An Optimal Approximation for Submodular Maximization under a Matroid Constraint in the Adaptive Complexity Model Eric Balkanski, Aviad Rubinstein, and Yaron Singer (Harvard University, USA; Stanford University, USA) In this paper we study submodular maximization under a matroid constraint in the adaptive complexity model. 
This model was recently introduced in the context of submodular optimization to quantify the information-theoretic complexity of black-box optimization in a parallel computation model. Informally, the adaptivity of an algorithm is the number of sequential rounds it makes when each round can execute polynomially many function evaluations in parallel. Since submodular optimization is regularly applied on large datasets, we seek algorithms with low adaptivity to enable speedups via parallelization. Consequently, a recent line of work has been devoted to designing constant-factor approximation algorithms for maximizing submodular functions under various constraints in the adaptive complexity model. Despite the burst in work on submodular maximization in the adaptive complexity model, the fundamental problem of maximizing a monotone submodular function under a matroid constraint has remained elusive. In particular, all known techniques fail for this problem and there are no known constant-factor approximation algorithms whose adaptivity is sublinear in the rank of the matroid k or, in the worst case, sublinear in the size of the ground set n. In this paper we present an approximation algorithm for the problem of maximizing a monotone submodular function under a matroid constraint in the adaptive complexity model. The approximation guarantee of the algorithm is arbitrarily close to the optimal 1−1/e and it has near-optimal adaptivity of O(log(n) log(k)). This result is obtained using a novel technique of adaptive sequencing which departs from previous techniques for submodular maximization in the adaptive complexity model. In addition to our main result we show how to use this technique to design other approximation algorithms with strong approximation guarantees and polylogarithmic adaptivity. 
@InProceedings{STOC19p66, author = {Eric Balkanski and Aviad Rubinstein and Yaron Singer}, title = {An Optimal Approximation for Submodular Maximization under a Matroid Constraint in the Adaptive Complexity Model}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {66-77}, doi = {10.1145/3313276.3316304}, year = {2019}, } Publisher's Version 
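For context on the first entry above (not part of either paper), the exact quadratic-time dynamic program is the baseline that indexing-based (1+ε)-approximations aim to beat:

```python
def edit_distance(a, b):
    """Classical O(|a|*|b|) dynamic program for edit distance
    (insertions, deletions, substitutions), using two rolling rows."""
    m, n = len(a), len(b)
    prev = list(range(n + 1))           # row for the empty prefix of a
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cur[j] = min(prev[j] + 1,                           # delete a[i-1]
                         cur[j - 1] + 1,                        # insert b[j-1]
                         prev[j - 1] + (a[i - 1] != b[j - 1]))  # substitute
        prev = cur
    return prev[n]

print(edit_distance("kitten", "sitting"))  # -> 3
```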

Sachdeva, Sushant 
STOC '19: "Flows in Almost Linear Time ..."
Flows in Almost Linear Time via Adaptive Preconditioning
Rasmus Kyng, Richard Peng, Sushant Sachdeva, and Di Wang (Harvard University, USA; Georgia Tech, USA; Microsoft Research, USA; University of Toronto, Canada) We present algorithms for solving a large class of flow and regression problems on unit-weighted graphs to (1 + 1/poly(n)) accuracy in almost-linear time. These problems include ℓ_{p}-norm minimizing flow for large p (p ∈ [ω(1), o(log^{2/3} n)]), and their duals, ℓ_{p}-norm semi-supervised learning for p close to 1. As p tends to infinity, p-norm flow and its dual tend to max-flow and min-cut respectively. Using this connection and our algorithms, we give an alternate approach for approximating undirected max-flow, and the first almost-linear time approximations of discretizations of total variation minimization objectives. Our framework is inspired by the routing-based solver for Laplacian linear systems by Spielman and Teng (STOC ’04, SIMAX ’14), and is based on several new tools we develop, including adaptive nonlinear preconditioning, tree-routings, and (ultra-)sparsification for mixed ℓ_{2} and ℓ_{p} norm objectives. @InProceedings{STOC19p902, author = {Rasmus Kyng and Richard Peng and Sushant Sachdeva and Di Wang}, title = {Flows in Almost Linear Time via Adaptive Preconditioning}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {902-913}, doi = {10.1145/3313276.3316410}, year = {2019}, } Publisher's Version 

Saha, Barna 
STOC '19: "Dynamic Set Cover: Improved ..."
Dynamic Set Cover: Improved Algorithms and Lower Bounds
Amir Abboud, Raghavendra Addanki, Fabrizio Grandoni, Debmalya Panigrahi, and Barna Saha (IBM Research, USA; University of Massachusetts at Amherst, USA; IDSIA, Switzerland; Duke University, USA) We give new upper and lower bounds for the dynamic set cover problem. First, we give a (1+є)f-approximation for fully dynamic set cover in O(f^{2} log n/є^{5}) (amortized) update time, for any є > 0, where f is the maximum number of sets that an element belongs to. In the decremental setting, the update time can be improved to O(f^{2}/є^{5}), while still obtaining a (1+є)f-approximation. These are the first algorithms that obtain an approximation factor linear in f for dynamic set cover, thereby almost matching the best bounds known in the offline setting and improving upon the previous best approximation of O(f^{2}) in the dynamic setting. To complement our upper bounds, we also show that a linear dependence of the update time on f is necessary unless we can tolerate much worse approximation factors. Using the recent distributed PCP framework, we show that any dynamic set cover algorithm that has an amortized update time of O(f^{1−є}) must have an approximation factor that is Ω(n^{δ}) for some constant δ>0 under the Strong Exponential Time Hypothesis. @InProceedings{STOC19p114, author = {Amir Abboud and Raghavendra Addanki and Fabrizio Grandoni and Debmalya Panigrahi and Barna Saha}, title = {Dynamic Set Cover: Improved Algorithms and Lower Bounds}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {114--125}, doi = {10.1145/3313276.3316376}, year = {2019}, } Publisher's Version 
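As a point of reference for the f-dependence in these bounds, the classic static frequency-based f-approximation for set cover can be sketched in a few lines: whenever an element is uncovered, take every set containing it, charging each step to one set of the optimal cover. This is a hedged illustration of the offline baseline the dynamic algorithms are measured against, not the paper's dynamic data structure; the input representation below is our own toy choice.

```python
def f_approx_set_cover(elements, sets):
    """Static f-approximation for set cover, where f is the maximum
    number of sets any element belongs to: whenever an element is
    uncovered, add *every* set containing it to the cover. Each such
    step is charged to one set of the optimum, losing a factor <= f."""
    cover = set()      # indices of chosen sets
    covered = set()    # elements covered so far
    for e in elements:
        if e not in covered:
            for i, s in enumerate(sets):
                if e in s:
                    cover.add(i)
                    covered |= s
    return cover

# Toy instance: f = 2 (no element appears in more than two sets).
sets = [{1, 2}, {2, 3}, {3, 4}]
print(sorted(f_approx_set_cover([1, 2, 3, 4], sets)))  # → [0, 1, 2]
```

Here the algorithm picks three sets while the optimum {0, 2} has size two, within the guaranteed factor f = 2.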

Saha, Chandan 
STOC '19: "Reconstruction of Nondegenerate ..."
Reconstruction of Nondegenerate Homogeneous Depth Three Circuits
Neeraj Kayal and Chandan Saha (Microsoft Research, India; Indian Institute of Science, India) A homogeneous depth three circuit C computes a polynomial f = T_{1} + T_{2} + ... + T_{s}, where each T_{i} is a product of d linear forms in n variables over some underlying field F. Given black-box access to f, can we efficiently reconstruct (i.e. properly learn) a homogeneous depth three circuit computing f? Learning various subclasses of circuits is natural and interesting from both theoretical and practical standpoints, and in particular, properly learning homogeneous depth three circuits efficiently is stated as an open problem in a work by Klivans and Shpilka (COLT 2003) and is well studied. Unfortunately, there is a substantial amount of evidence showing that this is a hard problem in the worst case. We give a (randomized) poly(n,d,s)-time algorithm to reconstruct nondegenerate homogeneous depth three circuits for n = Ω(d^{2}) (with some additional mild requirements on s and the characteristic of F). We call a circuit C nondegenerate if the dimension of the partial derivative space of f equals the sum of the dimensions of the partial derivative spaces of the terms T_{1}, T_{2}, …, T_{s}. In this sense, the terms are “independent” of each other in a nondegenerate circuit. A random homogeneous depth three circuit (where the coefficients of the linear forms are chosen according to the uniform distribution or any other reasonable distribution) is almost surely nondegenerate. In comparison, previous learning algorithms for this circuit class were either improper (with an exponential dependence on d), or they only worked for s < n (with a doubly exponential dependence of the running time on s). The main contribution of this work is to formulate the following paradigm for efficiently handling addition gates and to successfully implement it for the class of homogeneous depth three circuits. 
The problem of finding the children of an addition gate with large fan-in s is first reduced to the problem of decomposing a suitable vector space U into a (direct) sum of simpler subspaces U_{1}, U_{2}, …, U_{s}. One then constructs a suitable space of operators S consisting of linear maps acting on U such that analyzing the simultaneous global structure of S enables us to efficiently decompose U. In our case, we exploit the structure of the set of low rank matrices in S and of the invariant subspaces of U induced by S. We feel that this paradigm is novel and powerful: it should lead to efficient reconstruction of many other subclasses of circuits for which the efficient reconstruction problem had hitherto looked unapproachable because of the presence of large fan-in addition gates. @InProceedings{STOC19p413, author = {Neeraj Kayal and Chandan Saha}, title = {Reconstruction of Nondegenerate Homogeneous Depth Three Circuits}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {413--424}, doi = {10.1145/3313276.3316360}, year = {2019}, } Publisher's Version 

Saranurak, Thatchaphol 
STOC '19: "Distributed Edge Connectivity ..."
Distributed Edge Connectivity in Sublinear Time
Mohit Daga, Monika Henzinger, Danupon Nanongkai, and Thatchaphol Saranurak (KTH, Sweden; University of Vienna, Austria; Toyota Technological Institute at Chicago, USA) We present the first sublinear-time algorithm that can compute the edge connectivity λ of a network exactly on distributed message-passing networks (the CONGEST model), as long as the network contains no parallel edges. Our algorithm takes Õ(n^{1−1/353}D^{1/353}+n^{1−1/706}) time to compute λ and a cut of cardinality λ with high probability, where n and D are the number of nodes and the diameter of the network, respectively, and Õ hides polylogarithmic factors. This running time is sublinear in n (i.e. Õ(n^{1−є})) whenever D is. Previous sublinear-time distributed algorithms can solve this problem either (i) exactly only when λ=O(n^{1/8−є}) [Thurimella PODC’95; Pritchard, Thurimella, ACM Trans. Algorithms’11; Nanongkai, Su, DISC’14] or (ii) approximately [Ghaffari, Kuhn, DISC’13; Nanongkai, Su, DISC’14]. To achieve this we develop and combine several new techniques. First, we design the first distributed algorithm that can compute a k-edge connectivity certificate for any k=O(n^{1−є}) in time Õ(√nk+D). The previous sublinear-time algorithm can do so only when k=o(√n) [Thurimella PODC’95]. In fact, our algorithm can be turned into the first parallel algorithm with polylogarithmic depth and near-linear work. Previous near-linear work algorithms are essentially sequential, and previous polylogarithmic-depth algorithms require Ω(mk) work in the worst case (e.g. [Karger, Motwani, STOC’93]). 
Second, we show that by combining the recent distributed expander decomposition technique of [Chang, Pettie, Zhang, SODA’19] with techniques from the sequential deterministic edge connectivity algorithm of [Kawarabayashi, Thorup, STOC’15], we can decompose the network into a sublinear number of clusters with small average diameter and without any mincut separating a cluster (except the “trivial” ones). This leads to a simplification of the Kawarabayashi-Thorup framework (except that we are randomized while they are deterministic). This might make this framework more useful in other models of computation. Finally, by extending the tree packing technique from [Karger STOC’96], we can find the minimum cut in time proportional to the number of components. As a byproduct of this technique, we obtain an Õ(n)-time algorithm for computing exact minimum cut for weighted graphs. @InProceedings{STOC19p343, author = {Mohit Daga and Monika Henzinger and Danupon Nanongkai and Thatchaphol Saranurak}, title = {Distributed Edge Connectivity in Sublinear Time}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {343--354}, doi = {10.1145/3313276.3316346}, year = {2019}, } Publisher's Version STOC '19: "Breaking Quadratic Time for ..." Breaking Quadratic Time for Small Vertex Connectivity and an Approximation Scheme Danupon Nanongkai, Thatchaphol Saranurak, and Sorrachai Yingchareonthawornchai (KTH, Sweden; Toyota Technological Institute at Chicago, USA; Michigan State University, USA; Aalto University, Finland) Vertex connectivity is a classic, extensively studied problem. Given an integer k, its goal is to decide if an n-node m-edge graph can be disconnected by removing k vertices. Although a linear-time algorithm has been postulated since 1974 [Aho, Hopcroft and Ullman], and despite its sibling problem of edge connectivity being resolved over two decades ago [Karger STOC’96], so far no vertex connectivity algorithms are faster than O(n^{2}) time even for k=4 and m=O(n). 
In the simplest case where m=O(n) and k=O(1), the O(n^{2}) bound dates five decades back to [Kleitman IEEE Trans. Circuit Theory’69]. For higher m, O(m) time is known for k≤ 3 [Tarjan FOCS’71; Hopcroft, Tarjan SICOMP’73]; the first O(n^{2}) time bound is from [Kanevsky, Ramachandran, FOCS’87] for k=4 and from [Nagamochi, Ibaraki, Algorithmica’92] for k=O(1). For general k and m, the best bound is Õ(min(kn^{2}, n^{ω}+nk^{ω})) [Henzinger, Rao, Gabow FOCS’96; Linial, Lovász, Wigderson FOCS’86], where Õ hides polylogarithmic terms and ω<2.38 is the matrix multiplication exponent. In this paper, we present a randomized Monte Carlo algorithm with Õ(m+k^{7/3}n^{4/3}) time for any k=O(√n). This gives the first subquadratic time bound for any 4≤ k ≤ o(n^{2/7}) (subquadratic time refers to O(m)+o(n^{2}) time) and improves all the above classic bounds for all k≤ n^{0.44}. We also present a new randomized Monte Carlo (1+є)-approximation algorithm that is strictly faster than the previous 2-approximation algorithm of Henzinger [J. Algorithms’97] and all previous exact algorithms. The story is the same for the directed case, where our exact Õ(min{km^{2/3}n, km^{4/3}})-time algorithm for any k = O(√n) and our (1+є)-approximation algorithm improve classic bounds for small and large k, respectively. Additionally, ours is the first approximation algorithm on directed graphs. The key to our results is to avoid computing single-source connectivity, which was needed by all previous exact algorithms and is not known to admit o(n^{2}) time. Instead, we design the first local algorithm for computing vertex connectivity; without reading the whole graph, our algorithm can find a separator of size at most k or certify that there is no separator of size at most k “near” a given seed node. 
@InProceedings{STOC19p241, author = {Danupon Nanongkai and Thatchaphol Saranurak and Sorrachai Yingchareonthawornchai}, title = {Breaking Quadratic Time for Small Vertex Connectivity and an Approximation Scheme}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {241--252}, doi = {10.1145/3313276.3316394}, year = {2019}, } Publisher's Version 

Schaeffer, Luke 
STOC '19: "Exponential Separation between ..."
Exponential Separation between Shallow Quantum Circuits and Unbounded Fan-In Shallow Classical Circuits
Adam Bene Watts, Robin Kothari, Luke Schaeffer, and Avishay Tal (Massachusetts Institute of Technology, USA; Microsoft Research, USA; Stanford University, USA) Recently, Bravyi, Gosset, and König (Science, 2018) exhibited a search problem called the 2D Hidden Linear Function (2D HLF) problem that can be solved exactly by a constant-depth quantum circuit using bounded fan-in gates (or QNC^0 circuits), but cannot be solved by any constant-depth classical circuit using bounded fan-in AND, OR, and NOT gates (or NC^0 circuits). In other words, they exhibited a search problem in QNC^0 that is not in NC^0. We strengthen their result by proving that the 2D HLF problem is not contained in AC^0, the class of classical, polynomial-size, constant-depth circuits over the gate set of unbounded fan-in AND and OR gates, and NOT gates. We also supplement this worst-case lower bound with an average-case result: There exists a simple distribution under which any AC^0 circuit (even of nearly exponential size) has exponentially small correlation with the 2D HLF problem. Our results are shown by constructing a new problem in QNC^0, which we call the Parity Halving Problem, which is easier to work with. We prove our AC^0 lower bounds for this problem, and then show that it reduces to the 2D HLF problem. @InProceedings{STOC19p515, author = {Adam Bene Watts and Robin Kothari and Luke Schaeffer and Avishay Tal}, title = {Exponential Separation between Shallow Quantum Circuits and Unbounded Fan-In Shallow Classical Circuits}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {515--526}, doi = {10.1145/3313276.3316404}, year = {2019}, } Publisher's Version 

Schweitzer, Pascal 
STOC '19: "A Unifying Method for the ..."
A Unifying Method for the Design of Algorithms Canonizing Combinatorial Objects
Pascal Schweitzer and Daniel Wiebking (TU Kaiserslautern, Germany; RWTH Aachen University, Germany) We devise a unified framework for the design of canonization algorithms. Using hereditarily finite sets, we define a general notion of combinatorial objects that includes graphs, hypergraphs, relational structures, codes, permutation groups, tree decompositions, and so on. Our approach allows for a systematic transfer of the techniques that have been developed for isomorphism testing to canonization. We use it to design a canonization algorithm for general combinatorial objects. This result yields the fastest known canonization algorithms, with asymptotic running times matching the best known isomorphism algorithms, for the following types of objects: hypergraphs, hypergraphs of bounded color class size, permutation groups (up to permutational isomorphism) and codes that are explicitly given (up to code equivalence). @InProceedings{STOC19p1247, author = {Pascal Schweitzer and Daniel Wiebking}, title = {A Unifying Method for the Design of Algorithms Canonizing Combinatorial Objects}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1247--1258}, doi = {10.1145/3313276.3316338}, year = {2019}, } Publisher's Version 

Schwiegelshohn, Chris 
STOC '19: "Oblivious Dimension Reduction ..."
Oblivious Dimension Reduction for k-Means: Beyond Subspaces and the Johnson-Lindenstrauss Lemma
Luca Becchetti, Marc Bury, Vincent Cohen-Addad, Fabrizio Grandoni, and Chris Schwiegelshohn (Sapienza University of Rome, Italy; Zalando, Switzerland; CNRS, France; IDSIA, Switzerland) We show that for n points in d-dimensional Euclidean space, a data-oblivious random projection of the columns onto m ∈ O((log k + log log n) ε^{−6} log(1/ε)) dimensions is sufficient to approximate the cost of all k-means clusterings up to a multiplicative (1±ε) factor. The previous-best upper bounds on m are O(log n · ε^{−2}), given by a direct application of the Johnson-Lindenstrauss Lemma, and O(k ε^{−2}), given by [Cohen et al., STOC’15]. @InProceedings{STOC19p1039, author = {Luca Becchetti and Marc Bury and Vincent Cohen-Addad and Fabrizio Grandoni and Chris Schwiegelshohn}, title = {Oblivious Dimension Reduction for <i>k</i>-Means: Beyond Subspaces and the Johnson-Lindenstrauss Lemma}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1039--1050}, doi = {10.1145/3313276.3316318}, year = {2019}, } Publisher's Version 
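To make the statement concrete, here is a minimal sketch of what "data-oblivious" means in this setting: the projection matrix G is drawn without looking at the data, scaled so that squared lengths are preserved in expectation, and the k-means cost of any fixed clustering can be compared before and after projection. The point set, clustering, and target dimension m below are toy assumptions of ours; the theorem's m is the carefully chosen O((log k + log log n) ε^{−6} log(1/ε)), and no accuracy guarantee is claimed at these toy sizes.

```python
import math
import random

rng = random.Random(0)
n, d, m, k = 120, 30, 12, 3   # toy sizes, far smaller than the theorem requires

# n points in R^d and an arbitrary fixed clustering into k parts.
X = [[rng.gauss(0, 1) for _ in range(d)] for _ in range(n)]
labels = [i % k for i in range(n)]

# Data-oblivious projection: G is drawn independently of X, with entries
# N(0, 1/m) so that E[||Gv||^2] = ||v||^2 for every fixed vector v.
G = [[rng.gauss(0, 1) / math.sqrt(m) for _ in range(m)] for _ in range(d)]
Y = [[sum(x[i] * G[i][j] for i in range(d)) for j in range(m)] for x in X]

def kmeans_cost(P, labels, k):
    """Sum over clusters of squared distances to the cluster centroid."""
    cost = 0.0
    for c in range(k):
        pts = [p for p, l in zip(P, labels) if l == c]
        centroid = [sum(col) / len(pts) for col in zip(*pts)]
        cost += sum(sum((pi - ci) ** 2 for pi, ci in zip(p, centroid))
                    for p in pts)
    return cost

ratio = kmeans_cost(Y, labels, k) / kmeans_cost(X, labels, k)
print(ratio)   # close to 1 for large enough m; no guarantee at toy sizes
```

The theorem says that for the stated m, this ratio lies in [1−ε, 1+ε] simultaneously for every clustering, not just the one fixed here.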

Seddighin, Saeed 
STOC '19: "1+ε Approximation ..."
1+ε Approximation of Tree Edit Distance in Quadratic Time
Mahdi Boroujeni, Mohammad Ghodsi, MohammadTaghi Hajiaghayi, and Saeed Seddighin (Sharif University of Technology, Iran; Institute for Research in Fundamental Sciences, Iran; University of Maryland, USA) Edit distance is one of the most fundamental problems in computer science. Tree edit distance is a natural generalization of edit distance to ordered rooted trees. Such a generalization extends the applications of edit distance to areas such as computational biology, structured data analysis (e.g., XML), image analysis, and compiler optimization. Perhaps the most notable application of tree edit distance is in the analysis of RNA molecules in computational biology, where the secondary structure of RNA is typically represented as a rooted tree. The best-known solution for tree edit distance runs in cubic time. Recently, Bringmann et al. showed that an O(n^{2.99}) algorithm for weighted tree edit distance is unlikely, by proving a conditional lower bound on the computational complexity of tree edit distance. This shows a substantial gap between the computational complexity of tree edit distance and that of edit distance, for which a simple dynamic program solves the problem in quadratic time. In this work, we give the first nontrivial approximation algorithms for tree edit distance. Our main result is a quadratic-time approximation scheme for tree edit distance that approximates the solution within a factor of 1+є for any constant є > 0. @InProceedings{STOC19p709, author = {Mahdi Boroujeni and Mohammad Ghodsi and MohammadTaghi Hajiaghayi and Saeed Seddighin}, title = {1+<i>ε</i> Approximation of Tree Edit Distance in Quadratic Time}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {709--720}, doi = {10.1145/3313276.3316388}, year = {2019}, } Publisher's Version 
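The "simple dynamic program" for string edit distance that the abstract contrasts with the tree case is the classic quadratic-time table: dp[i][j] is the cost of transforming the first i characters of one string into the first j characters of the other. A standard sketch (this is the textbook string algorithm, not the paper's tree algorithm):

```python
def edit_distance(a: str, b: str) -> int:
    """Classic O(n*m) dynamic program for string edit distance with
    unit-cost insertions, deletions, and substitutions."""
    n, m = len(a), len(b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dp[i][0] = i          # delete all of a[:i]
    for j in range(m + 1):
        dp[0][j] = j          # insert all of b[:j]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dp[i][j] = min(
                dp[i - 1][j] + 1,                            # delete a[i-1]
                dp[i][j - 1] + 1,                            # insert b[j-1]
                dp[i - 1][j - 1] + (a[i - 1] != b[j - 1]),   # substitute
            )
    return dp[n][m]

print(edit_distance("kitten", "sitting"))  # → 3
```

For trees, the analogous recursion must also decide how subtrees are matched, which is what pushes the best exact algorithms to cubic time.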

Sellke, Mark 
STOC '19: "Competitively Chasing Convex ..."
Competitively Chasing Convex Bodies
Sébastien Bubeck, Yin Tat Lee, Yuanzhi Li, and Mark Sellke (Microsoft Research, USA; University of Washington, USA; Stanford University, USA) Let F be a family of sets in some metric space. In the F-chasing problem, an online algorithm observes a request sequence of sets in F and responds (online) by giving a sequence of points in these sets. The movement cost is the distance between consecutive such points. The competitive ratio is the worst-case ratio (over request sequences) between the total movement of the online algorithm and the smallest movement one could have achieved by knowing the request sequence in advance. The family F is said to be chaseable if there exists an online algorithm with finite competitive ratio. In 1991, Linial and Friedman conjectured that the family of convex sets in Euclidean space is chaseable. We prove this conjecture. @InProceedings{STOC19p861, author = {Sébastien Bubeck and Yin Tat Lee and Yuanzhi Li and Mark Sellke}, title = {Competitively Chasing Convex Bodies}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {861--868}, doi = {10.1145/3313276.3316314}, year = {2019}, } Publisher's Version 
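To illustrate the cost model only: a toy one-dimensional instance in which each request is an interval and the responder greedily projects its current point onto the requested set. The interval data and the greedy rule are our own toy choices; the theorem concerns arbitrary convex sets in any dimension, where a simple greedy rule is not enough and a genuinely new algorithm is needed.

```python
def chase_intervals(requests, start=0.0):
    """Online set chasing in one dimension. Each request is an interval
    [lo, hi]; we respond with the Euclidean projection of the current
    point onto it and accumulate the movement cost, i.e. the distance
    between consecutive response points."""
    x, cost = start, 0.0
    for lo, hi in requests:
        y = min(max(x, lo), hi)   # projection of x onto [lo, hi]
        cost += abs(y - x)
        x = y
    return x, cost

print(chase_intervals([(2, 3), (0, 1), (2, 4)]))  # → (2, 4.0)
```

The offline optimum for the same request sequence may be cheaper; the competitive ratio compares the online cost against that benchmark over all request sequences.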

Servedio, Rocco A. 
STOC '19: "Fooling Polytopes ..."
Fooling Polytopes
Ryan O'Donnell, Rocco A. Servedio, and Li-Yang Tan (Carnegie Mellon University, USA; Columbia University, USA; Stanford University, USA) We give a pseudorandom generator that fools m-facet polytopes over {0,1}^{n} with seed length polylog(m) · log(n). The previous best seed length had superlinear dependence on m. An immediate consequence is a deterministic quasipolynomial-time algorithm for approximating the number of solutions to any {0,1}-integer program. @InProceedings{STOC19p614, author = {Ryan O'Donnell and Rocco A. Servedio and Li-Yang Tan}, title = {Fooling Polytopes}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {614--625}, doi = {10.1145/3313276.3316321}, year = {2019}, } Publisher's Version 

Seshadhri, C. 
STOC '19: "Random Walks and Forbidden ..."
Random Walks and Forbidden Minors II: A poly(d ε⁻¹)-Query Tester for Minor-Closed Properties of Bounded Degree Graphs
Akash Kumar, C. Seshadhri, and Andrew Stolman (Purdue University, USA; University of California at Santa Cruz, USA) Let G be a graph with n vertices and maximum degree d. Fix some minor-closed property P (such as planarity). We say that G is ε-far from P if one has to remove εdn edges to make it have P. The problem of property testing P was introduced in the seminal work of Benjamini-Schramm-Shapira (STOC 2008), which gave a tester with query complexity triply exponential in ε^{−1}. Levi-Ron (TALG 2015) have given the best tester to date, with a quasipolynomial (in ε^{−1}) query complexity. It is an open problem to get property testers whose query complexity is poly(dε^{−1}), even for planarity. In this paper, we resolve this open question. For any minor-closed property, we give a tester with query complexity d·poly(ε^{−1}). The previous line of work on (independent of n, two-sided) testers is primarily combinatorial. Our work, on the other hand, employs techniques from spectral graph theory. This paper is a continuation of recent work of the authors (FOCS 2018) analyzing random walk algorithms that find forbidden minors. @InProceedings{STOC19p559, author = {Akash Kumar and C. Seshadhri and Andrew Stolman}, title = {Random Walks and Forbidden Minors II: A poly(<i>d ε</i>⁻¹)-Query Tester for Minor-Closed Properties of Bounded Degree Graphs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {559--567}, doi = {10.1145/3313276.3316330}, year = {2019}, } Publisher's Version 

Shahrasbi, Amirbehshad 
STOC '19: "Near-Linear Time Insertion-Deletion ..."
Near-Linear Time Insertion-Deletion Codes and (1+ε)-Approximating Edit Distance via Indexing
Bernhard Haeupler, Aviad Rubinstein, and Amirbehshad Shahrasbi (Carnegie Mellon University, USA; Stanford University, USA) We introduce fast-decodable indexing schemes for edit distance which can be used to speed up edit distance computations to near-linear time if one of the strings is indexed by an indexing string I. In particular, for every length n and every ε > 0, one can in near-linear time construct a string I ∈ Σ′^{n} with Σ′ = O_{ε}(1), such that, indexing any string S ∈ Σ^{n}, symbol-by-symbol, with I results in a string S′ ∈ Σ″^{n} where Σ″ = Σ × Σ′ for which edit distance computations are easy, i.e., one can compute a (1+ε)-approximation of the edit distance between S′ and any other string in O(n poly(log n)) time. Our indexing schemes can be used to improve the decoding complexity of state-of-the-art error correcting codes for insertions and deletions. In particular, they lead to near-linear time decoding algorithms for the insertion-deletion codes of [Haeupler, Shahrasbi; STOC ‘17] and faster decoding algorithms for list-decodable insertion-deletion codes of [Haeupler, Shahrasbi, Sudan; ICALP ‘18]. Interestingly, the latter codes are a crucial ingredient in the construction of fast-decodable indexing schemes. @InProceedings{STOC19p697, author = {Bernhard Haeupler and Aviad Rubinstein and Amirbehshad Shahrasbi}, title = {Near-Linear Time Insertion-Deletion Codes and (1+<i>ε</i>)-Approximating Edit Distance via Indexing}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {697--708}, doi = {10.1145/3313276.3316371}, year = {2019}, } Publisher's Version 

Shapira, Asaf 
STOC '19: "Testing Graphs against an ..."
Testing Graphs against an Unknown Distribution
Lior Gishboliner and Asaf Shapira (Tel Aviv University, Israel) The classical model of graph property testing, introduced by Goldreich, Goldwasser and Ron, assumes that the algorithm can obtain uniformly distributed vertices from the input graph. Goldreich introduced a more general model, called the Vertex-Distribution-Free model (or VDF for short), in which the testing algorithm obtains vertices drawn from an arbitrary and unknown distribution. The main motivation for this investigation is that it allows one to give different weight/importance to different parts of the input graph, as well as handle situations where one cannot obtain uniformly selected vertices from the input. Goldreich proved that any property which is testable in this model must (essentially) be hereditary, and that several hereditary properties can indeed be tested in this model. He further asked which properties are testable in this model. In this paper we completely solve Goldreich’s problem by giving a precise characterization of the graph properties that are testable in the VDF model. Somewhat surprisingly, this characterization takes the following clean form: say that a graph property P is extendable if, given any graph G satisfying P, one can add one more vertex to G and connect it to some of the vertices of G in a way that the resulting graph satisfies P. Then a property P is testable in the VDF model if and only if P is hereditary and extendable. @InProceedings{STOC19p535, author = {Lior Gishboliner and Asaf Shapira}, title = {Testing Graphs against an Unknown Distribution}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {535--546}, doi = {10.1145/3313276.3316308}, year = {2019}, } Publisher's Version 

Sharan, Vatsal 
STOC '19: "Memory-Sample Tradeoffs for ..."
Memory-Sample Tradeoffs for Linear Regression with Small Error
Vatsal Sharan, Aaron Sidford, and Gregory Valiant (Stanford University, USA) We consider the problem of performing linear regression over a stream of d-dimensional examples, and show that any algorithm that uses a subquadratic amount of memory exhibits a slower rate of convergence than can be achieved without memory constraints. Specifically, consider a sequence of labeled examples (a_{1},b_{1}), (a_{2},b_{2}), …, with a_{i} drawn independently from a d-dimensional isotropic Gaussian, and where b_{i} = ⟨a_{i}, x⟩ + η_{i}, for a fixed x ∈ ℝ^{d} with ‖x‖_{2} = 1 and with independent noise η_{i} drawn uniformly from the interval [−2^{−d/5},2^{−d/5}]. We show that any algorithm with at most d^{2}/4 bits of memory requires at least Ω(d log log 1/є) samples to approximate x to ℓ_{2} error є with probability of success at least 2/3, for є sufficiently small as a function of d. In contrast, for such є, x can be recovered to error є with probability 1−o(1) with memory O(d^{2} log(1/є)) using d examples. This represents the first nontrivial lower bounds for regression with superlinear memory, and may open the door for strong memory/sample tradeoffs for continuous optimization. @InProceedings{STOC19p890, author = {Vatsal Sharan and Aaron Sidford and Gregory Valiant}, title = {Memory-Sample Tradeoffs for Linear Regression with Small Error}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {890--901}, doi = {10.1145/3313276.3316403}, year = {2019}, } Publisher's Version 

Shayevitz, Ofer 
STOC '19: "Communication Complexity of ..."
Communication Complexity of Estimating Correlations
Uri Hadar, Jingbo Liu, Yury Polyanskiy, and Ofer Shayevitz (Tel Aviv University, Israel; Massachusetts Institute of Technology, USA) We characterize the communication complexity of the following distributed estimation problem. Alice and Bob observe infinitely many iid copies of ρ-correlated unit-variance (Gaussian or ±1 binary) random variables, with unknown ρ ∈ [−1,1]. By interactively exchanging k bits, Bob wants to produce an estimate ρ̂ of ρ. We show that the best possible performance (optimized over interaction protocol Π and estimator ρ̂) satisfies inf_{Π, ρ̂} sup_{ρ} E[(ρ̂−ρ)^{2}] = k^{−1}(1/(2 ln 2) + o(1)). Curiously, the number of samples in our achievability scheme is exponential in k; by contrast, a naive scheme exchanging k samples achieves the same Ω(1/k) rate but with a suboptimal prefactor. Our protocol achieving optimal performance is one-way (non-interactive). We also prove the Ω(1/k) bound even when ρ is restricted to any small open subinterval of [−1,1] (i.e. a local minimax lower bound). Our proof techniques rely on symmetric strong data-processing inequalities and various tensorization techniques from information-theoretic interactive common-randomness extraction. Our results also imply an Ω(n) lower bound on the information complexity of the Gap-Hamming problem, for which we show a direct information-theoretic proof. @InProceedings{STOC19p792, author = {Uri Hadar and Jingbo Liu and Yury Polyanskiy and Ofer Shayevitz}, title = {Communication Complexity of Estimating Correlations}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {792--803}, doi = {10.1145/3313276.3316332}, year = {2019}, } Publisher's Version 
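The "naive scheme exchanging k samples" mentioned in the abstract is easy to sketch for the ±1 binary case: Alice sends Bob k of her samples (k bits), Bob multiplies each with his own correlated sample, and the empirical mean estimates ρ with squared error on the order of 1/k. This is a hedged toy sketch of that baseline, not the paper's optimal protocol; the sampling procedure below is a standard way to simulate ρ-correlated ±1 pairs.

```python
import random

def naive_estimate(rho: float, k: int, seed: int = 0) -> float:
    """Naive k-bit scheme: average the products of k correlated +/-1
    sample pairs. E[a*b] = rho, so the empirical mean is unbiased with
    variance (1 - rho^2)/k, i.e. the Theta(1/k) rate with a suboptimal
    constant compared to the optimal protocol."""
    rng = random.Random(seed)
    total = 0
    for _ in range(k):
        a = rng.choice([-1, 1])
        # b agrees with a with probability (1 + rho)/2, giving E[a*b] = rho.
        b = a if rng.random() < (1 + rho) / 2 else -a
        total += a * b
    return total / k

print(naive_estimate(0.5, 10000))  # close to 0.5
```

The paper's point is that the optimal prefactor 1/(2 ln 2) requires a cleverer (still one-way) protocol whose sample usage is exponential in k.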

Sherif, Suhail 
STOC '19: "The Log-Approximate-Rank Conjecture ..."
The Log-Approximate-Rank Conjecture Is False
Arkadev Chattopadhyay, Nikhil S. Mande, and Suhail Sherif (TIFR, India; Georgetown University, USA) We construct a simple and total XOR function F on 2n variables that has only O(√n) spectral norm, O(n^{2}) approximate rank and O(n^{2.5}) approximate nonnegative rank. We show it has polynomially large randomized bounded-error communication complexity of Ω(√n). This yields the first exponential gap between the logarithm of the approximate rank and randomized communication complexity for total functions. Thus F witnesses a refutation of the Log-Approximate-Rank Conjecture (LARC), which was posed by Lee and Shraibman as a very natural analogue for randomized communication of the still unresolved Log-Rank Conjecture for deterministic communication. The best known previous gap for any total function between the two measures is a recent 4th-power separation by Göös, Jayram, Pitassi and Watson. Additionally, our function F refutes Grolmusz’s Conjecture and a variant of the Log-Approximate-Nonnegative-Rank Conjecture, suggested recently by Kol, Moran, Shpilka and Yehudayoff, both of which are implied by the LARC. The complement of F has exponentially large approximate nonnegative rank. This answers a question of Lee and of Kol et al., showing that approximate nonnegative rank can be exponentially larger than approximate rank. The function F also falsifies a conjecture about parity measures of Boolean functions made by Tsang, Wong, Xie and Zhang. The latter conjecture implied the Log-Rank Conjecture for XOR functions. We are pleased to note that shortly after we published our results, two independent groups of researchers, Anshu, Boddu and Touchette, and Sinha and de Wolf, used our function F to prove that the Quantum-Log-Rank Conjecture is also false, by showing that F has Ω(n^{1/6}) quantum communication complexity. @InProceedings{STOC19p42, author = {Arkadev Chattopadhyay and Nikhil S. Mande and Suhail Sherif}, title = {The Log-Approximate-Rank Conjecture Is False}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {42--53}, doi = {10.1145/3313276.3316353}, year = {2019}, } Publisher's Version 

Sherstov, Alexander A. 
STOC '19: "Near-Optimal Lower Bounds ..."
Near-Optimal Lower Bounds on the Threshold Degree and Sign-Rank of AC^{0}
Alexander A. Sherstov and Pei Wu (University of California at Los Angeles, USA) The threshold degree of a Boolean function f∶{0,1}^{n}→{0,1} is the minimum degree of a real polynomial p that represents f in sign: sgn p(x)=(−1)^{f(x)}. A related notion is sign-rank, defined for a Boolean matrix F=[F_{ij}] as the minimum rank of a real matrix M with sgn M_{ij}=(−1)^{F_{ij}}. Determining the maximum threshold degree and sign-rank achievable by constant-depth circuits (AC^{0}) is a well-known and extensively studied open problem, with complexity-theoretic and algorithmic applications. We give an essentially optimal solution to this problem. For any є>0, we construct an AC^{0} circuit in n variables that has threshold degree Ω(n^{1−є}) and sign-rank exp(Ω(n^{1−є})), improving on the previous best lower bounds of Ω(√n) and exp(Ω(√n)), respectively. Our results subsume all previous lower bounds on the threshold degree and sign-rank of AC^{0} circuits of any given depth, with a strict improvement starting at depth 4. As a corollary, we also obtain near-optimal bounds on the discrepancy, threshold weight, and threshold density of AC^{0}, strictly subsuming previous work on these quantities. Our work gives some of the strongest lower bounds to date on the communication complexity of AC^{0}. @InProceedings{STOC19p401, author = {Alexander A. Sherstov and Pei Wu}, title = {Near-Optimal Lower Bounds on the Threshold Degree and Sign-Rank of AC<sup>0</sup>}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {401--412}, doi = {10.1145/3313276.3316408}, year = {2019}, } Publisher's Version 

Shetty, Abhishek 
STOC '19: "Non-Gaussian Component Analysis ..."
Non-Gaussian Component Analysis using Entropy Methods
Navin Goyal and Abhishek Shetty (Microsoft Research, India) Non-Gaussian component analysis (NGCA) is a problem in multidimensional data analysis which, since its formulation in 2006, has attracted considerable attention in statistics and machine learning. In this problem, we have a random variable X in n-dimensional Euclidean space. There is an unknown subspace Γ of the n-dimensional Euclidean space such that the orthogonal projection of X onto Γ is standard multidimensional Gaussian and the orthogonal projection of X onto Γ^{⊥}, the orthogonal complement of Γ, is non-Gaussian, in the sense that all its one-dimensional marginals are different from the Gaussian in a certain metric defined in terms of moments. The NGCA problem is to approximate the non-Gaussian subspace Γ^{⊥} given samples of X. Vectors in Γ^{⊥} correspond to ‘interesting’ directions, whereas vectors in Γ correspond to the directions where the data is very noisy. The most interesting application of the NGCA model is the case when the magnitude of the noise is comparable to that of the true signal, a setting in which traditional noise reduction techniques such as PCA don’t apply directly. NGCA is also related to dimension reduction and to other data analysis problems such as ICA. NGCA-like problems have been studied in statistics for a long time using techniques such as projection pursuit. We give an algorithm that takes polynomial time in the dimension n and has an inverse polynomial dependence on the error parameter measuring the angle distance between the non-Gaussian subspace and the subspace output by the algorithm. Our algorithm is based on relative entropy as the contrast function and fits under the projection pursuit framework. The techniques we develop for analyzing our algorithm may be of use for other related problems. 
@InProceedings{STOC19p840, author = {Navin Goyal and Abhishek Shetty}, title = {Non-Gaussian Component Analysis using Entropy Methods}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {840-851}, doi = {10.1145/3313276.3316309}, year = {2019}, } Publisher's Version 

Shi, Elaine 
STOC '19: "Lower Bounds for External ..."
Lower Bounds for External Memory Integer Sorting via Network Coding
Alireza Farhadi, MohammadTaghi Hajiaghayi, Kasper Green Larsen, and Elaine Shi (University of Maryland, USA; Aarhus University, Denmark; Cornell University, USA) Sorting extremely large datasets is a frequently occurring task in practice. These datasets are usually much larger than the computer’s main memory; thus external memory sorting algorithms, first introduced by Aggarwal and Vitter (1988), are often used. The complexity of comparison-based external memory sorting has been understood for decades now; however, the situation remains elusive if we assume the keys to be sorted are integers. In internal memory, one can sort a set of n integer keys of Θ(lg n) bits each in O(n) time using the classic Radix Sort algorithm; however, in external memory, there are no faster integer sorting algorithms known than the simple comparison-based ones. Whether such algorithms exist has remained a central open problem in external memory algorithms for more than three decades. In this paper, we present a tight conditional lower bound on the complexity of external memory sorting of integers. Our lower bound is based on a famous conjecture in network coding by Li and Li (2004), who conjectured that network coding cannot help anything beyond the standard multicommodity flow rate in undirected graphs. The only previous work connecting the Li and Li conjecture to lower bounds for algorithms is due to Adler et al. (2006). Adler et al. indeed obtain relatively simple lower bounds for oblivious algorithms (the memory access pattern is fixed and independent of the input data). Unfortunately obliviousness is a strong limitation, especially for integer sorting: we show that the Li and Li conjecture implies an Ω(n log n) lower bound for internal memory oblivious sorting when the keys are Θ(lg n) bits. This is in sharp contrast to the classic (non-oblivious) Radix Sort algorithm. 
Indeed, going beyond obliviousness is highly non-trivial; we need to introduce several new methods and involved techniques, which are of independent interest, to obtain our tight lower bound for external memory integer sorting. @InProceedings{STOC19p997, author = {Alireza Farhadi and MohammadTaghi Hajiaghayi and Kasper Green Larsen and Elaine Shi}, title = {Lower Bounds for External Memory Integer Sorting via Network Coding}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {997-1008}, doi = {10.1145/3313276.3316337}, year = {2019}, } Publisher's Version 
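The internal-memory baseline the abstract contrasts against, classic Radix Sort running in O(n) time for Θ(lg n)-bit keys, can be sketched in a few lines. This is a standard textbook illustration (function and parameter names are ours), not a construction from the paper:

```python
def radix_sort(keys, key_bits=32, radix_bits=8):
    """LSD radix sort: key_bits/radix_bits stable passes, so O(n) total
    for Theta(lg n)-bit keys and a constant number of passes."""
    mask = (1 << radix_bits) - 1
    for shift in range(0, key_bits, radix_bits):
        buckets = [[] for _ in range(1 << radix_bits)]
        for k in keys:
            buckets[(k >> shift) & mask].append(k)
        keys = [k for b in buckets for k in b]  # stable concatenation
    return keys
```

Each pass streams the keys once; it is exactly this streaming access pattern that has no known fast external-memory analogue.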

Shiragur, Kirankumar 
STOC '19: "Efficient Profile Maximum ..."
Efficient Profile Maximum Likelihood for Universal Symmetric Property Estimation
Moses Charikar, Kirankumar Shiragur, and Aaron Sidford (Stanford University, USA) Estimating symmetric properties of a distribution, e.g. support size, coverage, entropy, distance to uniformity, is among the most fundamental problems in algorithmic statistics. While these properties have been studied extensively and separate optimal estimators have been produced, in striking recent work Acharya et al. provided a single estimator that is competitive for each. They showed that the value of the property on the distribution that approximately maximizes profile likelihood (PML), i.e. the probability of the observed frequency of frequencies, is sample competitive with respect to a broad class of estimators. Unfortunately, prior to this work, there was no known polynomial time algorithm to compute such an approximation or use PML to obtain a universal plug-in estimator. In this paper we provide an algorithm that, given n samples from a distribution, computes an approximate PML distribution up to a multiplicative error of exp(n^{2/3} poly log(n)) in nearly linear time. Generalizing work of Acharya et al. we show that our algorithm yields a universal plug-in estimator that is competitive with a broad range of estimators up to accuracy є = Ω(n^{−0.166}). Further, we provide efficient polynomial-time algorithms for computing a d-dimensional generalization of PML (for constant d) that allows for universal plug-in estimation of symmetric relationships between distributions. @InProceedings{STOC19p780, author = {Moses Charikar and Kirankumar Shiragur and Aaron Sidford}, title = {Efficient Profile Maximum Likelihood for Universal Symmetric Property Estimation}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {780-791}, doi = {10.1145/3313276.3316398}, year = {2019}, } Publisher's Version 
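The "profile" (frequency of frequencies) whose likelihood PML maximizes is computed directly from the sample; a minimal sketch (the function name is ours):

```python
from collections import Counter

def profile(sample):
    """Profile of a sample: the multiset of symbol multiplicities,
    i.e. the 'frequency of frequencies'. Symbol identities are discarded,
    which is why the profile suffices for symmetric properties."""
    multiplicities = Counter(sample).values()
    return Counter(multiplicities)
```

For example, in "abracadabra" the letter a occurs 5 times, b and r twice each, and c and d once each, so the profile records one symbol of multiplicity 5, two of multiplicity 2, and two of multiplicity 1.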

Shpilka, Amir 
STOC '19: "Sylvester-Gallai Type Theorems ..."
Sylvester-Gallai Type Theorems for Quadratic Polynomials
Amir Shpilka (Tel Aviv University, Israel) We prove Sylvester-Gallai type theorems for quadratic polynomials. Specifically, we prove that if a finite collection Q of irreducible polynomials of degree at most 2 satisfies that for every two polynomials Q_{1},Q_{2}∈ Q there is a third polynomial Q_{3}∈Q so that whenever Q_{1} and Q_{2} vanish then also Q_{3} vanishes, then the linear span of the polynomials in Q has dimension O(1). We also prove a colored version of the theorem: if three finite sets of quadratic polynomials satisfy that for every two polynomials from distinct sets there is a polynomial in the third set satisfying the same vanishing condition, then all polynomials are contained in an O(1)-dimensional space. This answers affirmatively two conjectures of Gupta [Electronic Colloquium on Computational Complexity (ECCC), 21:130, 2014] that were raised in the context of solving certain depth-4 polynomial identities. To obtain our main theorems we prove a new result classifying the possible ways that a quadratic polynomial Q can vanish when two other quadratic polynomials vanish. Our proofs also require robust versions of a theorem of Edelstein and Kelly (that extends the Sylvester-Gallai theorem to colored sets). @InProceedings{STOC19p1203, author = {Amir Shpilka}, title = {Sylvester-Gallai Type Theorems for Quadratic Polynomials}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1203-1214}, doi = {10.1145/3313276.3316341}, year = {2019}, } Publisher's Version 

Sidford, Aaron 
STOC '19: "Efficient Profile Maximum ..."
Efficient Profile Maximum Likelihood for Universal Symmetric Property Estimation
Moses Charikar, Kirankumar Shiragur, and Aaron Sidford (Stanford University, USA) Estimating symmetric properties of a distribution, e.g. support size, coverage, entropy, distance to uniformity, is among the most fundamental problems in algorithmic statistics. While these properties have been studied extensively and separate optimal estimators have been produced, in striking recent work Acharya et al. provided a single estimator that is competitive for each. They showed that the value of the property on the distribution that approximately maximizes profile likelihood (PML), i.e. the probability of the observed frequency of frequencies, is sample competitive with respect to a broad class of estimators. Unfortunately, prior to this work, there was no known polynomial time algorithm to compute such an approximation or use PML to obtain a universal plug-in estimator. In this paper we provide an algorithm that, given n samples from a distribution, computes an approximate PML distribution up to a multiplicative error of exp(n^{2/3} poly log(n)) in nearly linear time. Generalizing work of Acharya et al. we show that our algorithm yields a universal plug-in estimator that is competitive with a broad range of estimators up to accuracy є = Ω(n^{−0.166}). Further, we provide efficient polynomial-time algorithms for computing a d-dimensional generalization of PML (for constant d) that allows for universal plug-in estimation of symmetric relationships between distributions. @InProceedings{STOC19p780, author = {Moses Charikar and Kirankumar Shiragur and Aaron Sidford}, title = {Efficient Profile Maximum Likelihood for Universal Symmetric Property Estimation}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {780-791}, doi = {10.1145/3313276.3316398}, year = {2019}, } Publisher's Version STOC '19: "Memory-Sample Tradeoffs for ..." 
Memory-Sample Tradeoffs for Linear Regression with Small Error Vatsal Sharan, Aaron Sidford, and Gregory Valiant (Stanford University, USA) We consider the problem of performing linear regression over a stream of d-dimensional examples, and show that any algorithm that uses a subquadratic amount of memory exhibits a slower rate of convergence than can be achieved without memory constraints. Specifically, consider a sequence of labeled examples (a_{1},b_{1}), (a_{2},b_{2})…, with a_{i} drawn independently from a d-dimensional isotropic Gaussian, and where b_{i} = ⟨ a_{i}, x⟩ + η_{i}, for a fixed x ∈ ℝ^{d} with ‖x‖_{2} = 1 and with independent noise η_{i} drawn uniformly from the interval [−2^{−d/5},2^{−d/5}]. We show that any algorithm with at most d^{2}/4 bits of memory requires at least Ω(d loglog 1/є) samples to approximate x to ℓ_{2} error є with probability of success at least 2/3, for є sufficiently small as a function of d. In contrast, for such є, x can be recovered to error є with probability 1−o(1) with memory O(d^{2} log(1/є)) using d examples. This represents the first non-trivial lower bounds for regression with superlinear memory, and may open the door for strong memory/sample tradeoffs for continuous optimization. @InProceedings{STOC19p890, author = {Vatsal Sharan and Aaron Sidford and Gregory Valiant}, title = {Memory-Sample Tradeoffs for Linear Regression with Small Error}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {890-901}, doi = {10.1145/3313276.3316403}, year = {2019}, } Publisher's Version 
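The memory-rich regime in the abstract, recovering x from just d examples using roughly d^2 numbers of storage, amounts to solving a d×d linear system. A stdlib-only sketch under the abstract's setting with the noise taken to be negligible (Gaussian elimination standing in for a numerical library; names are ours):

```python
import random

def solve_linear(A, b):
    """Gaussian elimination with partial pivoting for the d x d system A x = b."""
    d = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix
    for col in range(d):
        piv = max(range(col, d), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(d):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [M[i][d] / M[i][i] for i in range(d)]

# d isotropic-Gaussian examples determine x almost surely when noise is negligible.
rng = random.Random(0)
d = 6
x_true = [1.0] + [0.0] * (d - 1)  # a unit vector, as in the abstract
A = [[rng.gauss(0, 1) for _ in range(d)] for _ in range(d)]
b = [sum(a * x for a, x in zip(row, x_true)) for row in A]
x_hat = solve_linear(A, b)
```

Storing the d×d system uses Θ(d^2) words; the paper's point is that with only d^2/4 bits one provably cannot match this convergence rate.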

Sidiropoulos, Anastasios 
STOC '19: "Polylogarithmic Approximation ..."
Polylogarithmic Approximation for Euler Genus on Bounded Degree Graphs
Ken-ichi Kawarabayashi and Anastasios Sidiropoulos (National Institute of Informatics, Japan; University of Illinois at Chicago, USA) Computing the Euler genus of a graph is a fundamental problem in algorithmic graph theory. It has been shown to be NP-hard by [Thomassen ’89, Thomassen ’97], even for cubic graphs, and a linear-time fixed-parameter algorithm has been obtained by [Mohar ’99]. Despite extensive study, the approximability of the Euler genus remains wide open. While the existence of an O(1)-approximation is not ruled out, the currently best-known upper bound is an O(n^{1−α})-approximation, for some universal constant α>0 [Kawarabayashi and Sidiropoulos 2017]. We present an O(log^{2.5} n)-approximation polynomial time algorithm for this problem on graphs of bounded degree. Prior to our work, the best known result on graphs of bounded degree was an n^{Ω(1)}-approximation [Chekuri and Sidiropoulos 2013]. As an immediate corollary, we also obtain improved approximation algorithms for the crossing number problem and for the minimum vertex planarization problem, on graphs of bounded degree. Specifically, we obtain a polynomial-time O(Δ^{2} log^{3.5} n)-approximation algorithm for the minimum vertex planarization problem, on graphs of maximum degree Δ. Moreover we obtain an algorithm which, given a graph of crossing number k, computes a drawing with at most k^{2} log^{O(1)} n crossings in polynomial time. This also implies an n^{1/2} log^{O(1)} n-approximation polynomial time algorithm. The previously best-known result is a polynomial time algorithm that computes a drawing with k^{10} log^{O(1)} n crossings, which implies an n^{9/10} log^{O(1)} n-approximation algorithm [Chuzhoy 2011]. @InProceedings{STOC19p164, author = {Ken-ichi Kawarabayashi and Anastasios Sidiropoulos}, title = {Polylogarithmic Approximation for Euler Genus on Bounded Degree Graphs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {164-175}, doi = {10.1145/3313276.3316409}, year = {2019}, } Publisher's Version 

Singer, Yaron 
STOC '19: "An Optimal Approximation for ..."
An Optimal Approximation for Submodular Maximization under a Matroid Constraint in the Adaptive Complexity Model
Eric Balkanski, Aviad Rubinstein, and Yaron Singer (Harvard University, USA; Stanford University, USA) In this paper we study submodular maximization under a matroid constraint in the adaptive complexity model. This model was recently introduced in the context of submodular optimization to quantify the information theoretic complexity of black-box optimization in a parallel computation model. Informally, the adaptivity of an algorithm is the number of sequential rounds it makes when each round can execute polynomially many function evaluations in parallel. Since submodular optimization is regularly applied on large datasets we seek algorithms with low adaptivity to enable speedups via parallelization. Consequently, a recent line of work has been devoted to designing constant factor approximation algorithms for maximizing submodular functions under various constraints in the adaptive complexity model. Despite the burst in work on submodular maximization in the adaptive complexity model, the fundamental problem of maximizing a monotone submodular function under a matroid constraint has remained elusive. In particular, all known techniques fail for this problem and there are no known constant factor approximation algorithms whose adaptivity is sublinear in the rank of the matroid k or in the worst case sublinear in the size of the ground set n. In this paper we present an approximation algorithm for the problem of maximizing a monotone submodular function under a matroid constraint in the adaptive complexity model. The approximation guarantee of the algorithm is arbitrarily close to the optimal 1−1/e and it has near-optimal adaptivity of O(log(n) log(k)). This result is obtained using a novel technique of adaptive sequencing which departs from previous techniques for submodular maximization in the adaptive complexity model. 
In addition to our main result we show how to use this technique to design other approximation algorithms with strong approximation guarantees and polylogarithmic adaptivity. @InProceedings{STOC19p66, author = {Eric Balkanski and Aviad Rubinstein and Yaron Singer}, title = {An Optimal Approximation for Submodular Maximization under a Matroid Constraint in the Adaptive Complexity Model}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {66-77}, doi = {10.1145/3313276.3316304}, year = {2019}, } Publisher's Version 

Sinha, Sandip 
STOC '19: "Local Decodability of the ..."
Local Decodability of the Burrows-Wheeler Transform
Sandip Sinha and Omri Weinstein (Columbia University, USA) The Burrows-Wheeler Transform (BWT) is among the most influential discoveries in text compression and DNA storage. It is a reversible preprocessing step that rearranges an n-letter string into runs of identical characters (by exploiting context regularities), resulting in highly compressible strings, and is the basis of the bzip compression program. Alas, the decoding process of BWT is inherently sequential and requires Ω(n) time even to retrieve a single character. We study the succinct data structure problem of locally decoding short substrings of a given text under its compressed BWT, i.e., with small additive redundancy r over the Move-To-Front (bzip) compression. The celebrated BWT-based FM-index (FOCS ’00), as well as other related literature, yield a tradeoff of r=Õ(n/√t) bits, when a single character is to be decoded in O(t) time. We give a near-quadratic improvement r=Õ(n lg(t)/t). As a byproduct, we obtain an exponential (in t) improvement on the redundancy of the FM-index for counting pattern matches on compressed text. In the interesting regime where the text compresses to o(n) (say, n/polylg(n)) bits, these results provide an exp(t) overall space reduction. For the local decoding problem of BWT, we also prove an Ω(n/t^{2}) cell-probe lower bound for “symmetric” data structures. We achieve our main result by designing a compressed partial-sums (Rank) data structure over BWT. The key component is a locally-decodable Move-to-Front (MTF) code: with only O(1) extra bits per block of length n^{Ω(1)}, the decoding time of a single character can be decreased from Ω(n) to O(lg n). This result is of independent interest in algorithmic information theory. @InProceedings{STOC19p744, author = {Sandip Sinha and Omri Weinstein}, title = {Local Decodability of the Burrows-Wheeler Transform}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {744-755}, doi = {10.1145/3313276.3316317}, year = {2019}, } Publisher's Version 
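A naive BWT and its textbook inversion make the sequential-decoding bottleneck concrete: the standard inversion below rebuilds the entire string even when only one character is wanted. This is the classical O(n^2 log n) method, not the paper's data structure:

```python
def bwt(s):
    """Burrows-Wheeler transform via sorted rotations (naive O(n^2 log n))."""
    s += "\0"  # unique sentinel so the transform is invertible
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rotations)

def inverse_bwt(last):
    """Textbook inversion: repeatedly prepend the last column and re-sort.
    Note it reconstructs the WHOLE string, the sequential bottleneck that
    the paper's local-decoding result addresses."""
    table = [""] * len(last)
    for _ in range(len(last)):
        table = sorted(c + t for c, t in zip(last, table))
    return next(t for t in table if t.endswith("\0"))[:-1]
```

The transform groups characters by context ("banana" becomes runs-friendly output), which is what makes MTF + run-length coding effective afterwards.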

Smith, Adam 
STOC '19: "The Structure of Optimal Private ..."
The Structure of Optimal Private Tests for Simple Hypotheses
Clément L. Canonne, Gautam Kamath, Audra McMillan, Adam Smith, and Jonathan Ullman (Stanford University, USA; Simons Institute for the Theory of Computing, Berkeley, USA; Boston University, USA; Northeastern University, USA) Hypothesis testing plays a central role in statistical inference, and is used in many settings where privacy concerns are paramount. This work answers a basic question about privately testing simple hypotheses: given two distributions P and Q, and a privacy level ε, how many i.i.d. samples are needed to distinguish P from Q subject to ε-differential privacy, and what sort of tests have optimal sample complexity? Specifically, we characterize this sample complexity up to constant factors in terms of the structure of P and Q and the privacy level ε, and show that this sample complexity is achieved by a certain randomized and clamped variant of the log-likelihood ratio test. Our result is an analogue of the classical Neyman-Pearson lemma in the setting of private hypothesis testing. We also give an application of our result to private change-point detection. Our characterization applies more generally to hypothesis tests satisfying essentially any notion of algorithmic stability, which is known to imply strong generalization bounds in adaptive data analysis, and thus our results have applications even when privacy is not a primary concern. @InProceedings{STOC19p310, author = {Clément L. Canonne and Gautam Kamath and Audra McMillan and Adam Smith and Jonathan Ullman}, title = {The Structure of Optimal Private Tests for Simple Hypotheses}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {310-321}, doi = {10.1145/3313276.3316336}, year = {2019}, } Publisher's Version 
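The general shape of such a test, clamp the per-sample log-likelihood ratios and then add noise calibrated to the clamped sensitivity, can be sketched as follows. The clamp level, threshold, and use of the Laplace mechanism here are illustrative choices of ours, not the paper's exact calibration:

```python
import math
import random

def private_llr_test(samples, p, q, eps, clamp=1.0, threshold=0.0):
    """Sketch of a clamped, noisy log-likelihood ratio test.
    p, q: dicts mapping outcomes to probabilities under the two hypotheses.
    Clamping bounds each sample's influence on the statistic by 2*clamp,
    so Laplace noise of scale 2*clamp/eps gives eps-differential privacy."""
    stat = sum(max(-clamp, min(clamp, math.log(p[x] / q[x]))) for x in samples)
    u = random.random() - 0.5  # Laplace sample via inverse CDF
    noise = -(2 * clamp / eps) * math.copysign(math.log(1 - 2 * abs(u)), u)
    return "P" if stat + noise > threshold else "Q"
```

The clamping is what distinguishes this from the plain Neyman-Pearson test: without it, a single outcome with extreme likelihood ratio would force unboundedly large noise.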

Song, Zhao 
STOC '19: "Solving Linear Programs in ..."
Solving Linear Programs in the Current Matrix Multiplication Time
Michael B. Cohen, Yin Tat Lee, and Zhao Song (Massachusetts Institute of Technology, USA; University of Washington, USA; Microsoft Research, USA; University of Texas at Austin, USA) This paper shows how to solve linear programs of the form min_{Ax=b,x≥0} c^{⊤}x with n variables in time O^{*}((n^{ω}+n^{2.5−α/2}+n^{2+1/6}) log(n/δ)) where ω is the exponent of matrix multiplication, α is the dual exponent of matrix multiplication, and δ is the relative accuracy. For the current value of ω∼2.37 and α∼0.31, our algorithm takes O^{*}(n^{ω} log(n/δ)) time. When ω = 2, our algorithm takes O^{*}(n^{2+1/6} log(n/δ)) time. Our algorithm utilizes several new concepts that we believe may be of independent interest: (1) We define a stochastic central path method. (2) We show how to maintain a projection matrix √W A^{⊤}(AWA^{⊤})^{−1}A√W in subquadratic time under ℓ_{2} multiplicative changes in the diagonal matrix W. @InProceedings{STOC19p938, author = {Michael B. Cohen and Yin Tat Lee and Zhao Song}, title = {Solving Linear Programs in the Current Matrix Multiplication Time}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {938-942}, doi = {10.1145/3313276.3316303}, year = {2019}, } Publisher's Version STOC '19: "Stronger L2/L2 Compressed ..." Stronger L2/L2 Compressed Sensing; Without Iterating Vasileios Nakos and Zhao Song (Harvard University, USA; University of Texas at Austin, USA) We consider the extensively studied problem of ℓ_{2}/ℓ_{2} compressed sensing. The main contribution of our work is an improvement over [Gilbert, Li, Porat and Strauss, STOC 2010] with faster decoding time and significantly smaller column sparsity, answering two open questions of the aforementioned work. Previous work on sublinear-time compressed sensing employed an iterative procedure, recovering the heavy coordinates in phases. 
We completely depart from that framework, and give the first sublinear-time ℓ_{2}/ℓ_{2} scheme which achieves the optimal number of measurements without iterating; this new approach is the key step to our progress. Towards that, we satisfy the ℓ_{2}/ℓ_{2} guarantee by exploiting the heaviness of coordinates in a way that was not exploited in previous work. Via our techniques we obtain improved results for various sparse recovery tasks, and indicate possible further applications to problems in the field, to which the aforementioned iterative procedure creates significant obstructions. @InProceedings{STOC19p289, author = {Vasileios Nakos and Zhao Song}, title = {Stronger L2/L2 Compressed Sensing; Without Iterating}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {289-297}, doi = {10.1145/3313276.3316355}, year = {2019}, } Publisher's Version 

Sreenivasaiah, Karteek 
STOC '19: "A Fixed-Depth Size-Hierarchy ..."
A Fixed-Depth Size-Hierarchy Theorem for AC^{0}[⊕] via the Coin Problem
Nutan Limaye, Karteek Sreenivasaiah, Srikanth Srinivasan, Utkarsh Tripathi, and S. Venkitesh (IIT Bombay, India; IIT Hyderabad, India) In this work we prove the first Fixed-Depth Size-Hierarchy Theorem for uniform AC^{0}[⊕]. In particular, we show that for any fixed d, the classes C_{d,k} of functions that have uniform AC^{0}[⊕] formulas of depth d and size n^{k} form an infinite hierarchy. We show this by exhibiting the first class of explicit functions where we have nearly (up to a polynomial factor) matching upper and lower bounds for the class of AC^{0}[⊕] formulas. The explicit functions are derived from the δ-Coin Problem, which is the computational problem of distinguishing between coins that are heads with probability (1+δ)/2 or (1−δ)/2, where δ is a parameter that is going to 0. We study the complexity of this problem and make progress on both the upper bound and lower bound fronts. Upper bounds. For any constant d≥ 2, we show that there are explicit monotone AC^{0} formulas (i.e. made up of AND and OR gates only) solving the δ-coin problem that have depth d, size exp(O(d(1/δ)^{1/(d−1)})), and sample complexity (i.e. number of inputs) poly(1/δ). This matches previous upper bounds of O’Donnell and Wimmer (ICALP 2007) and Amano (ICALP 2009) in terms of size (which is optimal) and improves the sample complexity from exp(O(d(1/δ)^{1/(d−1)})) to poly(1/δ). Lower bounds. We show that the above upper bounds are nearly tight (in terms of size) even for the significantly stronger model of AC^{0}[⊕] formulas (which are also allowed NOT and Parity gates): formally, we show that any AC^{0}[⊕] formula solving the δ-coin problem must have size exp(Ω(d(1/δ)^{1/(d−1)})). This strengthens a result of Shaltiel and Viola (SICOMP 2010), who prove an exp(Ω((1/δ)^{1/(d+2)})) lower bound for AC^{0}[⊕], and a lower bound of exp(Ω((1/δ)^{1/(d−1)})) shown by Cohen, Ganor and Raz (APPROX-RANDOM 2014) for the class AC^{0}. 
The upper bound is a derandomization involving a use of Janson’s inequality and classical combinatorial designs. The lower bound involves proving an optimal degree lower bound for polynomials over 𝔽_{2} solving the δ-coin problem. @InProceedings{STOC19p442, author = {Nutan Limaye and Karteek Sreenivasaiah and Srikanth Srinivasan and Utkarsh Tripathi and S. Venkitesh}, title = {A Fixed-Depth Size-Hierarchy Theorem for AC<sup>0</sup>[⊕] via the Coin Problem}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {442-453}, doi = {10.1145/3313276.3316339}, year = {2019}, } Publisher's Version 
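The δ-coin problem itself is easy to simulate. The majority-vote distinguisher below (which is not an AC^0 formula; majority is hard for constant depth) succeeds with Θ(1/δ^2) samples and serves only to make the problem statement concrete:

```python
import random

def coin_samples(delta, sign, n):
    """n flips of a coin whose heads-probability is (1 + sign*delta)/2,
    with sign in {+1, -1} selecting which of the two coins we hold."""
    p = (1 + sign * delta) / 2
    return [random.random() < p for _ in range(n)]

def majority_guess(samples):
    """Guess the hidden sign by majority vote. With Theta(1/delta^2)
    samples this succeeds with constant probability; the paper studies
    how well small constant-depth FORMULAS can do instead."""
    return +1 if sum(samples) * 2 > len(samples) else -1
```

The interesting tension is that the formulas in the paper must distinguish the coins without being able to compute this majority exactly.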

Srinivasan, Srikanth 
STOC '19: "A Fixed-Depth Size-Hierarchy ..."
A Fixed-Depth Size-Hierarchy Theorem for AC^{0}[⊕] via the Coin Problem
Nutan Limaye, Karteek Sreenivasaiah, Srikanth Srinivasan, Utkarsh Tripathi, and S. Venkitesh (IIT Bombay, India; IIT Hyderabad, India) In this work we prove the first Fixed-Depth Size-Hierarchy Theorem for uniform AC^{0}[⊕]. In particular, we show that for any fixed d, the classes C_{d,k} of functions that have uniform AC^{0}[⊕] formulas of depth d and size n^{k} form an infinite hierarchy. We show this by exhibiting the first class of explicit functions where we have nearly (up to a polynomial factor) matching upper and lower bounds for the class of AC^{0}[⊕] formulas. The explicit functions are derived from the δ-Coin Problem, which is the computational problem of distinguishing between coins that are heads with probability (1+δ)/2 or (1−δ)/2, where δ is a parameter that is going to 0. We study the complexity of this problem and make progress on both the upper bound and lower bound fronts. Upper bounds. For any constant d≥ 2, we show that there are explicit monotone AC^{0} formulas (i.e. made up of AND and OR gates only) solving the δ-coin problem that have depth d, size exp(O(d(1/δ)^{1/(d−1)})), and sample complexity (i.e. number of inputs) poly(1/δ). This matches previous upper bounds of O’Donnell and Wimmer (ICALP 2007) and Amano (ICALP 2009) in terms of size (which is optimal) and improves the sample complexity from exp(O(d(1/δ)^{1/(d−1)})) to poly(1/δ). Lower bounds. We show that the above upper bounds are nearly tight (in terms of size) even for the significantly stronger model of AC^{0}[⊕] formulas (which are also allowed NOT and Parity gates): formally, we show that any AC^{0}[⊕] formula solving the δ-coin problem must have size exp(Ω(d(1/δ)^{1/(d−1)})). This strengthens a result of Shaltiel and Viola (SICOMP 2010), who prove an exp(Ω((1/δ)^{1/(d+2)})) lower bound for AC^{0}[⊕], and a lower bound of exp(Ω((1/δ)^{1/(d−1)})) shown by Cohen, Ganor and Raz (APPROX-RANDOM 2014) for the class AC^{0}. 
The upper bound is a derandomization involving a use of Janson’s inequality and classical combinatorial designs. The lower bound involves proving an optimal degree lower bound for polynomials over 𝔽_{2} solving the δ-coin problem. @InProceedings{STOC19p442, author = {Nutan Limaye and Karteek Sreenivasaiah and Srikanth Srinivasan and Utkarsh Tripathi and S. Venkitesh}, title = {A Fixed-Depth Size-Hierarchy Theorem for AC<sup>0</sup>[⊕] via the Coin Problem}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {442-453}, doi = {10.1145/3313276.3316339}, year = {2019}, } Publisher's Version 

Stolman, Andrew 
STOC '19: "Random Walks and Forbidden ..."
Random Walks and Forbidden Minors II: A poly(d ε⁻¹)-Query Tester for Minor-Closed Properties of Bounded Degree Graphs
Akash Kumar, C. Seshadhri, and Andrew Stolman (Purdue University, USA; University of California at Santa Cruz, USA) Let G be a graph with n vertices and maximum degree d. Fix some minor-closed property P (such as planarity). We say that G is ε-far from P if one has to remove ε dn edges to make it have P. The problem of property testing P was introduced in the seminal work of Benjamini-Schramm-Shapira (STOC 2008) that gave a tester with query complexity triply exponential in ε^{−1}. Levi-Ron (TALG 2015) have given the best tester to date, with a quasipolynomial (in ε^{−1}) query complexity. It is an open problem to get property testers whose query complexity is poly(dε^{−1}), even for planarity. In this paper, we resolve this open question. For any minor-closed property, we give a tester with query complexity d·poly(ε^{−1}). The previous line of work on (independent of n, two-sided) testers is primarily combinatorial. Our work, on the other hand, employs techniques from spectral graph theory. This paper is a continuation of recent work of the authors (FOCS 2018) analyzing random walk algorithms that find forbidden minors. @InProceedings{STOC19p559, author = {Akash Kumar and C. Seshadhri and Andrew Stolman}, title = {Random Walks and Forbidden Minors II: A poly(<i>d ε</i>⁻¹)-Query Tester for Minor-Closed Properties of Bounded Degree Graphs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {559-567}, doi = {10.1145/3313276.3316330}, year = {2019}, } Publisher's Version 

Su, Hsin-Hao 
STOC '19: "Towards the Locality of Vizing’s ..."
Towards the Locality of Vizing’s Theorem
Hsin-Hao Su and Hoa T. Vu (Boston College, USA) Vizing showed that it suffices to color the edges of a simple graph using Δ + 1 colors, where Δ is the maximum degree of the graph. However, to date, no efficient distributed edge-coloring algorithm is known for obtaining such a coloring, even for constant degree graphs. The current algorithms that get closest to this number of colors are the randomized (Δ + Θ(√Δ))-edge-coloring algorithm that runs in polylog(n) rounds by Chang et al. [SODA 2018] and the deterministic (Δ + polylog(n))-edge-coloring algorithm that runs in poly(Δ, log n) rounds by Ghaffari et al. [STOC 2018]. We present two distributed edge-coloring algorithms that run in poly(Δ, log n) rounds. The first algorithm, with randomization, uses only Δ+2 colors. The second algorithm is a deterministic algorithm that uses Δ+ O(log n / log log n) colors. Our approach is to reduce the distributed edge-coloring problem into an online and restricted version of the balls-into-bins problem. If ℓ is the maximum load of the bins, our algorithm uses Δ + 2ℓ − 1 colors. We show how to achieve ℓ = 1 with randomization and ℓ = O(log n / log log n) without randomization. @InProceedings{STOC19p355, author = {Hsin-Hao Su and Hoa T. Vu}, title = {Towards the Locality of Vizing’s Theorem}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {355-364}, doi = {10.1145/3313276.3316393}, year = {2019}, } Publisher's Version 
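For contrast with the Δ+2 and Δ+O(log n/log log n) bounds above, the trivial sequential baseline uses up to 2Δ−1 colors: each edge is incident to at most 2(Δ−1) already-colored edges, so a greedy scan always finds a free color among the first 2Δ−1. A centralized sketch (ours, not the paper's distributed algorithm):

```python
def greedy_edge_coloring(edges):
    """Greedy proper edge coloring using at most 2*Delta - 1 colors:
    for each edge, pick the smallest color unused at either endpoint."""
    incident = {}  # vertex -> set of colors already used on its edges
    coloring = {}
    for (u, v) in edges:
        used = incident.setdefault(u, set()) | incident.setdefault(v, set())
        c = next(c for c in range(len(edges) + 1) if c not in used)
        coloring[(u, v)] = c
        incident[u].add(c)
        incident[v].add(c)
    return coloring
```

Closing the gap between this easy 2Δ−1 and Vizing's Δ+1 in a distributed, local fashion is exactly the difficulty the abstract describes.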

Su, Yuan 
STOC '19: "Quantum Singular Value Transformation ..."
Quantum Singular Value Transformation and Beyond: Exponential Improvements for Quantum Matrix Arithmetics
András Gilyén, Yuan Su, Guang Hao Low, and Nathan Wiebe (CWI, Netherlands; University of Amsterdam, Netherlands; University of Maryland, USA; Microsoft Research, USA) An n-qubit quantum circuit performs a unitary operation on an exponentially large, 2^{n}-dimensional, Hilbert space, which is a major source of quantum speedups. We develop a new “Quantum singular value transformation” algorithm that can directly harness the advantages of exponential dimensionality by applying polynomial transformations to the singular values of a block of a unitary operator. The transformations are realized by quantum circuits with a very simple structure (typically using only a constant number of ancilla qubits), leading to optimal algorithms with appealing constant factors. We show that our framework allows describing many quantum algorithms at a high level, and enables remarkably concise proofs for many prominent quantum algorithms, ranging from optimal Hamiltonian simulation to various quantum machine learning applications. We also devise a new singular vector transformation algorithm, describe how to exponentially improve the complexity of implementing fractional queries to unitaries with a gapped spectrum, and show how to efficiently implement principal component regression. Finally, we also prove a quantum lower bound on spectral transformations. @InProceedings{STOC19p193, author = {András Gilyén and Yuan Su and Guang Hao Low and Nathan Wiebe}, title = {Quantum Singular Value Transformation and Beyond: Exponential Improvements for Quantum Matrix Arithmetics}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {193-204}, doi = {10.1145/3313276.3316366}, year = {2019}, } Publisher's Version 

Sun, Nike 
STOC '19: "Capacity Lower Bound for the ..."
Capacity Lower Bound for the Ising Perceptron
Jian Ding and Nike Sun (University of Pennsylvania, USA; Massachusetts Institute of Technology, USA) We consider the Ising perceptron with gaussian disorder, which is equivalent to the discrete cube {−1,+1}^{N} intersected by M random halfspaces. The perceptron’s capacity is the largest integer M_{N} for which the intersection is nonempty. It is conjectured by Krauth and Mézard (1989) that the (random) ratio M_{N}/N converges in probability to an explicit constant α_{⋆}≐ 0.83. Kim and Roche (1998) proved the existence of a positive constant γ such that γ ≤ M_{N}/N ≤ 1−γ with high probability; see also Talagrand (1999). In this paper we show that the Krauth–Mézard conjecture α_{⋆} is a lower bound with positive probability, under the condition that an explicit univariate function S(λ) is maximized at λ=0. Our proof is an application of the second moment method to a certain slice of perceptron configurations, as selected by the so-called TAP (Thouless, Anderson, and Palmer, 1977) or AMP (approximate message passing) iteration, whose scaling limit has been characterized by Bayati and Montanari (2011) and Bolthausen (2012). For verifying the condition on S(λ) we outline one approach, which is implemented in the current version using (non-rigorous) numerical integration packages. In a future version of this paper we intend to complete the verification by implementing a rigorous numerical method. @InProceedings{STOC19p816, author = {Jian Ding and Nike Sun}, title = {Capacity Lower Bound for the Ising Perceptron}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {816-827}, doi = {10.1145/3313276.3316383}, year = {2019}, } Publisher's Version 
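The underlying random geometry is simple to state in code: draw M gaussian halfspaces and ask whether any cube point satisfies all of them. The brute-force check below (feasible only for tiny N, and using ⟨g, σ⟩ ≥ 0 as one standard normalization of the constraints) makes the definition of M_N concrete:

```python
import random

def cube_survives(N, M, rng):
    """Check whether {-1,+1}^N intersects M random gaussian halfspaces
    {sigma : <g_i, sigma> >= 0}. Brute force over all 2^N sign vectors,
    so only feasible for tiny N; M_N is the largest M for which this holds."""
    halfspaces = [[rng.gauss(0, 1) for _ in range(N)] for _ in range(M)]
    for idx in range(1 << N):
        sigma = [1 if (idx >> j) & 1 else -1 for j in range(N)]
        if all(sum(g[j] * s for j, s in enumerate(sigma)) >= 0
               for g in halfspaces):
            return True
    return False
```

The conjecture says the threshold M where survival fails is ≈ 0.83·N; verifying that numerically at meaningful N requires far better tools than this 2^N enumeration.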

Sun, Xiaoming 
STOC '19: "Quantum Lovász Local Lemma: ..."
Quantum Lovász Local Lemma: Shearer’s Bound Is Tight
Kun He, Qian Li, Xiaoming Sun, and Jiapeng Zhang (Institute of Computing Technology at Chinese Academy of Sciences, China; University of Chinese Academy of Sciences, China; Shenzhen Institute of Computing Sciences, China; Shenzhen University, China; University of California at San Diego, USA) The Lovász Local Lemma (LLL) is a very powerful tool in combinatorics and probability theory for showing that all “bad” events can be avoided under some “weakly dependent” condition. Over the last decades, the algorithmic aspect of the LLL has also attracted lots of attention in theoretical computer science. A tight criterion under which the abstract version of the LLL (ALLL) holds was given by Shearer. It turns out that Shearer’s bound is generally not tight for the variable version of the LLL (VLLL). Recently, Ambainis et al. introduced a quantum version of the LLL (QLLL), which was then shown to be powerful for the quantum satisfiability problem. In this paper, we prove that Shearer’s bound is tight for the QLLL, i.e., the relative dimension of the smallest satisfying subspace is completely characterized by the independent set polynomial, affirming a conjecture proposed by Sattath et al. Our result also shows the tightness of Gilyén and Sattath’s algorithm, and implies that the lattice gas partition function fully characterizes quantum satisfiability for almost all Hamiltonians with large enough qudits. The commuting LLL (CLLL), the LLL for commuting local Hamiltonians, which are widely studied in the literature, is also investigated here. We prove that the tight regions of the CLLL and the QLLL are different in general. This result might imply that it is possible to design an algorithm for the CLLL which is still efficient beyond Shearer’s bound. In applications of LLLs, the symmetric cases are the most common, i.e., the events have the same probability and the Hamiltonians have the same relative dimension. We give the first lower bound on the gap between the symmetric VLLL and Shearer’s bound. 
Our result can be viewed as a quantitative study on the separation between quantum and classical constraint satisfaction problems. Additionally, we obtain similar results for the symmetric CLLL. As an application, we give lower bounds on the critical thresholds of VLLL and CLLL for several of the most common lattices. @InProceedings{STOC19p461, author = {Kun He and Qian Li and Xiaoming Sun and Jiapeng Zhang}, title = {Quantum Lovász Local Lemma: Shearer’s Bound Is Tight}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {461--472}, doi = {10.1145/3313276.3316392}, year = {2019}, } Publisher's Version 

Swamy, Chaitanya 
STOC '19: "Approximation Algorithms for ..."
Approximation Algorithms for Distributionally-Robust Stochastic Optimization with Black-Box Distributions
André Linhares and Chaitanya Swamy (University of Waterloo, Canada) Two-stage stochastic optimization is a widely used framework for modeling uncertainty, where we have a probability distribution over possible realizations of the data, called scenarios, and decisions are taken in two stages: we make first-stage decisions knowing only the underlying distribution and before a scenario is realized, and may take additional second-stage recourse actions after a scenario is realized. The goal is typically to minimize the total expected cost. A common criticism levied at this model is that the underlying probability distribution is itself often imprecise! To address this, an approach that is quite versatile and has gained popularity in the stochastic-optimization literature is the distributionally robust 2-stage model: given a collection D of probability distributions, our goal now is to minimize the maximum expected total cost with respect to a distribution in D. We provide a framework for designing approximation algorithms in such settings when the collection D is a ball around a central distribution and the central distribution is accessed only via a sampling black box. We first show that one can utilize the sample average approximation (SAA) method—solve the distributionally robust problem with an empirical estimate of the central distribution—to reduce the problem to the case where the central distribution has polynomial-size support. Complementing this, we show how to approximately solve a fractional relaxation of the SAA (i.e., polynomial-scenario central-distribution) problem. Unlike in 2-stage stochastic or robust optimization, this turns out to be quite challenging. 
We utilize the ellipsoid method in conjunction with several new ideas to show that this problem can be approximately solved provided that we have an (approximation) algorithm for a certain max-min problem that is akin to, and generalizes, the k-max-min problem—find the worst-case scenario consisting of at most k elements—encountered in 2-stage robust optimization. We obtain such a procedure for various discrete-optimization problems; by complementing this via LP-rounding algorithms that provide local (i.e., per-scenario) approximation guarantees, we obtain the first approximation algorithms for the distributionally robust versions of a variety of discrete-optimization problems including set cover, vertex cover, edge cover, facility location, and Steiner tree, with guarantees that are, except for set cover, within O(1) factors of the guarantees known for the deterministic version of the problem. @InProceedings{STOC19p768, author = {André Linhares and Chaitanya Swamy}, title = {Approximation Algorithms for Distributionally-Robust Stochastic Optimization with Black-Box Distributions}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {768--779}, doi = {10.1145/3313276.3316391}, year = {2019}, } Publisher's Version STOC '19: "Approximation Algorithms for ..." Approximation Algorithms for Minimum Norm and Ordered Optimization Problems Deeparnab Chakrabarty and Chaitanya Swamy (Dartmouth College, USA; University of Waterloo, Canada) In many optimization problems, a feasible solution induces a multidimensional cost vector. For example, in load balancing a schedule induces a load vector across the machines. In k-clustering, opening k facilities induces an assignment cost vector across the clients. Typically, one seeks a solution which either minimizes the sum or the max of this vector, and these problems (makespan minimization, k-median, and k-center) are classic NP-hard problems which have been extensively studied. In this paper we consider the minimum-norm optimization problem. 
Given an arbitrary monotone, symmetric norm, the problem asks to find a solution which minimizes the norm of the induced cost vector. Such norms are versatile and include ℓ_{p} norms, the Top-ℓ norm (sum of the ℓ largest coordinates in absolute value), and ordered norms (nonnegative linear combinations of Top-ℓ norms); consequently, the minimum-norm problem models a wide variety of problems under one umbrella. We give a general framework to tackle the minimum-norm problem, and illustrate its efficacy in the unrelated-machine load-balancing and k-clustering settings. Our concrete results are the following. (a) We give constant-factor approximation algorithms for the minimum-norm load-balancing problem on unrelated machines, and the minimum-norm k-clustering problem. To our knowledge, our results constitute the first constant-factor approximations for such a general suite of objectives. (b) For load balancing on unrelated machines, we give a (2+ε)-approximation for ordered load balancing (i.e., min-norm load balancing under an ordered norm). (c) For k-clustering, we give a (5+ε)-approximation for the ordered k-median problem, which significantly improves upon the previous-best constant-factor approximation (Chakrabarty and Swamy (ICALP 2018); Byrka, Sornat, and Spoerhase (STOC 2018)). (d) Our techniques also imply O(1)-approximations to the instance-wise best simultaneous approximation factor for unrelated-machine load balancing and k-clustering. To our knowledge, these are the first positive simultaneous approximation results in these settings. At a technical level, one of our chief insights is that minimum-norm optimization can be reduced to a special case that we call min-max ordered optimization. Both the reduction, and the task of devising algorithms for the latter problem, require a sparsification idea that we develop, which is of interest for ordered optimization problems. 
The main ingredient in solving min-max ordered optimization is a deterministic, oblivious rounding procedure (that we devise) for suitable LP relaxations of the load-balancing and k-clustering problems; this may be of independent interest. @InProceedings{STOC19p126, author = {Deeparnab Chakrabarty and Chaitanya Swamy}, title = {Approximation Algorithms for Minimum Norm and Ordered Optimization Problems}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {126--137}, doi = {10.1145/3313276.3316322}, year = {2019}, } Publisher's Version 
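The Top-ℓ and ordered norms defined in the abstract above are easy to make concrete. A minimal sketch (the function names and the toy cost vector are ours):

```python
import numpy as np

def top_l_norm(cost, l):
    """Top-l norm: sum of the l largest coordinates in absolute value.
    l = 1 recovers the max norm; l = n recovers the l1 norm."""
    return float(np.sort(np.abs(cost))[::-1][:l].sum())

def ordered_norm(cost, w):
    """Ordered norm with nonincreasing nonnegative weights w:
    sum_i w_i * (i-th largest |cost| coordinate).  Equivalently a
    nonnegative linear combination of Top-l norms."""
    s = np.sort(np.abs(cost))[::-1]
    return float(np.dot(w, s))

cost = np.array([4.0, 1.0, 3.0, 2.0])
# Top-2 norm of cost is 4 + 3 = 7.
```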

Tal, Avishay 
STOC '19: "Pseudorandom Generators for ..."
Pseudorandom Generators for Width-3 Branching Programs
Raghu Meka, Omer Reingold, and Avishay Tal (University of California at Los Angeles, USA; Stanford University, USA) We construct pseudorandom generators of seed length Õ(log(n)· log(1/є)) that є-fool ordered read-once branching programs (ROBPs) of width 3 and length n. For unordered ROBPs, we construct pseudorandom generators with seed length Õ(log(n) · poly(1/є)). This is the first improvement for pseudorandom generators fooling width-3 ROBPs since the work of Nisan [Combinatorica, 1992]. Our constructions are based on the “iterated milder restrictions” approach of Gopalan et al. [FOCS, 2012] (which further extends the Ajtai-Wigderson framework [FOCS, 1985]), combined with the INW generator [STOC, 1994] at the last step (as analyzed by Braverman et al. [SICOMP, 2014]). For the unordered case, we combine iterated milder restrictions with the generator of Chattopadhyay et al. [CCC, 2018]. Two conceptual ideas that play an important role in our analysis are: (1) A relabeling technique allowing us to analyze a relabeled version of the given branching program, which turns out to be much easier. (2) Treating the number of colliding layers in a branching program as a progress measure and showing that it reduces significantly under pseudorandom restrictions. In addition, we achieve nearly optimal seed length Õ(log(n/є)) for the classes of: (1) read-once polynomials on n variables, (2) locally-monotone ROBPs of length n and width 3 (generalizing read-once CNFs and DNFs), and (3) constant-width ROBPs of length n having a layer of width 2 in every consecutive polylog(n) layers. @InProceedings{STOC19p626, author = {Raghu Meka and Omer Reingold and Avishay Tal}, title = {Pseudorandom Generators for Width-3 Branching Programs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {626--637}, doi = {10.1145/3313276.3316319}, year = {2019}, } Publisher's Version Info STOC '19: "Exponential Separation between ..." 
Exponential Separation between Shallow Quantum Circuits and Unbounded Fan-In Shallow Classical Circuits Adam Bene Watts, Robin Kothari, Luke Schaeffer, and Avishay Tal (Massachusetts Institute of Technology, USA; Microsoft Research, USA; Stanford University, USA) Recently, Bravyi, Gosset, and König (Science, 2018) exhibited a search problem called the 2D Hidden Linear Function (2D HLF) problem that can be solved exactly by a constant-depth quantum circuit using bounded fan-in gates (or QNC^0 circuits), but cannot be solved by any constant-depth classical circuit using bounded fan-in AND, OR, and NOT gates (or NC^0 circuits). In other words, they exhibited a search problem in QNC^0 that is not in NC^0. We strengthen their result by proving that the 2D HLF problem is not contained in AC^0, the class of classical, polynomial-size, constant-depth circuits over the gate set of unbounded fan-in AND and OR gates, and NOT gates. We also supplement this worst-case lower bound with an average-case result: There exists a simple distribution under which any AC^0 circuit (even of nearly exponential size) has exponentially small correlation with the 2D HLF problem. Our results are shown by constructing a new problem in QNC^0, which we call the Parity Halving Problem, which is easier to work with. We prove our AC^0 lower bounds for this problem, and then show that it reduces to the 2D HLF problem. @InProceedings{STOC19p515, author = {Adam Bene Watts and Robin Kothari and Luke Schaeffer and Avishay Tal}, title = {Exponential Separation between Shallow Quantum Circuits and Unbounded Fan-In Shallow Classical Circuits}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {515--526}, doi = {10.1145/3313276.3316404}, year = {2019}, } Publisher's Version STOC '19: "Oracle Separation of BQP and ..." 
Oracle Separation of BQP and PH Ran Raz and Avishay Tal (Princeton University, USA; Stanford University, USA) We present a distribution D over inputs in {−1,1}^{2N}, such that: (1) There exists a quantum algorithm that makes one (quantum) query to the input, and runs in time O(logN), that distinguishes between D and the uniform distribution with advantage Ω(1/logN). (2) No Boolean circuit of quasi-polynomial size and constant depth distinguishes between D and the uniform distribution with advantage better than polylog(N)/√N. By well-known reductions, this gives a separation of the classes PromiseBQP and PromisePH in the black-box model and implies an oracle O relative to which BQP^{O} ⊈ PH^{O}. @InProceedings{STOC19p13, author = {Ran Raz and Avishay Tal}, title = {Oracle Separation of BQP and PH}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {13--23}, doi = {10.1145/3313276.3316315}, year = {2019}, } Publisher's Version Info 

Talwar, Kunal 
STOC '19: "Private Selection from Private ..."
Private Selection from Private Candidates
Jingcheng Liu and Kunal Talwar (University of California at Berkeley, USA; Google Brain, USA) Differentially private algorithms often need to select the best amongst many candidate options. Classical works on this selection problem require that the candidates’ goodness, measured as a real-valued score function, does not change by much when one person’s data changes. In many applications such as hyperparameter optimization, this stability assumption is much too strong. In this work, we consider the selection problem under a much weaker stability assumption on the candidates, namely that the score functions are differentially private. Under this assumption, we present algorithms that are near-optimal along the three relevant dimensions: privacy, utility, and computational efficiency. Our result can be seen as a generalization of the exponential mechanism and its existing generalizations. We also develop an online version of our algorithm, that can be seen as a generalization of the sparse vector technique to this weaker stability assumption. We show how our results imply better algorithms for hyperparameter selection in differentially private machine learning, as well as for adaptive data analysis. @InProceedings{STOC19p298, author = {Jingcheng Liu and Kunal Talwar}, title = {Private Selection from Private Candidates}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {298--309}, doi = {10.1145/3313276.3316377}, year = {2019}, } Publisher's Version 
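For context, the exponential mechanism that this result generalizes can be sketched in a few lines: select candidate i with probability proportional to exp(ε·score_i / 2Δ), where Δ bounds how much one person's data can change any score. This is the classical baseline only, not the paper's weaker-assumption algorithm (function name ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def exponential_mechanism(scores, eps, sensitivity=1.0):
    """Classic exponential mechanism: sample index i with probability
    proportional to exp(eps * scores[i] / (2 * sensitivity)).
    The paper's algorithms handle the harder setting where the scores
    themselves are only differentially private; that is not shown here."""
    scores = np.asarray(scores, dtype=float)
    logits = eps * scores / (2.0 * sensitivity)
    logits -= logits.max()          # shift for numerical stability
    probs = np.exp(logits)
    probs /= probs.sum()
    return int(rng.choice(len(scores), p=probs))
```

With a large privacy budget ε, the mechanism almost always returns the top-scoring candidate; as ε shrinks, the output distribution flattens toward uniform.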

Tan, Li-Yang 
STOC '19: "Fooling Polytopes ..."
Fooling Polytopes
Ryan O'Donnell, Rocco A. Servedio, and Li-Yang Tan (Carnegie Mellon University, USA; Columbia University, USA; Stanford University, USA) We give a pseudorandom generator that fools m-facet polytopes over {0,1}^{n} with seed length polylog(m) · log(n). The previous best seed length had superlinear dependence on m. An immediate consequence is a deterministic quasi-polynomial-time algorithm for approximating the number of solutions to any {0,1}-integer program. @InProceedings{STOC19p614, author = {Ryan O'Donnell and Rocco A. Servedio and Li-Yang Tan}, title = {Fooling Polytopes}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {614--625}, doi = {10.1145/3313276.3316321}, year = {2019}, } Publisher's Version 

Tang, Ewin 
STOC '19: "A Quantum-Inspired Classical ..."
A Quantum-Inspired Classical Algorithm for Recommendation Systems
Ewin Tang (University of Texas at Austin, USA) We give a classical analogue to Kerenidis and Prakash’s quantum recommendation system, previously believed to be one of the strongest candidates for provably exponential speedups in quantum machine learning. Our main result is an algorithm that, given an m × n matrix in a data structure supporting certain ℓ^{2}-norm sampling operations, outputs an ℓ^{2}-norm sample from a rank-k approximation of that matrix in time O(poly(k)log(mn)), only polynomially slower than the quantum algorithm. As a consequence, Kerenidis and Prakash’s algorithm does not in fact give an exponential speedup over classical algorithms. Further, under strong input assumptions, the classical recommendation system resulting from our algorithm produces recommendations exponentially faster than previous classical systems, which run in time linear in m and n. The main insight of this work is the use of simple routines to manipulate ℓ^{2}-norm sampling distributions, which play the role of quantum superpositions in the classical setting. This correspondence indicates a potentially fruitful framework for formally comparing quantum machine learning algorithms to classical machine learning algorithms. @InProceedings{STOC19p217, author = {Ewin Tang}, title = {A Quantum-Inspired Classical Algorithm for Recommendation Systems}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {217--228}, doi = {10.1145/3313276.3316310}, year = {2019}, } Publisher's Version 
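The ℓ²-norm sampling primitive at the heart of this result is simple to state: draw index i with probability v_i² / ||v||². A naive sketch (the paper assumes a data structure answering such queries in polylogarithmic time; the function name and toy vector are ours):

```python
import numpy as np

rng = np.random.default_rng(1)

def l2_sample(v):
    """Draw index i with probability v[i]^2 / ||v||^2 (an l2-norm sample).
    Naive O(n) implementation, purely to illustrate the primitive that
    plays the role of measuring a quantum superposition."""
    p = v**2 / np.dot(v, v)
    return int(rng.choice(len(v), p=p))

v = np.array([3.0, 4.0])   # indices drawn with probabilities 9/25 and 16/25
draws = np.array([l2_sample(v) for _ in range(5000)])
```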

Tang, Zhihao Gavin 
STOC '19: "Tight Approximation Ratio ..."
Tight Approximation Ratio of Anonymous Pricing
Yaonan Jin, Pinyan Lu, Qi Qi, Zhihao Gavin Tang, and Tao Xiao (Columbia University, USA; Shanghai University of Finance and Economics, China; Hong Kong University of Science and Technology, China; Shanghai Jiao Tong University, China) This paper considers two canonical Bayesian mechanism design settings. In the single-item setting, the tight approximation ratio of Anonymous Pricing is obtained: (1) compared to Myerson Auction, Anonymous Pricing always generates at least a 1/2.62 fraction of the revenue; (2) there is a matching lower-bound instance. In the unit-demand single-buyer setting, the tight approximation ratio between the simplest deterministic mechanism and the optimal deterministic mechanism is attained: in terms of revenue, (1) Uniform Pricing admits a 2.62-approximation to Item Pricing; (2) a matching lower-bound instance is also presented. These results answer two open questions asked by Alaei et al. (FOCS’15) and Cai and Daskalakis (GEB’15). As an implication, in the single-item setting, the approximation ratio of Second-Price Auction with Anonymous Reserve (Hartline and Roughgarden EC’09) is improved to 2.62, which breaks the best known upper bound of e ≈ 2.72. @InProceedings{STOC19p674, author = {Yaonan Jin and Pinyan Lu and Qi Qi and Zhihao Gavin Tang and Tao Xiao}, title = {Tight Approximation Ratio of Anonymous Pricing}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {674--685}, doi = {10.1145/3313276.3316331}, year = {2019}, } Publisher's Version 
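Anonymous Pricing itself is the simplest possible mechanism: post one price p to everyone and sell to any bidder willing to pay it. A Monte Carlo sketch of its expected revenue on a toy instance of our own choosing (not one from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def posted_price_revenue(p, values):
    """Monte Carlo estimate of the expected revenue from posting a single
    anonymous price p: the item sells at price p iff some bidder's value
    is at least p.  `values` has shape (trials, bidders)."""
    sells = (values >= p).any(axis=1)
    return float(p * sells.mean())

# Toy instance (ours): one bidder with value uniform on [0, 1).
# Analytically the revenue is p*(1-p), maximized at p = 1/2 with value 1/4.
values = rng.uniform(size=(40000, 1))
rev_half = posted_price_revenue(0.5, values)
```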

Tardos, Gábor 
STOC '19: "Planar Point Sets Determine ..."
Planar Point Sets Determine Many Pairwise Crossing Segments
János Pach, Natan Rubin, and Gábor Tardos (EPFL, Switzerland; Rényi Institute, Hungary; Ben-Gurion University of the Negev, Israel; Central European University, Hungary) We show that any set of n points in general position in the plane determines n^{1−o(1)} pairwise crossing segments. The best previously known lower bound, Ω(√n), was proved more than 25 years ago by Aronov, Erdős, Goddard, Kleitman, Klugerman, Pach, and Schulman. Our proof is fully constructive, and extends to dense geometric graphs. @InProceedings{STOC19p1158, author = {János Pach and Natan Rubin and Gábor Tardos}, title = {Planar Point Sets Determine Many Pairwise Crossing Segments}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1158--1166}, doi = {10.1145/3313276.3316328}, year = {2019}, } Publisher's Version 

Tell, Roei 
STOC '19: "Bootstrapping Results for ..."
Bootstrapping Results for Threshold Circuits “Just Beyond” Known Lower Bounds
Lijie Chen and Roei Tell (Massachusetts Institute of Technology, USA; Weizmann Institute of Science, Israel) The best known lower bounds for the circuit class TC^{0} are only slightly superlinear. Similarly, the best known algorithm for derandomization of this class is an algorithm for quantified derandomization (i.e., a weak type of derandomization) of circuits of slightly superlinear size. In this paper we show that even very mild quantitative improvements of either of the two foregoing results would already imply superpolynomial lower bounds for TC^{0}. Specifically: (1) If for every c>1 and sufficiently large d∈ℕ it holds that n-bit TC^{0} circuits of depth d require n^{1+c^{−d}} wires to compute certain NC^{1}-complete functions, then TC^{0}≠NC^{1}. In fact, even lower bounds for TC^{0} circuits of size n^{1+c^{−d}} against these functions when c>1 is fixed and sufficiently small would yield lower bounds for polynomial-sized circuits. Lower bounds of the form n^{1+c^{−d}} against these functions are already known, but for a fixed c≈2.41 that is too large to yield new lower bounds via our results. (2) If there exists a deterministic algorithm that gets as input an n-bit TC^{0} circuit of depth d and n^{1+(1.61)^{−d}} wires, runs in time 2^{n^{o(1)}}, and distinguishes circuits that accept at most B(n)=2^{n^{1−(1.61)^{−d}}} inputs from circuits that reject at most B(n) inputs, then NEXP⊈TC^{0}. An algorithm for this “quantified derandomization” task is already known, but it works only when the number of wires is n^{1+c^{−d}}, for c>30, and with a smaller B(n)≈2^{n^{1−(30/c)^{d}}}. Intuitively, the “takeaway” message from our work is that the gap between currently-known results and results that would suffice to get superpolynomial lower bounds for TC^{0} boils down to the precise constant c>1 in the bound n^{1+c^{−d}} on the number of wires. 
Our results improve previous results of Allender and Koucký (2010) and of the second author (2018), respectively, whose hypotheses referred to circuits with n^{1+c/d} wires (rather than n^{1+c^{−d}} wires). We also prove results similar to the two results above for other circuit classes (i.e., ACC^{0} and CC^{0}). @InProceedings{STOC19p34, author = {Lijie Chen and Roei Tell}, title = {Bootstrapping Results for Threshold Circuits “Just Beyond” Known Lower Bounds}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {34--41}, doi = {10.1145/3313276.3316333}, year = {2019}, } Publisher's Version 
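The role of the constant c in the wire bound n^{1+c^{−d}} is pure arithmetic and easy to check numerically (function name ours):

```python
def wires(n, c, d):
    """Wire bound n^(1 + c^(-d)) from the abstract above: for fixed depth d,
    a smaller c gives a larger exponent, i.e., a hypothesis that rules out
    circuits with *more* wires."""
    return n ** (1 + c ** (-d))

# Depth d = 3: the known lower bound has c ~ 2.41 (exponent ~ 1.0714);
# the paper's hypotheses ask for any fixed c > 1, e.g. c = 1.1
# (exponent ~ 1.7513), which demands ruling out far larger circuits.
known  = wires(10**6, 2.41, 3)
wanted = wires(10**6, 1.1, 3)
```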

Tripathi, Utkarsh 
STOC '19: "A Fixed-Depth Size-Hierarchy ..."
A Fixed-Depth Size-Hierarchy Theorem for AC^{0}[⊕] via the Coin Problem
Nutan Limaye, Karteek Sreenivasaiah, Srikanth Srinivasan, Utkarsh Tripathi, and S. Venkitesh (IIT Bombay, India; IIT Hyderabad, India) In this work we prove the first Fixed-Depth Size-Hierarchy Theorem for uniform AC^{0}[⊕]. In particular, we show that for any fixed d, the classes C_{d,k} of functions that have uniform AC^{0}[⊕] formulas of depth d and size n^{k} form an infinite hierarchy. We show this by exhibiting the first class of explicit functions where we have nearly (up to a polynomial factor) matching upper and lower bounds for the class of AC^{0}[⊕] formulas. The explicit functions are derived from the δ-Coin Problem, which is the computational problem of distinguishing between coins that are heads with probability (1+δ)/2 or (1−δ)/2, where δ is a parameter that is going to 0. We study the complexity of this problem and make progress on both the upper bound and lower bound fronts. Upper bounds. For any constant d≥ 2, we show that there are explicit monotone AC^{0} formulas (i.e. made up of AND and OR gates only) solving the δ-coin problem that have depth d, size exp(O(d(1/δ)^{1/(d−1)})), and sample complexity (i.e. number of inputs) poly(1/δ). This matches previous upper bounds of O’Donnell and Wimmer (ICALP 2007) and Amano (ICALP 2009) in terms of size (which is optimal) and improves the sample complexity from exp(O(d(1/δ)^{1/(d−1)})) to poly(1/δ). Lower bounds. We show that the above upper bounds are nearly tight (in terms of size) even for the significantly stronger model of AC^{0}[⊕] formulas (which are also allowed NOT and Parity gates): formally, we show that any AC^{0}[⊕] formula solving the δ-coin problem must have size exp(Ω(d(1/δ)^{1/(d−1)})). This strengthens a result of Shaltiel and Viola (SICOMP 2010), who prove an exp(Ω((1/δ)^{1/(d+2)})) lower bound for AC^{0}[⊕], and a lower bound of exp(Ω((1/δ)^{1/(d−1)})) shown by Cohen, Ganor and Raz (APPROX-RANDOM 2014) for the class AC^{0}. 
The upper bound is a derandomization involving a use of Janson’s inequality and classical combinatorial designs. The lower bound involves proving an optimal degree lower bound for polynomials over 𝔽_{2} solving the δ-coin problem. @InProceedings{STOC19p442, author = {Nutan Limaye and Karteek Sreenivasaiah and Srikanth Srinivasan and Utkarsh Tripathi and S. Venkitesh}, title = {A Fixed-Depth Size-Hierarchy Theorem for AC<sup>0</sup>[⊕] via the Coin Problem}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {442--453}, doi = {10.1145/3313276.3316339}, year = {2019}, } Publisher's Version 
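The δ-coin problem itself is easy to simulate: a simple majority vote over roughly 1/δ² flips already distinguishes the two coins with constant advantage. This is only the information-theoretic baseline; the paper asks how well small-depth formulas can do (function name ours):

```python
import numpy as np

rng = np.random.default_rng(2)

def majority_distinguisher(delta, n_flips, trials=2000):
    """Empirical success rate of the majority vote for the delta-coin
    problem: guess 'high bias' iff more than half of n_flips flips are
    heads.  About 1/delta^2 flips suffice for constant advantage."""
    correct = 0
    for _ in range(trials):
        bias = rng.choice([(1 + delta) / 2, (1 - delta) / 2])
        heads = rng.random(n_flips) < bias
        guess_high = heads.mean() > 0.5
        correct += guess_high == (bias > 0.5)
    return correct / trials
```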

Ueckerdt, Torsten 