STOC 2017 – Author Index
Aaronson, Scott |
STOC '17: "The Computational Complexity ..."
The Computational Complexity of Ball Permutations
Scott Aaronson, Adam Bouland, Greg Kuperberg, and Saeed Mehraban (University of Texas at Austin, USA; Massachusetts Institute of Technology, USA; University of California at Davis, USA) We define several models of computation based on permuting distinguishable particles (which we call balls) and characterize their computational complexity. In the quantum setting, we use the representation theory of the symmetric group to find variants of this model which are intermediate between BPP and DQC1 (the class of problems solvable with one clean qubit) and between DQC1 and BQP. Furthermore, we consider a restricted version of this model based on an exactly solvable scattering problem of particles moving on a line. Despite the simplicity of this model from the perspective of mathematical physics, we show that if we allow intermediate destructive measurements and specific input states, then the model cannot be efficiently simulated classically up to multiplicative error unless the polynomial hierarchy collapses. Finally, we define a classical version of this model in which one can probabilistically permute balls. We find this yields a complexity class which is intermediate between L and BPP, and that a nondeterministic version of this model is NP-complete. @InProceedings{STOC17p317, author = {Scott Aaronson and Adam Bouland and Greg Kuperberg and Saeed Mehraban}, title = {The Computational Complexity of Ball Permutations}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {317--327}, doi = {}, year = {2017}, } |
|
Abolhassani, Melika |
STOC '17: "Beating 1-1/e for Ordered ..."
Beating 1-1/e for Ordered Prophets
Melika Abolhassani, Soheil Ehsani, Hossein Esfandiari, MohammadTaghi HajiAghayi, Robert Kleinberg, and Brendan Lucier (University of Maryland at College Park, USA; Cornell University, USA; Microsoft Research, USA) Hill and Kertz studied the prophet inequality on iid distributions [The Annals of Probability 1982]. They proved a theoretical bound of 1 − 1/e on the approximation factor of their algorithm. They conjectured that the best approximation factor for arbitrarily large n is 1/(1+1/e) ≃ 0.731. This conjecture had remained open for over 30 years prior to this paper. In this paper we present a threshold-based algorithm for the prophet inequality with n iid distributions. Using a nontrivial and novel approach we show that our algorithm is a 0.738-approximation algorithm. By beating the bound of 1/(1+1/e), this refutes the conjecture of Hill and Kertz. Moreover, we generalize our results to non-uniform distributions and discuss their applications in mechanism design. @InProceedings{STOC17p61, author = {Melika Abolhassani and Soheil Ehsani and Hossein Esfandiari and MohammadTaghi HajiAghayi and Robert Kleinberg and Brendan Lucier}, title = {Beating 1-1/e for Ordered Prophets}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {61--71}, doi = {}, year = {2017}, } |
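To make the threshold idea concrete, here is a minimal Monte Carlo sketch of a single-threshold stopping rule in the iid setting. The uniform distribution, the 1/e-quantile threshold, and all parameters are illustrative assumptions; the paper's 0.738-approximation uses a more refined rule, not this one.

```python
import math
import random

def threshold_rule(xs, tau):
    """Accept the first draw that clears tau; fall back to the last draw."""
    for x in xs[:-1]:
        if x >= tau:
            return x
    return xs[-1]

def simulate(n=20, trials=200_000, seed=0):
    rng = random.Random(seed)
    tau = math.exp(-1.0 / n)  # for Uniform(0,1): P(max of n draws <= tau) = 1/e
    alg = prophet = 0.0
    for _ in range(trials):
        xs = [rng.random() for _ in range(n)]
        alg += threshold_rule(xs, tau)
        prophet += max(xs)  # the prophet simply takes the maximum
    print(f"empirical ALG/prophet ratio: {alg / prophet:.3f}")

simulate()
```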
|
Agarwal, Naman |
STOC '17: "Finding Approximate Local ..."
Finding Approximate Local Minima Faster than Gradient Descent
Naman Agarwal, Zeyuan Allen-Zhu, Brian Bullins, Elad Hazan, and Tengyu Ma (Princeton University, USA; IAS, USA) We design a non-convex second-order optimization algorithm that is guaranteed to return an approximate local minimum in time which scales linearly in the underlying dimension and the number of training examples. The time complexity of our algorithm to find an approximate local minimum is even faster than that of gradient descent to find a critical point. Our algorithm applies to a general class of optimization problems including training a neural network and other non-convex objectives arising in machine learning. @InProceedings{STOC17p1195, author = {Naman Agarwal and Zeyuan Allen-Zhu and Brian Bullins and Elad Hazan and Tengyu Ma}, title = {Finding Approximate Local Minima Faster than Gradient Descent}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1195--1199}, doi = {}, year = {2017}, } |
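A minimal numerical illustration of the gap the title refers to: plain gradient descent can converge to a critical point that is a saddle, while a tiny perturbation escapes to a local minimum. The toy function and the noise-injection trick below are illustrative assumptions, not the paper's second-order algorithm.

```python
import random

def grad(x, y):
    # f(x, y) = x**2 / 2 + y**4 / 4 - y**2 / 2
    # (0, 0) is a saddle point; (0, +1) and (0, -1) are local minima.
    return x, y**3 - y

def descend(x, y, steps=2000, eta=0.05, noise=0.0, seed=1):
    rng = random.Random(seed)
    for _ in range(steps):
        gx, gy = grad(x, y)
        x -= eta * (gx + noise * rng.gauss(0, 1))
        y -= eta * (gy + noise * rng.gauss(0, 1))
    return x, y

print("plain GD:    ", descend(0.5, 0.0))             # stalls at the saddle (0, 0)
print("perturbed GD:", descend(0.5, 0.0, noise=1e-3)) # escapes to (0, +-1)
```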
|
Allen-Zhu, Zeyuan |
STOC '17: "Katyusha: The First Direct ..."
Katyusha: The First Direct Acceleration of Stochastic Gradient Methods
Zeyuan Allen-Zhu (IAS, USA; Princeton University, USA) Nesterov’s momentum trick is famously known for accelerating gradient descent, and has been proven useful in building fast iterative algorithms. However, in the stochastic setting, counterexamples exist and prevent Nesterov’s momentum from providing similar acceleration, even if the underlying problem is convex. We introduce Katyusha, a direct, primal-only stochastic gradient method to fix this issue. It has a provably accelerated convergence rate in convex (off-line) stochastic optimization. The main ingredient is Katyusha momentum, a novel “negative momentum” on top of Nesterov’s momentum that can be incorporated into a variance-reduction based algorithm and speed it up. Since variance reduction has been successfully applied to a growing list of practical problems, our paper suggests that in each of such cases, one could potentially give Katyusha a hug. @InProceedings{STOC17p1200, author = {Zeyuan Allen-Zhu}, title = {Katyusha: The First Direct Acceleration of Stochastic Gradient Methods}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1200--1205}, doi = {}, year = {2017}, } STOC '17: "Finding Approximate Local ..." Finding Approximate Local Minima Faster than Gradient Descent Naman Agarwal, Zeyuan Allen-Zhu, Brian Bullins, Elad Hazan, and Tengyu Ma (Princeton University, USA; IAS, USA) We design a non-convex second-order optimization algorithm that is guaranteed to return an approximate local minimum in time which scales linearly in the underlying dimension and the number of training examples. The time complexity of our algorithm to find an approximate local minimum is even faster than that of gradient descent to find a critical point. Our algorithm applies to a general class of optimization problems including training a neural network and other non-convex objectives arising in machine learning. @InProceedings{STOC17p1195, author = {Naman Agarwal and Zeyuan Allen-Zhu and Brian Bullins and Elad Hazan and Tengyu Ma}, title = {Finding Approximate Local Minima Faster than Gradient Descent}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1195--1199}, doi = {}, year = {2017}, } |
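A schematic rendering, on least squares, of the update described in the abstract: a variance-reduced (SVRG-style) gradient estimate combined with a three-point coupling in which the snapshot term supplies the "negative momentum". The step-size and momentum constants here are illustrative guesses, not the paper's tuned choices.

```python
import numpy as np

def katyusha_ls(A, b, epochs=30, seed=0):
    """Schematic Katyusha-style loop on f(x) = (1/2n) * ||Ax - b||^2.
    The coupling of (z, y, snapshot) and the variance-reduced gradient
    follow the abstract's description; constants are illustrative."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    L = np.linalg.norm(A, 2) ** 2 / n      # smoothness constant of f
    tau1, tau2 = 0.3, 0.5                  # illustrative momentum weights
    alpha = 1.0 / (3.0 * tau1 * L)
    x_tilde = np.zeros(d)
    y = z = x_tilde.copy()
    for _ in range(epochs):
        mu = A.T @ (A @ x_tilde - b) / n   # full gradient at the snapshot
        for _ in range(n):
            x = tau1 * z + tau2 * x_tilde + (1 - tau1 - tau2) * y
            i = rng.integers(n)
            g = mu + A[i] * (A[i] @ x - A[i] @ x_tilde)  # variance-reduced estimate
            y = x - g / (3.0 * L)          # gradient step
            z = z - alpha * g              # "Katyusha momentum" anchor step
        x_tilde = y                        # simplified snapshot update
    return x_tilde

A = np.random.default_rng(1).normal(size=(200, 20))
x_star = np.random.default_rng(2).normal(size=20)
b = A @ x_star
print("distance to optimum:", np.linalg.norm(katyusha_ls(A, b) - x_star))
```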
|
Alman, Josh |
STOC '17: "Probabilistic Rank and Matrix ..."
Probabilistic Rank and Matrix Rigidity
Josh Alman and Ryan Williams (Massachusetts Institute of Technology, USA) We consider a notion of probabilistic rank and probabilistic sign-rank of a matrix, which measure the extent to which a matrix can be probabilistically represented by low-rank matrices. We demonstrate several connections with matrix rigidity, communication complexity, and circuit lower bounds. The most interesting outcomes are: The Walsh-Hadamard Transform is Not Very Rigid. We give surprising upper bounds on the rigidity of a family of matrices whose rigidity has been extensively studied, and was conjectured to be highly rigid. For the 2^n × 2^n Walsh-Hadamard transform H_n (a.k.a. Sylvester matrices, a.k.a. the communication matrix of Inner Product modulo 2), we show how to modify only 2^{εn} entries in each row and make the rank of H_n drop below 2^{n(1−Ω(ε^2/log(1/ε)))}, for all small ε > 0, over any field. That is, it is not possible to prove arithmetic circuit lower bounds on Hadamard matrices such as H_n via L. Valiant’s matrix rigidity approach. We also show non-trivial rigidity upper bounds for H_n with smaller target rank. Matrix Rigidity and Threshold Circuit Lower Bounds. We give new consequences of rigid matrices for Boolean circuit complexity. First, we show that explicit n × n Boolean matrices which maintain rank at least 2^{(log n)^{1−δ}} after n^2/2^{(log n)^{δ/2}} modified entries (over any field, for any δ > 0) would yield an explicit function that does not have sub-quadratic-size AC^0 circuits with two layers of arbitrary linear threshold gates. Second, we prove that explicit 0/1 matrices over the reals which are modestly more rigid than the best known rigidity lower bounds for sign-rank would imply exponential-gate lower bounds for the infamously difficult class of depth-two linear threshold circuits with arbitrary weights on both layers. In particular, we show that matrices defined by these seemingly-difficult circuit classes actually have low probabilistic rank and sign-rank, respectively. An Equivalence Between Communication, Probabilistic Rank, and Rigidity. It has been known since Razborov [1989] that explicit rigidity lower bounds would resolve longstanding lower-bound problems in communication complexity, but it seemed possible that communication lower bounds could be proved without making progress on matrix rigidity. We show that for every function f which is randomly self-reducible in a natural way (the inner product mod 2 is an example), bounding the communication complexity of f (in a precise technical sense) is equivalent to bounding the rigidity of the matrix of f, via an equivalence with probabilistic rank. @InProceedings{STOC17p641, author = {Josh Alman and Ryan Williams}, title = {Probabilistic Rank and Matrix Rigidity}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {641--652}, doi = {}, year = {2017}, } |
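As background for the first result, a short sketch that constructs H_n and checks the inner-product-mod-2 view mentioned in the abstract; which entries to modify to collapse the rank is the paper's contribution and is not reproduced here.

```python
import numpy as np
from itertools import product

def sylvester(n):
    """2^n x 2^n Walsh-Hadamard matrix via Kronecker powers of [[1,1],[1,-1]]."""
    H = np.array([[1]])
    block = np.array([[1, 1], [1, -1]])
    for _ in range(n):
        H = np.kron(H, block)
    return H

n = 4
H = sylvester(n)
# Same matrix, viewed as the communication matrix of inner product mod 2:
IP = np.array([[(-1) ** (sum(a * b for a, b in zip(x, y)) % 2)
                for y in product((0, 1), repeat=n)]
               for x in product((0, 1), repeat=n)])
assert (H == IP).all()
print("rank over the reals:", np.linalg.matrix_rank(H))  # full rank 2^n
```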
|
Ambainis, Andris |
STOC '17: "Quantum Algorithm for Tree ..."
Quantum Algorithm for Tree Size Estimation, with Applications to Backtracking and 2-Player Games
Andris Ambainis and Martins Kokainis (University of Latvia, Latvia) We study quantum algorithms on search trees of unknown structure, in a model where the tree can be discovered by local exploration. That is, we are given the root of the tree and access to a black box which, given a vertex v, outputs the children of v. We construct a quantum algorithm which, given such access to a search tree of depth at most n, estimates the size T of the tree within a factor of 1 ± δ in Õ(√(nT)) steps. More generally, the same algorithm can be used to estimate the size of directed acyclic graphs (DAGs) in a similar model. We then show two applications of this result: a) We show how to transform a classical backtracking search algorithm which examines T nodes of a search tree into an Õ(√T n^{3/2})-time quantum algorithm, improving over an earlier quantum backtracking algorithm of Montanaro (arXiv:1509.02374). b) We give a quantum algorithm for evaluating AND-OR formulas in a model where the formula can be discovered by local exploration (modeling position trees in 2-player games) which evaluates formulas of size T and depth T^{o(1)} in time O(T^{1/2+o(1)}). Thus, the quantum speedup is essentially the same as in the case when the formula is known in advance. @InProceedings{STOC17p989, author = {Andris Ambainis and Martins Kokainis}, title = {Quantum Algorithm for Tree Size Estimation, with Applications to Backtracking and 2-Player Games}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {989--1002}, doi = {}, year = {2017}, } |
|
Anari, Nima |
STOC '17: "A Generalization of Permanent ..."
A Generalization of Permanent Inequalities and Applications in Counting and Optimization
Nima Anari and Shayan Oveis Gharan (Stanford University, USA; University of Washington, USA) A polynomial p ∈ ℝ[z_1,…,z_n] is real stable if it has no roots in the upper-half complex plane. Gurvits’s permanent inequality gives a lower bound on the coefficient of the z_1 z_2 ⋯ z_n monomial of a real stable polynomial p with nonnegative coefficients. This fundamental inequality has been used to attack several counting and optimization problems. Here, we study a more general question: Given a stable multilinear polynomial p with nonnegative coefficients and a set of monomials S, we show that if the polynomial obtained by summing up all monomials in S is real stable, then we can lower bound the sum of coefficients of monomials of p that are in S. We also prove generalizations of this theorem to (real stable) polynomials that are not multilinear. We use our theorem to give a new proof of Schrijver’s inequality on the number of perfect matchings of a regular bipartite graph, generalize a recent result of Nikolov and Singh, and give deterministic polynomial time approximation algorithms for several counting problems. @InProceedings{STOC17p384, author = {Nima Anari and Shayan Oveis Gharan}, title = {A Generalization of Permanent Inequalities and Applications in Counting and Optimization}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {384--396}, doi = {}, year = {2017}, } |
|
Andoni, Alexandr |
STOC '17: "Approximate Near Neighbors ..."
Approximate Near Neighbors for General Symmetric Norms
Alexandr Andoni, Huy L. Nguyen, Aleksandar Nikolov, Ilya Razenshteyn, and Erik Waingarten (Columbia University, USA; Northeastern University, USA; University of Toronto, Canada; Massachusetts Institute of Technology, USA) We show that every *symmetric* normed space admits an efficient nearest neighbor search data structure with doubly-logarithmic approximation. Specifically, for every n, every d = n^{o(1)}, and every d-dimensional symmetric norm ||·||, there exists a data structure for (log log n)-approximate nearest neighbor search over ||·|| for n-point datasets achieving n^{o(1)} query time and n^{1+o(1)} space. The main technical ingredient of the algorithm is a low-distortion embedding of a symmetric norm into a low-dimensional iterated product of top-k norms. We also show that our techniques cannot be extended to *general* norms. @InProceedings{STOC17p902, author = {Alexandr Andoni and Huy L. Nguyen and Aleksandar Nikolov and Ilya Razenshteyn and Erik Waingarten}, title = {Approximate Near Neighbors for General Symmetric Norms}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {902--913}, doi = {}, year = {2017}, } |
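For intuition about the embedding target, here is the top-k norm in code. This is the textbook definition only; the low-distortion embedding of a general symmetric norm into iterated products of such norms is the paper's technical core and is not shown.

```python
import numpy as np

def top_k_norm(x, k):
    """Sum of the k largest coordinates of x in absolute value.
    k = 1 gives the l_infinity norm and k = len(x) gives the l_1 norm,
    so these norms interpolate between the two extremes."""
    a = np.sort(np.abs(np.asarray(x, dtype=float)))[::-1]
    return a[:k].sum()

x = np.array([3.0, -1.0, 4.0, 1.0, -5.0])
print(top_k_norm(x, 1))  # 5.0  (= max |x_i|)
print(top_k_norm(x, 3))  # 12.0 (= 5 + 4 + 3)
print(top_k_norm(x, 5))  # 14.0 (= l_1 norm)
```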
|
Angel, Omer |
STOC '17: "Local Max-Cut in Smoothed ..."
Local Max-Cut in Smoothed Polynomial Time
Omer Angel, Sébastien Bubeck, Yuval Peres, and Fan Wei (University of British Columbia, Canada; Microsoft Research, USA; Stanford University, USA) In 1988, Johnson, Papadimitriou and Yannakakis wrote that “Practically all the empirical evidence would lead us to conclude that finding locally optimal solutions is much easier than solving NP-hard problems”. Since then the empirical evidence has continued to amass, but formal proofs of this phenomenon have remained elusive. A canonical (and indeed complete) example is the local max-cut problem, for which no polynomial time method is known. In a breakthrough paper, Etscheid and Röglin proved that the smoothed complexity of local max-cut is quasi-polynomial, i.e., if arbitrary bounded weights are randomly perturbed, a local maximum can be found in φ · n^{O(log n)} steps where φ is an upper bound on the random edge weight density. In this paper we prove smoothed polynomial complexity for local max-cut, thus confirming that finding local optima for max-cut is much easier than solving it. @InProceedings{STOC17p429, author = {Omer Angel and Sébastien Bubeck and Yuval Peres and Fan Wei}, title = {Local Max-Cut in Smoothed Polynomial Time}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {429--437}, doi = {}, year = {2017}, } |
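The object of study is the FLIP local search dynamics; below is a minimal sketch on a perturbed instance. The base weights (standing in for the adversary's choice) and the perturbation magnitude are illustrative; the paper's result is about bounding the number of flips in this kind of run.

```python
import random

def local_max_cut(n, seed=0):
    """FLIP local search for local max-cut on a complete weighted graph."""
    rng = random.Random(seed)
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)]
    base = {e: rng.choice((0.1, 0.5, 1.0)) for e in edges}   # "adversarial" weights
    w = {e: base[e] + 1e-3 * rng.random() for e in edges}    # smoothed perturbation
    side = [rng.choice((-1, 1)) for _ in range(n)]

    def gain(v):
        # Change in cut weight if vertex v switches sides.
        g = 0.0
        for u in range(n):
            if u != v:
                e = w[(min(u, v), max(u, v))]
                g += e if side[u] == side[v] else -e
        return g

    steps = 0
    while True:
        improving = [v for v in range(n) if gain(v) > 0]
        if not improving:
            return side, steps          # no single flip improves: local max-cut
        side[rng.choice(improving)] *= -1
        steps += 1

side, steps = local_max_cut(30)
print("reached a local max-cut after", steps, "flips")
```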
|
Angelidakis, Haris |
STOC '17: "Algorithms for Stable and ..."
Algorithms for Stable and Perturbation-Resilient Problems
Haris Angelidakis, Konstantin Makarychev, and Yury Makarychev (Toyota Technological Institute at Chicago, USA; Northwestern University, USA) We study the notion of stability and perturbation resilience introduced by Bilu and Linial (2010) and Awasthi, Blum, and Sheffet (2012). A combinatorial optimization problem is α-stable or α-perturbation-resilient if the optimal solution does not change when we perturb all parameters of the problem by a factor of at most α. In this paper, we give improved algorithms for stable instances of various clustering and combinatorial optimization problems. We also prove several hardness results. We first give an exact algorithm for 2-perturbation resilient instances of clustering problems with natural center-based objectives. The class of clustering problems with natural center-based objectives includes such problems as k-means, k-median, and k-center. Our result improves upon the result of Balcan and Liang (2016), who gave an algorithm for clustering (1+√2) ≈ 2.41 perturbation-resilient instances. Our result is tight in the sense that no polynomial-time algorithm can solve (2−ε)-perturbation resilient instances of k-center unless NP = RP, as was shown by Balcan, Haghtalab, and White (2016). We then give an exact algorithm for (2−2/k)-stable instances of Minimum Multiway Cut with k terminals, improving the previous result of Makarychev, Makarychev, and Vijayaraghavan (2014), who gave an algorithm for 4-stable instances. We also give an algorithm for (2−2/k+δ)-weakly stable instances of Minimum Multiway Cut. Finally, we show that there are no robust polynomial-time algorithms for n^{1−ε}-stable instances of Set Cover, Minimum Vertex Cover, and Min 2-Horn Deletion (unless P = NP). @InProceedings{STOC17p438, author = {Haris Angelidakis and Konstantin Makarychev and Yury Makarychev}, title = {Algorithms for Stable and Perturbation-Resilient Problems}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {438--451}, doi = {}, year = {2017}, } |
|
Anshu, Anurag |
STOC '17: "Exponential Separation of ..."
Exponential Separation of Quantum Communication and Classical Information
Anurag Anshu, Dave Touchette, Penghui Yao, and Nengkun Yu (National University of Singapore, Singapore; University of Waterloo, Canada; Perimeter Institute for Theoretical Physics, Canada; University of Maryland, USA; University of Technology Sydney, Australia) We exhibit a Boolean function for which the quantum communication complexity is exponentially larger than the classical information complexity. An exponential separation in the other direction was already known from the work of Kerenidis et al. [SICOMP 44, pp. 1550–1572], hence our work implies that these two complexity measures are incomparable. As classical information complexity is an upper bound on quantum information complexity, which in turn is equal to amortized quantum communication complexity, our work implies that a tight direct sum result for distributional quantum communication complexity cannot hold. The function we use to present such a separation is the Symmetric k-ary Pointer Jumping function introduced by Rao and Sinha [ECCC TR15-057], whose classical communication complexity is exponentially larger than its classical information complexity. In this paper, we show that the quantum communication complexity of this function is polynomially equivalent to its classical communication complexity. The high-level idea behind our proof is arguably the simplest so far for such an exponential separation between information and communication, driven by a sequence of round-elimination arguments, allowing us to simplify further the approach of Rao and Sinha. As another application of the techniques that we develop, a simple proof for an optimal trade-off between Alice’s and Bob’s communication is given, even when allowing pre-shared entanglement, while computing the related Greater-Than function on n bits: if Bob communicates at most b bits, then Alice must send n/2^{O(b)} bits to Bob. We also present a classical protocol achieving this bound. @InProceedings{STOC17p277, author = {Anurag Anshu and Dave Touchette and Penghui Yao and Nengkun Yu}, title = {Exponential Separation of Quantum Communication and Classical Information}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {277--288}, doi = {}, year = {2017}, } |
|
Arora, Sanjeev |
STOC '17: "Provable Learning of Noisy-or ..."
Provable Learning of Noisy-or Networks
Sanjeev Arora, Rong Ge, Tengyu Ma, and Andrej Risteski (Princeton University, USA; Duke University, USA) Many machine learning applications use latent variable models to explain structure in data, whereby visible variables (= coordinates of the given datapoint) are explained as a probabilistic function of some hidden variables. Learning the model ---that is, the mapping from hidden variables to visible ones and vice versa---is NP-hard even in very simple settings. In recent years, provably efficient algorithms were nevertheless developed for models with linear structure: topic models, mixture models, hidden Markov models, etc. These algorithms use matrix or tensor decomposition, and make some reasonable assumptions about the parameters of the underlying model. But matrix or tensor decomposition seems of little use when the latent variable model has nonlinearities. The current paper shows how to make progress: tensor decomposition is applied for learning the single-layer noisy-OR network, which is a textbook example of a Bayes net, and used for example in the classic QMR-DT software for diagnosing which disease(s) a patient may have by observing the symptoms he/she exhibits. The technical novelty here, which should be useful in other settings in the future, is analysis of tensor decomposition in presence of systematic error (i.e., where the noise/error is correlated with the signal, and doesn't decrease as the number of samples goes to infinity). This requires rethinking all steps of tensor decomposition methods from the ground up. For simplicity our analysis is stated assuming that the network parameters were chosen from a probability distribution but the method seems more generally applicable. @InProceedings{STOC17p1057, author = {Sanjeev Arora and Rong Ge and Tengyu Ma and Andrej Risteski}, title = {Provable Learning of Noisy-or Networks}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1057--1066}, doi = {}, year = {2017}, } |
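A generative sampler for the single-layer noisy-OR model the paper learns. The priors and failure probabilities below are illustrative, and the learning algorithm itself (tensor decomposition under systematic error) is not reproduced here.

```python
import numpy as np

def sample_noisy_or(prior, Q, trials, seed=0):
    """Sample (diseases, symptoms) from a single-layer noisy-OR network.
    prior[i] = P(disease i present)
    Q[i, j]  = P(disease i, if present, fails to trigger symptom j)
    P(symptom j absent | d) = product of Q[i, j] over present diseases i
    (no leak term, for simplicity)."""
    rng = np.random.default_rng(seed)
    d = rng.random((trials, len(prior))) < prior  # hidden layer
    p_absent = np.exp(d @ np.log(Q))              # product of Q[i, :] over present i
    s = rng.random(p_absent.shape) >= p_absent    # visible layer
    return d, s

prior = np.array([0.1, 0.3])
Q = np.array([[0.2, 0.9],
              [0.5, 0.4]])
d, s = sample_noisy_or(prior, Q, trials=100_000)
# Analytically: P(symptom 0 absent) = (0.9 + 0.1*0.2) * (0.7 + 0.3*0.5) = 0.782
print("empirical P(symptom 0):", s[:, 0].mean())  # ~ 0.218
```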
|
Artmann, Stephan |
STOC '17: "A Strongly Polynomial Algorithm ..."
A Strongly Polynomial Algorithm for Bimodular Integer Linear Programming
Stephan Artmann, Robert Weismantel, and Rico Zenklusen (ETH Zurich, Switzerland) We present a strongly polynomial algorithm to solve integer programs of the form max{c^T x : Ax ≤ b, x ∈ ℤ^n}, for A ∈ ℤ^{m×n} with rank(A) = n, b ∈ ℤ^m, c ∈ ℤ^n, and where all determinants of (n×n)-sub-matrices of A are bounded by 2 in absolute value. In particular, this implies that integer programs max{c^T x : Qx ≤ b, x ∈ ℤ^n_{≥0}}, where Q ∈ ℤ^{m×n} has the property that all subdeterminants are bounded by 2 in absolute value, can be solved in strongly polynomial time. We thus obtain an extension of the well-known result that integer programs with constraint matrices that are totally unimodular are solvable in strongly polynomial time. @InProceedings{STOC17p1206, author = {Stephan Artmann and Robert Weismantel and Rico Zenklusen}, title = {A Strongly Polynomial Algorithm for Bimodular Integer Linear Programming}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1206--1219}, doi = {}, year = {2017}, } |
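A brute-force checker for the theorem's hypothesis (all n×n subdeterminants bounded by 2 in absolute value), usable only on tiny instances; the strongly polynomial algorithm itself is the paper's contribution and is not sketched here.

```python
from fractions import Fraction
from itertools import combinations

def det_exact(M):
    """Exact determinant via Gaussian elimination over the rationals."""
    M = [[Fraction(v) for v in row] for row in M]
    n, det = len(M), Fraction(1)
    for i in range(n):
        pivot = next((r for r in range(i, n) if M[r][i] != 0), None)
        if pivot is None:
            return Fraction(0)
        if pivot != i:
            M[i], M[pivot] = M[pivot], M[i]
            det = -det
        det *= M[i][i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n):
                M[r][c] -= f * M[i][c]
    return det

def max_abs_subdeterminant(A):
    """Largest |det| over all n x n row-submatrices of an m x n integer
    matrix (brute force, only meant for tiny illustrative instances)."""
    n = len(A[0])
    return max(abs(det_exact([A[r] for r in rows]))
               for rows in combinations(range(len(A)), n))

# Totally unimodular matrices have all subdeterminants in {-1, 0, 1};
# the paper's bimodular regime allows absolute values up to 2.
A = [[1, 1, 0],
     [0, 1, 1],
     [1, 0, 1],
     [1, 0, 0]]
print("max |n x n subdeterminant|:", max_abs_subdeterminant(A))  # 2
```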
|
Arvind, V. |
STOC '17: "Randomized Polynomial Time ..."
Randomized Polynomial Time Identity Testing for Noncommutative Circuits
V. Arvind, Pushkar S Joglekar, Partha Mukhopadhyay, and S. Raja (Institute of Mathematical Sciences, India; Vishwakarma Institute of Technology Pune, India; Chennai Mathematical Institute, India) In this paper we show that black-box polynomial identity testing for noncommutative polynomials f ∈ F⟨z_1, z_2, …, z_n⟩ of degree D and sparsity t can be done in randomized poly(n, log t, log D) time. As a consequence, given a circuit C of size s computing a polynomial f ∈ F⟨z_1, z_2, …, z_n⟩ with at most t non-zero monomials, testing if f is identically zero can be done by a randomized algorithm with running time polynomial in s, n, and log t. This makes significant progress on a question that has been open for over ten years. Our algorithm is based on automata-theoretic ideas that can efficiently isolate a monomial in the given polynomial. In particular, we carry out the monomial isolation using nondeterministic automata. In general, noncommutative circuits of size s can compute polynomials of degree exponential in s and number of monomials double-exponential in s. In this paper, we consider a natural class of homogeneous noncommutative circuits, that we call +-regular circuits, and give a white-box polynomial time deterministic polynomial identity test. These circuits can compute noncommutative polynomials with number of monomials double-exponential in the circuit size. Our algorithm combines some new structural results for +-regular circuits with known results for noncommutative ABP identity testing, rank bound of commutative depth three identities, and equivalence testing problem for words. Finally, we consider the black-box identity testing problem for depth three +-regular circuits and give a randomized polynomial time identity test. In particular, we show that if f ∈ F⟨Z⟩ is a nonzero noncommutative polynomial computed by a depth three +-regular circuit of size s, then f cannot be a polynomial identity for the matrix algebra M_s(F) when F is sufficiently large depending on the degree of f. @InProceedings{STOC17p831, author = {V. Arvind and Pushkar S Joglekar and Partha Mukhopadhyay and S. Raja}, title = {Randomized Polynomial Time Identity Testing for Noncommutative Circuits}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {831--841}, doi = {}, year = {2017}, } |
|
Azar, Yossi |
STOC '17: "Online Service with Delay ..."
Online Service with Delay
Yossi Azar, Arun Ganesh, Rong Ge, and Debmalya Panigrahi (Tel Aviv University, Israel; Duke University, USA) In this paper, we introduce the online service with delay problem. In this problem, there are n points in a metric space that issue service requests over time, and a server that serves these requests. The goal is to minimize the sum of distance traveled by the server and the total delay (or a penalty function thereof) in serving the requests. This problem models the fundamental tradeoff between batching requests to improve locality and reducing delay to improve response time, which has many applications in operations management, operating systems, logistics, supply chain management, and scheduling. Our main result is a poly-logarithmic competitive ratio for the online service with delay problem. This result is obtained by an algorithm that we call the preemptive service algorithm. The salient feature of this algorithm is a process called preemptive service, which uses a novel combination of (recursive) time forwarding and spatial exploration on a metric space. We also generalize our results to k > 1 servers, and obtain stronger results for special metrics such as uniform and star metrics that correspond to (weighted) paging problems. @InProceedings{STOC17p551, author = {Yossi Azar and Arun Ganesh and Rong Ge and Debmalya Panigrahi}, title = {Online Service with Delay}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {551--563}, doi = {}, year = {2017}, } |
|
Babaioff, Moshe |
STOC '17: "The Menu-Size Complexity of ..."
The Menu-Size Complexity of Revenue Approximation
Moshe Babaioff, Yannai A. Gonczarowski, and Noam Nisan (Microsoft Research, Israel; Hebrew University of Jerusalem, Israel) We consider a monopolist that is selling n items to a single additive buyer, where the buyer’s values for the items are drawn according to independent distributions F_1, F_2, …, F_n that possibly have unbounded support. It is well known that — unlike in the single item case — the revenue-optimal auction (a pricing scheme) may be complex, sometimes requiring a continuum of menu entries. It is also known that simple auctions with a finite bounded number of menu entries can extract a constant fraction of the optimal revenue. Nonetheless, the question of the possibility of extracting an arbitrarily high fraction of the optimal revenue via a finite menu size remained open. In this paper, we give an affirmative answer to this open question, showing that for every n and for every ε > 0, there exists a complexity bound C = C(n,ε) such that auctions of menu size at most C suffice for obtaining a (1−ε) fraction of the optimal revenue from any F_1, …, F_n. We prove upper and lower bounds on the revenue approximation complexity C(n,ε), as well as on the deterministic communication complexity required to run an auction that achieves such an approximation. @InProceedings{STOC17p869, author = {Moshe Babaioff and Yannai A. Gonczarowski and Noam Nisan}, title = {The Menu-Size Complexity of Revenue Approximation}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {869--877}, doi = {}, year = {2017}, } |
|
Babichenko, Yakov |
STOC '17: "Communication Complexity of ..."
Communication Complexity of Approximate Nash Equilibria
Yakov Babichenko and Aviad Rubinstein (Technion, Israel; University of California at Berkeley, USA) For a constant є, we prove a poly(N) lower bound on the (randomized) communication complexity of є-Nash equilibrium in two-player N × N games. For n-player binary-action games we prove an exp(n) lower bound for the (randomized) communication complexity of (є,є)-weak approximate Nash equilibrium, which is a profile of mixed actions such that at least a (1−є)-fraction of the players are є-best replying. @InProceedings{STOC17p878, author = {Yakov Babichenko and Aviad Rubinstein}, title = {Communication Complexity of Approximate Nash Equilibria}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {878--889}, doi = {}, year = {2017}, } |
|
Balkanski, Eric |
STOC '17: "The Limitations of Optimization ..."
The Limitations of Optimization from Samples
Eric Balkanski, Aviad Rubinstein, and Yaron Singer (Harvard University, USA; University of California at Berkeley, USA) In this paper we consider the following question: can we optimize objective functions from the training data we use to learn them? We formalize this question through a novel framework we call optimization from samples (OPS). In OPS, we are given sampled values of a function drawn from some distribution and the objective is to optimize the function under some constraint. While there are interesting classes of functions that can be optimized from samples, our main result is an impossibility. We show that there are classes of functions which are statistically learnable and optimizable, but for which no reasonable approximation for optimization from samples is achievable. In particular, our main result shows that there is no constant factor approximation for maximizing coverage functions under a cardinality constraint using polynomially-many samples drawn from any distribution. We also show tight approximation guarantees for maximization under a cardinality constraint of several interesting classes of functions including unit-demand, additive, and general monotone submodular functions, as well as a constant factor approximation for monotone submodular functions with bounded curvature. @InProceedings{STOC17p1016, author = {Eric Balkanski and Aviad Rubinstein and Yaron Singer}, title = {The Limitations of Optimization from Samples}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1016--1027}, doi = {}, year = {2017}, } |
|
Ball, Marshall |
STOC '17: "Average-Case Fine-Grained ..."
Average-Case Fine-Grained Hardness
Marshall Ball, Alon Rosen, Manuel Sabin, and Prashant Nalini Vasudevan (Columbia University, USA; IDC Herzliya, Israel; University of California at Berkeley, USA; Massachusetts Institute of Technology, USA) We present functions that can be computed in some fixed polynomial time but are hard on average for any algorithm that runs in slightly smaller time, assuming widely-conjectured worst-case hardness for problems from the study of fine-grained complexity. Unconditional constructions of such functions are known from before (Goldmann et al., IPL ’94), but these have been canonical functions that have not found further use, while our functions are closely related to well-studied problems and have considerable algebraic structure. Based on the average-case hardness and structural properties of our functions, we outline the construction of a Proof of Work scheme and discuss possible approaches to constructing fine-grained One-Way Functions. We also show how our reductions make conjectures regarding the worst-case hardness of the problems we reduce from (and consequently the Strong Exponential Time Hypothesis) heuristically falsifiable in a sense similar to that of (Naor, CRYPTO ’03). We prove our hardness results in each case by showing fine-grained reductions from solving one of three problems – namely, Orthogonal Vectors (OV), 3SUM, and All-Pairs Shortest Paths (APSP) – in the worst case to computing our function correctly on a uniformly random input. The conjectured hardness of OV and 3SUM then gives us functions that require n^{2−o(1)} time to compute on average, and that of APSP gives us a function that requires n^{3−o(1)} time. Using the same techniques we also obtain a conditional average-case time hierarchy of functions. @InProceedings{STOC17p483, author = {Marshall Ball and Alon Rosen and Manuel Sabin and Prashant Nalini Vasudevan}, title = {Average-Case Fine-Grained Hardness}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {483--496}, doi = {}, year = {2017}, } |
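For reference, one of the three source problems stated as code: the naive O(n^2 · d) scan for Orthogonal Vectors, whose conjectured worst-case hardness underlies the n^{2−o(1)} average-case bound. This is the textbook baseline, not the paper's reduction.

```python
from itertools import product

def has_orthogonal_pair(U, V):
    """Orthogonal Vectors: is there u in U and v in V with <u, v> = 0,
    i.e. no coordinate where both are 1?  Naive O(n^2 * d) scan; the OV
    conjecture says no n^(2 - eps) * poly(d) algorithm exists."""
    return any(all(a & b == 0 for a, b in zip(u, v))
               for u, v in product(U, V))

U = [(1, 0, 1), (0, 1, 1)]
V = [(1, 1, 0), (0, 1, 0)]
print(has_orthogonal_pair(U, V))  # True: (1, 0, 1) is orthogonal to (0, 1, 0)
```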
|
Bansal, Nikhil |
STOC '17: "Algorithmic Discrepancy Beyond ..."
Algorithmic Discrepancy Beyond Partial Coloring
Nikhil Bansal and Shashwat Garg (Eindhoven University of Technology, Netherlands) The partial coloring method is one of the most powerful and widely used methods in combinatorial discrepancy problems. However, in many cases it leads to sub-optimal bounds as the partial coloring step must be iterated a logarithmic number of times, and the errors can add up in an adversarial way. We give a new and general algorithmic framework that overcomes the limitations of the partial coloring method and can be applied in a black-box manner to various problems. Using this framework, we give new improved bounds and algorithms for several classic problems in discrepancy. In particular, for Tusnady’s problem, we give an improved O(log^2 n) bound for discrepancy of axis-parallel rectangles and more generally an O_d(log^d n) bound for d-dimensional boxes in ℝ^d. Previously, even non-constructively, the best bounds were O(log^{2.5} n) and O_d(log^{d+0.5} n) respectively. Similarly, for the Steinitz problem we give the first algorithm that matches the best known non-constructive bounds due to Banaszczyk in the ℓ∞ case, and improves the previous algorithmic bounds substantially in the ℓ2 case. Our framework is based upon a substantial generalization of the techniques developed recently in the context of the Komlós discrepancy problem. @InProceedings{STOC17p914, author = {Nikhil Bansal and Shashwat Garg}, title = {Algorithmic Discrepancy Beyond Partial Coloring}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {914--926}, doi = {}, year = {2017}, } STOC '17: "Faster Space-Efficient Algorithms ..." Faster Space-Efficient Algorithms for Subset Sum and k-Sum Nikhil Bansal, Shashwat Garg, Jesper Nederlof, and Nikhil Vyas (Eindhoven University of Technology, Netherlands; IIT Bombay, India) We present randomized algorithms that solve Subset Sum and Knapsack instances with n items in O*(2^{0.86n}) time, where the O*(·) notation suppresses factors polynomial in the input size, and polynomial space, assuming random read-only access to exponentially many random bits. These results can be extended to solve Binary Linear Programming on n variables with few constraints in a similar running time. We also show that for any constant k ≥ 2, random instances of k-Sum can be solved using O(n^{k−0.5} polylog(n)) time and O(log n) space, without the assumption of random access to random bits. Underlying these results is an algorithm that determines whether two given lists of length n with integers bounded by a polynomial in n share a common value. Assuming random read-only access to random bits, we show that this problem can be solved using O(log n) space significantly faster than the trivial O(n^2)-time algorithm if no value occurs too often in the same list. @InProceedings{STOC17p198, author = {Nikhil Bansal and Shashwat Garg and Jesper Nederlof and Nikhil Vyas}, title = {Faster Space-Efficient Algorithms for Subset Sum and k-Sum}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {198--209}, doi = {}, year = {2017}, } |
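For contrast with the second paper's polynomial-space result, here is the standard meet-in-the-middle baseline for Subset Sum, which achieves O*(2^{n/2}) time only by storing 2^{n/2} partial sums; avoiding that space blowup is exactly what the paper addresses.

```python
from bisect import bisect_left
from itertools import combinations

def subset_sum_mitm(items, target):
    """Meet-in-the-middle Subset Sum: O*(2^(n/2)) time, O*(2^(n/2)) space."""
    half = len(items) // 2
    left, right = items[:half], items[half:]

    def all_sums(part):
        sums = []
        for k in range(len(part) + 1):
            sums.extend(sum(c) for c in combinations(part, k))
        return sums

    right_sums = sorted(all_sums(right))       # the exponential-space table
    for s in all_sums(left):
        i = bisect_left(right_sums, target - s)
        if i < len(right_sums) and right_sums[i] == target - s:
            return True
    return False

print(subset_sum_mitm([3, 34, 4, 12, 5, 2], 9))   # True (4 + 5)
print(subset_sum_mitm([3, 34, 4, 12, 5, 2], 30))  # False
```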
|
Barak, Boaz |
STOC '17: "Quantum Entanglement, Sum ..."
Quantum Entanglement, Sum of Squares, and the Log Rank Conjecture
Boaz Barak, Pravesh K. Kothari, and David Steurer (Harvard University, USA; Princeton University, USA; IAS, USA; Cornell University, USA) For every constant є > 0, we give an exp(Õ(√n))-time algorithm for the 1 vs 1−є Best Separable State (BSS) problem of distinguishing, given an n^2 × n^2 matrix corresponding to a quantum measurement, between the case that there is a separable (i.e., non-entangled) state ρ that accepts with probability 1, and the case that every separable state is accepted with probability at most 1−є. Equivalently, our algorithm takes the description of a linear subspace of F^{n^2} (where F can be either the real or complex field) and distinguishes between the case that the subspace contains a rank-one matrix, and the case that every rank-one matrix is at least є-far (in ℓ_2 distance) from the subspace. To the best of our knowledge, this is the first improvement over the brute-force exp(n)-time algorithm for this problem. Our algorithm is based on the sum-of-squares hierarchy and its analysis is inspired by Lovett’s proof (STOC ’14, JACM ’16) that the communication complexity of every rank-n Boolean matrix is bounded by Õ(√n). @InProceedings{STOC17p975, author = {Boaz Barak and Pravesh K. Kothari and David Steurer}, title = {Quantum Entanglement, Sum of Squares, and the Log Rank Conjecture}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {975--988}, doi = {}, year = {2017}, } |
|
Bavarian, Mohammad |
STOC '17: "Hardness Amplification for ..."
Hardness Amplification for Entangled Games via Anchoring
Mohammad Bavarian, Thomas Vidick, and Henry Yuen (Massachusetts Institute of Technology, USA; California Institute of Technology, USA; University of California at Berkeley, USA) We study the parallel repetition of one-round games involving players that can use quantum entanglement. A major open question in this area is whether parallel repetition reduces the entangled value of a game at an exponential rate — in other words, does an analogue of Raz’s parallel repetition theorem hold for games with players sharing quantum entanglement? Previous results only apply to special classes of games. We introduce a class of games we call anchored. We then introduce a simple transformation on games called anchoring, inspired in part by the Feige-Kilian transformation, that turns any (multiplayer) game into an anchored game. Unlike the Feige-Kilian transformation, our anchoring transformation is completeness preserving. We prove an exponential-decay parallel repetition theorem for anchored games that involve any number of entangled players. We also prove a threshold version of our parallel repetition theorem for anchored games. Together, our parallel repetition theorems and anchoring transformation provide the first hardness amplification techniques for general entangled games. We give an application to the games version of the Quantum PCP Conjecture. @InProceedings{STOC17p303, author = {Mohammad Bavarian and Thomas Vidick and Henry Yuen}, title = {Hardness Amplification for Entangled Games via Anchoring}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {303--316}, doi = {}, year = {2017}, } |
|
Ben-Aroya, Avraham |
STOC '17: "An Efficient Reduction from ..."
An Efficient Reduction from Two-Source to Non-malleable Extractors: Achieving Near-Logarithmic Min-entropy
Avraham Ben-Aroya, Dean Doron, and Amnon Ta-Shma (Tel Aviv University, Israel) The breakthrough result of Chattopadhyay and Zuckerman (2016) gives a reduction from the construction of explicit two-source extractors to the construction of explicit non-malleable extractors. However, even assuming the existence of optimal explicit non-malleable extractors only gives a two-source extractor (or a Ramsey graph) for poly(log n) entropy, rather than the optimal O(log n). In this paper we modify the construction to overcome the above barrier. Using the currently best explicit non-malleable extractors, we get explicit bipartite Ramsey graphs for sets of size 2^k, for k = O(log n · log log n). Any further improvement in the construction of non-malleable extractors would immediately yield a corresponding two-source extractor. Intuitively, Chattopadhyay and Zuckerman use an extractor as a sampler, and we observe that one could use a weaker object – a somewhere-random condenser with a small entropy gap and a very short seed. We also show how to explicitly construct this weaker object using the error reduction technique of Raz, Reingold and Vadhan (1999), and the constant-degree dispersers of Zuckerman (2006) that also work against extremely small tests. @InProceedings{STOC17p1185, author = {Avraham Ben-Aroya and Dean Doron and Amnon Ta-Shma}, title = {An Efficient Reduction from Two-Source to Non-malleable Extractors: Achieving Near-Logarithmic Min-entropy}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1185--1194}, doi = {}, year = {2017}, } |
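For orientation, here is the classical inner-product two-source extractor of Chor and Goldreich, which needs min-entropy above n/2 from each source; the point of the paper is to get far below that threshold via non-malleable extractors, and none of that machinery appears in this toy demo.

```python
import random

def ip_extract(x, y):
    """Inner product mod 2: the classical two-source extractor,
    near-unbiased whenever both sources have min-entropy above n/2."""
    return sum(a & b for a, b in zip(x, y)) % 2

n, k, trials = 16, 10, 50_000            # two independent (n, k) flat sources
rng = random.Random(0)
supp_x = [tuple(rng.randint(0, 1) for _ in range(n)) for _ in range(2 ** k)]
supp_y = [tuple(rng.randint(0, 1) for _ in range(n)) for _ in range(2 ** k)]
ones = sum(ip_extract(rng.choice(supp_x), rng.choice(supp_y))
           for _ in range(trials))
print("P(output = 1) ~", ones / trials)  # close to 1/2, since k > n/2
```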
|
Bouland, Adam |
STOC '17: "The Computational Complexity ..."
The Computational Complexity of Ball Permutations
Scott Aaronson, Adam Bouland, Greg Kuperberg, and Saeed Mehraban (University of Texas at Austin, USA; Massachusetts Institute of Technology, USA; University of California at Davis, USA) We define several models of computation based on permuting distinguishable particles (which we call balls) and characterize their computational complexity. In the quantum setting, we use the representation theory of the symmetric group to find variants of this model which are intermediate between BPP and DQC1 (the class of problems solvable with one clean qubit) and between DQC1 and BQP. Furthermore, we consider a restricted version of this model based on an exactly solvable scattering problem of particles moving on a line. Despite the simplicity of this model from the perspective of mathematical physics, we show that if we allow intermediate destructive measurements and specific input states, then the model cannot be efficiently simulated classically up to multiplicative error unless the polynomial hierarchy collapses. Finally, we define a classical version of this model in which one can probabilistically permute balls. We find this yields a complexity class which is intermediate between L and BPP, and that a nondeterministic version of this model is NP-complete. @InProceedings{STOC17p317, author = {Scott Aaronson and Adam Bouland and Greg Kuperberg and Saeed Mehraban}, title = {The Computational Complexity of Ball Permutations}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {317--327}, doi = {}, year = {2017}, } |
|
Brakerski, Zvika |
STOC '17: "Non-interactive Delegation ..."
Non-interactive Delegation and Batch NP Verification from Standard Computational Assumptions
Zvika Brakerski, Justin Holmgren, and Yael Kalai (Weizmann Institute of Science, Israel; Massachusetts Institute of Technology, USA; Microsoft Research, USA) We present an adaptive and non-interactive protocol for verifying arbitrary efficient computations in fixed polynomial time. Our protocol is computationally sound and can be based on any computational PIR scheme, which in turn can be based on standard polynomial-time cryptographic assumptions (e.g. the worst case hardness of polynomial-factor approximation of short-vector lattice problems). In our protocol, the verifier sets up a public key ahead of time, and this key can be used by any prover to prove arbitrary statements by simply sending a proof to the verifier. Verification is done using a secret verification key, and soundness relies on this key not being known to the prover. Our protocol further allows proving statements about computations of arbitrary RAM machines. Previous works either relied on knowledge assumptions, or could only offer non-adaptive two-message protocols (where the first message could not be re-used), and required either obfuscation-based assumptions or super-polynomial hardness assumptions. We show that our techniques can also be applied to construct a new type of (non-adaptive) 2-message argument for batch NP-statements. Specifically, we can simultaneously prove (with computational soundness) the membership of multiple instances in a given NP language, with communication complexity proportional to the length of a single witness. @InProceedings{STOC17p474, author = {Zvika Brakerski and Justin Holmgren and Yael Kalai}, title = {Non-interactive Delegation and Batch NP Verification from Standard Computational Assumptions}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {474--482}, doi = {}, year = {2017}, } |
|
Braverman, Vladimir |
STOC '17: "Streaming Symmetric Norms ..."
Streaming Symmetric Norms via Measure Concentration
Jarosław Błasiok, Vladimir Braverman, Stephen R. Chestnut, Robert Krauthgamer, and Lin F. Yang (Harvard University, USA; Johns Hopkins University, USA; ETH Zurich, Switzerland; Weizmann Institute of Science, Israel) We characterize the streaming space complexity of every symmetric norm l (a norm on ℝ^n invariant under sign-flips and coordinate-permutations), by relating this space complexity to the measure-concentration characteristics of l. Specifically, we provide nearly matching upper and lower bounds on the space complexity of calculating a (1±є)-approximation to the norm of the stream, for every 0 < є ≤ 1/2. (The bounds match up to poly(є^{−1} log n) factors.) We further extend those bounds to any large approximation ratio D ≥ 1.1, showing that the decrease in space complexity is proportional to D^2, and that this factor is the best possible. All of the bounds depend on the median of l(x) when x is drawn uniformly from the ℓ_2 unit sphere. The same median governs many phenomena in high-dimensional spaces, such as large-deviation bounds and the critical dimension in Dvoretzky’s Theorem. The family of symmetric norms contains several well-studied norms, such as all ℓ_p norms, and indeed we provide a new explanation for the disparity in space complexity between p ≤ 2 and p > 2. In addition, we apply our general results to easily derive bounds for several norms that were not studied before in the streaming model, including the top-k norm and the k-support norm, which was recently employed for machine learning tasks. Overall, these results make progress on two outstanding problems in the area of sublinear algorithms (Problems 5 and 30 in http://sublinear.info). @InProceedings{STOC17p716, author = {Jarosław Błasiok and Vladimir Braverman and Stephen R. Chestnut and Robert Krauthgamer and Lin F. Yang}, title = {Streaming Symmetric Norms via Measure Concentration}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {716--729}, doi = {}, year = {2017}, } |
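The quantity driving all of the bounds is the median of the norm on the unit sphere. Here is a quick Monte Carlo estimate of it, using the standard fact that a normalized Gaussian vector is uniform on the sphere; sample counts are illustrative.

```python
import numpy as np

def median_on_sphere(norm, n, samples=100_000, seed=0):
    """Monte Carlo estimate of the median of norm(x) for x uniform on
    the l_2 unit sphere in R^n."""
    rng = np.random.default_rng(seed)
    g = rng.normal(size=(samples, n))
    x = g / np.linalg.norm(g, axis=1, keepdims=True)  # uniform on the sphere
    return np.median(norm(x))

n = 100
print("l_1:  ", median_on_sphere(lambda x: np.abs(x).sum(axis=1), n))  # ~ sqrt(2n/pi)
print("l_inf:", median_on_sphere(lambda x: np.abs(x).max(axis=1), n))  # ~ sqrt(2 ln n / n)
```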
|
Bubeck, Sébastien |
STOC '17: "Kernel-Based Methods for Bandit ..."
Kernel-Based Methods for Bandit Convex Optimization
Sébastien Bubeck, Yin Tat Lee, and Ronen Eldan (Microsoft Research, USA; Weizmann Institute of Science, Israel) We consider the adversarial convex bandit problem and we build the first poly(T)-time algorithm with poly(n)√T-regret for this problem. To do so we introduce three new ideas in the derivative-free optimization literature: (i) kernel methods, (ii) a generalization of Bernoulli convolutions, and (iii) a new annealing schedule for exponential weights (with increasing learning rate). The basic version of our algorithm achieves Õ(n^{9.5}√T)-regret, and we show that a simple variant of this algorithm can be run in poly(n log(T))-time per step at the cost of an additional poly(n) T^{o(1)} factor in the regret. These results improve upon the Õ(n^{11}√T)-regret and exp(poly(T))-time result of the first two authors, and the log(T)^{poly(n)}√T-regret and log(T)^{poly(n)}-time result of Hazan and Li. Furthermore we conjecture that another variant of the algorithm could achieve Õ(n^{1.5}√T)-regret, and moreover that this regret is unimprovable (the current best lower bound being Ω(n√T), and it is achieved with linear functions). For the simpler situation of zeroth-order stochastic convex optimization this corresponds to the conjecture that the optimal query complexity is of order n^3/є^2. @InProceedings{STOC17p72, author = {Sébastien Bubeck and Yin Tat Lee and Ronen Eldan}, title = {Kernel-Based Methods for Bandit Convex Optimization}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {72--85}, doi = {}, year = {2017}, } STOC '17: "Local Max-Cut in Smoothed ..." Local Max-Cut in Smoothed Polynomial Time Omer Angel, Sébastien Bubeck, Yuval Peres, and Fan Wei (University of British Columbia, Canada; Microsoft Research, USA; Stanford University, USA) In 1988, Johnson, Papadimitriou and Yannakakis wrote that “Practically all the empirical evidence would lead us to conclude that finding locally optimal solutions is much easier than solving NP-hard problems”. Since then the empirical evidence has continued to amass, but formal proofs of this phenomenon have remained elusive. A canonical (and indeed complete) example is the local max-cut problem, for which no polynomial time method is known. In a breakthrough paper, Etscheid and Röglin proved that the smoothed complexity of local max-cut is quasi-polynomial, i.e., if arbitrary bounded weights are randomly perturbed, a local maximum can be found in φ · n^{O(log n)} steps where φ is an upper bound on the random edge weight density. In this paper we prove smoothed polynomial complexity for local max-cut, thus confirming that finding local optima for max-cut is much easier than solving it. @InProceedings{STOC17p429, author = {Omer Angel and Sébastien Bubeck and Yuval Peres and Fan Wei}, title = {Local Max-Cut in Smoothed Polynomial Time}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {429--437}, doi = {}, year = {2017}, } |
|
Bullins, Brian |
STOC '17: "Finding Approximate Local ..."
Finding Approximate Local Minima Faster than Gradient Descent
Naman Agarwal, Zeyuan Allen-Zhu, Brian Bullins, Elad Hazan, and Tengyu Ma (Princeton University, USA; IAS, USA) We design a non-convex second-order optimization algorithm that is guaranteed to return an approximate local minimum in time which scales linearly in the underlying dimension and the number of training examples. The time complexity of our algorithm to find an approximate local minimum is even faster than that of gradient descent to find a critical point. Our algorithm applies to a general class of optimization problems including training a neural network and other non-convex objectives arising in machine learning. @InProceedings{STOC17p1195, author = {Naman Agarwal and Zeyuan Allen-Zhu and Brian Bullins and Elad Hazan and Tengyu Ma}, title = {Finding Approximate Local Minima Faster than Gradient Descent}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1195--1199}, doi = {}, year = {2017}, } |
|
Błasiok, Jarosław |
STOC '17: "Streaming Symmetric Norms ..."
Streaming Symmetric Norms via Measure Concentration
Jarosław Błasiok, Vladimir Braverman, Stephen R. Chestnut, Robert Krauthgamer, and Lin F. Yang (Harvard University, USA; Johns Hopkins University, USA; ETH Zurich, Switzerland; Weizmann Institute of Science, Israel) We characterize the streaming space complexity of every symmetric norm l (a norm on ℝ^n invariant under sign-flips and coordinate-permutations), by relating this space complexity to the measure-concentration characteristics of l. Specifically, we provide nearly matching upper and lower bounds on the space complexity of calculating a (1±є)-approximation to the norm of the stream, for every 0 < є ≤ 1/2. (The bounds match up to poly(є^{−1} log n) factors.) We further extend those bounds to any large approximation ratio D ≥ 1.1, showing that the decrease in space complexity is proportional to D^2, and that this factor is the best possible. All of the bounds depend on the median of l(x) when x is drawn uniformly from the ℓ_2 unit sphere. The same median governs many phenomena in high-dimensional spaces, such as large-deviation bounds and the critical dimension in Dvoretzky’s Theorem. The family of symmetric norms contains several well-studied norms, such as all ℓ_p norms, and indeed we provide a new explanation for the disparity in space complexity between p ≤ 2 and p > 2. In addition, we apply our general results to easily derive bounds for several norms that were not studied before in the streaming model, including the top-k norm and the k-support norm, which was recently employed for machine learning tasks. Overall, these results make progress on two outstanding problems in the area of sublinear algorithms (Problems 5 and 30 in http://sublinear.info). @InProceedings{STOC17p716, author = {Jarosław Błasiok and Vladimir Braverman and Stephen R. Chestnut and Robert Krauthgamer and Lin F. Yang}, title = {Streaming Symmetric Norms via Measure Concentration}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {716--729}, doi = {}, year = {2017}, } |
|
Cai, Jin-Yi |
STOC '17: "Holographic Algorithm with ..."
Holographic Algorithm with Matchgates Is Universal for Planar #CSP over Boolean Domain
Jin-Yi Cai and Zhiguo Fu (University of Wisconsin-Madison, USA; Jilin University, China) We prove a complexity classification theorem that classifies all counting constraint satisfaction problems (#CSP) over Boolean variables into exactly three classes: (1) Polynomial-time solvable; (2) #P-hard for general instances, but solvable in polynomial-time over planar structures; and (3) #P-hard over planar structures. The classification applies to all finite sets of complex-valued, not necessarily symmetric, constraint functions on Boolean variables. It is shown that Valiant's holographic algorithm with matchgates is a universal strategy for all problems in class (2). @InProceedings{STOC17p842, author = {Jin-Yi Cai and Zhiguo Fu}, title = {Holographic Algorithm with Matchgates Is Universal for Planar #CSP over Boolean Domain}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {842--855}, doi = {}, year = {2017}, } |
|
Cai, Yang |
STOC '17: "Simple Mechanisms for Subadditive ..."
Simple Mechanisms for Subadditive Buyers via Duality
Yang Cai and Mingfei Zhao (McGill University, Canada) We provide simple and approximately revenue-optimal mechanisms in the multi-item multi-bidder settings. We unify and improve all previous results, as well as generalize the results to broader cases. In particular, we prove that the better of the following two simple, deterministic and Dominant Strategy Incentive Compatible mechanisms, a sequential posted price mechanism or an anonymous sequential posted price mechanism with entry fee, achieves a constant fraction of the optimal revenue among all randomized, Bayesian Incentive Compatible mechanisms, when buyers’ valuations are XOS over independent items. If the buyers’ valuations are subadditive over independent items, the approximation factor degrades to O(log m), where m is the number of items. We obtain our results by first extending the Cai-Devanur-Weinberg duality framework to derive an effective benchmark of the optimal revenue for subadditive bidders, and then analyzing this upper bound with new techniques. @InProceedings{STOC17p170, author = {Yang Cai and Mingfei Zhao}, title = {Simple Mechanisms for Subadditive Buyers via Duality}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {170--183}, doi = {}, year = {2017}, } |
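A minimal sketch of the first of the two simple mechanisms, specialized to additive buyers (a special case of XOS). The prices and arrival order are illustrative inputs, not the paper's tuned choices derived from the duality framework.

```python
import random

def sequential_posted_price(prices, values):
    """Sequential posted price mechanism: buyers arrive in a fixed order
    and each buys every still-available item whose posted price is below
    the buyer's value (additive buyers take all profitable items)."""
    available = set(range(len(prices)))
    revenue = 0.0
    for buyer_values in values:                 # fixed arrival order
        take = [j for j in available if buyer_values[j] > prices[j]]
        for j in take:
            available.remove(j)
            revenue += prices[j]
    return revenue

rng = random.Random(0)
prices = [0.5, 0.5, 0.5]
values = [[rng.random() for _ in prices] for _ in range(4)]  # 4 buyers, 3 items
print("revenue:", sequential_posted_price(prices, values))
```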
|
Calude, Cristian S. |
STOC '17: "Deciding Parity Games in Quasipolynomial ..."
Deciding Parity Games in Quasipolynomial Time
Cristian S. Calude, Sanjay Jain, Bakhadyr Khoussainov, Wei Li, and Frank Stephan (University of Auckland, New Zealand; National University of Singapore, Singapore) It is shown that the parity game can be solved in quasipolynomial time. The parameterised parity game – with n nodes and m distinct values (aka colours or priorities) – is proven to be in the class of fixed parameter tractable (FPT) problems when parameterised over m. Both results improve known bounds, from runtime n^{O(√n)} to O(n^{log(m)+6}) and from an XP-algorithm with runtime O(n^{Θ(m)}) for fixed parameter m to an FPT-algorithm with runtime O(n^5) + g(m), for some function g depending on m only. As an application it is proven that coloured Muller games with n nodes and m colours can be decided in time O((m^m · n)^5); it is also shown that this bound cannot be improved to O((2^m · n)^c), for any c, unless FPT = W[1]. @InProceedings{STOC17p252, author = {Cristian S. Calude and Sanjay Jain and Bakhadyr Khoussainov and Wei Li and Frank Stephan}, title = {Deciding Parity Games in Quasipolynomial Time}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {252--263}, doi = {}, year = {2017}, } |
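For context on what the quasipolynomial bound improves, here is a sketch of the classical recursive algorithm of Zielonka, which takes exponential time in the worst case; this is background material, not the paper's algorithm. Conventions assumed: player 0 wins a play iff the highest priority seen infinitely often is even, and every node has an outgoing edge.

```python
def attractor(nodes, edges, owner, target, p):
    """Nodes from which player p can force the play into `target`."""
    attr = set(target)
    changed = True
    while changed:
        changed = False
        for v in nodes - attr:
            succs = [w for (u, w) in edges if u == v]
            if (owner[v] == p and any(w in attr for w in succs)) or \
               (owner[v] != p and succs and all(w in attr for w in succs)):
                attr.add(v)
                changed = True
    return attr

def zielonka(nodes, edges, priority, owner):
    """Returns (win0, win1): the winning regions of the two players."""
    if not nodes:
        return set(), set()
    d = max(priority[v] for v in nodes)
    p = d % 2                              # the player favoured by priority d
    A = attractor(nodes, edges, owner, {v for v in nodes if priority[v] == d}, p)
    rest = nodes - A
    sub = [(u, w) for (u, w) in edges if u in rest and w in rest]
    wins = zielonka(rest, sub, priority, owner)
    if not wins[1 - p]:                    # opponent wins nothing in the subgame
        return (nodes, set()) if p == 0 else (set(), nodes)
    B = attractor(nodes, edges, owner, wins[1 - p], 1 - p)
    rest = nodes - B
    sub = [(u, w) for (u, w) in edges if u in rest and w in rest]
    w0, w1 = zielonka(rest, sub, priority, owner)
    return (w0, w1 | B) if p == 0 else (w0 | B, w1)

# Two-node cycle with priorities 2 and 1: player 0 wins everywhere.
print(zielonka({0, 1}, [(0, 1), (1, 0)], {0: 2, 1: 1}, {0: 0, 1: 1}))
```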
|
Canetti, Ran |
STOC '17: "Equivocating Yao: Constant-Round ..."
Equivocating Yao: Constant-Round Adaptively Secure Multiparty Computation in the Plain Model
Ran Canetti, Oxana Poburinnaya, and Muthuramakrishnan Venkitasubramaniam (Boston University, USA; Tel Aviv University, Israel; University of Rochester, USA) Yao's circuit garbling scheme is one of the basic building blocks of cryptographic protocol design. Originally designed to enable two-message, two-party secure computation, the scheme has been extended in many ways and has innumerable applications. Still, a basic question has remained open throughout the years: Can the scheme be extended to guarantee security in the face of an adversary that corrupts both parties, adaptively, as the computation proceeds? We provide a positive answer to this question. We define a new type of encryption, called functionally equivocal encryption (FEE), and show that when Yao's scheme is implemented with an FEE as the underlying encryption mechanism, it becomes secure against such adaptive adversaries. We then show how to implement FEE from any one-way function. Combining our scheme with non-committing encryption, we obtain the first two-message, two-party computation protocol, and the first constant-round multiparty computation protocol, in the plain model, that are secure against semi-honest adversaries who can adaptively corrupt all parties. A number of extensions and applications are described within. @InProceedings{STOC17p497, author = {Ran Canetti and Oxana Poburinnaya and Muthuramakrishnan Venkitasubramaniam}, title = {Equivocating Yao: Constant-Round Adaptively Secure Multiparty Computation in the Plain Model}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {497--509}, doi = {}, year = {2017}, } |
|
Cevher, Volkan |
STOC '17: "An Adaptive Sublinear-Time ..."
An Adaptive Sublinear-Time Block Sparse Fourier Transform
Volkan Cevher, Michael Kapralov, Jonathan Scarlett, and Amir Zandieh (EPFL, Switzerland) The problem of approximately computing the k dominant Fourier coefficients of a vector X quickly, and using few samples in time domain, is known as the Sparse Fourier Transform (sparse FFT) problem. A long line of work on the sparse FFT has resulted in algorithms with O(k log n log(n/k)) runtime [Hassanieh et al., STOC’12] and O(k log n) sample complexity [Indyk et al., FOCS’14]. This paper revisits the sparse FFT problem with the added twist that the sparse coefficients approximately obey a (k_0, k_1)-block sparse model. In this model, signal frequencies are clustered in k_0 intervals with width k_1 in Fourier space, and k = k_0·k_1 is the total sparsity. Our main result is the first sparse FFT algorithm for (k_0, k_1)-block sparse signals with a sample complexity of O*(k_0 k_1 + k_0 log(1+k_0) log n) at constant signal-to-noise ratios, and sublinear runtime. Our algorithm crucially uses adaptivity to achieve the improved sample complexity bound, and we provide a lower bound showing that this is essential in the Fourier setting: Any non-adaptive algorithm must use Ω(k_0 k_1 log(n/(k_0 k_1))) samples for the (k_0, k_1)-block sparse model, ruling out improvements over the vanilla sparsity assumption. Our main technical innovation for adaptivity is a new randomized energy-based importance sampling technique that may be of independent interest. @InProceedings{STOC17p702, author = {Volkan Cevher and Michael Kapralov and Jonathan Scarlett and Amir Zandieh}, title = {An Adaptive Sublinear-Time Block Sparse Fourier Transform}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {702--715}, doi = {}, year = {2017}, } |
|
Chakrabarty, Deeparnab |
STOC '17: "Subquadratic Submodular Function ..."
Subquadratic Submodular Function Minimization
Deeparnab Chakrabarty, Yin Tat Lee, Aaron Sidford, and Sam Chiu-wai Wong (Dartmouth College, USA; Microsoft Research, USA; Stanford University, USA; University of California at Berkeley, USA) Submodular function minimization (SFM) is a fundamental discrete optimization problem which generalizes many well-known problems, has applications in various fields, and can be solved in polynomial time. Owing to applications in computer vision and machine learning, fast SFM algorithms are highly desirable. The current fastest algorithms [Lee, Sidford, Wong, 2015] run in O(n^2 log nM · EO + n^3 log^{O(1)} nM) time and O(n^3 log^2 n · EO + n^4 log^{O(1)} n) time respectively, where M is the largest absolute value of the function (assuming the range is integers) and EO is the time taken to evaluate the function on any set. Although the best known lower bound on the query complexity is only Ω(n) [Harvey, 2008], the main contributions of this paper are subquadratic SFM algorithms. For integer-valued submodular functions, we give an SFM algorithm which runs in O(nM^3 log n · EO) time, giving the first nearly linear time algorithm in any known regime. For real-valued submodular functions with range in [−1,1], we give an algorithm which in Õ(n^{5/3} · EO/ε^2) time returns an ε-additive approximate solution. At the heart of it, our algorithms are projected stochastic subgradient descent methods on the Lovász extension of submodular functions where we crucially exploit submodularity and data structures to obtain fast, i.e. sublinear time, subgradient updates. The latter is crucial for beating the n^2 bound – we show that algorithms which access only subgradients of the Lovász extension – and these include the empirically fast Fujishige-Wolfe heuristic [Fujishige, 1980; Wolfe, 1976] – cannot run in subquadratic time. @InProceedings{STOC17p1220, author = {Deeparnab Chakrabarty and Yin Tat Lee and Aaron Sidford and Sam Chiu-wai Wong}, title = {Subquadratic Submodular Function Minimization}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1220--1231}, doi = {}, year = {2017}, } |
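The projected subgradient method on the Lovász extension is concrete enough to sketch: sorting the coordinates of x in decreasing order and taking marginal values of f along the induced chain of sets gives a subgradient of the Lovász extension, and descent over [0,1]^n plus thresholding approximately minimizes f. This is a minimal, unoptimized sketch of that classical method on a toy cut function, without the paper's data structures for sublinear-time subgradient updates.

def lovasz_subgradient(f, x):
    """Subgradient of the Lovasz extension of f at x in [0,1]^n:
    sort coordinates by decreasing x-value; g[i] is the marginal value
    of adding i along the resulting chain of sets."""
    n = len(x)
    order = sorted(range(n), key=lambda i: -x[i])
    g, prefix, prev = [0.0] * n, [], f(frozenset())
    for i in order:
        prefix.append(i)
        cur = f(frozenset(prefix))
        g[i] = cur - prev
        prev = cur
    return g

def sfm_subgradient_descent(f, n, steps=2000, eta=0.05):
    """Approximate submodular minimization: projected subgradient descent
    on the Lovasz extension, tracking the best thresholded set seen."""
    x = [0.5] * n
    best_set, best_val = frozenset(), f(frozenset())
    for _ in range(steps):
        g = lovasz_subgradient(f, x)
        x = [min(1.0, max(0.0, xi - eta * gi)) for xi, gi in zip(x, g)]
        S = frozenset(i for i in range(n) if x[i] >= 0.5)
        if f(S) < best_val:
            best_set, best_val = S, f(S)
    return best_set, best_val

# Toy submodular function: cut function of a path graph on 5 vertices.
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
cut = lambda S: sum((u in S) != (v in S) for u, v in edges)
print(sfm_subgradient_descent(cut, 5))  # minimum cut value 0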
|
Chang, Yi-Jun |
STOC '17: "Exponential Separations in ..."
Exponential Separations in the Energy Complexity of Leader Election
Yi-Jun Chang, Tsvi Kopelowitz, Seth Pettie, Ruosong Wang, and Wei Zhan (University of Michigan, USA; Tsinghua University, China) Energy is often the most constrained resource for battery-powered wireless devices and the lion’s share of energy is often spent on transceiver usage (sending/receiving packets), not on computation. In this paper we study the energy complexity of Leader Election and Approximate Counting in several models of wireless radio networks. It turns out that energy complexity is very sensitive to whether the devices can generate random bits and their ability to detect collisions. We consider four collision-detection models: Strong-CD (in which transmitters and listeners detect collisions), Sender-CD and Receiver-CD (in which only transmitters or only listeners detect collisions), and No-CD (in which no one detects collisions). The take-away message of our results is quite surprising. For randomized Leader Election algorithms, there is an exponential gap between the energy complexity of Sender-CD and Receiver-CD: No-CD = Sender-CD ≫ Receiver-CD = Strong-CD, and for deterministic Leader Election algorithms, there is another exponential gap in energy complexity, but in the reverse direction: No-CD = Receiver-CD ≫ Sender-CD = Strong-CD. In particular, the randomized energy complexity of Leader Election is Θ(log* n) in Sender-CD but Θ(log(log* n)) in Receiver-CD, where n is the (unknown) number of devices. Its deterministic complexity is Θ(log N) in Receiver-CD but Θ(log log N) in Sender-CD, where N is the (known) size of the devices’ ID space. There is a tradeoff between time and energy. We give a new upper bound on the time-energy tradeoff curve for randomized Leader Election and Approximate Counting. A critical component of this algorithm is a new deterministic Leader Election algorithm for dense instances, when n = Θ(N), with inverse-Ackermann-type (O(α(N))) energy complexity. @InProceedings{STOC17p771, author = {Yi-Jun Chang and Tsvi Kopelowitz and Seth Pettie and Ruosong Wang and Wei Zhan}, title = {Exponential Separations in the Energy Complexity of Leader Election}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {771--783}, doi = {}, year = {2017}, } |
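For intuition about what "energy" measures, here is a toy simulation of a classical tournament-style protocol with full collision detection (illustrative only; in this naive protocol every device stays awake every round, so energy equals time, which is exactly the waste the paper's algorithms avoid).

import random

def leader_election_strong_cd(n, rng):
    """Toy tournament in the Strong-CD model: each round, every active
    device transmits with probability 1/2; if exactly one transmits it
    becomes the leader, and on a collision only transmitters survive."""
    active = list(range(n))
    rounds = 0
    while True:
        rounds += 1
        transmitters = [i for i in active if rng.random() < 0.5]
        if len(transmitters) == 1:
            return transmitters[0], rounds
        if len(transmitters) >= 2:   # collision: only transmitters survive
            active = transmitters
        # silence: repeat the round with the same active set

rng = random.Random(1)
print(leader_election_strong_cd(1000, rng))  # (leader id, ~log n rounds)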
|
Charikar, Moses |
STOC '17: "Learning from Untrusted Data ..."
Learning from Untrusted Data
Moses Charikar, Jacob Steinhardt, and Gregory Valiant (Stanford University, USA) The vast majority of theoretical results in machine learning and statistics assume that the training data is a reliable reflection of the phenomena to be learned. Similarly, most learning techniques used in practice are brittle to the presence of large amounts of biased or malicious data. Motivated by this, we consider two frameworks for studying estimation, learning, and optimization in the presence of significant fractions of arbitrary data. The first framework, list-decodable learning, asks whether it is possible to return a list of answers such that at least one is accurate. For example, given a dataset of n points for which an unknown subset of α n points are drawn from a distribution of interest, and no assumptions are made about the remaining (1−α)n points, is it possible to return a list of poly(1/α) answers? The second framework, which we term the semi-verified model, asks whether a small dataset of trusted data (drawn from the distribution in question) can be used to extract accurate information from a much larger but untrusted dataset (of which only an α-fraction is drawn from the distribution). We show strong positive results in both settings, and provide an algorithm for robust learning in a very general stochastic optimization setting. This result has immediate implications for robustly estimating the mean of distributions with bounded second moments, robustly learning mixtures of such distributions, and robustly finding planted partitions in random graphs in which significant portions of the graph have been perturbed by an adversary. @InProceedings{STOC17p47, author = {Moses Charikar and Jacob Steinhardt and Gregory Valiant}, title = {Learning from Untrusted Data}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {47--60}, doi = {}, year = {2017}, } |
|
Chattopadhyay, Eshan |
STOC '17: "Non-malleable Codes and Extractors ..."
Non-malleable Codes and Extractors for Small-Depth Circuits, and Affine Functions
Eshan Chattopadhyay and Xin Li (IAS, USA; Johns Hopkins University, USA) Non-malleable codes were introduced by Dziembowski, Pietrzak and Wichs as an elegant relaxation of error correcting codes, where the motivation is to handle more general forms of tampering while still providing meaningful guarantees. This has led to many elegant constructions and applications in cryptography. However, most works so far only studied tampering in the split-state model where different parts of the codeword are tampered independently, and thus do not apply to many other natural classes of tampering functions. The only exceptions are the work of Agrawal et al., which studied non-malleable codes against bit permutation composed with bit-wise tampering, and the works of Faust et al. and Ball et al., which studied non-malleable codes against local functions. However, in both cases each tampered bit only depends on a subset of input bits. In this work, we study the problem of constructing non-malleable codes against more general tampering functions that act on the entire codeword. We give the first efficient constructions of non-malleable codes against AC^0 tampering functions and affine tampering functions. These are the first explicit non-malleable codes against tampering functions where each tampered bit can depend on all input bits. We also give efficient non-malleable codes against t-local functions for t = o(√n), where a t-local function has the property that any output bit depends on at most t input bits. In the case of deterministic decoders, this improves upon the results of Ball et al., which can handle t ≤ n^{1/4}. All our results on non-malleable codes are obtained by using the connection between non-malleable codes and seedless non-malleable extractors discovered by Cheraghchi and Guruswami. Therefore, we also give the first efficient constructions of seedless non-malleable extractors against AC^0 tampering functions, t-local tampering functions for t = o(√n), and affine tampering functions. To derive our results on non-malleable codes, we design efficient algorithms to almost uniformly sample from the pre-image of any given output of our non-malleable extractor. @InProceedings{STOC17p1171, author = {Eshan Chattopadhyay and Xin Li}, title = {Non-malleable Codes and Extractors for Small-Depth Circuits, and Affine Functions}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1171--1184}, doi = {}, year = {2017}, } |
|
Chawla, Shuchi |
STOC '17: "Stability of Service under ..."
Stability of Service under Time-of-Use Pricing
Shuchi Chawla, Nikhil R. Devanur, Alexander E. Holroyd, Anna R. Karlin, James B. Martin, and Balasubramanian Sivan (University of Wisconsin-Madison, USA; Microsoft Research, USA; University of Washington, USA; University of Oxford, UK; Google Research, USA) We consider time-of-use pricing as a technique for matching supply and demand of temporal resources with the goal of maximizing social welfare. Relevant examples include energy, computing resources on a cloud computing platform, and charging stations for electric vehicles, among many others. A client/job in this setting has a window of time during which he needs service, and a particular value for obtaining it. We assume a stochastic model for demand, where each job materializes with some probability via an independent Bernoulli trial. Given a per-time-unit pricing of resources, any realized job will first try to get served by the cheapest available resource in its window and, failing that, will try to find service at the next cheapest available resource, and so on. Thus, the natural stochastic fluctuations in demand have the potential to lead to cascading overload events. Our main result shows that setting prices so as to optimally handle the expected demand works well: with high probability, when the actual demand is instantiated, the system is stable and the expected value of the jobs served is very close to that of the optimal offline algorithm. @InProceedings{STOC17p184, author = {Shuchi Chawla and Nikhil R. Devanur and Alexander E. Holroyd and Anna R. Karlin and James B. Martin and Balasubramanian Sivan}, title = {Stability of Service under Time-of-Use Pricing}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {184--197}, doi = {}, year = {2017}, } |
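A small simulation makes the demand model concrete (illustrative only; the jobs, prices, and probabilities below are made up, and the paper's contribution is how to set the prices): each job materializes via an independent Bernoulli trial and greedily takes the cheapest still-available slot in its window.

import random

def simulate(jobs, prices, rng):
    """Each job materializes independently with its probability; a realized
    job takes the cheapest still-available time slot in its window, or goes
    unserved. jobs: list of (window_slots, probability, value)."""
    available = set(range(len(prices)))
    welfare = 0.0
    for window, p, value in jobs:
        if rng.random() < p:
            open_slots = [t for t in window if t in available]
            if open_slots:
                cheapest = min(open_slots, key=lambda t: prices[t])
                available.discard(cheapest)
                welfare += value
    return welfare

rng = random.Random(2)
prices = [1.0, 2.0, 3.0, 4.0]   # per-time-unit prices (made up)
jobs = [(range(0, 2), 0.9, 5.0), (range(1, 4), 0.5, 4.0), (range(0, 4), 0.8, 3.0)]
print(simulate(jobs, prices, rng))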
|
Chen, Xi |
STOC '17: "Addition Is Exponentially ..."
Addition Is Exponentially Harder Than Counting for Shallow Monotone Circuits
Xi Chen, Igor C. Oliveira, and Rocco A. Servedio (Columbia University, USA; Charles University in Prague, Czechia) Let Add_{k,N} denote the Boolean function which takes as input k strings of N bits each, representing k numbers a^{(1)},…,a^{(k)} in {0,1,…,2^N−1}, and outputs 1 if and only if a^{(1)} + ⋯ + a^{(k)} ≥ 2^N. Let MAJ_{t,n} denote a monotone unweighted threshold gate, i.e., the Boolean function which takes as input a single string x ∈ {0,1}^n and outputs 1 if and only if x_1 + ⋯ + x_n ≥ t. The function Add_{k,N} may be viewed as a monotone function that performs addition, and MAJ_{t,n} may be viewed as a monotone gate that performs counting. We refer to circuits that are composed of MAJ gates as monotone majority circuits. The main result of this paper is an exponential lower bound on the size of bounded-depth monotone majority circuits that compute Add_{k,N}. More precisely, we show that for any constant d ≥ 2, any depth-d monotone majority circuit that computes Add_{d,N} must have size 2^{Ω(N^{1/d})}. As Add_{k,N} can be computed by a single monotone weighted threshold gate (that uses exponentially large weights), our lower bound implies that constant-depth monotone majority circuits require exponential size to simulate monotone weighted threshold gates. This answers a question posed by Goldmann and Karpinski (STOC’93) and recently restated by Håstad (2010, 2014). We also show that our lower bound is essentially best possible, by constructing a depth-d, size 2^{O(N^{1/d})} monotone majority circuit for Add_{d,N}. As a corollary of our lower bound, we significantly strengthen a classical theorem in circuit complexity due to Ajtai and Gurevich (JACM’87). They exhibited a monotone function that is in AC^0 but requires super-polynomial size for any constant-depth monotone circuit composed of unbounded fan-in AND and OR gates. We describe a monotone function that is in depth-3 AC^0 but requires exponential size monotone circuits of any constant depth, even if the circuits are composed of MAJ gates. @InProceedings{STOC17p1232, author = {Xi Chen and Igor C. Oliveira and Rocco A. Servedio}, title = {Addition Is Exponentially Harder Than Counting for Shallow Monotone Circuits}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1232--1245}, doi = {}, year = {2017}, } STOC '17: "Beyond Talagrand Functions: ..." Beyond Talagrand Functions: New Lower Bounds for Testing Monotonicity and Unateness Xi Chen, Erik Waingarten, and Jinyu Xie (Columbia University, USA) We prove a lower bound of Ω(n^{1/3}) for the query complexity of any two-sided and adaptive algorithm that tests whether an unknown Boolean function f: {0,1}^n → {0,1} is monotone versus far from monotone. This improves the recent lower bound of Ω(n^{1/4}) for the same problem by Belovs and Blais (STOC’16). Our result builds on a new family of random Boolean functions that can be viewed as a two-level extension of Talagrand’s random DNFs. Beyond monotonicity we prove a lower bound of Ω(√n) for two-sided, adaptive algorithms and a lower bound of Ω(n) for one-sided, non-adaptive algorithms for testing unateness, a natural generalization of monotonicity. The latter matches the linear upper bounds by Khot and Shinkar (RANDOM’16) and by Baleshzar, Chakrabarty, Pallavoor, Raskhodnikova, and Seshadhri (2017). @InProceedings{STOC17p523, author = {Xi Chen and Erik Waingarten and Jinyu Xie}, title = {Beyond Talagrand Functions: New Lower Bounds for Testing Monotonicity and Unateness}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {523--536}, doi = {}, year = {2017}, } |
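The observation in the first abstract that Add_{k,N} is computed by a single monotone weighted threshold gate is easy to check directly: put weight 2^j on the j-th bit of each number, and the gate fires exactly when the sum reaches 2^N. A brute-force sanity check (illustrative only):

from itertools import product

def add_kN(bit_strings, N):
    """Add_{k,N}: 1 iff the k numbers encoded by the bit strings sum to >= 2**N."""
    return int(sum(int(b, 2) for b in bit_strings) >= 2 ** N)

def weighted_threshold(bit_strings, N):
    """One monotone weighted threshold gate with exponential weights:
    weight 2**j on bit j (bits written most-significant first)."""
    total = 0
    for b in bit_strings:
        for j, bit in enumerate(reversed(b)):
            total += (2 ** j) * int(bit)
    return int(total >= 2 ** N)

# Exhaustive check for k = 2 numbers of N = 3 bits each.
N, k = 3, 2
for bits in product(["".join(p) for p in product("01", repeat=N)], repeat=k):
    assert add_kN(bits, N) == weighted_threshold(bits, N)
print("Add_{2,3} equals a single weighted threshold gate on all inputs")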
|
Chestnut, Stephen R. |
STOC '17: "Streaming Symmetric Norms ..."
Streaming Symmetric Norms via Measure Concentration
Jarosław Błasiok, Vladimir Braverman, Stephen R. Chestnut, Robert Krauthgamer, and Lin F. Yang (Harvard University, USA; Johns Hopkins University, USA; ETH Zurich, Switzerland; Weizmann Institute of Science, Israel) We characterize the streaming space complexity of every symmetric norm l (a norm on ℝ^n invariant under sign-flips and coordinate-permutations), by relating this space complexity to the measure-concentration characteristics of l. Specifically, we provide nearly matching upper and lower bounds on the space complexity of calculating a (1±ε)-approximation to the norm of the stream, for every 0 < ε ≤ 1/2. (The bounds match up to ε^{−1} log n factors.) We further extend those bounds to any large approximation ratio D ≥ 1.1, showing that the decrease in space complexity is proportional to D^2, and that this factor is the best possible. All of the bounds depend on the median of l(x) when x is drawn uniformly from the l_2 unit sphere. The same median governs many phenomena in high-dimensional spaces, such as large-deviation bounds and the critical dimension in Dvoretzky’s Theorem. The family of symmetric norms contains several well-studied norms, such as all l_p norms, and indeed we provide a new explanation for the disparity in space complexity between p ≤ 2 and p > 2. In addition, we apply our general results to easily derive bounds for several norms that were not studied before in the streaming model, including the top-k norm and the k-support norm, which was recently employed for machine learning tasks. Overall, these results make progress on two outstanding problems in the area of sublinear algorithms (Problems 5 and 30 in http://sublinear.info). @InProceedings{STOC17p716, author = {Jarosław Błasiok and Vladimir Braverman and Stephen R. Chestnut and Robert Krauthgamer and Lin F. Yang}, title = {Streaming Symmetric Norms via Measure Concentration}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {716--729}, doi = {}, year = {2017}, } |
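The key quantity in these bounds, the median of l(x) for x uniform on the l_2 unit sphere, is easy to estimate numerically: draw a standard Gaussian vector and normalize it. A quick Monte Carlo sketch (illustrative, standard library only) for a few symmetric norms:

import random
import statistics

def random_unit_vector(n):
    """Uniform point on the l2 unit sphere: normalize a standard Gaussian."""
    g = [random.gauss(0, 1) for _ in range(n)]
    norm = sum(v * v for v in g) ** 0.5
    return [v / norm for v in g]

def median_of_norm(norm, n, trials=2000):
    """Monte Carlo estimate of the median of norm(x), x uniform on the sphere."""
    return statistics.median(norm(random_unit_vector(n)) for _ in range(trials))

random.seed(3)
n = 100
l1 = lambda x: sum(abs(v) for v in x)
linf = lambda x: max(abs(v) for v in x)
top_k = lambda x, k=10: sum(sorted((abs(v) for v in x), reverse=True)[:k])
print("median l1    :", median_of_norm(l1, n))    # ~ sqrt(2n/pi) ~ 8
print("median linf  :", median_of_norm(linf, n))  # ~ sqrt(2 ln n / n)
print("median top-10:", median_of_norm(top_k, n))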
|
Christiani, Tobias |
STOC '17: "Set Similarity Search Beyond ..."
Set Similarity Search Beyond MinHash
Tobias Christiani and Rasmus Pagh (IT University of Copenhagen, Denmark) We consider the problem of approximate set similarity search under Braun-Blanquet similarity B(x, y) = |x ∩ y| / max(|x|, |y|). The (b_1, b_2)-approximate Braun-Blanquet similarity search problem is to preprocess a collection of sets P such that, given a query set q, if there exists x ∈ P with B(q, x) ≥ b_1, then we can efficiently return x′ ∈ P with B(q, x′) > b_2. We present a simple data structure that solves this problem with space usage O(n^{1+ρ} log n + ∑_{x ∈ P} |x|) and query time O(|q| n^ρ log n) where n = |P| and ρ = log(1/b_1)/log(1/b_2). Making use of existing lower bounds for locality-sensitive hashing by O’Donnell et al. (TOCT 2014) we show that this value of ρ is tight across the parameter space, i.e., for every choice of constants 0 < b_2 < b_1 < 1. In the case where all sets have the same size our solution strictly improves upon the value of ρ that can be obtained through the use of state-of-the-art data-independent techniques in the Indyk-Motwani locality-sensitive hashing framework (STOC 1998) such as Broder’s MinHash (CCS 1997) for Jaccard similarity and Andoni et al.’s cross-polytope LSH (NIPS 2015) for cosine similarity. Surprisingly, even though our solution is data-independent, for a large part of the parameter space we outperform the currently best data-dependent method by Andoni and Razenshteyn (STOC 2015). @InProceedings{STOC17p1094, author = {Tobias Christiani and Rasmus Pagh}, title = {Set Similarity Search Beyond MinHash}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1094--1107}, doi = {}, year = {2017}, } |
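The exponent ρ = log(1/b_1)/log(1/b_2) can be compared directly against what MinHash gives. For sets of equal size s, Braun-Blanquet similarity b = |x ∩ y|/s corresponds to Jaccard similarity j = b/(2−b), and MinHash's collision probability equals the Jaccard similarity, so its exponent is log(1/j_1)/log(1/j_2). A small sketch (illustrative):

import math

def rho_this_paper(b1, b2):
    """Exponent of the paper's data structure (Braun-Blanquet similarity)."""
    return math.log(1 / b1) / math.log(1 / b2)

def rho_minhash(b1, b2):
    """Exponent of MinHash-based LSH when all sets have the same size:
    Braun-Blanquet b corresponds to Jaccard j = b / (2 - b)."""
    j1, j2 = b1 / (2 - b1), b2 / (2 - b2)
    return math.log(1 / j1) / math.log(1 / j2)

for b1, b2 in [(0.9, 0.5), (0.8, 0.4), (0.5, 0.1)]:
    print(b1, b2, round(rho_this_paper(b1, b2), 3), round(rho_minhash(b1, b2), 3))

On these parameters the first exponent is strictly smaller, matching the claimed improvement over MinHash for equal-size sets.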
|
Chuzhoy, Julia |
STOC '17: "New Hardness Results for Routing ..."
New Hardness Results for Routing on Disjoint Paths
Julia Chuzhoy, David H. K. Kim, and Rachit Nimavat (Toyota Technological Institute at Chicago, USA; University of Chicago, USA) In the classical Node-Disjoint Paths (NDP) problem, the input consists of an undirected n-vertex graph G, and a collection M={(s_1,t_1),…,(s_k,t_k)} of pairs of its vertices, called source-destination, or demand, pairs. The goal is to route the largest possible number of the demand pairs via node-disjoint paths. The best current approximation for the problem is achieved by a simple greedy algorithm, whose approximation factor is O(√n), while the best current negative result is an Ω(log^{1/2−δ} n)-hardness of approximation for any constant δ, under standard complexity assumptions. Even seemingly simple special cases of the problem are still poorly understood: when the input graph is a grid, the best current algorithm achieves an Õ(n^{1/4})-approximation, and when it is a general planar graph, the best current approximation ratio of an efficient algorithm is Õ(n^{9/19}). The best currently known lower bound for both these versions of the problem is APX-hardness. In this paper we prove that NDP is 2^{Ω(√log n)}-hard to approximate, unless all problems in NP have algorithms with running time n^{O(log n)}. Our result holds even when the underlying graph is a planar graph with maximum vertex degree 4, and all source vertices lie on the boundary of a single face (but the destination vertices may lie anywhere in the graph). We extend this result to the closely related Edge-Disjoint Paths problem, showing the same hardness of approximation ratio even for sub-cubic planar graphs with all sources lying on the boundary of a single face. @InProceedings{STOC17p86, author = {Julia Chuzhoy and David H. K. Kim and Rachit Nimavat}, title = {New Hardness Results for Routing on Disjoint Paths}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {86--99}, doi = {}, year = {2017}, } |
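The O(√n)-approximate greedy algorithm mentioned above is simple to state: repeatedly route the demand pair that currently has the shortest connecting path, then delete the used vertices. A minimal sketch with standard-library BFS (illustrative):

from collections import deque

def bfs_path(adj, alive, s, t):
    """Shortest s-t path using only vertices in 'alive', or None."""
    if s not in alive or t not in alive:
        return None
    parent, q = {s: None}, deque([s])
    while q:
        u = q.popleft()
        if u == t:
            path = []
            while u is not None:
                path.append(u)
                u = parent[u]
            return path[::-1]
        for v in adj[u]:
            if v in alive and v not in parent:
                parent[v] = u
                q.append(v)
    return None

def greedy_ndp(adj, pairs):
    """Greedy node-disjoint paths: always route the demand pair with the
    currently shortest connecting path, then remove its vertices."""
    alive, routed = set(adj), []
    while True:
        best = min((p for p in (bfs_path(adj, alive, s, t) for s, t in pairs)
                    if p is not None), key=len, default=None)
        if best is None:
            return routed
        routed.append(best)
        alive -= set(best)

# 4-cycle: the pairs (0,2) and (1,3) cannot both be routed disjointly.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(greedy_ndp(adj, [(0, 2), (1, 3)]))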
|
Cohen, Gil |
STOC '17: "Towards Optimal Two-Source ..."
Towards Optimal Two-Source Extractors and Ramsey Graphs
Gil Cohen (Princeton University, USA) The main contribution of this work is a construction of a two-source extractor for quasi-logarithmic min-entropy. That is, an extractor for two independent n-bit sources with min-entropy Õ(log n), which is optimal up to the poly(log log n) factor. A strong motivation for constructing two-source extractors for low entropy is for Ramsey graph constructions. Our two-source extractor readily yields a (log n)^{(log log log n)^{O(1)}}-Ramsey graph on n vertices. Although there has been exciting progress towards constructing O(log n)-Ramsey graphs in recent years, a line of work that this paper contributes to, it is not clear if current techniques can be pushed so as to match this bound. Interestingly, however, as an artifact of current techniques, one obtains strongly explicit Ramsey graphs, namely, graphs on n vertices where the existence of an edge connecting any pair of vertices can be determined in time poly(log n). On top of our strongly explicit construction, in this work, we consider algorithms that output the entire graph in poly(n)-time, and make progress towards matching the desired O(log n) bound in this setting. In our opinion, this is a natural setting in which Ramsey graphs constructions should be studied. The main technical novelty of this work lies in an improved construction of an independence-preserving merger (IPM), a variant of the well-studied notion of a merger, which was recently introduced by Cohen and Schulman. Our construction is based on a new connection to correlation breakers with advice. In fact, our IPM satisfies a stronger and more natural property than that required by the original definition, and we believe it may find further applications. @InProceedings{STOC17p1157, author = {Gil Cohen}, title = {Towards Optimal Two-Source Extractors and Ramsey Graphs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1157--1170}, doi = {}, year = {2017}, } |
|
Cohen, Michael B. |
STOC '17: "Almost-Linear-Time Algorithms ..."
Almost-Linear-Time Algorithms for Markov Chains and New Spectral Primitives for Directed Graphs
Michael B. Cohen, Jonathan Kelner, John Peebles, Richard Peng, Anup B. Rao, Aaron Sidford, and Adrian Vladu (Massachusetts Institute of Technology, USA; Georgia Institute of Technology, USA; Stanford University, USA) In this paper, we begin to address the longstanding algorithmic gap between general and reversible Markov chains. We develop directed analogues of several spectral graph-theoretic tools that had previously been available only in the undirected setting, and for which it was not clear that directed versions even existed. In particular, we provide a notion of approximation for directed graphs, prove sparsifiers under this notion always exist, and show how to construct them in almost linear time. Using this notion of approximation, we design the first almost-linear-time directed Laplacian system solver, and, by leveraging the recent framework of [Cohen-Kelner-Peebles-Peng-Sidford-Vladu, FOCS’16], we also obtain almost-linear-time algorithms for computing the stationary distribution of a Markov chain, computing expected commute times in a directed graph, and more. For each problem, our algorithms improve the previous best running times of O((nm^{3/4} + n^{2/3} m) log^{O(1)}(n κ ε^{−1})) to O((m + n·2^{O(√(log n log log n))}) log^{O(1)}(n κ ε^{−1})) where n is the number of vertices in the graph, m is the number of edges, κ is a natural condition number associated with the problem, and ε is the desired accuracy. We hope these results open the door for further studies into directed spectral graph theory, and that they will serve as a stepping stone for designing a new generation of fast algorithms for directed graphs. @InProceedings{STOC17p410, author = {Michael B. Cohen and Jonathan Kelner and John Peebles and Richard Peng and Anup B. Rao and Aaron Sidford and Adrian Vladu}, title = {Almost-Linear-Time Algorithms for Markov Chains and New Spectral Primitives for Directed Graphs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {410--419}, doi = {}, year = {2017}, } |
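As a point of reference for what these solvers compute, here is the naive dense baseline (illustrative only; power iteration can need many rounds for slowly mixing chains, which is part of what the paper's almost-linear-time machinery avoids):

def stationary_distribution(P, iters=5000):
    """Naive power iteration pi <- pi P for the stationary distribution of
    an ergodic Markov chain, given as a row-stochastic matrix (list of rows)."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# A small 3-state chain (made-up transition probabilities).
P = [[0.5, 0.5, 0.0],
     [0.1, 0.6, 0.3],
     [0.2, 0.3, 0.5]]
print([round(p, 4) for p in stationary_distribution(P)])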
|
Coja-Oghlan, Amin |
STOC '17: "Information-Theoretic Thresholds ..."
Information-Theoretic Thresholds from the Cavity Method
Amin Coja-Oghlan, Florent Krzakala, Will Perkins, and Lenka Zdeborova (Goethe University Frankfurt, Germany; CNRS, France; PSL Research University, France; ENS, France; UPMC, France; University of Birmingham, UK; CEA, France; University of Paris-Saclay, France) Vindicating a sophisticated but non-rigorous physics approach called the cavity method, we establish a formula for the mutual information in statistical inference problems induced by random graphs. This general result implies the conjecture on the information-theoretic threshold in the disassortative stochastic block model [Decelle et al.: Phys. Rev. E (2011)] and allows us to pinpoint the exact condensation phase transition in random constraint satisfaction problems such as random graph coloring, thereby proving a conjecture from [Krzakala et al.: PNAS (2007)]. As a further application we establish the formula for the mutual information in Low-Density Generator Matrix codes as conjectured in [Montanari: IEEE Transactions on Information Theory (2005)]. The proofs provide a conceptual underpinning of the replica symmetric variant of the cavity method, and we expect that the approach will find many future applications. @InProceedings{STOC17p146, author = {Amin Coja-Oghlan and Florent Krzakala and Will Perkins and Lenka Zdeborova}, title = {Information-Theoretic Thresholds from the Cavity Method}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {146--157}, doi = {}, year = {2017}, } |
|
Curticapean, Radu |
STOC '17: "Homomorphisms Are a Good Basis ..."
Homomorphisms Are a Good Basis for Counting Small Subgraphs
Radu Curticapean, Holger Dell, and Dániel Marx (Hungarian Academy of Sciences, Hungary; Saarland University, Germany) We introduce graph motif parameters, a class of graph parameters that depend only on the frequencies of constant-size induced subgraphs. Classical works by Lovász show that many interesting quantities have this form, including, for fixed graphs H, the number of H-copies (induced or not) in an input graph G, and the number of homomorphisms from H to G. We use the framework of graph motif parameters to obtain faster algorithms for counting subgraph copies of fixed graphs H in host graphs G. More precisely, for graphs H on k edges, we show how to count subgraph copies of H in time k^{O(k)} · n^{0.174k + o(k)} by a surprisingly simple algorithm. This improves upon previously known running times, such as O(n^{0.91k + c}) time for k-edge matchings or O(n^{0.46k + c}) time for k-cycles. Furthermore, we prove a general complexity dichotomy for evaluating graph motif parameters: Given a class C of such parameters, we consider the problem of evaluating f∈ C on input graphs G, parameterized by the number of induced subgraphs that f depends upon. For every recursively enumerable class C, we prove the above problem to be either FPT or #W[1]-hard, with an explicit dichotomy criterion. This allows us to recover known dichotomies for counting subgraphs, induced subgraphs, and homomorphisms in a uniform and simplified way, together with improved lower bounds. Finally, we extend graph motif parameters to colored subgraphs and prove a complexity trichotomy: For vertex-colored graphs H and G, where H is from a fixed class of graphs, we want to count color-preserving H-copies in G. We show that this problem is either polynomial-time solvable or FPT or #W[1]-hard, and that the FPT cases indeed need FPT time under reasonable assumptions. @InProceedings{STOC17p210, author = {Radu Curticapean and Holger Dell and Dániel Marx}, title = {Homomorphisms Are a Good Basis for Counting Small Subgraphs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {210--223}, doi = {}, year = {2017}, } |
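The simplest instance of the homomorphism basis is worth seeing: every homomorphism from the triangle K3 into a simple graph is automatically injective, hom(K3, G) equals trace(A^3) for the adjacency matrix A, and dividing by |Aut(K3)| = 6 counts triangle subgraphs. A brute-force sketch (illustrative):

def hom_count_K3(A):
    """Number of graph homomorphisms from the triangle K3 into G,
    which equals trace(A^3) = number of closed walks of length 3."""
    n = len(A)
    total = 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                total += A[i][j] * A[j][k] * A[k][i]
    return total

def triangle_count(A):
    """Subgraph copies of K3: divide the hom count by |Aut(K3)| = 6."""
    return hom_count_K3(A) // 6

# K4 has exactly 4 triangles.
K4 = [[0 if i == j else 1 for j in range(4)] for i in range(4)]
print(triangle_count(K4))  # 4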
|
Dagan, Yuval |
STOC '17: "Twenty (Simple) Questions ..."
Twenty (Simple) Questions
Yuval Dagan, Yuval Filmus, Ariel Gabizon, and Shay Moran (Technion, Israel; Zerocoin Electronic Coin, USA; University of California at San Diego, USA; Simons Institute for the Theory of Computing Berkeley, USA) A basic combinatorial interpretation of Shannon’s entropy function is via the “20 questions” game. This cooperative game is played by two players, Alice and Bob: Alice picks a distribution π over the numbers {1,…,n}, and announces it to Bob. She then chooses a number x according to π, and Bob attempts to identify x using as few Yes/No queries as possible, on average. An optimal strategy for the “20 questions” game is given by a Huffman code for π: Bob’s questions reveal the codeword for x bit by bit. This strategy finds x using fewer than H(π)+1 questions on average. However, the questions asked by Bob could be arbitrary. In this paper, we investigate the following question: *Are there restricted sets of questions that match the performance of Huffman codes, either exactly or approximately?* Our first main result shows that for every distribution π, Bob has a strategy that uses only questions of the form “x < c?” and “x = c?”, and uncovers x using at most H(π)+1 questions on average, matching the performance of Huffman codes in this sense. We also give a natural set of O(rn^{1/r}) questions that achieve a performance of at most H(π)+r, and show that Ω(rn^{1/r}) questions are required to achieve such a guarantee. Our second main result gives a set Q of 1.25^{n+o(n)} questions such that for every distribution π, Bob can implement an optimal strategy for π using only questions from Q. We also show that 1.25^{n−o(n)} questions are needed, for infinitely many n. If we allow a small slack of r over the optimal strategy, then roughly (rn)^{Θ(1/r)} questions are necessary and sufficient. @InProceedings{STOC17p9, author = {Yuval Dagan and Yuval Filmus and Ariel Gabizon and Shay Moran}, title = {Twenty (Simple) Questions}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {9--21}, doi = {}, year = {2017}, } |
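The Huffman baseline being matched is easy to reproduce and compare against H(π) (illustrative sketch; the paper's contribution is matching this cost with restricted question sets):

import heapq
import itertools
import math
import random

def huffman_expected_questions(pi):
    """Expected number of Yes/No questions (= expected codeword length)
    of a Huffman code for pi: each merge of weights p1, p2 adds p1 + p2
    to the total weighted depth."""
    tiebreak = itertools.count()
    heap = [(p, next(tiebreak)) for p in pi]
    heapq.heapify(heap)
    cost = 0.0
    while len(heap) > 1:
        p1, _ = heapq.heappop(heap)
        p2, _ = heapq.heappop(heap)
        cost += p1 + p2
        heapq.heappush(heap, (p1 + p2, next(tiebreak)))
    return cost

def entropy(pi):
    return -sum(p * math.log2(p) for p in pi if p > 0)

random.seed(5)
w = [random.random() for _ in range(20)]
pi = [x / sum(w) for x in w]
print(f"H(pi) = {entropy(pi):.3f}")
print(f"Huffman average questions = {huffman_expected_questions(pi):.3f}")
# The Huffman cost always lies in [H(pi), H(pi) + 1).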
|
Dahlgaard, Søren |
STOC '17: "Finding Even Cycles Faster ..."
Finding Even Cycles Faster via Capped k-Walks
Søren Dahlgaard, Mathias Bæk Tejs Knudsen, and Morten Stöckel (University of Copenhagen, Denmark) Finding cycles in graphs is a fundamental problem in algorithmic graph theory. In this paper, we consider the problem of finding and reporting a cycle of length 2k in an undirected graph G with n nodes and m edges for constant k≥ 2. A classic result by Bondy and Simonovits [J. Combinatorial Theory, 1974] implies that if m ≥ 100k n^{1+1/k}, then G contains a 2k-cycle, further implying that one needs to consider only graphs with m = O(n^{1+1/k}). Previously the best known algorithms were an O(n^2) algorithm due to Yuster and Zwick [J. Discrete Math 1997] as well as an O(m^{2−(1+⌈k/2⌉^{−1})/(k+1)}) algorithm by Alon et al. [Algorithmica 1997]. We present an algorithm that uses O(m^{2k/(k+1)}) time and finds a 2k-cycle if one exists. This bound is O(n^2) exactly when m = Θ(n^{1+1/k}). When finding 4-cycles our new bound coincides with Alon et al., while for every k > 2 our new bound yields a polynomial improvement in m. Yuster and Zwick noted that it is “plausible to conjecture that O(n^2) is the best possible bound in terms of n”. We show “conditional optimality”: if this hypothesis holds then our O(m^{2k/(k+1)}) algorithm is tight as well. Furthermore, a folklore reduction implies that no combinatorial algorithm can determine if a graph contains a 6-cycle in time O(m^{3/2−ε}) for any ε>0 unless Boolean matrix multiplication can be solved combinatorially in time O(n^{3−ε′}) for some ε′ > 0, which is widely believed to be false. Coupled with our main result, this gives tight bounds for finding 6-cycles combinatorially and also separates the complexity of finding 4- and 6-cycles giving evidence that the exponent of m in the running time should indeed increase with k. The key ingredient in our algorithm is a new notion of capped k-walks, which are walks of length k that visit only nodes according to a fixed ordering. Our main technical contribution is an involved analysis proving several properties of such walks which may be of independent interest. @InProceedings{STOC17p112, author = {Søren Dahlgaard and Mathias Bæk Tejs Knudsen and Morten Stöckel}, title = {Finding Even Cycles Faster via Capped k-Walks}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {112--120}, doi = {}, year = {2017}, } |
|
De, Anindya |
STOC '17: "Optimal Mean-Based Algorithms ..."
Optimal Mean-Based Algorithms for Trace Reconstruction
Anindya De, Ryan O'Donnell, and Rocco A. Servedio (Northwestern University, USA; Carnegie Mellon University, USA; Columbia University, USA) In the (deletion-channel) trace reconstruction problem, there is an unknown n-bit source string x. An algorithm is given access to independent “traces” of x, where a trace is formed by deleting each bit of x independently with probability δ. The goal of the algorithm is to recover x exactly (with high probability), while minimizing samples (number of traces) and running time. Previously, the best known algorithm for the trace reconstruction problem was due to Holenstein et al. [HMPW08]; it uses exp(O(n^{1/2})) samples and running time for any fixed 0 < δ < 1. It is also what we call a “mean-based algorithm”, meaning that it only uses the empirical means of the individual bits of the traces. Holenstein et al. also gave a lower bound, showing that any mean-based algorithm must use at least n^{Ω(log n)} samples. In this paper we improve both of these results, obtaining matching upper and lower bounds for mean-based trace reconstruction. For any constant deletion rate 0 < δ < 1, we give a mean-based algorithm that uses exp(O(n^{1/3})) time and traces; we also prove that any mean-based algorithm must use at least exp(Ω(n^{1/3})) traces. In fact, we obtain matching upper and lower bounds even for δ subconstant and ρ = 1−δ subconstant: when (log^3 n)/n ≪ δ ≤ 1/2 the bound is exp(Θ((δn)^{1/3})), and when 1/√n ≪ ρ ≤ 1/2 the bound is exp(Θ((n/ρ)^{1/3})). Our proofs involve estimates for the maxima of Littlewood polynomials on complex disks. We show that these techniques can also be used to perform trace reconstruction with random insertions and bit-flips in addition to deletions. We also find a surprising result: for deletion probabilities δ > 1/2, the presence of insertions can actually help with trace reconstruction. @InProceedings{STOC17p1047, author = {Anindya De and Ryan O'Donnell and Rocco A. Servedio}, title = {Optimal Mean-Based Algorithms for Trace Reconstruction}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1047--1056}, doi = {}, year = {2017}, } |
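"Mean-based" has a concrete meaning here: the algorithm may look only at the empirical mean of each bit position across traces. This sketch generates deletion-channel traces and that statistic (illustrative only; it does not implement the paper's Littlewood-polynomial reconstruction):

import random

def trace(x, delta, rng):
    """One pass of x through the deletion channel: each bit is deleted
    independently with probability delta."""
    return [b for b in x if rng.random() >= delta]

def mean_trace(x, delta, num_traces, rng):
    """Empirical mean of each trace position over many traces (short
    traces implicitly padded with 0), the only statistic a mean-based
    algorithm is allowed to use."""
    sums = [0.0] * len(x)
    for _ in range(num_traces):
        for j, b in enumerate(trace(x, delta, rng)):
            sums[j] += b
    return [s / num_traces for s in sums]

rng = random.Random(6)
x = [1, 0, 1, 1, 0, 0, 1, 0]
print([round(m, 2) for m in mean_trace(x, delta=0.3, num_traces=20000, rng=rng)])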
|
Dell, Holger |
STOC '17: "Homomorphisms Are a Good Basis ..."
Homomorphisms Are a Good Basis for Counting Small Subgraphs
Radu Curticapean, Holger Dell, and Dániel Marx (Hungarian Academy of Sciences, Hungary; Saarland University, Germany) We introduce graph motif parameters, a class of graph parameters that depend only on the frequencies of constant-size induced subgraphs. Classical works by Lovász show that many interesting quantities have this form, including, for fixed graphs H, the number of H-copies (induced or not) in an input graph G, and the number of homomorphisms from H to G. We use the framework of graph motif parameters to obtain faster algorithms for counting subgraph copies of fixed graphs H in host graphs G. More precisely, for graphs H on k edges, we show how to count subgraph copies of H in time k^{O(k)} · n^{0.174k + o(k)} by a surprisingly simple algorithm. This improves upon previously known running times, such as O(n^{0.91k + c}) time for k-edge matchings or O(n^{0.46k + c}) time for k-cycles. Furthermore, we prove a general complexity dichotomy for evaluating graph motif parameters: Given a class C of such parameters, we consider the problem of evaluating f∈ C on input graphs G, parameterized by the number of induced subgraphs that f depends upon. For every recursively enumerable class C, we prove the above problem to be either FPT or #W[1]-hard, with an explicit dichotomy criterion. This allows us to recover known dichotomies for counting subgraphs, induced subgraphs, and homomorphisms in a uniform and simplified way, together with improved lower bounds. Finally, we extend graph motif parameters to colored subgraphs and prove a complexity trichotomy: For vertex-colored graphs H and G, where H is from a fixed class of graphs, we want to count color-preserving H-copies in G. We show that this problem is either polynomial-time solvable or FPT or #W[1]-hard, and that the FPT cases indeed need FPT time under reasonable assumptions. @InProceedings{STOC17p210, author = {Radu Curticapean and Holger Dell and Dániel Marx}, title = {Homomorphisms Are a Good Basis for Counting Small Subgraphs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {210--223}, doi = {}, year = {2017}, } |
|
Devanur, Nikhil R. |
STOC '17: "Stability of Service under ..."
Stability of Service under Time-of-Use Pricing
Shuchi Chawla, Nikhil R. Devanur, Alexander E. Holroyd, Anna R. Karlin, James B. Martin, and Balasubramanian Sivan (University of Wisconsin-Madison, USA; Microsoft Research, USA; University of Washington, USA; University of Oxford, UK; Google Research, USA) We consider time-of-use pricing as a technique for matching supply and demand of temporal resources with the goal of maximizing social welfare. Relevant examples include energy, computing resources on a cloud computing platform, and charging stations for electric vehicles, among many others. A client/job in this setting has a window of time during which he needs service, and a particular value for obtaining it. We assume a stochastic model for demand, where each job materializes with some probability via an independent Bernoulli trial. Given a per-time-unit pricing of resources, any realized job will first try to get served by the cheapest available resource in its window and, failing that, will try to find service at the next cheapest available resource, and so on. Thus, the natural stochastic fluctuations in demand have the potential to lead to cascading overload events. Our main result shows that setting prices so as to optimally handle the expected demand works well: with high probability, when the actual demand is instantiated, the system is stable and the expected value of the jobs served is very close to that of the optimal offline algorithm. @InProceedings{STOC17p184, author = {Shuchi Chawla and Nikhil R. Devanur and Alexander E. Holroyd and Anna R. Karlin and James B. Martin and Balasubramanian Sivan}, title = {Stability of Service under Time-of-Use Pricing}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {184--197}, doi = {}, year = {2017}, } |
|
Doron, Dean |
STOC '17: "An Efficient Reduction from ..."
An Efficient Reduction from Two-Source to Non-malleable Extractors: Achieving Near-Logarithmic Min-entropy
Avraham Ben-Aroya, Dean Doron, and Amnon Ta-Shma (Tel Aviv University, Israel) The breakthrough result of Chattopadhyay and Zuckerman (2016) gives a reduction from the construction of explicit two-source extractors to the construction of explicit non-malleable extractors. However, even assuming the existence of optimal explicit non-malleable extractors only gives a two-source extractor (or a Ramsey graph) for poly(log n) entropy, rather than the optimal O(log n). In this paper we modify the construction to solve the above barrier. Using the currently best explicit non-malleable extractors we get an explicit bipartite Ramsey graph for sets of size 2^k, for k = O(log n log log n). Any further improvement in the construction of non-malleable extractors would immediately yield a corresponding two-source extractor. Intuitively, Chattopadhyay and Zuckerman use an extractor as a sampler, and we observe that one could use a weaker object – a somewhere-random condenser with a small entropy gap and a very short seed. We also show how to explicitly construct this weaker object using the error reduction technique of Raz, Reingold and Vadhan (1999), and the constant-degree dispersers of Zuckerman (2006) that also work against extremely small tests. @InProceedings{STOC17p1185, author = {Avraham Ben-Aroya and Dean Doron and Amnon Ta-Shma}, title = {An Efficient Reduction from Two-Source to Non-malleable Extractors: Achieving Near-Logarithmic Min-entropy}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1185--1194}, doi = {}, year = {2017}, } |
|
Dughmi, Shaddin |
STOC '17: "Bernoulli Factories and Black-Box ..."
Bernoulli Factories and Black-Box Reductions in Mechanism Design
Shaddin Dughmi, Jason D. Hartline, Robert Kleinberg, and Rad Niazadeh (University of Southern California, USA; Northwestern University, USA; Cornell University, USA) We provide a polynomial-time reduction from Bayesian incentive-compatible mechanism design to Bayesian algorithm design for welfare maximization problems. Unlike prior results, our reduction achieves exact incentive compatibility for problems with multi-dimensional and continuous type spaces. The key technical barrier preventing exact incentive compatibility in prior black-box reductions is that repairing violations of incentive constraints requires understanding the distribution of the mechanism’s output, which is typically #P-hard to compute. Reductions that instead estimate the output distribution by sampling inevitably suffer from sampling error, which typically precludes exact incentive compatibility. We overcome this barrier by employing and generalizing the computational model in the literature on “Bernoulli Factories”. In a Bernoulli factory problem, one is given a function mapping the bias of an “input coin” to that of an “output coin”, and the challenge is to efficiently simulate the output coin given only sample access to the input coin. Consider a generalization which we call the “expectations from samples” computational model, in which a problem instance is specified by a function mapping the expected values of a set of input distributions to a distribution over outcomes. The challenge is to give a polynomial time algorithm that exactly samples from the distribution over outcomes given only sample access to the input distributions. In this model we give a polynomial time algorithm for the function given by “exponential weights”: expected values of the input distributions correspond to the weights of alternatives and we wish to select an alternative with probability proportional to its weight. This algorithm is the key ingredient in designing an incentive compatible mechanism for bipartite matching, which can be used to make the approximately incentive compatible reduction of Hartline-Malekian-Kleinberg [2015] exactly incentive compatible. @InProceedings{STOC17p158, author = {Shaddin Dughmi and Jason D. Hartline and Robert Kleinberg and Rad Niazadeh}, title = {Bernoulli Factories and Black-Box Reductions in Mechanism Design}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {158--169}, doi = {}, year = {2017}, } |
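A classical warm-up gives the flavor of exact simulation from sample access (this is the textbook von Neumann trick, not the paper's exponential-weights factory): given only sample access to a coin of unknown bias p ∈ (0,1), output a perfectly fair coin flip.

import random

def von_neumann_fair_coin(coin):
    """Exactly simulate a fair coin from sample access to a coin of
    unknown bias p in (0,1): flip twice until the flips differ; the
    outcomes HT and TH are equally likely (both have probability p(1-p))."""
    while True:
        a, b = coin(), coin()
        if a != b:
            return a

rng = random.Random(7)
biased = lambda: rng.random() < 0.83   # bias unknown to the algorithm
flips = [von_neumann_fair_coin(biased) for _ in range(100000)]
print(sum(flips) / len(flips))         # close to 0.5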
|
Durfee, David |
STOC '17: "Sampling Random Spanning Trees ..."
Sampling Random Spanning Trees Faster Than Matrix Multiplication
David Durfee, Rasmus Kyng, John Peebles, Anup B. Rao, and Sushant Sachdeva (Georgia Institute of Technology, USA; Yale University, USA; Massachusetts Institute of Technology, USA; Google, USA) We present an algorithm that, with high probability, generates a random spanning tree from an edge-weighted undirected graph in Õ(n^{5/3} m^{1/3}) time. The tree is sampled from a distribution where the probability of each tree is proportional to the product of its edge weights. This improves upon the previous best algorithm due to Colbourn et al. that runs in matrix multiplication time, O(n^ω). For the special case of unweighted graphs, this improves upon the best previously known running time of Õ(min{n^ω, m√n, m^{4/3}}) for m ≫ n^{7/4} (Colbourn et al. ’96, Kelner-Madry ’09, Madry et al. ’15). The effective resistance metric is essential to our algorithm, as in the work of Madry et al., but we eschew determinant-based and random walk-based techniques used by previous algorithms. Instead, our algorithm is based on Gaussian elimination, and the fact that effective resistance is preserved in the graph resulting from eliminating a subset of vertices (called a Schur complement). As part of our algorithm, we show how to compute ε-approximate effective resistances for a set S of vertex pairs via approximate Schur complements in Õ(m + (n + |S|)ε^{−2}) time, without using the Johnson-Lindenstrauss lemma which requires Õ(min{(m + |S|)ε^{−2}, m + nε^{−4} + |S|ε^{−2}}) time. We combine this approximation procedure with an error correction procedure for handling edges where our estimate isn’t sufficiently accurate. @InProceedings{STOC17p730, author = {David Durfee and Rasmus Kyng and John Peebles and Anup B. Rao and Sushant Sachdeva}, title = {Sampling Random Spanning Trees Faster Than Matrix Multiplication}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {730--742}, doi = {}, year = {2017}, } |
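For contrast with the paper's determinant-free, Gaussian-elimination approach, the classical random-walk method it moves away from fits in a few lines: Aldous-Broder walks until every vertex has been visited and keeps each vertex's first-entry edge, which yields a uniformly random spanning tree of a connected unweighted graph (illustrative sketch).

import random

def aldous_broder(adj, rng):
    """Uniform random spanning tree of a connected unweighted graph:
    random-walk until all vertices are seen; the first-entry edges form
    the tree. adj: dict vertex -> list of neighbors."""
    vertices = list(adj)
    current = rng.choice(vertices)
    visited = {current}
    tree = []
    while len(visited) < len(vertices):
        nxt = rng.choice(adj[current])
        if nxt not in visited:
            visited.add(nxt)
            tree.append((current, nxt))
        current = nxt
    return tree

rng = random.Random(8)
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}  # 2x2 grid / 4-cycle
print(aldous_broder(adj, rng))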
|
Eenberg, Kasper |
STOC '17: "DecreaseKeys Are Expensive ..."
DecreaseKeys Are Expensive for External Memory Priority Queues
Kasper Eenberg, Kasper Green Larsen, and Huacheng Yu (Aarhus University, Denmark; Stanford University, USA) One of the biggest open problems in external memory data structures is the priority queue problem with DecreaseKey operations. If only Insert and ExtractMin operations need to be supported, one can design a comparison-based priority queue performing O((N/B) lg_{M/B} N) I/Os over a sequence of N operations, where B is the disk block size in number of words and M is the main memory size in number of words. This matches the lower bound for comparison-based sorting and is hence optimal for comparison-based priority queues. However, if we also need to support DecreaseKeys, the performance of the best known priority queue is only O((N/B) lg_2 N) I/Os. The big open question is whether a degradation in performance really is necessary. We answer this question affirmatively by proving a lower bound of Ω((N/B) lg_{lg N} B) I/Os for processing a sequence of N intermixed Insert, ExtractMin and DecreaseKey operations. Our lower bound is proved in the cell probe model and thus holds also for non-comparison-based priority queues. @InProceedings{STOC17p1081, author = {Kasper Eenberg and Kasper Green Larsen and Huacheng Yu}, title = {DecreaseKeys Are Expensive for External Memory Priority Queues}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1081--1093}, doi = {}, year = {2017}, } |
|
Ehsani, Soheil |
STOC '17: "Beating 1-1/e for Ordered ..."
Beating 1-1/e for Ordered Prophets
Melika Abolhassani, Soheil Ehsani, Hossein Esfandiari, MohammadTaghi HajiAghayi, Robert Kleinberg, and Brendan Lucier (University of Maryland at College Park, USA; Cornell University, USA; Microsoft Research, USA) Hill and Kertz studied the prophet inequality on iid distributions [The Annals of Probability 1982]. They proved a theoretical bound of 1 − 1/e on the approximation factor of their algorithm. They conjectured that the best approximation factor for arbitrarily large n is 1/(1+1/e) ≃ 0.731. This conjecture remained open prior to this paper for over 30 years. In this paper we present a threshold-based algorithm for the prophet inequality with n iid distributions. Using a nontrivial and novel approach we show that our algorithm is a 0.738-approximation algorithm. By beating the bound of 1/(1+1/e), this refutes the conjecture of Hill and Kertz. Moreover, we generalize our results to non-uniform distributions and discuss their applications in mechanism design. @InProceedings{STOC17p61, author = {Melika Abolhassani and Soheil Ehsani and Hossein Esfandiari and MohammadTaghi HajiAghayi and Robert Kleinberg and Brendan Lucier}, title = {Beating 1-1/e for Ordered Prophets}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {61--71}, doi = {}, year = {2017}, } |
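To see the single-threshold idea in action, here is a toy simulation (illustrative only; it uses the classical median-of-the-maximum threshold, which guarantees half the prophet's value even for non-identical distributions, rather than the paper's 0.738 rule for the iid case):

import random

def threshold_stopping(values, tau):
    """Accept the first value that reaches the threshold tau (0 if none)."""
    for v in values:
        if v >= tau:
            return v
    return 0.0

rng = random.Random(9)
n, trials = 10, 200000
tau = 0.5 ** (1 / n)   # P(max of n iid U[0,1] draws >= tau) = 1/2
alg = prophet = 0.0
for _ in range(trials):
    values = [rng.random() for _ in range(n)]
    alg += threshold_stopping(values, tau)
    prophet += max(values)
print(f"ALG/PROPHET = {alg / prophet:.3f}")  # above the 1/2 worst-case guarantee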
|
Eldan, Ronen |
STOC '17: "Kernel-Based Methods for Bandit ..."
Kernel-Based Methods for Bandit Convex Optimization
Sébastien Bubeck, Yin Tat Lee, and Ronen Eldan (Microsoft Research, USA; Weizmann Institute of Science, Israel) We consider the adversarial convex bandit problem and we build the first poly(T)-time algorithm with poly(n)√T-regret for this problem. To do so we introduce three new ideas in the derivative-free optimization literature: (i) kernel methods, (ii) a generalization of Bernoulli convolutions, and (iii) a new annealing schedule for exponential weights (with increasing learning rate). The basic version of our algorithm achieves Õ(n^{9.5}√T)-regret, and we show that a simple variant of this algorithm can be run in poly(n log(T))-time per step at the cost of an additional poly(n) T^{o(1)} factor in the regret. These results improve upon the Õ(n^{11}√T)-regret and exp(poly(T))-time result of the first two authors, and the log(T)^{poly(n)}√T-regret and log(T)^{poly(n)}-time result of Hazan and Li. Furthermore we conjecture that another variant of the algorithm could achieve Õ(n^{1.5}√T)-regret, and moreover that this regret is unimprovable (the current best lower bound being Ω(n√T) and it is achieved with linear functions). For the simpler situation of zeroth order stochastic convex optimization this corresponds to the conjecture that the optimal query complexity is of order n^3/ε^2. @InProceedings{STOC17p72, author = {Sébastien Bubeck and Yin Tat Lee and Ronen Eldan}, title = {Kernel-Based Methods for Bandit Convex Optimization}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {72--85}, doi = {}, year = {2017}, } |
|
Elkin, Michael |
STOC '17: "Distributed Exact Shortest ..."
Distributed Exact Shortest Paths in Sublinear Time
Michael Elkin (Ben-Gurion University of the Negev, Israel) The distributed single-source shortest paths problem is one of the most fundamental and central problems in message-passing distributed computing. The classical Bellman-Ford algorithm solves it in O(n) time, where n is the number of vertices in the input graph G. Peleg and Rubinovich, FOCS’99, showed a lower bound of Ω(D + √n) for this problem, where D is the hop-diameter of G. Whether or not this problem can be solved in o(n) time when D is relatively small is a major notorious open question. Despite intensive research that yielded near-optimal algorithms for the approximate variant of this problem, no progress was reported for the original problem. In this paper we answer this question in the affirmative. We devise an algorithm that requires O((n log n)^{5/6}) time, for D = O(√(n log n)), and O(D^{1/3} · (n log n)^{2/3}) time, for larger D. This running time is sublinear in n in almost the entire range of parameters, specifically, for D = o(n/log^2 n). We also generalize our result in two directions. One is when edges have bandwidth b ≥ 1, and the other is the s-sources shortest paths problem. For the former problem, our algorithm provides an improved bound, compared to the unit-bandwidth case. In particular, we provide an all-pairs shortest paths algorithm that requires O(n^{5/3} · log^{2/3} n) time, even for b = 1, for all values of D. For the latter problem (of s sources), our algorithm also provides bounds that improve upon the previous state-of-the-art in the entire range of parameters. From the technical viewpoint, our algorithm computes a hopset G″ of a skeleton graph G′ of G without first computing G′ itself. We then conduct a Bellman-Ford exploration in G′ ∪ G″, while computing the required edges of G′ on the fly. As a result, our algorithm computes exactly those edges of G′ that it really needs, rather than computing approximately the entire G′. @InProceedings{STOC17p757, author = {Michael Elkin}, title = {Distributed Exact Shortest Paths in Sublinear Time}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {757--770}, doi = {}, year = {2017}, } |
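The Bellman-Ford baseline the paper improves on is a useful reference point: in the synchronous message-passing model, every vertex relaxes its distance estimate once per round using the estimates its neighbors broadcast in the previous round, and convergence takes up to n−1 rounds. A round-by-round sketch (illustrative; a centralized simulation of the message-passing schedule, not a distributed implementation):

def distributed_bellman_ford(adj, source):
    """Synchronous Bellman-Ford: each round, every vertex relaxes using
    its neighbors' estimates from the previous round.
    adj: dict u -> list of (v, weight). Returns (distances, rounds used)."""
    INF = float("inf")
    dist = {u: INF for u in adj}
    dist[source] = 0
    for rounds in range(1, len(adj)):
        new_dist = dict(dist)
        for u in adj:                    # u broadcasts dist[u] to neighbors
            for v, w in adj[u]:
                if dist[u] + w < new_dist[v]:
                    new_dist[v] = dist[u] + w
        if new_dist == dist:             # no estimate changed: converged
            return dist, rounds - 1
        dist = new_dist
    return dist, len(adj) - 1

adj = {0: [(1, 1), (2, 4)], 1: [(2, 1), (3, 5)], 2: [(3, 1)], 3: []}
print(distributed_bellman_ford(adj, 0))  # ({0: 0, 1: 1, 2: 2, 3: 3}, 3)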
|
Esfandiari, Hossein |
STOC '17: "Beating 1-1/e for Ordered ..."
Beating 1-1/e for Ordered Prophets
Melika Abolhassani, Soheil Ehsani, Hossein Esfandiari, MohammadTaghi HajiAghayi, Robert Kleinberg, and Brendan Lucier (University of Maryland at College Park, USA; Cornell University, USA; Microsoft Research, USA) Hill and Kertz studied the prophet inequality on iid distributions [The Annals of Probability 1982]. They proved a theoretical bound of 1 − 1/e on the approximation factor of their algorithm. They conjectured that the best approximation factor for arbitrarily large n is 1/(1+1/e) ≃ 0.731. This conjecture remained open prior to this paper for over 30 years. In this paper we present a threshold-based algorithm for the prophet inequality with n iid distributions. Using a nontrivial and novel approach we show that our algorithm is a 0.738-approximation algorithm. By beating the bound of 1/(1+1/e), this refutes the conjecture of Hill and Kertz. Moreover, we generalize our results to non-uniform distributions and discuss their applications in mechanism design. @InProceedings{STOC17p61, author = {Melika Abolhassani and Soheil Ehsani and Hossein Esfandiari and MohammadTaghi HajiAghayi and Robert Kleinberg and Brendan Lucier}, title = {Beating 1-1/e for Ordered Prophets}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {61--71}, doi = {}, year = {2017}, } |
|
Fan, Zhou |
STOC '17: "How Well Do Local Algorithms ..."
How Well Do Local Algorithms Solve Semidefinite Programs?
Zhou Fan and Andrea Montanari (Stanford University, USA) Several probabilistic models from high-dimensional statistics and machine learning reveal an intriguing and yet poorly understood dichotomy. Either simple local algorithms succeed in estimating the object of interest, or even sophisticated semi-definite programming (SDP) relaxations fail. In order to explore this phenomenon, we study a classical SDP relaxation of the minimum graph bisection problem, when applied to Erdős-Rényi random graphs with bounded average degree d > 1, and obtain several types of results. First, we use a dual witness construction (using the so-called non-backtracking matrix of the graph) to upper bound the SDP value. Second, we prove that a simple local algorithm approximately solves the SDP to within a factor 2d^2/(2d^2 + d - 1) of the upper bound. In particular, the local algorithm is at most 8/9 suboptimal, and 1 + O(d^{-1}) suboptimal for large degree. We then analyze a more sophisticated local algorithm, which aggregates information according to the harmonic measure on the limiting Galton-Watson (GW) tree. The resulting lower bound is expressed in terms of the conductance of the GW tree and matches surprisingly well the empirically determined SDP values on large-scale Erdős-Rényi graphs. We finally consider the planted partition model. In this case, purely local algorithms are known to fail, but they do succeed if a small amount of side information is available. Our results imply quantitative bounds on the threshold for partial recovery using SDP in this model. @InProceedings{STOC17p604, author = {Zhou Fan and Andrea Montanari}, title = {How Well Do Local Algorithms Solve Semidefinite Programs?}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {604--614}, doi = {}, year = {2017}, } |
|
Feige, Uriel |
STOC '17: "Approximate Modularity Revisited ..."
Approximate Modularity Revisited
Uriel Feige, Michal Feldman, and Inbal Talgam-Cohen (Weizmann Institute of Science, Israel; Microsoft Research, Israel; Tel Aviv University, Israel; Hebrew University of Jerusalem, Israel) Set functions with convenient properties (such as submodularity) appear in application areas of current interest, such as algorithmic game theory, and allow for improved optimization algorithms. It is natural to ask (e.g., in the context of data-driven optimization) how robust such properties are, and whether small deviations from them can be tolerated. We consider two such questions in the important special case of linear set functions. One question that we address is whether any set function that approximately satisfies the modularity equation (linear functions satisfy the modularity equation exactly) is close to a linear function. The answer to this is positive (in a precise formal sense) as shown by Kalton and Roberts [1983] (and further improved by Bondarenko, Prymak, and Radchenko [2013]). We revisit their proof idea that is based on expander graphs, and provide significantly stronger upper bounds by combining it with new techniques. Furthermore, we provide improved lower bounds for this problem. Another question that we address is that of how to learn a linear function h that is close to an approximately linear function f, while querying the value of f on only a small number of sets. We present a deterministic algorithm that makes only linearly many (in the number of items) nonadaptive queries, thereby improving over a previous algorithm of Chierichetti, Das, Dasgupta and Kumar [2015] that is randomized and makes more than a quadratic number of queries. Our learning algorithm is based on a Hadamard transform. @InProceedings{STOC17p1028, author = {Uriel Feige and Michal Feldman and Inbal Talgam-Cohen}, title = {Approximate Modularity Revisited}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1028--1041}, doi = {}, year = {2017}, } |
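To see what "approximately satisfies the modularity equation" means, here is a small brute-force checker of the worst-case violation of f(S) + f(T) = f(S ∪ T) + f(S ∩ T). It is illustrative only: the function names and the toy perturbation are ours, not the paper's:

```python
# Worst-case violation of the modularity equation over all pairs of
# subsets of a tiny ground set (brute force; exponential in |ground|).
from itertools import chain, combinations

def subsets(items):
    return chain.from_iterable(combinations(items, r) for r in range(len(items) + 1))

def modularity_violation(f, ground):
    worst = 0.0
    for S in subsets(ground):
        for T in subsets(ground):
            S_, T_ = frozenset(S), frozenset(T)
            gap = abs(f(S_) + f(T_) - f(S_ | T_) - f(S_ & T_))
            worst = max(worst, gap)
    return worst

# A linear (modular) function violates the equation by exactly 0...
weights = {1: 2.0, 2: -1.0, 3: 0.5}
linear = lambda S: sum(weights[i] for i in S)
print(modularity_violation(linear, [1, 2, 3]))   # 0.0
# ...while a small perturbation is only approximately modular.
noisy = lambda S: linear(S) + (0.01 if len(S) == 2 else 0.0)
print(modularity_violation(noisy, [1, 2, 3]))    # 0.02
```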
|
Feldman, Michal |
STOC '17: "Approximate Modularity Revisited ..."
Approximate Modularity Revisited
Uriel Feige, Michal Feldman, and Inbal Talgam-Cohen (Weizmann Institute of Science, Israel; Microsoft Research, Israel; Tel Aviv University, Israel; Hebrew University of Jerusalem, Israel) Set functions with convenient properties (such as submodularity) appear in application areas of current interest, such as algorithmic game theory, and allow for improved optimization algorithms. It is natural to ask (e.g., in the context of data-driven optimization) how robust such properties are, and whether small deviations from them can be tolerated. We consider two such questions in the important special case of linear set functions. One question that we address is whether any set function that approximately satisfies the modularity equation (linear functions satisfy the modularity equation exactly) is close to a linear function. The answer to this is positive (in a precise formal sense) as shown by Kalton and Roberts [1983] (and further improved by Bondarenko, Prymak, and Radchenko [2013]). We revisit their proof idea that is based on expander graphs, and provide significantly stronger upper bounds by combining it with new techniques. Furthermore, we provide improved lower bounds for this problem. Another question that we address is that of how to learn a linear function h that is close to an approximately linear function f, while querying the value of f on only a small number of sets. We present a deterministic algorithm that makes only linearly many (in the number of items) nonadaptive queries, thereby improving over a previous algorithm of Chierichetti, Das, Dasgupta and Kumar [2015] that is randomized and makes more than a quadratic number of queries. Our learning algorithm is based on a Hadamard transform. @InProceedings{STOC17p1028, author = {Uriel Feige and Michal Feldman and Inbal Talgam-Cohen}, title = {Approximate Modularity Revisited}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1028--1041}, doi = {}, year = {2017}, } |
|
Filmus, Yuval |
STOC '17: "Twenty (Simple) Questions ..."
Twenty (Simple) Questions
Yuval Dagan, Yuval Filmus, Ariel Gabizon, and Shay Moran (Technion, Israel; Zerocoin Electronic Coin, USA; University of California at San Diego, USA; Simons Institute for the Theory of Computing Berkeley, USA) A basic combinatorial interpretation of Shannon’s entropy function is via the “20 questions” game. This cooperative game is played by two players, Alice and Bob: Alice picks a distribution π over the numbers {1,…,n}, and announces it to Bob. She then chooses a number x according to π, and Bob attempts to identify x using as few Yes/No queries as possible, on average. An optimal strategy for the “20 questions” game is given by a Huffman code for π: Bob’s questions reveal the codeword for x bit by bit. This strategy finds x using fewer than H(π)+1 questions on average. However, the questions asked by Bob could be arbitrary. In this paper, we investigate the following question: *Are there restricted sets of questions that match the performance of Huffman codes, either exactly or approximately?* Our first main result shows that for every distribution π, Bob has a strategy that uses only questions of the form “x < c?” and “x = c?”, and uncovers x using at most H(π)+1 questions on average, matching the performance of Huffman codes in this sense. We also give a natural set of O(rn^{1/r}) questions that achieve a performance of at most H(π)+r, and show that Ω(rn^{1/r}) questions are required to achieve such a guarantee. Our second main result gives a set Q of 1.25^{n+o(n)} questions such that for every distribution π, Bob can implement an optimal strategy for π using only questions from Q. We also show that 1.25^{n−o(n)} questions are needed, for infinitely many n. If we allow a small slack of r over the optimal strategy, then roughly (rn)^{Θ(1/r)} questions are necessary and sufficient. @InProceedings{STOC17p9, author = {Yuval Dagan and Yuval Filmus and Ariel Gabizon and Shay Moran}, title = {Twenty (Simple) Questions}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {9--21}, doi = {}, year = {2017}, } |
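The H(π)+1 benchmark can be checked numerically. Below is a short sketch (standard textbook material, not code from the paper) that computes the expected length of a Huffman code and compares it to the entropy:

```python
# Expected codeword length of a Huffman code vs. the entropy H(pi).
import heapq
from math import log2

def huffman_expected_length(pi):
    """Expected number of bits (questions) under a Huffman code for pi."""
    heap = [(p, i) for i, p in enumerate(pi)]
    heapq.heapify(heap)
    total = 0.0
    counter = len(pi)                 # tie-breaker so tuples always compare
    while len(heap) > 1:
        p1, _ = heapq.heappop(heap)
        p2, _ = heapq.heappop(heap)
        total += p1 + p2              # each merge adds one bit to all leaves below
        heapq.heappush(heap, (p1 + p2, counter))
        counter += 1
    return total

pi = [0.5, 0.25, 0.125, 0.125]
H = -sum(p * log2(p) for p in pi)
print(huffman_expected_length(pi), H)          # 1.75 1.75 (dyadic: optimal)
pi2 = [0.4, 0.3, 0.2, 0.1]
H2 = -sum(p * log2(p) for p in pi2)
print(huffman_expected_length(pi2) < H2 + 1)   # True
```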
|
Forbes, Michael A. |
STOC '17: "Succinct Hitting Sets and ..."
Succinct Hitting Sets and Barriers to Proving Algebraic Circuits Lower Bounds
Michael A. Forbes, Amir Shpilka, and Ben Lee Volk (Simons Institute for the Theory of Computing Berkeley, USA; Tel Aviv University, Israel) We formalize a framework of algebraically natural lower bounds for algebraic circuits. Just as with the natural proofs notion of Razborov and Rudich for boolean circuit lower bounds, our notion of algebraically natural lower bounds captures nearly all lower bound techniques known. However, unlike the boolean setting, there has been no concrete evidence demonstrating that this is a barrier to obtaining super-polynomial lower bounds for general algebraic circuits, as there is little understanding whether algebraic circuits are expressive enough to support "cryptography" secure against algebraic circuits. Following a similar result of Williams in the boolean setting, we show that the existence of an algebraic natural proofs barrier is equivalent to the existence of succinct derandomization of the polynomial identity testing problem. That is, whether the coefficient vectors of polylog(N)-degree polylog(N)-size circuits form a hitting set for the class of poly(N)-degree poly(N)-size circuits. Further, we give an explicit universal construction showing that if such a succinct hitting set exists, then our universal construction suffices. In addition, we assess the existing literature constructing hitting sets for restricted classes of algebraic circuits and observe that none of them are succinct as given. Yet, we show how to modify some of these constructions to obtain succinct hitting sets. This constitutes the first evidence supporting the existence of an algebraic natural proofs barrier. Our framework is similar to the Geometric Complexity Theory (GCT) program of Mulmuley and Sohoni, except that here we emphasize constructiveness of the proofs while the GCT program emphasizes symmetry. Nevertheless, our succinct hitting sets have relevance to the GCT program as they imply lower bounds for the complexity of the defining equations of polynomials computed by small circuits. @InProceedings{STOC17p653, author = {Michael A. Forbes and Amir Shpilka and Ben Lee Volk}, title = {Succinct Hitting Sets and Barriers to Proving Algebraic Circuits Lower Bounds}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {653--664}, doi = {}, year = {2017}, } |
|
Foster, Nate |
STOC '17: "The Next 700 Network Programming ..."
The Next 700 Network Programming Languages (Invited Talk)
Nate Foster (Cornell University, USA) Specification and verification of computer networks have become a reality in recent years, with the emergence of domain-specific programming languages and automated reasoning tools. But the design of these frameworks has been largely ad hoc, driven more by the needs of applications and the capabilities of hardware than by any foundational principles. This talk will present NetKAT, a language for programming networks based on a well-studied mathematical foundation: regular languages and finite automata. The talk will describe the design of the language, discuss its semantic underpinnings, and present highlights from ongoing work extending the language with stateful and probabilistic features. @InProceedings{STOC17p7, author = {Nate Foster}, title = {The Next 700 Network Programming Languages (Invited Talk)}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {7--7}, doi = {}, year = {2017}, } |
|
Fu, Zhiguo |
STOC '17: "Holographic Algorithm with ..."
Holographic Algorithm with Matchgates Is Universal for Planar #CSP over Boolean Domain
Jin-Yi Cai and Zhiguo Fu (University of Wisconsin-Madison, USA; Jilin University, China) We prove a complexity classification theorem that classifies all counting constraint satisfaction problems (#CSP) over Boolean variables into exactly three classes: (1) Polynomial-time solvable; (2) #P-hard for general instances, but solvable in polynomial time over planar structures; and (3) #P-hard over planar structures. The classification applies to all finite sets of complex-valued, not necessarily symmetric, constraint functions on Boolean variables. It is shown that Valiant's holographic algorithm with matchgates is a universal strategy for all problems in class (2). @InProceedings{STOC17p842, author = {Jin-Yi Cai and Zhiguo Fu}, title = {Holographic Algorithm with Matchgates Is Universal for Planar #CSP over Boolean Domain}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {842--855}, doi = {}, year = {2017}, } |
|
Gabizon, Ariel |
STOC '17: "Twenty (Simple) Questions ..."
Twenty (Simple) Questions
Yuval Dagan, Yuval Filmus, Ariel Gabizon, and Shay Moran (Technion, Israel; Zerocoin Electronic Coin, USA; University of California at San Diego, USA; Simons Institute for the Theory of Computing Berkeley, USA) A basic combinatorial interpretation of Shannon’s entropy function is via the “20 questions” game. This cooperative game is played by two players, Alice and Bob: Alice picks a distribution π over the numbers {1,…,n}, and announces it to Bob. She then chooses a number x according to π, and Bob attempts to identify x using as few Yes/No queries as possible, on average. An optimal strategy for the “20 questions” game is given by a Huffman code for π: Bob’s questions reveal the codeword for x bit by bit. This strategy finds x using fewer than H(π)+1 questions on average. However, the questions asked by Bob could be arbitrary. In this paper, we investigate the following question: *Are there restricted sets of questions that match the performance of Huffman codes, either exactly or approximately?* Our first main result shows that for every distribution π, Bob has a strategy that uses only questions of the form “x < c?” and “x = c?”, and uncovers x using at most H(π)+1 questions on average, matching the performance of Huffman codes in this sense. We also give a natural set of O(rn^{1/r}) questions that achieve a performance of at most H(π)+r, and show that Ω(rn^{1/r}) questions are required to achieve such a guarantee. Our second main result gives a set Q of 1.25^{n+o(n)} questions such that for every distribution π, Bob can implement an optimal strategy for π using only questions from Q. We also show that 1.25^{n−o(n)} questions are needed, for infinitely many n. If we allow a small slack of r over the optimal strategy, then roughly (rn)^{Θ(1/r)} questions are necessary and sufficient. @InProceedings{STOC17p9, author = {Yuval Dagan and Yuval Filmus and Ariel Gabizon and Shay Moran}, title = {Twenty (Simple) Questions}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {9--21}, doi = {}, year = {2017}, } |
|
Ganesh, Arun |
STOC '17: "Online Service with Delay ..."
Online Service with Delay
Yossi Azar, Arun Ganesh, Rong Ge, and Debmalya Panigrahi (Tel Aviv University, Israel; Duke University, USA) In this paper, we introduce the online service with delay problem. In this problem, there are n points in a metric space that issue service requests over time, and a server that serves these requests. The goal is to minimize the sum of the distance traveled by the server and the total delay (or a penalty function thereof) in serving the requests. This problem models the fundamental tradeoff between batching requests to improve locality and reducing delay to improve response time, a tradeoff that has many applications in operations management, operating systems, logistics, supply chain management, and scheduling. Our main result is to show a poly-logarithmic competitive ratio for the online service with delay problem. This result is obtained by an algorithm that we call the preemptive service algorithm. The salient feature of this algorithm is a process called preemptive service, which uses a novel combination of (recursive) time forwarding and spatial exploration on a metric space. We also generalize our results to k > 1 servers, and obtain stronger results for special metrics such as uniform and star metrics that correspond to (weighted) paging problems. @InProceedings{STOC17p551, author = {Yossi Azar and Arun Ganesh and Rong Ge and Debmalya Panigrahi}, title = {Online Service with Delay}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {551--563}, doi = {}, year = {2017}, } |
|
Garg, Ankit |
STOC '17: "Algorithmic and Optimization ..."
Algorithmic and Optimization Aspects of Brascamp-Lieb Inequalities, via Operator Scaling
Ankit Garg, Leonid Gurvits, Rafael Oliveira, and Avi Wigderson (Microsoft Research, USA; City College of New York, USA; Princeton University, USA; IAS, USA) The celebrated Brascamp-Lieb (BL) inequalities [BL76, Lie90], and their reverse form of Barthe [Bar98], are an important mathematical tool, unifying and generalizing numerous inequalities in analysis, convex geometry and information theory, with many used in computer science. While their structural theory is very well understood, far less is known about computing their main parameters (which we define below). Prior to this work, the best known algorithms for any of these optimization tasks required at least exponential time. In this work, we give polynomial time algorithms to compute: (1) Feasibility of BL-datum, (2) Optimal BL-constant, (3) Weak separation oracle for BL-polytopes. What is particularly exciting about this progress, beyond the better understanding of BL-inequalities, is that the objects above naturally encode rich families of optimization problems which had no prior efficient algorithms. In particular, the BL-constants (which we efficiently compute) are solutions to non-convex optimization problems, and the BL-polytopes (for which we provide efficient membership and separation oracles) are linear programs with exponentially many facets. Thus we hope that new combinatorial optimization problems can be solved via reductions to the ones above, and make modest initial steps in exploring this possibility. Our algorithms are obtained by a simple efficient reduction of a given BL-datum to an instance of the Operator Scaling problem defined by [Gur04]. To obtain the results above, we utilize the two (very recent and different) algorithms for the operator scaling problem [GGOW16, IQS15a]. Our reduction implies algorithmic versions of many of the known structural results on BL-inequalities, and in some cases provides proofs that are different or simpler than existing ones. Further, the analytic properties of the [GGOW16] algorithm provide new, effective bounds on the magnitude and continuity of BL-constants, with applications to non-linear versions of BL-inequalities; prior work relied on compactness, and thus provided no bounds. On a higher level, our application of the operator scaling algorithm to BL-inequalities further connects analysis and optimization with the diverse mathematical areas used so far to motivate and solve the operator scaling problem, which include commutative invariant theory, non-commutative algebra, computational complexity and quantum information theory. @InProceedings{STOC17p397, author = {Ankit Garg and Leonid Gurvits and Rafael Oliveira and Avi Wigderson}, title = {Algorithmic and Optimization Aspects of Brascamp-Lieb Inequalities, via Operator Scaling}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {397--409}, doi = {}, year = {2017}, } |
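The operator scaling iteration that the reduction targets is simple to state: alternately normalize the left and right marginals of the operator. The following bare-bones numerical sketch, under simplifying assumptions (square real matrices, generic nondegenerate input; all names are ours), is meant only to convey the shape of the [Gur04]-style iteration, not the paper's algorithm or its analysis:

```python
# Alternating-normalization operator scaling: scale the tuple (A_i) so
# that sum A A^T = I and sum A^T A = I; the residual goes to 0 exactly
# when the operator can be scaled to doubly stochastic.
import numpy as np

def inv_sqrt(M):
    """Inverse square root of a symmetric positive definite matrix."""
    vals, vecs = np.linalg.eigh(M)
    return vecs @ np.diag(vals ** -0.5) @ vecs.T

def operator_scaling(As, iters=200):
    n = As[0].shape[0]
    for _ in range(iters):
        L = inv_sqrt(sum(A @ A.T for A in As))    # left normalization
        As = [L @ A for A in As]
        R = inv_sqrt(sum(A.T @ A for A in As))    # right normalization
        As = [A @ R for A in As]
    err = np.linalg.norm(sum(A @ A.T for A in As) - np.eye(n))
    return As, err

rng = np.random.default_rng(0)
As = [rng.standard_normal((3, 3)) for _ in range(2)]
_, err = operator_scaling(As)
print(f"distance to doubly stochastic after scaling: {err:.2e}")
```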
|
Garg, Jugal |
STOC '17: "Settling the Complexity of ..."
Settling the Complexity of Leontief and PLC Exchange Markets under Exact and Approximate Equilibria
Jugal Garg, Ruta Mehta, Vijay V. Vazirani, and Sadra Yazdanbod (University of Illinois at Urbana-Champaign, USA; Georgia Institute of Technology, USA) Our first result shows membership in PPAD for the problem of computing approximate equilibria for an Arrow-Debreu exchange market for piecewise-linear concave (PLC) utility functions. As a corollary we also obtain membership in PPAD for Leontief utility functions. This settles an open question of Vazirani and Yannakakis (2011). Next we show FIXP-hardness of computing equilibria in Arrow-Debreu exchange markets under Leontief utility functions, and Arrow-Debreu markets under linear utility functions and Leontief production sets, thereby settling these open questions of Vazirani and Yannakakis (2011). As corollaries, we obtain FIXP-hardness for PLC utilities and for Arrow-Debreu markets under linear utility functions and polyhedral production sets. In all cases, as required under FIXP, the set of instances mapped onto will admit equilibria, i.e., will be "yes" instances. If all instances are under consideration, then in all cases we prove that the problem of deciding if a given instance admits an equilibrium is ETR-complete, where ETR is the class Existential Theory of Reals. As a consequence of the results stated above, and the fact that membership in FIXP has been established for PLC utilities, the entire computational difficulty of Arrow-Debreu markets under PLC utility functions lies in the Leontief utility subcase. This is perhaps the most unexpected aspect of our result, since Leontief utilities are meant for the case that goods are perfect complements, whereas PLC utilities are very general, capturing not only the cases when goods are complements and substitutes, but also arbitrary combinations of these and much more. Finally, we give a polynomial time algorithm for finding an equilibrium in Arrow-Debreu exchange markets under Leontief utility functions provided the number of agents is a constant. This settles part of an open problem of Devanur and Kannan (2008). @InProceedings{STOC17p890, author = {Jugal Garg and Ruta Mehta and Vijay V. Vazirani and Sadra Yazdanbod}, title = {Settling the Complexity of Leontief and PLC Exchange Markets under Exact and Approximate Equilibria}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {890--901}, doi = {}, year = {2017}, } |
|
Garg, Shashwat |
STOC '17: "Algorithmic Discrepancy Beyond ..."
Algorithmic Discrepancy Beyond Partial Coloring
Nikhil Bansal and Shashwat Garg (Eindhoven University of Technology, Netherlands) The partial coloring method is one of the most powerful and widely used methods in combinatorial discrepancy problems. However, in many cases it leads to sub-optimal bounds as the partial coloring step must be iterated a logarithmic number of times, and the errors can add up in an adversarial way. We give a new and general algorithmic framework that overcomes the limitations of the partial coloring method and can be applied in a black-box manner to various problems. Using this framework, we give new improved bounds and algorithms for several classic problems in discrepancy. In particular, for Tusnady’s problem, we give an improved O(log^2 n) bound for discrepancy of axis-parallel rectangles and more generally an O_d(log^d n) bound for d-dimensional boxes in ℝ^d. Previously, even non-constructively, the best bounds were O(log^{2.5} n) and O_d(log^{d+0.5} n) respectively. Similarly, for the Steinitz problem we give the first algorithm that matches the best known non-constructive bounds due to Banaszczyk in the ℓ_∞ case, and improves the previous algorithmic bounds substantially in the ℓ_2 case. Our framework is based upon a substantial generalization of the techniques developed recently in the context of the Komlós discrepancy problem. @InProceedings{STOC17p914, author = {Nikhil Bansal and Shashwat Garg}, title = {Algorithmic Discrepancy Beyond Partial Coloring}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {914--926}, doi = {}, year = {2017}, } STOC '17: "Faster Space-Efficient Algorithms ..." Faster Space-Efficient Algorithms for Subset Sum and k-Sum Nikhil Bansal, Shashwat Garg, Jesper Nederlof, and Nikhil Vyas (Eindhoven University of Technology, Netherlands; IIT Bombay, India) We present randomized algorithms that solve Subset Sum and Knapsack instances with n items in O*(2^{0.86n}) time, where the O*(·) notation suppresses factors polynomial in the input size, and polynomial space, assuming random read-only access to exponentially many random bits. These results can be extended to solve Binary Linear Programming on n variables with few constraints in a similar running time. We also show that for any constant k ≥ 2, random instances of k-Sum can be solved using O(n^{k−0.5} polylog(n)) time and O(log n) space, without the assumption of random access to random bits. Underlying these results is an algorithm that determines whether two given lists of length n with integers bounded by a polynomial in n share a common value. Assuming random read-only access to random bits, we show that this problem can be solved using O(log n) space significantly faster than the trivial O(n^2) time algorithm if no value occurs too often in the same list. @InProceedings{STOC17p198, author = {Nikhil Bansal and Shashwat Garg and Jesper Nederlof and Nikhil Vyas}, title = {Faster Space-Efficient Algorithms for Subset Sum and k-Sum}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {198--209}, doi = {}, year = {2017}, } |
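For contrast with the polynomial-space Subset Sum result above, the classical meet-in-the-middle baseline of Horowitz and Sahni runs in O*(2^{n/2}) time but stores an exponentially large table, which is precisely the space cost the paper avoids. A minimal sketch (names ours):

```python
# Meet-in-the-middle Subset Sum: enumerate sums of each half, store one
# half's sums in a set, and look for a complementary sum in the other.
def subset_sums(items):
    sums = {0}
    for x in items:
        sums |= {s + x for s in sums}
    return sums

def subset_sum(items, target):
    half = len(items) // 2
    left, right = items[:half], items[half:]
    right_sums = subset_sums(right)          # stored: exponential space
    return any(target - s in right_sums for s in subset_sums(left))

print(subset_sum([3, 34, 4, 12, 5, 2], 9))   # True  (4 + 5)
print(subset_sum([3, 34, 4, 12, 5, 2], 30))  # False
```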
|
Ge, Rong |
STOC '17: "Online Service with Delay ..."
Online Service with Delay
Yossi Azar, Arun Ganesh, Rong Ge, and Debmalya Panigrahi (Tel Aviv University, Israel; Duke University, USA) In this paper, we introduce the online service with delay problem. In this problem, there are n points in a metric space that issue service requests over time, and a server that serves these requests. The goal is to minimize the sum of the distance traveled by the server and the total delay (or a penalty function thereof) in serving the requests. This problem models the fundamental tradeoff between batching requests to improve locality and reducing delay to improve response time, a tradeoff that has many applications in operations management, operating systems, logistics, supply chain management, and scheduling. Our main result is to show a poly-logarithmic competitive ratio for the online service with delay problem. This result is obtained by an algorithm that we call the preemptive service algorithm. The salient feature of this algorithm is a process called preemptive service, which uses a novel combination of (recursive) time forwarding and spatial exploration on a metric space. We also generalize our results to k > 1 servers, and obtain stronger results for special metrics such as uniform and star metrics that correspond to (weighted) paging problems. @InProceedings{STOC17p551, author = {Yossi Azar and Arun Ganesh and Rong Ge and Debmalya Panigrahi}, title = {Online Service with Delay}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {551--563}, doi = {}, year = {2017}, } STOC '17: "Provable Learning of Noisy-or ..." Provable Learning of Noisy-or Networks Sanjeev Arora, Rong Ge, Tengyu Ma, and Andrej Risteski (Princeton University, USA; Duke University, USA) Many machine learning applications use latent variable models to explain structure in data, whereby visible variables (= coordinates of the given datapoint) are explained as a probabilistic function of some hidden variables. Learning the model ---that is, the mapping from hidden variables to visible ones and vice versa---is NP-hard even in very simple settings. In recent years, provably efficient algorithms were nevertheless developed for models with linear structure: topic models, mixture models, hidden Markov models, etc. These algorithms use matrix or tensor decomposition, and make some reasonable assumptions about the parameters of the underlying model. But matrix or tensor decomposition seems of little use when the latent variable model has nonlinearities. The current paper shows how to make progress: tensor decomposition is applied for learning the single-layer noisy-OR network, which is a textbook example of a Bayes net, and used for example in the classic QMR-DT software for diagnosing which disease(s) a patient may have by observing the symptoms he/she exhibits. The technical novelty here, which should be useful in other settings in future, is analysis of tensor decomposition in the presence of systematic error (i.e., where the noise/error is correlated with the signal, and doesn't decrease as the number of samples goes to infinity). This requires rethinking all steps of tensor decomposition methods from the ground up. For simplicity our analysis is stated assuming that the network parameters were chosen from a probability distribution but the method seems more generally applicable. @InProceedings{STOC17p1057, author = {Sanjeev Arora and Rong Ge and Tengyu Ma and Andrej Risteski}, title = {Provable Learning of Noisy-or Networks}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1057--1066}, doi = {}, year = {2017}, } |
|
Ghaffari, Mohsen |
STOC '17: "On the Complexity of Local ..."
On the Complexity of Local Distributed Graph Problems
Mohsen Ghaffari, Fabian Kuhn, and Yannic Maus (ETH Zurich, Switzerland; University of Freiburg, Germany) This paper is centered on the complexity of graph problems in the well-studied LOCAL model of distributed computing, introduced by Linial [FOCS ’87]. It is widely known that for many of the classic distributed graph problems (including maximal independent set (MIS) and (Δ+1)-vertex coloring), the randomized complexity is at most polylogarithmic in the size n of the network, while the best deterministic complexity is typically 2^{O(√log n)}. Understanding and potentially narrowing down this exponential gap is considered to be one of the central long-standing open questions in the area of distributed graph algorithms. We investigate the problem by introducing a complexity-theoretic framework that allows us to shed some light on the role of randomness in the LOCAL model. We define the SLOCAL model as a sequential version of the LOCAL model. Our framework allows us to prove completeness results with respect to the class of problems which can be solved efficiently in the SLOCAL model, implying that if any of the complete problems can be solved deterministically in polylog n rounds in the LOCAL model, we can deterministically solve all efficient SLOCAL-problems (including MIS and (Δ+1)-coloring) in polylog n rounds in the LOCAL model. Perhaps most surprisingly, we show that a rather rudimentary-looking graph coloring problem is complete in the above sense: Color the nodes of a graph with colors red and blue such that each node of sufficiently large polylogarithmic degree has at least one neighbor of each color. The problem admits a trivial zero-round randomized solution. The result can be viewed as showing that the only obstacle to getting efficient deterministic algorithms in the LOCAL model is an efficient algorithm to approximately round fractional values into integer values. In addition, our formal framework also allows us to develop polylogarithmic-time randomized distributed algorithms in a simpler way. As a result, we provide a polylog-time distributed approximation scheme for arbitrary distributed covering and packing integer linear programs. @InProceedings{STOC17p784, author = {Mohsen Ghaffari and Fabian Kuhn and Yannic Maus}, title = {On the Complexity of Local Distributed Graph Problems}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {784--797}, doi = {}, year = {2017}, } |
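The "trivial zero-round randomized solution" mentioned in the abstract is easy to make concrete: every node flips a fair coin, and a node of degree d sees only one color among its neighbors with probability 2^(1-d), so high-degree nodes fail only with polynomially small probability. An illustrative sketch (all function names ours):

```python
# Zero-round randomized red/blue coloring: each node picks a color
# independently; we then check which high-degree nodes miss a color.
import random

def random_two_coloring(nodes):
    return {v: random.choice(("red", "blue")) for v in nodes}

def bad_nodes(adj, coloring, min_degree):
    """Nodes of degree >= min_degree missing a neighbor of some color."""
    bad = []
    for v, nbrs in adj.items():
        if len(nbrs) < min_degree:
            continue
        if {coloring[u] for u in nbrs} != {"red", "blue"}:
            bad.append(v)
    return bad

# Star with a high-degree center: the center almost surely sees both colors.
adj = {0: list(range(1, 41)), **{i: [0] for i in range(1, 41)}}
coloring = random_two_coloring(adj)
print(bad_nodes(adj, coloring, min_degree=20))   # [] with prob 1 - 2^-39
```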
|
Gharan, Shayan Oveis |
STOC '17: "A Generalization of Permanent ..."
A Generalization of Permanent Inequalities and Applications in Counting and Optimization
Nima Anari and Shayan Oveis Gharan (Stanford University, USA; University of Washington, USA) A polynomial p ∈ ℝ[z_1,…,z_n] is real stable if it has no roots in the upper-half complex plane. Gurvits’s permanent inequality gives a lower bound on the coefficient of the z_1 z_2 ⋯ z_n monomial of a real stable polynomial p with nonnegative coefficients. This fundamental inequality has been used to attack several counting and optimization problems. Here, we study a more general question: Given a stable multilinear polynomial p with nonnegative coefficients and a set of monomials S, we show that if the polynomial obtained by summing up all monomials in S is real stable, then we can lower bound the sum of coefficients of monomials of p that are in S. We also prove generalizations of this theorem to (real stable) polynomials that are not multilinear. We use our theorem to give a new proof of Schrijver’s inequality on the number of perfect matchings of a regular bipartite graph, generalize a recent result of Nikolov and Singh, and give deterministic polynomial time approximation algorithms for several counting problems. @InProceedings{STOC17p384, author = {Nima Anari and Shayan Oveis Gharan}, title = {A Generalization of Permanent Inequalities and Applications in Counting and Optimization}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {384--396}, doi = {}, year = {2017}, } |
|
Gishboliner, Lior |
STOC '17: "Removal Lemmas with Polynomial ..."
Removal Lemmas with Polynomial Bounds
Lior Gishboliner and Asaf Shapira (Tel Aviv University, Israel) We give new sufficient and necessary criteria guaranteeing that a hereditary graph property can be tested with a polynomial query complexity. Although both are simple combinatorial criteria, they imply almost all prior positive and negative results of this type, as well as many new ones. One striking application of our results is that every semi-algebraic graph property (e.g., being an interval graph, a unit-disc graph etc.) can be tested with a polynomial query complexity. This confirms a conjecture of Alon. The proofs combine probabilistic ideas together with a novel application of a conditional regularity lemma for matrices, due to Alon, Fischer and Newman. @InProceedings{STOC17p510, author = {Lior Gishboliner and Asaf Shapira}, title = {Removal Lemmas with Polynomial Bounds}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {510--522}, doi = {}, year = {2017}, } |
|
Gonczarowski, Yannai A. |
STOC '17: "Efficient Empirical Revenue ..."
Efficient Empirical Revenue Maximization in Single-Parameter Auction Environments
Yannai A. Gonczarowski and Noam Nisan (Hebrew University of Jerusalem, Israel; Microsoft Research, Israel) We present a polynomial-time algorithm that, given samples from the unknown valuation distribution of each bidder, learns an auction that approximately maximizes the auctioneer's revenue in a variety of single-parameter auction environments including matroid environments, position environments, and the public project environment. The valuation distributions may be arbitrary bounded distributions (in particular, they may be irregular, and may differ for the various bidders), thus resolving a problem left open by previous papers. The analysis uses basic tools, is performed in its entirety in value-space, and simplifies the analysis of previously known results for special cases. Furthermore, the analysis extends to certain single-parameter auction environments where precise revenue maximization is known to be intractable, such as knapsack environments. @InProceedings{STOC17p856, author = {Yannai A. Gonczarowski and Noam Nisan}, title = {Efficient Empirical Revenue Maximization in Single-Parameter Auction Environments}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {856--868}, doi = {}, year = {2017}, } STOC '17: "The Menu-Size Complexity of ..." The Menu-Size Complexity of Revenue Approximation Moshe Babaioff, Yannai A. Gonczarowski, and Noam Nisan (Microsoft Research, Israel; Hebrew University of Jerusalem, Israel) We consider a monopolist that is selling n items to a single additive buyer, where the buyer’s values for the items are drawn according to independent distributions F_1, F_2, …, F_n that possibly have unbounded support. It is well known that — unlike in the single item case — the revenue-optimal auction (a pricing scheme) may be complex, sometimes requiring a continuum of menu entries. It is also known that simple auctions with a finite bounded number of menu entries can extract a constant fraction of the optimal revenue. Nonetheless, the question of the possibility of extracting an arbitrarily high fraction of the optimal revenue via a finite menu size remained open. In this paper, we give an affirmative answer to this open question, showing that for every n and for every ε > 0, there exists a complexity bound C = C(n,ε) such that auctions of menu size at most C suffice for obtaining a (1−ε) fraction of the optimal revenue from any F_1,…,F_n. We prove upper and lower bounds on the revenue approximation complexity C(n,ε), as well as on the deterministic communication complexity required to run an auction that achieves such an approximation. @InProceedings{STOC17p869, author = {Moshe Babaioff and Yannai A. Gonczarowski and Noam Nisan}, title = {The Menu-Size Complexity of Revenue Approximation}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {869--877}, doi = {}, year = {2017}, } |
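In its simplest special case (one item, one bidder, posted price), empirical revenue maximization just picks the price maximizing price times empirical sale probability over the samples. The sketch below illustrates only that value-space idea; the paper handles far more general single-parameter environments:

```python
# Single-bidder posted pricing from samples: the optimal empirical
# posted price is always one of the sampled values, so scan them.
def best_posted_price(samples):
    vals = sorted(samples, reverse=True)
    best_price, best_rev = 0.0, 0.0
    for i, p in enumerate(vals):          # exactly i+1 samples are >= p
        rev = p * (i + 1) / len(vals)     # empirical expected revenue
        if rev > best_rev:
            best_price, best_rev = p, rev
    return best_price, best_rev

samples = [0.1, 0.3, 0.35, 0.8, 0.9, 0.95]
price, rev = best_posted_price(samples)
print(price, rev)   # price 0.8, revenue 0.8 * 3/6 = 0.4
```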
|
Grandoni, Fabrizio |
STOC '17: "Surviving in Directed Graphs: ..."
Surviving in Directed Graphs: A Quasi-Polynomial-Time Polylogarithmic Approximation for Two-Connected Directed Steiner Tree
Fabrizio Grandoni and Bundit Laekhanukit (IDSIA, Switzerland; University of Lugano, Switzerland; Weizmann Institute of Science, Israel) Real-world networks are often prone to failures. A reliable network needs to cope with this situation and must provide a backup communication channel. This motivates the study of survivable network design, which has been a focus of research for a few decades. To date, survivable network design problems on undirected graphs are well-understood. For example, there is a 2-approximation in the case of edge failures [Jain, FOCS’98/Combinatorica’01]. The problems on directed graphs, in contrast, have seen very little progress. Most techniques for the undirected case like primal-dual and iterative rounding methods do not seem to extend to the directed case. Almost no non-trivial approximation algorithm is known even for a simple case where we wish to design a network that tolerates a single failure. In this paper, we study a survivable network design problem on directed graphs, 2-Connected Directed Steiner Tree (2-DST): given an n-vertex weighted directed graph, a root r, and a set of h terminals S, find a min-cost subgraph H that has two edge/vertex disjoint paths from r to any t ∈ S. 2-DST is a natural generalization of the classical Directed Steiner Tree problem (DST), where we have an additional requirement that the network must tolerate one failure. No non-trivial approximation is known for 2-DST. This was left as an open problem by Feldman et al. [SODA’09; JCSS] and has then been studied by Cheriyan et al. [SODA’12; TALG] and Laekhanukit [SODA’14]. However, no positive result was known except for the special case of a D-shallow instance [Laekhanukit, ICALP’16]. We present an O(D^3 log D · h^{2/D} · log n) approximation algorithm for 2-DST that runs in time O(n^{O(D)}), for any D ∈ [log_2 h]. This implies a polynomial-time O(h^є log n) approximation for any constant є > 0, and a poly-logarithmic approximation running in quasi-polynomial time. We remark that this is essentially the best known even for the classical DST, and the latter problem is hard to approximate to within a factor of log^{2−є} n [Halperin and Krauthgamer, STOC’03]. As a by-product, we obtain an algorithm with the same approximation guarantee for the 2-Connected Directed Steiner Subgraph problem, where the goal is to find a min-cost subgraph such that every pair of terminals is 2-edge/vertex-connected. Our approximation algorithm is based on a careful combination of several techniques. In more detail, we decompose an optimal solution into two (possibly not edge disjoint) divergent trees that induce two edge disjoint paths from the root to any given terminal. These divergent trees are then embedded into a shallow tree by means of Zelikovsky’s height reduction theorem. On the latter tree we solve a 2-Connected Group Steiner Tree problem and then map this solution back to the original graph. Crucially, our tree embedding is achieved via a probabilistic mapping guided by an LP: This is the main technical novelty of our approach, and might be useful for future work. @InProceedings{STOC17p420, author = {Fabrizio Grandoni and Bundit Laekhanukit}, title = {Surviving in Directed Graphs: A Quasi-Polynomial-Time Polylogarithmic Approximation for Two-Connected Directed Steiner Tree}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {420--428}, doi = {}, year = {2017}, } |
|
Guo, Heng |
STOC '17: "Uniform Sampling through the ..."
Uniform Sampling through the Lovász Local Lemma
Heng Guo, Mark Jerrum, and Jingcheng Liu (Queen Mary University of London, UK; University of California at Berkeley, USA) We propose a new algorithmic framework, called “partial rejection sampling”, to draw samples exactly from a product distribution, conditioned on none of a number of bad events occurring. Our framework builds (perhaps surprising) new connections between the variable framework of the Lovász Local Lemma and some classical sampling algorithms such as the “cycle-popping” algorithm for rooted spanning trees by Wilson. Among other applications, we discover new algorithms to sample satisfying assignments of k-CNF formulas with bounded variable occurrences. @InProceedings{STOC17p342, author = {Heng Guo and Mark Jerrum and Jingcheng Liu}, title = {Uniform Sampling through the Lovász Local Lemma}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {342--355}, doi = {}, year = {2017}, } |
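The flavor of partial rejection sampling can be conveyed in a few lines: draw all variables from the product distribution, then repeatedly redraw only the variables of occurring bad events (here, violated clauses) until none occur. The sketch below shows the generic resampling loop only; identifying conditions (e.g., extremal instances) under which the final assignment is exactly uniform is the paper's contribution, not a property of this toy code:

```python
# Generic partial-rejection-style resampling for a CNF formula:
# resample exactly the variables appearing in violated clauses.
import random

def partial_rejection_sample(num_vars, clauses):
    """clauses: iterable of tuples of signed literals, e.g. (1, -2, 3)."""
    assign = {v: random.random() < 0.5 for v in range(1, num_vars + 1)}
    def violated(c):
        # A clause is violated iff every literal in it is falsified.
        return all(assign[abs(l)] != (l > 0) for l in c)
    while True:
        bad = [c for c in clauses if violated(c)]
        if not bad:
            return assign
        for v in {abs(l) for c in bad for l in c}:   # resample only these
            assign[v] = random.random() < 0.5

clauses = [(1, 2, 3), (-1, 2, -4), (3, -2, 4)]
print(partial_rejection_sample(4, clauses))
```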
|
Gupta, Anupam |
STOC '17: "Online and Dynamic Algorithms ..."
Online and Dynamic Algorithms for Set Cover
Anupam Gupta, Ravishankar Krishnaswamy, Amit Kumar, and Debmalya Panigrahi (Carnegie Mellon University, USA; Microsoft Research, India; IIT Delhi, India; Duke University, USA) In this paper, we give new results for the set cover problem in the fully dynamic model. In this model, the set of “active” elements to be covered changes over time. The goal is to maintain a near-optimal solution for the currently active elements, while making few changes in each timestep. This model is popular in both dynamic and online algorithms: in the former, the goal is to minimize the update time of the solution, while in the latter, the recourse (number of changes) is bounded. We present generic techniques for the dynamic set cover problem inspired by the classic greedy and primal-dual offline algorithms for set cover. The former leads to a competitive ratio of O(log n_t), where n_t is the number of currently active elements at timestep t, while the latter yields competitive ratios dependent on f_t, the maximum number of sets that a currently active element belongs to. We demonstrate that these techniques are useful for obtaining tight results in both settings: update time bounds and limited recourse, exhibiting algorithmic techniques common to these two parallel threads of research. @InProceedings{STOC17p537, author = {Anupam Gupta and Ravishankar Krishnaswamy and Amit Kumar and Debmalya Panigrahi}, title = {Online and Dynamic Algorithms for Set Cover}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {537--550}, doi = {}, year = {2017}, } |
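For reference, the classic offline greedy algorithm that the first dynamic technique is modeled on (an illustrative textbook sketch, giving the familiar ln n approximation):

```python
# Greedy set cover: repeatedly pick the set covering the most
# still-uncovered elements until everything is covered.
def greedy_set_cover(universe, sets):
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(sets, key=lambda name: len(sets[name] & uncovered))
        if not sets[best] & uncovered:
            raise ValueError("universe is not coverable")
        chosen.append(best)
        uncovered -= sets[best]
    return chosen

sets = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5, 6}, "d": {1, 6}}
print(greedy_set_cover({1, 2, 3, 4, 5, 6}, sets))  # ['a', 'c']
```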
|
Gurjar, Rohit |
STOC '17: "Linear Matroid Intersection ..."
Linear Matroid Intersection Is in Quasi-NC
Rohit Gurjar and Thomas Thierauf (Tel Aviv University, Israel; Aalen University, Germany) Given two matroids on the same ground set, the matroid intersection problem asks to find a common independent set of maximum size. We show that the linear matroid intersection problem is in quasi-NC^2. That is, it has uniform circuits of quasi-polynomial size n^{O(log n)}, and O(log^2 n) depth. This generalizes the similar result for the bipartite perfect matching problem. We do this by an almost complete derandomization of the Isolation lemma for matroid intersection. Our result also implies a blackbox singularity test for symbolic matrices of the form A_0 + A_1 z_1 + A_2 z_2 + ⋯ + A_m z_m, where A_0 is an arbitrary matrix and the matrices A_1, A_2, …, A_m are of rank 1 over some field. @InProceedings{STOC17p821, author = {Rohit Gurjar and Thomas Thierauf}, title = {Linear Matroid Intersection Is in Quasi-NC}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {821--830}, doi = {}, year = {2017}, } |
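The test being derandomized here is the standard randomized one: substitute random field elements for z_1,…,z_m and check whether the determinant vanishes; by Schwartz-Zippel, a nonsingular symbolic matrix stays nonsingular with high probability. A self-contained sketch over a large prime field (helper names are ours):

```python
# Randomized blackbox singularity test for A_0 + sum_i z_i * A_i.
import random

P = (1 << 61) - 1  # a large Mersenne prime, keeping the error tiny

def det_mod_p(M, p=P):
    """Determinant mod p by Gaussian elimination (M is modified)."""
    n, det = len(M), 1
    for i in range(n):
        pivot = next((r for r in range(i, n) if M[r][i] % p), None)
        if pivot is None:
            return 0
        if pivot != i:
            M[i], M[pivot] = M[pivot], M[i]
            det = -det
        det = det * M[i][i] % p
        inv = pow(M[i][i], p - 2, p)          # Fermat inverse mod p
        for r in range(i + 1, n):
            f = M[r][i] * inv % p
            M[r] = [(a - f * b) % p for a, b in zip(M[r], M[i])]
    return det % p

def probably_singular(A0, As):
    """Evaluate the symbolic matrix at a uniformly random point."""
    zs = [random.randrange(P) for _ in As]
    n = len(A0)
    M = [[(A0[r][c] + sum(z * A[r][c] for z, A in zip(zs, As))) % P
          for c in range(n)] for r in range(n)]
    return det_mod_p(M) == 0

A0 = [[1, 0], [0, 0]]
A1 = [[0, 0], [0, 1]]                 # rank one
print(probably_singular(A0, [A1]))    # False: det = z_1, nonzero whp
```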
|
Gurvits, Leonid |
STOC '17: "Algorithmic and Optimization ..."
Algorithmic and Optimization Aspects of Brascamp-Lieb Inequalities, via Operator Scaling
Ankit Garg, Leonid Gurvits, Rafael Oliveira, and Avi Wigderson (Microsoft Research, USA; City College of New York, USA; Princeton University, USA; IAS, USA) The celebrated Brascamp-Lieb (BL) inequalities [BL76, Lie90], and their reverse form of Barthe [Bar98], are an important mathematical tool, unifying and generalizing numerous inequalities in analysis, convex geometry and information theory, with many used in computer science. While their structural theory is very well understood, far less is known about computing their main parameters (which we define below). Prior to this work, the best known algorithms for any of these optimization tasks required at least exponential time. In this work, we give polynomial time algorithms to compute: (1) Feasibility of BL-datum, (2) Optimal BL-constant, (3) Weak separation oracle for BL-polytopes. What is particularly exciting about this progress, beyond the better understanding of BL-inequalities, is that the objects above naturally encode rich families of optimization problems which had no prior efficient algorithms. In particular, the BL-constants (which we efficiently compute) are solutions to non-convex optimization problems, and the BL-polytopes (for which we provide efficient membership and separation oracles) are linear programs with exponentially many facets. Thus we hope that new combinatorial optimization problems can be solved via reductions to the ones above, and make modest initial steps in exploring this possibility. Our algorithms are obtained by a simple efficient reduction of a given BL-datum to an instance of the Operator Scaling problem defined by [Gur04]. To obtain the results above, we utilize the two (very recent and different) algorithms for the operator scaling problem [GGOW16, IQS15a]. Our reduction implies algorithmic versions of many of the known structural results on BL-inequalities, and in some cases provides proofs that are different or simpler than existing ones. Further, the analytic properties of the [GGOW16] algorithm provide new, effective bounds on the magnitude and continuity of BL-constants, with applications to non-linear versions of BL-inequalities; prior work relied on compactness, and thus provided no bounds. On a higher level, our application of the operator scaling algorithm to BL-inequalities further connects analysis and optimization with the diverse mathematical areas used so far to motivate and solve the operator scaling problem, which include commutative invariant theory, non-commutative algebra, computational complexity and quantum information theory. @InProceedings{STOC17p397, author = {Ankit Garg and Leonid Gurvits and Rafael Oliveira and Avi Wigderson}, title = {Algorithmic and Optimization Aspects of Brascamp-Lieb Inequalities, via Operator Scaling}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {397--409}, doi = {}, year = {2017}, } |
|
Haeupler, Bernhard |
STOC '17: "Synchronization Strings: Codes ..."
Synchronization Strings: Codes for Insertions and Deletions Approaching the Singleton Bound
Bernhard Haeupler and Amirbehshad Shahrasbi (Carnegie Mellon University, USA) We introduce synchronization strings, which provide a novel way of efficiently dealing with synchronization errors, i.e., insertions and deletions. Synchronization errors are strictly more general and much harder to deal with than more commonly considered half-errors, i.e., symbol corruptions and erasures. For every є > 0, synchronization strings allow one to index a sequence with an є^{−O(1)}-size alphabet such that one can efficiently transform k synchronization errors into (1 + є)k half-errors. This powerful new technique has many applications. In this paper we focus on designing insdel codes, i.e., error correcting block codes (ECCs) for insertion-deletion channels. While ECCs for both half-errors and synchronization errors have been intensely studied, the latter have largely resisted progress. As Mitzenmacher puts it in his 2009 survey: “Channels with synchronization errors ... are simply not adequately understood by current theory. Given the near-complete knowledge we have for channels with erasures and errors ... our lack of understanding about channels with synchronization errors is truly remarkable.” Indeed, it took until 1999 for the first insdel codes with constant rate, constant distance, and constant alphabet size to be constructed, and only since 2016 are there constructions of constant-rate insdel codes for asymptotically large noise rates. Even in the asymptotically large or small noise regime these codes are polynomially far from the optimal rate-distance tradeoff. This makes the understanding of insdel codes up to this work equivalent to what was known for regular ECCs after Forney introduced concatenated codes in his doctoral thesis 50 years ago. A straightforward application of our synchronization-string-based indexing method gives a simple black-box construction which transforms any ECC into an equally efficient insdel code with only a small increase in the alphabet size. This instantly transfers much of the highly developed understanding for regular ECCs over large constant alphabets into the realm of insdel codes. Most notably, for the complete noise spectrum we obtain efficient “near-MDS” insdel codes which get arbitrarily close to the optimal rate-distance tradeoff given by the Singleton bound. In particular, for any δ ∈ (0,1) and є > 0 we give insdel codes achieving a rate of 1 − δ − є over a constant-size alphabet that efficiently correct a δ fraction of insertions or deletions. @InProceedings{STOC17p33, author = {Bernhard Haeupler and Amirbehshad Shahrasbi}, title = {Synchronization Strings: Codes for Insertions and Deletions Approaching the Singleton Bound}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {33--46}, doi = {}, year = {2017}, } |
|
HajiAghayi, MohammadTaghi |
STOC '17: "Beating 1-1/e for Ordered ..."
Beating 1-1/e for Ordered Prophets
Melika Abolhassani, Soheil Ehsani, Hossein Esfandiari, MohammadTaghi HajiAghayi, Robert Kleinberg, and Brendan Lucier (University of Maryland at College Park, USA; Cornell University, USA; Microsoft Research, USA) Hill and Kertz studied the prophet inequality on iid distributions [The Annals of Probability 1982]. They proved a theoretical bound of 1 − 1/e on the approximation factor of their algorithm. They conjectured that the best approximation factor for arbitrarily large n is 1/(1+1/e) ≃ 0.731. This conjecture had remained open for over 30 years prior to this paper. In this paper we present a threshold-based algorithm for the prophet inequality with n iid distributions. Using a nontrivial and novel approach we show that our algorithm is a 0.738-approximation algorithm. By beating the bound of 1/(1+1/e), this refutes the conjecture of Hill and Kertz. Moreover, we generalize our results to non-uniform distributions and discuss their applications in mechanism design. @InProceedings{STOC17p61, author = {Melika Abolhassani and Soheil Ehsani and Hossein Esfandiari and MohammadTaghi HajiAghayi and Robert Kleinberg and Brendan Lucier}, title = {Beating 1-1/e for Ordered Prophets}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {61--71}, doi = {}, year = {2017}, } |
|
Hartline, Jason D. |
STOC '17: "Bernoulli Factories and Black-Box ..."
Bernoulli Factories and Black-Box Reductions in Mechanism Design
Shaddin Dughmi, Jason D. Hartline, Robert Kleinberg, and Rad Niazadeh (University of Southern California, USA; Northwestern University, USA; Cornell University, USA) We provide a polynomial-time reduction from Bayesian incentive-compatible mechanism design to Bayesian algorithm design for welfare maximization problems. Unlike prior results, our reduction achieves exact incentive compatibility for problems with multi-dimensional and continuous type spaces. The key technical barrier preventing exact incentive compatibility in prior black-box reductions is that repairing violations of incentive constraints requires understanding the distribution of the mechanism’s output, which is typically #P-hard to compute. Reductions that instead estimate the output distribution by sampling inevitably suffer from sampling error, which typically precludes exact incentive compatibility. We overcome this barrier by employing and generalizing the computational model in the literature on “Bernoulli Factories”. In a Bernoulli factory problem, one is given a function mapping the bias of an “input coin” to that of an “output coin”, and the challenge is to efficiently simulate the output coin given only sample access to the input coin. Consider a generalization which we call the “expectations from samples” computational model, in which a problem instance is specified by a function mapping the expected values of a set of input distributions to a distribution over outcomes. The challenge is to give a polynomial time algorithm that exactly samples from the distribution over outcomes given only sample access to the input distributions. In this model we give a polynomial time algorithm for the function given by “exponential weights”: expected values of the input distributions correspond to the weights of alternatives and we wish to select an alternative with probability proportional to its weight. This algorithm is the key ingredient in designing an incentive compatible mechanism for bipartite matching, which can be used to make the approximately incentive compatible reduction of Hartline-Malekian-Kleinberg [2015] exactly incentive compatible. @InProceedings{STOC17p158, author = {Shaddin Dughmi and Jason D. Hartline and Robert Kleinberg and Rad Niazadeh}, title = {Bernoulli Factories and Black-Box Reductions in Mechanism Design}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {158--169}, doi = {}, year = {2017}, } |
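The simplest member of the "expectations from samples" family is a Bernoulli race: with only sample access to coins of unknown biases p_i, it outputs alternative i with probability exactly p_i / Σ_j p_j, with no estimation and hence no sampling error. The sketch below shows only this basic primitive; the paper needs the harder exponential-weights variant:

```python
# Bernoulli race: pick a candidate uniformly, flip its coin once,
# output it on heads, otherwise restart. One round outputs i with
# probability p_i / n, so conditioned on stopping the output is i
# with probability exactly p_i / sum_j p_j.
import random

def bernoulli_race(coins):
    """coins: list of zero-argument functions returning True w.p. p_i."""
    n = len(coins)
    while True:
        i = random.randrange(n)
        if coins[i]():
            return i

coins = [lambda: random.random() < 0.1, lambda: random.random() < 0.3]
counts = [0, 0]
for _ in range(100_000):
    counts[bernoulli_race(coins)] += 1
print(counts)   # roughly 1:3, i.e. proportional to the biases
```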
|
Hazan, Elad |
STOC '17: "Finding Approximate Local ..."
Finding Approximate Local Minima Faster than Gradient Descent
Naman Agarwal, Zeyuan Allen-Zhu, Brian Bullins, Elad Hazan, and Tengyu Ma (Princeton University, USA; IAS, USA) We design a non-convex second-order optimization algorithm that is guaranteed to return an approximate local minimum in time which scales linearly in the underlying dimension and the number of training examples. The time complexity of our algorithm to find an approximate local minimum is even faster than that of gradient descent to find a critical point. Our algorithm applies to a general class of optimization problems including training a neural network and other non-convex objectives arising in machine learning. @InProceedings{STOC17p1195, author = {Naman Agarwal and Zeyuan Allen-Zhu and Brian Bullins and Elad Hazan and Tengyu Ma}, title = {Finding Approximate Local Minima Faster than Gradient Descent}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1195--1199}, doi = {}, year = {2017}, } |
|
Holmgren, Justin |
STOC '17: "Non-interactive Delegation ..."
Non-interactive Delegation and Batch NP Verification from Standard Computational Assumptions
Zvika Brakerski, Justin Holmgren, and Yael Kalai (Weizmann Institute of Science, Israel; Massachusetts Institute of Technology, USA; Microsoft Research, USA) We present an adaptive and non-interactive protocol for verifying arbitrary efficient computations in fixed polynomial time. Our protocol is computationally sound and can be based on any computational PIR scheme, which in turn can be based on standard polynomial-time cryptographic assumptions (e.g. the worst case hardness of polynomial-factor approximation of short-vector lattice problems). In our protocol, the verifier sets up a public key ahead of time, and this key can be used by any prover to prove arbitrary statements by simply sending a proof to the verifier. Verification is done using a secret verification key, and soundness relies on this key not being known to the prover. Our protocol further allows proving statements about computations of arbitrary RAM machines. Previous works either relied on knowledge assumptions, or could only offer non-adaptive two-message protocols (where the first message could not be re-used), and required either obfuscation-based assumptions or super-polynomial hardness assumptions. We show that our techniques can also be applied to construct a new type of (non-adaptive) 2-message argument for batch NP-statements. Specifically, we can simultaneously prove (with computational soundness) the membership of multiple instances in a given NP language, with communication complexity proportional to the length of a single witness. @InProceedings{STOC17p474, author = {Zvika Brakerski and Justin Holmgren and Yael Kalai}, title = {Non-interactive Delegation and Batch NP Verification from Standard Computational Assumptions}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {474--482}, doi = {}, year = {2017}, } |
|
Holroyd, Alexander E. |
STOC '17: "Stability of Service under ..."
Stability of Service under Time-of-Use Pricing
Shuchi Chawla, Nikhil R. Devanur, Alexander E. Holroyd, Anna R. Karlin, James B. Martin, and Balasubramanian Sivan (University of Wisconsin-Madison, USA; Microsoft Research, USA; University of Washington, USA; University of Oxford, UK; Google Research, USA) We consider time-of-use pricing as a technique for matching supply and demand of temporal resources with the goal of maximizing social welfare. Relevant examples include energy, computing resources on a cloud computing platform, and charging stations for electric vehicles, among many others. A client/job in this setting has a window of time during which he needs service, and a particular value for obtaining it. We assume a stochastic model for demand, where each job materializes with some probability via an independent Bernoulli trial. Given a per-time-unit pricing of resources, any realized job will first try to get served by the cheapest available resource in its window and, failing that, will try to find service at the next cheapest available resource, and so on. Thus, the natural stochastic fluctuations in demand have the potential to lead to cascading overload events. Our main result shows that setting prices so as to optimally handle the expected demand works well: with high probability, when the actual demand is instantiated, the system is stable and the expected value of the jobs served is very close to that of the optimal offline algorithm. @InProceedings{STOC17p184, author = {Shuchi Chawla and Nikhil R. Devanur and Alexander E. Holroyd and Anna R. Karlin and James B. Martin and Balasubramanian Sivan}, title = {Stability of Service under Time-of-Use Pricing}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {184--197}, doi = {}, year = {2017}, } |
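The demand model and the greedy client behavior described here are easy to simulate; a minimal sketch, where the prices, windows, values, and probabilities are made-up assumptions rather than the paper's instances:

    import random

    def simulate(prices, jobs, seed=0):
        # prices[t]: per-unit price of time slot t; each slot serves one job.
        # jobs: list of (window, value, prob); window lists usable slot indices.
        random.seed(seed)
        free = set(range(len(prices)))
        welfare = 0.0
        for window, value, prob in jobs:
            if random.random() >= prob:
                continue  # this job did not materialize
            # A realized job tries slots in its window from cheapest to dearest.
            for t in sorted(window, key=lambda t: prices[t]):
                if t in free:
                    free.remove(t)
                    welfare += value
                    break
        return welfare

    prices = [1.0, 2.0, 3.0, 2.5]
    jobs = [([0, 1], 5.0, 0.9), ([1, 2], 4.0, 0.8), ([0, 3], 3.0, 0.5)]
    print(simulate(prices, jobs))

Averaging over many random seeds approximates the expected welfare that the paper compares against the optimal offline benchmark.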
|
Hoza, William M. |
STOC '17: "Targeted Pseudorandom Generators, ..."
Targeted Pseudorandom Generators, Simulation Advice Generators, and Derandomizing Logspace
William M. Hoza and Chris Umans (University of Texas at Austin, USA; California Institute of Technology, USA) Assume that for every derandomization result for logspace algorithms, there is a pseudorandom generator strong enough to nearly recover the derandomization by iterating over all seeds and taking a majority vote. We prove under a precise version of this assumption that BPL ⊆ ∩_{α > 0} DSPACE(log^{1+α} n). We strengthen the theorem to an equivalence by considering two generalizations of the concept of a pseudorandom generator against logspace. A targeted pseudorandom generator against logspace takes as input a short uniform random seed and a finite automaton; it outputs a long bitstring that looks random to that particular automaton. A simulation advice generator for logspace stretches a small uniform random seed into a long advice string; the requirement is that there is some logspace algorithm that, given a finite automaton and this advice string, simulates the automaton reading a long uniform random input. We prove that ∩_{α > 0} prBPSPACE(log^{1+α} n) = ∩_{α > 0} prDSPACE(log^{1+α} n) if and only if for every targeted pseudorandom generator against logspace, there is a simulation advice generator for logspace with similar parameters. Finally, we observe that in a certain uniform setting (namely, if we only worry about sequences of automata that can be generated in logspace), targeted pseudorandom generators against logspace can be transformed into simulation advice generators with similar parameters. @InProceedings{STOC17p629, author = {William M. Hoza and Chris Umans}, title = {Targeted Pseudorandom Generators, Simulation Advice Generators, and Derandomizing Logspace}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {629--640}, doi = {}, year = {2017}, } |
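The opening assumption, recovering a derandomization from a generator by enumerating all seeds and taking a majority vote, can be illustrated in a few lines. A sketch with toy stand-ins (the "generator" and the algorithm below are illustrative assumptions, not objects from the paper):

    from itertools import product

    def derandomize(randomized_alg, prg, seed_bits, x):
        # Run the algorithm on every pseudorandom string and take the
        # majority vote, eliminating the need for true randomness.
        votes = sum(randomized_alg(x, prg(bits))
                    for bits in product([0, 1], repeat=seed_bits))
        return 2 * votes > 2 ** seed_bits

    # Toy stand-ins: a "generator" that pads its seed, and an algorithm
    # that errs exactly when its first two random bits are both 1.
    prg = lambda bits: list(bits) + [0] * 8
    alg = lambda x, r: (x + r[0] * r[1]) % 2 == 0
    print(derandomize(alg, prg, seed_bits=3, x=4))  # True: 6 of 8 seeds agree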
|
Im, Sungjin |
STOC '17: "Efficient Massively Parallel ..."
Efficient Massively Parallel Methods for Dynamic Programming
Sungjin Im, Benjamin Moseley, and Xiaorui Sun (University of California at Merced, USA; Washington University at St. Louis, USA; Simons Institute for the Theory of Computing Berkeley, USA) Modern science and engineering are driven by massive data sets, and their advance relies heavily on massively parallel computing platforms such as Spark, MapReduce, and Hadoop. Theoretical models have been proposed to understand the power and limitations of such platforms. Recent study of these models has led to the discovery of new algorithms that are fast and efficient in both theory and practice, thereby beginning to unlock their underlying power. Given these promising results, the area has turned its focus to discovering widely applicable algorithmic techniques for solving problems efficiently. In this paper we make progress towards this goal by giving a principled framework for simulating sequential dynamic programs in the distributed setting. In particular, we identify two key properties, monotonicity and decomposability, which allow us to derive efficient distributed algorithms for problems possessing these properties. We showcase our framework on several core dynamic programming applications: Longest Increasing Subsequence, Optimal Binary Search Tree, and Weighted Interval Selection. For these problems, we derive algorithms yielding solutions that are arbitrarily close to the optimum, using O(1) rounds and Õ(n/m) memory on each machine, where n is the input size and m is the number of machines available. @InProceedings{STOC17p798, author = {Sungjin Im and Benjamin Moseley and Xiaorui Sun}, title = {Efficient Massively Parallel Methods for Dynamic Programming}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {798--811}, doi = {}, year = {2017}, } |
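Weighted Interval Selection, one of the showcased applications, is driven by a classic sequential dynamic program; the textbook version below is the kind of DP the framework simulates in O(1) distributed rounds (the distributed simulation itself is the paper's contribution and is not shown).

    import bisect

    def weighted_interval_selection(intervals):
        # intervals: (start, finish, weight) triples; choose non-overlapping
        # intervals of maximum total weight with the classic O(n log n) DP.
        intervals = sorted(intervals, key=lambda iv: iv[1])
        finishes = [iv[1] for iv in intervals]
        dp = [0.0] * (len(intervals) + 1)
        for i, (s, f, w) in enumerate(intervals, start=1):
            # p = number of earlier intervals finishing by time s.
            p = bisect.bisect_right(finishes, s, 0, i - 1)
            dp[i] = max(dp[i - 1], dp[p] + w)
        return dp[-1]

    print(weighted_interval_selection(
        [(0, 3, 4.0), (2, 5, 2.0), (4, 7, 4.0), (1, 8, 7.5)]))  # 8.0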
|
Italiano, Giuseppe F. |
STOC '17: "Decremental Single-Source ..."
Decremental Single-Source Reachability in Planar Digraphs
Giuseppe F. Italiano, Adam Karczmarz, Jakub Łącki, and Piotr Sankowski (University of Rome Tor Vergata, Italy; University of Warsaw, Poland; Google Research, USA) In this paper we show a new algorithm for the decremental single-source reachability problem in directed planar graphs. It processes any sequence of edge deletions in O(n log^2 n log log n) total time and explicitly maintains the set of vertices reachable from a fixed source vertex. Hence, if all edges are eventually deleted, the amortized time of processing each edge deletion is only O(log^2 n log log n), which improves upon a previously known O(√n) solution. We also show an algorithm for decremental maintenance of strongly connected components in directed planar graphs with the same total update time. These results constitute the first almost optimal (up to polylogarithmic factors) algorithms for both problems. To the best of our knowledge, these are the first dynamic algorithms with polylogarithmic update times on general directed planar graphs for non-trivial reachability-type problems, for which only polynomial bounds are known in general graphs. @InProceedings{STOC17p1108, author = {Giuseppe F. Italiano and Adam Karczmarz and Jakub Łącki and Piotr Sankowski}, title = {Decremental Single-Source Reachability in Planar Digraphs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1108--1121}, doi = {}, year = {2017}, } |
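For scale, the naive baseline that the O(log^2 n log log n) amortized bound should be measured against simply recomputes reachability from scratch after every deletion, at O(n + m) per update. A minimal sketch of that baseline (adjacency-list representation assumed):

    from collections import deque

    def reachable(adj, source):
        # One BFS per edge deletion: the naive O(n + m)-per-update baseline.
        seen = {source}
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v in adj.get(u, []):
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        return seen

    adj = {0: [1, 2], 1: [3], 2: [3]}
    print(reachable(adj, 0))  # {0, 1, 2, 3}
    adj[1].remove(3)          # delete edge (1, 3)
    print(reachable(adj, 0))  # still {0, 1, 2, 3}, now via 2 -> 3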
|
Iwata, Satoru |
STOC '17: "A Weighted Linear Matroid ..."
A Weighted Linear Matroid Parity Algorithm
Satoru Iwata and Yusuke Kobayashi (University of Tokyo, Japan; University of Tsukuba, Japan) The matroid parity (or matroid matching) problem, introduced as a common generalization of matching and matroid intersection problems, is so general that it requires an exponential number of oracle calls. Lovász (1980) showed that this problem admits a min-max formula and a polynomial algorithm for linearly represented matroids. Since then, efficient algorithms have been developed for the linear matroid parity problem. In this paper, we present a combinatorial, deterministic, strongly polynomial algorithm for the weighted linear matroid parity problem. The algorithm builds on a polynomial matrix formulation using the Pfaffian and adopts a primal-dual approach with the aid of the augmenting path algorithm of Gabow and Stallmann (1986) for the unweighted problem. @InProceedings{STOC17p264, author = {Satoru Iwata and Yusuke Kobayashi}, title = {A Weighted Linear Matroid Parity Algorithm}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {264--276}, doi = {}, year = {2017}, } |
|
Jain, Sanjay |
STOC '17: "Deciding Parity Games in Quasipolynomial ..."
Deciding Parity Games in Quasipolynomial Time
Cristian S. Calude, Sanjay Jain, Bakhadyr Khoussainov, Wei Li, and Frank Stephan (University of Auckland, New Zealand; National University of Singapore, Singapore) It is shown that the parity game can be solved in quasipolynomial time. The parameterised parity game – with n nodes and m distinct values (aka colours or priorities) – is proven to be in the class of fixed parameter tractable (FPT) problems when parameterised over m. Both results improve known bounds, from runtime n^{O(√n)} to O(n^{log(m)+6}) and from an XP-algorithm with runtime O(n^{Θ(m)}) for fixed parameter m to an FPT-algorithm with runtime O(n^5) + g(m), for some function g depending on m only. As an application it is proven that coloured Muller games with n nodes and m colours can be decided in time O((m^m · n)^5); it is also shown that this bound cannot be improved to O((2^m · n)^c), for any c, unless FPT = W[1]. @InProceedings{STOC17p252, author = {Cristian S. Calude and Sanjay Jain and Bakhadyr Khoussainov and Wei Li and Frank Stephan}, title = {Deciding Parity Games in Quasipolynomial Time}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {252--263}, doi = {}, year = {2017}, } |
|
Jerrum, Mark |
STOC '17: "Uniform Sampling through the ..."
Uniform Sampling through the Lovász Local Lemma
Heng Guo, Mark Jerrum, and Jingcheng Liu (Queen Mary University of London, UK; University of California at Berkeley, USA) We propose a new algorithmic framework, called “partial rejection sampling”, to draw samples exactly from a product distribution, conditioned on none of a number of bad events occurring. Our framework builds (perhaps surprising) new connections between the variable framework of the Lovász Local Lemma and some classical sampling algorithms such as the “cycle-popping” algorithm for rooted spanning trees by Wilson. Among other applications, we discover new algorithms to sample satisfying assignments of k-CNF formulas with bounded variable occurrences. @InProceedings{STOC17p342, author = {Heng Guo and Mark Jerrum and Jingcheng Liu}, title = {Uniform Sampling through the Lovász Local Lemma}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {342--355}, doi = {}, year = {2017}, } |
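A minimal sketch of the resampling idea for k-CNF, in the Moser–Tardos style that partial rejection sampling builds on (the variable-occurrence bounds and the proof that the output has exactly the conditional product distribution are the paper's contribution and are not captured here):

    import random

    def partial_rejection_ksat(n, clauses, seed=1):
        # clauses: lists of nonzero signed ints; +i (-i) asks variable i
        # (1-indexed) to be True (False). A clause is violated iff every
        # literal is falsified; we then resample just that clause's variables
        # instead of restarting the whole assignment.
        random.seed(seed)
        assign = [None] + [random.random() < 0.5 for _ in range(n)]
        def violated(clause):
            return all(assign[abs(l)] == (l < 0) for l in clause)
        while True:
            bad = [c for c in clauses if violated(c)]
            if not bad:
                return assign[1:]
            for l in random.choice(bad):
                assign[abs(l)] = random.random() < 0.5

    # (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
    print(partial_rejection_ksat(3, [[1, 2], [-1, 3], [-2, -3]]))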
|
Ji, Zhengfeng |
STOC '17: "Compression of Quantum Multi-prover ..."
Compression of Quantum Multi-prover Interactive Proofs
Zhengfeng Ji (University of Technology Sydney, Australia) We present a protocol that transforms any quantum multi-prover interactive proof into a nonlocal game in which questions consist of a logarithmic number of bits and answers of a constant number of bits. As a corollary, it follows that the promise problem corresponding to the approximation of the nonlocal value to inverse polynomial accuracy is complete for QMIP*, and therefore NEXP-hard. This establishes that nonlocal games are provably harder than classical games without any complexity theory assumptions. Our result also indicates that gap amplification for nonlocal games may be impossible in general and provides negative evidence for the feasibility of the gap amplification approach to the multi-prover variant of the quantum PCP conjecture. @InProceedings{STOC17p289, author = {Zhengfeng Ji}, title = {Compression of Quantum Multi-prover Interactive Proofs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {289--302}, doi = {}, year = {2017}, } |
|
Joglekar, Pushkar S |
STOC '17: "Randomized Polynomial Time ..."
Randomized Polynomial Time Identity Testing for Noncommutative Circuits
V. Arvind, Pushkar S Joglekar, Partha Mukhopadhyay, and S. Raja (Institute of Mathematical Sciences, India; Vishwakarma Institute of Technology Pune, India; Chennai Mathematical Institute, India) In this paper we show that black-box polynomial identity testing for noncommutative polynomials f ∈ F⟨z_1, z_2, ⋯, z_n⟩ of degree D and sparsity t can be done in randomized poly(n, log t, log D) time. As a consequence, given a circuit C of size s computing a polynomial f ∈ F⟨z_1, z_2, ⋯, z_n⟩ with at most t non-zero monomials, testing whether f is identically zero can be done by a randomized algorithm with running time polynomial in s, n, and log t. This makes significant progress on a question that has been open for over ten years. Our algorithm is based on automata-theoretic ideas that can efficiently isolate a monomial in the given polynomial. In particular, we carry out the monomial isolation using nondeterministic automata. In general, noncommutative circuits of size s can compute polynomials of degree exponential in s and number of monomials double-exponential in s. In this paper, we consider a natural class of homogeneous noncommutative circuits, which we call +-regular circuits, and give a white-box polynomial-time deterministic polynomial identity test. These circuits can compute noncommutative polynomials with a number of monomials double-exponential in the circuit size. Our algorithm combines some new structural results for +-regular circuits with known results for noncommutative ABP identity testing, the rank bound of commutative depth three identities, and the equivalence testing problem for words. Finally, we consider the black-box identity testing problem for depth three +-regular circuits and give a randomized polynomial time identity test. In particular, we show that if f ∈ F⟨Z⟩ is a nonzero noncommutative polynomial computed by a depth three +-regular circuit of size s, then f cannot be a polynomial identity for the matrix algebra M_s(F) when F is sufficiently large depending on the degree of f. @InProceedings{STOC17p831, author = {V. Arvind and Pushkar S Joglekar and Partha Mukhopadhyay and S. Raja}, title = {Randomized Polynomial Time Identity Testing for Noncommutative Circuits}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {831--841}, doi = {}, year = {2017}, } |
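One concrete way to realize the randomness in such a test is to substitute random matrices for the noncommuting variables; a sketch (the sparse-dictionary representation and the matrix dimension below are illustrative assumptions, not the paper's automata-based construction):

    import numpy as np

    def eval_nc_poly(poly, mats):
        # poly: dict mapping a monomial (tuple of variable indices, read left
        # to right) to its coefficient. Matrix products keep the order, so
        # noncommutativity is respected.
        d = mats[0].shape[0]
        total = np.zeros((d, d))
        for mono, coef in poly.items():
            term = np.eye(d)
            for var in mono:
                term = term @ mats[var]
            total += coef * term
        return total

    def probably_zero(poly, nvars, dim=4, trials=5, seed=0):
        rng = np.random.default_rng(seed)
        for _ in range(trials):
            mats = [rng.integers(-10, 11, (dim, dim)).astype(float)
                    for _ in range(nvars)]
            if not np.allclose(eval_nc_poly(poly, mats), 0):
                return False  # witnessed a nonzero evaluation
        return True

    # z0*z1 - z1*z0 is nonzero precisely because the variables do not commute.
    print(probably_zero({(0, 1): 1.0, (1, 0): -1.0}, nvars=2))  # False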
|
Kabanets, Valentine |
STOC '17: "A Polynomial Restriction Lemma ..."
A Polynomial Restriction Lemma with Applications
Valentine Kabanets, Daniel M. Kane, and Zhenjian Lu (Simon Fraser University, Canada; University of California at San Diego, USA) A polynomial threshold function (PTF) of degree d is a boolean function of the form f = sgn(p), where p is a degree-d polynomial, and sgn is the sign function. The main result of the paper is an almost optimal bound on the probability that a random restriction of a PTF is not close to a constant function, where a boolean function g is called δ-close to constant if, for some v ∈ {1, −1}, we have g(x) = v for all but at most a δ fraction of inputs. We show for every PTF f of degree d ≥ 1, and parameters 0 < δ, r ≤ 1/16, that Pr_{ρ ∼ R_r}[f_ρ is not δ-close to constant] ≤ √r · (log r^{−1} · log δ^{−1})^{O(d^2)}, where ρ ∼ R_r is a random restriction leaving each variable, independently, free with probability r, and otherwise assigning it 1 or −1 uniformly at random. In fact, we show a more general result for random block restrictions: given an arbitrary partitioning of input variables into m blocks, a random block restriction picks a uniformly random block ℓ ∈ [m] and assigns 1 or −1, uniformly at random, to all variables outside the chosen block ℓ. We prove the Block Restriction Lemma saying that a PTF f of degree d becomes δ-close to constant when hit with a random block restriction, except with probability at most m^{−1/2} · (log m · log δ^{−1})^{O(d^2)}. As an application of our Restriction Lemma, we prove lower bounds against constant-depth circuits with PTF gates of any degree 1 ≤ d ≪ √(log n / log log n), generalizing the recent bounds against constant-depth circuits with linear threshold gates (LTF gates) proved by Kane and Williams (STOC, 2016) and Chen, Santhanam, and Srinivasan (CCC, 2016). In particular, we show that there is an n-variate boolean function F_n ∈ P such that every depth-2 circuit with PTF gates of degree d ≥ 1 that computes F_n must have at least n^{3/2+1/d} · (log n)^{−O(d^2)} wires. For constant depths greater than 2, we also show average-case lower bounds for such circuits with a super-linear number of wires. These are the first super-linear bounds on the number of wires for circuits with PTF gates. We also give short proofs of the optimal-exponent average sensitivity bound for degree-d PTFs due to Kane (Computational Complexity, 2014), and the Littlewood-Offord type anticoncentration bound for degree-d multilinear polynomials due to Meka, Nguyen, and Vu (Theory of Computing, 2016). Finally, we give derandomized versions of our Block Restriction Lemma and Littlewood-Offord type anticoncentration bounds, using a pseudorandom generator for PTFs due to Meka and Zuckerman (SICOMP, 2013). @InProceedings{STOC17p615, author = {Valentine Kabanets and Daniel M. Kane and Zhenjian Lu}, title = {A Polynomial Restriction Lemma with Applications}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {615--628}, doi = {}, year = {2017}, } |
|
Kalai, Yael |
STOC '17: "Non-interactive Delegation ..."
Non-interactive Delegation and Batch NP Verification from Standard Computational Assumptions
Zvika Brakerski, Justin Holmgren, and Yael Kalai (Weizmann Institute of Science, Israel; Massachusetts Institute of Technology, USA; Microsoft Research, USA) We present an adaptive and non-interactive protocol for verifying arbitrary efficient computations in fixed polynomial time. Our protocol is computationally sound and can be based on any computational PIR scheme, which in turn can be based on standard polynomial-time cryptographic assumptions (e.g., the worst-case hardness of polynomial-factor approximation of short-vector lattice problems). In our protocol, the verifier sets up a public key ahead of time, and this key can be used by any prover to prove arbitrary statements by simply sending a proof to the verifier. Verification is done using a secret verification key, and soundness relies on this key not being known to the prover. Our protocol further allows proving statements about computations of arbitrary RAM machines. Previous works either relied on knowledge assumptions, or could only offer non-adaptive two-message protocols (where the first message could not be re-used), and required either obfuscation-based assumptions or super-polynomial hardness assumptions. We show that our techniques can also be applied to construct a new type of (non-adaptive) 2-message argument for batch NP-statements. Specifically, we can simultaneously prove (with computational soundness) the membership of multiple instances in a given NP language, with communication complexity proportional to the length of a single witness. @InProceedings{STOC17p474, author = {Zvika Brakerski and Justin Holmgren and Yael Kalai}, title = {Non-interactive Delegation and Batch NP Verification from Standard Computational Assumptions}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {474--482}, doi = {}, year = {2017}, } |
|
Kane, Daniel M. |
STOC '17: "A Polynomial Restriction Lemma ..."
A Polynomial Restriction Lemma with Applications
Valentine Kabanets, Daniel M. Kane, and Zhenjian Lu (Simon Fraser University, Canada; University of California at San Diego, USA) A polynomial threshold function (PTF) of degree d is a boolean function of the form f = sgn(p), where p is a degree-d polynomial, and sgn is the sign function. The main result of the paper is an almost optimal bound on the probability that a random restriction of a PTF is not close to a constant function, where a boolean function g is called δ-close to constant if, for some v ∈ {1, −1}, we have g(x) = v for all but at most a δ fraction of inputs. We show for every PTF f of degree d ≥ 1, and parameters 0 < δ, r ≤ 1/16, that Pr_{ρ ∼ R_r}[f_ρ is not δ-close to constant] ≤ √r · (log r^{−1} · log δ^{−1})^{O(d^2)}, where ρ ∼ R_r is a random restriction leaving each variable, independently, free with probability r, and otherwise assigning it 1 or −1 uniformly at random. In fact, we show a more general result for random block restrictions: given an arbitrary partitioning of input variables into m blocks, a random block restriction picks a uniformly random block ℓ ∈ [m] and assigns 1 or −1, uniformly at random, to all variables outside the chosen block ℓ. We prove the Block Restriction Lemma saying that a PTF f of degree d becomes δ-close to constant when hit with a random block restriction, except with probability at most m^{−1/2} · (log m · log δ^{−1})^{O(d^2)}. As an application of our Restriction Lemma, we prove lower bounds against constant-depth circuits with PTF gates of any degree 1 ≤ d ≪ √(log n / log log n), generalizing the recent bounds against constant-depth circuits with linear threshold gates (LTF gates) proved by Kane and Williams (STOC, 2016) and Chen, Santhanam, and Srinivasan (CCC, 2016). In particular, we show that there is an n-variate boolean function F_n ∈ P such that every depth-2 circuit with PTF gates of degree d ≥ 1 that computes F_n must have at least n^{3/2+1/d} · (log n)^{−O(d^2)} wires. For constant depths greater than 2, we also show average-case lower bounds for such circuits with a super-linear number of wires. These are the first super-linear bounds on the number of wires for circuits with PTF gates. We also give short proofs of the optimal-exponent average sensitivity bound for degree-d PTFs due to Kane (Computational Complexity, 2014), and the Littlewood-Offord type anticoncentration bound for degree-d multilinear polynomials due to Meka, Nguyen, and Vu (Theory of Computing, 2016). Finally, we give derandomized versions of our Block Restriction Lemma and Littlewood-Offord type anticoncentration bounds, using a pseudorandom generator for PTFs due to Meka and Zuckerman (SICOMP, 2013). @InProceedings{STOC17p615, author = {Valentine Kabanets and Daniel M. Kane and Zhenjian Lu}, title = {A Polynomial Restriction Lemma with Applications}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {615--628}, doi = {}, year = {2017}, } |
|
Kapralov, Michael |
STOC '17: "An Adaptive Sublinear-Time ..."
An Adaptive Sublinear-Time Block Sparse Fourier Transform
Volkan Cevher, Michael Kapralov, Jonathan Scarlett, and Amir Zandieh (EPFL, Switzerland) The problem of approximately computing the k dominant Fourier coefficients of a vector X quickly, and using few samples in time domain, is known as the Sparse Fourier Transform (sparse FFT) problem. A long line of work on the sparse FFT has resulted in algorithms with O(k log n log(n/k)) runtime [Hassanieh et al., STOC’12] and O(k log n) sample complexity [Indyk et al., FOCS’14]. This paper revisits the sparse FFT problem with the added twist that the sparse coefficients approximately obey a (k_0, k_1)-block sparse model. In this model, signal frequencies are clustered in k_0 intervals with width k_1 in Fourier space, and k = k_0 k_1 is the total sparsity. Our main result is the first sparse FFT algorithm for (k_0, k_1)-block sparse signals with a sample complexity of O*(k_0 k_1 + k_0 log(1 + k_0) log n) at constant signal-to-noise ratios, and sublinear runtime. Our algorithm crucially uses adaptivity to achieve the improved sample complexity bound, and we provide a lower bound showing that this is essential in the Fourier setting: any non-adaptive algorithm must use Ω(k_0 k_1 log(n/(k_0 k_1))) samples for the (k_0, k_1)-block sparse model, ruling out improvements over the vanilla sparsity assumption. Our main technical innovation for adaptivity is a new randomized energy-based importance sampling technique that may be of independent interest. @InProceedings{STOC17p702, author = {Volkan Cevher and Michael Kapralov and Jonathan Scarlett and Amir Zandieh}, title = {An Adaptive Sublinear-Time Block Sparse Fourier Transform}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {702--715}, doi = {}, year = {2017}, } |
|
Karczmarz, Adam |
STOC '17: "Decremental Single-Source ..."
Decremental Single-Source Reachability in Planar Digraphs
Giuseppe F. Italiano, Adam Karczmarz, Jakub Łącki, and Piotr Sankowski (University of Rome Tor Vergata, Italy; University of Warsaw, Poland; Google Research, USA) In this paper we show a new algorithm for the decremental single-source reachability problem in directed planar graphs. It processes any sequence of edge deletions in O(n log^2 n log log n) total time and explicitly maintains the set of vertices reachable from a fixed source vertex. Hence, if all edges are eventually deleted, the amortized time of processing each edge deletion is only O(log^2 n log log n), which improves upon a previously known O(√n) solution. We also show an algorithm for decremental maintenance of strongly connected components in directed planar graphs with the same total update time. These results constitute the first almost optimal (up to polylogarithmic factors) algorithms for both problems. To the best of our knowledge, these are the first dynamic algorithms with polylogarithmic update times on general directed planar graphs for non-trivial reachability-type problems, for which only polynomial bounds are known in general graphs. @InProceedings{STOC17p1108, author = {Giuseppe F. Italiano and Adam Karczmarz and Jakub Łącki and Piotr Sankowski}, title = {Decremental Single-Source Reachability in Planar Digraphs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1108--1121}, doi = {}, year = {2017}, } |
|
Karlin, Anna R. |
STOC '17: "Stability of Service under ..."
Stability of Service under Time-of-Use Pricing
Shuchi Chawla, Nikhil R. Devanur, Alexander E. Holroyd, Anna R. Karlin, James B. Martin, and Balasubramanian Sivan (University of Wisconsin-Madison, USA; Microsoft Research, USA; University of Washington, USA; University of Oxford, UK; Google Research, USA) We consider time-of-use pricing as a technique for matching supply and demand of temporal resources with the goal of maximizing social welfare. Relevant examples include energy, computing resources on a cloud computing platform, and charging stations for electric vehicles, among many others. A client/job in this setting has a window of time during which he needs service, and a particular value for obtaining it. We assume a stochastic model for demand, where each job materializes with some probability via an independent Bernoulli trial. Given a per-time-unit pricing of resources, any realized job will first try to get served by the cheapest available resource in its window and, failing that, will try to find service at the next cheapest available resource, and so on. Thus, the natural stochastic fluctuations in demand have the potential to lead to cascading overload events. Our main result shows that setting prices so as to optimally handle the expected demand works well: with high probability, when the actual demand is instantiated, the system is stable and the expected value of the jobs served is very close to that of the optimal offline algorithm. @InProceedings{STOC17p184, author = {Shuchi Chawla and Nikhil R. Devanur and Alexander E. Holroyd and Anna R. Karlin and James B. Martin and Balasubramanian Sivan}, title = {Stability of Service under Time-of-Use Pricing}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {184--197}, doi = {}, year = {2017}, } |
|
Kelner, Jonathan |
STOC '17: "Almost-Linear-Time Algorithms ..."
Almost-Linear-Time Algorithms for Markov Chains and New Spectral Primitives for Directed Graphs
Michael B. Cohen, Jonathan Kelner, John Peebles, Richard Peng, Anup B. Rao, Aaron Sidford, and Adrian Vladu (Massachusetts Institute of Technology, USA; Georgia Institute of Technology, USA; Stanford University, USA) In this paper, we begin to address the longstanding algorithmic gap between general and reversible Markov chains. We develop directed analogues of several spectral graph-theoretic tools that had previously been available only in the undirected setting, and for which it was not clear that directed versions even existed. In particular, we provide a notion of approximation for directed graphs, prove sparsifiers under this notion always exist, and show how to construct them in almost linear time. Using this notion of approximation, we design the first almost-linear-time directed Laplacian system solver, and, by leveraging the recent framework of [Cohen-Kelner-Peebles-Peng-Sidford-Vladu, FOCS’16], we also obtain almost-linear-time algorithms for computing the stationary distribution of a Markov chain, computing expected commute times in a directed graph, and more. For each problem, our algorithms improve the previous best running times of O((n m^{3/4} + n^{2/3} m) log^{O(1)}(n κ ε^{−1})) to O((m + n · 2^{O(√(log n log log n))}) log^{O(1)}(n κ ε^{−1})), where n is the number of vertices in the graph, m is the number of edges, κ is a natural condition number associated with the problem, and ε is the desired accuracy. We hope these results open the door for further studies into directed spectral graph theory, and that they will serve as a stepping stone for designing a new generation of fast algorithms for directed graphs. @InProceedings{STOC17p410, author = {Michael B. Cohen and Jonathan Kelner and John Peebles and Richard Peng and Anup B. Rao and Aaron Sidford and Adrian Vladu}, title = {Almost-Linear-Time Algorithms for Markov Chains and New Spectral Primitives for Directed Graphs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {410--419}, doi = {}, year = {2017}, } |
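For the stationary-distribution problem, the textbook baseline is power iteration, whose iteration count degrades with the chain's conditioning — the dependence the paper's directed Laplacian solver improves. A minimal numpy sketch (the tolerance and example chain are assumptions):

    import numpy as np

    def stationary(P, tol=1e-12, max_iter=100000):
        # P: row-stochastic transition matrix. Iterate pi <- pi P to a fixed point.
        pi = np.full(P.shape[0], 1.0 / P.shape[0])
        for _ in range(max_iter):
            nxt = pi @ P
            if np.linalg.norm(nxt - pi, 1) < tol:
                return nxt
            pi = nxt
        return pi

    # A small non-reversible chain on three states.
    P = np.array([[0.0, 0.7, 0.3],
                  [0.2, 0.0, 0.8],
                  [0.5, 0.5, 0.0]])
    print(stationary(P))  # satisfies pi = pi P and sums to 1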
|
Khot, Subhash |
STOC '17: "On Independent Sets, 2-to-2 ..."
On Independent Sets, 2-to-2 Games, and Grassmann Graphs
Subhash Khot, Dor Minzer, and Muli Safra (New York University, USA; Tel Aviv University, Israel) We present a candidate reduction from the 3-Lin problem to the 2-to-2 Games problem and present a combinatorial hypothesis about Grassmann graphs which, if correct, is sufficient to show the soundness of the reduction in a certain non-standard sense. A reduction that is sound in this non-standard sense implies that it is NP-hard to distinguish whether an n-vertex graph has an independent set of size (1 − 1/√2)n − o(n) or whether every independent set has size o(n), and consequently, that it is NP-hard to approximate the Vertex Cover problem within a factor √2 − o(1). @InProceedings{STOC17p576, author = {Subhash Khot and Dor Minzer and Muli Safra}, title = {On Independent Sets, 2-to-2 Games, and Grassmann Graphs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {576--589}, doi = {}, year = {2017}, } |
|
Khoussainov, Bakhadyr |
STOC '17: "Deciding Parity Games in Quasipolynomial ..."
Deciding Parity Games in Quasipolynomial Time
Cristian S. Calude, Sanjay Jain, Bakhadyr Khoussainov, Wei Li, and Frank Stephan (University of Auckland, New Zealand; National University of Singapore, Singapore) It is shown that the parity game can be solved in quasipolynomial time. The parameterised parity game – with n nodes and m distinct values (aka colours or priorities) – is proven to be in the class of fixed parameter tractable (FPT) problems when parameterised over m. Both results improve known bounds, from runtime n^{O(√n)} to O(n^{log(m)+6}) and from an XP-algorithm with runtime O(n^{Θ(m)}) for fixed parameter m to an FPT-algorithm with runtime O(n^5) + g(m), for some function g depending on m only. As an application it is proven that coloured Muller games with n nodes and m colours can be decided in time O((m^m · n)^5); it is also shown that this bound cannot be improved to O((2^m · n)^c), for any c, unless FPT = W[1]. @InProceedings{STOC17p252, author = {Cristian S. Calude and Sanjay Jain and Bakhadyr Khoussainov and Wei Li and Frank Stephan}, title = {Deciding Parity Games in Quasipolynomial Time}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {252--263}, doi = {}, year = {2017}, } |
|
Kim, David H. K. |
STOC '17: "New Hardness Results for Routing ..."
New Hardness Results for Routing on Disjoint Paths
Julia Chuzhoy, David H. K. Kim, and Rachit Nimavat (Toyota Technological Institute at Chicago, USA; University of Chicago, USA) In the classical Node-Disjoint Paths (NDP) problem, the input consists of an undirected n-vertex graph G, and a collection M = {(s_1, t_1), …, (s_k, t_k)} of pairs of its vertices, called source-destination, or demand, pairs. The goal is to route the largest possible number of the demand pairs via node-disjoint paths. The best current approximation for the problem is achieved by a simple greedy algorithm, whose approximation factor is O(√n), while the best current negative result is an Ω(log^{1/2−δ} n)-hardness of approximation for any constant δ, under standard complexity assumptions. Even seemingly simple special cases of the problem are still poorly understood: when the input graph is a grid, the best current algorithm achieves an Õ(n^{1/4})-approximation, and when it is a general planar graph, the best current approximation ratio of an efficient algorithm is Õ(n^{9/19}). The best currently known lower bound for both these versions of the problem is APX-hardness. In this paper we prove that NDP is 2^{Ω(√log n)}-hard to approximate, unless all problems in NP have algorithms with running time n^{O(log n)}. Our result holds even when the underlying graph is a planar graph with maximum vertex degree 4, and all source vertices lie on the boundary of a single face (but the destination vertices may lie anywhere in the graph). We extend this result to the closely related Edge-Disjoint Paths problem, showing the same hardness of approximation ratio even for sub-cubic planar graphs with all sources lying on the boundary of a single face. @InProceedings{STOC17p86, author = {Julia Chuzhoy and David H. K. Kim and Rachit Nimavat}, title = {New Hardness Results for Routing on Disjoint Paths}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {86--99}, doi = {}, year = {2017}, } |
|
Kleinberg, Robert |
STOC '17: "Bernoulli Factories and Black-Box ..."
Bernoulli Factories and Black-Box Reductions in Mechanism Design
Shaddin Dughmi, Jason D. Hartline, Robert Kleinberg, and Rad Niazadeh (University of Southern California, USA; Northwestern University, USA; Cornell University, USA) We provide a polynomial-time reduction from Bayesian incentive-compatible mechanism design to Bayesian algorithm design for welfare maximization problems. Unlike prior results, our reduction achieves exact incentive compatibility for problems with multi-dimensional and continuous type spaces. The key technical barrier preventing exact incentive compatibility in prior black-box reductions is that repairing violations of incentive constraints requires understanding the distribution of the mechanism’s output, which is typically #P-hard to compute. Reductions that instead estimate the output distribution by sampling inevitably suffer from sampling error, which typically precludes exact incentive compatibility. We overcome this barrier by employing and generalizing the computational model in the literature on “Bernoulli Factories”. In a Bernoulli factory problem, one is given a function mapping the bias of an “input coin” to that of an “output coin”, and the challenge is to efficiently simulate the output coin given only sample access to the input coin. Consider a generalization which we call the “expectations from samples” computational model, in which a problem instance is specified by a function mapping the expected values of a set of input distributions to a distribution over outcomes. The challenge is to give a polynomial time algorithm that exactly samples from the distribution over outcomes given only sample access to the input distributions. In this model we give a polynomial time algorithm for the function given by “exponential weights”: expected values of the input distributions correspond to the weights of alternatives and we wish to select an alternative with probability proportional to its weight. This algorithm is the key ingredient in designing an incentive compatible mechanism for bipartite matching, which can be used to make the approximately incentive compatible reduction of Hartline-Malekian-Kleinberg [2015] exactly incentive compatible. @InProceedings{STOC17p158, author = {Shaddin Dughmi and Jason D. Hartline and Robert Kleinberg and Rad Niazadeh}, title = {Bernoulli Factories and Black-Box Reductions in Mechanism Design}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {158--169}, doi = {}, year = {2017}, } STOC '17: "Beating 1-1/e for Ordered ..." Beating 1-1/e for Ordered Prophets Melika Abolhassani, Soheil Ehsani, Hossein Esfandiari, MohammadTaghi HajiAghayi, Robert Kleinberg, and Brendan Lucier (University of Maryland at College Park, USA; Cornell University, USA; Microsoft Research, USA) Hill and Kertz studied the prophet inequality on iid distributions [The Annals of Probability 1982]. They proved a theoretical bound of 1 − 1/e on the approximation factor of their algorithm. They conjectured that the best approximation factor for arbitrarily large n is 1/(1 + 1/e) ≃ 0.731. This conjecture remained open for over 30 years prior to this paper. In this paper we present a threshold-based algorithm for the prophet inequality with n iid distributions. Using a nontrivial and novel approach we show that our algorithm is a 0.738-approximation algorithm. By beating the bound of 1/(1 + 1/e), this refutes the conjecture of Hill and Kertz. Moreover, we generalize our results to non-uniform distributions and discuss their applications in mechanism design.
@InProceedings{STOC17p61, author = {Melika Abolhassani and Soheil Ehsani and Hossein Esfandiari and MohammadTaghi HajiAghayi and Robert Kleinberg and Brendan Lucier}, title = {Beating 1-1/e for Ordered Prophets}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {61--71}, doi = {}, year = {2017}, } |
|
Knudsen, Mathias Bæk Tejs |
STOC '17: "Finding Even Cycles Faster ..."
Finding Even Cycles Faster via Capped k-Walks
Søren Dahlgaard, Mathias Bæk Tejs Knudsen, and Morten Stöckel (University of Copenhagen, Denmark) Finding cycles in graphs is a fundamental problem in algorithmic graph theory. In this paper, we consider the problem of finding and reporting a cycle of length 2k in an undirected graph G with n nodes and m edges for constant k ≥ 2. A classic result by Bondy and Simonovits [J. Combinatorial Theory, 1974] implies that if m ≥ 100k · n^{1+1/k}, then G contains a 2k-cycle, further implying that one needs to consider only graphs with m = O(n^{1+1/k}). Previously the best known algorithms were an O(n^2) algorithm due to Yuster and Zwick [J. Discrete Math 1997] as well as an O(m^{2−(1+⌈k/2⌉^{−1})/(k+1)}) algorithm by Alon et al. [Algorithmica 1997]. We present an algorithm that uses O(m^{2k/(k+1)}) time and finds a 2k-cycle if one exists. This bound is O(n^2) exactly when m = Θ(n^{1+1/k}). When finding 4-cycles our new bound coincides with that of Alon et al., while for every k > 2 our new bound yields a polynomial improvement in m. Yuster and Zwick noted that it is “plausible to conjecture that O(n^2) is the best possible bound in terms of n”. We show “conditional optimality”: if this hypothesis holds then our O(m^{2k/(k+1)}) algorithm is tight as well. Furthermore, a folklore reduction implies that no combinatorial algorithm can determine if a graph contains a 6-cycle in time O(m^{3/2−ε}) for any ε > 0 unless boolean matrix multiplication can be solved combinatorially in time O(n^{3−ε′}) for some ε′ > 0, which is widely believed to be false. Coupled with our main result, this gives tight bounds for finding 6-cycles combinatorially and also separates the complexity of finding 4- and 6-cycles, giving evidence that the exponent of m in the running time should indeed increase with k. The key ingredient in our algorithm is a new notion of capped k-walks, which are walks of length k that visit only nodes according to a fixed ordering. Our main technical contribution is an involved analysis proving several properties of such walks which may be of independent interest. @InProceedings{STOC17p112, author = {Søren Dahlgaard and Mathias Bæk Tejs Knudsen and Morten Stöckel}, title = {Finding Even Cycles Faster via Capped k-Walks}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {112--120}, doi = {}, year = {2017}, } |
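For the smallest case k = 2, the classical idea is already simple: two vertices with two common neighbors form a 4-cycle, so scan pairs of neighbors and look for a repeated pair. A sketch of that baseline (not the capped-walk algorithm):

    def has_4cycle(adj):
        # adj: dict vertex -> list of distinct neighbors (undirected graph).
        seen = set()
        for u in adj:
            for i, v in enumerate(adj[u]):
                for w in adj[u][i + 1:]:
                    pair = (min(v, w), max(v, w))
                    if pair in seen:
                        return True  # {v, w} has two common neighbors
                    seen.add(pair)
        return False

    square = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
    print(has_4cycle(square))  # True: 0-1-2-3-0

Since each unordered pair is inserted at most once before a repeat is found, the loop examines O(n^2) pairs in total.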
|
Kobayashi, Yusuke |
STOC '17: "A Weighted Linear Matroid ..."
A Weighted Linear Matroid Parity Algorithm
Satoru Iwata and Yusuke Kobayashi (University of Tokyo, Japan; University of Tsukuba, Japan) The matroid parity (or matroid matching) problem, introduced as a common generalization of matching and matroid intersection problems, is so general that it requires an exponential number of oracle calls. Lovász (1980) showed that this problem admits a min-max formula and a polynomial algorithm for linearly represented matroids. Since then, efficient algorithms have been developed for the linear matroid parity problem. In this paper, we present a combinatorial, deterministic, strongly polynomial algorithm for the weighted linear matroid parity problem. The algorithm builds on a polynomial matrix formulation using the Pfaffian and adopts a primal-dual approach with the aid of the augmenting path algorithm of Gabow and Stallmann (1986) for the unweighted problem. @InProceedings{STOC17p264, author = {Satoru Iwata and Yusuke Kobayashi}, title = {A Weighted Linear Matroid Parity Algorithm}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {264--276}, doi = {}, year = {2017}, } |
|
Kokainis, Martins |
STOC '17: "Quantum Algorithm for Tree ..."
Quantum Algorithm for Tree Size Estimation, with Applications to Backtracking and 2-Player Games
Andris Ambainis and Martins Kokainis (University of Latvia, Latvia) We study quantum algorithms on search trees of unknown structure, in a model where the tree can be discovered by local exploration. That is, we are given the root of the tree and access to a black box which, given a vertex v, outputs the children of v. We construct a quantum algorithm which, given such access to a search tree of depth at most n, estimates the size of the tree T within a factor of 1 ± δ in Õ(√(nT)) steps. More generally, the same algorithm can be used to estimate the size of directed acyclic graphs (DAGs) in a similar model. We then show two applications of this result: a) We show how to transform a classical backtracking search algorithm which examines T nodes of a search tree into an Õ(√T · n^{3/2}) time quantum algorithm, improving over an earlier quantum backtracking algorithm of Montanaro (arXiv:1509.02374). b) We give a quantum algorithm for evaluating AND-OR formulas in a model where the formula can be discovered by local exploration (modeling position trees in 2-player games) which evaluates formulas of size T and depth T^{o(1)} in time O(T^{1/2+o(1)}). Thus, the quantum speedup is essentially the same as in the case when the formula is known in advance. @InProceedings{STOC17p989, author = {Andris Ambainis and Martins Kokainis}, title = {Quantum Algorithm for Tree Size Estimation, with Applications to Backtracking and 2-Player Games}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {989--1002}, doi = {}, year = {2017}, } |
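A natural classical point of comparison is Knuth's 1975 estimator, which sizes a search tree from a single random root-to-leaf walk by multiplying branching factors; it is unbiased but can have enormous variance, which is what the quantum guarantees improve on. A sketch (the black-box interface mirrors the abstract's model):

    import random

    def knuth_estimate(root, children, seed=None):
        # One random descent; the running product of branching factors gives
        # an unbiased estimate of the total number of tree nodes.
        rng = random.Random(seed)
        estimate, weight, v = 1, 1, root
        while True:
            kids = children(v)
            if not kids:
                return estimate
            weight *= len(kids)
            estimate += weight
            v = rng.choice(kids)

    # Complete binary tree with 15 nodes: every walk returns exactly 15.
    children = lambda v: [2 * v + 1, 2 * v + 2] if v < 7 else []
    print(knuth_estimate(0, children))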
|
Kol, Gillat |
STOC '17: "Time-Space Hardness of Learning ..."
Time-Space Hardness of Learning Sparse Parities
Gillat Kol, Ran Raz, and Avishay Tal (Princeton University, USA; IAS, USA) We define a concept class F to be time-space hard (or memory-samples hard) if any learning algorithm for F requires either a memory of size super-linear in n or a number of samples super-polynomial in n, where n is the length of one sample. A recent work shows that the class of all parity functions is time-space hard [Raz, FOCS’16]. Building on [Raz, FOCS’16], we show that the class of all sparse parities of Hamming weight ℓ is time-space hard, as long as ℓ ≥ ω(log n / log log n). Consequently, linear-size DNF Formulas, linear-size Decision Trees and logarithmic-size Juntas are all time-space hard. Our result is more general and provides time-space lower bounds for learning any concept class of parity functions. We give applications of our results in the field of bounded-storage cryptography. For example, for every ω(log n) ≤ k ≤ n, we obtain an encryption scheme that requires a private key of length k, and time complexity of n per encryption/decryption of each bit, and is provably and unconditionally secure as long as the attacker uses at most o(nk) memory bits and the scheme is used at most 2^{o(k)} times. Previously, this was known only for k = n [Raz, FOCS’16]. @InProceedings{STOC17p1067, author = {Gillat Kol and Ran Raz and Avishay Tal}, title = {Time-Space Hardness of Learning Sparse Parities}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1067--1080}, doi = {}, year = {2017}, } |
|
Kopelowitz, Tsvi |
STOC '17: "Exponential Separations in ..."
Exponential Separations in the Energy Complexity of Leader Election
Yi-Jun Chang, Tsvi Kopelowitz, Seth Pettie, Ruosong Wang, and Wei Zhan (University of Michigan, USA; Tsinghua University, China) Energy is often the most constrained resource for battery-powered wireless devices, and the lion’s share of energy is often spent on transceiver usage (sending/receiving packets), not on computation. In this paper we study the energy complexity of Leader Election and Approximate Counting in several models of wireless radio networks. It turns out that energy complexity is very sensitive to whether the devices can generate random bits and their ability to detect collisions. We consider four collision-detection models: Strong-CD (in which transmitters and listeners detect collisions), Sender-CD and Receiver-CD (in which only transmitters or only listeners detect collisions), and No-CD (in which no one detects collisions). The take-away message of our results is quite surprising. For randomized Leader Election algorithms, there is an exponential gap between the energy complexity of Sender-CD and Receiver-CD: No-CD = Sender-CD ≫ Receiver-CD = Strong-CD, and for deterministic Leader Election algorithms, there is another exponential gap in energy complexity, but in the reverse direction: No-CD = Receiver-CD ≫ Sender-CD = Strong-CD. In particular, the randomized energy complexity of Leader Election is Θ(log* n) in Sender-CD but Θ(log(log* n)) in Receiver-CD, where n is the (unknown) number of devices. Its deterministic complexity is Θ(log N) in Receiver-CD but Θ(log log N) in Sender-CD, where N is the (known) size of the devices’ ID space. There is a tradeoff between time and energy. We give a new upper bound on the time-energy tradeoff curve for randomized Leader Election and Approximate Counting. A critical component of this algorithm is a new deterministic Leader Election algorithm for dense instances, when n = Θ(N), with inverse-Ackermann-type (O(α(N))) energy complexity. @InProceedings{STOC17p771, author = {Yi-Jun Chang and Tsvi Kopelowitz and Seth Pettie and Ruosong Wang and Wei Zhan}, title = {Exponential Separations in the Energy Complexity of Leader Election}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {771--783}, doi = {}, year = {2017}, } |
|
Kothari, Pravesh K. |
STOC '17: "Quantum Entanglement, Sum ..."
Quantum Entanglement, Sum of Squares, and the Log Rank Conjecture
Boaz Barak, Pravesh K. Kothari, and David Steurer (Harvard University, USA; Princeton University, USA; IAS, USA; Cornell University, USA) For every constant ε > 0, we give an exp(Õ(√n))-time algorithm for the 1 vs. 1−ε Best Separable State (BSS) problem of distinguishing, given an n^2 × n^2 matrix corresponding to a quantum measurement, between the case that there is a separable (i.e., non-entangled) state ρ that accepts with probability 1, and the case that every separable state is accepted with probability at most 1−ε. Equivalently, our algorithm takes the description of a subspace W ⊆ F^{n^2} (where F can be either the real or complex field) and distinguishes between the case that W contains a rank one matrix, and the case that every rank one matrix is at least ε-far (in ℓ_2 distance) from W. To the best of our knowledge, this is the first improvement over the brute-force exp(n)-time algorithm for this problem. Our algorithm is based on the sum-of-squares hierarchy and its analysis is inspired by Lovett’s proof (STOC ’14, JACM ’16) that the communication complexity of every rank-n Boolean matrix is bounded by Õ(√n). @InProceedings{STOC17p975, author = {Boaz Barak and Pravesh K. Kothari and David Steurer}, title = {Quantum Entanglement, Sum of Squares, and the Log Rank Conjecture}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {975--988}, doi = {}, year = {2017}, } STOC '17: "Approximating Rectangles by ..." Approximating Rectangles by Juntas and Weakly-Exponential Lower Bounds for LP Relaxations of CSPs Pravesh K. Kothari, Raghu Meka, and Prasad Raghavendra (Princeton University, USA; IAS, USA; University of California at Los Angeles, USA; University of California at Berkeley, USA) We show that for constraint satisfaction problems (CSPs), sub-exponential size linear programming relaxations are as powerful as n^{Ω(1)} rounds of the Sherali-Adams linear programming hierarchy. As a corollary, we obtain sub-exponential size lower bounds for linear programming relaxations that beat random guessing for many CSPs such as MAX-CUT and MAX-3SAT. This is a nearly-exponential improvement over previous results; previously, the best known lower bounds were quasi-polynomial in n (Chan, Lee, Raghavendra, Steurer 2013). Our bounds are obtained by exploiting and extending the recent progress in communication complexity for “lifting” query lower bounds to communication problems. The main ingredient in our results is a new structural result on “high-entropy rectangles” that may be of independent interest in communication complexity. @InProceedings{STOC17p590, author = {Pravesh K. Kothari and Raghu Meka and Prasad Raghavendra}, title = {Approximating Rectangles by Juntas and Weakly-Exponential Lower Bounds for LP Relaxations of CSPs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {590--603}, doi = {}, year = {2017}, } STOC '17: "Sum of Squares Lower Bounds ..." Sum of Squares Lower Bounds for Refuting any CSP Pravesh K. Kothari, Ryuhei Mori, Ryan O'Donnell, and David Witmer (Princeton University, USA; IAS, USA; Tokyo Institute of Technology, Japan; Carnegie Mellon University, USA) Let P : {0,1}^k → {0,1} be a nontrivial k-ary predicate. Consider a random instance of the constraint satisfaction problem CSP(P) on n variables with Δn constraints, each being P applied to k randomly chosen literals. Provided the constraint density satisfies Δ ≫ 1, such an instance is unsatisfiable with high probability. The refutation problem is to efficiently find a proof of unsatisfiability.
We show that whenever the predicate P supports a t-wise uniform probability distribution on its satisfying assignments, the sum of squares (SOS) algorithm of degree d = Θ(n/Δ^{2/(t−1)} log Δ) (which runs in time n^{O(d)}) cannot refute a random instance of CSP(P). In particular, the polynomial-time SOS algorithm requires Ω(n^{(t+1)/2}) constraints to refute random instances of CSP(P) when P supports a t-wise uniform distribution on its satisfying assignments. Together with recent work of Lee et al. (Lee, Raghavendra, Steurer 2015), our result also implies that any polynomial-size semidefinite programming relaxation for refutation requires at least Ω(n^{(t+1)/2}) constraints. More generally, we consider the δ-refutation problem, in which the goal is to certify that at most a (1−δ)-fraction of constraints can be simultaneously satisfied. We show that if P is δ-close to supporting a t-wise uniform distribution on satisfying assignments, then the degree-Ω(n/Δ^{2/(t−1)} log Δ) SOS algorithm cannot (δ+o(1))-refute a random instance of CSP(P). This is the first result to show a distinction between the degree SOS needs to solve the refutation problem and the degree it needs to solve the harder δ-refutation problem. Our results (which also extend with no change to CSPs over larger alphabets) subsume all previously known lower bounds for semialgebraic refutation of random CSPs. For every constraint predicate P, they give a three-way hardness tradeoff between the density of constraints, the SOS degree (hence running time), and the strength of the refutation. By recent algorithmic results of Allen, O'Donnell, Witmer (2015) and Raghavendra, Rao, Schramm (2016), this full three-way tradeoff is tight, up to lower-order factors. @InProceedings{STOC17p132, author = {Pravesh K. Kothari and Ryuhei Mori and Ryan O'Donnell and David Witmer}, title = {Sum of Squares Lower Bounds for Refuting any CSP}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {132--145}, doi = {}, year = {2017}, } |
|
Krauthgamer, Robert |
STOC '17: "Streaming Symmetric Norms ..."
Streaming Symmetric Norms via Measure Concentration
Jarosław Błasiok, Vladimir Braverman, Stephen R. Chestnut, Robert Krauthgamer, and Lin F. Yang (Harvard University, USA; Johns Hopkins University, USA; ETH Zurich, Switzerland; Weizmann Institute of Science, Israel) We characterize the streaming space complexity of every symmetric norm l (a norm on ℝ^n invariant under sign-flips and coordinate-permutations), by relating this space complexity to the measure-concentration characteristics of l. Specifically, we provide nearly matching upper and lower bounds on the space complexity of calculating a (1±ε)-approximation to the norm of the stream, for every 0 < ε ≤ 1/2. (The bounds match up to (ε^{−1} log n) factors.) We further extend those bounds to any large approximation ratio D ≥ 1.1, showing that the decrease in space complexity is proportional to D^2, and that this factor is the best possible. All of the bounds depend on the median of l(x) when x is drawn uniformly from the l_2 unit sphere. The same median governs many phenomena in high-dimensional spaces, such as large-deviation bounds and the critical dimension in Dvoretzky’s Theorem. The family of symmetric norms contains several well-studied norms, such as all l_p norms, and indeed we provide a new explanation for the disparity in space complexity between p ≤ 2 and p > 2. In addition, we apply our general results to easily derive bounds for several norms that were not studied before in the streaming model, including the top-k norm and the k-support norm, which was recently employed for machine learning tasks. Overall, these results make progress on two outstanding problems in the area of sublinear algorithms (Problems 5 and 30 in http://sublinear.info). @InProceedings{STOC17p716, author = {Jarosław Błasiok and Vladimir Braverman and Stephen R. Chestnut and Robert Krauthgamer and Lin F. Yang}, title = {Streaming Symmetric Norms via Measure Concentration}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {716--729}, doi = {}, year = {2017}, } |
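The governing quantity — the median of l(x) for x uniform on the l_2 unit sphere — is easy to estimate empirically, since a normalized Gaussian vector is uniform on the sphere. A sketch, shown for the l_infinity norm (the norm choice and sample count are arbitrary assumptions):

    import numpy as np

    def median_on_sphere(norm, n, samples=20000, seed=0):
        rng = np.random.default_rng(seed)
        g = rng.standard_normal((samples, n))
        x = g / np.linalg.norm(g, axis=1, keepdims=True)  # uniform on sphere
        return float(np.median(norm(x)))

    linf = lambda x: np.abs(x).max(axis=1)
    print(median_on_sphere(linf, n=1000))  # grows like sqrt(log(n) / n)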
|
Krishnaswamy, Ravishankar |
STOC '17: "Online and Dynamic Algorithms ..."
Online and Dynamic Algorithms for Set Cover
Anupam Gupta, Ravishankar Krishnaswamy, Amit Kumar, and Debmalya Panigrahi (Carnegie Mellon University, USA; Microsoft Research, India; IIT Delhi, India; Duke University, USA) In this paper, we give new results for the set cover problem in the fully dynamic model. In this model, the set of “active” elements to be covered changes over time. The goal is to maintain a near-optimal solution for the currently active elements, while making few changes in each timestep. This model is popular in both dynamic and online algorithms: in the former, the goal is to minimize the update time of the solution, while in the latter, the recourse (number of changes) is bounded. We present generic techniques for the dynamic set cover problem inspired by the classic greedy and primal-dual offline algorithms for set cover. The former leads to a competitive ratio of O(log n_t), where n_t is the number of currently active elements at timestep t, while the latter yields competitive ratios dependent on f_t, the maximum number of sets that a currently active element belongs to. We demonstrate that these techniques are useful for obtaining tight results in both settings: update time bounds and limited recourse, exhibiting algorithmic techniques common to these two parallel threads of research. @InProceedings{STOC17p537, author = {Anupam Gupta and Ravishankar Krishnaswamy and Amit Kumar and Debmalya Panigrahi}, title = {Online and Dynamic Algorithms for Set Cover}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {537--550}, doi = {}, year = {2017}, } |
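The offline greedy rule that inspires the first technique is worth seeing concretely; a standard sketch (maintaining it dynamically as elements arrive and depart is the paper's contribution and is not shown):

    def greedy_set_cover(universe, sets):
        # Repeatedly pick the set covering the most still-uncovered elements;
        # this is the classic O(log n)-approximation.
        uncovered = set(universe)
        chosen = []
        while uncovered:
            best = max(range(len(sets)), key=lambda i: len(sets[i] & uncovered))
            if not sets[best] & uncovered:
                raise ValueError("universe is not coverable")
            chosen.append(best)
            uncovered -= sets[best]
        return chosen

    universe = {1, 2, 3, 4, 5}
    sets = [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}]
    print(greedy_set_cover(universe, sets))  # [0, 3]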
|
Krzakala, Florent |
STOC '17: "Information-Theoretic Thresholds ..."
Information-Theoretic Thresholds from the Cavity Method
Amin Coja-Oghlan, Florent Krzakala, Will Perkins, and Lenka Zdeborova (Goethe University Frankfurt, Germany; CNRS, France; PSL Research University, France; ENS, France; UPMC, France; University of Birmingham, UK; CEA, France; University of Paris-Saclay, France) Vindicating a sophisticated but non-rigorous physics approach called the cavity method, we establish a formula for the mutual information in statistical inference problems induced by random graphs. This general result implies the conjecture on the information-theoretic threshold in the disassortative stochastic block model [Decelle et al.: Phys. Rev. E (2011)] and allows us to pinpoint the exact condensation phase transition in random constraint satisfaction problems such as random graph coloring, thereby proving a conjecture from [Krzakala et al.: PNAS (2007)]. As a further application we establish the formula for the mutual information in Low-Density Generator Matrix codes as conjectured in [Montanari: IEEE Transactions on Information Theory (2005)]. The proofs provide a conceptual underpinning of the replica symmetric variant of the cavity method, and we expect that the approach will find many future applications. @InProceedings{STOC17p146, author = {Amin Coja-Oghlan and Florent Krzakala and Will Perkins and Lenka Zdeborova}, title = {Information-Theoretic Thresholds from the Cavity Method}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {146--157}, doi = {}, year = {2017}, } |
|
Kuhn, Fabian |
STOC '17: "On the Complexity of Local ..."
On the Complexity of Local Distributed Graph Problems
Mohsen Ghaffari, Fabian Kuhn, and Yannic Maus (ETH Zurich, Switzerland; University of Freiburg, Germany) This paper is centered on the complexity of graph problems in the well-studied LOCAL model of distributed computing, introduced by Linial [FOCS ’87]. It is widely known that for many of the classic distributed graph problems (including maximal independent set (MIS) and (Δ+1)-vertex coloring), the randomized complexity is at most polylogarithmic in the size n of the network, while the best deterministic complexity is typically 2O(√logn). Understanding and potentially narrowing down this exponential gap is considered to be one of the central long-standing open questions in the area of distributed graph algorithms. We investigate the problem by introducing a complexity-theoretic framework that allows us to shed some light on the role of randomness in the LOCAL model. We define the SLOCAL model as a sequential version of the LOCAL model. Our framework allows us to prove completeness results with respect to the class of problems which can be solved efficiently in the SLOCAL model, implying that if any of the complete problems can be solved deterministically in polylogn rounds in the LOCAL model, we can deterministically solve all efficient SLOCAL-problems (including MIS and (Δ+1)-coloring) in polylogn rounds in the LOCAL model. Perhaps most surprisingly, we show that a rather rudimentary-looking graph coloring problem is complete in the above sense: Color the nodes of a graph with colors red and blue such that each node of sufficiently large polylogarithmic degree has at least one neighbor of each color. The problem admits a trivial zero-round randomized solution. The result can be viewed as showing that the only obstacle to getting efficient deterministic algorithms in the LOCAL model is an efficient algorithm to approximately round fractional values into integer values. In addition, our formal framework also allows us to develop polylogarithmic-time randomized distributed algorithms in a simpler way. As a result, we provide a polylog-time distributed approximation scheme for arbitrary distributed covering and packing integer linear programs. @InProceedings{STOC17p784, author = {Mohsen Ghaffari and Fabian Kuhn and Yannic Maus}, title = {On the Complexity of Local Distributed Graph Problems}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {784--797}, doi = {}, year = {2017}, } |
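The "trivial zero-round randomized solution" mentioned above is easy to make concrete. A hypothetical sketch (pure standard library; the adjacency structure is given as a dict of neighbor sets):

```python
import random

def zero_round_coloring(adj, seed=0):
    """The trivial zero-round randomized solution: every node independently
    picks red or blue, with no communication at all."""
    rng = random.Random(seed)
    return {v: rng.choice(("red", "blue")) for v in adj}

def uncolorful_nodes(adj, color, min_degree):
    """Nodes of degree >= min_degree missing one of the two colors in their
    neighborhood. A node of degree d fails with probability 2*(1/2)^d, so for
    polylogarithmic degrees a union bound gives success with high probability."""
    return [v for v, nbrs in adj.items()
            if len(nbrs) >= min_degree and len({color[u] for u in nbrs}) < 2]

adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1, 3}, 3: {0, 2}}
color = zero_round_coloring(adj)
print(color, uncolorful_nodes(adj, color, min_degree=3))
```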
|
Kumar, Amit |
STOC '17: "Online and Dynamic Algorithms ..."
Online and Dynamic Algorithms for Set Cover
Anupam Gupta, Ravishankar Krishnaswamy, Amit Kumar, and Debmalya Panigrahi (Carnegie Mellon University, USA; Microsoft Research, India; IIT Delhi, India; Duke University, USA) In this paper, we give new results for the set cover problem in the fully dynamic model. In this model, the set of “active” elements to be covered changes over time. The goal is to maintain a near-optimal solution for the currently active elements, while making few changes in each timestep. This model is popular in both dynamic and online algorithms: in the former, the goal is to minimize the update time of the solution, while in the latter, the recourse (number of changes) is bounded. We present generic techniques for the dynamic set cover problem inspired by the classic greedy and primal-dual offline algorithms for set cover. The former leads to a competitive ratio of O(lognt), where nt is the number of currently active elements at timestep t, while the latter yields competitive ratios dependent on ft, the maximum number of sets that a currently active element belongs to. We demonstrate that these techniques are useful for obtaining tight results in both settings: update time bounds and limited recourse, exhibiting algorithmic techniques common to these two parallel threads of research. @InProceedings{STOC17p537, author = {Anupam Gupta and Ravishankar Krishnaswamy and Amit Kumar and Debmalya Panigrahi}, title = {Online and Dynamic Algorithms for Set Cover}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {537--550}, doi = {}, year = {2017}, } |
|
Kuperberg, Greg |
STOC '17: "The Computational Complexity ..."
The Computational Complexity of Ball Permutations
Scott Aaronson, Adam Bouland, Greg Kuperberg, and Saeed Mehraban (University of Texas at Austin, USA; Massachusetts Institute of Technology, USA; University of California at Davis, USA) We define several models of computation based on permuting distinguishable particles (which we call balls) and characterize their computational complexity. In the quantum setting, we use the representation theory of the symmetric group to find variants of this model which are intermediate between BPP and DQC1 (the class of problems solvable with one clean qubit) and between DQC1 and BQP. Furthermore, we consider a restricted version of this model based on an exactly solvable scattering problem of particles moving on a line. Despite the simplicity of this model from the perspective of mathematical physics, we show that if we allow intermediate destructive measurements and specific input states, then the model cannot be efficiently simulated classically up to multiplicative error unless the polynomial hierarchy collapses. Finally, we define a classical version of this model in which one can probabilistically permute balls. We find this yields a complexity class which is intermediate between L and BPP, and that a nondeterministic version of this model is NP-complete. @InProceedings{STOC17p317, author = {Scott Aaronson and Adam Bouland and Greg Kuperberg and Saeed Mehraban}, title = {The Computational Complexity of Ball Permutations}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {317--327}, doi = {}, year = {2017}, } |
|
Kupferman, Orna |
STOC '17: "Examining Classical Graph-Theory ..."
Examining Classical Graph-Theory Problems from the Viewpoint of Formal-Verification Methods (Invited Talk)
Orna Kupferman (Hebrew University of Jerusalem, Israel) The talk surveys a series of works that lift the rich semantics and structure of graphs, and the experience of the formal-verification community in reasoning about them, to classical graph-theoretical problems. @InProceedings{STOC17p6, author = {Orna Kupferman}, title = {Examining Classical Graph-Theory Problems from the Viewpoint of Formal-Verification Methods (Invited Talk)}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {6--6}, doi = {}, year = {2017}, } |
|
Kyng, Rasmus |
STOC '17: "Sampling Random Spanning Trees ..."
Sampling Random Spanning Trees Faster Than Matrix Multiplication
David Durfee, Rasmus Kyng, John Peebles, Anup B. Rao, and Sushant Sachdeva (Georgia Institute of Technology, USA; Yale University, USA; Massachusetts Institute of Technology, USA; Google, USA) We present an algorithm that, with high probability, generates a random spanning tree from an edge-weighted undirected graph in Õ(n5/3 m1/3) time. The tree is sampled from a distribution where the probability of each tree is proportional to the product of its edge weights. This improves upon the previous best algorithm due to Colbourn et al. that runs in matrix multiplication time, O(nω). For the special case of unweighted graphs, this improves upon the best previously known running time of Õ(min{nω,m√n,m4/3}) for m ≫ n7/4 (Colbourn et al. ’96, Kelner-Madry ’09, Madry et al. ’15). The effective resistance metric is essential to our algorithm, as in the work of Madry et al., but we eschew determinant-based and random walk-based techniques used by previous algorithms. Instead, our algorithm is based on Gaussian elimination, and the fact that effective resistance is preserved in the graph resulting from eliminating a subset of vertices (called a Schur complement). As part of our algorithm, we show how to compute є-approximate effective resistances for a set S of vertex pairs via approximate Schur complements in Õ(m+(n + |S|)є−2) time, without using the Johnson-Lindenstrauss lemma which requires Õ(min{(m + |S|)є−2, m+nє−4 +|S|є−2}) time. We combine this approximation procedure with an error correction procedure for handling edges where our estimate isn’t sufficiently accurate. @InProceedings{STOC17p730, author = {David Durfee and Rasmus Kyng and John Peebles and Anup B. Rao and Sushant Sachdeva}, title = {Sampling Random Spanning Trees Faster Than Matrix Multiplication}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {730--742}, doi = {}, year = {2017}, } Info |
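The structural fact the algorithm rests on, that eliminating vertices via a Schur complement preserves effective resistances among the remaining vertices, can be checked numerically. A small sketch assuming numpy:

```python
import numpy as np

def laplacian(n, edges):
    """Weighted graph Laplacian from (u, v, weight) triples."""
    L = np.zeros((n, n))
    for u, v, w in edges:
        L[u, u] += w; L[v, v] += w
        L[u, v] -= w; L[v, u] -= w
    return L

def effective_resistance(L, u, v):
    chi = np.zeros(len(L)); chi[u], chi[v] = 1.0, -1.0
    return chi @ np.linalg.pinv(L) @ chi

L = laplacian(4, [(0, 1, 1.0), (1, 2, 2.0), (2, 3, 1.0), (3, 0, 1.0), (0, 2, 0.5)])
keep, elim = [0, 1, 2], [3]
# Schur complement onto `keep`: the Laplacian of the graph with vertex 3 eliminated
S = (L[np.ix_(keep, keep)]
     - L[np.ix_(keep, elim)] @ np.linalg.inv(L[np.ix_(elim, elim)]) @ L[np.ix_(elim, keep)])
print(effective_resistance(L, 0, 2), effective_resistance(S, 0, 2))  # equal
```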
|
Łącki, Jakub |
STOC '17: "Decremental Single-Source ..."
Decremental Single-Source Reachability in Planar Digraphs
Giuseppe F. Italiano, Adam Karczmarz, Jakub Łącki, and Piotr Sankowski (University of Rome Tor Vergata, Italy; University of Warsaw, Poland; Google Research, USA) In this paper we show a new algorithm for the decremental single-source reachability problem in directed planar graphs. It processes any sequence of edge deletions in O(nlog2nloglogn) total time and explicitly maintains the set of vertices reachable from a fixed source vertex. Hence, if all edges are eventually deleted, the amortized time of processing each edge deletion is only O(log2 n loglogn), which improves upon a previously known O(√n ) solution. We also show an algorithm for decremental maintenance of strongly connected components in directed planar graphs with the same total update time. These results constitute the first almost optimal (up to polylogarithmic factors) algorithms for both problems. To the best of our knowledge, these are the first dynamic algorithms with polylogarithmic update times on general directed planar graphs for non-trivial reachability-type problems, for which only polynomial bounds are known in general graphs. @InProceedings{STOC17p1108, author = {Giuseppe F. Italiano and Adam Karczmarz and Jakub Łącki and Piotr Sankowski}, title = {Decremental Single-Source Reachability in Planar Digraphs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1108--1121}, doi = {}, year = {2017}, } |
|
Laekhanukit, Bundit |
STOC '17: "Surviving in Directed Graphs: ..."
Surviving in Directed Graphs: A Quasi-Polynomial-Time Polylogarithmic Approximation for Two-Connected Directed Steiner Tree
Fabrizio Grandoni and Bundit Laekhanukit (IDSIA, Switzerland; University of Lugano, Switzerland; Weizmann Institute of Science, Israel) Real-world networks are often prone to failures. A reliable network needs to cope with this situation and must provide a backup communication channel. This motivates the study of survivable network design, which has been a focus of research for a few decades. To date, survivable network design problems on undirected graphs are well-understood. For example, there is a 2-approximation in the case of edge failures [Jain, FOCS’98/Combinatorica’01]. The problems on directed graphs, in contrast, have seen very little progress. Most techniques for the undirected case like primal-dual and iterative rounding methods do not seem to extend to the directed case. Almost no non-trivial approximation algorithm is known even for a simple case where we wish to design a network that tolerates a single failure. In this paper, we study a survivable network design problem on directed graphs, 2-Connected Directed Steiner Tree (2-DST): given an n-vertex weighted directed graph, a root r, and a set of h terminals S, find a min-cost subgraph H that has two edge/vertex disjoint paths from r to any t∈ S. 2-DST is a natural generalization of the classical Directed Steiner Tree problem (DST), where we have an additional requirement that the network must tolerate one failure. No non-trivial approximation is known for 2-DST. This was left as an open problem by Feldman et al. [SODA’09; JCSS] and has since been studied by Cheriyan et al. [SODA’12; TALG] and Laekhanukit [SODA’14]. However, no positive result was known except for the special case of a D-shallow instance [Laekhanukit, ICALP’16]. We present an O(D3logD· h2/D· logn) approximation algorithm for 2-DST that runs in time O(nO(D)), for any D∈[log2h]. This implies a polynomial-time O(hєlogn) approximation for any constant є>0, and a poly-logarithmic approximation running in quasi-polynomial time. We remark that this is essentially the best known even for the classical DST, and the latter problem is O(log2−єn)-hard to approximate [Halperin and Krauthgamer, STOC’03]. As a by-product, we obtain an algorithm with the same approximation guarantee for the 2-Connected Directed Steiner Subgraph problem, where the goal is to find a min-cost subgraph such that every pair of terminals is 2-edge/vertex-connected. Our approximation algorithm is based on a careful combination of several techniques. In more detail, we decompose an optimal solution into two (possibly not edge disjoint) divergent trees that induce two edge disjoint paths from the root to any given terminal. These divergent trees are then embedded into a shallow tree by means of Zelikovsky’s height reduction theorem. On the latter tree we solve a 2-Connected Group Steiner Tree problem and then map back this solution to the original graph. Crucially, our tree embedding is achieved via a probabilistic mapping guided by an LP: This is the main technical novelty of our approach, and might be useful for future work. @InProceedings{STOC17p420, author = {Fabrizio Grandoni and Bundit Laekhanukit}, title = {Surviving in Directed Graphs: A Quasi-Polynomial-Time Polylogarithmic Approximation for Two-Connected Directed Steiner Tree}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {420--428}, doi = {}, year = {2017}, } |
|
Larsen, Kasper Green |
STOC '17: "DecreaseKeys Are Expensive ..."
DecreaseKeys Are Expensive for External Memory Priority Queues
Kasper Eenberg, Kasper Green Larsen, and Huacheng Yu (Aarhus University, Denmark; Stanford University, USA) One of the biggest open problems in external memory data structures is the priority queue problem with DecreaseKey operations. If only Insert and ExtractMin operations need to be supported, one can design a comparison-based priority queue performing O((N/B)lgM/B N) I/Os over a sequence of N operations, where B is the disk block size in number of words and M is the main memory size in number of words. This matches the lower bound for comparison-based sorting and is hence optimal for comparison-based priority queues. However, if we also need to support DecreaseKeys, the performance of the best known priority queue is only O((N/B) lg2 N) I/Os. The big open question is whether a degradation in performance really is necessary. We answer this question affirmatively by proving a lower bound of Ω((N/B) lglgN B) I/Os for processing a sequence of N intermixed Insert, ExtractMin and DecreaseKey operations. Our lower bound is proved in the cell probe model and thus holds also for non-comparison-based priority queues. @InProceedings{STOC17p1081, author = {Kasper Eenberg and Kasper Green Larsen and Huacheng Yu}, title = {DecreaseKeys Are Expensive for External Memory Priority Queues}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1081--1093}, doi = {}, year = {2017}, } |
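To ground the operations under discussion, here is a toy in-memory priority queue where DecreaseKey is handled by reinsertion and lazy deletion. This only illustrates the interface; the paper's subject is the I/O cost of supporting these operations in external memory.

```python
import heapq
from itertools import count

class LazyPQ:
    """Toy in-RAM priority queue: DecreaseKey is a reinsertion, and stale heap
    entries are skipped during ExtractMin."""
    def __init__(self):
        self.heap, self.key, self.tie = [], {}, count()
    def insert(self, item, key):
        self.key[item] = key
        heapq.heappush(self.heap, (key, next(self.tie), item))
    def decrease_key(self, item, new_key):
        assert new_key <= self.key.get(item, float("inf"))
        self.insert(item, new_key)
    def extract_min(self):
        while self.heap:
            key, _, item = heapq.heappop(self.heap)
            if self.key.get(item) == key:  # current entry, not a stale one
                del self.key[item]
                return item, key
        raise IndexError("extract_min from empty queue")

pq = LazyPQ()
pq.insert("a", 5); pq.insert("b", 3); pq.decrease_key("a", 1)
print(pq.extract_min())  # ('a', 1)
```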
|
Lee, Yin Tat |
STOC '17: "Kernel-Based Methods for Bandit ..."
Kernel-Based Methods for Bandit Convex Optimization
Sébastien Bubeck, Yin Tat Lee, and Ronen Eldan (Microsoft Research, USA; Weizmann Institute of Science, Israel) We consider the adversarial convex bandit problem and we build the first poly(T)-time algorithm with poly(n) √T-regret for this problem. To do so we introduce three new ideas in the derivative-free optimization literature: (i) kernel methods, (ii) a generalization of Bernoulli convolutions, and (iii) a new annealing schedule for exponential weights (with increasing learning rate). The basic version of our algorithm achieves Õ(n9.5 √T)-regret, and we show that a simple variant of this algorithm can be run in poly(n log(T))-time per step at the cost of an additional poly(n) To(1) factor in the regret. These results improve upon the Õ(n11 √T)-regret and exp(poly(T))-time result of the first two authors, and the log(T)poly(n) √T-regret and log(T)poly(n)-time result of Hazan and Li. Furthermore we conjecture that another variant of the algorithm could achieve Õ(n1.5 √T)-regret, and moreover that this regret is unimprovable (the current best lower bound being Ω(n √T) and it is achieved with linear functions). For the simpler situation of zeroth order stochastic convex optimization this corresponds to the conjecture that the optimal query complexity is of order n3 / є2. @InProceedings{STOC17p72, author = {Sébastien Bubeck and Yin Tat Lee and Ronen Eldan}, title = {Kernel-Based Methods for Bandit Convex Optimization}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {72--85}, doi = {}, year = {2017}, } Video Info STOC '17: "An SDP-Based Algorithm for ..." An SDP-Based Algorithm for Linear-Sized Spectral Sparsification Yin Tat Lee and He Sun (Microsoft Research, USA; University of Bristol, UK) For any undirected and weighted graph G=(V,E,w) with n vertices and m edges, we call a sparse subgraph H of G, with proper reweighting of the edges, a (1+ε)-spectral sparsifier if (1−ε)xTLGx≤ xT LH x≤ (1+ε) xT LGx holds for any x∈ℝn, where LG and LH are the respective Laplacian matrices of G and H. Noticing that Ω(m) time is needed for any algorithm to construct a spectral sparsifier and a spectral sparsifier of G requires Ω(n) edges, a natural question is to investigate, for any constant ε, if a (1+ε)-spectral sparsifier of G with O(n) edges can be constructed in Õ(m) time, where the Õ notation suppresses polylogarithmic factors. All previous constructions on spectral sparsification require either super-linear number of edges or m1+Ω(1) time. In this work we answer this question affirmatively by presenting an algorithm that, for any undirected graph G and ε>0, outputs a (1+ε)-spectral sparsifier of G with O(n/ε2) edges in Õ(m/εO(1)) time. Our algorithm is based on three novel techniques: (1) a new potential function which is much easier to compute yet has similar guarantees as the potential functions used in previous references; (2) an efficient reduction from a two-sided spectral sparsifier to a one-sided spectral sparsifier; (3) constructing a one-sided spectral sparsifier by a semi-definite program. @InProceedings{STOC17p678, author = {Yin Tat Lee and He Sun}, title = {An SDP-Based Algorithm for Linear-Sized Spectral Sparsification}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {678--687}, doi = {}, year = {2017}, } STOC '17: "Subquadratic Submodular Function ..." 
Subquadratic Submodular Function Minimization Deeparnab Chakrabarty, Yin Tat Lee, Aaron Sidford, and Sam Chiu-wai Wong (Dartmouth College, USA; Microsoft Research, USA; Stanford University, USA; University of California at Berkeley, USA) Submodular function minimization (SFM) is a fundamental discrete optimization problem which generalizes many well-known problems, has applications in various fields, and can be solved in polynomial time. Owing to applications in computer vision and machine learning, fast SFM algorithms are highly desirable. The current fastest algorithms [Lee, Sidford, Wong, 2015] run in O(n2lognM· EO+n3logO(1)nM) time and O(n3log2n· EO+n4logO(1)n) time respectively, where M is the largest absolute value of the function (assuming the range is integers) and EO is the time taken to evaluate the function on any set. Although the best known lower bound on the query complexity is only Ω(n) [Harvey, 2008], the fastest known running times are quadratic or worse. The main contributions of this paper are subquadratic SFM algorithms. For integer-valued submodular functions, we give an SFM algorithm which runs in O(nM3logn· EO) time giving the first nearly linear time algorithm in any known regime. For real-valued submodular functions with range in [−1,1], we give an algorithm which in Õ(n5/3· EO/ε2) time returns an ε-additive approximate solution. At their heart, our algorithms are projected stochastic subgradient descent methods on the Lovász extension of submodular functions where we crucially exploit submodularity and data structures to obtain fast, i.e. sublinear time, subgradient updates. The latter is crucial for beating the n2 bound – we show that algorithms which access only subgradients of the Lovász extension, and these include the empirically fast Fujishige-Wolfe heuristic [Fujishige, 1980; Wolfe, 1976], cannot run in subquadratic time. @InProceedings{STOC17p1220, author = {Deeparnab Chakrabarty and Yin Tat Lee and Aaron Sidford and Sam Chiu-wai Wong}, title = {Subquadratic Submodular Function Minimization}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1220--1231}, doi = {}, year = {2017}, } STOC '17: "Geodesic Walks in Polytopes ..." Geodesic Walks in Polytopes Yin Tat Lee and Santosh S. Vempala (Microsoft Research, USA; University of Washington, USA; Georgia Institute of Technology, USA) We introduce the geodesic walk for sampling Riemannian manifolds and apply it to the problem of generating uniform random points from the interior of polytopes in R^n specified by m inequalities. The walk is a discrete-time simulation of a stochastic differential equation (SDE) on the Riemannian manifold equipped with the metric induced by the Hessian of a convex function; each step is the solution of an ordinary differential equation (ODE). The resulting sampling algorithm for polytopes mixes in O^*(mn^3/4) steps. This is the first walk that breaks the quadratic barrier for mixing in high dimension, improving on the previous best bound of O^*(mn) by Kannan and Narayanan for the Dikin walk. We also show that each step of the geodesic walk (solving an ODE) can be implemented efficiently, thus improving the time complexity for sampling polytopes. Our analysis of the geodesic walk for general Hessian manifolds does not assume positive curvature and might be of independent interest. @InProceedings{STOC17p927, author = {Yin Tat Lee and Santosh S. Vempala}, title = {Geodesic Walks in Polytopes}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {927--940}, doi = {}, year = {2017}, } |
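The core primitive of the subquadratic SFM algorithms, a subgradient of the Lovász extension computed via Edmonds' greedy ordering, is short to write down. Below is a minimal sketch using plain (non-stochastic, non-sublinear-update) projected subgradient descent, assuming the set function is given as a Python callable; all names are illustrative, not the paper's implementation.

```python
import numpy as np

def lovasz_subgradient(F, x):
    """Edmonds' greedy: sort coordinates in decreasing order of x; the marginal
    gains of F along that order form a subgradient of the Lovász extension."""
    order = np.argsort(-x)
    g = np.zeros_like(x)
    S, prev = [], F([])
    for i in order:
        S.append(int(i))
        cur = F(S)
        g[int(i)] = cur - prev
        prev = cur
    return g

def sfm_by_subgradient_descent(F, n, steps=2000, eta=0.05):
    """Plain projected subgradient descent on the Lovász extension over [0,1]^n,
    followed by rounding to the best threshold set."""
    x = np.full(n, 0.5)
    for _ in range(steps):
        x = np.clip(x - eta * lovasz_subgradient(F, x), 0.0, 1.0)
    candidates = [[i for i in range(n) if x[i] >= t] for t in np.unique(x)] + [[]]
    return min(candidates, key=F)

# Example: the cut function of a path graph is submodular; its minimum value is 0.
edges = [(0, 1), (1, 2), (2, 3)]
cut = lambda S: sum((u in set(S)) != (v in set(S)) for u, v in edges)
result = sfm_by_subgradient_descent(cut, 4)
print(result, cut(result))
```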
|
Li, Wei |
STOC '17: "Deciding Parity Games in Quasipolynomial ..."
Deciding Parity Games in Quasipolynomial Time
Cristian S. Calude, Sanjay Jain, Bakhadyr Khoussainov, Wei Li, and Frank Stephan (University of Auckland, New Zealand; National University of Singapore, Singapore) It is shown that the parity game can be solved in quasipolynomial time. The parameterised parity game – with n nodes and m distinct values (aka colours or priorities) – is proven to be in the class of fixed parameter tractable (FPT) problems when parameterised over m. Both results improve known bounds, from runtime nO(√n) to O(nlog(m)+6) and from an XP-algorithm with runtime O(nΘ(m)) for fixed parameter m to an FPT-algorithm with runtime O(n5)+g(m), for some function g depending on m only. As an application it is proven that coloured Muller games with n nodes and m colours can be decided in time O((mm · n)5); it is also shown that this bound cannot be improved to O((2m · n)c), for any c, unless FPT = W[1]. @InProceedings{STOC17p252, author = {Cristian S. Calude and Sanjay Jain and Bakhadyr Khoussainov and Wei Li and Frank Stephan}, title = {Deciding Parity Games in Quasipolynomial Time}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {252--263}, doi = {}, year = {2017}, } |
|
Li, Xin |
STOC '17: "Non-malleable Codes and Extractors ..."
Non-malleable Codes and Extractors for Small-Depth Circuits, and Affine Functions
Eshan Chattopadhyay and Xin Li (IAS, USA; Johns Hopkins University, USA) Non-malleable codes were introduced by Dziembowski, Pietrzak and Wichs as an elegant relaxation of error correcting codes, where the motivation is to handle more general forms of tampering while still providing meaningful guarantees. This has led to many elegant constructions and applications in cryptography. However, most works so far only studied tampering in the split-state model where different parts of the codeword are tampered independently, and thus do not apply to many other natural classes of tampering functions. The only exceptions are the work of Agrawal et al. which studied non-malleable codes against bit permutation composed with bit-wise tampering, and the works of Faust et al. and Ball et al. which studied non-malleable codes against local functions. However, in both cases each tampered bit only depends on a subset of input bits. In this work, we study the problem of constructing non-malleable codes against more general tampering functions that act on the entire codeword. We give the first efficient constructions of non-malleable codes against AC0 tampering functions and affine tampering functions. These are the first explicit non-malleable codes against tampering functions where each tampered bit can depend on all input bits. We also give efficient non-malleable codes against t-local functions for t=o(√n), where a t-local function has the property that any output bit depends on at most t input bits. In the case of deterministic decoders, this improves upon the results of Ball et al., which can handle t≤ n1/4. All our results on non-malleable codes are obtained by using the connection between non-malleable codes and seedless non-malleable extractors discovered by Cheraghchi and Guruswami. Therefore, we also give the first efficient constructions of seedless non-malleable extractors against AC0 tampering functions, t-local tampering functions for t=o(√n), and affine tampering functions. To derive our results on non-malleable codes, we design efficient algorithms to almost uniformly sample from the pre-image of any given output of our non-malleable extractor. @InProceedings{STOC17p1171, author = {Eshan Chattopadhyay and Xin Li}, title = {Non-malleable Codes and Extractors for Small-Depth Circuits, and Affine Functions}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1171--1184}, doi = {}, year = {2017}, } Video STOC '17: "Improved Non-malleable Extractors, ..." Improved Non-malleable Extractors, Non-malleable Codes and Independent Source Extractors Xin Li (Johns Hopkins University, USA) In this paper we give improved constructions of several central objects in the literature of randomness extraction and tamper-resilient cryptography. Our main results are: (1) An explicit seeded non-malleable extractor with error є and seed length d=O(logn)+O(log(1/є)loglog(1/є)), that supports min-entropy k=Ω(d) and outputs Ω(k) bits. Combined with the protocol by Dodis and Wichs, this gives a two round privacy amplification protocol with optimal entropy loss in the presence of an active adversary, for all security parameters up to Ω(k/logk), where k is the min-entropy of the shared weak random source. Previously, the best known seeded non-malleable extractors require seed length and min-entropy O(logn)+log(1/є)2O(√loglog(1/є)), and only give two round privacy amplification protocols with optimal entropy loss for security parameter up to k/2O(√logk).
(2) An explicit non-malleable two-source extractor for min-entropy k ≥ (1−γ)n, some constant γ>0, that outputs Ω(k) bits with error 2−Ω(n/logn). We further show that we can efficiently uniformly sample from the pre-image of any output of the extractor. Combined with the connection found by Cheraghchi and Guruswami this gives a non-malleable code in the two-split-state model with relative rate Ω(1/logn). This exponentially improves previous constructions, all of which only achieve rate n−Ω(1). (3) Combined with the techniques by Ben-Aroya et al., our non-malleable extractors give a two-source extractor for min-entropy O(logn loglogn), which also implies a K-Ramsey graph on N vertices with K=(logN)O(logloglogN). Previously the best known two-source extractor by Ben-Aroya et al. requires min-entropy logn 2O(√logn), which gives a Ramsey graph with K=(logN)2O(√logloglogN). We further show a way to reduce the problem of constructing seeded non-malleable extractors to the problem of constructing non-malleable independent source extractors. Using the non-malleable 10-source extractor with optimal error by Chattopadhyay and Zuckerman, we give a 10-source extractor for min-entropy O(logn). Previously the best known extractor for such min-entropy by Cohen and Schulman requires O(loglogn) sources. Independently of our work, Cohen obtained similar results to (1) and the two-source extractor, except the dependence on є is log(1/є)poly loglog(1/є) and the two-source extractor requires min-entropy logn poly loglogn. @InProceedings{STOC17p1144, author = {Xin Li}, title = {Improved Non-malleable Extractors, Non-malleable Codes and Independent Source Extractors}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1144--1156}, doi = {}, year = {2017}, } |
|
Liu, Jingcheng |
STOC '17: "Uniform Sampling through the ..."
Uniform Sampling through the Lovász Local Lemma
Heng Guo, Mark Jerrum, and Jingcheng Liu (Queen Mary University of London, UK; University of California at Berkeley, USA) We propose a new algorithmic framework, called “partial rejection sampling”, to draw samples exactly from a product distribution, conditioned on none of a number of bad events occurring. Our framework builds (perhaps surprising) new connections between the variable framework of the Lovász Local Lemma and some classical sampling algorithms such as the “cycle-popping” algorithm for rooted spanning trees by Wilson. Among other applications, we discover new algorithms to sample satisfying assignments of k-CNF formulas with bounded variable occurrences. @InProceedings{STOC17p342, author = {Heng Guo and Mark Jerrum and Jingcheng Liu}, title = {Uniform Sampling through the Lovász Local Lemma}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {342--355}, doi = {}, year = {2017}, } |
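In the instances the paper treats, partial rejection sampling amounts to repeatedly resampling just the variables of violated constraints. A hypothetical sketch for k-CNF assignments follows; exactness of the output distribution holds only under the paper's conditions, and this code merely shows the resampling dynamics.

```python
import random

def partial_rejection_sat(n, clauses, seed=0):
    """Repeatedly resample only the variables occurring in violated clauses,
    until no clause is violated."""
    rng = random.Random(seed)
    x = [rng.random() < 0.5 for _ in range(n)]
    def violated():
        # a clause is violated iff all of its literals are false
        return [c for c in clauses if all(x[abs(l) - 1] == (l < 0) for l in c)]
    bad = violated()
    while bad:
        for v in {abs(l) - 1 for c in bad for l in c}:
            x[v] = rng.random() < 0.5
        bad = violated()
    return x

# literals DIMACS-style: 1 means x1 is true, -2 means x2 is false
print(partial_rejection_sat(3, [[1, 2], [-2, 3]]))
```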
|
Lokshtanov, Daniel |
STOC '17: "Lossy Kernelization ..."
Lossy Kernelization
Daniel Lokshtanov, Fahad Panolan, M. S. Ramanujan, and Saket Saurabh (University of Bergen, Norway; Vienna University of Technology, Austria; Institute of Mathematical Sciences, India) In this paper we propose a new framework for analyzing the performance of preprocessing algorithms. Our framework builds on the notion of kernelization from parameterized complexity. However, as opposed to the original notion of kernelization, our definitions combine well with approximation algorithms and heuristics. The key new definition is that of a polynomial size α-approximate kernel. Loosely speaking, a polynomial size α-approximate kernel is a polynomial time pre-processing algorithm that takes as input an instance (I, k) to a parameterized problem, and outputs another instance (I′,k′) to the same problem, such that |I′| + k′ ≤ kO(1). Additionally, for every c≥ 1, a c-approximate solution s′ to the pre-processed instance (I′, k′) can be turned in polynomial time into a (c · α)-approximate solution s to the original instance (I,k). Amongst our main technical contributions are α-approximate kernels of polynomial size for three problems, namely Connected Vertex Cover, Disjoint Cycle Packing and Disjoint Factors. These problems are known not to admit any polynomial size kernels unless NP⊆ coNP/Poly. Our approximate kernels simultaneously beat both the lower bounds on the (normal) kernel size, and the hardness of approximation lower bounds for all three problems. On the negative side we prove that Longest Path parameterized by the length of the path and Set Cover parameterized by the universe size do not admit even an α-approximate kernel of polynomial size, for any α≥ 1, unless NP ⊆ coNP/Poly. In order to prove this lower bound we need to combine in a non-trivial way the techniques used for showing kernelization lower bounds with the methods for showing hardness of approximation. @InProceedings{STOC17p224, author = {Daniel Lokshtanov and Fahad Panolan and M. S. Ramanujan and Saket Saurabh}, title = {Lossy Kernelization}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {224--237}, doi = {}, year = {2017}, } |
|
Lu, Zhenjian |
STOC '17: "A Polynomial Restriction Lemma ..."
A Polynomial Restriction Lemma with Applications
Valentine Kabanets, Daniel M. Kane, and Zhenjian Lu (Simon Fraser University, Canada; University of California at San Diego, USA) A polynomial threshold function (PTF) of degree d is a boolean function of the form f=sgn(p), where p is a degree-d polynomial, and sgn is the sign function. The main result of the paper is an almost optimal bound on the probability that a random restriction of a PTF is not close to a constant function, where a boolean function g is called δ-close to constant if, for some v∈{1,−1}, we have g(x)=v for all but at most δ fraction of inputs. We show for every PTF f of degree d≥ 1, and parameters 0<δ, r≤ 1/16, that Prρ∼ Rr [fρ is not δ-close to constant] ≤ √r · (logr−1 · logδ−1)O(d2), where ρ∼ Rr is a random restriction leaving each variable, independently, free with probability r, and otherwise assigning it 1 or −1 uniformly at random. In fact, we show a more general result for random block restrictions: given an arbitrary partitioning of input variables into m blocks, a random block restriction picks a uniformly random block ℓ∈ [m] and assigns 1 or −1, uniformly at random, to all variables outside the chosen block ℓ. We prove the Block Restriction Lemma saying that a PTF f of degree d becomes δ-close to constant when hit with a random block restriction, except with probability at most m−1/2 · (logm· logδ−1)O(d2). As an application of our Restriction Lemma, we prove lower bounds against constant-depth circuits with PTF gates of any degree 1≤ d≪ √logn/loglogn, generalizing the recent bounds against constant-depth circuits with linear threshold gates (LTF gates) proved by Kane and Williams (STOC, 2016) and Chen, Santhanam, and Srinivasan (CCC, 2016). In particular, we show that there is an n-variate boolean function Fn ∈ P such that every depth-2 circuit with PTF gates of degree d≥ 1 that computes Fn must have at least (n3/2+1/d)· (logn)−O(d2) wires. For constant depths greater than 2, we also show average-case lower bounds for such circuits with a super-linear number of wires. These are the first super-linear bounds on the number of wires for circuits with PTF gates. We also give short proofs of the optimal-exponent average sensitivity bound for degree-d PTFs due to Kane (Computational Complexity, 2014), and the Littlewood-Offord type anticoncentration bound for degree-d multilinear polynomials due to Meka, Nguyen, and Vu (Theory of Computing, 2016). Finally, we give derandomized versions of our Block Restriction Lemma and Littlewood-Offord type anticoncentration bounds, using a pseudorandom generator for PTFs due to Meka and Zuckerman (SICOMP, 2013). @InProceedings{STOC17p615, author = {Valentine Kabanets and Daniel M. Kane and Zhenjian Lu}, title = {A Polynomial Restriction Lemma with Applications}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {615--628}, doi = {}, year = {2017}, } |
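A random restriction as defined above is simple to simulate. A small sketch (numpy assumed; the PTF and parameters are illustrative) that restricts a degree-2 PTF and estimates how close the restricted function is to a constant:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_restriction(n, r):
    """Leave each variable free with probability r; fix the rest to +-1 uniformly."""
    free = rng.random(n) < r
    fixed = np.where(free, 0.0, rng.choice([-1.0, 1.0], size=n))
    return free, fixed

def majority_fraction(ptf, free, fixed, samples=4000):
    """Estimate closeness to constant: the fraction of random completions of the
    free variables on which the restricted PTF takes its majority sign."""
    xs = np.tile(fixed, (samples, 1))
    xs[:, free] = rng.choice([-1.0, 1.0], size=(samples, int(free.sum())))
    signs = np.array([np.sign(ptf(x)) for x in xs])
    return (1 + abs(signs.mean())) / 2  # >= 1 - delta means delta-close to constant

# Example: the degree-2 PTF sgn(x0*x1 + 0.5*x2) under a restriction with r = 1/16
p = lambda x: x[0] * x[1] + 0.5 * x[2]
free, fixed = random_restriction(20, 1 / 16)
print(majority_fraction(p, free, fixed))
```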
|
Lucier, Brendan |
STOC '17: "Beating 1-1/e for Ordered ..."
Beating 1-1/e for Ordered Prophets
Melika Abolhassani, Soheil Ehsani, Hossein Esfandiari, MohammadTaghi HajiAghayi, Robert Kleinberg, and Brendan Lucier (University of Maryland at College Park, USA; Cornell University, USA; Microsoft Research, USA) Hill and Kertz studied the prophet inequality on iid distributions [The Annals of Probability 1982]. They proved a theoretical bound of 1 − 1/e on the approximation factor of their algorithm. They conjectured that the best approximation factor for arbitrarily large n is 1/(1+1/e)≃ 0.731. This conjecture remained open for over 30 years prior to this paper. In this paper we present a threshold-based algorithm for the prophet inequality with n iid distributions. Using a nontrivial and novel approach we show that our algorithm is a 0.738-approximation algorithm. By beating the bound of 1/(1+1/e), this refutes the conjecture of Hill and Kertz. Moreover, we generalize our results to non-uniform distributions and discuss their applications in mechanism design. @InProceedings{STOC17p61, author = {Melika Abolhassani and Soheil Ehsani and Hossein Esfandiari and MohammadTaghi HajiAghayi and Robert Kleinberg and Brendan Lucier}, title = {Beating 1-1/e for Ordered Prophets}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {61--71}, doi = {}, year = {2017}, } |
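To make the setting concrete, the sketch below simulates the simplest single-threshold baseline for the iid prophet problem against the prophet's expected maximum. The paper's 0.738 algorithm is more refined (its thresholds vary over time); this is only an illustration of the model, with hypothetical names and parameters.

```python
import random

def single_threshold(dist, n, tau, trials=20000, seed=1):
    """Accept the first of n iid draws that is >= tau (else take the last draw);
    returns Monte Carlo estimates of (algorithm's value, prophet's E[max])."""
    rng = random.Random(seed)
    alg = prophet = 0.0
    for _ in range(trials):
        xs = [dist(rng) for _ in range(n)]
        alg += next((x for x in xs if x >= tau), xs[-1])
        prophet += max(xs)
    return alg / trials, prophet / trials

# Uniform[0,1] draws with the threshold at the (1 - 1/n)-quantile
n = 10
print(single_threshold(lambda r: r.random(), n, tau=1 - 1/n))
```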
|
Ma, Tengyu |
STOC '17: "Provable Learning of Noisy-or ..."
Provable Learning of Noisy-or Networks
Sanjeev Arora, Rong Ge, Tengyu Ma, and Andrej Risteski (Princeton University, USA; Duke University, USA) Many machine learning applications use latent variable models to explain structure in data, whereby visible variables (= coordinates of the given datapoint) are explained as a probabilistic function of some hidden variables. Learning the model ---that is, the mapping from hidden variables to visible ones and vice versa---is NP-hard even in very simple settings. In recent years, provably efficient algorithms were nevertheless developed for models with linear structure: topic models, mixture models, hidden Markov models, etc. These algorithms use matrix or tensor decomposition, and make some reasonable assumptions about the parameters of the underlying model. But matrix or tensor decomposition seems of little use when the latent variable model has nonlinearities. The current paper shows how to make progress: tensor decomposition is applied for learning the single-layer noisy-OR network, which is a textbook example of a Bayes net, and used for example in the classic QMR-DT software for diagnosing which disease(s) a patient may have by observing the symptoms he/she exhibits. The technical novelty here, which should be useful in other settings in future, is analysis of tensor decomposition in presence of systematic error (i.e., where the noise/error is correlated with the signal, and doesn't decrease as the number of samples goes to infinity). This requires rethinking all steps of tensor decomposition methods from the ground up. For simplicity our analysis is stated assuming that the network parameters were chosen from a probability distribution but the method seems more generally applicable. @InProceedings{STOC17p1057, author = {Sanjeev Arora and Rong Ge and Tengyu Ma and Andrej Risteski}, title = {Provable Learning of Noisy-or Networks}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1057--1066}, doi = {}, year = {2017}, } STOC '17: "Finding Approximate Local ..." Finding Approximate Local Minima Faster than Gradient Descent Naman Agarwal, Zeyuan Allen-Zhu, Brian Bullins, Elad Hazan, and Tengyu Ma (Princeton University, USA; IAS, USA) We design a non-convex second-order optimization algorithm that is guaranteed to return an approximate local minimum in time which scales linearly in the underlying dimension and the number of training examples. The time complexity of our algorithm to find an approximate local minimum is even faster than that of gradient descent to find a critical point. Our algorithm applies to a general class of optimization problems including training a neural network and other non-convex objectives arising in machine learning. @InProceedings{STOC17p1195, author = {Naman Agarwal and Zeyuan Allen-Zhu and Brian Bullins and Elad Hazan and Tengyu Ma}, title = {Finding Approximate Local Minima Faster than Gradient Descent}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1195--1199}, doi = {}, year = {2017}, } |
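A generative sketch of the single-layer noisy-OR model referred to in the first abstract above (QMR-DT style), assuming numpy; the leak parameter and weights are illustrative, not taken from the paper.

```python
import numpy as np

def sample_noisy_or(prior, W, leak=0.01, seed=0):
    """Single-layer noisy-OR net: hidden 'diseases' d_i ~ Bernoulli(prior_i);
    symptom j stays off with probability (1 - leak) * prod over active i of (1 - W[i, j])."""
    rng = np.random.default_rng(seed)
    d = rng.random(len(prior)) < prior
    p_off = (1 - leak) * np.prod(np.where(d[:, None], 1 - W, 1.0), axis=0)
    s = rng.random(W.shape[1]) < 1 - p_off
    return d, s

prior = np.array([0.1, 0.2, 0.05])                   # disease priors
W = np.array([[0.9, 0.0], [0.3, 0.7], [0.0, 0.5]])   # disease-to-symptom weights
print(sample_noisy_or(prior, W))
```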
|
Makarychev, Konstantin |
STOC '17: "Algorithms for Stable and ..."
Algorithms for Stable and Perturbation-Resilient Problems
Haris Angelidakis, Konstantin Makarychev, and Yury Makarychev (Toyota Technological Institute at Chicago, USA; Northwestern University, USA) We study the notion of stability and perturbation resilience introduced by Bilu and Linial (2010) and Awasthi, Blum, and Sheffet (2012). A combinatorial optimization problem is α-stable or α-perturbation-resilient if the optimal solution does not change when we perturb all parameters of the problem by a factor of at most α. In this paper, we give improved algorithms for stable instances of various clustering and combinatorial optimization problems. We also prove several hardness results. We first give an exact algorithm for 2-perturbation resilient instances of clustering problems with natural center-based objectives. The class of clustering problems with natural center-based objectives includes such problems as k-means, k-median, and k-center. Our result improves upon the result of Balcan and Liang (2016), who gave an algorithm for clustering 1+√2≈2.41 perturbation-resilient instances. Our result is tight in the sense that no polynomial-time algorithm can solve (2−ε)-perturbation resilient instances of k-center unless NP = RP, as was shown by Balcan, Haghtalab, and White (2016). We then give an exact algorithm for (2−2/k)-stable instances of Minimum Multiway Cut with k terminals, improving the previous result of Makarychev, Makarychev, and Vijayaraghavan (2014), who gave an algorithm for 4-stable instances. We also give an algorithm for (2−2/k+δ)-weakly stable instances of Minimum Multiway Cut. Finally, we show that there are no robust polynomial-time algorithms for n1−ε-stable instances of Set Cover, Minimum Vertex Cover, and Min 2-Horn Deletion (unless P = NP). @InProceedings{STOC17p438, author = {Haris Angelidakis and Konstantin Makarychev and Yury Makarychev}, title = {Algorithms for Stable and Perturbation-Resilient Problems}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {438--451}, doi = {}, year = {2017}, } |
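The definition can be exercised by brute force on tiny instances: perturb all distances by factors in [1, α] and check whether the optimal solution moves. A heuristic sketch for k-median (a necessary-condition test over random perturbations, not a certificate, and metricity of the perturbed instance is not enforced; all names are illustrative):

```python
import itertools, random

def kmedian_cost(points, centers, dist):
    return sum(min(dist[p][c] for c in centers) for p in points)

def opt_kmedian(points, k, dist):
    return min(itertools.combinations(points, k),
               key=lambda C: kmedian_cost(points, C, dist))

def survives_perturbations(points, k, dist, alpha, trials=200, seed=0):
    """Does the optimal k-median solution stay optimal when every distance is
    independently scaled by a factor in [1, alpha]? Failing one trial certifies
    non-resilience; passing all trials is only evidence, not a proof."""
    rng = random.Random(seed)
    base = set(opt_kmedian(points, k, dist))
    for _ in range(trials):
        pert = {p: {q: d * rng.uniform(1, alpha) for q, d in row.items()}
                for p, row in dist.items()}
        if set(opt_kmedian(points, k, pert)) != base:
            return False
    return True

points = [0, 1, 2, 10]
dist = {p: {q: abs(p - q) for q in points} for p in points}
print(survives_perturbations(points, 2, dist, alpha=1.4))  # True on this toy instance
```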
|
Makarychev, Yury |
STOC '17: "Algorithms for Stable and ..."
Algorithms for Stable and Perturbation-Resilient Problems
Haris Angelidakis, Konstantin Makarychev, and Yury Makarychev (Toyota Technological Institute at Chicago, USA; Northwestern University, USA) We study the notion of stability and perturbation resilience introduced by Bilu and Linial (2010) and Awasthi, Blum, and Sheffet (2012). A combinatorial optimization problem is α-stable or α-perturbation-resilient if the optimal solution does not change when we perturb all parameters of the problem by a factor of at most α. In this paper, we give improved algorithms for stable instances of various clustering and combinatorial optimization problems. We also prove several hardness results. We first give an exact algorithm for 2-perturbation resilient instances of clustering problems with natural center-based objectives. The class of clustering problems with natural center-based objectives includes such problems as k-means, k-median, and k-center. Our result improves upon the result of Balcan and Liang (2016), who gave an algorithm for clustering 1+√2≈2.41 perturbation-resilient instances. Our result is tight in the sense that no polynomial-time algorithm can solve (2−ε)-perturbation resilient instances of k-center unless NP = RP, as was shown by Balcan, Haghtalab, and White (2016). We then give an exact algorithm for (2−2/k)-stable instances of Minimum Multiway Cut with k terminals, improving the previous result of Makarychev, Makarychev, and Vijayaraghavan (2014), who gave an algorithm for 4-stable instances. We also give an algorithm for (2−2/k+δ)-weakly stable instances of Minimum Multiway Cut. Finally, we show that there are no robust polynomial-time algorithms for n1−ε-stable instances of Set Cover, Minimum Vertex Cover, and Min 2-Horn Deletion (unless P = NP). @InProceedings{STOC17p438, author = {Haris Angelidakis and Konstantin Makarychev and Yury Makarychev}, title = {Algorithms for Stable and Perturbation-Resilient Problems}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {438--451}, doi = {}, year = {2017}, } |
|
Manurangsi, Pasin |
STOC '17: "Almost-Polynomial Ratio ETH-Hardness ..."
Almost-Polynomial Ratio ETH-Hardness of Approximating Densest k-Subgraph
Pasin Manurangsi (University of California at Berkeley, USA) In the Densest k-Subgraph (DkS) problem, given an undirected graph G and an integer k, the goal is to find a subgraph of G on k vertices that contains the maximum number of edges. Even though Bhaskara et al.’s state-of-the-art algorithm for the problem achieves only an O(n1/4 + ε) approximation ratio, previous attempts at proving hardness of approximation, including those under average case assumptions, fail to achieve a polynomial ratio; the best ratios ruled out under any worst case assumption and any average case assumption are only any constant (Raghavendra and Steurer) and 2Ω(log2/3 n) (Alon et al.) respectively. In this work, we show, assuming the exponential time hypothesis (ETH), that there is no polynomial-time algorithm that approximates Densest k-Subgraph to within n1/(loglogn)c factor of the optimum, where c > 0 is a universal constant independent of n. In addition, our result has perfect completeness, meaning that we prove that it is ETH-hard to even distinguish between the case in which G contains a k-clique and the case in which every induced k-subgraph of G has density at most n−1/(loglogn)c in polynomial time. Moreover, if we make a stronger assumption that there is some constant ε > 0 such that no subexponential-time algorithm can distinguish between a satisfiable 3SAT formula and one which is only (1 − ε)-satisfiable (also known as Gap-ETH), then the ratio above can be improved to nf(n) for any function f whose limit is zero as n goes to infinity (i.e. f ∈ o(1)). @InProceedings{STOC17p954, author = {Pasin Manurangsi}, title = {Almost-Polynomial Ratio ETH-Hardness of Approximating Densest k-Subgraph}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {954--961}, doi = {}, year = {2017}, } |
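For concreteness, here is the problem itself at toy scale: an exponential-time exact baseline, which is exactly what the hardness result says cannot be approximated well in polynomial time.

```python
from itertools import combinations

def densest_k_subgraph(vertices, edges, k):
    """Exact DkS by trying all k-subsets (exponential-time baseline)."""
    edge_set = {frozenset(e) for e in edges}
    def internal(S):
        return sum(frozenset((u, v)) in edge_set for u, v in combinations(S, 2))
    return max(combinations(vertices, k), key=internal)

print(densest_k_subgraph(range(5), [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4)], 3))
# (0, 1, 2): the triangle is the densest 3-vertex subgraph
```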
|
Martens, Wim |
STOC '17: "Optimizing Tree Pattern Queries: ..."
Optimizing Tree Pattern Queries: Why Cutting Is Not Enough (Invited Talk)
Wim Martens (University of Bayreuth, Germany) Tree pattern queries are a natural language for querying graph- and tree-structured data. A central question for understanding their optimization problem was whether they can be minimized by cutting away redundant parts. This question has been studied since the early 2000s and was recently resolved. @InProceedings{STOC17p3, author = {Wim Martens}, title = {Optimizing Tree Pattern Queries: Why Cutting Is Not Enough (Invited Talk)}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {3--3}, doi = {}, year = {2017}, } |
|
Martin, James B. |
STOC '17: "Stability of Service under ..."
Stability of Service under Time-of-Use Pricing
Shuchi Chawla, Nikhil R. Devanur, Alexander E. Holroyd, Anna R. Karlin, James B. Martin, and Balasubramanian Sivan (University of Wisconsin-Madison, USA; Microsoft Research, USA; University of Washington, USA; University of Oxford, UK; Google Research, USA) We consider time-of-use pricing as a technique for matching supply and demand of temporal resources with the goal of maximizing social welfare. Relevant examples include energy, computing resources on a cloud computing platform, and charging stations for electric vehicles, among many others. A client/job in this setting has a window of time during which he needs service, and a particular value for obtaining it. We assume a stochastic model for demand, where each job materializes with some probability via an independent Bernoulli trial. Given a per-time-unit pricing of resources, any realized job will first try to get served by the cheapest available resource in its window and, failing that, will try to find service at the next cheapest available resource, and so on. Thus, the natural stochastic fluctuations in demand have the potential to lead to cascading overload events. Our main result shows that setting prices so as to optimally handle the expected demand works well: with high probability, when the actual demand is instantiated, the system is stable and the expected value of the jobs served is very close to that of the optimal offline algorithm. @InProceedings{STOC17p184, author = {Shuchi Chawla and Nikhil R. Devanur and Alexander E. Holroyd and Anna R. Karlin and James B. Martin and Balasubramanian Sivan}, title = {Stability of Service under Time-of-Use Pricing}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {184--197}, doi = {}, year = {2017}, } |
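A toy simulation of the service model described above: realized jobs grab the cheapest still-available slot in their window. All parameters are illustrative; the paper's content is how to set the prices so that this process stays stable and near-optimal.

```python
import random

def simulate_welfare(jobs, prices, seed=0):
    """Jobs are (window, value, prob). Each materializes via an independent
    Bernoulli trial, then takes the cheapest still-available slot in its window
    (one job per slot). Returns the realized welfare (sum of served values)."""
    rng = random.Random(seed)
    taken, welfare = set(), 0.0
    for window, value, prob in jobs:
        if rng.random() >= prob:
            continue  # job did not materialize
        options = sorted((prices[t], t) for t in window if t not in taken)
        if options:
            taken.add(options[0][1])
            welfare += value
    return welfare

jobs = [(range(0, 3), 1.0, 0.9), (range(1, 4), 2.0, 0.5), (range(2, 5), 1.5, 0.8)]
print(simulate_welfare(jobs, prices=[0.1, 0.2, 0.3, 0.4, 0.5]))
```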
|
Marx, Dániel |
STOC '17: "Homomorphisms Are a Good Basis ..."
Homomorphisms Are a Good Basis for Counting Small Subgraphs
Radu Curticapean, Holger Dell, and Dániel Marx (Hungarian Academy of Sciences, Hungary; Saarland University, Germany) We introduce graph motif parameters, a class of graph parameters that depend only on the frequencies of constant-size induced subgraphs. Classical works by Lovász show that many interesting quantities have this form, including, for fixed graphs H, the number of H-copies (induced or not) in an input graph G, and the number of homomorphisms from H to G. We use the framework of graph motif parameters to obtain faster algorithms for counting subgraph copies of fixed graphs H in host graphs G. More precisely, for graphs H on k edges, we show how to count subgraph copies of H in time kO(k)· n0.174k + o(k) by a surprisingly simple algorithm. This improves upon previously known running times, such as O(n0.91k + c) time for k-edge matchings or O(n0.46k + c) time for k-cycles. Furthermore, we prove a general complexity dichotomy for evaluating graph motif parameters: Given a class C of such parameters, we consider the problem of evaluating f∈ C on input graphs G, parameterized by the number of induced subgraphs that f depends upon. For every recursively enumerable class C, we prove the above problem to be either FPT or #W[1]-hard, with an explicit dichotomy criterion. This allows us to recover known dichotomies for counting subgraphs, induced subgraphs, and homomorphisms in a uniform and simplified way, together with improved lower bounds. Finally, we extend graph motif parameters to colored subgraphs and prove a complexity trichotomy: For vertex-colored graphs H and G, where H is from a fixed class of graphs, we want to count color-preserving H-copies in G. We show that this problem is either polynomial-time solvable or FPT or #W[1]-hard, and that the FPT cases indeed need FPT time under reasonable assumptions. @InProceedings{STOC17p210, author = {Radu Curticapean and Holger Dell and Dániel Marx}, title = {Homomorphisms Are a Good Basis for Counting Small Subgraphs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {210--223}, doi = {}, year = {2017}, } |
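Homomorphism counts, the basis of the framework above, have a one-line brute-force definition: maps from V(H) to V(G) that send edges to edges. A small sketch, exponential in |V(H)| only, which is the regime (constant-size H) that graph motif parameters live in:

```python
from itertools import product

def count_homs(H_vertices, H_edges, G_adj):
    """Brute-force count of homomorphisms H -> G: all maps V(H) -> V(G) that
    send every edge of H to an edge of G."""
    return sum(
        all(img[v] in G_adj[img[u]] for u, v in H_edges)
        for img in ({h: g for h, g in zip(H_vertices, choice)}
                    for choice in product(G_adj, repeat=len(H_vertices)))
    )

triangle_vs, triangle_es = [0, 1, 2], [(0, 1), (1, 2), (2, 0)]
G = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}  # G = K3
print(count_homs(triangle_vs, triangle_es, G))  # 6 homomorphisms
```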
|
Maus, Yannic |
STOC '17: "On the Complexity of Local ..."
On the Complexity of Local Distributed Graph Problems
Mohsen Ghaffari, Fabian Kuhn, and Yannic Maus (ETH Zurich, Switzerland; University of Freiburg, Germany) This paper is centered on the complexity of graph problems in the well-studied LOCAL model of distributed computing, introduced by Linial [FOCS ’87]. It is widely known that for many of the classic distributed graph problems (including maximal independent set (MIS) and (Δ+1)-vertex coloring), the randomized complexity is at most polylogarithmic in the size n of the network, while the best deterministic complexity is typically 2O(√logn). Understanding and potentially narrowing down this exponential gap is considered to be one of the central long-standing open questions in the area of distributed graph algorithms. We investigate the problem by introducing a complexity-theoretic framework that allows us to shed some light on the role of randomness in the LOCAL model. We define the SLOCAL model as a sequential version of the LOCAL model. Our framework allows us to prove completeness results with respect to the class of problems which can be solved efficiently in the SLOCAL model, implying that if any of the complete problems can be solved deterministically in polylogn rounds in the LOCAL model, we can deterministically solve all efficient SLOCAL-problems (including MIS and (Δ+1)-coloring) in polylogn rounds in the LOCAL model. Perhaps most surprisingly, we show that a rather rudimentary-looking graph coloring problem is complete in the above sense: Color the nodes of a graph with colors red and blue such that each node of sufficiently large polylogarithmic degree has at least one neighbor of each color. The problem admits a trivial zero-round randomized solution. The result can be viewed as showing that the only obstacle to getting efficient deterministic algorithms in the LOCAL model is an efficient algorithm to approximately round fractional values into integer values. In addition, our formal framework also allows us to develop polylogarithmic-time randomized distributed algorithms in a simpler way. As a result, we provide a polylog-time distributed approximation scheme for arbitrary distributed covering and packing integer linear programs. @InProceedings{STOC17p784, author = {Mohsen Ghaffari and Fabian Kuhn and Yannic Maus}, title = {On the Complexity of Local Distributed Graph Problems}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {784--797}, doi = {}, year = {2017}, } |
|
Mehraban, Saeed |
STOC '17: "The Computational Complexity ..."
The Computational Complexity of Ball Permutations
Scott Aaronson, Adam Bouland, Greg Kuperberg, and Saeed Mehraban (University of Texas at Austin, USA; Massachusetts Institute of Technology, USA; University of California at Davis, USA) We define several models of computation based on permuting distinguishable particles (which we call balls) and characterize their computational complexity. In the quantum setting, we use the representation theory of the symmetric group to find variants of this model which are intermediate between BPP and DQC1 (the class of problems solvable with one clean qubit) and between DQC1 and BQP. Furthermore, we consider a restricted version of this model based on an exactly solvable scattering problem of particles moving on a line. Despite the simplicity of this model from the perspective of mathematical physics, we show that if we allow intermediate destructive measurements and specific input states, then the model cannot be efficiently simulated classically up to multiplicative error unless the polynomial hierarchy collapses. Finally, we define a classical version of this model in which one can probabilistically permute balls. We find this yields a complexity class which is intermediate between L and BPP, and that a nondeterministic version of this model is NP-complete. @InProceedings{STOC17p317, author = {Scott Aaronson and Adam Bouland and Greg Kuperberg and Saeed Mehraban}, title = {The Computational Complexity of Ball Permutations}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {317--327}, doi = {}, year = {2017}, } |
|
Mehta, Ruta |
STOC '17: "Settling the Complexity of ..."
Settling the Complexity of Leontief and PLC Exchange Markets under Exact and Approximate Equilibria
Jugal Garg, Ruta Mehta, Vijay V. Vazirani, and Sadra Yazdanbod (University of Illinois at Urbana-Champaign, USA; Georgia Institute of Technology, USA) Our first result shows membership in PPAD for the problem of computing approximate equilibria for an Arrow-Debreu exchange market for piecewise-linear concave (PLC) utility functions. As a corollary we also obtain membership in PPAD for Leontief utility functions. This settles an open question of Vazirani and Yannakakis (2011). Next we show FIXP-hardness of computing equilibria in Arrow-Debreu exchange markets under Leontief utility functions, and Arrow-Debreu markets under linear utility functions and Leontief production sets, thereby settling these open questions of Vazirani and Yannakakis (2011). As corollaries, we obtain FIXP-hardness for PLC utilities and for Arrow-Debreu markets under linear utility functions and polyhedral production sets. In all cases, as required under FIXP, the set of instances mapped onto will admit equilibria, i.e., will be "yes" instances. If all instances are under consideration, then in all cases we prove that the problem of deciding if a given instance admits an equilibrium is ETR-complete, where ETR is the class Existential Theory of Reals. As a consequence of the results stated above, and the fact that membership in FIXP has been established for PLC utilities, the entire computational difficulty of Arrow-Debreu markets under PLC utility functions lies in the Leontief utility subcase. This is perhaps the most unexpected aspect of our result, since Leontief utilities are meant for the case that goods are perfect complements, whereas PLC utilities are very general, capturing not only the cases when goods are complements and substitutes, but also arbitrary combinations of these and much more. Finally, we give a polynomial time algorithm for finding an equilibrium in Arrow-Debreu exchange markets under Leontief utility functions provided the number of agents is a constant. This settles part of an open problem of Devanur and Kannan (2008). @InProceedings{STOC17p890, author = {Jugal Garg and Ruta Mehta and Vijay V. Vazirani and Sadra Yazdanbod}, title = {Settling the Complexity of Leontief and PLC Exchange Markets under Exact and Approximate Equilibria}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {890--901}, doi = {}, year = {2017}, } |
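For readers unfamiliar with Leontief utilities: they model perfect complements, and optimal demand has a simple closed form. A minimal illustrative sketch (names and numbers are hypothetical):

```python
def leontief_utility(x, a):
    """Leontief (perfect-complements) utility: u(x) = min_j x_j / a_j
    over goods with a_j > 0."""
    return min(xj / aj for xj, aj in zip(x, a) if aj > 0)

def leontief_demand(prices, budget, a):
    """The optimal bundle is proportional to a: spend the whole budget on
    'units' of the fixed-proportion bundle a."""
    unit_cost = sum(p * aj for p, aj in zip(prices, a))
    return [aj * budget / unit_cost for aj in a]

print(leontief_utility([4.0, 2.0], [2.0, 1.0]))               # 2.0 units
print(leontief_demand([1.0, 3.0], budget=10.0, a=[2.0, 1.0])) # [4.0, 2.0]
```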
|
Meka, Raghu |
STOC '17: "Approximating Rectangles by ..."
Approximating Rectangles by Juntas and Weakly-Exponential Lower Bounds for LP Relaxations of CSPs
Pravesh K. Kothari, Raghu Meka, and Prasad Raghavendra (Princeton University, USA; IAS, USA; University of California at Los Angeles, USA; University of California at Berkeley, USA) We show that for constraint satisfaction problems (CSPs), sub-exponential size linear programming relaxations are as powerful as n^{Ω(1)} rounds of the Sherali-Adams linear programming hierarchy. As a corollary, we obtain sub-exponential size lower bounds for linear programming relaxations that beat random guessing for many CSPs such as MAX-CUT and MAX-3SAT. This is a nearly-exponential improvement over previous results; previously, the best known lower bounds were quasi-polynomial in n (Chan, Lee, Raghavendra, Steurer 2013). Our bounds are obtained by exploiting and extending the recent progress in communication complexity for “lifting” query lower bounds to communication problems. The main ingredient in our results is a new structural result on “high-entropy rectangles” that may be of independent interest in communication complexity. @InProceedings{STOC17p590, author = {Pravesh K. Kothari and Raghu Meka and Prasad Raghavendra}, title = {Approximating Rectangles by Juntas and Weakly-Exponential Lower Bounds for LP Relaxations of CSPs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {590--603}, doi = {}, year = {2017}, } |
|
Meunier, Pierre-Étienne |
STOC '17: "The Non-cooperative Tile Assembly ..."
The Non-cooperative Tile Assembly Model Is Not Intrinsically Universal or Capable of Bounded Turing Machine Simulation
Pierre-Étienne Meunier and Damien Woods (Inria, France) The field of algorithmic self-assembly is concerned with the computational and expressive power of nanoscale self-assembling molecular systems. In the well-studied cooperative, or temperature 2, abstract tile assembly model it is known that there is a tile set to simulate any Turing machine and an intrinsically universal tile set that simulates the shapes and dynamics of any instance of the model, up to spatial rescaling. It has been an open question as to whether the seemingly simpler noncooperative, or temperature 1, model is capable of such behaviour. Here we show that this is not the case by showing that there is no tile set in the noncooperative model that is intrinsically universal, nor one capable of time-bounded Turing machine simulation within a bounded region of the plane. Although the noncooperative model intuitively seems to lack the complexity and power of the cooperative model it has been exceedingly hard to prove this. One reason is that there have been few tools to analyse the structure of complicated paths in the plane. This paper provides a number of such tools. A second reason is that almost every obvious and small generalisation to the model (e.g. allowing error, 3D, non-square tiles, signals/wires on tiles, tiles that repel each other, parallel synchronous growth) endows it with great computational, and sometimes simulation, power. Our main results show that all of these generalisations provably increase computational and/or simulation power. Our results hold for both deterministic and nondeterministic noncooperative systems. Our first main result stands in stark contrast with the fact that for both the cooperative tile assembly model, and for 3D noncooperative tile assembly, there are respective intrinsically universal tilesets. Our second main result gives a new technique (reduction to simulation) for proving negative results about computation in tile assembly. @InProceedings{STOC17p328, author = {Pierre-Étienne Meunier and Damien Woods}, title = {The Non-cooperative Tile Assembly Model Is Not Intrinsically Universal or Capable of Bounded Turing Machine Simulation}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {328--341}, doi = {}, year = {2017}, } |
|
Minzer, Dor |
STOC '17: "On Independent Sets, 2-to-2 ..."
On Independent Sets, 2-to-2 Games, and Grassmann Graphs
Subhash Khot, Dor Minzer, and Muli Safra (New York University, USA; Tel Aviv University, Israel) We present a candidate reduction from the 3-Lin problem to the 2-to-2 Games problem and present a combinatorial hypothesis about Grassmann graphs which, if correct, is sufficient to show the soundness of the reduction in a certain non-standard sense. A reduction that is sound in this non-standard sense implies that it is NP-hard to distinguish whether an n-vertex graph has an independent set of size (1 − 1/√2)n − o(n) or whether every independent set has size o(n), and consequently, that it is NP-hard to approximate the Vertex Cover problem within a factor √2 − o(1). @InProceedings{STOC17p576, author = {Subhash Khot and Dor Minzer and Muli Safra}, title = {On Independent Sets, 2-to-2 Games, and Grassmann Graphs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {576--589}, doi = {}, year = {2017}, } |
|
Moitra, Ankur |
STOC '17: "Approximate Counting, the ..."
Approximate Counting, the Lovász Local Lemma, and Inference in Graphical Models
Ankur Moitra (Massachusetts Institute of Technology, USA) In this paper we introduce a new approach for approximately counting in bounded degree systems with higher-order constraints. Our main result is an algorithm to approximately count the number of solutions to a CNF formula Φ when the width is logarithmic in the maximum degree. This closes an exponential gap between the known upper and lower bounds. Moreover our algorithm extends straightforwardly to approximate sampling, which shows that under Lovász Local Lemma-like conditions it is not only possible to find a satisfying assignment, it is also possible to generate one approximately uniformly at random from the set of all satisfying assignments. Our approach is a significant departure from earlier techniques in approximate counting, and is based on a framework to bootstrap an oracle for computing marginal probabilities on individual variables. Finally, we give an application of our results to show that it is algorithmically possible to sample from the posterior distribution in an interesting class of graphical models. @InProceedings{STOC17p356, author = {Ankur Moitra}, title = {Approximate Counting, the Lovász Local Lemma, and Inference in Graphical Models}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {356--369}, doi = {}, year = {2017}, } |
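To make the Lovász Local Lemma-style regime in the abstract above concrete, here is a minimal Python sketch (our illustration, not the paper's algorithm) checking the classical symmetric LLL condition e·p·(D+1) ≤ 1 for a width-w CNF whose clauses each share variables with at most D other clauses; note how the tolerable degree grows exponentially in the width, i.e., the width is logarithmic in the degree:

```python
import math

def lll_satisfiable(width, max_degree):
    """Symmetric Lovasz Local Lemma check for a CNF formula: each clause
    has `width` distinct literals (violated by a uniformly random
    assignment with probability p = 2**-width) and shares variables with
    at most `max_degree` other clauses. If e*p*(max_degree+1) <= 1,
    a satisfying assignment is guaranteed to exist."""
    p = 2.0 ** (-width)
    return math.e * p * (max_degree + 1) <= 1

# A width-10 formula tolerates clause degree about 2**10/e ~ 376:
print(lll_satisfiable(width=10, max_degree=375))  # True
print(lll_satisfiable(width=10, max_degree=400))  # False
```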
|
Montanari, Andrea |
STOC '17: "How Well Do Local Algorithms ..."
How Well Do Local Algorithms Solve Semidefinite Programs?
Zhou Fan and Andrea Montanari (Stanford University, USA) Several probabilistic models from high-dimensional statistics and machine learning reveal an intriguing and yet poorly understood dichotomy. Either simple local algorithms succeed in estimating the object of interest, or even sophisticated semi-definite programming (SDP) relaxations fail. In order to explore this phenomenon, we study a classical SDP relaxation of the minimum graph bisection problem, when applied to Erdős-Rényi random graphs with bounded average degree d > 1, and obtain several types of results. First, we use a dual witness construction (using the so-called non-backtracking matrix of the graph) to upper bound the SDP value. Second, we prove that a simple local algorithm approximately solves the SDP to within a factor 2d^2/(2d^2 + d - 1) of the upper bound. In particular, the local algorithm is at most 8/9 suboptimal, and 1 + O(d^{-1}) suboptimal for large degree. We then analyze a more sophisticated local algorithm, which aggregates information according to the harmonic measure on the limiting Galton-Watson (GW) tree. The resulting lower bound is expressed in terms of the conductance of the GW tree and matches surprisingly well the empirically determined SDP values on large-scale Erdős-Rényi graphs. We finally consider the planted partition model. In this case, purely local algorithms are known to fail, but they do succeed if a small amount of side information is available. Our results imply quantitative bounds on the threshold for partial recovery using SDP in this model. @InProceedings{STOC17p604, author = {Zhou Fan and Andrea Montanari}, title = {How Well Do Local Algorithms Solve Semidefinite Programs?}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {604--614}, doi = {}, year = {2017}, } |
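For a concrete feel of the guarantee above, the following snippet (an illustration we added, using only the factor quoted in the abstract) evaluates 2d²/(2d² + d − 1); over degrees d > 1 it is minimized at d = 2, where it equals 8/9, and approaches 1 as 1 − O(1/d):

```python
def local_vs_sdp_factor(d):
    """Fraction of the SDP upper bound achieved by the simple local
    algorithm, for average degree d, per the abstract's formula."""
    return 2 * d**2 / (2 * d**2 + d - 1)

for d in [2, 3, 10, 100]:
    print(d, local_vs_sdp_factor(d))
# d=2 gives 8/9 ~ 0.889, the worst case; the factor is 1 - O(1/d).
```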
|
Moran, Shay |
STOC '17: "Twenty (Simple) Questions ..."
Twenty (Simple) Questions
Yuval Dagan, Yuval Filmus, Ariel Gabizon, and Shay Moran (Technion, Israel; Zerocoin Electronic Coin, USA; University of California at San Diego, USA; Simons Institute for the Theory of Computing Berkeley, USA) A basic combinatorial interpretation of Shannon’s entropy function is via the “20 questions” game. This cooperative game is played by two players, Alice and Bob: Alice picks a distribution π over the numbers {1,…,n}, and announces it to Bob. She then chooses a number x according to π, and Bob attempts to identify x using as few Yes/No queries as possible, on average. An optimal strategy for the “20 questions” game is given by a Huffman code for π: Bob’s questions reveal the codeword for x bit by bit. This strategy finds x using fewer than H(π)+1 questions on average. However, the questions asked by Bob could be arbitrary. In this paper, we investigate the following question: *Are there restricted sets of questions that match the performance of Huffman codes, either exactly or approximately?* Our first main result shows that for every distribution π, Bob has a strategy that uses only questions of the form “x < c?” and “x = c?”, and uncovers x using at most H(π)+1 questions on average, matching the performance of Huffman codes in this sense. We also give a natural set of O(r n^{1/r}) questions that achieve a performance of at most H(π)+r, and show that Ω(r n^{1/r}) questions are required to achieve such a guarantee. Our second main result gives a set Q of 1.25^{n+o(n)} questions such that for every distribution π, Bob can implement an optimal strategy for π using only questions from Q. We also show that 1.25^{n−o(n)} questions are needed, for infinitely many n. If we allow a small slack of r over the optimal strategy, then roughly (rn)^{Θ(1/r)} questions are necessary and sufficient. @InProceedings{STOC17p9, author = {Yuval Dagan and Yuval Filmus and Ariel Gabizon and Shay Moran}, title = {Twenty (Simple) Questions}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {9--21}, doi = {}, year = {2017}, } |
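The Huffman benchmark in the abstract is easy to check empirically. The sketch below (our illustration, not from the paper) computes the expected number of questions of the bit-by-bit Huffman strategy via the standard merge-cost identity and verifies that it stays below H(π) + 1:

```python
import heapq, itertools, math, random

def huffman_expected_questions(probs):
    """Expected codeword length of a Huffman code for `probs`: each merge
    of weights p1, p2 adds one bit to every leaf below it, so the total
    expected length is the sum of all merge weights."""
    tiebreak = itertools.count()  # avoids comparing non-numeric heap payloads
    heap = [(p, next(tiebreak)) for p in probs]
    heapq.heapify(heap)
    expected = 0.0
    while len(heap) > 1:
        p1, _ = heapq.heappop(heap)
        p2, _ = heapq.heappop(heap)
        expected += p1 + p2
        heapq.heappush(heap, (p1 + p2, next(tiebreak)))
    return expected

random.seed(0)
weights = [random.random() for _ in range(16)]
total = sum(weights)
pi = [w / total for w in weights]
entropy = -sum(p * math.log2(p) for p in pi)
print(huffman_expected_questions(pi) <= entropy + 1)  # True
```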
|
Mori, Ryuhei |
STOC '17: "Sum of Squares Lower Bounds ..."
Sum of Squares Lower Bounds for Refuting any CSP
Pravesh K. Kothari, Ryuhei Mori, Ryan O'Donnell, and David Witmer (Princeton University, USA; IAS, USA; Tokyo Institute of Technology, Japan; Carnegie Mellon University, USA) Let P : {0,1}^k → {0,1} be a nontrivial k-ary predicate. Consider a random instance of the constraint satisfaction problem CSP(P) on n variables with Δ·n constraints, each being P applied to k randomly chosen literals. Provided the constraint density satisfies Δ ≫ 1, such an instance is unsatisfiable with high probability. The refutation problem is to efficiently find a proof of unsatisfiability. We show that whenever the predicate P supports a t-wise uniform probability distribution on its satisfying assignments, the sum of squares (SOS) algorithm of degree d = Θ(n/Δ^{2/(t−1)} log Δ) (which runs in time n^{O(d)}) cannot refute a random instance of CSP(P). In particular, the polynomial-time SOS algorithm requires Ω(n^{(t+1)/2}) constraints to refute random instances of CSP(P) when P supports a t-wise uniform distribution on its satisfying assignments. Together with recent work of Lee et al. (Lee, Raghavendra, Steurer 2015), our result also implies that any polynomial-size semidefinite programming relaxation for refutation requires at least Ω(n^{(t+1)/2}) constraints. More generally, we consider the δ-refutation problem, in which the goal is to certify that at most a (1−δ)-fraction of constraints can be simultaneously satisfied. We show that if P is δ-close to supporting a t-wise uniform distribution on satisfying assignments, then the degree-Ω(n/Δ^{2/(t−1)} log Δ) SOS algorithm cannot (δ+o(1))-refute a random instance of CSP(P). This is the first result to show a distinction between the degree SOS needs to solve the refutation problem and the degree it needs to solve the harder δ-refutation problem. Our results (which also extend with no change to CSPs over larger alphabets) subsume all previously known lower bounds for semialgebraic refutation of random CSPs. For every constraint predicate P, they give a three-way hardness tradeoff between the density of constraints, the SOS degree (hence running time), and the strength of the refutation. By recent algorithmic results of Allen, O’Donnell, Witmer (2015) and Raghavendra, Rao, Schramm (2016), this full three-way tradeoff is tight, up to lower-order factors. @InProceedings{STOC17p132, author = {Pravesh K. Kothari and Ryuhei Mori and Ryan O'Donnell and David Witmer}, title = {Sum of Squares Lower Bounds for Refuting any CSP}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {132--145}, doi = {}, year = {2017}, } |
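The key hypothesis above, "P supports a t-wise uniform distribution on its satisfying assignments", can be tested directly for small predicates. This little checker (our illustration) confirms that 3-XOR supports a 2-wise but not a 3-wise uniform distribution, so the theorem gives an Ω(n^{(2+1)/2}) = Ω(n^{3/2}) constraint bound for polynomial-time SOS refutation of random 3-XOR:

```python
from itertools import combinations, product

def t_wise_uniform(assignments, t, k):
    """Return True if the uniform distribution over `assignments`
    (tuples in {0,1}^k) is t-wise uniform: the marginal on every set
    of t coordinates is exactly uniform on {0,1}^t."""
    n = len(assignments)
    for coords in combinations(range(k), t):
        counts = {}
        for a in assignments:
            key = tuple(a[i] for i in coords)
            counts[key] = counts.get(key, 0) + 1
        if len(counts) != 2**t or any(c * 2**t != n for c in counts.values()):
            return False
    return True

# Satisfying assignments of the 3-XOR predicate x1 + x2 + x3 = 1 (mod 2):
sat = [a for a in product((0, 1), repeat=3) if a[0] ^ a[1] ^ a[2] == 1]
print(t_wise_uniform(sat, 2, 3))  # True  -> Omega(n^{3/2}) constraints needed
print(t_wise_uniform(sat, 3, 3))  # False -> no 3-wise uniform support
```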
|
Moseley, Benjamin |
STOC '17: "Efficient Massively Parallel ..."
Efficient Massively Parallel Methods for Dynamic Programming
Sungjin Im, Benjamin Moseley, and Xiaorui Sun (University of California at Merced, USA; Washington University at St. Louis, USA; Simons Institute for the Theory of Computing Berkeley, USA) Modern science and engineering are driven by massively large data sets, and their advance relies heavily on massively parallel computing platforms such as Spark, MapReduce, and Hadoop. Theoretical models have been proposed to understand the power and limitations of such platforms. Recent study of these theoretical models has led to the discovery of new algorithms that are fast and efficient in both theory and practice, thereby beginning to unlock their underlying power. Given recent promising results, the area has turned its focus on discovering widely applicable algorithmic techniques for solving problems efficiently. In this paper we make progress towards this goal by giving a principled framework for simulating sequential dynamic programs in the distributed setting. In particular, we identify two key properties, monotonicity and decomposability, which allow us to derive efficient distributed algorithms for problems possessing these properties. We showcase our framework by considering several core dynamic programming applications: Longest Increasing Subsequence, Optimal Binary Search Tree, and Weighted Interval Selection. For these problems, we derive algorithms yielding solutions that are arbitrarily close to the optimum, using O(1) rounds and Õ(n/m) memory on each machine, where n is the input size and m is the number of machines available. @InProceedings{STOC17p798, author = {Sungjin Im and Benjamin Moseley and Xiaorui Sun}, title = {Efficient Massively Parallel Methods for Dynamic Programming}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {798--811}, doi = {}, year = {2017}, } |
|
Mukhopadhyay, Partha |
STOC '17: "Randomized Polynomial Time ..."
Randomized Polynomial Time Identity Testing for Noncommutative Circuits
V. Arvind, Pushkar S Joglekar, Partha Mukhopadhyay, and S. Raja (Institute of Mathematical Sciences, India; Vishwakarma Institute of Technology Pune, India; Chennai Mathematical Institute, India) In this paper we show that black-box polynomial identity testing for noncommutative polynomials f ∈ F⟨z_1, z_2, …, z_n⟩ of degree D and sparsity t can be done in randomized poly(n, log t, log D) time. As a consequence, given a circuit C of size s computing a polynomial f ∈ F⟨z_1, z_2, …, z_n⟩ with at most t non-zero monomials, testing if f is identically zero can be done by a randomized algorithm with running time polynomial in s, n, and log t. This makes significant progress on a question that has been open for over ten years. Our algorithm is based on automata-theoretic ideas that can efficiently isolate a monomial in the given polynomial. In particular, we carry out the monomial isolation using nondeterministic automata. In general, noncommutative circuits of size s can compute polynomials of degree exponential in s and number of monomials double-exponential in s. In this paper, we consider a natural class of homogeneous noncommutative circuits, which we call +-regular circuits, and give a white-box polynomial time deterministic polynomial identity test. These circuits can compute noncommutative polynomials with number of monomials double-exponential in the circuit size. Our algorithm combines some new structural results for +-regular circuits with known results for noncommutative ABP identity testing, the rank bound of commutative depth three identities, and the equivalence testing problem for words. Finally, we consider the black-box identity testing problem for depth three +-regular circuits and give a randomized polynomial time identity test. In particular, we show that if f ∈ F⟨Z⟩ is a nonzero noncommutative polynomial computed by a depth three +-regular circuit of size s, then f cannot be a polynomial identity for the matrix algebra M_s(F) when F is sufficiently large depending on the degree of f. @InProceedings{STOC17p831, author = {V. Arvind and Pushkar S Joglekar and Partha Mukhopadhyay and S. Raja}, title = {Randomized Polynomial Time Identity Testing for Noncommutative Circuits}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {831--841}, doi = {}, year = {2017}, } |
|
Nanongkai, Danupon |
STOC '17: "Dynamic Spanning Forest with ..."
Dynamic Spanning Forest with Worst-Case Update Time: Adaptive, Las Vegas, and O(n^{1/2 - ε})-Time
Danupon Nanongkai and Thatchaphol Saranurak (KTH, Sweden) We present two algorithms for dynamically maintaining a spanning forest of a graph undergoing edge insertions and deletions. Our algorithms guarantee worst-case update time and work against an adaptive adversary, meaning that an edge update can depend on previous outputs of the algorithms. We provide the first polynomial improvement over the long-standing O(√n) bound of [Frederickson STOC’84, Eppstein, Galil, Italiano and Nissenzweig FOCS’92] for this type of algorithm. The previously best improvement was O(√n (log log n)^2/log n) [Kejlberg-Rasmussen, Kopelowitz, Pettie and Thorup ESA’16]. We note however that these bounds were obtained by deterministic algorithms while our algorithms are randomized. Our first algorithm is Monte Carlo and guarantees an O(n^{0.4+o(1)}) worst-case update time, where the o(1) term hides an O(√(log log n/log n)) factor. Our second algorithm is Las Vegas and guarantees an O(n^{0.49306}) worst-case update time with high probability. Algorithms with better update time either needed to assume that the adversary is oblivious (e.g. [Kapron, King and Mountjoy SODA’13]) or could only guarantee an amortized update time. Our second result answers an open problem by Kapron et al. To the best of our knowledge, our algorithms are among the few non-trivial randomized dynamic algorithms that work against adaptive adversaries. The key to our results is a decomposition of graphs into subgraphs that either have high expansion or are sparse. This decomposition serves as an interface between recent developments on (static) flow computation and many old ideas in dynamic graph algorithms: On the one hand, we can combine previous dynamic graph techniques to get faster dynamic spanning forest algorithms if such a decomposition is given. On the other hand, we can adapt flow-related techniques (e.g. those from [Khandekar, Rao and Vazirani STOC’06], [Peng SODA’16], and [Orecchia and Zhu SODA’14]) to maintain such a decomposition. To the best of our knowledge, this is the first time these flow techniques are used in fully dynamic graph algorithms. @InProceedings{STOC17p1122, author = {Danupon Nanongkai and Thatchaphol Saranurak}, title = {Dynamic Spanning Forest with Worst-Case Update Time: Adaptive, Las Vegas, and O(n<sup>1/2 - ε</sup>)-Time}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1122--1129}, doi = {}, year = {2017}, } |
|
Naor, Assaf |
STOC '17: "The Integrality Gap of the ..."
The Integrality Gap of the Goemans-Linial SDP Relaxation for Sparsest Cut Is at Least a Constant Multiple of √log n
Assaf Naor and Robert Young (Princeton University, USA; New York University, USA) We prove that the integrality gap of the Goemans–Linial semidefinite programming relaxation for the Sparsest Cut Problem is Ω(√log n) on inputs with n vertices, thus matching the previously best known upper bound (log n)^{1/2+o(1)} up to lower-order factors. This statement is a consequence of the following new isoperimetric-type inequality. Consider the 8-regular graph whose vertex set is the 5-dimensional integer grid ℤ^5 and where each vertex (a,b,c,d,e) ∈ ℤ^5 is connected to the 8 vertices (a±1, b, c, d, e), (a, b±1, c, d, e), (a, b, c±1, d, e±a), (a, b, c, d±1, e±b). This graph is known as the Cayley graph of the 5-dimensional discrete Heisenberg group. Given Ω ⊂ ℤ^5, denote the size of its edge boundary in this graph (a.k.a. the horizontal perimeter of Ω) by |∂_h Ω|. For t ∈ ℕ, denote by |∂_v^t Ω| the number of (a,b,c,d,e) ∈ ℤ^5 such that exactly one of the two vectors (a,b,c,d,e), (a,b,c,d,e+t) is in Ω. The vertical perimeter of Ω is defined to be |∂_v Ω| = √(∑_{t=1}^∞ |∂_v^t Ω|^2/t^2). We show that every subset Ω ⊂ ℤ^5 satisfies |∂_v Ω| = O(|∂_h Ω|). This vertical-versus-horizontal isoperimetric inequality yields the above-stated integrality gap for Sparsest Cut and answers several geometric and analytic questions of independent interest. The theorem stated above is the culmination of a program whose aim is to understand the performance of the Goemans–Linial semidefinite program through the embeddability properties of Heisenberg groups. These investigations have mathematical significance even beyond their established relevance to approximation algorithms and combinatorial optimization. In particular they contribute to a range of mathematical disciplines including functional analysis, geometric group theory, harmonic analysis, sub-Riemannian geometry, geometric measure theory, ergodic theory, group representations, and metric differentiation. This article builds on the above cited works, with the “twist” that while those works were equally valid for any finite dimensional Heisenberg group, our result holds for the Heisenberg group of dimension 5 (or higher) but fails for the 3-dimensional Heisenberg group. This insight leads to our core contribution, which is a deduction of an endpoint L_1-boundedness of a certain singular integral on ℝ^5 from the (local) L_2-boundedness of the corresponding singular integral on ℝ^3. To do this, we devise a corona-type decomposition of subsets of a Heisenberg group, in the spirit of the construction that David and Semmes performed in ℝ^n, but with two main conceptual differences (in addition to more technical differences that arise from the peculiarities of the geometry of the Heisenberg group). Firstly, the “atoms” of our decomposition are perturbations of intrinsic Lipschitz graphs in the sense of Franchi, Serapioni, and Serra Cassano (plus the requisite “wild” regions that satisfy a Carleson packing condition). Secondly, we control the local overlap of our corona decomposition by using quantitative monotonicity rather than Jones-type β-numbers. @InProceedings{STOC17p564, author = {Assaf Naor and Robert Young}, title = {The Integrality Gap of the Goemans-Linial SDP Relaxation for Sparsest Cut Is at Least a Constant Multiple of √log n}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {564--575}, doi = {}, year = {2017}, } |
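The perimeter definitions above are concrete enough to compute. Below is a small Python sketch (our illustration; the box shape and the truncation of the infinite sum are arbitrary choices we made) that builds the degree-8 neighborhood from the abstract and compares the horizontal and vertical perimeters of a box, in the spirit of the inequality |∂_v Ω| = O(|∂_h Ω|):

```python
import math

def neighbors(v):
    # The 8 neighbors of (a,b,c,d,e) in the Cayley graph of the
    # 5-dimensional discrete Heisenberg group, as defined in the abstract.
    a, b, c, d, e = v
    return [(a + 1, b, c, d, e), (a - 1, b, c, d, e),
            (a, b + 1, c, d, e), (a, b - 1, c, d, e),
            (a, b, c + 1, d, e + a), (a, b, c - 1, d, e - a),
            (a, b, c, d + 1, e + b), (a, b, c, d - 1, e - b)]

def horizontal_perimeter(omega):
    # Number of graph edges leaving omega, counted from the inside.
    return sum(1 for v in omega for w in neighbors(v) if w not in omega)

def vertical_perimeter(omega, t_max=32):
    # |d_v Omega| = sqrt(sum_t |d_v^t Omega|^2 / t^2), truncated at t_max.
    total = 0.0
    for t in range(1, t_max + 1):
        boundary_t = (sum(1 for (a, b, c, d, e) in omega
                          if (a, b, c, d, e + t) not in omega)
                      + sum(1 for (a, b, c, d, e) in omega
                            if (a, b, c, d, e - t) not in omega))
        total += boundary_t ** 2 / t ** 2
    return math.sqrt(total)

box = {(a, b, c, d, e) for a in range(4) for b in range(4)
       for c in range(4) for d in range(4) for e in range(4)}
print(horizontal_perimeter(box), round(vertical_perimeter(box), 1))
```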
|
Natarajan, Anand |
STOC '17: "A Quantum Linearity Test for ..."
A Quantum Linearity Test for Robustly Verifying Entanglement
Anand Natarajan and Thomas Vidick (Massachusetts Institute of Technology, USA; California Institute of Technology, USA) We introduce a simple two-player test which certifies that the players apply tensor products of Pauli σ_X and σ_Z observables on the tensor product of n EPR pairs. The test has constant robustness: any strategy achieving success probability within an additive ε of the optimal must be poly(ε)-close, in the appropriate distance measure, to the honest n-qubit strategy. The test involves 2n-bit questions and 2-bit answers. The key technical ingredient is a quantum version of the classical linearity test of Blum, Luby, and Rubinfeld. As applications of our result we give (i) the first robust self-test for n EPR pairs; (ii) a quantum multiprover interactive proof system for the local Hamiltonian problem with a constant number of provers and classical questions and answers, and a constant completeness-soundness gap independent of system size; (iii) a robust protocol for verifiable delegated quantum computation with a constant number of quantum polynomial-time provers sharing entanglement. @InProceedings{STOC17p1003, author = {Anand Natarajan and Thomas Vidick}, title = {A Quantum Linearity Test for Robustly Verifying Entanglement}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1003--1015}, doi = {}, year = {2017}, } |
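For readers unfamiliar with the classical test being "quantized" here, this is the Blum-Luby-Rubinfeld linearity test itself, in a minimal Python sketch we added for illustration; linear functions (parities) always pass, while functions far from linear are rejected with constant probability per trial:

```python
import random

def blr_test(f, n, trials=1000):
    # Blum-Luby-Rubinfeld: accept iff f(x) XOR f(y) == f(x XOR y)
    # holds for `trials` independently random pairs (x, y).
    for _ in range(trials):
        x = [random.randint(0, 1) for _ in range(n)]
        y = [random.randint(0, 1) for _ in range(n)]
        if f(x) ^ f(y) != f([a ^ b for a, b in zip(x, y)]):
            return False
    return True

def parity(x):            # linear over GF(2): always passes
    return x[0] ^ x[3] ^ x[5]

def majority(x):          # far from linear: rejected w.h.p.
    return int(sum(x) > len(x) // 2)

random.seed(0)
print(blr_test(parity, 8), blr_test(majority, 8))  # True False
```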
|
Nazarov, Fedor |
STOC '17: "Trace Reconstruction with ..."
Trace Reconstruction with exp(O(n^{1/3})) Samples
Fedor Nazarov and Yuval Peres (Kent State University, USA; Microsoft Research, USA) In the trace reconstruction problem, an unknown bit string x ∈ {0,1}^n is observed through the deletion channel, which deletes each bit of x with some constant probability q, yielding a contracted string x̃. How many independent copies of x̃ are needed to reconstruct x with high probability? Prior to this work, the best upper bound, due to Holenstein, Mitzenmacher, Panigrahy, and Wieder (2008), was exp(O(n^{1/2})). We improve this bound to exp(O(n^{1/3})) using statistics of individual bits in the output and show that this bound is sharp in the restricted model where this is the only information used. Our method, which uses elementary complex analysis, can also handle insertions. Similar results were obtained independently and simultaneously by Anindya De, Ryan O’Donnell and Rocco Servedio. @InProceedings{STOC17p1042, author = {Fedor Nazarov and Yuval Peres}, title = {Trace Reconstruction with exp(O(n<sup>1/3</sup>)) Samples}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1042--1046}, doi = {}, year = {2017}, } |
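The "statistics of individual bits" that drive the bound are easy to simulate. This toy sketch (ours; the actual reconstruction inverts these means via complex analysis, which we do not attempt) pushes a string through the deletion channel and collects per-position empirical means:

```python
import random

def deletion_channel(x, q):
    # Delete each bit of x independently with probability q.
    return [b for b in x if random.random() > q]

def trace_bit_means(x, q, samples, positions=6):
    # Empirical mean of each of the first few bits of the trace --
    # the only information the restricted "mean-based" model may use.
    sums = [0] * positions
    for _ in range(samples):
        t = deletion_channel(x, q)
        for j in range(min(positions, len(t))):
            sums[j] += t[j]
    return [s / samples for s in sums]

random.seed(0)
x = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1]
print(trace_bit_means(x, q=0.3, samples=20000))
```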
|
Nederlof, Jesper |
STOC '17: "Faster Space-Efficient Algorithms ..."
Faster Space-Efficient Algorithms for Subset Sum and k-Sum
Nikhil Bansal, Shashwat Garg, Jesper Nederlof, and Nikhil Vyas (Eindhoven University of Technology, Netherlands; IIT Bombay, India) We present randomized algorithms that solve Subset Sum and Knapsack instances with n items in O*(2^{0.86n}) time, where the O*(·) notation suppresses factors polynomial in the input size, and polynomial space, assuming random read-only access to exponentially many random bits. These results can be extended to solve Binary Linear Programming on n variables with few constraints in a similar running time. We also show that for any constant k ≥ 2, random instances of k-Sum can be solved using O(n^{k−0.5} polylog(n)) time and O(log n) space, without the assumption of random access to random bits. Underlying these results is an algorithm that determines whether two given lists of length n with integers bounded by a polynomial in n share a common value. Assuming random read-only access to random bits, we show that this problem can be solved using O(log n) space significantly faster than the trivial O(n^2) time algorithm if no value occurs too often in the same list. @InProceedings{STOC17p198, author = {Nikhil Bansal and Shashwat Garg and Jesper Nederlof and Nikhil Vyas}, title = {Faster Space-Efficient Algorithms for Subset Sum and k-Sum}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {198--209}, doi = {}, year = {2017}, } |
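The list-intersection subproblem in the last sentence has a trivial linear-time, linear-space baseline, and classic meet-in-the-middle for Subset Sum shows the time/space trade-off the paper improves on; both are sketched below as context (our illustrations, not the paper's algorithms):

```python
def share_common_value(xs, ys):
    # Trivial baseline: O(n) time but also O(n) space; the paper asks
    # for O(log n) space while beating the O(n^2)-time pairwise scan.
    return not set(xs).isdisjoint(ys)

def subset_sum_mitm(items, target):
    # Meet-in-the-middle: O*(2^(n/2)) time AND space, versus the
    # paper's O*(2^(0.86n)) time with only polynomial space.
    half = len(items) // 2
    def sums(part):
        acc = {0}
        for v in part:
            acc |= {s + v for s in acc}
        return acc
    left, right = sums(items[:half]), sums(items[half:])
    return any(target - s in right for s in left)

print(share_common_value([3, 1, 4], [9, 2, 6, 4]))   # True
print(subset_sum_mitm([3, 34, 4, 12, 5, 2], 9))      # True (4 + 5)
print(subset_sum_mitm([3, 34, 4, 12, 5, 2], 30))     # False
```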
|
Nguyen, Danny |
STOC '17: "Complexity of Short Presburger ..."
Complexity of Short Presburger Arithmetic
Danny Nguyen and Igor Pak (University of California at Los Angeles, USA) We study the complexity of short sentences in Presburger arithmetic (Short-PA). Here by “short” we mean sentences with a bounded number of variables, quantifiers, inequalities and Boolean operations; the input consists only of the integers involved in the inequalities. We prove that, assuming Kannan’s partition can be found in polynomial time, the satisfiability of Short-PA sentences can be decided in polynomial time. Furthermore, under the same assumption, we show that the number of satisfying assignments of short Presburger sentences can also be computed in polynomial time. @InProceedings{STOC17p812, author = {Danny Nguyen and Igor Pak}, title = {Complexity of Short Presburger Arithmetic}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {812--820}, doi = {}, year = {2017}, } |
|
Nguyen, Huy L. |
STOC '17: "Approximate Near Neighbors ..."
Approximate Near Neighbors for General Symmetric Norms
Alexandr Andoni, Huy L. Nguyen, Aleksandar Nikolov, Ilya Razenshteyn, and Erik Waingarten (Columbia University, USA; Northeastern University, USA; University of Toronto, Canada; Massachusetts Institute of Technology, USA) We show that every *symmetric* normed space admits an efficient nearest neighbor search data structure with doubly-logarithmic approximation. Specifically, for every n, every d = n^{o(1)}, and every d-dimensional symmetric norm ||·||, there exists a data structure for (log log n)-approximate nearest neighbor search over ||·|| for n-point datasets achieving n^{o(1)} query time and n^{1+o(1)} space. The main technical ingredient of the algorithm is a low-distortion embedding of a symmetric norm into a low-dimensional iterated product of top-k norms. We also show that our techniques cannot be extended to *general* norms. @InProceedings{STOC17p902, author = {Alexandr Andoni and Huy L. Nguyen and Aleksandar Nikolov and Ilya Razenshteyn and Erik Waingarten}, title = {Approximate Near Neighbors for General Symmetric Norms}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {902--913}, doi = {}, year = {2017}, } |
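The building block of the embedding, the top-k norm, is one line of code; a quick sketch (our illustration) of the definition used above:

```python
def top_k_norm(x, k):
    """Top-k norm: the sum of the k largest coordinates in absolute
    value. k=1 gives the max-norm, k=len(x) gives the l1-norm; these
    are the norms the low-dimensional iterated product is built from."""
    return sum(sorted((abs(v) for v in x), reverse=True)[:k])

x = [3.0, -7.0, 1.5, 0.0, 2.5]
print(top_k_norm(x, 1), top_k_norm(x, 2), top_k_norm(x, 5))  # 7.0 10.0 14.0
```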
|
Niazadeh, Rad |
STOC '17: "Bernoulli Factories and Black-Box ..."
Bernoulli Factories and Black-Box Reductions in Mechanism Design
Shaddin Dughmi, Jason D. Hartline, Robert Kleinberg, and Rad Niazadeh (University of Southern California, USA; Northwestern University, USA; Cornell University, USA) We provide a polynomial-time reduction from Bayesian incentive-compatible mechanism design to Bayesian algorithm design for welfare maximization problems. Unlike prior results, our reduction achieves exact incentive compatibility for problems with multi-dimensional and continuous type spaces. The key technical barrier preventing exact incentive compatibility in prior black-box reductions is that repairing violations of incentive constraints requires understanding the distribution of the mechanism’s output, which is typically #P-hard to compute. Reductions that instead estimate the output distribution by sampling inevitably suffer from sampling error, which typically precludes exact incentive compatibility. We overcome this barrier by employing and generalizing the computational model in the literature on “Bernoulli Factories”. In a Bernoulli factory problem, one is given a function mapping the bias of an “input coin” to that of an “output coin”, and the challenge is to efficiently simulate the output coin given only sample access to the input coin. Consider a generalization which we call the “expectations from samples” computational model, in which a problem instance is specified by a function mapping the expected values of a set of input distributions to a distribution over outcomes. The challenge is to give a polynomial time algorithm that exactly samples from the distribution over outcomes given only sample access to the input distributions. In this model we give a polynomial time algorithm for the function given by “exponential weights”: expected values of the input distributions correspond to the weights of alternatives and we wish to select an alternative with probability proportional to its weight. This algorithm is the key ingredient in designing an incentive compatible mechanism for bipartite matching, which can be used to make the approximately incentive compatible reduction of Hartline-Malekian-Kleinberg [2015] exactly incentive compatible. @InProceedings{STOC17p158, author = {Shaddin Dughmi and Jason D. Hartline and Robert Kleinberg and Rad Niazadeh}, title = {Bernoulli Factories and Black-Box Reductions in Mechanism Design}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {158--169}, doi = {}, year = {2017}, } |
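As a toy instance of the "expectations from samples" model described above (ours, for illustration; the paper's exponential-weights factory is far more delicate), the AND of one flip from each of two coins is itself a Bernoulli factory for the product of the biases:

```python
import random

def coin(p):
    # Sample access to a coin of unknown bias p.
    return random.random() < p

def product_factory(flip_a, flip_b):
    # Outputs one flip of a coin with bias p*q given one flip of a
    # bias-p coin and one flip of a bias-q coin: E[AND] = p*q exactly.
    return flip_a() and flip_b()

random.seed(0)
trials = 100_000
hits = sum(product_factory(lambda: coin(0.6), lambda: coin(0.5))
           for _ in range(trials))
print(hits / trials)  # close to 0.30 = 0.6 * 0.5
```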
|
Nikolaenko, Valeria |
STOC '17: "Practical Post-quantum Key ..."
Practical Post-quantum Key Agreement from Generic Lattices (Invited Talk)
Valeria Nikolaenko (Stanford University, USA) Lattice-based cryptography offers some of the most attractive primitives believed to be resistant to quantum computers. This work introduces "Frodo" - a concrete instantiation of a key agreement mechanism based on hard problems in generic lattices. @InProceedings{STOC17p8, author = {Valeria Nikolaenko}, title = {Practical Post-quantum Key Agreement from Generic Lattices (Invited Talk)}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {8--8}, doi = {}, year = {2017}, } |
|
Nikolov, Aleksandar |
STOC '17: "Approximate Near Neighbors ..."
Approximate Near Neighbors for General Symmetric Norms
Alexandr Andoni, Huy L. Nguyen, Aleksandar Nikolov, Ilya Razenshteyn, and Erik Waingarten (Columbia University, USA; Northeastern University, USA; University of Toronto, Canada; Massachusetts Institute of Technology, USA) We show that every *symmetric* normed space admits an efficient nearest neighbor search data structure with doubly-logarithmic approximation. Specifically, for every n, every d = n^{o(1)}, and every d-dimensional symmetric norm ||·||, there exists a data structure for (log log n)-approximate nearest neighbor search over ||·|| for n-point datasets achieving n^{o(1)} query time and n^{1+o(1)} space. The main technical ingredient of the algorithm is a low-distortion embedding of a symmetric norm into a low-dimensional iterated product of top-k norms. We also show that our techniques cannot be extended to *general* norms. @InProceedings{STOC17p902, author = {Alexandr Andoni and Huy L. Nguyen and Aleksandar Nikolov and Ilya Razenshteyn and Erik Waingarten}, title = {Approximate Near Neighbors for General Symmetric Norms}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {902--913}, doi = {}, year = {2017}, } |
|
Nimavat, Rachit |
STOC '17: "New Hardness Results for Routing ..."
New Hardness Results for Routing on Disjoint Paths
Julia Chuzhoy, David H. K. Kim, and Rachit Nimavat (Toyota Technological Institute at Chicago, USA; University of Chicago, USA) In the classical Node-Disjoint Paths (NDP) problem, the input consists of an undirected n-vertex graph G, and a collection M = {(s_1,t_1),…,(s_k,t_k)} of pairs of its vertices, called source-destination, or demand, pairs. The goal is to route the largest possible number of the demand pairs via node-disjoint paths. The best current approximation for the problem is achieved by a simple greedy algorithm, whose approximation factor is O(√n), while the best current negative result is an Ω(log^{1/2−δ} n)-hardness of approximation for any constant δ, under standard complexity assumptions. Even seemingly simple special cases of the problem are still poorly understood: when the input graph is a grid, the best current algorithm achieves an Õ(n^{1/4})-approximation, and when it is a general planar graph, the best current approximation ratio of an efficient algorithm is Õ(n^{9/19}). The best currently known lower bound for both these versions of the problem is APX-hardness. In this paper we prove that NDP is 2^{Ω(√log n)}-hard to approximate, unless all problems in NP have algorithms with running time n^{O(log n)}. Our result holds even when the underlying graph is a planar graph with maximum vertex degree 4, and all source vertices lie on the boundary of a single face (but the destination vertices may lie anywhere in the graph). We extend this result to the closely related Edge-Disjoint Paths problem, showing the same hardness of approximation ratio even for sub-cubic planar graphs with all sources lying on the boundary of a single face. @InProceedings{STOC17p86, author = {Julia Chuzhoy and David H. K. Kim and Rachit Nimavat}, title = {New Hardness Results for Routing on Disjoint Paths}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {86--99}, doi = {}, year = {2017}, } |
|
Nisan, Noam |
STOC '17: "Efficient Empirical Revenue ..."
Efficient Empirical Revenue Maximization in Single-Parameter Auction Environments
Yannai A. Gonczarowski and Noam Nisan (Hebrew University of Jerusalem, Israel; Microsoft Research, Israel) We present a polynomial-time algorithm that, given samples from the unknown valuation distribution of each bidder, learns an auction that approximately maximizes the auctioneer's revenue in a variety of single-parameter auction environments including matroid environments, position environments, and the public project environment. The valuation distributions may be arbitrary bounded distributions (in particular, they may be irregular, and may differ for the various bidders), thus resolving a problem left open by previous papers. The analysis uses basic tools, is performed in its entirety in value-space, and simplifies the analysis of previously known results for special cases. Furthermore, the analysis extends to certain single-parameter auction environments where precise revenue maximization is known to be intractable, such as knapsack environments. @InProceedings{STOC17p856, author = {Yannai A. Gonczarowski and Noam Nisan}, title = {Efficient Empirical Revenue Maximization in Single-Parameter Auction Environments}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {856--868}, doi = {}, year = {2017}, } STOC '17: "The Menu-Size Complexity of ..." The Menu-Size Complexity of Revenue Approximation Moshe Babaioff, Yannai A. Gonczarowski, and Noam Nisan (Microsoft Research, Israel; Hebrew University of Jerusalem, Israel) We consider a monopolist that is selling n items to a single additive buyer, where the buyer’s values for the items are drawn according to independent distributions F_1, F_2, …, F_n that possibly have unbounded support. It is well known that — unlike in the single item case — the revenue-optimal auction (a pricing scheme) may be complex, sometimes requiring a continuum of menu entries. It is also known that simple auctions with a finite bounded number of menu entries can extract a constant fraction of the optimal revenue. Nonetheless, the question of the possibility of extracting an arbitrarily high fraction of the optimal revenue via a finite menu size remained open. In this paper, we give an affirmative answer to this open question, showing that for every n and for every ε > 0, there exists a complexity bound C = C(n,ε) such that auctions of menu size at most C suffice for obtaining a (1−ε) fraction of the optimal revenue from any F_1,…,F_n. We prove upper and lower bounds on the revenue approximation complexity C(n,ε), as well as on the deterministic communication complexity required to run an auction that achieves such an approximation. @InProceedings{STOC17p869, author = {Moshe Babaioff and Yannai A. Gonczarowski and Noam Nisan}, title = {The Menu-Size Complexity of Revenue Approximation}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {869--877}, doi = {}, year = {2017}, } |
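In the simplest single-bidder, posted-price case, "learning an auction from samples" (as in the first abstract above) reduces to empirical revenue maximization over prices, as in this toy sketch we added (the paper handles far richer single-parameter environments):

```python
import random

def empirical_best_price(samples):
    # Choose the posted price maximizing empirical revenue
    # price * #{samples >= price} / n; with continuous valuations the
    # maximizer can be taken to be one of the sampled values, and
    # price vals[i] sells to the i+1 highest-valued samples.
    vals = sorted(samples, reverse=True)
    best_i = max(range(len(vals)), key=lambda i: vals[i] * (i + 1))
    return vals[best_i]

random.seed(0)
samples = [random.uniform(0, 1) for _ in range(10_000)]
print(round(empirical_best_price(samples), 2))  # ~0.5 for U[0,1]
```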
|
O'Donnell, Ryan |
STOC '17: "Optimal Mean-Based Algorithms ..."
Optimal Mean-Based Algorithms for Trace Reconstruction
Anindya De, Ryan O'Donnell, and Rocco A. Servedio (Northwestern University, USA; Carnegie Mellon University, USA; Columbia University, USA) In the (deletion-channel) trace reconstruction problem, there is an unknown n-bit source string x. An algorithm is given access to independent “traces” of x, where a trace is formed by deleting each bit of x independently with probability δ. The goal of the algorithm is to recover x exactly (with high probability), while minimizing samples (number of traces) and running time. Previously, the best known algorithm for the trace reconstruction problem was due to Holenstein et al. [HMPW08]; it uses exp(O(n^{1/2})) samples and running time for any fixed 0 < δ < 1. It is also what we call a “mean-based algorithm”, meaning that it only uses the empirical means of the individual bits of the traces. Holenstein et al. also gave a lower bound, showing that any mean-based algorithm must use at least n^{Ω(log n)} samples. In this paper we improve both of these results, obtaining matching upper and lower bounds for mean-based trace reconstruction. For any constant deletion rate 0 < δ < 1, we give a mean-based algorithm that uses exp(O(n^{1/3})) time and traces; we also prove that any mean-based algorithm must use at least exp(Ω(n^{1/3})) traces. In fact, we obtain matching upper and lower bounds even for δ subconstant and ρ = 1−δ subconstant: when (log^3 n)/n ≪ δ ≤ 1/2 the bound is exp(Θ(δn)^{1/3}), and when 1/√n ≪ ρ ≤ 1/2 the bound is exp(Θ(n/ρ)^{1/3}). Our proofs involve estimates for the maxima of Littlewood polynomials on complex disks. We show that these techniques can also be used to perform trace reconstruction with random insertions and bit-flips in addition to deletions. We also find a surprising result: for deletion probabilities δ > 1/2, the presence of insertions can actually help with trace reconstruction. @InProceedings{STOC17p1047, author = {Anindya De and Ryan O'Donnell and Rocco A. Servedio}, title = {Optimal Mean-Based Algorithms for Trace Reconstruction}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1047--1056}, doi = {}, year = {2017}, } STOC '17: "Efficient Quantum Tomography ..." Efficient Quantum Tomography II Ryan O'Donnell and John Wright (Carnegie Mellon University, USA; Massachusetts Institute of Technology, USA) We continue our analysis of: (i) “Quantum tomography”, i.e., learning a quantum state, i.e., the quantum generalization of learning a discrete probability distribution; (ii) The distribution of Young diagrams output by the RSK algorithm on random words. Regarding (ii), we introduce two powerful new tools: first, a precise upper bound on the expected length of the longest union of k disjoint increasing subsequences in a random length-n word with letter distribution α_1 ≥ α_2 ≥ ⋯ ≥ α_d. Our bound has the correct main term and second-order term, and holds for all n, not just in the large-n limit. Second, a new majorization property of the RSK algorithm that allows one to analyze the Young diagram formed by the lower rows λ_k, λ_{k+1}, … of its output. These tools allow us to prove several new theorems concerning the distribution of random Young diagrams in the nonasymptotic regime, giving concrete error bounds that are optimal, or nearly so, in all parameters. As one example, we give a fundamentally new proof of the celebrated fact that the expected length of the longest increasing sequence in a random length-n permutation is bounded by 2√n.
This is the k = 1, α_i ≡ 1/d, d → ∞ special case of a much more general result we prove: the expected length of the kth Young diagram row produced by an α-random word is α_k n ± 2√(α_k d n). From our new analyses of random Young diagrams we derive several new results in quantum tomography, including: (i) learning the eigenvalues of an unknown state to ε-accuracy in Hellinger-squared, chi-squared, or KL distance, using n = O(d^2/ε) copies; (ii) learning the top-k eigenvalues of an unknown state to ε-accuracy in Hellinger-squared or chi-squared distance using n = O(kd/ε) copies or in ℓ_2^2 distance using n = O(k/ε) copies; (iii) learning the optimal rank-k approximation of an unknown state to ε-fidelity (Hellinger-squared distance) using n = O(kd/ε) copies. We believe our new techniques will lead to further advances in quantum learning; indeed, they have already subsequently been used for efficient von Neumann entropy estimation. @InProceedings{STOC17p962, author = {Ryan O'Donnell and John Wright}, title = {Efficient Quantum Tomography II}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {962--974}, doi = {}, year = {2017}, } STOC '17: "Sum of Squares Lower Bounds ..." Sum of Squares Lower Bounds for Refuting any CSP Pravesh K. Kothari, Ryuhei Mori, Ryan O'Donnell, and David Witmer (Princeton University, USA; IAS, USA; Tokyo Institute of Technology, Japan; Carnegie Mellon University, USA) Let P : {0,1}^k → {0,1} be a nontrivial k-ary predicate. Consider a random instance of the constraint satisfaction problem CSP(P) on n variables with Δ·n constraints, each being P applied to k randomly chosen literals. Provided the constraint density satisfies Δ ≫ 1, such an instance is unsatisfiable with high probability. The refutation problem is to efficiently find a proof of unsatisfiability. We show that whenever the predicate P supports a t-wise uniform probability distribution on its satisfying assignments, the sum of squares (SOS) algorithm of degree d = Θ(n/Δ^{2/(t−1)} log Δ) (which runs in time n^{O(d)}) cannot refute a random instance of CSP(P). In particular, the polynomial-time SOS algorithm requires Ω(n^{(t+1)/2}) constraints to refute random instances of CSP(P) when P supports a t-wise uniform distribution on its satisfying assignments. Together with recent work of Lee et al. (Lee, Raghavendra, Steurer 2015), our result also implies that any polynomial-size semidefinite programming relaxation for refutation requires at least Ω(n^{(t+1)/2}) constraints. More generally, we consider the δ-refutation problem, in which the goal is to certify that at most a (1−δ)-fraction of constraints can be simultaneously satisfied. We show that if P is δ-close to supporting a t-wise uniform distribution on satisfying assignments, then the degree-Ω(n/Δ^{2/(t−1)} log Δ) SOS algorithm cannot (δ+o(1))-refute a random instance of CSP(P). This is the first result to show a distinction between the degree SOS needs to solve the refutation problem and the degree it needs to solve the harder δ-refutation problem. Our results (which also extend with no change to CSPs over larger alphabets) subsume all previously known lower bounds for semialgebraic refutation of random CSPs. For every constraint predicate P, they give a three-way hardness tradeoff between the density of constraints, the SOS degree (hence running time), and the strength of the refutation. By recent algorithmic results of Allen, O’Donnell, Witmer (2015) and Raghavendra, Rao, Schramm (2016), this full three-way tradeoff is tight, up to lower-order factors.
@InProceedings{STOC17p132, author = {Pravesh K. Kothari and Ryuhei Mori and Ryan O'Donnell and David Witmer}, title = {Sum of Squares Lower Bounds for Refuting any CSP}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {132--145}, doi = {}, year = {2017}, } |
|
Oliveira, Igor C. |
STOC '17: "Addition Is Exponentially ..."
Addition Is Exponentially Harder Than Counting for Shallow Monotone Circuits
Xi Chen, Igor C. Oliveira, and Rocco A. Servedio (Columbia University, USA; Charles University in Prague, Czechia) Let Add_{k,N} denote the Boolean function which takes as input k strings of N bits each, representing k numbers a^{(1)},…,a^{(k)} in {0,1,…,2^N−1}, and outputs 1 if and only if a^{(1)} + ⋯ + a^{(k)} ≥ 2^N. Let MAJ_{t,n} denote a monotone unweighted threshold gate, i.e., the Boolean function which takes as input a single string x ∈ {0,1}^n and outputs 1 if and only if x_1 + ⋯ + x_n ≥ t. The function Add_{k,N} may be viewed as a monotone function that performs addition, and MAJ_{t,n} may be viewed as a monotone gate that performs counting. We refer to circuits that are composed of MAJ gates as monotone majority circuits. The main result of this paper is an exponential lower bound on the size of bounded-depth monotone majority circuits that compute Add_{k,N}. More precisely, we show that for any constant d ≥ 2, any depth-d monotone majority circuit that computes Add_{d,N} must have size 2^{Ω(N^{1/d})}. As Add_{k,N} can be computed by a single monotone weighted threshold gate (that uses exponentially large weights), our lower bound implies that constant-depth monotone majority circuits require exponential size to simulate monotone weighted threshold gates. This answers a question posed by Goldmann and Karpinski (STOC’93) and recently restated by Håstad (2010, 2014). We also show that our lower bound is essentially best possible, by constructing a depth-d, size 2^{O(N^{1/d})} monotone majority circuit for Add_{d,N}. As a corollary of our lower bound, we significantly strengthen a classical theorem in circuit complexity due to Ajtai and Gurevich (JACM’87). They exhibited a monotone function that is in AC^0 but requires super-polynomial size for any constant-depth monotone circuit composed of unbounded fan-in AND and OR gates. We describe a monotone function that is in depth-3 AC^0 but requires exponential size monotone circuits of any constant depth, even if the circuits are composed of MAJ gates. @InProceedings{STOC17p1232, author = {Xi Chen and Igor C. Oliveira and Rocco A. Servedio}, title = {Addition Is Exponentially Harder Than Counting for Shallow Monotone Circuits}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1232--1245}, doi = {}, year = {2017}, } STOC '17: "Pseudodeterministic Constructions ..." Pseudodeterministic Constructions in Subexponential Time Igor C. Oliveira and Rahul Santhanam (Charles University in Prague, Czechia; University of Oxford, UK) We study pseudodeterministic constructions, i.e., randomized algorithms which output the same solution on most computation paths. We establish unconditionally that there is an infinite sequence {p_n} of primes and a randomized algorithm A running in expected sub-exponential time such that for each n, on input 1^{|p_n|}, A outputs p_n with probability 1. In other words, our result provides a pseudodeterministic construction of primes in sub-exponential time which works infinitely often. This result follows from a more general theorem about pseudodeterministic constructions. A property Q ⊆ {0,1}* is γ-dense if for large enough n, |Q ∩ {0,1}^n| ≥ γ·2^n.
We show that for each c > 0 at least one of the following holds: (1) There is a pseudodeterministic polynomial time construction of a family {H_n} of sets, H_n ⊆ {0,1}^n, such that for each (1/n^c)-dense property Q ∈ DTIME(n^c) and every large enough n, H_n ∩ Q ≠ ∅; or (2) There is a deterministic sub-exponential time construction of a family {H′_n} of sets, H′_n ⊆ {0,1}^n, such that for each (1/n^c)-dense property Q ∈ DTIME(n^c) and for infinitely many values of n, H′_n ∩ Q ≠ ∅. We provide further algorithmic applications that might be of independent interest. Perhaps intriguingly, while our main results are unconditional, they have a non-constructive element, arising from a sequence of applications of the hardness versus randomness paradigm. @InProceedings{STOC17p665, author = {Igor C. Oliveira and Rahul Santhanam}, title = {Pseudodeterministic Constructions in Subexponential Time}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {665--677}, doi = {}, year = {2017}, } |
|
Oliveira, Rafael |
STOC '17: "Algorithmic and Optimization ..."
Algorithmic and Optimization Aspects of Brascamp-Lieb Inequalities, via Operator Scaling
Ankit Garg, Leonid Gurvits, Rafael Oliveira, and Avi Wigderson (Microsoft Research, USA; City College of New York, USA; Princeton University, USA; IAS, USA) The celebrated Brascamp-Lieb (BL) inequalities [BL76, Lie90], and their reverse form of Barthe [Bar98], are an important mathematical tool, unifying and generalizing numerous inequalities in analysis, convex geometry and information theory, with many used in computer science. While their structural theory is very well understood, far less is known about computing their main parameters below (which we later define). Prior to this work, the best known algorithms for any of these optimization tasks required at least exponential time. In this work, we give polynomial time algorithms to compute: (1) Feasibility of BL-datum, (2) Optimal BL-constant, (3) Weak separation oracle for BL-polytopes. What is particularly exciting about this progress, beyond the better understanding of BL-inequalities, is that the objects above naturally encode rich families of optimization problems which had no prior efficient algorithms. In particular, the BL-constants (which we efficiently compute) are solutions to non-convex optimization problems, and the BL-polytopes (for which we provide efficient membership and separation oracles) are linear programs with exponentially many facets. Thus we hope that new combinatorial optimization problems can be solved via reductions to the ones above, and make modest initial steps in exploring this possibility. Our algorithms are obtained by a simple efficient reduction of a given BL-datum to an instance of the Operator Scaling problem defined by [Gur04]. To obtain the results above, we utilize the two (very recent and different) algorithms for the operator scaling problem [GGOW16, IQS15a]. Our reduction implies algorithmic versions of many of the known structural results on BL-inequalities, and in some cases provides proofs that are different or simpler than existing ones. Further, the analytic properties of the [GGOW16] algorithm provide new, effective bounds on the magnitude and continuity of BL-constants, with applications to non-linear versions of BL-inequalities; prior work relied on compactness, and thus provided no bounds. On a higher level, our application of the operator scaling algorithm to BL-inequalities further connects analysis and optimization with the diverse mathematical areas used so far to motivate and solve the operator scaling problem, which include commutative invariant theory, non-commutative algebra, computational complexity and quantum information theory. @InProceedings{STOC17p397, author = {Ankit Garg and Leonid Gurvits and Rafael Oliveira and Avi Wigderson}, title = {Algorithmic and Optimization Aspects of Brascamp-Lieb Inequalities, via Operator Scaling}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {397--409}, doi = {}, year = {2017}, } |
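The commutative shadow of operator scaling is classical matrix (Sinkhorn) scaling, which already conveys the alternating-normalization idea; a minimal sketch we added (operator scaling replaces these row/column sums by sums of matrix products):

```python
import random

def sinkhorn(matrix, iterations=500):
    """Matrix (Sinkhorn) scaling: alternately normalize rows and
    columns. This is the commutative special case of the operator
    scaling used above; for positive matrices the iteration converges
    to a doubly stochastic scaling."""
    a = [row[:] for row in matrix]
    n = len(a)
    for _ in range(iterations):
        for row in a:                      # row normalization
            s = sum(row)
            for j in range(n):
                row[j] /= s
        for j in range(n):                 # column normalization
            s = sum(row[j] for row in a)
            for row in a:
                row[j] /= s
    return a

random.seed(0)
m = [[random.uniform(0.1, 1.0) for _ in range(3)] for _ in range(3)]
scaled = sinkhorn(m)
print([round(sum(row), 3) for row in scaled])  # row sums -> 1.0
```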
|
Olver, Neil |
STOC '17: "A Simpler and Faster Strongly ..."
A Simpler and Faster Strongly Polynomial Algorithm for Generalized Flow Maximization
Neil Olver and László A. Végh (VU University Amsterdam, Netherlands; CWI, Netherlands; London School of Economics, UK) We present a new strongly polynomial algorithm for generalized flow maximization. The first strongly polynomial algorithm for this problem was given very recently by Végh; our new algorithm is much simpler, and much faster. The complexity bound O((m + n log n)mn log(n^2/m)) improves on the previous estimate obtained by Végh by almost a factor O(n^2). Even for small numerical parameter values, our algorithm is essentially as fast as the best weakly polynomial algorithms. The key new technical idea is relaxing primal feasibility conditions. This allows us to work almost exclusively with integral flows, in contrast to all previous algorithms. @InProceedings{STOC17p100, author = {Neil Olver and László A. Végh}, title = {A Simpler and Faster Strongly Polynomial Algorithm for Generalized Flow Maximization}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {100--111}, doi = {}, year = {2017}, } |
|
Pagh, Rasmus |
STOC '17: "Set Similarity Search Beyond ..."
Set Similarity Search Beyond MinHash
Tobias Christiani and Rasmus Pagh (IT University of Copenhagen, Denmark) We consider the problem of approximate set similarity search under Braun-Blanquet similarity B(x, y) = |x ∩ y| / max(|x|, |y|). The (b_1, b_2)-approximate Braun-Blanquet similarity search problem is to preprocess a collection of sets P such that, given a query set q, if there exists x ∈ P with B(q, x) ≥ b_1, then we can efficiently return x′ ∈ P with B(q, x′) > b_2. We present a simple data structure that solves this problem with space usage O(n^{1+ρ} log n + ∑_{x∈P} |x|) and query time O(|q| n^ρ log n) where n = |P| and ρ = log(1/b_1)/log(1/b_2). Making use of existing lower bounds for locality-sensitive hashing by O’Donnell et al. (TOCT 2014) we show that this value of ρ is tight across the parameter space, i.e., for every choice of constants 0 < b_2 < b_1 < 1. In the case where all sets have the same size our solution strictly improves upon the value of ρ that can be obtained through the use of state-of-the-art data-independent techniques in the Indyk-Motwani locality-sensitive hashing framework (STOC 1998) such as Broder’s MinHash (CCS 1997) for Jaccard similarity and Andoni et al.’s cross-polytope LSH (NIPS 2015) for cosine similarity. Surprisingly, even though our solution is data-independent, for a large part of the parameter space we outperform the currently best data-dependent method by Andoni and Razenshteyn (STOC 2015). @InProceedings{STOC17p1094, author = {Tobias Christiani and Rasmus Pagh}, title = {Set Similarity Search Beyond MinHash}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1094--1107}, doi = {}, year = {2017}, } |
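A tiny sketch we added of the similarity measure and the exponent ρ governing the data structure's cost (query time about n^ρ, space about n^{1+ρ}):

```python
import math

def braun_blanquet(x, y):
    """B(x, y) = |x intersect y| / max(|x|, |y|)."""
    x, y = set(x), set(y)
    return len(x & y) / max(len(x), len(y))

def rho(b1, b2):
    """Query exponent from the abstract: rho = log(1/b1) / log(1/b2)."""
    return math.log(1 / b1) / math.log(1 / b2)

print(braun_blanquet({1, 2, 3, 4}, {3, 4, 5}))  # 2/4 = 0.5
print(rho(0.5, 0.25))                           # 0.5: query time ~ sqrt(n)
```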
|
Pak, Igor |
STOC '17: "Complexity of Short Presburger ..."
Complexity of Short Presburger Arithmetic
Danny Nguyen and Igor Pak (University of California at Los Angeles, USA) We study complexity of short sentences in Presburger arithmetic (Short-PA). Here by “short” we mean sentences with a bounded number of variables, quantifiers, inequalities and Boolean operations; the input consists only of the integers involved in the inequalities. We prove that, assuming Kannan’s partition can be found in polynomial time, the satisfiability of Short-PA sentences can be decided in polynomial time. Furthermore, under the same assumption, we show that the number of satisfying assignments of short Presburger sentences can also be computed in polynomial time. @InProceedings{STOC17p812, author = {Danny Nguyen and Igor Pak}, title = {Complexity of Short Presburger Arithmetic}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {812--820}, doi = {}, year = {2017}, } |
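For intuition about what “short” buys: the shape of the sentence (the number of quantifiers, variables and inequalities) is fixed, and only the integer coefficients form the input. A hypothetical short sentence with three quantified variables could be written in LaTeX as

    \exists x\, \forall y\, \exists z \in \mathbb{Z} :\;
      (a_1 x + b_1 y + c_1 z \le d_1) \,\wedge\, (a_2 x + b_2 y + c_2 z \le d_2)

where only the integers a_i, b_i, c_i, d_i vary from instance to instance; the coefficient names here are ours, for illustration.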
|
Pandurangan, Gopal |
STOC '17: "A Time- and Message-Optimal ..."
A Time- and Message-Optimal Distributed Algorithm for Minimum Spanning Trees
Gopal Pandurangan, Peter Robinson, and Michele Scquizzato (University of Houston, USA; Royal Holloway University of London, UK) This paper presents a randomized (Las Vegas) distributed algorithm that constructs a minimum spanning tree (MST) in weighted networks with optimal (up to polylogarithmic factors) time and message complexity. This algorithm runs in Õ(D + √n) time and exchanges Õ(m) messages (both with high probability), where n is the number of nodes of the network, D is the diameter, and m is the number of edges. This is the first distributed MST algorithm that simultaneously matches the time lower bound of Ω(D + √n) [Elkin, SIAM J. Comput. 2006] and the message lower bound of Ω(m) [Kutten et al., J. ACM 2015], both of which apply to randomized Monte Carlo algorithms. The prior time and message lower bounds are derived using two completely different graph constructions; the construction that establishes one bound does not yield the other. To complement our algorithm, we present a new lower bound graph construction for which any distributed MST algorithm requires both Ω(D + √n) rounds and Ω(m) messages. @InProceedings{STOC17p743, author = {Gopal Pandurangan and Peter Robinson and Michele Scquizzato}, title = {A Time- and Message-Optimal Distributed Algorithm for Minimum Spanning Trees}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {743--756}, doi = {}, year = {2017}, } |
|
Panigrahi, Debmalya |
STOC '17: "Online Service with Delay ..."
Online Service with Delay
Yossi Azar, Arun Ganesh, Rong Ge, and Debmalya Panigrahi (Tel Aviv University, Israel; Duke University, USA) In this paper, we introduce the online service with delay problem. In this problem, there are n points in a metric space that issue service requests over time, and a server that serves these requests. The goal is to minimize the sum of the distance traveled by the server and the total delay (or a penalty function thereof) in serving the requests. This problem models the fundamental tradeoff between batching requests to improve locality and reducing delay to improve response time, a tradeoff that has many applications in operations management, operating systems, logistics, supply chain management, and scheduling. Our main result is to show a poly-logarithmic competitive ratio for the online service with delay problem. This result is obtained by an algorithm that we call the preemptive service algorithm. The salient feature of this algorithm is a process called preemptive service, which uses a novel combination of (recursive) time forwarding and spatial exploration on a metric space. We also generalize our results to k > 1 servers, and obtain stronger results for special metrics such as uniform and star metrics that correspond to (weighted) paging problems. @InProceedings{STOC17p551, author = {Yossi Azar and Arun Ganesh and Rong Ge and Debmalya Panigrahi}, title = {Online Service with Delay}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {551--563}, doi = {}, year = {2017}, } STOC '17: "Online and Dynamic Algorithms ..." Online and Dynamic Algorithms for Set Cover Anupam Gupta, Ravishankar Krishnaswamy, Amit Kumar, and Debmalya Panigrahi (Carnegie Mellon University, USA; Microsoft Research, India; IIT Delhi, India; Duke University, USA) In this paper, we give new results for the set cover problem in the fully dynamic model. In this model, the set of “active” elements to be covered changes over time. The goal is to maintain a near-optimal solution for the currently active elements, while making few changes in each timestep. This model is popular in both dynamic and online algorithms: in the former, the goal is to minimize the update time of the solution, while in the latter, the recourse (number of changes) is bounded. We present generic techniques for the dynamic set cover problem inspired by the classic greedy and primal-dual offline algorithms for set cover (a sketch of the greedy baseline follows below). The former leads to a competitive ratio of O(log n_t), where n_t is the number of currently active elements at timestep t, while the latter yields competitive ratios dependent on f_t, the maximum number of sets that a currently active element belongs to. We demonstrate that these techniques are useful for obtaining tight results in both settings: update time bounds and limited recourse, exhibiting algorithmic techniques common to these two parallel threads of research. @InProceedings{STOC17p537, author = {Anupam Gupta and Ravishankar Krishnaswamy and Amit Kumar and Debmalya Panigrahi}, title = {Online and Dynamic Algorithms for Set Cover}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {537--550}, doi = {}, year = {2017}, } |
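The greedy baseline referenced above, in a minimal Python sketch (ours, not the paper's dynamic algorithm): repeatedly pick the set covering the most still-uncovered elements, which yields the classic O(log n) approximation.

    def greedy_set_cover(universe, sets):
        # Repeatedly take the set covering the most uncovered elements.
        uncovered = set(universe)
        cover = []
        while uncovered:
            best = max(sets, key=lambda s: len(s & uncovered))
            if not best & uncovered:
                raise ValueError("instance is infeasible")
            cover.append(best)
            uncovered -= best
        return cover

    # Greedy picks {1, 2, 3} first, then {4, 5}.
    print(greedy_set_cover({1, 2, 3, 4, 5}, [{1, 2, 3}, {2, 4}, {4, 5}]))

The dynamic versions in the paper maintain such a solution as elements arrive and depart, which is where the update-time and recourse bounds come in.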
|
Panolan, Fahad |
STOC '17: "Lossy Kernelization ..."
Lossy Kernelization
Daniel Lokshtanov, Fahad Panolan, M. S. Ramanujan, and Saket Saurabh (University of Bergen, Norway; Vienna University of Technology, Austria; Institute of Mathematical Sciences, India) In this paper we propose a new framework for analyzing the performance of preprocessing algorithms. Our framework builds on the notion of kernelization from parameterized complexity. However, as opposed to the original notion of kernelization, our definitions combine well with approximation algorithms and heuristics. The key new definition is that of a polynomial size α-approximate kernel. Loosely speaking, a polynomial size α-approximate kernel is a polynomial time pre-processing algorithm that takes as input an instance (I, k) of a parameterized problem, and outputs another instance (I′, k′) of the same problem, such that |I′| + k′ ≤ k^{O(1)}. Additionally, for every c ≥ 1, a c-approximate solution s′ to the pre-processed instance (I′, k′) can be turned in polynomial time into a (c · α)-approximate solution s to the original instance (I, k). Amongst our main technical contributions are α-approximate kernels of polynomial size for three problems, namely Connected Vertex Cover, Disjoint Cycle Packing and Disjoint Factors. These problems are known not to admit any polynomial size kernels unless NP ⊆ coNP/Poly. Our approximate kernels simultaneously beat both the lower bounds on the (normal) kernel size, and the hardness of approximation lower bounds for all three problems. On the negative side we prove that Longest Path parameterized by the length of the path and Set Cover parameterized by the universe size do not admit even an α-approximate kernel of polynomial size, for any α ≥ 1, unless NP ⊆ coNP/Poly. In order to prove this lower bound we need to combine in a non-trivial way the techniques used for showing kernelization lower bounds with the methods for showing hardness of approximation. @InProceedings{STOC17p224, author = {Daniel Lokshtanov and Fahad Panolan and M. S. Ramanujan and Saket Saurabh}, title = {Lossy Kernelization}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {224--237}, doi = {}, year = {2017}, } |
|
Peebles, John |
STOC '17: "Sampling Random Spanning Trees ..."
Sampling Random Spanning Trees Faster Than Matrix Multiplication
David Durfee, Rasmus Kyng, John Peebles, Anup B. Rao, and Sushant Sachdeva (Georgia Institute of Technology, USA; Yale University, USA; Massachusetts Institute of Technology, USA; Google, USA) We present an algorithm that, with high probability, generates a random spanning tree from an edge-weighted undirected graph in Õ(n^{5/3} m^{1/3}) time. The tree is sampled from a distribution where the probability of each tree is proportional to the product of its edge weights. This improves upon the previous best algorithm due to Colbourn et al. that runs in matrix multiplication time, O(n^ω). For the special case of unweighted graphs, this improves upon the best previously known running time of Õ(min{n^ω, m√n, m^{4/3}}) for m ≫ n^{7/4} (Colbourn et al. ’96, Kelner-Madry ’09, Madry et al. ’15). The effective resistance metric is essential to our algorithm, as in the work of Madry et al., but we eschew the determinant-based and random walk-based techniques used by previous algorithms. Instead, our algorithm is based on Gaussian elimination, and the fact that effective resistance is preserved in the graph resulting from eliminating a subset of vertices (called a Schur complement). As part of our algorithm, we show how to compute ε-approximate effective resistances for a set S of vertex pairs via approximate Schur complements in Õ(m + (n + |S|)ε^{−2}) time, without using the Johnson-Lindenstrauss lemma, which requires Õ(min{(m + |S|)ε^{−2}, m + nε^{−4} + |S|ε^{−2}}) time. We combine this approximation procedure with an error correction procedure for handling edges where our estimate isn’t sufficiently accurate. @InProceedings{STOC17p730, author = {David Durfee and Rasmus Kyng and John Peebles and Anup B. Rao and Sushant Sachdeva}, title = {Sampling Random Spanning Trees Faster Than Matrix Multiplication}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {730--742}, doi = {}, year = {2017}, } Info STOC '17: "Almost-Linear-Time Algorithms ..." Almost-Linear-Time Algorithms for Markov Chains and New Spectral Primitives for Directed Graphs Michael B. Cohen, Jonathan Kelner, John Peebles, Richard Peng, Anup B. Rao, Aaron Sidford, and Adrian Vladu (Massachusetts Institute of Technology, USA; Georgia Institute of Technology, USA; Stanford University, USA) In this paper, we begin to address the longstanding algorithmic gap between general and reversible Markov chains. We develop directed analogues of several spectral graph-theoretic tools that had previously been available only in the undirected setting, and for which it was not clear that directed versions even existed. In particular, we provide a notion of approximation for directed graphs, prove sparsifiers under this notion always exist, and show how to construct them in almost linear time. Using this notion of approximation, we design the first almost-linear-time directed Laplacian system solver, and, by leveraging the recent framework of [Cohen-Kelner-Peebles-Peng-Sidford-Vladu, FOCS’16], we also obtain almost-linear-time algorithms for computing the stationary distribution of a Markov chain, computing expected commute times in a directed graph, and more. For each problem, our algorithms improve the previous best running times of O((nm^{3/4} + n^{2/3}m) log^{O(1)}(nκε^{−1})) to O((m + n · 2^{O(√(log n log log n))}) log^{O(1)}(nκε^{−1})), where n is the number of vertices in the graph, m is the number of edges, κ is a natural condition number associated with the problem, and ε is the desired accuracy.
We hope these results open the door for further studies into directed spectral graph theory, and that they will serve as a stepping stone for designing a new generation of fast algorithms for directed graphs. @InProceedings{STOC17p410, author = {Michael B. Cohen and Jonathan Kelner and John Peebles and Richard Peng and Anup B. Rao and Aaron Sidford and Adrian Vladu}, title = {Almost-Linear-Time Algorithms for Markov Chains and New Spectral Primitives for Directed Graphs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {410--419}, doi = {}, year = {2017}, } |
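Effective resistance, the quantity the spanning-tree sampler above leans on, has a one-line definition via the Laplacian pseudoinverse; the numpy sketch below (the textbook O(n³) computation, not the paper's Schur-complement approximation) makes it concrete.

    import numpy as np

    def effective_resistance(edges, weights, n, u, v):
        # R_eff(u, v) = (e_u - e_v)^T L^+ (e_u - e_v), L the weighted Laplacian.
        L = np.zeros((n, n))
        for (a, b), w in zip(edges, weights):
            L[a, a] += w; L[b, b] += w
            L[a, b] -= w; L[b, a] -= w
        chi = np.zeros(n)
        chi[u], chi[v] = 1.0, -1.0
        return chi @ np.linalg.pinv(L) @ chi

    # Unit-weight triangle: resistance between any two vertices is 2/3.
    print(effective_resistance([(0, 1), (1, 2), (0, 2)], [1, 1, 1], 3, 0, 1))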
|
Peikert, Chris |
STOC '17: "Pseudorandomness of Ring-LWE ..."
Pseudorandomness of Ring-LWE for Any Ring and Modulus
Chris Peikert, Oded Regev, and Noah Stephens-Davidowitz (University of Michigan, USA; New York University, USA) We give a polynomial-time quantum reduction from worst-case (ideal) lattice problems directly to decision (Ring-)LWE. This extends to decision all the worst-case hardness results that were previously known for the search version, for the same or even better parameters and with no algebraic restrictions on the modulus or number field. Indeed, our reduction is the first that works for decision Ring-LWE with any number field and any modulus. @InProceedings{STOC17p461, author = {Chris Peikert and Oded Regev and Noah Stephens-Davidowitz}, title = {Pseudorandomness of Ring-LWE for Any Ring and Modulus}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {461--473}, doi = {}, year = {2017}, } |
|
Peng, Richard |
STOC '17: "Almost-Linear-Time Algorithms ..."
Almost-Linear-Time Algorithms for Markov Chains and New Spectral Primitives for Directed Graphs
Michael B. Cohen, Jonathan Kelner, John Peebles, Richard Peng, Anup B. Rao, Aaron Sidford, and Adrian Vladu (Massachusetts Institute of Technology, USA; Georgia Institute of Technology, USA; Stanford University, USA) In this paper, we begin to address the longstanding algorithmic gap between general and reversible Markov chains. We develop directed analogues of several spectral graph-theoretic tools that had previously been available only in the undirected setting, and for which it was not clear that directed versions even existed. In particular, we provide a notion of approximation for directed graphs, prove sparsifiers under this notion always exist, and show how to construct them in almost linear time. Using this notion of approximation, we design the first almost-linear-time directed Laplacian system solver, and, by leveraging the recent framework of [Cohen-Kelner-Peebles-Peng-Sidford-Vladu, FOCS’16], we also obtain almost-linear-time algorithms for computing the stationary distribution of a Markov chain, computing expected commute times in a directed graph, and more. For each problem, our algorithms improve the previous best running times of O((nm^{3/4} + n^{2/3}m) log^{O(1)}(nκε^{−1})) to O((m + n · 2^{O(√(log n log log n))}) log^{O(1)}(nκε^{−1})), where n is the number of vertices in the graph, m is the number of edges, κ is a natural condition number associated with the problem, and ε is the desired accuracy. We hope these results open the door for further studies into directed spectral graph theory, and that they will serve as a stepping stone for designing a new generation of fast algorithms for directed graphs. @InProceedings{STOC17p410, author = {Michael B. Cohen and Jonathan Kelner and John Peebles and Richard Peng and Anup B. Rao and Aaron Sidford and Adrian Vladu}, title = {Almost-Linear-Time Algorithms for Markov Chains and New Spectral Primitives for Directed Graphs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {410--419}, doi = {}, year = {2017}, } |
|
Peres, Yuval |
STOC '17: "Trace Reconstruction with ..."
Trace Reconstruction with exp(O(n^{1/3})) Samples
Fedor Nazarov and Yuval Peres (Kent State University, USA; Microsoft Research, USA) In the trace reconstruction problem, an unknown bit string x ∈ {0,1}^n is observed through the deletion channel, which deletes each bit of x with some constant probability q, yielding a contracted string x̃. How many independent copies of x̃ are needed to reconstruct x with high probability? Prior to this work, the best upper bound, due to Holenstein, Mitzenmacher, Panigrahy, and Wieder (2008), was exp(O(n^{1/2})). We improve this bound to exp(O(n^{1/3})) using statistics of individual bits in the output, and show that this bound is sharp in the restricted model where this is the only information used. Our method, which uses elementary complex analysis, can also handle insertions. Similar results were obtained independently and simultaneously by Anindya De, Ryan O’Donnell and Rocco Servedio. @InProceedings{STOC17p1042, author = {Fedor Nazarov and Yuval Peres}, title = {Trace Reconstruction with exp(O(n^{1/3})) Samples}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1042--1046}, doi = {}, year = {2017}, } STOC '17: "Local Max-Cut in Smoothed ..." Local Max-Cut in Smoothed Polynomial Time Omer Angel, Sébastien Bubeck, Yuval Peres, and Fan Wei (University of British Columbia, Canada; Microsoft Research, USA; Stanford University, USA) In 1988, Johnson, Papadimitriou and Yannakakis wrote that “Practically all the empirical evidence would lead us to conclude that finding locally optimal solutions is much easier than solving NP-hard problems”. Since then the empirical evidence has continued to amass, but formal proofs of this phenomenon have remained elusive. A canonical (and indeed complete) example is the local max-cut problem, for which no polynomial time method is known. In a breakthrough paper, Etscheid and Röglin proved that the smoothed complexity of local max-cut is quasi-polynomial, i.e., if arbitrary bounded weights are randomly perturbed, a local maximum can be found in φ n^{O(log n)} steps, where φ is an upper bound on the random edge weight density. In this paper we prove smoothed polynomial complexity for local max-cut, thus confirming that finding local optima for max-cut is much easier than solving it. @InProceedings{STOC17p429, author = {Omer Angel and Sébastien Bubeck and Yuval Peres and Fan Wei}, title = {Local Max-Cut in Smoothed Polynomial Time}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {429--437}, doi = {}, year = {2017}, } |
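The deletion channel itself is trivial to simulate; the Python sketch below (the sampling setup only, not the reconstruction method) produces the traces x̃ that the algorithm consumes.

    import random

    def deletion_channel(x, q, rng=random):
        # Each bit of x is deleted independently with probability q.
        return [bit for bit in x if rng.random() >= q]

    x = [1, 0, 1, 1, 0, 0, 1, 0]
    for _ in range(5):
        print(deletion_channel(x, q=0.3))  # each trace is a random subsequence of x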
|
Perkins, Will |
STOC '17: "Information-Theoretic Thresholds ..."
Information-Theoretic Thresholds from the Cavity Method
Amin Coja-Oghlan, Florent Krzakala, Will Perkins, and Lenka Zdeborova (Goethe University Frankfurt, Germany; CNRS, France; PSL Research University, France; ENS, France; UPMC, France; University of Birmingham, UK; CEA, France; University of Paris-Saclay, France) Vindicating a sophisticated but non-rigorous physics approach called the cavity method, we establish a formula for the mutual information in statistical inference problems induced by random graphs. This general result implies the conjecture on the information-theoretic threshold in the disassortative stochastic block model [Decelle et al.: Phys. Rev. E (2011)] and allows us to pinpoint the exact condensation phase transition in random constraint satisfaction problems such as random graph coloring, thereby proving a conjecture from [Krzakala et al.: PNAS (2007)]. As a further application we establish the formula for the mutual information in Low-Density Generator Matrix codes as conjectured in [Montanari: IEEE Transactions on Information Theory (2005)]. The proofs provide a conceptual underpinning of the replica symmetric variant of the cavity method, and we expect that the approach will find many future applications. @InProceedings{STOC17p146, author = {Amin Coja-Oghlan and Florent Krzakala and Will Perkins and Lenka Zdeborova}, title = {Information-Theoretic Thresholds from the Cavity Method}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {146--157}, doi = {}, year = {2017}, } |
|
Pettie, Seth |
STOC '17: "Exponential Separations in ..."
Exponential Separations in the Energy Complexity of Leader Election
Yi-Jun Chang, Tsvi Kopelowitz, Seth Pettie, Ruosong Wang, and Wei Zhan (University of Michigan, USA; Tsinghua University, China) Energy is often the most constrained resource for battery-powered wireless devices, and the lion’s share of energy is often spent on transceiver usage (sending/receiving packets), not on computation. In this paper we study the energy complexity of Leader Election and Approximate Counting in several models of wireless radio networks. It turns out that energy complexity is very sensitive to whether the devices can generate random bits and their ability to detect collisions. We consider four collision-detection models: Strong-CD (in which transmitters and listeners detect collisions), Sender-CD and Receiver-CD (in which only transmitters or only listeners detect collisions), and No-CD (in which no one detects collisions). The take-away message of our results is quite surprising. For randomized Leader Election algorithms, there is an exponential gap between the energy complexity of Sender-CD and Receiver-CD: No-CD = Sender-CD ≫ Receiver-CD = Strong-CD, and for deterministic Leader Election algorithms, there is another exponential gap in energy complexity, but in the reverse direction: No-CD = Receiver-CD ≫ Sender-CD = Strong-CD. In particular, the randomized energy complexity of Leader Election is Θ(log* n) in Sender-CD but Θ(log(log* n)) in Receiver-CD, where n is the (unknown) number of devices. Its deterministic complexity is Θ(log N) in Receiver-CD but Θ(log log N) in Sender-CD, where N is the (known) size of the devices’ ID space. There is a tradeoff between time and energy. We give a new upper bound on the time-energy tradeoff curve for randomized Leader Election and Approximate Counting. A critical component of this algorithm is a new deterministic Leader Election algorithm for dense instances, when n = Θ(N), with inverse-Ackermann-type O(α(N)) energy complexity. @InProceedings{STOC17p771, author = {Yi-Jun Chang and Tsvi Kopelowitz and Seth Pettie and Ruosong Wang and Wei Zhan}, title = {Exponential Separations in the Energy Complexity of Leader Election}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {771--783}, doi = {}, year = {2017}, } |
|
Pitassi, Toniann |
STOC '17: "Strongly Exponential Lower ..."
Strongly Exponential Lower Bounds for Monotone Computation
Toniann Pitassi and Robert Robere (University of Toronto, Canada) We prove size lower bounds of 2^{Ω(n)} for an explicit function in monotone NP in the following models of computation: monotone formulas, monotone switching networks, monotone span programs, and monotone comparator circuits, where n is the number of variables of the underlying function. Our lower bounds improve on the best previous bounds in each of these models, and are the best possible for any function up to constant factors in the exponent. Moreover, we give one unified proof that is short and fairly elementary. @InProceedings{STOC17p1246, author = {Toniann Pitassi and Robert Robere}, title = {Strongly Exponential Lower Bounds for Monotone Computation}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1246--1255}, doi = {}, year = {2017}, } |
|
Poburinnaya, Oxana |
STOC '17: "Equivocating Yao: Constant-Round ..."
Equivocating Yao: Constant-Round Adaptively Secure Multiparty Computation in the Plain Model
Ran Canetti, Oxana Poburinnaya, and Muthuramakrishnan Venkitasubramaniam (Boston University, USA; Tel Aviv University, Israel; University of Rochester, USA) Yao's circuit garbling scheme is one of the basic building blocks of cryptographic protocol design. Originally designed to enable two-message, two-party secure computation, the scheme has been extended in many ways and has innumerable applications. Still, a basic question has remained open throughout the years: Can the scheme be extended to guarantee security in the face of an adversary that corrupts both parties, adaptively, as the computation proceeds? We provide a positive answer to this question. We define a new type of encryption, called functionally equivocal encryption (FEE), and show that when Yao's scheme is implemented with an FEE as the underlying encryption mechanism, it becomes secure against such adaptive adversaries. We then show how to implement FEE from any one-way function. Combining our scheme with non-committing encryption, we obtain the first two-message, two-party computation protocol, and the first constant-round multiparty computation protocol, in the plain model, that are secure against semi-honest adversaries who can adaptively corrupt all parties. A number of extensions and applications are described within. @InProceedings{STOC17p497, author = {Ran Canetti and Oxana Poburinnaya and Muthuramakrishnan Venkitasubramaniam}, title = {Equivocating Yao: Constant-Round Adaptively Secure Multiparty Computation in the Plain Model}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {497--509}, doi = {}, year = {2017}, } Info |
|
Raghavendra, Prasad |
STOC '17: "Strongly Refuting Random CSPs ..."
Strongly Refuting Random CSPs Below the Spectral Threshold
Prasad Raghavendra, Satish Rao, and Tselil Schramm (University of California at Berkeley, USA) Random constraint satisfaction problems (CSPs) are known to exhibit threshold phenomena: given a uniformly random instance of a CSP with n variables and m clauses, there is a value of m = Ω(n) beyond which the CSP will be unsatisfiable with high probability. Strong refutation is the problem of certifying that no variable assignment satisfies more than a constant fraction of clauses; this is the natural algorithmic problem in the unsatisfiable regime (when m/n = ω(1)). Intuitively, strong refutation should become easier as the clause density m/n grows, because the contradictions introduced by the random clauses become more locally apparent. For CSPs such as k-SAT and k-XOR, there is a long-standing gap between the clause density at which efficient strong refutation algorithms are known, m/n ≥ Õ(n^{k/2−1}), and the clause density at which instances become unsatisfiable with high probability, m/n = ω(1). In this paper, we give spectral and sum-of-squares algorithms for strongly refuting random k-XOR instances with clause density m/n ≥ Õ(n^{(k/2−1)(1−δ)}) in time exp(Õ(n^δ)) or in Õ(n^δ) rounds of the sum-of-squares hierarchy, for any δ ∈ [0,1) and any integer k ≥ 3. Our algorithms provide a smooth transition between the clause density at which polynomial-time algorithms are known at δ = 0, and brute-force refutation at the satisfiability threshold when δ = 1. We also leverage our k-XOR results to obtain strong refutation algorithms for SAT (or any other Boolean CSP) at similar clause densities. Our algorithms match the known sum-of-squares lower bounds due to Grigoriev and Schoenebeck, up to logarithmic factors. @InProceedings{STOC17p121, author = {Prasad Raghavendra and Satish Rao and Tselil Schramm}, title = {Strongly Refuting Random CSPs Below the Spectral Threshold}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {121--131}, doi = {}, year = {2017}, } STOC '17: "Approximating Rectangles by ..." Approximating Rectangles by Juntas and Weakly-Exponential Lower Bounds for LP Relaxations of CSPs Pravesh K. Kothari, Raghu Meka, and Prasad Raghavendra (Princeton University, USA; IAS, USA; University of California at Los Angeles, USA; University of California at Berkeley, USA) We show that for constraint satisfaction problems (CSPs), sub-exponential size linear programming relaxations are as powerful as n^{Ω(1)} rounds of the Sherali-Adams linear programming hierarchy. As a corollary, we obtain sub-exponential size lower bounds for linear programming relaxations that beat random guessing for many CSPs such as MAX-CUT and MAX-3SAT. This is a nearly-exponential improvement over previous results; previously, the best known lower bounds were quasi-polynomial in n (Chan, Lee, Raghavendra, Steurer 2013). Our bounds are obtained by exploiting and extending the recent progress in communication complexity for “lifting” query lower bounds to communication problems. The main ingredient in our results is a new structural result on “high-entropy rectangles” that may be of independent interest in communication complexity. @InProceedings{STOC17p590, author = {Pravesh K. Kothari and Raghu Meka and Prasad Raghavendra}, title = {Approximating Rectangles by Juntas and Weakly-Exponential Lower Bounds for LP Relaxations of CSPs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {590--603}, doi = {}, year = {2017}, } |
|
Raja, S. |
STOC '17: "Randomized Polynomial Time ..."
Randomized Polynomial Time Identity Testing for Noncommutative Circuits
V. Arvind, Pushkar S Joglekar, Partha Mukhopadhyay, and S. Raja (Institute of Mathematical Sciences, India; Vishwakarma Institute of Technology Pune, India; Chennai Mathematical Institute, India) In this paper we show that black-box polynomial identity testing for noncommutative polynomials f ∈ F⟨z_1, z_2, …, z_n⟩ of degree D and sparsity t can be done in randomized poly(n, log t, log D) time. As a consequence, given a circuit C of size s computing a polynomial f ∈ F⟨z_1, z_2, …, z_n⟩ with at most t non-zero monomials, testing whether f is identically zero can be done by a randomized algorithm with running time polynomial in s, n, and log t. This makes significant progress on a question that has been open for over ten years. Our algorithm is based on automata-theoretic ideas that can efficiently isolate a monomial in the given polynomial. In particular, we carry out the monomial isolation using nondeterministic automata. In general, noncommutative circuits of size s can compute polynomials of degree exponential in s and with a number of monomials double-exponential in s. In this paper, we consider a natural class of homogeneous noncommutative circuits, that we call +-regular circuits, and give a white-box polynomial time deterministic polynomial identity test. These circuits can compute noncommutative polynomials with a number of monomials double-exponential in the circuit size. Our algorithm combines some new structural results for +-regular circuits with known results for noncommutative ABP identity testing, the rank bound for commutative depth three identities, and the equivalence testing problem for words. Finally, we consider the black-box identity testing problem for depth three +-regular circuits and give a randomized polynomial time identity test. In particular, we show that if f ∈ F⟨Z⟩ is a nonzero noncommutative polynomial computed by a depth three +-regular circuit of size s, then f cannot be a polynomial identity for the matrix algebra M_s(F) when F is sufficiently large depending on the degree of f. @InProceedings{STOC17p831, author = {V. Arvind and Pushkar S Joglekar and Partha Mukhopadhyay and S. Raja}, title = {Randomized Polynomial Time Identity Testing for Noncommutative Circuits}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {831--841}, doi = {}, year = {2017}, } |
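The final result above says a nonzero depth-three +-regular polynomial cannot vanish identically on the matrix algebra M_s(F), which motivates the standard randomized test sketched here in Python/numpy: substitute independent random matrices for the noncommuting variables and check for the zero matrix. The sparse-polynomial encoding is our own simplification, not the paper's automata-based construction.

    import numpy as np

    def eval_nc_poly(poly, mats):
        # poly: list of (coeff, word) pairs; a word is a tuple of variable
        # indices, so (0, 1, 0) means z0 * z1 * z0 (order matters).
        d = mats[0].shape[0]
        acc = np.zeros((d, d))
        for coeff, word in poly:
            term = np.eye(d)
            for var in word:
                term = term @ mats[var]
            acc += coeff * term
        return acc

    def probably_zero(poly, num_vars, dim, trials=10):
        # Random integer matrix substitutions; repeat to reduce the error.
        for _ in range(trials):
            mats = [np.random.randint(-5, 6, (dim, dim)).astype(float)
                    for _ in range(num_vars)]
            if not np.allclose(eval_nc_poly(poly, mats), 0):
                return False
        return True

    # The commutator z0*z1 - z1*z0 is a nonzero noncommutative polynomial:
    print(probably_zero([(1, (0, 1)), (-1, (1, 0))], num_vars=2, dim=2))  # False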
|
Ramanujan, M. S. |
STOC '17: "Lossy Kernelization ..."
Lossy Kernelization
Daniel Lokshtanov, Fahad Panolan, M. S. Ramanujan, and Saket Saurabh (University of Bergen, Norway; Vienna University of Technology, Austria; Institute of Mathematical Sciences, India) In this paper we propose a new framework for analyzing the performance of preprocessing algorithms. Our framework builds on the notion of kernelization from parameterized complexity. However, as opposed to the original notion of kernelization, our definitions combine well with approximation algorithms and heuristics. The key new definition is that of a polynomial size α-approximate kernel. Loosely speaking, a polynomial size α-approximate kernel is a polynomial time pre-processing algorithm that takes as input an instance (I, k) of a parameterized problem, and outputs another instance (I′, k′) of the same problem, such that |I′| + k′ ≤ k^{O(1)}. Additionally, for every c ≥ 1, a c-approximate solution s′ to the pre-processed instance (I′, k′) can be turned in polynomial time into a (c · α)-approximate solution s to the original instance (I, k). Amongst our main technical contributions are α-approximate kernels of polynomial size for three problems, namely Connected Vertex Cover, Disjoint Cycle Packing and Disjoint Factors. These problems are known not to admit any polynomial size kernels unless NP ⊆ coNP/Poly. Our approximate kernels simultaneously beat both the lower bounds on the (normal) kernel size, and the hardness of approximation lower bounds for all three problems. On the negative side we prove that Longest Path parameterized by the length of the path and Set Cover parameterized by the universe size do not admit even an α-approximate kernel of polynomial size, for any α ≥ 1, unless NP ⊆ coNP/Poly. In order to prove this lower bound we need to combine in a non-trivial way the techniques used for showing kernelization lower bounds with the methods for showing hardness of approximation. @InProceedings{STOC17p224, author = {Daniel Lokshtanov and Fahad Panolan and M. S. Ramanujan and Saket Saurabh}, title = {Lossy Kernelization}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {224--237}, doi = {}, year = {2017}, } |
|
Rao, Anup B. |
STOC '17: "Sampling Random Spanning Trees ..."
Sampling Random Spanning Trees Faster Than Matrix Multiplication
David Durfee, Rasmus Kyng, John Peebles, Anup B. Rao, and Sushant Sachdeva (Georgia Institute of Technology, USA; Yale University, USA; Massachusetts Institute of Technology, USA; Google, USA) We present an algorithm that, with high probability, generates a random spanning tree from an edge-weighted undirected graph in Õ(n^{5/3} m^{1/3}) time. The tree is sampled from a distribution where the probability of each tree is proportional to the product of its edge weights. This improves upon the previous best algorithm due to Colbourn et al. that runs in matrix multiplication time, O(n^ω). For the special case of unweighted graphs, this improves upon the best previously known running time of Õ(min{n^ω, m√n, m^{4/3}}) for m ≫ n^{7/4} (Colbourn et al. ’96, Kelner-Madry ’09, Madry et al. ’15). The effective resistance metric is essential to our algorithm, as in the work of Madry et al., but we eschew the determinant-based and random walk-based techniques used by previous algorithms. Instead, our algorithm is based on Gaussian elimination, and the fact that effective resistance is preserved in the graph resulting from eliminating a subset of vertices (called a Schur complement). As part of our algorithm, we show how to compute ε-approximate effective resistances for a set S of vertex pairs via approximate Schur complements in Õ(m + (n + |S|)ε^{−2}) time, without using the Johnson-Lindenstrauss lemma, which requires Õ(min{(m + |S|)ε^{−2}, m + nε^{−4} + |S|ε^{−2}}) time. We combine this approximation procedure with an error correction procedure for handling edges where our estimate isn’t sufficiently accurate. @InProceedings{STOC17p730, author = {David Durfee and Rasmus Kyng and John Peebles and Anup B. Rao and Sushant Sachdeva}, title = {Sampling Random Spanning Trees Faster Than Matrix Multiplication}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {730--742}, doi = {}, year = {2017}, } Info STOC '17: "Almost-Linear-Time Algorithms ..." Almost-Linear-Time Algorithms for Markov Chains and New Spectral Primitives for Directed Graphs Michael B. Cohen, Jonathan Kelner, John Peebles, Richard Peng, Anup B. Rao, Aaron Sidford, and Adrian Vladu (Massachusetts Institute of Technology, USA; Georgia Institute of Technology, USA; Stanford University, USA) In this paper, we begin to address the longstanding algorithmic gap between general and reversible Markov chains. We develop directed analogues of several spectral graph-theoretic tools that had previously been available only in the undirected setting, and for which it was not clear that directed versions even existed. In particular, we provide a notion of approximation for directed graphs, prove sparsifiers under this notion always exist, and show how to construct them in almost linear time. Using this notion of approximation, we design the first almost-linear-time directed Laplacian system solver, and, by leveraging the recent framework of [Cohen-Kelner-Peebles-Peng-Sidford-Vladu, FOCS’16], we also obtain almost-linear-time algorithms for computing the stationary distribution of a Markov chain, computing expected commute times in a directed graph, and more. For each problem, our algorithms improve the previous best running times of O((nm^{3/4} + n^{2/3}m) log^{O(1)}(nκε^{−1})) to O((m + n · 2^{O(√(log n log log n))}) log^{O(1)}(nκε^{−1})), where n is the number of vertices in the graph, m is the number of edges, κ is a natural condition number associated with the problem, and ε is the desired accuracy.
We hope these results open the door for further studies into directed spectral graph theory, and that they will serve as a stepping stone for designing a new generation of fast algorithms for directed graphs. @InProceedings{STOC17p410, author = {Michael B. Cohen and Jonathan Kelner and John Peebles and Richard Peng and Anup B. Rao and Aaron Sidford and Adrian Vladu}, title = {Almost-Linear-Time Algorithms for Markov Chains and New Spectral Primitives for Directed Graphs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {410--419}, doi = {}, year = {2017}, } |
|
Rao, Satish |
STOC '17: "Strongly Refuting Random CSPs ..."
Strongly Refuting Random CSPs Below the Spectral Threshold
Prasad Raghavendra, Satish Rao, and Tselil Schramm (University of California at Berkeley, USA) Random constraint satisfaction problems (CSPs) are known to exhibit threshold phenomena: given a uniformly random instance of a CSP with n variables and m clauses, there is a value of m = Ω(n) beyond which the CSP will be unsatisfiable with high probability. Strong refutation is the problem of certifying that no variable assignment satisfies more than a constant fraction of clauses; this is the natural algorithmic problem in the unsatisfiable regime (when m/n = ω(1)). Intuitively, strong refutation should become easier as the clause density m/n grows, because the contradictions introduced by the random clauses become more locally apparent. For CSPs such as k-SAT and k-XOR, there is a long-standing gap between the clause density at which efficient strong refutation algorithms are known, m/n ≥ Õ(n^{k/2−1}), and the clause density at which instances become unsatisfiable with high probability, m/n = ω(1). In this paper, we give spectral and sum-of-squares algorithms for strongly refuting random k-XOR instances with clause density m/n ≥ Õ(n^{(k/2−1)(1−δ)}) in time exp(Õ(n^δ)) or in Õ(n^δ) rounds of the sum-of-squares hierarchy, for any δ ∈ [0,1) and any integer k ≥ 3. Our algorithms provide a smooth transition between the clause density at which polynomial-time algorithms are known at δ = 0, and brute-force refutation at the satisfiability threshold when δ = 1. We also leverage our k-XOR results to obtain strong refutation algorithms for SAT (or any other Boolean CSP) at similar clause densities. Our algorithms match the known sum-of-squares lower bounds due to Grigoriev and Schoenebeck, up to logarithmic factors. @InProceedings{STOC17p121, author = {Prasad Raghavendra and Satish Rao and Tselil Schramm}, title = {Strongly Refuting Random CSPs Below the Spectral Threshold}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {121--131}, doi = {}, year = {2017}, } |
|
Raz, Ran |
STOC '17: "Time-Space Hardness of Learning ..."
Time-Space Hardness of Learning Sparse Parities
Gillat Kol, Ran Raz, and Avishay Tal (Princeton University, USA; IAS, USA) We define a concept class F to be time-space hard (or memory-samples hard) if any learning algorithm for F requires either a memory of size super-linear in n or a number of samples super-polynomial in n, where n is the length of one sample. A recent work shows that the class of all parity functions is time-space hard [Raz, FOCS’16]. Building on [Raz, FOCS’16], we show that the class of all sparse parities of Hamming weight ℓ is time-space hard, as long as ℓ ≥ ω(log n / log log n). Consequently, linear-size DNF Formulas, linear-size Decision Trees and logarithmic-size Juntas are all time-space hard. Our result is more general and provides time-space lower bounds for learning any concept class of parity functions. We give applications of our results in the field of bounded-storage cryptography. For example, for every ω(log n) ≤ k ≤ n, we obtain an encryption scheme that requires a private key of length k, and time complexity of n per encryption/decryption of each bit, and is provably and unconditionally secure as long as the attacker uses at most o(nk) memory bits and the scheme is used at most 2^{o(k)} times. Previously, this was known only for k = n [Raz, FOCS’16]. @InProceedings{STOC17p1067, author = {Gillat Kol and Ran Raz and Avishay Tal}, title = {Time-Space Hardness of Learning Sparse Parities}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1067--1080}, doi = {}, year = {2017}, } |
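A sparse parity of Hamming weight ℓ is simply the XOR of ℓ fixed coordinates; the hypothetical Python snippet below generates the (sample, label) stream that a memory-bounded learner would observe.

    import random

    def parity_samples(n, support, count, rng=random):
        # x is uniform in {0,1}^n; label = XOR of the bits indexed by support.
        for _ in range(count):
            x = [rng.randint(0, 1) for _ in range(n)]
            label = 0
            for i in support:
                label ^= x[i]
            yield x, label

    # A weight-3 sparse parity on 16 variables:
    for x, y in parity_samples(16, support=[2, 5, 11], count=3):
        print(y, x)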
|
Razenshteyn, Ilya |
STOC '17: "Approximate Near Neighbors ..."
Approximate Near Neighbors for General Symmetric Norms
Alexandr Andoni, Huy L. Nguyen, Aleksandar Nikolov, Ilya Razenshteyn, and Erik Waingarten (Columbia University, USA; Northeastern University, USA; University of Toronto, Canada; Massachusetts Institute of Technology, USA) We show that every *symmetric* normed space admits an efficient nearest neighbor search data structure with doubly-logarithmic approximation. Specifically, for every n, d = n^{o(1)}, and every d-dimensional symmetric norm ||·||, there exists a data structure for (log log n)-approximate nearest neighbor search over ||·|| for n-point datasets achieving n^{o(1)} query time and n^{1+o(1)} space. The main technical ingredient of the algorithm is a low-distortion embedding of a symmetric norm into a low-dimensional iterated product of top-k norms. We also show that our techniques cannot be extended to *general* norms. @InProceedings{STOC17p902, author = {Alexandr Andoni and Huy L. Nguyen and Aleksandar Nikolov and Ilya Razenshteyn and Erik Waingarten}, title = {Approximate Near Neighbors for General Symmetric Norms}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {902--913}, doi = {}, year = {2017}, } |
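The embedding target mentioned above is built from top-k norms; for reference, the top-k norm of a vector is the sum of its k largest coordinates in absolute value, as in this short Python sketch (naming ours):

    def top_k_norm(v, k):
        # k = 1 gives the l-infinity norm; k = len(v) gives the l1 norm.
        return sum(sorted((abs(t) for t in v), reverse=True)[:k])

    v = [3, -7, 1, 5]
    print(top_k_norm(v, 1))  # 7
    print(top_k_norm(v, 2))  # 12
    print(top_k_norm(v, 4))  # 16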
|
Regev, Oded |
STOC '17: "Pseudorandomness of Ring-LWE ..."
Pseudorandomness of Ring-LWE for Any Ring and Modulus
Chris Peikert, Oded Regev, and Noah Stephens-Davidowitz (University of Michigan, USA; New York University, USA) We give a polynomial-time quantum reduction from worst-case (ideal) lattice problems directly to decision (Ring-)LWE. This extends to decision all the worst-case hardness results that were previously known for the search version, for the same or even better parameters and with no algebraic restrictions on the modulus or number field. Indeed, our reduction is the first that works for decision Ring-LWE with any number field and any modulus. @InProceedings{STOC17p461, author = {Chris Peikert and Oded Regev and Noah Stephens-Davidowitz}, title = {Pseudorandomness of Ring-LWE for Any Ring and Modulus}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {461--473}, doi = {}, year = {2017}, } STOC '17: "A Reverse Minkowski Theorem ..." A Reverse Minkowski Theorem Oded Regev and Noah Stephens-Davidowitz (New York University, USA) We prove a conjecture due to Dadush, showing that if L ⊂ ℝ^n is a lattice such that det(L′) ≥ 1 for all sublattices L′ ⊆ L, then ∑_{y ∈ L} e^{−t²‖y‖²} ≤ 3/2, where t := 10(log n + 2). This implies bounds on the number of lattice points in Euclidean balls of various radii, which can be seen as a reverse form of Minkowski’s First Theorem. @InProceedings{STOC17p941, author = {Oded Regev and Noah Stephens-Davidowitz}, title = {A Reverse Minkowski Theorem}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {941--953}, doi = {}, year = {2017}, } |
|
Risteski, Andrej |
STOC '17: "Provable Learning of Noisy-or ..."
Provable Learning of Noisy-or Networks
Sanjeev Arora, Rong Ge, Tengyu Ma, and Andrej Risteski (Princeton University, USA; Duke University, USA) Many machine learning applications use latent variable models to explain structure in data, whereby visible variables (= coordinates of the given datapoint) are explained as a probabilistic function of some hidden variables. Learning the model, that is, the mapping from hidden variables to visible ones and vice versa, is NP-hard even in very simple settings. In recent years, provably efficient algorithms were nevertheless developed for models with linear structure: topic models, mixture models, hidden Markov models, etc. These algorithms use matrix or tensor decomposition, and make some reasonable assumptions about the parameters of the underlying model. But matrix or tensor decomposition seems of little use when the latent variable model has nonlinearities. The current paper shows how to make progress: tensor decomposition is applied for learning the single-layer noisy-OR network, which is a textbook example of a Bayes net, and used for example in the classic QMR-DT software for diagnosing which disease(s) a patient may have by observing the symptoms he/she exhibits. The technical novelty here, which should be useful in other settings in the future, is the analysis of tensor decomposition in the presence of systematic error (i.e., where the noise/error is correlated with the signal, and doesn't decrease as the number of samples goes to infinity). This requires rethinking all steps of tensor decomposition methods from the ground up. For simplicity our analysis is stated assuming that the network parameters were chosen from a probability distribution, but the method seems more generally applicable. @InProceedings{STOC17p1057, author = {Sanjeev Arora and Rong Ge and Tengyu Ma and Andrej Risteski}, title = {Provable Learning of Noisy-or Networks}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1057--1066}, doi = {}, year = {2017}, } |
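For reference, sampling from a single-layer noisy-OR network takes only a few lines: each hidden "disease" fires independently, and a visible "symptom" stays off with probability equal to the product of (1 − W[i][j]) over the active diseases. The parameter names in this Python sketch are ours, not QMR-DT's.

    import random

    def sample_noisy_or(prior, W, rng=random):
        # prior[j] = P(disease j present); W[i][j] = P(disease j alone
        # triggers symptom i). Returns 0/1 lists (diseases, symptoms).
        d = [1 if rng.random() < p else 0 for p in prior]
        s = []
        for row in W:
            p_off = 1.0
            for j, w in enumerate(row):
                if d[j]:
                    p_off *= 1.0 - w
            s.append(1 if rng.random() < 1.0 - p_off else 0)
        return d, s

    prior = [0.1, 0.3]
    W = [[0.9, 0.0],   # symptom 0 is triggered only by disease 0
         [0.5, 0.7]]   # symptom 1 can be triggered by either disease
    print(sample_noisy_or(prior, W))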
|
Robere, Robert |
STOC '17: "Strongly Exponential Lower ..."
Strongly Exponential Lower Bounds for Monotone Computation
Toniann Pitassi and Robert Robere (University of Toronto, Canada) We prove size lower bounds of 2^{Ω(n)} for an explicit function in monotone NP in the following models of computation: monotone formulas, monotone switching networks, monotone span programs, and monotone comparator circuits, where n is the number of variables of the underlying function. Our lower bounds improve on the best previous bounds in each of these models, and are the best possible for any function up to constant factors in the exponent. Moreover, we give one unified proof that is short and fairly elementary. @InProceedings{STOC17p1246, author = {Toniann Pitassi and Robert Robere}, title = {Strongly Exponential Lower Bounds for Monotone Computation}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1246--1255}, doi = {}, year = {2017}, } |
|
Robinson, Peter |
STOC '17: "A Time- and Message-Optimal ..."
A Time- and Message-Optimal Distributed Algorithm for Minimum Spanning Trees
Gopal Pandurangan, Peter Robinson, and Michele Scquizzato (University of Houston, USA; Royal Holloway University of London, UK) This paper presents a randomized (Las Vegas) distributed algorithm that constructs a minimum spanning tree (MST) in weighted networks with optimal (up to polylogarithmic factors) time and message complexity. This algorithm runs in Õ(D + √n) time and exchanges Õ(m) messages (both with high probability), where n is the number of nodes of the network, D is the diameter, and m is the number of edges. This is the first distributed MST algorithm that simultaneously matches the time lower bound of Ω(D + √n) [Elkin, SIAM J. Comput. 2006] and the message lower bound of Ω(m) [Kutten et al., J. ACM 2015], both of which apply to randomized Monte Carlo algorithms. The prior time and message lower bounds are derived using two completely different graph constructions; the construction that establishes one bound does not yield the other. To complement our algorithm, we present a new lower bound graph construction for which any distributed MST algorithm requires both Ω(D + √n) rounds and Ω(m) messages. @InProceedings{STOC17p743, author = {Gopal Pandurangan and Peter Robinson and Michele Scquizzato}, title = {A Time- and Message-Optimal Distributed Algorithm for Minimum Spanning Trees}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {743--756}, doi = {}, year = {2017}, } |
|
Rosen, Alon |
STOC '17: "Average-Case Fine-Grained ..."
Average-Case Fine-Grained Hardness
Marshall Ball, Alon Rosen, Manuel Sabin, and Prashant Nalini Vasudevan (Columbia University, USA; IDC Herzliya, Israel; University of California at Berkeley, USA; Massachusetts Institute of Technology, USA) We present functions that can be computed in some fixed polynomial time but are hard on average for any algorithm that runs in slightly smaller time, assuming widely-conjectured worst-case hardness for problems from the study of fine-grained complexity. Unconditional constructions of such functions are known from before (Goldmann et al., IPL ’94), but these have been canonical functions that have not found further use, while our functions are closely related to well-studied problems and have considerable algebraic structure. Based on the average-case hardness and structural properties of our functions, we outline the construction of a Proof of Work scheme and discuss possible approaches to constructing fine-grained One-Way Functions. We also show how our reductions make conjectures regarding the worst-case hardness of the problems we reduce from (and consequently the Strong Exponential Time Hypothesis) heuristically falsifiable in a sense similar to that of (Naor, CRYPTO ’03). We prove our hardness results in each case by showing fine-grained reductions from solving one of three problems – namely, Orthogonal Vectors (OV), 3SUM, and All-Pairs Shortest Paths (APSP) – in the worst case to computing our function correctly on a uniformly random input. The conjectured hardness of OV and 3SUM then gives us functions that require n^{2−o(1)} time to compute on average, and that of APSP gives us a function that requires n^{3−o(1)} time. Using the same techniques we also obtain a conditional average-case time hierarchy of functions. @InProceedings{STOC17p483, author = {Marshall Ball and Alon Rosen and Manuel Sabin and Prashant Nalini Vasudevan}, title = {Average-Case Fine-Grained Hardness}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {483--496}, doi = {}, year = {2017}, } |
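For concreteness, Orthogonal Vectors (OV), one of the three source problems, asks whether two lists of d-dimensional 0/1 vectors contain a pair with all-zero coordinate-wise product; the conjecture is that the quadratic brute force in this Python sketch is essentially optimal.

    def has_orthogonal_pair(A, B):
        # Brute force in O(|A| * |B| * d); the OV conjecture rules out
        # n^(2 - eps) * poly(d) time for every eps > 0.
        return any(all(a * b == 0 for a, b in zip(u, v))
                   for u in A for v in B)

    A = [(1, 0, 1), (0, 1, 1)]
    B = [(1, 1, 0), (0, 1, 0)]
    print(has_orthogonal_pair(A, B))  # True: (1,0,1) and (0,1,0) are orthogonal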
|
Roughgarden, Tim |
STOC '17: "Why Prices Need Algorithms ..."
Why Prices Need Algorithms (Invited Talk)
Tim Roughgarden and Inbal Talgam-Cohen (Stanford University, USA; Hebrew University of Jerusalem, Israel) Computational complexity has already had plenty to say about the computation of economic equilibria. However, understanding when equilibria are guaranteed to exist is a central theme in economic theory, seemingly unrelated to computation. In this talk we survey our main results presented at EC’15, which show that the existence of equilibria in markets is inextricably connected to the computational complexity of related optimization problems, such as revenue or welfare maximization. We demonstrate how this relationship implies, under suitable complexity assumptions, a host of impossibility results. We also suggest a complexity-theoretic explanation for the lack of useful extensions of the Walrasian equilibrium concept: such extensions seem to require the invention of novel polynomial-time algorithms for welfare maximization. @InProceedings{STOC17p2, author = {Tim Roughgarden and Inbal Talgam-Cohen}, title = {Why Prices Need Algorithms (Invited Talk)}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {2--2}, doi = {}, year = {2017}, } |
|
Rubinstein, Aviad |
STOC '17: "The Limitations of Optimization ..."
The Limitations of Optimization from Samples
Eric Balkanski, Aviad Rubinstein, and Yaron Singer (Harvard University, USA; University of California at Berkeley, USA) In this paper we consider the following question: can we optimize objective functions from the training data we use to learn them? We formalize this question through a novel framework we call optimization from samples (OPS). In OPS, we are given sampled values of a function drawn from some distribution and the objective is to optimize the function under some constraint. While there are interesting classes of functions that can be optimized from samples, our main result is an impossibility. We show that there are classes of functions which are statistically learnable and optimizable, but for which no reasonable approximation for optimization from samples is achievable. In particular, our main result shows that there is no constant factor approximation for maximizing coverage functions under a cardinality constraint using polynomially many samples drawn from any distribution. We also show tight approximation guarantees for maximization under a cardinality constraint of several interesting classes of functions including unit-demand, additive, and general monotone submodular functions, as well as a constant factor approximation for monotone submodular functions with bounded curvature. @InProceedings{STOC17p1016, author = {Eric Balkanski and Aviad Rubinstein and Yaron Singer}, title = {The Limitations of Optimization from Samples}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1016--1027}, doi = {}, year = {2017}, } STOC '17: "Communication Complexity of ..." Communication Complexity of Approximate Nash Equilibria Yakov Babichenko and Aviad Rubinstein (Technion, Israel; University of California at Berkeley, USA) For a constant ε, we prove a poly(N) lower bound on the (randomized) communication complexity of ε-Nash equilibrium in two-player N × N games. For n-player binary-action games we prove an exp(n) lower bound for the (randomized) communication complexity of (ε, ε)-weak approximate Nash equilibrium, which is a profile of mixed actions such that at least a (1 − ε)-fraction of the players are ε-best replying. @InProceedings{STOC17p878, author = {Yakov Babichenko and Aviad Rubinstein}, title = {Communication Complexity of Approximate Nash Equilibria}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {878--889}, doi = {}, year = {2017}, } |
|
Rudra, Atri |
STOC '17: "Answering FAQs in CSPs, Probabilistic ..."
Answering FAQs in CSPs, Probabilistic Graphical Models, Databases, Logic and Matrix Operations (Invited Talk)
Atri Rudra (SUNY Buffalo, USA) In this talk we will discuss a general framework to solve certain sums of products of functions over semi-rings. This captures many well-known problems in disparate areas such as CSPs, Probabilistic Graphical Models, Databases, Logic and Matrix Operations. This talk is based on joint work titled FAQ: Questions Asked Frequently with Mahmoud Abo Khamis and Hung Q. Ngo, which appeared in PODS 2016. @InProceedings{STOC17p4, author = {Atri Rudra}, title = {Answering FAQs in CSPs, Probabilistic Graphical Models, Databases, Logic and Matrix Operations (Invited Talk)}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {4--4}, doi = {}, year = {2017}, } |
|
Sabin, Manuel |
STOC '17: "Average-Case Fine-Grained ..."
Average-Case Fine-Grained Hardness
Marshall Ball, Alon Rosen, Manuel Sabin, and Prashant Nalini Vasudevan (Columbia University, USA; IDC Herzliya, Israel; University of California at Berkeley, USA; Massachusetts Institute of Technology, USA) We present functions that can be computed in some fixed polynomial time but are hard on average for any algorithm that runs in slightly smaller time, assuming widely-conjectured worst-case hardness for problems from the study of fine-grained complexity. Unconditional constructions of such functions are known from before (Goldmann et al., IPL ’94), but these have been canonical functions that have not found further use, while our functions are closely related to well-studied problems and have considerable algebraic structure. Based on the average-case hardness and structural properties of our functions, we outline the construction of a Proof of Work scheme and discuss possible approaches to constructing fine-grained One-Way Functions. We also show how our reductions make conjectures regarding the worst-case hardness of the problems we reduce from (and consequently the Strong Exponential Time Hypothesis) heuristically falsifiable in a sense similar to that of (Naor, CRYPTO ’03). We prove our hardness results in each case by showing fine-grained reductions from solving one of three problems – namely, Orthogonal Vectors (OV), 3SUM, and All-Pairs Shortest Paths (APSP) – in the worst case to computing our function correctly on a uniformly random input. The conjectured hardness of OV and 3SUM then gives us functions that require n^{2−o(1)} time to compute on average, and that of APSP gives us a function that requires n^{3−o(1)} time. Using the same techniques we also obtain a conditional average-case time hierarchy of functions. @InProceedings{STOC17p483, author = {Marshall Ball and Alon Rosen and Manuel Sabin and Prashant Nalini Vasudevan}, title = {Average-Case Fine-Grained Hardness}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {483--496}, doi = {}, year = {2017}, } |
|
Sachdeva, Sushant |
STOC '17: "Sampling Random Spanning Trees ..."
Sampling Random Spanning Trees Faster Than Matrix Multiplication
David Durfee, Rasmus Kyng, John Peebles, Anup B. Rao, and Sushant Sachdeva (Georgia Institute of Technology, USA; Yale University, USA; Massachusetts Institute of Technology, USA; Google, USA) We present an algorithm that, with high probability, generates a random spanning tree from an edge-weighted undirected graph in Õ(n^(5/3) m^(1/3)) time. The tree is sampled from a distribution where the probability of each tree is proportional to the product of its edge weights. This improves upon the previous best algorithm due to Colbourn et al. that runs in matrix multiplication time, O(nω). For the special case of unweighted graphs, this improves upon the best previously known running time of Õ(min{nω,m√n,m4/3}) for m ≫ n7/4 (Colbourn et al. ’96, Kelner-Madry ’09, Madry et al. ’15). The effective resistance metric is essential to our algorithm, as in the work of Madry et al., but we eschew determinant-based and random walk-based techniques used by previous algorithms. Instead, our algorithm is based on Gaussian elimination, and the fact that effective resistance is preserved in the graph resulting from eliminating a subset of vertices (called a Schur complement). As part of our algorithm, we show how to compute ε-approximate effective resistances for a set S of vertex pairs via approximate Schur complements in Õ(m + (n + |S|)ε^(−2)) time, without using the Johnson-Lindenstrauss lemma which requires Õ(min{(m + |S|)ε^(−2), m + nε^(−4) + |S|ε^(−2)}) time. We combine this approximation procedure with an error correction procedure for handling edges where our estimate isn’t sufficiently accurate. @InProceedings{STOC17p730, author = {David Durfee and Rasmus Kyng and John Peebles and Anup B. Rao and Sushant Sachdeva}, title = {Sampling Random Spanning Trees Faster Than Matrix Multiplication}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {730--742}, doi = {}, year = {2017}, } |
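The Schur-complement fact this algorithm rests on is easy to check numerically. The small sketch below is ours (it assumes nothing about the paper's implementation); it verifies that eliminating a vertex subset from a graph Laplacian preserves effective resistances among the remaining vertices.

import numpy as np

def laplacian(n, edges):
    L = np.zeros((n, n))
    for u, v, w in edges:
        L[u, u] += w; L[v, v] += w
        L[u, v] -= w; L[v, u] -= w
    return L

def eff_res(L, u, v):
    # Effective resistance via the Laplacian pseudoinverse.
    pinv = np.linalg.pinv(L)
    chi = np.zeros(L.shape[0]); chi[u], chi[v] = 1.0, -1.0
    return chi @ pinv @ chi

n = 5
edges = [(0, 1, 1.0), (1, 2, 2.0), (2, 3, 1.0), (3, 4, 3.0), (4, 0, 1.0), (1, 3, 1.0)]
L = laplacian(n, edges)

keep, elim = [0, 1, 2], [3, 4]           # eliminate vertices 3 and 4
A = L[np.ix_(keep, keep)]; B = L[np.ix_(keep, elim)]
C = L[np.ix_(elim, keep)]; D = L[np.ix_(elim, elim)]
schur = A - B @ np.linalg.inv(D) @ C     # again a graph Laplacian, on `keep`

print(eff_res(L, 0, 2), eff_res(schur, 0, 2))  # the two values agree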
|
Safra, Muli |
STOC '17: "On Independent Sets, 2-to-2 ..."
On Independent Sets, 2-to-2 Games, and Grassmann Graphs
Subhash Khot, Dor Minzer, and Muli Safra (New York University, USA; Tel Aviv University, Israel) We present a candidate reduction from the 3-Lin problem to the 2-to-2 Games problem and present a combinatorial hypothesis about Grassmann graphs which, if correct, is sufficient to show the soundness of the reduction in a certain non-standard sense. A reduction that is sound in this non-standard sense implies that it is NP-hard to distinguish whether an n-vertex graph has an independent set of size ( 1− 1/√2 ) n − o(n) or whether every independent set has size o(n), and consequently, that it is NP-hard to approximate the Vertex Cover problem within a factor √2−o(1). @InProceedings{STOC17p576, author = {Subhash Khot and Dor Minzer and Muli Safra}, title = {On Independent Sets, 2-to-2 Games, and Grassmann Graphs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {576--589}, doi = {}, year = {2017}, } |
|
Sankowski, Piotr |
STOC '17: "Decremental Single-Source ..."
Decremental Single-Source Reachability in Planar Digraphs
Giuseppe F. Italiano, Adam Karczmarz, Jakub Łącki, and Piotr Sankowski (University of Rome Tor Vergata, Italy; University of Warsaw, Poland; Google Research, USA) In this paper we show a new algorithm for the decremental single-source reachability problem in directed planar graphs. It processes any sequence of edge deletions in O(n log^2 n loglogn) total time and explicitly maintains the set of vertices reachable from a fixed source vertex. Hence, if all edges are eventually deleted, the amortized time of processing each edge deletion is only O(log^2 n loglogn), which improves upon a previously known O(√n) solution. We also show an algorithm for decremental maintenance of strongly connected components in directed planar graphs with the same total update time. These results constitute the first almost optimal (up to polylogarithmic factors) algorithms for both problems. To the best of our knowledge, these are the first dynamic algorithms with polylogarithmic update times on general directed planar graphs for non-trivial reachability-type problems, for which only polynomial bounds are known in general graphs. @InProceedings{STOC17p1108, author = {Giuseppe F. Italiano and Adam Karczmarz and Jakub Łącki and Piotr Sankowski}, title = {Decremental Single-Source Reachability in Planar Digraphs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1108--1121}, doi = {}, year = {2017}, } |
|
Santhanam, Rahul |
STOC '17: "Pseudodeterministic Constructions ..."
Pseudodeterministic Constructions in Subexponential Time
Igor C. Oliveira and Rahul Santhanam (Charles University in Prague, Czechia; University of Oxford, UK) We study pseudodeterministic constructions, i.e., randomized algorithms which output the same solution on most computation paths. We establish unconditionally that there is an infinite sequence {pn} of primes and a randomized algorithm A running in expected sub-exponential time such that for each n, on input 1|pn|, A outputs pn with probability 1. In other words, our result provides a pseudodeterministic construction of primes in sub-exponential time which works infinitely often. This result follows from a more general theorem about pseudodeterministic constructions. A property Q ⊆ {0,1}* is γ-dense if for large enough n, |Q ∩ {0,1}n| ≥ γ 2n. We show that for each c > 0 at least one of the following holds: (1) There is a pseudodeterministic polynomial time construction of a family {Hn} of sets, Hn ⊆ {0,1}n, such that for each (1/nc)-dense property Q ∈ DTIME(nc) and every large enough n, Hn ∩ Q ≠ ∅; or (2) There is a deterministic sub-exponential time construction of a family {H′n} of sets, H′n ⊆ {0,1}n, such that for each (1/nc)-dense property Q ∈ DTIME(nc) and for infinitely many values of n, H′n ∩ Q ≠ ∅. We provide further algorithmic applications that might be of independent interest. Perhaps intriguingly, while our main results are unconditional, they have a non-constructive element, arising from a sequence of applications of the hardness versus randomness paradigm. @InProceedings{STOC17p665, author = {Igor C. Oliveira and Rahul Santhanam}, title = {Pseudodeterministic Constructions in Subexponential Time}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {665--677}, doi = {}, year = {2017}, } |
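The notion of a pseudodeterministic algorithm can be illustrated with a toy sketch (ours, and deliberately much weaker than the paper's construction): the routine below is randomized, yet with overwhelming probability every run outputs the same canonical prime, because randomness is used only inside Miller-Rabin tests whose error can be driven down.

import random

def miller_rabin(n, rng, rounds=40):
    if n < 2: return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0: return n == p
    d, r = n - 1, 0
    while d % 2 == 0: d //= 2; r += 1
    for _ in range(rounds):                    # the only use of randomness
        a = rng.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1): continue
        for _ in range(r - 1):
            x = x * x % n
            if x == n - 1: break
        else:
            return False
    return True

def canonical_prime_above(N, seed):
    rng = random.Random(seed)                  # different seeds = different coin flips
    n = N + 1
    while not miller_rabin(n, rng): n += 1     # scan for the smallest prime above N
    return n

# Five independent runs output the same value w.h.p.: pseudodeterminism.
print({canonical_prime_above(10**12, s) for s in range(5)})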
|
Saranurak, Thatchaphol |
STOC '17: "Dynamic Spanning Forest with ..."
Dynamic Spanning Forest with Worst-Case Update Time: Adaptive, Las Vegas, and O(n1/2 - ε)-Time
Danupon Nanongkai and Thatchaphol Saranurak (KTH, Sweden) We present two algorithms for dynamically maintaining a spanning forest of a graph undergoing edge insertions and deletions. Our algorithms guarantee worst-case update time and work against an adaptive adversary, meaning that an edge update can depend on previous outputs of the algorithms. We provide the first polynomial improvement over the long-standing O(√n) bound of [Frederickson STOC’84, Eppstein, Galil, Italiano and Nissenzweig FOCS’92] for this type of algorithm. The previous best improvement was O(√n (loglogn)2/logn) [Kejlberg-Rasmussen, Kopelowitz, Pettie and Thorup ESA’16]. We note however that these bounds were obtained by deterministic algorithms while our algorithms are randomized. Our first algorithm is Monte Carlo and guarantees an O(n0.4+o(1)) worst-case update time, where the o(1) term hides the O(√loglogn/logn) factor. Our second algorithm is Las Vegas and guarantees an O(n0.49306) worst-case update time with high probability. Algorithms with better update time either needed to assume that the adversary is oblivious (e.g. [Kapron, King and Mountjoy SODA’13]) or could only guarantee an amortized update time. Our second result answers an open problem by Kapron et al. To the best of our knowledge, our algorithms are among the few non-trivial randomized dynamic algorithms that work against adaptive adversaries. The key to our results is a decomposition of graphs into subgraphs that either have high expansion or are sparse. This decomposition serves as an interface between recent developments on (static) flow computation and many old ideas in dynamic graph algorithms: On the one hand, we can combine previous dynamic graph techniques to get faster dynamic spanning forest algorithms if such a decomposition is given. On the other hand, we can adapt flow-related techniques (e.g. those from [Khandekar, Rao and Vazirani STOC’06], [Peng SODA’16], and [Orecchia and Zhu SODA’14]) to maintain such a decomposition. To the best of our knowledge, this is the first time these flow techniques are used in fully dynamic graph algorithms. @InProceedings{STOC17p1122, author = {Danupon Nanongkai and Thatchaphol Saranurak}, title = {Dynamic Spanning Forest with Worst-Case Update Time: Adaptive, Las Vegas, and O(n<sup>1/2 - ε</sup>)-Time}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1122--1129}, doi = {}, year = {2017}, } |
|
Saurabh, Saket |
STOC '17: "Lossy Kernelization ..."
Lossy Kernelization
Daniel Lokshtanov, Fahad Panolan, M. S. Ramanujan, and Saket Saurabh (University of Bergen, Norway; Vienna University of Technology, Austria; Institute of Mathematical Sciences, India) In this paper we propose a new framework for analyzing the performance of preprocessing algorithms. Our framework builds on the notion of kernelization from parameterized complexity. However, as opposed to the original notion of kernelization, our definitions combine well with approximation algorithms and heuristics. The key new definition is that of a polynomial size α-approximate kernel. Loosely speaking, a polynomial size α-approximate kernel is a polynomial time pre-processing algorithm that takes as input an instance (I, k) to a parameterized problem, and outputs another instance (I′,k′) to the same problem, such that |I′| + k′ ≤ kO(1). Additionally, for every c≥ 1, a c-approximate solution s′ to the pre-processed instance (I′, k′) can be turned in polynomial time into a (c · α)-approximate solution s to the original instance (I,k). Amongst our main technical contributions are α-approximate kernels of polynomial size for three problems, namely Connected Vertex Cover, Disjoint Cycle Packing and Disjoint Factors. These problems are known not to admit any polynomial size kernels unless NP⊆ coNP/Poly. Our approximate kernels simultaneously beat both the lower bounds on the (normal) kernel size, and the hardness of approximation lower bounds for all three problems. On the negative side we prove that Longest Path parameterized by the length of the path and Set Cover parameterized by the universe size do not admit even an α-approximate kernel of polynomial size, for any α≥ 1, unless NP ⊆ coNP/Poly. In order to prove this lower bound we need to combine in a non-trivial way the techniques used for showing kernelization lower bounds with the methods for showing hardness of approximation. @InProceedings{STOC17p224, author = {Daniel Lokshtanov and Fahad Panolan and M. S. Ramanujan and Saket Saurabh}, title = {Lossy Kernelization}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {224--237}, doi = {}, year = {2017}, } |
|
Scarlett, Jonathan |
STOC '17: "An Adaptive Sublinear-Time ..."
An Adaptive Sublinear-Time Block Sparse Fourier Transform
Volkan Cevher, Michael Kapralov, Jonathan Scarlett, and Amir Zandieh (EPFL, Switzerland) The problem of approximately computing the k dominant Fourier coefficients of a vector X quickly, and using few samples in time domain, is known as the Sparse Fourier Transform (sparse FFT) problem. A long line of work on the sparse FFT has resulted in algorithms with O(klognlog(n/k)) runtime [Hassanieh et al., STOC’12] and O(klogn) sample complexity [Indyk et al., FOCS’14]. This paper revisits the sparse FFT problem with the added twist that the sparse coefficients approximately obey a (k0,k1)-block sparse model. In this model, signal frequencies are clustered in k0 intervals with width k1 in Fourier space, and k = k0k1 is the total sparsity. Our main result is the first sparse FFT algorithm for (k0, k1)-block sparse signals with a sample complexity of O*(k0k1 + k0log(1+ k0)logn) at constant signal-to-noise ratios, and sublinear runtime. Our algorithm crucially uses adaptivity to achieve the improved sample complexity bound, and we provide a lower bound showing that this is essential in the Fourier setting: Any non-adaptive algorithm must use Ω(k0k1 log(n/(k0k1))) samples for the (k0,k1)-block sparse model, ruling out improvements over the vanilla sparsity assumption. Our main technical innovation for adaptivity is a new randomized energy-based importance sampling technique that may be of independent interest. @InProceedings{STOC17p702, author = {Volkan Cevher and Michael Kapralov and Jonathan Scarlett and Amir Zandieh}, title = {An Adaptive Sublinear-Time Block Sparse Fourier Transform}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {702--715}, doi = {}, year = {2017}, } |
|
Schramm, Tselil |
STOC '17: "Strongly Refuting Random CSPs ..."
Strongly Refuting Random CSPs Below the Spectral Threshold
Prasad Raghavendra, Satish Rao, and Tselil Schramm (University of California at Berkeley, USA) Random constraint satisfaction problems (CSPs) are known to exhibit threshold phenomena: given a uniformly random instance of a CSP with n variables and m clauses, there is a value of m = Ω(n) beyond which the CSP will be unsatisfiable with high probability. Strong refutation is the problem of certifying that no variable assignment satisfies more than a constant fraction of clauses; this is the natural algorithmic problem in the unsatisfiable regime (when m/n = ω(1)). Intuitively, strong refutation should become easier as the clause density m/n grows, because the contradictions introduced by the random clauses become more locally apparent. For CSPs such as k-SAT and k-XOR, there is a long-standing gap between the clause density at which efficient strong refutation algorithms are known, m/n ≥ Õ(n^(k/2−1)), and the clause density at which instances become unsatisfiable with high probability, m/n = ω(1). In this paper, we give spectral and sum-of-squares algorithms for strongly refuting random k-XOR instances with clause density m/n ≥ Õ(n^((k/2−1)(1−δ))) in time exp(Õ(n^δ)) or in Õ(n^δ) rounds of the sum-of-squares hierarchy, for any δ ∈ [0,1) and any integer k ≥ 3. Our algorithms provide a smooth transition between the clause density at which polynomial-time algorithms are known at δ = 0, and brute-force refutation at the satisfiability threshold when δ = 1. We also leverage our k-XOR results to obtain strong refutation algorithms for SAT (or any other Boolean CSP) at similar clause densities. Our algorithms match the known sum-of-squares lower bounds due to Grigoriev and Schoenebeck, up to logarithmic factors. @InProceedings{STOC17p121, author = {Prasad Raghavendra and Satish Rao and Tselil Schramm}, title = {Strongly Refuting Random CSPs Below the Spectral Threshold}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {121--131}, doi = {}, year = {2017}, } |
|
Scquizzato, Michele |
STOC '17: "A Time- and Message-Optimal ..."
A Time- and Message-Optimal Distributed Algorithm for Minimum Spanning Trees
Gopal Pandurangan, Peter Robinson, and Michele Scquizzato (University of Houston, USA; Royal Holloway University of London, UK) This paper presents a randomized (Las Vegas) distributed algorithm that constructs a minimum spanning tree (MST) in weighted networks with optimal (up to polylogarithmic factors) time and message complexity. This algorithm runs in Õ(D + √n) time and exchanges Õ(m) messages (both with high probability), where n is the number of nodes of the network, D is the diameter, and m is the number of edges. This is the first distributed MST algorithm that simultaneously matches the time lower bound of Ω(D + √n) [Elkin, SIAM J. Comput. 2006] and the message lower bound of Ω(m) [Kutten et al., J. ACM 2015], which both apply to randomized Monte Carlo algorithms. The prior time and message lower bounds were derived using two completely different graph constructions; the construction that establishes one lower bound does not yield the other. To complement our algorithm, we present a new lower bound graph construction for which any distributed MST algorithm requires both Ω(D + √n) rounds and Ω(m) messages. @InProceedings{STOC17p743, author = {Gopal Pandurangan and Peter Robinson and Michele Scquizzato}, title = {A Time- and Message-Optimal Distributed Algorithm for Minimum Spanning Trees}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {743--756}, doi = {}, year = {2017}, } |
|
Servedio, Rocco A. |
STOC '17: "Addition Is Exponentially ..."
Addition Is Exponentially Harder Than Counting for Shallow Monotone Circuits
Xi Chen, Igor C. Oliveira, and Rocco A. Servedio (Columbia University, USA; Charles University in Prague, Czechia) Let Addk,N denote the Boolean function which takes as input k strings of N bits each, representing k numbers a(1),…,a(k) in {0,1,…,2N−1}, and outputs 1 if and only if a(1) + ⋯ + a(k) ≥ 2N. Let MAJt,n denote a monotone unweighted threshold gate, i.e., the Boolean function which takes as input a single string x ∈ {0,1}n and outputs 1 if and only if x1 + ⋯ + xn ≥ t. The function Addk,N may be viewed as a monotone function that performs addition, and MAJt,n may be viewed as a monotone gate that performs counting. We refer to circuits that are composed of MAJ gates as monotone majority circuits. The main result of this paper is an exponential lower bound on the size of bounded-depth monotone majority circuits that compute Addk,N. More precisely, we show that for any constant d ≥ 2, any depth-d monotone majority circuit that computes Addd,N must have size 2Ω(N1/d). As Addk,N can be computed by a single monotone weighted threshold gate (that uses exponentially large weights), our lower bound implies that constant-depth monotone majority circuits require exponential size to simulate monotone weighted threshold gates. This answers a question posed by Goldmann and Karpinski (STOC’93) and recently restated by Håstad (2010, 2014). We also show that our lower bound is essentially best possible, by constructing a depth-d, size 2O(N1/d) monotone majority circuit for Addd,N. As a corollary of our lower bound, we significantly strengthen a classical theorem in circuit complexity due to Ajtai and Gurevich (JACM’87). They exhibited a monotone function that is in AC0 but requires super-polynomial size for any constant-depth monotone circuit composed of unbounded fan-in AND and OR gates. We describe a monotone function that is in depth-3 AC0 but requires exponential size monotone circuits of any constant depth, even if the circuits are composed of MAJ gates. @InProceedings{STOC17p1232, author = {Xi Chen and Igor C. Oliveira and Rocco A. Servedio}, title = {Addition Is Exponentially Harder Than Counting for Shallow Monotone Circuits}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1232--1245}, doi = {}, year = {2017}, } STOC '17: "Optimal Mean-Based Algorithms ..." Optimal Mean-Based Algorithms for Trace Reconstruction Anindya De, Ryan O'Donnell, and Rocco A. Servedio (Northwestern University, USA; Carnegie Mellon University, USA; Columbia University, USA) In the (deletion-channel) trace reconstruction problem, there is an unknown n-bit source string x. An algorithm is given access to independent “traces” of x, where a trace is formed by deleting each bit of x independently with probability δ. The goal of the algorithm is to recover x exactly (with high probability), while minimizing samples (number of traces) and running time. Previously, the best known algorithm for the trace reconstruction problem was due to Holenstein et al. [HMPW08]; it uses exp(O(n1/2)) samples and running time for any fixed 0 < δ < 1. It is also what we call a “mean-based algorithm”, meaning that it only uses the empirical means of the individual bits of the traces. Holenstein et al. also gave a lower bound, showing that any mean-based algorithm must use at least nΩ(logn) samples. In this paper we improve both of these results, obtaining matching upper and lower bounds for mean-based trace reconstruction. 
For any constant deletion rate 0 < δ < 1, we give a mean-based algorithm that uses exp(O(n1/3)) time and traces; we also prove that any mean-based algorithm must use at least exp(Ω(n1/3)) traces. In fact, we obtain matching upper and lower bounds even for δ subconstant and ρ := 1−δ subconstant: when (log3 n)/n ≪ δ ≤ 1/2 the bound is exp(Θ(δ n)1/3), and when 1/√n ≪ ρ ≤ 1/2 the bound is exp(Θ(n/ρ)1/3). Our proofs involve estimates for the maxima of Littlewood polynomials on complex disks. We show that these techniques can also be used to perform trace reconstruction with random insertions and bit-flips in addition to deletions. We also find a surprising result: for deletion probabilities δ > 1/2, the presence of insertions can actually help with trace reconstruction. @InProceedings{STOC17p1047, author = {Anindya De and Ryan O'Donnell and Rocco A. Servedio}, title = {Optimal Mean-Based Algorithms for Trace Reconstruction}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1047--1056}, doi = {}, year = {2017}, } |
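A toy simulation (our sketch, not the paper's algorithm) of the mean-based setting: each trace is the source string passed through the deletion channel, and a mean-based algorithm is allowed to retain only the empirical mean of each trace position.

import random

def trace(x, delta, rng):
    # Deletion channel: each bit of x is deleted independently w.p. delta.
    return [b for b in x if rng.random() > delta]

def empirical_means(x, delta, num_traces, seed=0):
    rng = random.Random(seed)
    n = len(x)
    sums = [0.0] * n
    for _ in range(num_traces):
        for j, b in enumerate(trace(x, delta, rng)):
            sums[j] += b
    return [s / num_traces for s in sums]

x = [1, 0, 1, 1, 0, 0, 1, 0]
means = empirical_means(x, delta=0.3, num_traces=20000)
print([round(m, 3) for m in means])
# A mean-based algorithm must recover x from these n numbers alone; the paper
# shows exp(Theta(n^{1/3})) traces are both necessary and sufficient for that.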
|
Shahrasbi, Amirbehshad |
STOC '17: "Synchronization Strings: Codes ..."
Synchronization Strings: Codes for Insertions and Deletions Approaching the Singleton Bound
Bernhard Haeupler and Amirbehshad Shahrasbi (Carnegie Mellon University, USA) We introduce synchronization strings, which provide a novel way of efficiently dealing with synchronization errors, i.e., insertions and deletions. Synchronization errors are strictly more general and much harder to deal with than more commonly considered half-errors, i.e., symbol corruptions and erasures. For every є >0, synchronization strings allow one to index a sequence with an є^(−O(1)) size alphabet such that one can efficiently transform k synchronization errors into (1 + є)k half-errors. This powerful new technique has many applications. In this paper we focus on designing insdel codes, i.e., error correcting block codes (ECCs) for insertion-deletion channels. While ECCs for both half-errors and synchronization errors have been intensely studied, the latter has largely resisted progress. As Mitzenmacher puts it in his 2009 survey: “Channels with synchronization errors ... are simply not adequately understood by current theory. Given the near-complete knowledge we have for channels with erasures and errors ... our lack of understanding about channels with synchronization errors is truly remarkable.” Indeed, it took until 1999 for the first insdel codes with constant rate, constant distance, and constant alphabet size to be constructed and only since 2016 are there constructions of constant-rate insdel codes for asymptotically large noise rates. Even in the asymptotically large or small noise regime these codes are polynomially far from the optimal rate-distance tradeoff. This makes the understanding of insdel codes up to this work equivalent to what was known for regular ECCs after Forney introduced concatenated codes in his doctoral thesis 50 years ago. A straightforward application of our synchronization-string-based indexing method gives a simple black-box construction which transforms any ECC into an equally efficient insdel code with only a small increase in the alphabet size. This instantly transfers much of the highly developed understanding for regular ECCs over large constant alphabets into the realm of insdel codes. Most notably, for the complete noise spectrum we obtain efficient “near-MDS” insdel codes which get arbitrarily close to the optimal rate-distance tradeoff given by the Singleton bound. In particular, for any δ ∈ (0,1) and є>0 we give insdel codes achieving a rate of 1 − δ − є over a constant size alphabet that efficiently correct a δ fraction of insertions or deletions. @InProceedings{STOC17p33, author = {Bernhard Haeupler and Amirbehshad Shahrasbi}, title = {Synchronization Strings: Codes for Insertions and Deletions Approaching the Singleton Bound}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {33--46}, doi = {}, year = {2017}, } |
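The indexing idea can be seen in its most naive form in the sketch below (ours, with the simplification that positions are written out explicitly): tagging each symbol with its index turns deletions into erasures, which any standard ECC handles. Synchronization strings achieve essentially this effect while using an alphabet whose size is independent of n.

def index_message(msg):
    # Naive indexing: pair each symbol with its position.
    return [(i, c) for i, c in enumerate(msg)]

def deletions_to_erasures(received, n):
    out = [None] * n                 # None marks an erasure
    for i, c in received:            # surviving symbols carry their positions
        out[i] = c
    return out

sent = index_message("synchronize")
received = [sent[i] for i in range(len(sent)) if i not in (2, 5, 9)]  # 3 deletions
print(deletions_to_erasures(received, len(sent)))
# Output has erasures exactly at positions 2, 5, 9; a standard erasure-correcting
# code now recovers the message. The cost of this naive scheme is a log(n)-size
# index alphabet, which is what synchronization strings avoid.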
|
Shapira, Asaf |
STOC '17: "Removal Lemmas with Polynomial ..."
Removal Lemmas with Polynomial Bounds
Lior Gishboliner and Asaf Shapira (Tel Aviv University, Israel) We give new sufficient and necessary criteria guaranteeing that a hereditary graph property can be tested with a polynomial query complexity. Although both are simple combinatorial criteria, they imply almost all prior positive and negative results of this type, as well as many new ones. One striking application of our results is that every semi-algebraic graph property (e.g., being an interval graph, a unit-disc graph etc.) can be tested with a polynomial query complexity. This confirms a conjecture of Alon. The proofs combine probabilistic ideas together with a novel application of a conditional regularity lemma for matrices, due to Alon, Fischer and Newman. @InProceedings{STOC17p510, author = {Lior Gishboliner and Asaf Shapira}, title = {Removal Lemmas with Polynomial Bounds}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {510--522}, doi = {}, year = {2017}, } |
|
Sherman, Jonah |
STOC '17: "Area-Convexity, ℓ∞ ..."
Area-Convexity, ℓ∞ Regularization, and Undirected Multicommodity Flow
Jonah Sherman (University of California at Berkeley, USA) We show the strong-convexity assumption of regularization-based methods for solving bilinear saddle point problems may be relaxed to a weaker notion of area-convexity with respect to an alternating bilinear form. This allows bypassing the infamous ℓ∞ barrier for strongly convex regularizers that has stalled progress on a number of algorithmic problems. Applying area-convex regularization, we present a nearly-linear time approximation algorithm for solving matrix inequality systems A X ≤ B over right-stochastic matrices X. By combining that algorithm with existing work on preconditioning maximum-flow, we obtain a nearly-linear time approximation algorithm for maximum concurrent flow in undirected graphs: given an undirected, capacitated graph with m edges and k demand vectors, the algorithm takes Õ(mkє−1) time and outputs k flows routing the specified demands with total congestion at most (1+є) times optimal. @InProceedings{STOC17p452, author = {Jonah Sherman}, title = {Area-Convexity, ℓ<sub>∞</sub> Regularization, and Undirected Multicommodity Flow}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {452--460}, doi = {}, year = {2017}, } |
|
Shpilka, Amir |
STOC '17: "Succinct Hitting Sets and ..."
Succinct Hitting Sets and Barriers to Proving Algebraic Circuits Lower Bounds
Michael A. Forbes, Amir Shpilka, and Ben Lee Volk (Simons Institute for the Theory of Computing Berkeley, USA; Tel Aviv University, Israel) We formalize a framework of algebraically natural lower bounds for algebraic circuits. Just as with the natural proofs notion of Razborov and Rudich for boolean circuit lower bounds, our notion of algebraically natural lower bounds captures nearly all lower bound techniques known. However, unlike the boolean setting, there has been no concrete evidence demonstrating that this is a barrier to obtaining super-polynomial lower bounds for general algebraic circuits, as there is little understanding of whether algebraic circuits are expressive enough to support "cryptography" secure against algebraic circuits. Following a similar result of Williams in the boolean setting, we show that the existence of an algebraic natural proofs barrier is equivalent to the existence of succinct derandomization of the polynomial identity testing problem. That is, whether the coefficient vectors of polylog(N)-degree polylog(N)-size circuits form a hitting set for the class of poly(N)-degree poly(N)-size circuits. Further, we give an explicit universal construction showing that if such a succinct hitting set exists, then our universal construction suffices. We then assess the existing literature constructing hitting sets for restricted classes of algebraic circuits and observe that none of them are succinct as given. Yet, we show how to modify some of these constructions to obtain succinct hitting sets. This constitutes the first evidence supporting the existence of an algebraic natural proofs barrier. Our framework is similar to the Geometric Complexity Theory (GCT) program of Mulmuley and Sohoni, except that here we emphasize constructiveness of the proofs while the GCT program emphasizes symmetry. Nevertheless, our succinct hitting sets have relevance to the GCT program as they imply lower bounds for the complexity of the defining equations of polynomials computed by small circuits. @InProceedings{STOC17p653, author = {Michael A. Forbes and Amir Shpilka and Ben Lee Volk}, title = {Succinct Hitting Sets and Barriers to Proving Algebraic Circuits Lower Bounds}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {653--664}, doi = {}, year = {2017}, } |
|
Sidford, Aaron |
STOC '17: "Subquadratic Submodular Function ..."
Subquadratic Submodular Function Minimization
Deeparnab Chakrabarty, Yin Tat Lee, Aaron Sidford, and Sam Chiu-wai Wong (Dartmouth College, USA; Microsoft Research, USA; Stanford University, USA; University of California at Berkeley, USA) Submodular function minimization (SFM) is a fundamental discrete optimization problem which generalizes many well known problems, has applications in various fields, and can be solved in polynomial time. Owing to applications in computer vision and machine learning, fast SFM algorithms are highly desirable. The current fastest algorithms [Lee, Sidford, Wong, 2015] run in O(n2 lognM·EO + n3 logO(1)nM) time and O(n3 log2n·EO + n4 logO(1)n) time respectively, where M is the largest absolute value of the function (assuming the range is integers) and EO is the time taken to evaluate the function on any set. Although the best known lower bound on the query complexity is only Ω(n) [Harvey, 2008], all previous algorithms take at least quadratic time. The main contributions of this paper are subquadratic SFM algorithms. For integer-valued submodular functions, we give an SFM algorithm which runs in O(nM3 logn·EO) time, giving the first nearly linear time algorithm in any known regime. For real-valued submodular functions with range in [−1,1], we give an algorithm which in Õ(n5/3·EO/ε2) time returns an ε-additive approximate solution. At the heart of it, our algorithms are projected stochastic subgradient descent methods on the Lovasz extension of submodular functions, where we crucially exploit submodularity and data structures to obtain fast, i.e. sublinear time, subgradient updates. The latter is crucial for beating the n2 bound – we show that algorithms which access only subgradients of the Lovasz extension, and these include the empirically fast Fujishige-Wolfe heuristic [Fujishige, 1980; Wolfe, 1976], cannot beat this bound. @InProceedings{STOC17p1220, author = {Deeparnab Chakrabarty and Yin Tat Lee and Aaron Sidford and Sam Chiu-wai Wong}, title = {Subquadratic Submodular Function Minimization}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1220--1231}, doi = {}, year = {2017}, } STOC '17: "Almost-Linear-Time Algorithms ..." Almost-Linear-Time Algorithms for Markov Chains and New Spectral Primitives for Directed Graphs Michael B. Cohen, Jonathan Kelner, John Peebles, Richard Peng, Anup B. Rao, Aaron Sidford, and Adrian Vladu (Massachusetts Institute of Technology, USA; Georgia Institute of Technology, USA; Stanford University, USA) In this paper, we begin to address the longstanding algorithmic gap between general and reversible Markov chains. We develop directed analogues of several spectral graph-theoretic tools that had previously been available only in the undirected setting, and for which it was not clear that directed versions even existed. In particular, we provide a notion of approximation for directed graphs, prove sparsifiers under this notion always exist, and show how to construct them in almost linear time. Using this notion of approximation, we design the first almost-linear-time directed Laplacian system solver, and, by leveraging the recent framework of [Cohen-Kelner-Peebles-Peng-Sidford-Vladu, FOCS’16], we also obtain almost-linear-time algorithms for computing the stationary distribution of a Markov chain, computing expected commute times in a directed graph, and more.
For each problem, our algorithms improve the previous best running times of O((nm3/4 + n2/3 m) logO(1) (n κ є−1)) to O((m + n·2^O(√(logn·loglogn))) logO(1) (n κ є−1)) where n is the number of vertices in the graph, m is the number of edges, κ is a natural condition number associated with the problem, and є is the desired accuracy. We hope these results open the door for further studies into directed spectral graph theory, and that they will serve as a stepping stone for designing a new generation of fast algorithms for directed graphs. @InProceedings{STOC17p410, author = {Michael B. Cohen and Jonathan Kelner and John Peebles and Richard Peng and Anup B. Rao and Aaron Sidford and Adrian Vladu}, title = {Almost-Linear-Time Algorithms for Markov Chains and New Spectral Primitives for Directed Graphs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {410--419}, doi = {}, year = {2017}, } |
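The building block behind the subquadratic SFM result above (a subgradient of the Lovasz extension, computable by Edmonds' greedy rule on a decreasing order of the coordinates) is simple enough to sketch. The projected subgradient loop below is our own minimal rendition, not the paper's algorithm, which additionally uses stochastic sampling and data structures to make each subgradient update sublinear-time.

import numpy as np

def lovasz_subgradient(f, x):
    """Edmonds' greedy rule: a subgradient of the Lovasz extension of f at x."""
    order = np.argsort(-x)                 # coordinates in decreasing order
    g = np.zeros(len(x))
    prefix, prev = [], f(frozenset())
    for i in order:
        prefix.append(int(i))
        cur = f(frozenset(prefix))
        g[i] = cur - prev                  # marginal value along the greedy chain
        prev = cur
    return g

def minimize_submodular(f, n, steps=2000, lr=0.05, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.random(n)
    best_set, best_val = frozenset(), f(frozenset())
    for _ in range(steps):
        x = np.clip(x - lr * lovasz_subgradient(f, x), 0.0, 1.0)  # projected step
        S = frozenset(i for i in range(n) if x[i] > 0.5)          # simple rounding
        if f(S) < best_val:
            best_set, best_val = S, f(S)
    return best_set, best_val

# Example: f(S) = cut(S) - |S| is submodular (a cut plus a modular term).
edges = [(0, 1), (1, 2), (2, 3), (0, 3), (1, 3)]
f = lambda S: sum((u in S) != (v in S) for u, v in edges) - len(S)
print(minimize_submodular(f, 4))           # expect ({0, 1, 2, 3}, -4)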
|
Singer, Yaron |
STOC '17: "The Limitations of Optimization ..."
The Limitations of Optimization from Samples
Eric Balkanski, Aviad Rubinstein, and Yaron Singer (Harvard University, USA; University of California at Berkeley, USA) In this paper we consider the following question: can we optimize objective functions from the training data we use to learn them? We formalize this question through a novel framework we call optimization from samples (OPS). In OPS, we are given sampled values of a function drawn from some distribution and the objective is to optimize the function under some constraint. While there are interesting classes of functions that can be optimized from samples, our main result is an impossibility. We show that there are classes of functions which are statistically learnable and optimizable, but for which no reasonable approximation for optimization from samples is achievable. In particular, our main result shows that there is no constant factor approximation for maximizing coverage functions under a cardinality constraint using polynomially-many samples drawn from any distribution. We also show tight approximation guarantees for maximization under a cardinality constraint of several interesting classes of functions including unit-demand, additive, and general monotone submodular functions, as well as a constant factor approximation for monotone submodular functions with bounded curvature. @InProceedings{STOC17p1016, author = {Eric Balkanski and Aviad Rubinstein and Yaron Singer}, title = {The Limitations of Optimization from Samples}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1016--1027}, doi = {}, year = {2017}, } |
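A toy rendition (ours) of the OPS setting, for intuition only: the algorithm sees nothing but (set, value) samples drawn from a distribution, scores elements by the average value of the sampled sets containing them, and returns the top k. The paper's main result is that for coverage functions no algorithm of any kind, however clever, achieves a constant-factor guarantee from polynomially many samples.

import random

def ops_top_k(samples, universe, k):
    # Credit each element with the average value of the sampled sets it appears in.
    score = {e: 0.0 for e in universe}
    seen = {e: 0 for e in universe}
    for S, val in samples:
        for e in S:
            score[e] += val / len(S)
            seen[e] += 1
    return sorted(universe, key=lambda e: -(score[e] / max(seen[e], 1)))[:k]

# Hidden coverage function, visible to the algorithm only through samples.
rng = random.Random(7)
cover = {i: frozenset(rng.sample(range(50), 8)) for i in range(20)}
f = lambda S: len(frozenset().union(*[cover[i] for i in S]))

samples = []
for _ in range(300):
    S = frozenset(rng.sample(range(20), 5))   # the sampling distribution D
    samples.append((S, f(S)))

chosen = ops_top_k(samples, list(range(20)), k=4)
print(chosen, f(frozenset(chosen)))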
|
Sivan, Balasubramanian |
STOC '17: "Stability of Service under ..."
Stability of Service under Time-of-Use Pricing
Shuchi Chawla, Nikhil R. Devanur, Alexander E. Holroyd, Anna R. Karlin, James B. Martin, and Balasubramanian Sivan (University of Wisconsin-Madison, USA; Microsoft Research, USA; University of Washington, USA; University of Oxford, UK; Google Research, USA) We consider time-of-use pricing as a technique for matching supply and demand of temporal resources with the goal of maximizing social welfare. Relevant examples include energy, computing resources on a cloud computing platform, and charging stations for electric vehicles, among many others. A client/job in this setting has a window of time during which he needs service, and a particular value for obtaining it. We assume a stochastic model for demand, where each job materializes with some probability via an independent Bernoulli trial. Given a per-time-unit pricing of resources, any realized job will first try to get served by the cheapest available resource in its window and, failing that, will try to find service at the next cheapest available resource, and so on. Thus, the natural stochastic fluctuations in demand have the potential to lead to cascading overload events. Our main result shows that setting prices so as to optimally handle the expected demand works well: with high probability, when the actual demand is instantiated, the system is stable and the expected value of the jobs served is very close to that of the optimal offline algorithm. @InProceedings{STOC17p184, author = {Shuchi Chawla and Nikhil R. Devanur and Alexander E. Holroyd and Anna R. Karlin and James B. Martin and Balasubramanian Sivan}, title = {Stability of Service under Time-of-Use Pricing}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {184--197}, doi = {}, year = {2017}, } |
|
Song, Zhao |
STOC '17: "Low Rank Approximation with ..."
Low Rank Approximation with Entrywise ℓ₁-Norm Error
Zhao Song, David P. Woodruff, and Peilin Zhong (University of Texas at Austin, USA; IBM Research, USA; Columbia University, USA) We study the ℓ1-low rank approximation problem, where for a given n × d matrix A and approximation factor α ≥ 1, the goal is to output a rank-k matrix Â for which ||A−Â||1 ≤ α · minrank-k matrices A′ ||A−A′||1, where for an n × d matrix C, we let ||C||1 = ∑i=1n ∑j=1d |Ci,j|. This error measure is known to be more robust than the Frobenius norm in the presence of outliers and is indicated in models where Gaussian assumptions on the noise may not apply. The problem was shown to be NP-hard by Gillis and Vavasis and a number of heuristics have been proposed. It was asked in multiple places if there are any approximation algorithms. We give the first provable approximation algorithms for ℓ1-low rank approximation, showing that it is possible to achieve approximation factor α = (logd) · poly(k) in nnz(A) + (n+d) poly(k) time, where nnz(A) denotes the number of non-zero entries of A. If k is constant, we further improve the approximation ratio to O(1) with a poly(nd)-time algorithm. Under the Exponential Time Hypothesis, we show there is no poly(nd)-time algorithm achieving a (1+1/log1+γ(nd))-approximation, for γ > 0 an arbitrarily small constant, even when k = 1. We give a number of additional results for ℓ1-low rank approximation: nearly tight upper and lower bounds for column subset selection, CUR decompositions, extensions to low rank approximation with respect to ℓp-norms for 1 ≤ p < 2 and earthmover distance, low-communication distributed protocols and low-memory streaming algorithms, algorithms with limited randomness, and bicriteria algorithms. We also give a preliminary empirical evaluation. @InProceedings{STOC17p688, author = {Zhao Song and David P. Woodruff and Peilin Zhong}, title = {Low Rank Approximation with Entrywise ℓ₁-Norm Error}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {688--701}, doi = {}, year = {2017}, } |
|
Steinhardt, Jacob |
STOC '17: "Learning from Untrusted Data ..."
Learning from Untrusted Data
Moses Charikar, Jacob Steinhardt, and Gregory Valiant (Stanford University, USA) The vast majority of theoretical results in machine learning and statistics assume that the training data is a reliable reflection of the phenomena to be learned. Similarly, most learning techniques used in practice are brittle to the presence of large amounts of biased or malicious data. Motivated by this, we consider two frameworks for studying estimation, learning, and optimization in the presence of significant fractions of arbitrary data. The first framework, list-decodable learning, asks whether it is possible to return a list of answers such that at least one is accurate. For example, given a dataset of n points for which an unknown subset of α n points are drawn from a distribution of interest, and no assumptions are made about the remaining (1−α)n points, is it possible to return a list of poly(1/α) answers? The second framework, which we term the semi-verified model, asks whether a small dataset of trusted data (drawn from the distribution in question) can be used to extract accurate information from a much larger but untrusted dataset (of which only an α-fraction is drawn from the distribution). We show strong positive results in both settings, and provide an algorithm for robust learning in a very general stochastic optimization setting. This result has immediate implications for robustly estimating the mean of distributions with bounded second moments, robustly learning mixtures of such distributions, and robustly finding planted partitions in random graphs in which significant portions of the graph have been perturbed by an adversary. @InProceedings{STOC17p47, author = {Moses Charikar and Jacob Steinhardt and Gregory Valiant}, title = {Learning from Untrusted Data}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {47--60}, doi = {}, year = {2017}, } |
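List-decodable learning is easy to illustrate in one dimension (a toy of ours, far simpler than the paper's general setting): when only an α-fraction of the points is genuine, no single estimate can be right, but every sufficiently heavy interval yields a candidate, giving a short list one of whose entries is close to the true mean.

import random

def list_decode_means(xs, alpha, width=2.0):
    n = len(xs)
    counts = {}
    for x in xs:
        b = round(x / width)                  # bucket the line into intervals
        counts[b] = counts.get(b, 0) + 1
    # Every bucket holding at least half the genuine mass is a candidate,
    # so the returned list has at most 2/alpha entries.
    return sorted(b * width for b, c in counts.items() if c >= alpha * n / 2)

rng = random.Random(0)
good = [rng.gauss(42.0, 1.0) for _ in range(300)]        # genuine: alpha = 0.3
bad = [rng.uniform(-100.0, 100.0) for _ in range(700)]   # arbitrary outliers
print(list_decode_means(good + bad, alpha=0.3))          # short list containing ~42.0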
|
Stephan, Frank |
STOC '17: "Deciding Parity Games in Quasipolynomial ..."
Deciding Parity Games in Quasipolynomial Time
Cristian S. Calude, Sanjay Jain, Bakhadyr Khoussainov, Wei Li, and Frank Stephan (University of Auckland, New Zealand; National University of Singapore, Singapore) It is shown that the parity game can be solved in quasipolynomial time. The parameterised parity game – with n nodes and m distinct values (aka colours or priorities) – is proven to be in the class of fixed parameter tractable (FPT) problems when parameterised over m. Both results improve known bounds, from runtime n^O(√n) to O(n^(log(m)+6)) and from an XP-algorithm with runtime O(n^Θ(m)) for fixed parameter m to an FPT-algorithm with runtime O(n^5)+g(m), for some function g depending on m only. As an application it is proven that coloured Muller games with n nodes and m colours can be decided in time O((m^m · n)^5); it is also shown that this bound cannot be improved to O((2^m · n)^c), for any c, unless FPT = W[1]. @InProceedings{STOC17p252, author = {Cristian S. Calude and Sanjay Jain and Bakhadyr Khoussainov and Wei Li and Frank Stephan}, title = {Deciding Parity Games in Quasipolynomial Time}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {252--263}, doi = {}, year = {2017}, } |
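For readers checking why O(n^(log(m)+6)) is quasipolynomial, the arithmetic (ours, using that the number m of distinct priorities assigned to the n nodes is at most n) is one line:

n^{\log m + 6} \;=\; 2^{(\log m + 6)\,\log n} \;\le\; 2^{O(\log^2 n)},

and when m = O(\log n) the exponent drops to O(\log n \cdot \log\log n), so the runtime is only barely super-polynomial for logarithmically many priorities; combined with the FPT bound O(n^5)+g(m), the dependence on m is then confined to an additive term.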
|
Stephens-Davidowitz, Noah |
STOC '17: "Pseudorandomness of Ring-LWE ..."
Pseudorandomness of Ring-LWE for Any Ring and Modulus
Chris Peikert, Oded Regev, and Noah Stephens-Davidowitz (University of Michigan, USA; New York University, USA) We give a polynomial-time quantum reduction from worst-case (ideal) lattice problems directly to decision (Ring-)LWE. This extends to decision all the worst-case hardness results that were previously known for the search version, for the same or even better parameters and with no algebraic restrictions on the modulus or number field. Indeed, our reduction is the first that works for decision Ring-LWE with any number field and any modulus. @InProceedings{STOC17p461, author = {Chris Peikert and Oded Regev and Noah Stephens-Davidowitz}, title = {Pseudorandomness of Ring-LWE for Any Ring and Modulus}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {461--473}, doi = {}, year = {2017}, } STOC '17: "A Reverse Minkowski Theorem ..." A Reverse Minkowski Theorem Oded Regev and Noah Stephens-Davidowitz (New York University, USA) We prove a conjecture due to Dadush, showing that if L⊂ ℝn is a lattice such that det(L′) ≥ 1 for all sublattices L′ ⊆ L, then ∑y∈L e^(−t^2·||y||^2) ≤ 3/2, where t := 10(logn + 2). This implies bounds on the number of lattice points in Euclidean balls for various different radii, which can be seen as a reverse form of Minkowski’s First Theorem. @InProceedings{STOC17p941, author = {Oded Regev and Noah Stephens-Davidowitz}, title = {A Reverse Minkowski Theorem}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {941--953}, doi = {}, year = {2017}, } |
|
Steurer, David |
STOC '17: "Quantum Entanglement, Sum ..."
Quantum Entanglement, Sum of Squares, and the Log Rank Conjecture
Boaz Barak, Pravesh K. Kothari, and David Steurer (Harvard University, USA; Princeton University, USA; IAS, USA; Cornell University, USA) For every constant є>0, we give an exp(Õ(√n))-time algorithm for the 1 vs 1−є Best Separable State (BSS) problem of distinguishing, given an n^2×n^2 matrix corresponding to a quantum measurement, between the case that there is a separable (i.e., non-entangled) state ρ that accepts with probability 1, and the case that every separable state is accepted with probability at most 1−є. Equivalently, our algorithm takes the description of a subspace of F^(n^2) (where F can be either the real or complex field) and distinguishes between the case that the subspace contains a rank-one matrix, and the case that every rank-one matrix is at least є-far (in ℓ2 distance) from the subspace. To the best of our knowledge, this is the first improvement over the brute-force exp(n)-time algorithm for this problem. Our algorithm is based on the sum-of-squares hierarchy and its analysis is inspired by Lovett’s proof (STOC ’14, JACM ’16) that the communication complexity of every rank-n Boolean matrix is bounded by Õ(√n). @InProceedings{STOC17p975, author = {Boaz Barak and Pravesh K. Kothari and David Steurer}, title = {Quantum Entanglement, Sum of Squares, and the Log Rank Conjecture}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {975--988}, doi = {}, year = {2017}, } |
|
Stöckel, Morten |
STOC '17: "Finding Even Cycles Faster ..."
Finding Even Cycles Faster via Capped k-Walks
Søren Dahlgaard, Mathias Bæk Tejs Knudsen, and Morten Stöckel (University of Copenhagen, Denmark) Finding cycles in graphs is a fundamental problem in algorithmic graph theory. In this paper, we consider the problem of finding and reporting a cycle of length 2k in an undirected graph G with n nodes and m edges for constant k≥ 2. A classic result by Bondy and Simonovits [J. Combinatorial Theory, 1974] implies that if m ≥ 100k·n^(1+1/k), then G contains a 2k-cycle, further implying that one needs to consider only graphs with m = O(n^(1+1/k)). Previously the best known algorithms were an O(n2) algorithm due to Yuster and Zwick [J. Discrete Math 1997] as well as an O(m^(2−(1+⌈k/2⌉^(−1))/(k+1))) algorithm by Alon et al. [Algorithmica 1997]. We present an algorithm that uses O(m^(2k/(k+1))) time and finds a 2k-cycle if one exists. This bound is O(n2) exactly when m = Θ(n^(1+1/k)). When finding 4-cycles our new bound coincides with Alon et al., while for every k>2 our new bound yields a polynomial improvement in m. Yuster and Zwick noted that it is “plausible to conjecture that O(n2) is the best possible bound in terms of n”. We show “conditional optimality”: if this hypothesis holds then our O(m^(2k/(k+1))) algorithm is tight as well. Furthermore, a folklore reduction implies that no combinatorial algorithm can determine if a graph contains a 6-cycle in time O(m^(3/2−ε)) for any ε>0 unless boolean matrix multiplication can be solved combinatorially in time O(n^(3−ε′)) for some ε′ > 0, which is widely believed to be false. Coupled with our main result, this gives tight bounds for finding 6-cycles combinatorially and also separates the complexity of finding 4- and 6-cycles giving evidence that the exponent of m in the running time should indeed increase with k. The key ingredient in our algorithm is a new notion of capped k-walks, which are walks of length k that visit only nodes according to a fixed ordering. Our main technical contribution is an involved analysis proving several properties of such walks which may be of independent interest. @InProceedings{STOC17p112, author = {Søren Dahlgaard and Mathias Bæk Tejs Knudsen and Morten Stöckel}, title = {Finding Even Cycles Faster via Capped k-Walks}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {112--120}, doi = {}, year = {2017}, } |
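The claimed crossover with the O(n2) bound is a one-line computation (ours):

m = \Theta\!\left(n^{1+1/k}\right) = \Theta\!\left(n^{(k+1)/k}\right) \;\Longrightarrow\; m^{2k/(k+1)} = n^{\frac{k+1}{k}\cdot\frac{2k}{k+1}} = n^{2},

so the O(m^(2k/(k+1))) bound equals O(n2) exactly at the Bondy-Simonovits density and improves on it for all sparser graphs.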
|
Straszak, Damian |
STOC '17: "Real Stable Polynomials and ..."
Real Stable Polynomials and Matroids: Optimization and Counting
Damian Straszak and Nisheeth K. Vishnoi (EPFL, Switzerland) Several fundamental optimization and counting problems arising in computer science, mathematics and physics can be reduced to one of the following computational tasks involving polynomials and set systems: given oracle access to an m-variate real polynomial g and to a family of (multi-)subsets B of [m], (1) compute the sum of coefficients of monomials in g corresponding to all the sets that appear in B, or (2) find S ∈ B such that the monomial in g corresponding to S has the largest coefficient in g. Special cases of these problems, such as computing permanents and mixed discriminants, sampling from determinantal point processes, and maximizing sub-determinants with combinatorial constraints have been topics of much recent interest in theoretical computer science. In this paper we present a general convex programming framework geared to solve both of these problems. Subsequently, we show that roughly, when g is a real stable polynomial with non-negative coefficients and B is a matroid, the integrality gap of our convex relaxation is finite and depends only on m (and not on the coefficients of g). Prior to this work, such results were known only in important but sporadic cases that relied heavily on the structure of either g or B; it was not even a priori clear if one could formulate a convex relaxation that has a finite integrality gap beyond these special cases. Two notable examples are a result by Gurvits for real stable polynomials g when B contains one element, and a result by Nikolov and Singh for a family of multi-linear real stable polynomials when B is the partition matroid. This work, which encapsulates almost all interesting cases of g and B, benefits from both – it is inspired by the latter in coming up with the right convex programming relaxation and the former in deriving the integrality gap. However, proving our results requires extensions of both; in that process we come up with new notions and connections between real stable polynomials and matroids which might be of independent interest. @InProceedings{STOC17p370, author = {Damian Straszak and Nisheeth K. Vishnoi}, title = {Real Stable Polynomials and Matroids: Optimization and Counting}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {370--383}, doi = {}, year = {2017}, } |
|
Sun, He |
STOC '17: "An SDP-Based Algorithm for ..."
An SDP-Based Algorithm for Linear-Sized Spectral Sparsification
Yin Tat Lee and He Sun (Microsoft Research, USA; University of Bristol, UK) For any undirected and weighted graph G=(V,E,w) with n vertices and m edges, we call a sparse subgraph H of G, with proper reweighting of the edges, a (1+ε)-spectral sparsifier if (1−ε)x^T LG x ≤ x^T LH x ≤ (1+ε)x^T LG x holds for any x∈ℝn, where LG and LH are the respective Laplacian matrices of G and H. Noticing that Ω(m) time is needed for any algorithm to construct a spectral sparsifier and that any spectral sparsifier of G requires Ω(n) edges, a natural question is to investigate, for any constant ε, if a (1+ε)-spectral sparsifier of G with O(n) edges can be constructed in Õ(m) time, where the Õ notation suppresses polylogarithmic factors. All previous constructions of spectral sparsifiers require either a super-linear number of edges or m^(1+Ω(1)) time. In this work we answer this question affirmatively by presenting an algorithm that, for any undirected graph G and ε>0, outputs a (1+ε)-spectral sparsifier of G with O(n/ε2) edges in Õ(m/ε^O(1)) time. Our algorithm is based on three novel techniques: (1) a new potential function which is much easier to compute yet has similar guarantees as the potential functions used in previous references; (2) an efficient reduction from a two-sided spectral sparsifier to a one-sided spectral sparsifier; (3) constructing a one-sided spectral sparsifier by a semi-definite program. @InProceedings{STOC17p678, author = {Yin Tat Lee and He Sun}, title = {An SDP-Based Algorithm for Linear-Sized Spectral Sparsification}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {678--687}, doi = {}, year = {2017}, } |
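The defining inequality is straightforward to test for concrete graphs. The helper below is our sketch, not the paper's algorithm: it computes the best ε for a given pair (G, H) on the same vertex set by whitening with LG and reading off the extreme eigenvalues.

import numpy as np

def laplacian(n, edges):
    L = np.zeros((n, n))
    for u, v, w in edges:
        L[u, u] += w; L[v, v] += w; L[u, v] -= w; L[v, u] -= w
    return L

def approximation_quality(LG, LH):
    """Smallest eps with (1-eps) x'LGx <= x'LHx <= (1+eps) x'LGx for all x."""
    # Work in the eigenbasis of LG, dropping its nullspace (the constant vector).
    w, V = np.linalg.eigh(LG)
    keep = w > 1e-9
    B = V[:, keep] / np.sqrt(w[keep])   # whitening map for LG
    evals = np.linalg.eigvalsh(B.T @ LH @ B)
    return max(1 - evals.min(), evals.max() - 1)

n = 4
G = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 0, 1.0), (0, 2, 1.0)]
H = [(0, 1, 1.2), (1, 2, 1.2), (2, 3, 1.2), (3, 0, 1.2)]  # reweighted subgraph
print(approximation_quality(laplacian(n, G), laplacian(n, H)))  # the best eps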
|
Sun, Xiaorui |
STOC '17: "Efficient Massively Parallel ..."
Efficient Massively Parallel Methods for Dynamic Programming
Sungjin Im, Benjamin Moseley, and Xiaorui Sun (University of California at Merced, USA; Washington University at St. Louis, USA; Simons Institute for the Theory of Computing Berkeley, USA) Modern science and engineering are driven by massive data sets, and their advance relies heavily on massively parallel computing platforms such as Spark, MapReduce, and Hadoop. Theoretical models have been proposed to understand the power and limitations of such platforms. Recent study of these models has led to the discovery of new algorithms that are fast and efficient in both theory and practice, thereby beginning to unlock their underlying power. Given recent promising results, the area has turned its focus to discovering widely applicable algorithmic techniques for solving problems efficiently. In this paper we make progress towards this goal by giving a principled framework for simulating sequential dynamic programs in the distributed setting. In particular, we identify two key properties, monotonicity and decomposability, which allow us to derive efficient distributed algorithms for problems possessing the properties. We showcase our framework by considering several core dynamic programming applications, Longest Increasing Subsequence, Optimal Binary Search Tree, and Weighted Interval Selection. For these problems, we derive algorithms yielding solutions that are arbitrarily close to the optimum, using O(1) rounds and Õ(n/m) memory on each machine where n is the input size and m is the number of machines available. @InProceedings{STOC17p798, author = {Sungjin Im and Benjamin Moseley and Xiaorui Sun}, title = {Efficient Massively Parallel Methods for Dynamic Programming}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {798--811}, doi = {}, year = {2017}, } |
|
Syrgkanis, Vasilis |
STOC '17: "Fast Convergence of Learning ..."
Fast Convergence of Learning in Games (Invited Talk)
Vasilis Syrgkanis (Microsoft Research, USA) A plethora of recent work has analyzed properties of outcomes in games when each player employs a no-regret learning algorithm. Many algorithms achieve regret against the best fixed action in hindsight that decays at a rate of O(1/√T), when the game is played for T iterations. The latter rate is optimal in adversarial settings. However, in a game a player’s opponents are minimizing their own regret, rather than maximizing the player’s regret. (Daskalakis et al. 2014) and (Rakhlin and Sridharan 2013) showed that in two player zero-sum games O(1/T) rates are achievable. In (Syrgkanis et al. 2015), we show that O(1/T3/4) rates are achievable in general multi-player games and also analyze convergence of the dynamics to approximately optimal social welfare, where we show a convergence rate of O(1/T). The latter result was subsequently generalized to a broader class of learning algorithms by (Foster et al. 2016). This is based on joint work with Alekh Agarwal, Haipeng Luo and Robert E. Schapire. @InProceedings{STOC17p5, author = {Vasilis Syrgkanis}, title = {Fast Convergence of Learning in Games (Invited Talk)}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {5--5}, doi = {}, year = {2017}, } |
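As a baseline for the rates discussed in the talk, a few lines of Hedge (our illustration, not part of the talk) already exhibit the O(1/√T) decay of average regret against the best fixed action:

import math, random

def hedge_average_regret(T, k=5, seed=0):
    rng = random.Random(seed)
    eta = math.sqrt(math.log(k) / T)          # standard Hedge step size
    w = [1.0] * k
    alg_loss, cum = 0.0, [0.0] * k
    for _ in range(T):
        Z = sum(w)
        losses = [rng.random() for _ in range(k)]          # per-round losses
        alg_loss += sum(wi / Z * li for wi, li in zip(w, losses))
        for i, li in enumerate(losses):
            cum[i] += li
            w[i] *= math.exp(-eta * li)        # multiplicative weight update
    return (alg_loss - min(cum)) / T           # average regret vs best action

for T in (100, 1000, 10000):
    print(T, round(hedge_average_regret(T), 4))   # decays roughly like 1/sqrt(T)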
|
Tal, Avishay |
STOC '17: "Formula Lower Bounds via the ..."
Formula Lower Bounds via the Quantum Method
Avishay Tal (IAS, USA) A de Morgan formula over Boolean variables x1, …, xn is a binary tree whose internal nodes are marked with AND or OR gates and whose leaves are marked with variables or their negation. We define the size of the formula as the number of leaves in it. Proving that some explicit function (in P or NP) requires a large formula is a central open question in computational complexity. While we believe that some explicit functions require exponential formula size, currently the best lower bound for an explicit function is the Ω(n3) lower bound for Andreev’s function. A long line of work in quantum query complexity, culminating in the work of Reichardt [SODA, 2011], proved that for any formula of size s, there exists a polynomial of degree at most O(√s) that approximates the formula up to a small point-wise error. This is a classical theorem, arguing about polynomials and formulae; however, the only known proof of it involves quantum algorithms. We apply Reichardt's result to obtain the following: (1) We show how to trade average-case hardness in exchange for size. More precisely, we show that if a function f cannot be computed correctly on more than 1/2 + 2−k of the inputs by any formula of size at most s, then computing f exactly requires formula size at least Ω(k) · s. As an application, we improve the state-of-the-art formula size lower bounds for explicit functions by a factor of Ω(logn). (2) We prove that the bipartite formula size of the Inner-Product function is Ω(n2). (A bipartite formula on Boolean variables x1, …, xn and y1, …, yn is a binary tree whose internal nodes are marked with AND or OR gates and whose leaves can compute any function of either the x or y variables.) We show that any bipartite formula for the Inner-Product modulo 2 function, namely IP(x,y) = ∑i=1n xi yi (mod 2), must be of size Ω(n2), which is tight up to logarithmic factors. To the best of our knowledge, this is the first super-linear lower bound on the bipartite formula complexity of any explicit function. @InProceedings{STOC17p1256, author = {Avishay Tal}, title = {Formula Lower Bounds via the Quantum Method}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1256--1268}, doi = {}, year = {2017}, } STOC '17: "Time-Space Hardness of Learning ..." Time-Space Hardness of Learning Sparse Parities Gillat Kol, Ran Raz, and Avishay Tal (Princeton University, USA; IAS, USA) We define a concept class F to be time-space hard (or memory-samples hard) if any learning algorithm for F requires either a memory of size super-linear in n or a number of samples super-polynomial in n, where n is the length of one sample. A recent work shows that the class of all parity functions is time-space hard [Raz, FOCS’16]. Building on [Raz, FOCS’16], we show that the class of all sparse parities of Hamming weight ℓ is time-space hard, as long as ℓ ≥ ω(logn / loglogn). Consequently, linear-size DNF Formulas, linear-size Decision Trees and logarithmic-size Juntas are all time-space hard. Our result is more general and provides time-space lower bounds for learning any concept class of parity functions. We give applications of our results in the field of bounded-storage cryptography. For example, for every ω(logn) ≤ k ≤ n, we obtain an encryption scheme that requires a private key of length k, and time complexity of n per encryption/decryption of each bit, and is provably and unconditionally secure as long as the attacker uses at most o(nk) memory bits and the scheme is used at most 2o(k) times.
Previously, this was known only for k=n [Raz, FOCS’16]. @InProceedings{STOC17p1067, author = {Gillat Kol and Ran Raz and Avishay Tal}, title = {Time-Space Hardness of Learning Sparse Parities}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1067--1080}, doi = {}, year = {2017}, } |
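To make the learning task concrete, here is a minimal Python sketch (ours, not from the paper) of the distribution a memory-bounded learner faces: a hidden parity of Hamming weight ℓ labels uniformly random examples. The helper name sparse_parity_samples is hypothetical.

import random

def sparse_parity_samples(n, ell, num_samples, seed=0):
    # A secret set S of ell indices defines f(x) = XOR of x_i for i in S;
    # the learner only sees labeled samples (x, f(x)).
    rng = random.Random(seed)
    secret = rng.sample(range(n), ell)
    samples = []
    for _ in range(num_samples):
        x = [rng.randint(0, 1) for _ in range(n)]
        samples.append((x, sum(x[i] for i in secret) % 2))
    return secret, samples

secret, data = sparse_parity_samples(n=20, ell=3, num_samples=5)
print(secret, data[0])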
|
Talgam-Cohen, Inbal |
STOC '17: "Why Prices Need Algorithms ..."
Why Prices Need Algorithms (Invited Talk)
Tim Roughgarden and Inbal Talgam-Cohen (Stanford University, USA; Hebrew University of Jerusalem, Israel) Computational complexity has already had plenty to say about the computation of economic equilibria. However, understanding when equilibria are guaranteed to exist is a central theme in economic theory, seemingly unrelated to computation. In this talk we survey our main results presented at EC’15, which show that the existence of equilibria in markets is inextricably connected to the computational complexity of related optimization problems, such as revenue or welfare maximization. We demonstrate how this relationship implies, under suitable complexity assumptions, a host of impossibility results. We also suggest a complexity-theoretic explanation for the lack of useful extensions of the Walrasian equilibrium concept: such extensions seem to require the invention of novel polynomial-time algorithms for welfare maximization. @InProceedings{STOC17p2, author = {Tim Roughgarden and Inbal Talgam-Cohen}, title = {Why Prices Need Algorithms (Invited Talk)}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {2--2}, doi = {}, year = {2017}, } STOC '17: "Approximate Modularity Revisited ..." Approximate Modularity Revisited Uriel Feige, Michal Feldman, and Inbal Talgam-Cohen (Weizmann Institute of Science, Israel; Microsoft Research, Israel; Tel Aviv University, Israel; Hebrew University of Jerusalem, Israel) Set functions with convenient properties (such as submodularity) appear in application areas of current interest, such as algorithmic game theory, and allow for improved optimization algorithms. It is natural to ask (e.g., in the context of data-driven optimization) how robust such properties are, and whether small deviations from them can be tolerated. We consider two such questions in the important special case of linear set functions. One question that we address is whether any set function that approximately satisfies the modularity equation (linear functions satisfy the modularity equation exactly) is close to a linear function. The answer to this is positive (in a precise formal sense) as shown by Kalton and Roberts [1983] (and further improved by Bondarenko, Prymak, and Radchenko [2013]). We revisit their proof idea that is based on expander graphs, and provide significantly stronger upper bounds by combining it with new techniques. Furthermore, we provide improved lower bounds for this problem. Another question that we address is that of how to learn a linear function h that is close to an approximately linear function f, while querying the value of f on only a small number of sets. We present a deterministic algorithm that makes only linearly many (in the number of items) nonadaptive queries, thereby improving over a previous algorithm of Chierichetti, Das, Dasgupta, and Kumar [2015] that is randomized and makes more than a quadratic number of queries. Our learning algorithm is based on a Hadamard transform. @InProceedings{STOC17p1028, author = {Uriel Feige and Michal Feldman and Inbal Talgam-Cohen}, title = {Approximate Modularity Revisited}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1028--1041}, doi = {}, year = {2017}, } |
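For contrast with the Hadamard-transform algorithm above, a naive nonadaptive strategy is easy to state: if f were exactly modular, n+1 queries would recover it. The Python sketch below (ours; learn_modular is a hypothetical name) implements that baseline, which degrades under approximate modularity exactly where the paper's algorithm keeps its guarantees.

import random

def learn_modular(f, n):
    # If f were exactly modular, f(S) = f({}) + sum_{i in S} (f({i}) - f({})),
    # so n+1 nonadaptive queries suffice; for approximately modular f this
    # h is only a crude estimate.
    base = f(frozenset())
    w = [f(frozenset([i])) - base for i in range(n)]
    return lambda S: base + sum(w[i] for i in S)

# toy approximately-modular function: modular plus small noise
rng = random.Random(0)
true_w = [rng.uniform(-1, 1) for _ in range(6)]
f = lambda S: sum(true_w[i] for i in S) + rng.uniform(-0.01, 0.01)
h = learn_modular(f, 6)
print(round(h({0, 2}) - (true_w[0] + true_w[2]), 2))  # small residual error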
|
Ta-Shma, Amnon |
STOC '17: "An Efficient Reduction from ..."
An Efficient Reduction from Two-Source to Non-malleable Extractors: Achieving Near-Logarithmic Min-entropy
Avraham Ben-Aroya, Dean Doron, and Amnon Ta-Shma (Tel Aviv University, Israel) The breakthrough result of Chattopadhyay and Zuckerman (2016) gives a reduction from the construction of explicit two-source extractors to the construction of explicit non-malleable extractors. However, even assuming the existence of optimal explicit non-malleable extractors only gives a two-source extractor (or a Ramsey graph) for poly(log n) entropy, rather than the optimal O(log n). In this paper we modify the construction to solve the above barrier. Using the currently best explicit non-malleable extractors we get an explicit bipartite Ramsey graph for sets of size 2^k, for k = O(log n · log log n). Any further improvement in the construction of non-malleable extractors would immediately yield a corresponding two-source extractor. Intuitively, Chattopadhyay and Zuckerman use an extractor as a sampler, and we observe that one could use a weaker object – a somewhere-random condenser with a small entropy gap and a very short seed. We also show how to explicitly construct this weaker object using the error reduction technique of Raz, Reingold and Vadhan (1999), and the constant-degree dispersers of Zuckerman (2006) that also work against extremely small tests. @InProceedings{STOC17p1185, author = {Avraham Ben-Aroya and Dean Doron and Amnon Ta-Shma}, title = {An Efficient Reduction from Two-Source to Non-malleable Extractors: Achieving Near-Logarithmic Min-entropy}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1185--1194}, doi = {}, year = {2017}, } STOC '17: "Explicit, Almost Optimal, ..." Explicit, Almost Optimal, Epsilon-Balanced Codes Amnon Ta-Shma (Tel Aviv University, Israel) The question of finding an epsilon-biased set with close to optimal support size, or, equivalently, finding an explicit binary code with distance (1−є)/2 and rate close to the Gilbert-Varshamov bound, attracted a lot of attention in recent decades. In this paper we solve the problem almost optimally and show an explicit є-biased set over k bits with support size O(k/є^{2+o(1)}). This improves upon all previous explicit constructions which were in the order of k^2/є^2, k/є^3 or k^{5/4}/є^{5/2}. The result is close to the Gilbert-Varshamov bound which is O(k/є^2) and the lower bound which is Ω(k/(є^2 log 1/є)). The main technical tool we use is bias amplification with the s-wide replacement product. The sum of two independent samples from an є-biased set is є^2-biased. Rozenman and Wigderson showed how to amplify the bias more economically by choosing two samples with an expander. Based on that they suggested a recursive construction that achieves sample size O(k/є^4). We show that amplification with a long random walk over the s-wide replacement product reduces the bias almost optimally. @InProceedings{STOC17p238, author = {Amnon Ta-Shma}, title = {Explicit, Almost Optimal, Epsilon-Balanced Codes}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {238--251}, doi = {}, year = {2017}, } |
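The bias-amplification fact quoted above (XORing two independent samples squares the bias) is simple enough to check numerically. A small Python sketch, ours and not from the paper, with hypothetical helper names:

from itertools import product

def bias(dist, k):
    # dist: list of (bitmask, probability); the bias is the max over
    # alpha != 0 of |sum_x p(x) * (-1)^{<alpha, x>}|.
    return max(
        abs(sum(p * (-1) ** bin(alpha & x).count("1") for x, p in dist))
        for alpha in range(1, 2 ** k)
    )

def xor_of_two(dist):
    # distribution of x ^ y for independent samples x, y from dist
    out = {}
    for (x, px), (y, py) in product(dist, dist):
        out[x ^ y] = out.get(x ^ y, 0.0) + px * py
    return list(out.items())

k = 3
support = [0b000, 0b011, 0b101, 0b110, 0b111]   # arbitrary small support
dist = [(x, 1.0 / len(support)) for x in support]
b = bias(dist, k)
print(b, bias(xor_of_two(dist), k), b * b)       # last two values agree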
|
Thierauf, Thomas |
STOC '17: "Linear Matroid Intersection ..."
Linear Matroid Intersection Is in Quasi-NC
Rohit Gurjar and Thomas Thierauf (Tel Aviv University, Israel; Aalen University, Germany) Given two matroids on the same ground set, the matroid intersection problem asks to find a common independent set of maximum size. We show that the linear matroid intersection problem is in quasi-NC^2. That is, it has uniform circuits of quasi-polynomial size n^{O(log n)}, and O(log^2 n) depth. This generalizes the similar result for the bipartite perfect matching problem. We do this by an almost complete derandomization of the Isolation lemma for matroid intersection. Our result also implies a blackbox singularity test for symbolic matrices of the form A_0 + A_1 z_1 + A_2 z_2 + ⋯ + A_m z_m, where A_0 is an arbitrary matrix and the matrices A_1, A_2, …, A_m are of rank 1 over some field. @InProceedings{STOC17p821, author = {Rohit Gurjar and Thomas Thierauf}, title = {Linear Matroid Intersection Is in Quasi-NC}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {821--830}, doi = {}, year = {2017}, } |
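The Isolation Lemma being derandomized says that independent uniform weights from {1, …, 2m} make the minimum-weight set of any fixed family unique with probability at least 1/2. A brute-force empirical check in Python (ours, not the paper's construction):

import random
from itertools import combinations

def min_weight_unique(family, m, rng):
    w = [rng.randint(1, 2 * m) for _ in range(m)]
    weights = [sum(w[i] for i in S) for S in family]
    return weights.count(min(weights)) == 1

rng = random.Random(1)
m = 8
family = [frozenset(S) for S in combinations(range(m), 3)]  # all 3-subsets
trials = 1000
hits = sum(min_weight_unique(family, m, rng) for _ in range(trials))
print(hits / trials)  # empirically well above the guaranteed 1/2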
|
Touchette, Dave |
STOC '17: "Exponential Separation of ..."
Exponential Separation of Quantum Communication and Classical Information
Anurag Anshu, Dave Touchette, Penghui Yao, and Nengkun Yu (National University of Singapore, Singapore; University of Waterloo, Canada; Perimeter Institute for Theoretical Physics, Canada; University of Maryland, USA; University of Technology Sydney, Australia) We exhibit a Boolean function for which the quantum communication complexity is exponentially larger than the classical information complexity. An exponential separation in the other direction was already known from the work of Kerenidis et al. [SICOMP 44, pp. 1550–1572], hence our work implies that these two complexity measures are incomparable. As classical information complexity is an upper bound on quantum information complexity, which in turn is equal to amortized quantum communication complexity, our work implies that a tight direct sum result for distributional quantum communication complexity cannot hold. The function we use to present such a separation is the Symmetric k-ary Pointer Jumping function introduced by Rao and Sinha [ECCC TR15-057], whose classical communication complexity is exponentially larger than its classical information complexity. In this paper, we show that the quantum communication complexity of this function is polynomially equivalent to its classical communication complexity. The high-level idea behind our proof is arguably the simplest so far for such an exponential separation between information and communication, driven by a sequence of round-elimination arguments, allowing us to simplify further the approach of Rao and Sinha. As another application of the techniques that we develop, a simple proof for an optimal trade-off between Alice’s and Bob’s communication is given, even when allowing pre-shared entanglement, while computing the related Greater-Than function on n bits: say Bob communicates at most b bits, then Alice must send n/2^{O(b)} bits to Bob. We also present a classical protocol achieving this bound. @InProceedings{STOC17p277, author = {Anurag Anshu and Dave Touchette and Penghui Yao and Nengkun Yu}, title = {Exponential Separation of Quantum Communication and Classical Information}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {277--288}, doi = {}, year = {2017}, } |
|
Umans, Chris |
STOC '17: "Targeted Pseudorandom Generators, ..."
Targeted Pseudorandom Generators, Simulation Advice Generators, and Derandomizing Logspace
William M. Hoza and Chris Umans (University of Texas at Austin, USA; California Institute of Technology, USA) Assume that for every derandomization result for logspace algorithms, there is a pseudorandom generator strong enough to nearly recover the derandomization by iterating over all seeds and taking a majority vote. We prove under a precise version of this assumption that BPL ⊆ ∩_{α>0} DSPACE(log^{1+α} n). We strengthen the theorem to an equivalence by considering two generalizations of the concept of a pseudorandom generator against logspace. A targeted pseudorandom generator against logspace takes as input a short uniform random seed and a finite automaton; it outputs a long bitstring that looks random to that particular automaton. A simulation advice generator for logspace stretches a small uniform random seed into a long advice string; the requirement is that there is some logspace algorithm that, given a finite automaton and this advice string, simulates the automaton reading a long uniform random input. We prove that ∩_{α>0} prBPSPACE(log^{1+α} n) = ∩_{α>0} prDSPACE(log^{1+α} n) if and only if for every targeted pseudorandom generator against logspace, there is a simulation advice generator for logspace with similar parameters. Finally, we observe that in a certain uniform setting (namely, if we only worry about sequences of automata that can be generated in logspace), targeted pseudorandom generators against logspace can be transformed into simulation advice generators with similar parameters. @InProceedings{STOC17p629, author = {William M. Hoza and Chris Umans}, title = {Targeted Pseudorandom Generators, Simulation Advice Generators, and Derandomizing Logspace}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {629--640}, doi = {}, year = {2017}, } |
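The premise in the first sentence fits in a few lines of code. The Python sketch below (ours; the toy algorithm A and the identity "generator" G are hypothetical stand-ins) derandomizes a one-probe randomized test by enumerating all seeds and taking a majority vote:

def derandomize_by_majority(A, x, G, seed_bits):
    # Run A on every pseudorandom string and output the majority answer.
    votes = sum(A(x, G(seed)) for seed in range(2 ** seed_bits))
    return 2 * votes > 2 ** seed_bits

def A(x, r):
    # toy randomized test: probe one random position to test "many ones?"
    return x[r % len(x)] == 1

G = lambda seed: seed   # toy generator: the seed itself is the random string
print(derandomize_by_majority(A, [0, 1, 1, 1] * 4, G, seed_bits=4))  # True
print(derandomize_by_majority(A, [0] * 16, G, seed_bits=4))          # False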
|
Valiant, Gregory |
STOC '17: "Learning from Untrusted Data ..."
Learning from Untrusted Data
Moses Charikar, Jacob Steinhardt, and Gregory Valiant (Stanford University, USA) The vast majority of theoretical results in machine learning and statistics assume that the training data is a reliable reflection of the phenomena to be learned. Similarly, most learning techniques used in practice are brittle to the presence of large amounts of biased or malicious data. Motivated by this, we consider two frameworks for studying estimation, learning, and optimization in the presence of significant fractions of arbitrary data. The first framework, list-decodable learning, asks whether it is possible to return a list of answers such that at least one is accurate. For example, given a dataset of n points for which an unknown subset of α n points are drawn from a distribution of interest, and no assumptions are made about the remaining (1−α)n points, is it possible to return a list of poly(1/α) answers? The second framework, which we term the semi-verified model, asks whether a small dataset of trusted data (drawn from the distribution in question) can be used to extract accurate information from a much larger but untrusted dataset (of which only an α-fraction is drawn from the distribution). We show strong positive results in both settings, and provide an algorithm for robust learning in a very general stochastic optimization setting. This result has immediate implications for robustly estimating the mean of distributions with bounded second moments, robustly learning mixtures of such distributions, and robustly finding planted partitions in random graphs in which significant portions of the graph have been perturbed by an adversary. @InProceedings{STOC17p47, author = {Moses Charikar and Jacob Steinhardt and Gregory Valiant}, title = {Learning from Untrusted Data}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {47--60}, doi = {}, year = {2017}, } |
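A toy numerical illustration of the semi-verified model (ours, with made-up parameters; a real list-decodable learner would produce the candidate list without knowing the clusters in advance):

import random

rng = random.Random(0)
good = [rng.gauss(5.0, 1.0) for _ in range(300)]     # the alpha-fraction
bad = [rng.gauss(-20.0, 1.0) for _ in range(700)]    # adversarial points
data = good + bad
trusted = [rng.gauss(5.0, 1.0) for _ in range(5)]    # tiny verified set

# a short candidate list; here it comes from a crude 1-D split at -10
lo = [x for x in data if x < -10]
hi = [x for x in data if x >= -10]
candidates = [sum(lo) / len(lo), sum(hi) / len(hi)]

t_mean = sum(trusted) / len(trusted)
best = min(candidates, key=lambda c: abs(c - t_mean))
print(round(best, 2))  # close to 5.0, despite 70% corruption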
|
Vasudevan, Prashant Nalini |
STOC '17: "Average-Case Fine-Grained ..."
Average-Case Fine-Grained Hardness
Marshall Ball, Alon Rosen, Manuel Sabin, and Prashant Nalini Vasudevan (Columbia University, USA; IDC Herzliya, Israel; University of California at Berkeley, USA; Massachusetts Institute of Technology, USA) We present functions that can be computed in some fixed polynomial time but are hard on average for any algorithm that runs in slightly smaller time, assuming widely-conjectured worst-case hardness for problems from the study of fine-grained complexity. Unconditional constructions of such functions are known from before (Goldmann et al., IPL ’94), but these have been canonical functions that have not found further use, while our functions are closely related to well-studied problems and have considerable algebraic structure. Based on the average-case hardness and structural properties of our functions, we outline the construction of a Proof of Work scheme and discuss possible approaches to constructing fine-grained One-Way Functions. We also show how our reductions make conjectures regarding the worst-case hardness of the problems we reduce from (and consequently the Strong Exponential Time Hypothesis) heuristically falsifiable in a sense similar to that of (Naor, CRYPTO ’03). We prove our hardness results in each case by showing fine-grained reductions from solving one of three problems – namely, Orthogonal Vectors (OV), 3SUM, and All-Pairs Shortest Paths (APSP) – in the worst case to computing our function correctly on a uniformly random input. The conjectured hardness of OV and 3SUM then gives us functions that require n^{2−o(1)} time to compute on average, and that of APSP gives us a function that requires n^{3−o(1)} time. Using the same techniques we also obtain a conditional average-case time hierarchy of functions. @InProceedings{STOC17p483, author = {Marshall Ball and Alon Rosen and Manuel Sabin and Prashant Nalini Vasudevan}, title = {Average-Case Fine-Grained Hardness}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {483--496}, doi = {}, year = {2017}, } |
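For reference, here is the Orthogonal Vectors problem that two of the reductions start from, with its trivial O(n^2 · d) check (our baseline sketch; the conjecture is that no algorithm runs in n^{2−ε} time):

def has_orthogonal_pair(A, B):
    # A, B: lists of 0/1 vectors; look for u in A, v in B with <u, v> = 0
    return any(all(a & b == 0 for a, b in zip(u, v)) for u in A for v in B)

A = [(1, 0, 1), (0, 1, 1)]
B = [(0, 1, 0), (1, 1, 0)]
print(has_orthogonal_pair(A, B))  # True: (1,0,1) is orthogonal to (0,1,0)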
|
Vazirani, Vijay V. |
STOC '17: "Settling the Complexity of ..."
Settling the Complexity of Leontief and PLC Exchange Markets under Exact and Approximate Equilibria
Jugal Garg, Ruta Mehta, Vijay V. Vazirani, and Sadra Yazdanbod (University of Illinois at Urbana-Champaign, USA; Georgia Institute of Technology, USA) Our first result shows membership in PPAD for the problem of computing approximate equilibria for an Arrow-Debreu exchange market for piecewise-linear concave (PLC) utility functions. As a corollary we also obtain membership in PPAD for Leontief utility functions. This settles an open question of Vazirani and Yannakakis (2011). Next we show FIXP-hardness of computing equilibria in Arrow-Debreu exchange markets under Leontief utility functions, and Arrow-Debreu markets under linear utility functions and Leontief production sets, thereby settling these open questions of Vazirani and Yannakakis (2011). As corollaries, we obtain FIXP-hardness for PLC utilities and for Arrow-Debreu markets under linear utility functions and polyhedral production sets. In all cases, as required under FIXP, the set of instances mapped onto will admit equilibria, i.e., will be "yes" instances. If all instances are under consideration, then in all cases we prove that the problem of deciding if a given instance admits an equilibrium is ETR-complete, where ETR is the class Existential Theory of Reals. As a consequence of the results stated above, and the fact that membership in FIXP has been established for PLC utilities, the entire computational difficulty of Arrow-Debreu markets under PLC utility functions lies in the Leontief utility subcase. This is perhaps the most unexpected aspect of our result, since Leontief utilities are meant for the case that goods are perfect complements, whereas PLC utilities are very general, capturing not only the cases when goods are complements and substitutes, but also arbitrary combinations of these and much more. Finally, we give a polynomial time algorithm for finding an equilibrium in Arrow-Debreu exchange markets under Leontief utility functions provided the number of agents is a constant. This settles part of an open problem of Devanur and Kannan (2008). @InProceedings{STOC17p890, author = {Jugal Garg and Ruta Mehta and Vijay V. Vazirani and Sadra Yazdanbod}, title = {Settling the Complexity of Leontief and PLC Exchange Markets under Exact and Approximate Equilibria}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {890--901}, doi = {}, year = {2017}, } |
|
Végh, László A. |
STOC '17: "A Simpler and Faster Strongly ..."
A Simpler and Faster Strongly Polynomial Algorithm for Generalized Flow Maximization
Neil Olver and László A. Végh (VU University Amsterdam, Netherlands; CWI, Netherlands; London School of Economics, UK) We present a new strongly polynomial algorithm for generalized flow maximization. The first strongly polynomial algorithm for this problem was given very recently by Végh; our new algorithm is much simpler, and much faster. The complexity bound O((m + n log n) mn log(n^2/m)) improves on the previous estimate obtained by Végh by almost a factor of O(n^2). Even for small numerical parameter values, our algorithm is essentially as fast as the best weakly polynomial algorithms. The key new technical idea is relaxing primal feasibility conditions. This allows us to work almost exclusively with integral flows, in contrast to all previous algorithms. @InProceedings{STOC17p100, author = {Neil Olver and László A. Végh}, title = {A Simpler and Faster Strongly Polynomial Algorithm for Generalized Flow Maximization}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {100--111}, doi = {}, year = {2017}, } |
|
Vempala, Santosh S. |
STOC '17: "Geodesic Walks in Polytopes ..."
Geodesic Walks in Polytopes
Yin Tat Lee and Santosh S. Vempala (Microsoft Research, USA; University of Washington, USA; Georgia Institute of Technology, USA) We introduce the geodesic walk for sampling Riemannian manifolds and apply it to the problem of generating uniform random points from the interior of polytopes in R^n specified by m inequalities. The walk is a discrete-time simulation of a stochastic differential equation (SDE) on the Riemannian manifold equipped with the metric induced by the Hessian of a convex function; each step is the solution of an ordinary differential equation (ODE). The resulting sampling algorithm for polytopes mixes in O^*(mn^3/4) steps. This is the first walk that breaks the quadratic barrier for mixing in high dimension, improving on the previous best bound of O^*(mn) by Kannan and Narayanan for the Dikin walk. We also show that each step of the geodesic walk (solving an ODE) can be implemented efficiently, thus improving the time complexity for sampling polytopes. Our analysis of the geodesic walk for general Hessian manifolds does not assume positive curvature and might be of independent interest. @InProceedings{STOC17p927, author = {Yin Tat Lee and Santosh S. Vempala}, title = {Geodesic Walks in Polytopes}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {927--940}, doi = {}, year = {2017}, } |
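For context, the kind of baseline random walk the geodesic walk improves on is easy to implement; below is a hit-and-run sampler for {x : Ax ≤ b} (our sketch, not the paper's algorithm; assumes a bounded polytope and a strictly feasible starting point):

import random

def hit_and_run(A, b, x, steps, rng):
    # One step: pick a random direction, compute the chord of the polytope
    # through x in that direction, and jump to a uniform point on the chord.
    n = len(x)
    for _ in range(steps):
        d = [rng.gauss(0, 1) for _ in range(n)]
        lo, hi = -1e18, 1e18
        for row, bi in zip(A, b):
            ad = sum(r * di for r, di in zip(row, d))
            ax = sum(r * xi for r, xi in zip(row, x))
            if abs(ad) > 1e-12:
                t = (bi - ax) / ad
                if ad > 0: hi = min(hi, t)
                else:      lo = max(lo, t)
        t = rng.uniform(lo, hi)
        x = [xi + t * di for xi, di in zip(x, d)]
    return x

# Unit square written as Ax <= b: x <= 1, -x <= 0, y <= 1, -y <= 0.
A = [(1, 0), (-1, 0), (0, 1), (0, -1)]
b = [1, 0, 1, 0]
print(hit_and_run(A, b, [0.5, 0.5], 100, random.Random(0)))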
|
Venkitasubramaniam, Muthuramakrishnan |
STOC '17: "Equivocating Yao: Constant-Round ..."
Equivocating Yao: Constant-Round Adaptively Secure Multiparty Computation in the Plain Model
Ran Canetti, Oxana Poburinnaya, and Muthuramakrishnan Venkitasubramaniam (Boston University, USA; Tel Aviv University, Israel; University of Rochester, USA) Yao's circuit garbling scheme is one of the basic building blocks of cryptographic protocol design. Originally designed to enable two-message, two-party secure computation, the scheme has been extended in many ways and has innumerable applications. Still, a basic question has remained open throughout the years: Can the scheme be extended to guarantee security in the face of an adversary that corrupts both parties, adaptively, as the computation proceeds? We provide a positive answer to this question. We define a new type of encryption, called functionally equivocal encryption (FEE), and show that when Yao's scheme is implemented with an FEE as the underlying encryption mechanism, it becomes secure against such adaptive adversaries. We then show how to implement FEE from any one-way function. Combining our scheme with non-committing encryption, we obtain the first two-message, two-party computation protocol, and the first constant-round multiparty computation protocol, in the plain model, that are secure against semi-honest adversaries who can adaptively corrupt all parties. A number of extensions and applications are described within. @InProceedings{STOC17p497, author = {Ran Canetti and Oxana Poburinnaya and Muthuramakrishnan Venkitasubramaniam}, title = {Equivocating Yao: Constant-Round Adaptively Secure Multiparty Computation in the Plain Model}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {497--509}, doi = {}, year = {2017}, } |
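To fix ideas, here is a toy garbled AND gate in Python (ours; deliberately insecure, using SHA-256-derived one-time pads in place of proper authenticated encryption, and omitting the point-and-permute bit real schemes use to identify the correct table row):

import os, hashlib, random

def H(a, b):
    return hashlib.sha256(a + b).digest()[:16]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def garble_and():
    # each wire (inputs a, b; output c) gets one random label per bit value
    labels = {w: (os.urandom(16), os.urandom(16)) for w in "abc"}
    table = [xor(H(labels["a"][x], labels["b"][y]), labels["c"][x & y])
             for x in (0, 1) for y in (0, 1)]
    random.shuffle(table)
    return labels, table

def evaluate(la, lb, table):
    # holding one label per input wire, try every row; exactly one decrypts
    # to the correct output label (the others yield garbage)
    pad = H(la, lb)
    return [xor(pad, row) for row in table]

labels, table = garble_and()
out = evaluate(labels["a"][1], labels["b"][1], table)
print(labels["c"][1] in out)  # True: the AND(1,1) output label is recovered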
|
Vidick, Thomas |
STOC '17: "Hardness Amplification for ..."
Hardness Amplification for Entangled Games via Anchoring
Mohammad Bavarian, Thomas Vidick, and Henry Yuen (Massachusetts Institute of Technology, USA; California Institute of Technology, USA; University of California at Berkeley, USA) We study the parallel repetition of one-round games involving players that can use quantum entanglement. A major open question in this area is whether parallel repetition reduces the entangled value of a game at an exponential rate — in other words, does an analogue of Raz’s parallel repetition theorem hold for games with players sharing quantum entanglement? Previous results only apply to special classes of games. We introduce a class of games we call anchored. We then introduce a simple transformation on games called anchoring, inspired in part by the Feige-Kilian transformation, that turns any (multiplayer) game into an anchored game. Unlike the Feige-Kilian transformation, our anchoring transformation is completeness preserving. We prove an exponential-decay parallel repetition theorem for anchored games that involve any number of entangled players. We also prove a threshold version of our parallel repetition theorem for anchored games. Together, our parallel repetition theorems and anchoring transformation provide the first hardness amplification techniques for general entangled games. We give an application to the games version of the Quantum PCP Conjecture. @InProceedings{STOC17p303, author = {Mohammad Bavarian and Thomas Vidick and Henry Yuen}, title = {Hardness Amplification for Entangled Games via Anchoring}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {303--316}, doi = {}, year = {2017}, } STOC '17: "A Quantum Linearity Test for ..." A Quantum Linearity Test for Robustly Verifying Entanglement Anand Natarajan and Thomas Vidick (Massachusetts Institute of Technology, USA; California Institute of Technology, USA) We introduce a simple two-player test which certifies that the players apply tensor products of Pauli σ_X and σ_Z observables on the tensor product of n EPR pairs. The test has constant robustness: any strategy achieving success probability within an additive ε of the optimal must be poly(ε)-close, in the appropriate distance measure, to the honest n-qubit strategy. The test involves 2n-bit questions and 2-bit answers. The key technical ingredient is a quantum version of the classical linearity test of Blum, Luby, and Rubinfeld. As applications of our result we give (i) the first robust self-test for n EPR pairs; (ii) a quantum multiprover interactive proof system for the local Hamiltonian problem with a constant number of provers and classical questions and answers, and a constant completeness-soundness gap independent of system size; (iii) a robust protocol for verifiable delegated quantum computation with a constant number of quantum polynomial-time provers sharing entanglement. @InProceedings{STOC17p1003, author = {Anand Natarajan and Thomas Vidick}, title = {A Quantum Linearity Test for Robustly Verifying Entanglement}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1003--1015}, doi = {}, year = {2017}, } |
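The classical Blum-Luby-Rubinfeld test that the quantum linearity test is modeled on takes three queries per trial. A minimal Python version (ours):

import random

def blr_test(f, n, trials, rng):
    # accept f : {0,1}^n -> {0,1} only if f(x) ^ f(y) == f(x ^ y) holds on
    # every sampled pair; linear (parity) functions always pass, and
    # functions far from linear fail with high probability
    for _ in range(trials):
        x = rng.getrandbits(n)
        y = rng.getrandbits(n)
        if f(x) ^ f(y) != f(x ^ y):
            return False
    return True

rng = random.Random(0)
linear = lambda x: bin(x & 0b1011).count("1") % 2  # parity of a fixed mask
print(blr_test(linear, 4, 200, rng))               # True
nonlinear = lambda x: 1 if x == 3 else 0
print(blr_test(nonlinear, 4, 200, rng))            # almost surely False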
|
Vishnoi, Nisheeth K. |
STOC '17: "Real Stable Polynomials and ..."
Real Stable Polynomials and Matroids: Optimization and Counting
Damian Straszak and Nisheeth K. Vishnoi (EPFL, Switzerland) Several fundamental optimization and counting problems arising in computer science, mathematics and physics can be reduced to one of the following computational tasks involving polynomials and set systems: given oracle access to an m-variate real polynomial g and to a family of (multi-)subsets B of [m], (1) compute the sum of coefficients of monomials in g corresponding to all the sets that appear in B, or (2) find S ∈ B such that the monomial in g corresponding to S has the largest coefficient in g. Special cases of these problems, such as computing permanents and mixed discriminants, sampling from determinantal point processes, and maximizing sub-determinants with combinatorial constraints have been topics of much recent interest in theoretical computer science. In this paper we present a general convex programming framework geared to solve both of these problems. Subsequently, we show that roughly, when g is a real stable polynomial with non-negative coefficients and B is a matroid, the integrality gap of our convex relaxation is finite and depends only on m (and not on the coefficients of g). Prior to this work, such results were known only in important but sporadic cases that relied heavily on the structure of either g or B; it was not even a priori clear if one could formulate a convex relaxation that has a finite integrality gap beyond these special cases. Two notable examples are a result by Gurvits for real stable polynomials g when B contains one element, and a result by Nikolov and Singh for a family of multi-linear real stable polynomials when B is the partition matroid. This work, which encapsulates almost all interesting cases of g and B, benefits from both – it is inspired by the latter in coming up with the right convex programming relaxation and the former in deriving the integrality gap. However, proving our results requires extensions of both; in that process we come up with new notions and connections between real stable polynomials and matroids which might be of independent interest. @InProceedings{STOC17p370, author = {Damian Straszak and Nisheeth K. Vishnoi}, title = {Real Stable Polynomials and Matroids: Optimization and Counting}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {370--383}, doi = {}, year = {2017}, } |
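Concretely, tasks (1) and (2) look as follows when g is given explicitly rather than by an oracle (our toy Python sketch; coeff_sum and argmax_coeff are hypothetical names):

def coeff_sum(g, B):                        # task (1)
    return sum(g.get(frozenset(S), 0.0) for S in B)

def argmax_coeff(g, B):                     # task (2)
    return max(B, key=lambda S: g.get(frozenset(S), 0.0))

# g(z1, z2, z3) = 2*z1*z2 + 5*z1*z3 + 1*z2*z3, keyed by monomial support
g = {frozenset({1, 2}): 2.0, frozenset({1, 3}): 5.0, frozenset({2, 3}): 1.0}
B = [{1, 2}, {2, 3}]                        # e.g. the bases of a matroid
print(coeff_sum(g, B), argmax_coeff(g, B))  # 3.0 {1, 2}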
|
Vladu, Adrian |
STOC '17: "Almost-Linear-Time Algorithms ..."
Almost-Linear-Time Algorithms for Markov Chains and New Spectral Primitives for Directed Graphs
Michael B. Cohen, Jonathan Kelner, John Peebles, Richard Peng, Anup B. Rao, Aaron Sidford, and Adrian Vladu (Massachusetts Institute of Technology, USA; Georgia Institute of Technology, USA; Stanford University, USA) In this paper, we begin to address the longstanding algorithmic gap between general and reversible Markov chains. We develop directed analogues of several spectral graph-theoretic tools that had previously been available only in the undirected setting, and for which it was not clear that directed versions even existed. In particular, we provide a notion of approximation for directed graphs, prove sparsifiers under this notion always exist, and show how to construct them in almost linear time. Using this notion of approximation, we design the first almost-linear-time directed Laplacian system solver, and, by leveraging the recent framework of [Cohen-Kelner-Peebles-Peng-Sidford-Vladu, FOCS’16], we also obtain almost-linear-time algorithms for computing the stationary distribution of a Markov chain, computing expected commute times in a directed graph, and more. For each problem, our algorithms improve the previous best running times of O((nm^{3/4} + n^{2/3} m) log^{O(1)}(n κ є^{−1})) to O((m + n 2^{O(√(log n log log n))}) log^{O(1)}(n κ є^{−1})), where n is the number of vertices in the graph, m is the number of edges, κ is a natural condition number associated with the problem, and є is the desired accuracy. We hope these results open the door for further studies into directed spectral graph theory, and that they will serve as a stepping stone for designing a new generation of fast algorithms for directed graphs. @InProceedings{STOC17p410, author = {Michael B. Cohen and Jonathan Kelner and John Peebles and Richard Peng and Anup B. Rao and Aaron Sidford and Adrian Vladu}, title = {Almost-Linear-Time Algorithms for Markov Chains and New Spectral Primitives for Directed Graphs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {410--419}, doi = {}, year = {2017}, } |
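For scale, the textbook dense approach to one of these quantities, the stationary distribution, is plain power iteration at Θ(n^2) per step; the paper's solver targets the same quantity in time almost linear in the number of edges. A baseline Python sketch (ours):

def stationary(P, iters=10000, tol=1e-12):
    # power iteration pi <- pi P on a row-stochastic transition matrix P
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        new = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
        if max(abs(a - b) for a, b in zip(new, pi)) < tol:
            return new
        pi = new
    return pi

P = [[0.9, 0.1],
     [0.5, 0.5]]
print(stationary(P))  # approx [0.8333, 0.1667]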
|
Volk, Ben Lee |
STOC '17: "Succinct Hitting Sets and ..."
Succinct Hitting Sets and Barriers to Proving Algebraic Circuits Lower Bounds
Michael A. Forbes, Amir Shpilka, and Ben Lee Volk (Simons Institute for the Theory of Computing, Berkeley, USA; Tel Aviv University, Israel) We formalize a framework of algebraically natural lower bounds for algebraic circuits. Just as with the natural proofs notion of Razborov and Rudich for boolean circuit lower bounds, our notion of algebraically natural lower bounds captures nearly all lower bound techniques known. However, unlike the boolean setting, there has been no concrete evidence demonstrating that this is a barrier to obtaining super-polynomial lower bounds for general algebraic circuits, as there is little understanding whether algebraic circuits are expressive enough to support "cryptography" secure against algebraic circuits. Following a similar result of Williams in the boolean setting, we show that the existence of an algebraic natural proofs barrier is equivalent to the existence of succinct derandomization of the polynomial identity testing problem. That is, whether the coefficient vectors of polylog(N)-degree polylog(N)-size circuits form a hitting set for the class of poly(N)-degree poly(N)-size circuits. Further, we give an explicit universal construction showing that if such a succinct hitting set exists, then our universal construction suffices. We also assess the existing literature constructing hitting sets for restricted classes of algebraic circuits and observe that none of them are succinct as given. Yet, we show how to modify some of these constructions to obtain succinct hitting sets. This constitutes the first evidence supporting the existence of an algebraic natural proofs barrier. Our framework is similar to the Geometric Complexity Theory (GCT) program of Mulmuley and Sohoni, except that here we emphasize constructiveness of the proofs while the GCT program emphasizes symmetry. Nevertheless, our succinct hitting sets have relevance to the GCT program as they imply lower bounds for the complexity of the defining equations of polynomials computed by small circuits. @InProceedings{STOC17p653, author = {Michael A. Forbes and Amir Shpilka and Ben Lee Volk}, title = {Succinct Hitting Sets and Barriers to Proving Algebraic Circuits Lower Bounds}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {653--664}, doi = {}, year = {2017}, } |
|
Vyas, Nikhil |
STOC '17: "Faster Space-Efficient Algorithms ..."
Faster Space-Efficient Algorithms for Subset Sum and k-Sum
Nikhil Bansal, Shashwat Garg, Jesper Nederlof, and Nikhil Vyas (Eindhoven University of Technology, Netherlands; IIT Bombay, India) We present randomized algorithms that solve Subset Sum and Knapsack instances with n items in O*(2^{0.86n}) time and polynomial space, where the O*(·) notation suppresses factors polynomial in the input size, assuming random read-only access to exponentially many random bits. These results can be extended to solve Binary Linear Programming on n variables with few constraints in a similar running time. We also show that for any constant k ≥ 2, random instances of k-Sum can be solved using O(n^{k−0.5} polylog(n)) time and O(log n) space, without the assumption of random access to random bits. Underlying these results is an algorithm that determines whether two given lists of length n with integers bounded by a polynomial in n share a common value. Assuming random read-only access to random bits, we show that this problem can be solved using O(log n) space significantly faster than the trivial O(n^2) time algorithm if no value occurs too often in the same list. @InProceedings{STOC17p198, author = {Nikhil Bansal and Shashwat Garg and Jesper Nederlof and Nikhil Vyas}, title = {Faster Space-Efficient Algorithms for Subset Sum and k-Sum}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {198--209}, doi = {}, year = {2017}, } |
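The space/time trade-off is easiest to see against the classical meet-in-the-middle baseline, which gets O*(2^{n/2}) time only by storing 2^{n/2} sums; the paper's point is 2^{0.86n} time with polynomial space. A Python sketch of the baseline (ours):

from itertools import combinations

def subset_sum(items, target):
    # enumerate all subset sums of each half, then look for a matching pair
    half = len(items) // 2
    left, right = items[:half], items[half:]
    left_sums, right_sums = {0}, {0}
    for r in range(1, len(left) + 1):
        left_sums.update(sum(c) for c in combinations(left, r))
    for r in range(1, len(right) + 1):
        right_sums.update(sum(c) for c in combinations(right, r))
    return any(target - s in right_sums for s in left_sums)

print(subset_sum([3, 34, 4, 12, 5, 2], 9))   # True (4 + 5)
print(subset_sum([3, 34, 4, 12, 5, 2], 30))  # False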
|
Waingarten, Erik |
STOC '17: "Approximate Near Neighbors ..."
Approximate Near Neighbors for General Symmetric Norms
Alexandr Andoni, Huy L. Nguyen, Aleksandar Nikolov, Ilya Razenshteyn, and Erik Waingarten (Columbia University, USA; Northeastern University, USA; University of Toronto, Canada; Massachusetts Institute of Technology, USA) We show that every *symmetric* normed space admits an efficient nearest neighbor search data structure with doubly-logarithmic approximation. Specifically, for every n, d = n^{o(1)}, and every d-dimensional symmetric norm ||·||, there exists a data structure for (log log n)-approximate nearest neighbor search over ||·|| for n-point datasets achieving n^{o(1)} query time and n^{1+o(1)} space. The main technical ingredient of the algorithm is a low-distortion embedding of a symmetric norm into a low-dimensional iterated product of top-k norms. We also show that our techniques cannot be extended to *general* norms. @InProceedings{STOC17p902, author = {Alexandr Andoni and Huy L. Nguyen and Aleksandar Nikolov and Ilya Razenshteyn and Erik Waingarten}, title = {Approximate Near Neighbors for General Symmetric Norms}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {902--913}, doi = {}, year = {2017}, } STOC '17: "Beyond Talagrand Functions: ..." Beyond Talagrand Functions: New Lower Bounds for Testing Monotonicity and Unateness Xi Chen, Erik Waingarten, and Jinyu Xie (Columbia University, USA) We prove a lower bound of Ω(n^{1/3}) for the query complexity of any two-sided and adaptive algorithm that tests whether an unknown Boolean function f : {0,1}^n → {0,1} is monotone versus far from monotone. This improves the recent lower bound of Ω(n^{1/4}) for the same problem by Belovs and Blais (STOC’16). Our result builds on a new family of random Boolean functions that can be viewed as a two-level extension of Talagrand’s random DNFs. Beyond monotonicity we prove a lower bound of Ω(√n) for two-sided, adaptive algorithms and a lower bound of Ω(n) for one-sided, non-adaptive algorithms for testing unateness, a natural generalization of monotonicity. The latter matches the linear upper bounds by Khot and Shinkar (RANDOM’16) and by Baleshzar, Chakrabarty, Pallavoor, Raskhodnikova, and Seshadhri (2017). @InProceedings{STOC17p523, author = {Xi Chen and Erik Waingarten and Jinyu Xie}, title = {Beyond Talagrand Functions: New Lower Bounds for Testing Monotonicity and Unateness}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {523--536}, doi = {}, year = {2017}, } |
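The building block of the embedding, the top-k norm, is one line: the sum of the k largest coordinates in absolute value, interpolating between ℓ_∞ (k = 1) and ℓ_1 (k = d). A Python sketch (ours):

def top_k_norm(x, k):
    # sum of the k largest coordinates of x in absolute value
    return sum(sorted((abs(v) for v in x), reverse=True)[:k])

x = [3.0, -7.0, 1.0, 5.0]
print(top_k_norm(x, 1), top_k_norm(x, 2), top_k_norm(x, 4))  # 7.0 12.0 16.0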
|
Wang, Ruosong |
STOC '17: "Exponential Separations in ..."
Exponential Separations in the Energy Complexity of Leader Election
Yi-Jun Chang, Tsvi Kopelowitz, Seth Pettie, Ruosong Wang, and Wei Zhan (University of Michigan, USA; Tsinghua University, China) Energy is often the most constrained resource for battery-powered wireless devices and the lion’s share of energy is often spent on transceiver usage (sending/receiving packets), not on computation. In this paper we study the energy complexity of Leader Election and Approximate Counting in several models of wireless radio networks. It turns out that energy complexity is very sensitive to whether the devices can generate random bits and their ability to detect collisions. We consider four collision-detection models: Strong-CD (in which transmitters and listeners detect collisions), Sender-CD and Receiver-CD (in which only transmitters or only listeners detect collisions), and No-CD (in which no one detects collisions). The take-away message of our results is quite surprising. For randomized Leader Election algorithms, there is an exponential gap between the energy complexity of Sender-CD and Receiver-CD: No-CD = Sender-CD ≫ Receiver-CD = Strong-CD. For deterministic Leader Election algorithms, there is another exponential gap in energy complexity, but in the reverse direction: No-CD = Receiver-CD ≫ Sender-CD = Strong-CD. In particular, the randomized energy complexity of Leader Election is Θ(log* n) in Sender-CD but Θ(log(log* n)) in Receiver-CD, where n is the (unknown) number of devices. Its deterministic complexity is Θ(log N) in Receiver-CD but Θ(log log N) in Sender-CD, where N is the (known) size of the devices’ ID space. There is a tradeoff between time and energy. We give a new upper bound on the time-energy tradeoff curve for randomized Leader Election and Approximate Counting. A critical component of this algorithm is a new deterministic Leader Election algorithm for dense instances, when n = Θ(N), with inverse-Ackermann-type O(α(N)) energy complexity. @InProceedings{STOC17p771, author = {Yi-Jun Chang and Tsvi Kopelowitz and Seth Pettie and Ruosong Wang and Wei Zhan}, title = {Exponential Separations in the Energy Complexity of Leader Election}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {771--783}, doi = {}, year = {2017}, } |
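A toy round-based simulation (ours, not one of the paper's protocols) of decay-style leader election with collision detection; it charges every device for every round, which is precisely the energy waste the paper's protocols avoid by putting devices to sleep:

import random

def leader_election(n, rng, max_rounds=10**4):
    # each round, every device transmits with probability 1/2^i; the shared
    # channel feedback (silence / single / collision) steers i
    energy, i = 0, 1
    for _ in range(max_rounds):
        p = 2.0 ** -i
        transmitters = [d for d in range(n) if rng.random() < p]
        energy += 1                      # every device is awake this round
        if len(transmitters) == 1:
            return transmitters[0], energy
        if len(transmitters) == 0:       # silence: raise the probability
            i = max(1, i - 1)
        else:                            # collision: lower the probability
            i += 1
    return None, energy

rng = random.Random(0)
print(leader_election(1000, rng))  # (leader id, per-device energy spent)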
|
Wei, Fan |
STOC '17: "Local Max-Cut in Smoothed ..."
Local Max-Cut in Smoothed Polynomial Time
Omer Angel, Sébastien Bubeck, Yuval Peres, and Fan Wei (University of British Columbia, Canada; Microsoft Research, USA; Stanford University, USA) In 1988, Johnson, Papadimitriou and Yannakakis wrote that “Practically all the empirical evidence would lead us to conclude that finding locally optimal solutions is much easier than solving NP-hard problems”. Since then the empirical evidence has continued to amass, but formal proofs of this phenomenon have remained elusive. A canonical (and indeed complete) example is the local max-cut problem, for which no polynomial time method is known. In a breakthrough paper, Etscheid and Röglin proved that the smoothed complexity of local max-cut is quasi-polynomial, i.e., if arbitrary bounded weights are randomly perturbed, a local maximum can be found in φ · n^{O(log n)} steps where φ is an upper bound on the random edge weight density. In this paper we prove smoothed polynomial complexity for local max-cut, thus confirming that finding local optima for max-cut is much easier than solving it. @InProceedings{STOC17p429, author = {Omer Angel and Sébastien Bubeck and Yuval Peres and Fan Wei}, title = {Local Max-Cut in Smoothed Polynomial Time}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {429--437}, doi = {}, year = {2017}, } |
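The dynamics being analyzed is plain FLIP local search on a perturbed instance: move any vertex whose flip strictly increases the cut weight, until no such vertex exists. A minimal Python sketch (ours):

import random

def flip_local_search(n, edges, rng):
    # edges: list of (u, v, w); side[v] in {0, 1} is v's side of the cut
    side = [rng.randint(0, 1) for _ in range(n)]
    def gain(v):
        # change in cut weight if v switches sides
        g = 0.0
        for u, x, w in edges:
            if v in (u, x):
                other = x if u == v else u
                g += w if side[other] == side[v] else -w
        return g
    steps, improved = 0, True
    while improved:
        improved = False
        for v in range(n):
            if gain(v) > 0:
                side[v] = 1 - side[v]
                improved = True
                steps += 1
    return side, steps

rng = random.Random(0)
n = 10
# smoothed instance: every edge weight is independently perturbed
edges = [(u, v, rng.uniform(0, 1)) for u in range(n) for v in range(u + 1, n)]
print(flip_local_search(n, edges, rng))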
|
Weismantel, Robert |
STOC '17: "A Strongly Polynomial Algorithm ..."
A Strongly Polynomial Algorithm for Bimodular Integer Linear Programming
Stephan Artmann, Robert Weismantel, and Rico Zenklusen (ETH Zurich, Switzerland) We present a strongly polynomial algorithm to solve integer programs of the form max{c^T x : Ax ≤ b, x ∈ ℤ^n}, for A ∈ ℤ^{m×n} with rank(A) = n, b ∈ ℤ^m, c ∈ ℤ^n, and where all determinants of (n × n)-sub-matrices of A are bounded by 2 in absolute value. In particular, this implies that integer programs max{c^T x : Qx ≤ b, x ∈ ℤ^n_{≥0}}, where Q ∈ ℤ^{m×n} has the property that all subdeterminants are bounded by 2 in absolute value, can be solved in strongly polynomial time. We thus obtain an extension of the well-known result that integer programs with constraint matrices that are totally unimodular are solvable in strongly polynomial time. @InProceedings{STOC17p1206, author = {Stephan Artmann and Robert Weismantel and Rico Zenklusen}, title = {A Strongly Polynomial Algorithm for Bimodular Integer Linear Programming}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1206--1219}, doi = {}, year = {2017}, } |
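The hypothesis of the theorem is checkable by brute force on small matrices: every n × n subdeterminant must have absolute value at most 2. A Python sketch (ours) using exact rational Gaussian elimination:

from itertools import combinations
from fractions import Fraction

def det(M):
    # determinant via Gaussian elimination over the rationals
    M = [[Fraction(x) for x in row] for row in M]
    n, d = len(M), Fraction(1)
    for c in range(n):
        piv = next((r for r in range(c, n) if M[r][c] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            d = -d
        d *= M[c][c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            M[r] = [a - f * b for a, b in zip(M[r], M[c])]
    return d

def is_bimodular(A, n):
    return all(abs(det([A[i] for i in rows])) <= 2
               for rows in combinations(range(len(A)), n))

A = [[1, 1], [1, -1], [0, 1]]   # all 2x2 subdeterminants lie in {-2, ..., 2}
print(is_bimodular(A, 2))       # True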
|
Wigderson, Avi |
STOC '17: "Algorithmic and Optimization ..."
Algorithmic and Optimization Aspects of Brascamp-Lieb Inequalities, via Operator Scaling
Ankit Garg, Leonid Gurvits, Rafael Oliveira, and Avi Wigderson (Microsoft Research, USA; City College of New York, USA; Princeton University, USA; IAS, USA) The celebrated Brascamp-Lieb (BL) inequalities [BL76, Lie90], and their reverse form of Barthe [Bar98], are an important mathematical tool, unifying and generalizing numerous inequalities in analysis, convex geometry and information theory, with many used in computer science. While their structural theory is very well understood, far less is known about computing their main parameters (which we define below). Prior to this work, the best known algorithms for any of these optimization tasks required at least exponential time. In this work, we give polynomial time algorithms to compute: (1) Feasibility of BL-datum, (2) Optimal BL-constant, (3) Weak separation oracle for BL-polytopes. What is particularly exciting about this progress, beyond the better understanding of BL-inequalities, is that the objects above naturally encode rich families of optimization problems which had no prior efficient algorithms. In particular, the BL-constants (which we efficiently compute) are solutions to non-convex optimization problems, and the BL-polytopes (for which we provide efficient membership and separation oracles) are linear programs with exponentially many facets. Thus we hope that new combinatorial optimization problems can be solved via reductions to the ones above, and make modest initial steps in exploring this possibility. Our algorithms are obtained by a simple efficient reduction of a given BL-datum to an instance of the Operator Scaling problem defined by [Gur04]. To obtain the results above, we utilize the two (very recent and different) algorithms for the operator scaling problem [GGOW16, IQS15a]. Our reduction implies algorithmic versions of many of the known structural results on BL-inequalities, and in some cases provide proofs that are different or simpler than existing ones. Further, the analytic properties of the [GGOW16] algorithm provide new, effective bounds on the magnitude and continuity of BL-constants, with applications to non-linear versions of BL-inequalities; prior work relied on compactness, and thus provided no bounds. On a higher level, our application of operator scaling algorithm to BL-inequalities further connects analysis and optimization with the diverse mathematical areas used so far to motivate and solve the operator scaling problem, which include commutative invariant theory, non-commutative algebra, computational complexity and quantum information theory. @InProceedings{STOC17p397, author = {Ankit Garg and Leonid Gurvits and Rafael Oliveira and Avi Wigderson}, title = {Algorithmic and Optimization Aspects of Brascamp-Lieb Inequalities, via Operator Scaling}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {397--409}, doi = {}, year = {2017}, } |
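Operator scaling generalizes classical Sinkhorn matrix scaling, which conveys the flavor of the alternating-normalization algorithms involved. A minimal Python sketch of the commutative special case (ours, not the operator scaling algorithm itself):

def sinkhorn(M, iters=500):
    # alternately normalize rows and columns of a positive matrix until it
    # is (nearly) doubly stochastic
    n = len(M)
    A = [row[:] for row in M]
    for _ in range(iters):
        for i in range(n):                          # normalize rows
            s = sum(A[i])
            A[i] = [x / s for x in A[i]]
        for j in range(n):                          # normalize columns
            s = sum(A[i][j] for i in range(n))
            for i in range(n):
                A[i][j] /= s
    return A

A = sinkhorn([[2.0, 1.0], [1.0, 3.0]])
print([sum(row) for row in A])                           # ~[1, 1]
print([sum(A[i][j] for i in range(2)) for j in range(2)])  # ~[1, 1]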
|
Williams, Ryan |
STOC '17: "Probabilistic Rank and Matrix ..."
Probabilistic Rank and Matrix Rigidity
Josh Alman and Ryan Williams (Massachusetts Institute of Technology, USA) We consider a notion of probabilistic rank and probabilistic sign-rank of a matrix, which measure the extent to which a matrix can be probabilistically represented by low-rank matrices. We demonstrate several connections with matrix rigidity, communication complexity, and circuit lower bounds. The most interesting outcomes are: The Walsh-Hadamard Transform is Not Very Rigid. We give surprising upper bounds on the rigidity of a family of matrices whose rigidity has been extensively studied, and was conjectured to be highly rigid. For the 2^n × 2^n Walsh-Hadamard transform H_n (a.k.a. Sylvester matrices, a.k.a. the communication matrix of Inner Product modulo 2), we show how to modify only 2^{εn} entries in each row and make the rank of H_n drop below 2^{n(1−Ω(ε^2/log(1/ε)))}, for all small ε > 0, over any field. That is, it is not possible to prove arithmetic circuit lower bounds on Hadamard matrices such as H_n, via L. Valiant’s matrix rigidity approach. We also show non-trivial rigidity upper bounds for H_n with smaller target rank. Matrix Rigidity and Threshold Circuit Lower Bounds. We give new consequences of rigid matrices for Boolean circuit complexity. First, we show that explicit n × n Boolean matrices which maintain rank at least 2^{(log n)^{1−δ}} after n^2/2^{(log n)^{δ/2}} modified entries (over any field, for any δ > 0) would yield an explicit function that does not have sub-quadratic-size AC^0 circuits with two layers of arbitrary linear threshold gates. Second, we prove that explicit 0/1 matrices over the reals which are modestly more rigid than the best known rigidity lower bounds for sign-rank would imply exponential-gate lower bounds for the infamously difficult class of depth-two linear threshold circuits with arbitrary weights on both layers. In particular, we show that matrices defined by these seemingly-difficult circuit classes actually have low probabilistic rank and sign-rank, respectively. An Equivalence Between Communication, Probabilistic Rank, and Rigidity. It has been known since Razborov [1989] that explicit rigidity lower bounds would resolve longstanding lower-bound problems in communication complexity, but it seemed possible that communication lower bounds could be proved without making progress on matrix rigidity. We show that for every function f which is randomly self-reducible in a natural way (the inner product mod 2 is an example), bounding the communication complexity of f (in a precise technical sense) is equivalent to bounding the rigidity of the matrix of f, via an equivalence with probabilistic rank. @InProceedings{STOC17p641, author = {Josh Alman and Ryan Williams}, title = {Probabilistic Rank and Matrix Rigidity}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {641--652}, doi = {}, year = {2017}, } |
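The matrix family in question is generated by Sylvester's recursion, equivalently H_n[x][y] = (−1)^{⟨x,y⟩}. A short Python construction (ours):

def hadamard(n):
    # Sylvester recursion: H_0 = [1], H_k = [[H, H], [H, -H]]
    H = [[1]]
    for _ in range(n):
        H = [row + row for row in H] + [row + [-v for v in row] for row in H]
    return H

for row in hadamard(2):
    print(row)
# e.g. entry [3][3] is (-1)^{<11,11>} = (-1)^2 = 1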
|
Witmer, David |
STOC '17: "Sum of Squares Lower Bounds ..."
Sum of Squares Lower Bounds for Refuting any CSP
Pravesh K. Kothari, Ryuhei Mori, Ryan O'Donnell, and David Witmer (Princeton University, USA; IAS, USA; Tokyo Institute of Technology, Japan; Carnegie Mellon University, USA) Let P : {0,1}^k → {0,1} be a nontrivial k-ary predicate. Consider a random instance of the constraint satisfaction problem CSP(P) on n variables with Δn constraints, each being P applied to k randomly chosen literals. Provided the constraint density satisfies Δ ≫ 1, such an instance is unsatisfiable with high probability. The refutation problem is to efficiently find a proof of unsatisfiability. We show that whenever the predicate P supports a t-wise uniform probability distribution on its satisfying assignments, the sum of squares (SOS) algorithm of degree d = Θ(n/(Δ^{2/(t−1)} log Δ)) (which runs in time n^{O(d)}) cannot refute a random instance of CSP(P). In particular, the polynomial-time SOS algorithm requires Ω(n^{(t+1)/2}) constraints to refute random instances of CSP(P) when P supports a t-wise uniform distribution on its satisfying assignments. Together with recent work of Lee et al. (Lee, Raghavendra, Steurer 2015), our result also implies that any polynomial-size semidefinite programming relaxation for refutation requires at least Ω(n^{(t+1)/2}) constraints. More generally, we consider the δ-refutation problem, in which the goal is to certify that at most a (1−δ)-fraction of constraints can be simultaneously satisfied. We show that if P is δ-close to supporting a t-wise uniform distribution on satisfying assignments, then the degree-Ω(n/(Δ^{2/(t−1)} log Δ)) SOS algorithm cannot (δ+o(1))-refute a random instance of CSP(P). This is the first result to show a distinction between the degree SOS needs to solve the refutation problem and the degree it needs to solve the harder δ-refutation problem. Our results (which also extend with no change to CSPs over larger alphabets) subsume all previously known lower bounds for semialgebraic refutation of random CSPs. For every constraint predicate P, they give a three-way hardness tradeoff between the density of constraints, the SOS degree (hence running time), and the strength of the refutation. By recent algorithmic results of Allen, O’Donnell, Witmer (2015) and Raghavendra, Rao, Schramm (2016), this full three-way tradeoff is tight, up to lower-order factors. @InProceedings{STOC17p132, author = {Pravesh K. Kothari and Ryuhei Mori and Ryan O'Donnell and David Witmer}, title = {Sum of Squares Lower Bounds for Refuting any CSP}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {132--145}, doi = {}, year = {2017}, } |
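The random instances in the theorem are easy to generate. A toy Python sketch (ours; random_csp is a hypothetical name) that builds a random CSP(P) instance with Δn constraints and measures the fraction a given assignment satisfies:

import random

def random_csp(n, k, delta, P, rng):
    # each constraint applies P to k random literals (variable, sign) pairs
    cons = []
    for _ in range(int(delta * n)):
        vars_ = rng.sample(range(n), k)
        signs = [rng.randint(0, 1) for _ in range(k)]
        cons.append((vars_, signs))
    def frac_satisfied(assignment):
        ok = sum(P(tuple(assignment[v] ^ s for v, s in zip(vs, ss)))
                 for vs, ss in cons)
        return ok / len(cons)
    return frac_satisfied

P_3sat = lambda bits: int(any(bits))   # the 3-SAT predicate
rng = random.Random(0)
frac = random_csp(n=100, k=3, delta=30, P=P_3sat, rng=rng)
print(frac([rng.randint(0, 1) for _ in range(100)]))  # approx 7/8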
|
Wong, Sam Chiu-wai |
STOC '17: "Subquadratic Submodular Function ..."
Subquadratic Submodular Function Minimization
Deeparnab Chakrabarty, Yin Tat Lee, Aaron Sidford, and Sam Chiu-wai Wong (Dartmouth College, USA; Microsoft Research, USA; Stanford University, USA; University of California at Berkeley, USA) Submodular function minimization (SFM) is a fundamental discrete optimization problem which generalizes many well known problems, has applications in various fields, and can be solved in polynomial time. Owing to applications in computer vision and machine learning, fast SFM algorithms are highly desirable. The current fastest algorithms [Lee, Sidford, Wong, 2015] run in O(n^2 log nM · EO + n^3 log^{O(1)} nM) time and O(n^3 log^2 n · EO + n^4 log^{O(1)} n) time respectively, where M is the largest absolute value of the function (assuming the range is integers) and EO is the time taken to evaluate the function on any set. Although the best known lower bound on the query complexity is only Ω(n) [Harvey, 2008], no subquadratic algorithms were previously known. The main contributions of this paper are subquadratic SFM algorithms. For integer-valued submodular functions, we give an SFM algorithm which runs in O(nM^3 log n · EO) time, giving the first nearly linear time algorithm in any known regime. For real-valued submodular functions with range in [−1,1], we give an algorithm which in Õ(n^{5/3} · EO/ε^2) time returns an ε-additive approximate solution. At the heart of it, our algorithms are projected stochastic subgradient descent methods on the Lovasz extension of submodular functions, where we crucially exploit submodularity and data structures to obtain fast, i.e. sublinear time, subgradient updates. The latter is crucial for beating the n^2 bound – we show that algorithms which access only subgradients of the Lovasz extension, and these include the empirically fast Fujishige-Wolfe heuristic [Fujishige, 1980; Wolfe, 1976] @InProceedings{STOC17p1220, author = {Deeparnab Chakrabarty and Yin Tat Lee and Aaron Sidford and Sam Chiu-wai Wong}, title = {Subquadratic Submodular Function Minimization}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1220--1231}, doi = {}, year = {2017}, } |
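The core primitive behind the subgradient approach is Edmonds' greedy rule: sorting the coordinates of x gives both the value and a subgradient of the Lovasz extension. A minimal Python sketch of that primitive and a plain (non-stochastic, toy-scale) projected subgradient loop built on it (ours, not the paper's algorithm; assumes F(∅) = 0):

def lovasz_subgradient(F, x):
    # Edmonds' greedy rule: sort coordinates descending; the marginal gains
    # along the resulting chain of sets form a subgradient g, and the
    # extension value at x is <g, x>.
    n = len(x)
    order = sorted(range(n), key=lambda i: -x[i])
    g = [0.0] * n
    S, prev = set(), 0.0
    for i in order:
        S.add(i)
        cur = F(frozenset(S))
        g[i] = cur - prev
        prev = cur
    return sum(g[i] * x[i] for i in range(n)), g

def minimize(F, n, steps=2000, eta=0.05):
    # projected subgradient descent over [0,1]^n, then round
    x = [0.5] * n
    for _ in range(steps):
        _, g = lovasz_subgradient(F, x)
        x = [min(1.0, max(0.0, xi - eta * gi)) for xi, gi in zip(x, g)]
    return frozenset(i for i in range(n) if x[i] > 0.5)

# Example: cut function of the path 0-1-2 plus a modular term; F is
# submodular, F(empty) = 0, and the unique minimizer is {0} with F = -1.
edges = [(0, 1), (1, 2)]
a = [-2.0, 0.6, 0.7]
F = lambda S: sum((u in S) != (v in S) for u, v in edges) + sum(a[i] for i in S)
print(lovasz_subgradient(F, [1.0, 0.0, 0.0])[0])  # -1.0, agrees with F({0})
print(minimize(F, 3))                             # frozenset({0})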
|
Woodruff, David P. |
STOC '17: "Low Rank Approximation with ..."
Low Rank Approximation with Entrywise ℓ₁-Norm Error
Zhao Song, David P. Woodruff, and Peilin Zhong (University of Texas at Austin, USA; IBM Research, USA; Columbia University, USA) We study the ℓ_1-low rank approximation problem, where for a given n × d matrix A and approximation factor α ≥ 1, the goal is to output a rank-k matrix Â for which ||A−Â||_1 ≤ α · min_{rank-k matrices A′} ||A−A′||_1, where for an n × d matrix C, we let ||C||_1 = ∑_{i=1}^{n} ∑_{j=1}^{d} |C_{i,j}|. This error measure is known to be more robust than the Frobenius norm in the presence of outliers and is indicated in models where Gaussian assumptions on the noise may not apply. The problem was shown to be NP-hard by Gillis and Vavasis and a number of heuristics have been proposed. It was asked in multiple places if there are any approximation algorithms. We give the first provable approximation algorithms for ℓ_1-low rank approximation, showing that it is possible to achieve approximation factor α = (log d) · poly(k) in nnz(A) + (n+d) poly(k) time, where nnz(A) denotes the number of non-zero entries of A. If k is constant, we further improve the approximation ratio to O(1) with a poly(nd)-time algorithm. Under the Exponential Time Hypothesis, we show there is no poly(nd)-time algorithm achieving a (1+1/log^{1+γ}(nd))-approximation, for γ > 0 an arbitrarily small constant, even when k = 1. We give a number of additional results for ℓ_1-low rank approximation: nearly tight upper and lower bounds for column subset selection, CUR decompositions, extensions to low rank approximation with respect to ℓ_p-norms for 1 ≤ p < 2 and earthmover distance, low-communication distributed protocols and low-memory streaming algorithms, algorithms with limited randomness, and bicriteria algorithms. We also give a preliminary empirical evaluation. @InProceedings{STOC17p688, author = {Zhao Song and David P. Woodruff and Peilin Zhong}, title = {Low Rank Approximation with Entrywise ℓ₁-Norm Error}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {688--701}, doi = {}, year = {2017}, } |
|
Woods, Damien |
STOC '17: "The Non-cooperative Tile Assembly ..."
The Non-cooperative Tile Assembly Model Is Not Intrinsically Universal or Capable of Bounded Turing Machine Simulation
Pierre-Étienne Meunier and Damien Woods (Inria, France) The field of algorithmic self-assembly is concerned with the computational and expressive power of nanoscale self-assembling molecular systems. In the well-studied cooperative, or temperature 2, abstract tile assembly model it is known that there is a tile set to simulate any Turing machine and an intrinsically universal tile set that simulates the shapes and dynamics of any instance of the model, up to spatial rescaling. It has been an open question as to whether the seemingly simpler noncooperative, or temperature 1, model is capable of such behaviour. Here we show that this is not the case by showing that there is no tile set in the noncooperative model that is intrinsically universal, nor one capable of time-bounded Turing machine simulation within a bounded region of the plane. Although the noncooperative model intuitively seems to lack the complexity and power of the cooperative model it has been exceedingly hard to prove this. One reason is that there have been few tools to analyse the structure of complicated paths in the plane. This paper provides a number of such tools. A second reason is that almost every obvious and small generalisation to the model (e.g. allowing error, 3D, non-square tiles, signals/wires on tiles, tiles that repel each other, parallel synchronous growth) endows it with great computational, and sometimes simulation, power. Our main results show that all of these generalisations provably increase computational and/or simulation power. Our results hold for both deterministic and nondeterministic noncooperative systems. Our first main result stands in stark contrast with the fact that for both the cooperative tile assembly model, and for 3D noncooperative tile assembly, there are respective intrinsically universal tilesets. Our second main result gives a new technique (reduction to simulation) for proving negative results about computation in tile assembly. @InProceedings{STOC17p328, author = {Pierre-Étienne Meunier and Damien Woods}, title = {The Non-cooperative Tile Assembly Model Is Not Intrinsically Universal or Capable of Bounded Turing Machine Simulation}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {328--341}, doi = {}, year = {2017}, } |
|
Wright, John |
STOC '17: "Efficient Quantum Tomography ..."
Efficient Quantum Tomography II
Ryan O'Donnell and John Wright (Carnegie Mellon University, USA; Massachusetts Institute of Technology, USA) We continue our analysis of: (i) “Quantum tomography”, i.e., learning a quantum state, i.e., the quantum generalization of learning a discrete probability distribution; (ii) The distribution of Young diagrams output by the RSK algorithm on random words. Regarding (ii), we introduce two powerful new tools: first, a precise upper bound on the expected length of the longest union of k disjoint increasing subsequences in a random length-n word with letter distribution α_1 ≥ α_2 ≥ ⋯ ≥ α_d. Our bound has the correct main term and second-order term, and holds for all n, not just in the large-n limit. Second, a new majorization property of the RSK algorithm that allows one to analyze the Young diagram formed by the lower rows λ_k, λ_{k+1}, … of its output. These tools allow us to prove several new theorems concerning the distribution of random Young diagrams in the nonasymptotic regime, giving concrete error bounds that are optimal, or nearly so, in all parameters. As one example, we give a fundamentally new proof of the celebrated fact that the expected length of the longest increasing sequence in a random length-n permutation is bounded by 2√n. This is the k = 1, α_i ≡ 1/d, d → ∞ special case of a much more general result we prove: the expected length of the kth Young diagram row produced by an α-random word is α_k n ± 2√(α_k d n). From our new analyses of random Young diagrams we derive several new results in quantum tomography, including: (i) learning the eigenvalues of an unknown state to є-accuracy in Hellinger-squared, chi-squared, or KL distance, using n = O(d^2/є) copies; (ii) learning the top-k eigenvalues of an unknown state to є-accuracy in Hellinger-squared or chi-squared distance using n = O(kd/є) copies or in ℓ_2^2 distance using n = O(k/є) copies; (iii) learning the optimal rank-k approximation of an unknown state to є-fidelity (Hellinger-squared distance) using n = O(kd/є) copies. We believe our new techniques will lead to further advances in quantum learning; indeed, they have already subsequently been used for efficient von Neumann entropy estimation. @InProceedings{STOC17p962, author = {Ryan O'Donnell and John Wright}, title = {Efficient Quantum Tomography II}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {962--974}, doi = {}, year = {2017}, } |
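A quick empirical illustration of the k = 1 case (the first Young diagram row produced by RSK is the longest increasing subsequence): a standard patience-sorting computation, not the paper's machinery, checking the 2√n estimate on a random permutation.

```python
import bisect, random

def lis_length(seq):
    """Length of the longest increasing subsequence (patience sorting);
    this equals the length of the first row of the RSK Young diagram."""
    piles = []
    for x in seq:
        i = bisect.bisect_left(piles, x)
        if i == len(piles):
            piles.append(x)
        else:
            piles[i] = x
    return len(piles)

n = 10_000
perm = random.sample(range(n), n)      # uniformly random permutation
print(lis_length(perm), 2 * n ** 0.5)  # empirically close to 2*sqrt(n)
```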
|
Wulff-Nilsen, Christian |
STOC '17: "Fully-Dynamic Minimum Spanning ..."
Fully-Dynamic Minimum Spanning Forest with Improved Worst-Case Update Time
Christian Wulff-Nilsen (University of Copenhagen, Denmark) We give a Las Vegas data structure which maintains a minimum spanning forest in an n-vertex edge-weighted undirected dynamic graph undergoing updates consisting of any mixture of edge insertions and deletions. Each update is supported in O(n^{1/2 − c}) worst-case time w.h.p., where c > 0 is some constant, and this bound also holds in expectation. This is the first data structure achieving an improvement over the O(√n) deterministic worst-case update time of Eppstein et al., a bound that has been standing for 25 years. In fact, it was previously not even known how to maintain a spanning forest of an unweighted graph in worst-case time polynomially faster than Θ(√n). Our result is achieved by first giving a reduction from fully-dynamic to decremental minimum spanning forest preserving worst-case update time up to logarithmic factors. Then decremental minimum spanning forest is solved using several novel techniques, one of which involves keeping track of low-conductance cuts in a dynamic graph. An immediate corollary of our result is the first Las Vegas data structure for fully-dynamic connectivity where each update is handled in worst-case time polynomially faster than Θ(√n) w.h.p.; this data structure has O(1) worst-case query time. @InProceedings{STOC17p1130, author = {Christian Wulff-Nilsen}, title = {Fully-Dynamic Minimum Spanning Forest with Improved Worst-Case Update Time}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1130--1143}, doi = {}, year = {2017}, } |
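For contrast with the sublinear worst-case update time achieved here, a naive fully-dynamic baseline that recomputes the forest with Kruskal's algorithm on demand; it fixes the problem interface (insert, delete, query) but pays O(m log m) per query rather than O(n^{1/2−c}) per update.

```python
class NaiveDynamicMSF:
    """Maintains the edge set; recomputes the MSF from scratch per query
    (Kruskal). A baseline to make the interface concrete -- the paper's
    structure supports each update in O(n^{1/2-c}) worst-case time."""
    def __init__(self):
        self.edges = {}                  # (u, v) -> weight, with u < v

    def insert(self, u, v, w):
        self.edges[(min(u, v), max(u, v))] = w

    def delete(self, u, v):
        self.edges.pop((min(u, v), max(u, v)), None)

    def msf(self):
        parent = {}
        def find(x):                     # union-find with path halving
            parent.setdefault(x, x)
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        forest = []
        for (u, v), w in sorted(self.edges.items(), key=lambda e: e[1]):
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
                forest.append((u, v, w))
        return forest

pq = NaiveDynamicMSF()
pq.insert(1, 2, 5.0); pq.insert(2, 3, 1.0); pq.insert(1, 3, 2.0)
pq.delete(2, 3)
print(pq.msf())   # [(1, 3, 2.0), (1, 2, 5.0)]
```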
|
Xie, Jinyu |
STOC '17: "Beyond Talagrand Functions: ..."
Beyond Talagrand Functions: New Lower Bounds for Testing Monotonicity and Unateness
Xi Chen, Erik Waingarten, and Jinyu Xie (Columbia University, USA) We prove a lower bound of Ω(n^{1/3}) for the query complexity of any two-sided and adaptive algorithm that tests whether an unknown Boolean function f: {0,1}^n → {0,1} is monotone versus far from monotone. This improves the recent lower bound of Ω(n^{1/4}) for the same problem by Belovs and Blais (STOC’16). Our result builds on a new family of random Boolean functions that can be viewed as a two-level extension of Talagrand’s random DNFs. Beyond monotonicity, we prove a lower bound of Ω(√n) for two-sided, adaptive algorithms and a lower bound of Ω(n) for one-sided, non-adaptive algorithms for testing unateness, a natural generalization of monotonicity. The latter matches the linear upper bounds by Khot and Shinkar (RANDOM’16) and by Baleshzar, Chakrabarty, Pallavoor, Raskhodnikova, and Seshadhri (2017). @InProceedings{STOC17p523, author = {Xi Chen and Erik Waingarten and Jinyu Xie}, title = {Beyond Talagrand Functions: New Lower Bounds for Testing Monotonicity and Unateness}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {523--536}, doi = {}, year = {2017}, } |
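For small n the two properties being tested can be checked exhaustively, which makes the definitions concrete (the testers in the paper, of course, only query f a sublinear number of times):

```python
from itertools import product

def is_monotone(f, n):
    """f is monotone if flipping any coordinate 0 -> 1 never decreases f."""
    for x in product((0, 1), repeat=n):
        for i in range(n):
            if x[i] == 0:
                y = x[:i] + (1,) + x[i+1:]
                if f(x) > f(y):
                    return False
    return True

def is_unate(f, n):
    """f is unate if each coordinate is consistently non-decreasing or
    non-increasing, i.e., f is monotone after flipping some inputs."""
    for i in range(n):
        up = down = False
        for x in product((0, 1), repeat=n):
            if x[i] == 0:
                y = x[:i] + (1,) + x[i+1:]
                if f(x) < f(y): up = True
                if f(x) > f(y): down = True
        if up and down:
            return False
    return True

print(is_monotone(lambda x: x[0] & x[1], 2))  # True
print(is_unate(lambda x: 1 - x[0], 1))        # True: anti-monotone in x0
print(is_monotone(lambda x: x[0] ^ x[1], 2))  # False, and not unate either
```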
|
Yang, Lin F. |
STOC '17: "Streaming Symmetric Norms ..."
Streaming Symmetric Norms via Measure Concentration
Jarosław Błasiok, Vladimir Braverman, Stephen R. Chestnut, Robert Krauthgamer, and Lin F. Yang (Harvard University, USA; Johns Hopkins University, USA; ETH Zurich, Switzerland; Weizmann Institute of Science, Israel) We characterize the streaming space complexity of every symmetric norm l (a norm on ℝ^n invariant under sign-flips and coordinate-permutations), by relating this space complexity to the measure-concentration characteristics of l. Specifically, we provide nearly matching upper and lower bounds on the space complexity of calculating a (1±є)-approximation to the norm of the stream, for every 0 < є ≤ 1/2. (The bounds match up to (є^{−1} log n) factors.) We further extend those bounds to any large approximation ratio D ≥ 1.1, showing that the decrease in space complexity is proportional to D^2, and that this factor is the best possible. All of the bounds depend on the median of l(x) when x is drawn uniformly from the l_2 unit sphere. The same median governs many phenomena in high-dimensional spaces, such as large-deviation bounds and the critical dimension in Dvoretzky’s Theorem. The family of symmetric norms contains several well-studied norms, such as all l_p norms, and indeed we provide a new explanation for the disparity in space complexity between p ≤ 2 and p > 2. In addition, we apply our general results to easily derive bounds for several norms that were not studied before in the streaming model, including the top-k norm and the k-support norm, which was recently employed for machine learning tasks. Overall, these results make progress on two outstanding problems in the area of sublinear algorithms (Problems 5 and 30 in http://sublinear.info). @InProceedings{STOC17p716, author = {Jarosław Błasiok and Vladimir Braverman and Stephen R. Chestnut and Robert Krauthgamer and Lin F. Yang}, title = {Streaming Symmetric Norms via Measure Concentration}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {716--729}, doi = {}, year = {2017}, } |
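The governing quantity in these bounds, the median of l(x) on the unit sphere, is easy to estimate empirically; a Monte Carlo sketch (the sample sizes and example norms, including the top-k norm mentioned above, are illustrative choices):

```python
import numpy as np

def sphere_median(norm, n, trials=2000, seed=0):
    """Monte Carlo estimate of the median of norm(x) for x uniform on the
    l_2 unit sphere in R^n (normalized Gaussians are uniform there)."""
    rng = np.random.default_rng(seed)
    g = rng.standard_normal((trials, n))
    x = g / np.linalg.norm(g, axis=1, keepdims=True)
    return np.median([norm(v) for v in x])

n = 1000
print(sphere_median(lambda v: np.abs(v).sum(), n))  # l_1: about sqrt(2n/pi)
print(sphere_median(lambda v: np.abs(v).max(), n))  # l_inf: about sqrt(2 ln n / n)
print(sphere_median(lambda v: np.sort(np.abs(v))[-10:].sum(), n))  # top-10 norm
```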
|
Yao, Penghui |
STOC '17: "Exponential Separation of ..."
Exponential Separation of Quantum Communication and Classical Information
Anurag Anshu, Dave Touchette, Penghui Yao, and Nengkun Yu (National University of Singapore, Singapore; University of Waterloo, Canada; Perimeter Institute for Theoretical Physics, Canada; University of Maryland, USA; University of Technology Sydney, Australia) We exhibit a Boolean function for which the quantum communication complexity is exponentially larger than the classical information complexity. An exponential separation in the other direction was already known from the work of Kerenidis et al. [SICOMP 44, pp. 1550–1572], hence our work implies that these two complexity measures are incomparable. As classical information complexity is an upper bound on quantum information complexity, which in turn is equal to amortized quantum communication complexity, our work implies that a tight direct sum result for distributional quantum communication complexity cannot hold. The function we use to present such a separation is the Symmetric k-ary Pointer Jumping function introduced by Rao and Sinha [ECCC TR15-057], whose classical communication complexity is exponentially larger than its classical information complexity. In this paper, we show that the quantum communication complexity of this function is polynomially equivalent to its classical communication complexity. The high-level idea behind our proof is arguably the simplest so far for such an exponential separation between information and communication, driven by a sequence of round-elimination arguments, allowing us to simplify further the approach of Rao and Sinha. As another application of the techniques that we develop, a simple proof for an optimal trade-off between Alice’s and Bob’s communication is given, even when allowing pre-shared entanglement, while computing the related Greater-Than function on n bits: if Bob communicates at most b bits, then Alice must send n/2^{O(b)} bits to Bob. We also present a classical protocol achieving this bound. @InProceedings{STOC17p277, author = {Anurag Anshu and Dave Touchette and Penghui Yao and Nengkun Yu}, title = {Exponential Separation of Quantum Communication and Classical Information}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {277--288}, doi = {}, year = {2017}, } |
|
Yazdanbod, Sadra |
STOC '17: "Settling the Complexity of ..."
Settling the Complexity of Leontief and PLC Exchange Markets under Exact and Approximate Equilibria
Jugal Garg, Ruta Mehta, Vijay V. Vazirani, and Sadra Yazdanbod (University of Illinois at Urbana-Champaign, USA; Georgia Institute of Technology, USA) Our first result shows membership in PPAD for the problem of computing approximate equilibria for an Arrow-Debreu exchange market for piecewise-linear concave (PLC) utility functions. As a corollary we also obtain membership in PPAD for Leontief utility functions. This settles an open question of Vazirani and Yannakakis (2011). Next we show FIXP-hardness of computing equilibria in Arrow-Debreu exchange markets under Leontief utility functions, and Arrow-Debreu markets under linear utility functions and Leontief production sets, thereby settling these open questions of Vazirani and Yannakakis (2011). As corollaries, we obtain FIXP-hardness for PLC utilities and for Arrow-Debreu markets under linear utility functions and polyhedral production sets. In all cases, as required under FIXP, the set of instances mapped onto will admit equilibria, i.e., will be "yes" instances. If all instances are under consideration, then in all cases we prove that the problem of deciding if a given instance admits an equilibrium is ETR-complete, where ETR is the class Existential Theory of Reals. As a consequence of the results stated above, and the fact that membership in FIXP has been established for PLC utilities, the entire computational difficulty of Arrow-Debreu markets under PLC utility functions lies in the Leontief utility subcase. This is perhaps the most unexpected aspect of our result, since Leontief utilities are meant for the case that goods are perfect complements, whereas PLC utilities are very general, capturing not only the cases when goods are complements and substitutes, but also arbitrary combinations of these and much more. Finally, we give a polynomial time algorithm for finding an equilibrium in Arrow-Debreu exchange markets under Leontief utility functions provided the number of agents is a constant. This settles part of an open problem of Devanur and Kannan (2008). @InProceedings{STOC17p890, author = {Jugal Garg and Ruta Mehta and Vijay V. Vazirani and Sadra Yazdanbod}, title = {Settling the Complexity of Leontief and PLC Exchange Markets under Exact and Approximate Equilibria}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {890--901}, doi = {}, year = {2017}, } |
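Since the dichotomy above hinges on Leontief utilities, a one-line reminder of what they compute may help: goods are perfect complements, so utility is determined by the scarcest good relative to the required proportions (the proportions below are hypothetical example values).

```python
def leontief_utility(x, a):
    """Leontief utility u(x) = min_j x_j / a_j over goods j with a_j > 0:
    the bundle x only generates utility in the fixed proportions a."""
    return min(xj / aj for xj, aj in zip(x, a) if aj > 0)

# One unit of utility requires 2 units of good 0 and 1 unit of good 1.
print(leontief_utility([4, 1], [2, 1]))  # 1.0: surplus of good 0 is wasted
print(leontief_utility([4, 2], [2, 1]))  # 2.0
```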
|
Young, Robert |
STOC '17: "The Integrality Gap of the ..."
The Integrality Gap of the Goemans-Linial SDP Relaxation for Sparsest Cut Is at Least a Constant Multiple of √log n
Assaf Naor and Robert Young (Princeton University, USA; New York University, USA) We prove that the integrality gap of the Goemans–Linial semidefinite programming relaxation for the Sparsest Cut Problem is Ω(√log n) on inputs with n vertices, thus matching the previously best known upper bound (log n)^{1/2+o(1)} up to lower-order factors. This statement is a consequence of the following new isoperimetric-type inequality. Consider the 8-regular graph whose vertex set is the 5-dimensional integer grid ℤ^5 and where each vertex (a,b,c,d,e) ∈ ℤ^5 is connected to the 8 vertices (a±1,b,c,d,e), (a,b±1,c,d,e), (a,b,c±1,d,e±a), (a,b,c,d±1,e±b). This graph is known as the Cayley graph of the 5-dimensional discrete Heisenberg group. Given Ω ⊂ ℤ^5, denote the size of its edge boundary in this graph (a.k.a. the horizontal perimeter of Ω) by |∂_h Ω|. For t ∈ ℕ, denote by |∂_v^t Ω| the number of (a,b,c,d,e) ∈ ℤ^5 such that exactly one of the two vectors (a,b,c,d,e), (a,b,c,d,e+t) is in Ω. The vertical perimeter of Ω is defined to be |∂_v Ω| = √(∑_{t=1}^∞ |∂_v^t Ω|^2/t^2). We show that every subset Ω ⊂ ℤ^5 satisfies |∂_v Ω| = O(|∂_h Ω|). This vertical-versus-horizontal isoperimetric inequality yields the above-stated integrality gap for Sparsest Cut and answers several geometric and analytic questions of independent interest. The theorem stated above is the culmination of a program whose aim is to understand the performance of the Goemans–Linial semidefinite program through the embeddability properties of Heisenberg groups. These investigations have mathematical significance even beyond their established relevance to approximation algorithms and combinatorial optimization. In particular they contribute to a range of mathematical disciplines including functional analysis, geometric group theory, harmonic analysis, sub-Riemannian geometry, geometric measure theory, ergodic theory, group representations, and metric differentiation. This article builds on the above cited works, with the “twist” that while those works were equally valid for any finite dimensional Heisenberg group, our result holds for the Heisenberg group of dimension 5 (or higher) but fails for the 3-dimensional Heisenberg group. This insight leads to our core contribution, which is a deduction of an endpoint L_1-boundedness of a certain singular integral on ℝ^5 from the (local) L_2-boundedness of the corresponding singular integral on ℝ^3. To do this, we devise a corona-type decomposition of subsets of a Heisenberg group, in the spirit of the construction that David and Semmes performed in ℝ^n, but with two main conceptual differences (in addition to more technical differences that arise from the peculiarities of the geometry of the Heisenberg group). Firstly, the “atoms” of our decomposition are perturbations of intrinsic Lipschitz graphs in the sense of Franchi, Serapioni, and Serra Cassano (plus the requisite “wild” regions that satisfy a Carleson packing condition). Secondly, we control the local overlap of our corona decomposition by using quantitative monotonicity rather than Jones-type β-numbers. @InProceedings{STOC17p564, author = {Assaf Naor and Robert Young}, title = {The Integrality Gap of the Goemans-Linial SDP Relaxation for Sparsest Cut Is at Least a Constant Multiple of √log n}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {564--575}, doi = {}, year = {2017}, } |
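The graph and the two perimeters are defined concretely enough to experiment with; the following sketch transcribes the definitions directly (the truncation parameter t_max in the vertical perimeter is an artifact of the sketch: for finite Ω the neglected tail of the sum is at most 4|Ω|²/t_max).

```python
import math

def heisenberg_neighbors(v):
    """The 8 neighbors of (a, b, c, d, e) in the Cayley graph of the
    5-dimensional discrete Heisenberg group, as in the abstract."""
    a, b, c, d, e = v
    return [(a + 1, b, c, d, e), (a - 1, b, c, d, e),
            (a, b + 1, c, d, e), (a, b - 1, c, d, e),
            (a, b, c + 1, d, e + a), (a, b, c - 1, d, e - a),
            (a, b, c, d + 1, e + b), (a, b, c, d - 1, e - b)]

def horizontal_perimeter(omega):
    """|d_h Omega|: edges of the graph with exactly one endpoint in Omega."""
    omega = set(omega)
    return sum(1 for v in omega
                 for w in heisenberg_neighbors(v) if w not in omega)

def vertical_perimeter(omega, t_max=100):
    """|d_v Omega| = sqrt(sum_{t>=1} |d_v^t Omega|^2 / t^2), truncated."""
    omega = set(omega)
    total = 0.0
    for t in range(1, t_max + 1):
        dv_t = sum((a, b, c, d, e + t) not in omega for (a, b, c, d, e) in omega) \
             + sum((a, b, c, d, e - t) not in omega for (a, b, c, d, e) in omega)
        total += dv_t ** 2 / t ** 2
    return math.sqrt(total)

box = [(a, b, c, d, e) for a in range(3) for b in range(3) for c in range(3)
       for d in range(3) for e in range(3)]
print(horizontal_perimeter(box), vertical_perimeter(box))
```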
|
Yu, Huacheng |
STOC '17: "DecreaseKeys Are Expensive ..."
DecreaseKeys Are Expensive for External Memory Priority Queues
Kasper Eenberg, Kasper Green Larsen, and Huacheng Yu (Aarhus University, Denmark; Stanford University, USA) One of the biggest open problems in external memory data structures is the priority queue problem with DecreaseKey operations. If only Insert and ExtractMin operations need to be supported, one can design a comparison-based priority queue performing O((N/B) lg_{M/B} N) I/Os over a sequence of N operations, where B is the disk block size in number of words and M is the main memory size in number of words. This matches the lower bound for comparison-based sorting and is hence optimal for comparison-based priority queues. However, if we also need to support DecreaseKeys, the performance of the best known priority queue is only O((N/B) lg_2 N) I/Os. The big open question is whether a degradation in performance really is necessary. We answer this question affirmatively by proving a lower bound of Ω((N/B) lg lg_N B) I/Os for processing a sequence of N intermixed Insert, ExtractMin and DecreaseKey operations. Our lower bound is proved in the cell probe model and thus holds also for non-comparison-based priority queues. @InProceedings{STOC17p1081, author = {Kasper Eenberg and Kasper Green Larsen and Huacheng Yu}, title = {DecreaseKeys Are Expensive for External Memory Priority Queues}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1081--1093}, doi = {}, year = {2017}, } |
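To fix the semantics of the three operations in question, a small in-memory sketch where DecreaseKey is implemented by lazy invalidation; the paper's subject is the I/O cost of supporting exactly this interface in external memory, not this toy implementation.

```python
import heapq

class LazyPQ:
    """Insert / DecreaseKey / ExtractMin via lazy invalidation: a
    DecreaseKey pushes a fresh copy; stale copies are skipped later."""
    def __init__(self):
        self.heap, self.key = [], {}

    def insert(self, item, k):
        self.key[item] = k
        heapq.heappush(self.heap, (k, item))

    def decrease_key(self, item, k):
        assert k <= self.key[item]
        self.key[item] = k
        heapq.heappush(self.heap, (k, item))  # old copy becomes stale

    def extract_min(self):
        while True:
            k, item = heapq.heappop(self.heap)
            if self.key.get(item) == k:       # skip stale copies
                del self.key[item]
                return item

pq = LazyPQ()
pq.insert("a", 7); pq.insert("b", 4); pq.decrease_key("a", 2)
print(pq.extract_min(), pq.extract_min())     # a b
```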
|
Yu, Nengkun |
STOC '17: "Exponential Separation of ..."
Exponential Separation of Quantum Communication and Classical Information
Anurag Anshu, Dave Touchette, Penghui Yao, and Nengkun Yu (National University of Singapore, Singapore; University of Waterloo, Canada; Perimeter Institute for Theoretical Physics, Canada; University of Maryland, USA; University of Technology Sydney, Australia) We exhibit a Boolean function for which the quantum communication complexity is exponentially larger than the classical information complexity. An exponential separation in the other direction was already known from the work of Kerenidis et al. [SICOMP 44, pp. 1550–1572], hence our work implies that these two complexity measures are incomparable. As classical information complexity is an upper bound on quantum information complexity, which in turn is equal to amortized quantum communication complexity, our work implies that a tight direct sum result for distributional quantum communication complexity cannot hold. The function we use to present such a separation is the Symmetric k-ary Pointer Jumping function introduced by Rao and Sinha [ECCC TR15-057], whose classical communication complexity is exponentially larger than its classical information complexity. In this paper, we show that the quantum communication complexity of this function is polynomially equivalent to its classical communication complexity. The high-level idea behind our proof is arguably the simplest so far for such an exponential separation between information and communication, driven by a sequence of round-elimination arguments, allowing us to simplify further the approach of Rao and Sinha. As another application of the techniques that we develop, a simple proof for an optimal trade-off between Alice’s and Bob’s communication is given, even when allowing pre-shared entanglement, while computing the related Greater-Than function on n bits: if Bob communicates at most b bits, then Alice must send n/2^{O(b)} bits to Bob. We also present a classical protocol achieving this bound. @InProceedings{STOC17p277, author = {Anurag Anshu and Dave Touchette and Penghui Yao and Nengkun Yu}, title = {Exponential Separation of Quantum Communication and Classical Information}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {277--288}, doi = {}, year = {2017}, } |
|
Yuen, Henry |
STOC '17: "Hardness Amplification for ..."
Hardness Amplification for Entangled Games via Anchoring
Mohammad Bavarian, Thomas Vidick, and Henry Yuen (Massachusetts Institute of Technology, USA; California Institute of Technology, USA; University of California at Berkeley, USA) We study the parallel repetition of one-round games involving players that can use quantum entanglement. A major open question in this area is whether parallel repetition reduces the entangled value of a game at an exponential rate — in other words, does an analogue of Raz’s parallel repetition theorem hold for games with players sharing quantum entanglement? Previous results only apply to special classes of games. We introduce a class of games we call anchored. We then introduce a simple transformation on games called anchoring, inspired in part by the Feige-Kilian transformation, that turns any (multiplayer) game into an anchored game. Unlike the Feige-Kilian transformation, our anchoring transformation is completeness preserving. We prove an exponential-decay parallel repetition theorem for anchored games that involve any number of entangled players. We also prove a threshold version of our parallel repetition theorem for anchored games. Together, our parallel repetition theorems and anchoring transformation provide the first hardness amplification techniques for general entangled games. We give an application to the games version of the Quantum PCP Conjecture. @InProceedings{STOC17p303, author = {Mohammad Bavarian and Thomas Vidick and Henry Yuen}, title = {Hardness Amplification for Entangled Games via Anchoring}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {303--316}, doi = {}, year = {2017}, } |
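The anchoring idea can be sketched in a few lines; this is an illustrative reading only (the paper's exact construction and parameters may differ): each player's question is independently replaced by a fresh anchor symbol, and the verifier accepts by default whenever an anchor occurs, so a perfect strategy for the original game stays perfect.

```python
import random

ANCHOR = "⊥"

def anchored_round(sample_questions, verify, alice, bob, p=0.5):
    """One round of an anchored two-player game (sketch): questions are
    independently anchored w.p. p, and an anchored round always accepts,
    which is what makes the transformation completeness preserving."""
    x, y = sample_questions()
    if random.random() < p: x = ANCHOR   # anchor player 1's question
    if random.random() < p: y = ANCHOR   # anchor player 2's question
    a, b = alice(x), bob(y)
    return True if ANCHOR in (x, y) else verify(x, y, a, b)

# Hypothetical example: the CHSH game with the trivial "always 0" strategy,
# which wins the base game with probability 3/4.
chsh_q = lambda: (random.randint(0, 1), random.randint(0, 1))
chsh_v = lambda x, y, a, b: (a ^ b) == (x & y)
zero = lambda q: 0
rounds = 10**5
wins = sum(anchored_round(chsh_q, chsh_v, zero, zero) for _ in range(rounds))
print(wins / rounds)   # about 3/4 + (1/4)*(3/4) = 0.9375 for p = 1/2
```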
|
Zandieh, Amir |
STOC '17: "An Adaptive Sublinear-Time ..."
An Adaptive Sublinear-Time Block Sparse Fourier Transform
Volkan Cevher, Michael Kapralov, Jonathan Scarlett, and Amir Zandieh (EPFL, Switzerland) The problem of approximately computing the k dominant Fourier coefficients of a vector X quickly, and using few samples in time domain, is known as the Sparse Fourier Transform (sparse FFT) problem. A long line of work on the sparse FFT has resulted in algorithms with O(k log n log(n/k)) runtime [Hassanieh et al., STOC’12] and O(k log n) sample complexity [Indyk et al., FOCS’14]. This paper revisits the sparse FFT problem with the added twist that the sparse coefficients approximately obey a (k_0, k_1)-block sparse model. In this model, signal frequencies are clustered in k_0 intervals with width k_1 in Fourier space, and k = k_0 k_1 is the total sparsity. Our main result is the first sparse FFT algorithm for (k_0, k_1)-block sparse signals with a sample complexity of O*(k_0 k_1 + k_0 log(1 + k_0) log n) at constant signal-to-noise ratios, and sublinear runtime. Our algorithm crucially uses adaptivity to achieve the improved sample complexity bound, and we provide a lower bound showing that this is essential in the Fourier setting: Any non-adaptive algorithm must use Ω(k_0 k_1 log(n/(k_0 k_1))) samples for the (k_0, k_1)-block sparse model, ruling out improvements over the vanilla sparsity assumption. Our main technical innovation for adaptivity is a new randomized energy-based importance sampling technique that may be of independent interest. @InProceedings{STOC17p702, author = {Volkan Cevher and Michael Kapralov and Jonathan Scarlett and Amir Zandieh}, title = {An Adaptive Sublinear-Time Block Sparse Fourier Transform}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {702--715}, doi = {}, year = {2017}, } |
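A small generator for the signal model under study makes the (k_0, k_1) parameters tangible; for simplicity this sketch places the k_0 blocks on disjoint aligned width-k_1 intervals (an assumption of the sketch, not of the model).

```python
import numpy as np

def block_sparse_signal(n, k0, k1, seed=0):
    """Time-domain signal whose Fourier transform is (k0, k1)-block sparse:
    the support is k0 intervals of width k1, total sparsity k = k0 * k1."""
    rng = np.random.default_rng(seed)
    X = np.zeros(n, dtype=complex)
    starts = rng.choice(n // k1, size=k0, replace=False) * k1
    for s in starts:
        X[s:s + k1] = rng.standard_normal(k1) + 1j * rng.standard_normal(k1)
    return np.fft.ifft(X), X   # time-domain samples, Fourier coefficients

x, X = block_sparse_signal(n=1024, k0=4, k1=16)
print(np.count_nonzero(X))     # 64 = k0 * k1
```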
|
Zdeborova, Lenka |
STOC '17: "Information-Theoretic Thresholds ..."
Information-Theoretic Thresholds from the Cavity Method
Amin Coja-Oghlan, Florent Krzakala, Will Perkins, and Lenka Zdeborova (Goethe University Frankfurt, Germany; CNRS, France; PSL Research University, France; ENS, France; UPMC, France; University of Birmingham, UK; CEA, France; University of Paris-Saclay, France) Vindicating a sophisticated but non-rigorous physics approach called the cavity method, we establish a formula for the mutual information in statistical inference problems induced by random graphs. This general result implies the conjecture on the information-theoretic threshold in the disassortative stochastic block model [Decelle et al.: Phys. Rev. E (2011)] and allows us to pinpoint the exact condensation phase transition in random constraint satisfaction problems such as random graph coloring, thereby proving a conjecture from [Krzakala et al.: PNAS (2007)]. As a further application we establish the formula for the mutual information in Low-Density Generator Matrix codes as conjectured in [Montanari: IEEE Transactions on Information Theory (2005)]. The proofs provide a conceptual underpinning of the replica symmetric variant of the cavity method, and we expect that the approach will find many future applications. @InProceedings{STOC17p146, author = {Amin Coja-Oghlan and Florent Krzakala and Will Perkins and Lenka Zdeborova}, title = {Information-Theoretic Thresholds from the Cavity Method}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {146--157}, doi = {}, year = {2017}, } |
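The disassortative stochastic block model whose information-theoretic threshold is settled here is simple to sample from; a sketch with hypothetical parameter values (the inference task is to recover the hidden groups sigma from the edges alone):

```python
import random

def sbm(n, k, d_in, d_out, seed=0):
    """Sparse stochastic block model: n vertices in k uniformly random
    groups; within-group pairs are edges w.p. d_in/n, across-group pairs
    w.p. d_out/n. The disassortative regime has d_out > d_in."""
    rnd = random.Random(seed)
    sigma = [rnd.randrange(k) for _ in range(n)]
    edges = [(u, v) for u in range(n) for v in range(u + 1, n)
             if rnd.random() < (d_in if sigma[u] == sigma[v] else d_out) / n]
    return sigma, edges

sigma, edges = sbm(n=500, k=2, d_in=1.0, d_out=4.0)
print(len(edges))   # expectation ~ (n/2) * (d_in/k + d_out*(k-1)/k) = 625
```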
|
Zenklusen, Rico |
STOC '17: "A Strongly Polynomial Algorithm ..."
A Strongly Polynomial Algorithm for Bimodular Integer Linear Programming
Stephan Artmann, Robert Weismantel, and Rico Zenklusen (ETH Zurich, Switzerland) We present a strongly polynomial algorithm to solve integer programs of the form max{c^T x : Ax ≤ b, x ∈ ℤ^n}, for A ∈ ℤ^{m×n} with rank(A) = n, b ∈ ℤ^m, c ∈ ℤ^n, and where all determinants of (n×n)-submatrices of A are bounded by 2 in absolute value. |
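Taking the bimodularity condition at face value (the constraint-matrix condition above is completed from the term "bimodular"; the source entry is truncated), a brute-force check of the condition for tiny instances:

```python
import numpy as np
from itertools import combinations

def is_bimodular(A):
    """Check that every n x n submatrix of A has determinant in
    {-2, ..., 2} -- feasible only for very small matrices."""
    m, n = A.shape
    return all(abs(round(np.linalg.det(A[list(rows), :]))) <= 2
               for rows in combinations(range(m), n))

A = np.array([[1, 1], [1, -1], [0, 1]])
print(is_bimodular(A))   # True: the 2x2 subdeterminants are -2, 1, 1
```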