STOC 2017 – Author Index 
Aaronson, Scott 
STOC '17: "The Computational Complexity ..."
The Computational Complexity of Ball Permutations
Scott Aaronson, Adam Bouland, Greg Kuperberg, and Saeed Mehraban (University of Texas at Austin, USA; Massachusetts Institute of Technology, USA; University of California at Davis, USA) We define several models of computation based on permuting distinguishable particles (which we call balls) and characterize their computational complexity. In the quantum setting, we use the representation theory of the symmetric group to find variants of this model which are intermediate between BPP and DQC1 (the class of problems solvable with one clean qubit) and between DQC1 and BQP. Furthermore, we consider a restricted version of this model based on an exactly solvable scattering problem of particles moving on a line. Despite the simplicity of this model from the perspective of mathematical physics, we show that if we allow intermediate destructive measurements and specific input states, then the model cannot be efficiently simulated classically up to multiplicative error unless the polynomial hierarchy collapses. Finally, we define a classical version of this model in which one can probabilistically permute balls. We find this yields a complexity class which is intermediate between L and BPP, and that a nondeterministic version of this model is NP-complete. @InProceedings{STOC17p317, author = {Scott Aaronson and Adam Bouland and Greg Kuperberg and Saeed Mehraban}, title = {The Computational Complexity of Ball Permutations}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {317--327}, doi = {}, year = {2017}, }

Abolhassani, Melika 
STOC '17: "Beating 1−1/e for Ordered ..."
Beating 1−1/e for Ordered Prophets
Melika Abolhassani, Soheil Ehsani, Hossein Esfandiari, MohammadTaghi HajiAghayi, Robert Kleinberg, and Brendan Lucier (University of Maryland at College Park, USA; Cornell University, USA; Microsoft Research, USA) Hill and Kertz studied the prophet inequality on iid distributions [The Annals of Probability 1982]. They proved a theoretical bound of 1 − 1/e on the approximation factor of their algorithm. They conjectured that the best approximation factor for arbitrarily large n is 1/(1+1/e) ≈ 0.731. This conjecture had remained open for over 30 years prior to this paper. In this paper we present a threshold-based algorithm for the prophet inequality with n iid distributions. Using a nontrivial and novel approach we show that our algorithm is a 0.738-approximation algorithm. By beating the bound of 1/(1+1/e), this refutes the conjecture of Hill and Kertz. Moreover, we generalize our results to non-uniform distributions and discuss applications in mechanism design. @InProceedings{STOC17p61, author = {Melika Abolhassani and Soheil Ehsani and Hossein Esfandiari and MohammadTaghi HajiAghayi and Robert Kleinberg and Brendan Lucier}, title = {Beating 1−1/e for Ordered Prophets}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {61--71}, doi = {}, year = {2017}, }
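The algorithm in the abstract is threshold-based. As an illustration only, the sketch below simulates the classical single-threshold stopping rule (the 1 − 1/e-style rule, not the paper's 0.738-approximation) on uniform iid draws; the function name and parameters are assumptions, not the paper's.

```python
import math
import random

def simulate(n=20, trials=50_000, seed=0):
    """Single-threshold stopping rule for n iid U(0,1) draws: accept the
    first value clearing tau, where tau is chosen so that the chance that
    no draw clears it is exactly 1/e. Returns (rule value) / (prophet value)."""
    rng = random.Random(seed)
    tau = (1.0 / math.e) ** (1.0 / n)  # P(all n draws below tau) = 1/e
    alg = prophet = 0.0
    for _ in range(trials):
        draws = [rng.random() for _ in range(n)]
        prophet += max(draws)                                   # prophet sees all draws
        alg += next((x for x in draws if x >= tau), draws[-1])  # online stopping rule
    return alg / prophet
```

On uniform values this simple rule already collects well above a 1 − 1/e ≈ 0.632 fraction of the prophet's value; the paper's contribution is a refined threshold scheme that certifies 0.738 for every iid instance.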

Agarwal, Naman 
STOC '17: "Finding Approximate Local ..."
Finding Approximate Local Minima Faster than Gradient Descent
Naman Agarwal, Zeyuan Allen-Zhu, Brian Bullins, Elad Hazan, and Tengyu Ma (Princeton University, USA; IAS, USA) We design a non-convex second-order optimization algorithm that is guaranteed to return an approximate local minimum in time which scales linearly in the underlying dimension and the number of training examples. The time complexity of our algorithm to find an approximate local minimum is even faster than that of gradient descent to find a critical point. Our algorithm applies to a general class of optimization problems including training a neural network and other non-convex objectives arising in machine learning. @InProceedings{STOC17p1195, author = {Naman Agarwal and Zeyuan Allen-Zhu and Brian Bullins and Elad Hazan and Tengyu Ma}, title = {Finding Approximate Local Minima Faster than Gradient Descent}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1195--1199}, doi = {}, year = {2017}, }
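For contrast with the abstract's second-order method, here is the baseline it beats: plain gradient descent run until the gradient is small, which only certifies an approximate critical point. The toy function and step size below are illustrative choices, not from the paper.

```python
def grad_descent(grad, x0, lr=0.05, eps=1e-6, max_iter=10_000):
    """Descend until the gradient norm drops below eps (an approximate
    critical point, which for non-convex f need not be a local minimum)."""
    x = list(x0)
    for _ in range(max_iter):
        g = grad(x)
        if sum(gi * gi for gi in g) ** 0.5 < eps:
            break
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x

# f(x, y) = (x^2 - 1)^2 + y^2: non-convex, a critical point at x = 0
# and local minima at (+1, 0) and (-1, 0)
grad = lambda v: [4 * v[0] * (v[0] ** 2 - 1), 2 * v[1]]
x_min = grad_descent(grad, [0.5, 0.8])
```

From a generic starting point this converges to one of the two local minima; the paper's algorithm reaches an approximate local minimum provably faster than such first-order descent reaches a mere critical point.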

Allen-Zhu, Zeyuan 
STOC '17: "Katyusha: The First Direct ..."
Katyusha: The First Direct Acceleration of Stochastic Gradient Methods
Zeyuan Allen-Zhu (IAS, USA; Princeton University, USA) Nesterov’s momentum trick is famously known for accelerating gradient descent, and has been proven useful in building fast iterative algorithms. However, in the stochastic setting, counterexamples exist and prevent Nesterov’s momentum from providing similar acceleration, even if the underlying problem is convex. We introduce Katyusha, a direct, primal-only stochastic gradient method to fix this issue. It has a provably accelerated convergence rate in convex (offline) stochastic optimization. The main ingredient is Katyusha momentum, a novel “negative momentum” on top of Nesterov’s momentum that can be incorporated into a variance-reduction-based algorithm and speed it up. Since variance reduction has been successfully applied to a growing list of practical problems, our paper suggests that in each of such cases, one could potentially give Katyusha a hug. @InProceedings{STOC17p1200, author = {Zeyuan Allen-Zhu}, title = {Katyusha: The First Direct Acceleration of Stochastic Gradient Methods}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1200--1205}, doi = {}, year = {2017}, }

STOC '17: "Finding Approximate Local ..."
Finding Approximate Local Minima Faster than Gradient Descent
Naman Agarwal, Zeyuan Allen-Zhu, Brian Bullins, Elad Hazan, and Tengyu Ma (Princeton University, USA; IAS, USA) We design a non-convex second-order optimization algorithm that is guaranteed to return an approximate local minimum in time which scales linearly in the underlying dimension and the number of training examples. The time complexity of our algorithm to find an approximate local minimum is even faster than that of gradient descent to find a critical point. Our algorithm applies to a general class of optimization problems including training a neural network and other non-convex objectives arising in machine learning. @InProceedings{STOC17p1195, author = {Naman Agarwal and Zeyuan Allen-Zhu and Brian Bullins and Elad Hazan and Tengyu Ma}, title = {Finding Approximate Local Minima Faster than Gradient Descent}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1195--1199}, doi = {}, year = {2017}, }
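Katyusha layers its negative momentum on top of a variance-reduced base method. The sketch below shows only that base ingredient, an SVRG-style inner loop on a toy one-dimensional least-squares objective; it is not the paper's Katyusha update, and the function name and parameters are illustrative assumptions.

```python
import random

def svrg(grad_i, n, x0, lr=0.1, epochs=30, m=None, seed=0):
    """SVRG: each epoch takes a full-gradient snapshot, then corrects
    stochastic gradients with it so their variance shrinks near the optimum."""
    rng = random.Random(seed)
    m = m or 2 * n
    x = x0
    for _ in range(epochs):
        snap = x
        full = sum(grad_i(i, snap) for i in range(n)) / n
        for _ in range(m):
            i = rng.randrange(n)
            g = grad_i(i, x) - grad_i(i, snap) + full  # variance-reduced gradient
            x = x - lr * g
    return x

# 1-d least squares: f(x) = (1/2n) sum_i (x - a_i)^2; minimizer is mean(a)
a = [1.0, 2.0, 3.0, 4.0]
grad_i = lambda i, x: x - a[i]
x_opt = svrg(grad_i, len(a), x0=0.0)
```

Katyusha's contribution is a momentum term pulling the iterate back toward the snapshot, which makes such variance-reduced methods provably accelerated.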

Alman, Josh 
STOC '17: "Probabilistic Rank and Matrix ..."
Probabilistic Rank and Matrix Rigidity
Josh Alman and Ryan Williams (Massachusetts Institute of Technology, USA) We consider a notion of probabilistic rank and probabilistic sign-rank of a matrix, which measure the extent to which a matrix can be probabilistically represented by low-rank matrices. We demonstrate several connections with matrix rigidity, communication complexity, and circuit lower bounds. The most interesting outcomes are: The Walsh-Hadamard Transform is Not Very Rigid. We give surprising upper bounds on the rigidity of a family of matrices whose rigidity has been extensively studied, and was conjectured to be highly rigid. For the 2^{n} × 2^{n} Walsh-Hadamard transform H_{n} (a.k.a. Sylvester matrices, a.k.a. the communication matrix of Inner Product modulo 2), we show how to modify only 2^{εn} entries in each row and make the rank of H_{n} drop below 2^{n(1−Ω(ε^{2}/log(1/ε)))}, for all small ε > 0, over any field. That is, it is not possible to prove arithmetic circuit lower bounds on Hadamard matrices such as H_{n} via L. Valiant’s matrix rigidity approach. We also show non-trivial rigidity upper bounds for H_{n} with smaller target rank. Matrix Rigidity and Threshold Circuit Lower Bounds. We give new consequences of rigid matrices for Boolean circuit complexity. First, we show that explicit n × n Boolean matrices which maintain rank at least 2^{(log n)^{1−δ}} after n^{2}/2^{(log n)^{δ/2}} modified entries (over any field, for any δ > 0) would yield an explicit function that does not have subquadratic-size AC^{0} circuits with two layers of arbitrary linear threshold gates. Second, we prove that explicit 0/1 matrices over the reals which are modestly more rigid than the best known rigidity lower bounds for sign-rank would imply exponential-gate lower bounds for the infamously difficult class of depth-two linear threshold circuits with arbitrary weights on both layers. 
In particular, we show that matrices defined by these seemingly difficult circuit classes actually have low probabilistic rank and sign-rank, respectively. An Equivalence Between Communication, Probabilistic Rank, and Rigidity. It has been known since Razborov [1989] that explicit rigidity lower bounds would resolve long-standing lower-bound problems in communication complexity, but it seemed possible that communication lower bounds could be proved without making progress on matrix rigidity. We show that for every function f which is randomly self-reducible in a natural way (the inner product mod 2 is an example), bounding the communication complexity of f (in a precise technical sense) is equivalent to bounding the rigidity of the matrix of f, via an equivalence with probabilistic rank. @InProceedings{STOC17p641, author = {Josh Alman and Ryan Williams}, title = {Probabilistic Rank and Matrix Rigidity}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {641--652}, doi = {}, year = {2017}, }
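The matrix H_{n} in the abstract is the Sylvester construction, doubling in size at each step. A minimal builder, with the standard orthogonality property that gives H_{n} full rank over the reals (which is what makes the rigidity upper bounds surprising):

```python
def sylvester(n):
    """The 2^n x 2^n Walsh-Hadamard matrix: H_0 = [1], and H_{k+1} is the
    block matrix [[H_k, H_k], [H_k, -H_k]]. All entries are +1 or -1."""
    H = [[1]]
    for _ in range(n):
        top = [row + row for row in H]                  # [H | H]
        bottom = [row + [-x for x in row] for row in H]  # [H | -H]
        H = top + bottom
    return H

H3 = sylvester(3)
```

Distinct rows are orthogonal, so H_{n} · H_{n}ᵀ = 2ⁿ·I and the rank is the full 2ⁿ; the abstract shows that, nevertheless, changing only 2^{εn} entries per row collapses the rank dramatically.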

Ambainis, Andris 
STOC '17: "Quantum Algorithm for Tree ..."
Quantum Algorithm for Tree Size Estimation, with Applications to Backtracking and 2-Player Games
Andris Ambainis and Martins Kokainis (University of Latvia, Latvia) We study quantum algorithms on search trees of unknown structure, in a model where the tree can be discovered by local exploration. That is, we are given the root of the tree and access to a black box which, given a vertex v, outputs the children of v. We construct a quantum algorithm which, given such access to a search tree of depth at most n, estimates the size of the tree T within a factor of 1 ± δ in Õ(√(nT)) steps. More generally, the same algorithm can be used to estimate the size of directed acyclic graphs (DAGs) in a similar model. We then show two applications of this result: a) We show how to transform a classical backtracking search algorithm which examines T nodes of a search tree into an Õ(√T n^{3/2}) time quantum algorithm, improving over an earlier quantum backtracking algorithm of Montanaro (arXiv:1509.02374). b) We give a quantum algorithm for evaluating AND-OR formulas in a model where the formula can be discovered by local exploration (modeling position trees in 2-player games) which evaluates formulas of size T and depth T^{o(1)} in time O(T^{1/2+o(1)}). Thus, the quantum speedup is essentially the same as in the case when the formula is known in advance. @InProceedings{STOC17p989, author = {Andris Ambainis and Martins Kokainis}, title = {Quantum Algorithm for Tree Size Estimation, with Applications to Backtracking and 2-Player Games}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {989--1002}, doi = {}, year = {2017}, }
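For comparison, the classical starting point for tree size estimation in this black-box access model is Knuth's 1975 estimator: walk one random root-to-leaf path, multiplying branching factors along the way; the products summed along the path form an unbiased estimate of the node count. A sketch (the `children` callback plays the role of the abstract's black box; the tree here is an illustrative example):

```python
import random

def knuth_estimate(children, root, rng):
    """One random descent; the running product of branching factors,
    summed over visited nodes, is an unbiased estimator of the tree size."""
    est, weight, v = 0, 1, root
    while True:
        est += weight
        kids = children(v)
        if not kids:
            return est
        weight *= len(kids)
        v = rng.choice(kids)

# complete binary tree of depth 3: 1 + 2 + 4 + 8 = 15 nodes
def children(v):
    depth, index = v
    return [] if depth == 3 else [(depth + 1, 2 * index), (depth + 1, 2 * index + 1)]

rng = random.Random(1)
avg = sum(knuth_estimate(children, (0, 0), rng) for _ in range(2000)) / 2000
```

On a balanced tree the estimate is exact; on lopsided trees its variance blows up, which is the weakness the quantum algorithm's quadratically better dependence sidesteps.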

Anari, Nima 
STOC '17: "A Generalization of Permanent ..."
A Generalization of Permanent Inequalities and Applications in Counting and Optimization
Nima Anari and Shayan Oveis Gharan (Stanford University, USA; University of Washington, USA) A polynomial p ∈ ℝ[z_{1},…,z_{n}] is real stable if it has no roots in the upper half of the complex plane. Gurvits’s permanent inequality gives a lower bound on the coefficient of the z_{1}z_{2}⋯z_{n} monomial of a real stable polynomial p with nonnegative coefficients. This fundamental inequality has been used to attack several counting and optimization problems. Here, we study a more general question: given a stable multilinear polynomial p with nonnegative coefficients and a set of monomials S, we show that if the polynomial obtained by summing up all monomials in S is real stable, then we can lower-bound the sum of coefficients of monomials of p that are in S. We also prove generalizations of this theorem to (real stable) polynomials that are not multilinear. We use our theorem to give a new proof of Schrijver’s inequality on the number of perfect matchings of a regular bipartite graph, generalize a recent result of Nikolov and Singh, and give deterministic polynomial time approximation algorithms for several counting problems. @InProceedings{STOC17p384, author = {Nima Anari and Shayan Oveis Gharan}, title = {A Generalization of Permanent Inequalities and Applications in Counting and Optimization}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {384--396}, doi = {}, year = {2017}, }
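The best-known special case of Gurvits's permanent inequality is the Van der Waerden bound: a doubly stochastic n × n matrix has permanent at least n!/nⁿ, with equality for the all-1/n matrix. A brute-force check of that tight case (exponential-time, for illustration only):

```python
from itertools import permutations
from math import factorial

def permanent(A):
    """Permanent by summing over all permutations; fine for tiny n."""
    n = len(A)
    total = 0
    for perm in permutations(range(n)):
        prod = 1
        for i, j in enumerate(perm):
            prod *= A[i][j]
        total += prod
    return total

n = 4
J = [[1.0 / n] * n for _ in range(n)]   # doubly stochastic
bound = factorial(n) / n ** n           # Van der Waerden / Gurvits bound
```

Gurvits's inequality lower-bounds the z_{1}⋯z_{n} coefficient of the product polynomial ∏_{i} (∑_{j} a_{ij} z_{j}), which is exactly per(A); the paper extends such bounds from that single monomial to sums over a set S of monomials.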

Andoni, Alexandr 
STOC '17: "Approximate Near Neighbors ..."
Approximate Near Neighbors for General Symmetric Norms
Alexandr Andoni, Huy L. Nguyen, Aleksandar Nikolov, Ilya Razenshteyn, and Erik Waingarten (Columbia University, USA; Northeastern University, USA; University of Toronto, Canada; Massachusetts Institute of Technology, USA) We show that every *symmetric* normed space admits an efficient nearest neighbor search data structure with doubly-logarithmic approximation. Specifically, for every n, d = n^{o(1)}, and every d-dimensional symmetric norm ‖·‖, there exists a data structure for (log log n)-approximate nearest neighbor search over ‖·‖ for n-point datasets achieving n^{o(1)} query time and n^{1+o(1)} space. The main technical ingredient of the algorithm is a low-distortion embedding of a symmetric norm into a low-dimensional iterated product of top-k norms. We also show that our techniques cannot be extended to *general* norms. @InProceedings{STOC17p902, author = {Alexandr Andoni and Huy L. Nguyen and Aleksandar Nikolov and Ilya Razenshteyn and Erik Waingarten}, title = {Approximate Near Neighbors for General Symmetric Norms}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {902--913}, doi = {}, year = {2017}, }
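The embedding target in the abstract is built from top-k norms, which are simple to state: the sum of the k largest coordinates in absolute value. This is a symmetric norm (invariant under permuting coordinates and flipping signs) interpolating between ℓ_∞ (k = 1) and ℓ_1 (k = d):

```python
def top_k_norm(x, k):
    """Sum of the k largest absolute values of the coordinates.
    k = 1 gives the l_inf norm; k = len(x) gives the l_1 norm."""
    return sum(sorted((abs(v) for v in x), reverse=True)[:k])

x = [3.0, -1.0, 4.0, 1.0, -5.0]
```

The paper shows every symmetric norm embeds with low distortion into an iterated product of such norms, reducing nearest neighbor search for ‖·‖ to a space already understood.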

Angel, Omer 
STOC '17: "Local Max-Cut in Smoothed ..."
Local Max-Cut in Smoothed Polynomial Time
Omer Angel, Sébastien Bubeck, Yuval Peres, and Fan Wei (University of British Columbia, Canada; Microsoft Research, USA; Stanford University, USA) In 1988, Johnson, Papadimitriou and Yannakakis wrote that “Practically all the empirical evidence would lead us to conclude that finding locally optimal solutions is much easier than solving NP-hard problems”. Since then the empirical evidence has continued to amass, but formal proofs of this phenomenon have remained elusive. A canonical (and indeed complete) example is the local max-cut problem, for which no polynomial time method is known. In a breakthrough paper, Etscheid and Röglin proved that the smoothed complexity of local max-cut is quasi-polynomial, i.e., if arbitrary bounded weights are randomly perturbed, a local maximum can be found in φ n^{O(log n)} steps where φ is an upper bound on the random edge weight density. In this paper we prove smoothed polynomial complexity for local max-cut, thus confirming that finding local optima for max-cut is much easier than solving it. @InProceedings{STOC17p429, author = {Omer Angel and Sébastien Bubeck and Yuval Peres and Fan Wei}, title = {Local Max-Cut in Smoothed Polynomial Time}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {429--437}, doi = {}, year = {2017}, }
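The local search whose step count the paper bounds is the FLIP rule: move one vertex across the cut whenever that increases the cut weight, until no single move helps. A minimal version on an explicit weight matrix (the instance is illustrative):

```python
def local_max_cut(w, n):
    """FLIP local search for max-cut: flip a vertex while that increases
    the cut weight; stops at a local maximum of the cut."""
    side = [0] * n

    def gain(v):
        # change in cut weight if v switches sides: edges to same-side
        # neighbors become cut (+w), edges to other-side neighbors uncut (-w)
        return sum(w[v][u] * (1 if side[u] == side[v] else -1)
                   for u in range(n) if u != v)

    improved = True
    while improved:
        improved = False
        for v in range(n):
            if gain(v) > 0:
                side[v] ^= 1
                improved = True
    return side

# triangle with unit weights: any local optimum cuts 2 of the 3 edges
w = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
side = local_max_cut(w, 3)
```

In the worst case this loop can take exponentially many flips; the paper shows that after random perturbation of the weights it terminates in polynomially many steps.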

Angelidakis, Haris 
STOC '17: "Algorithms for Stable and ..."
Algorithms for Stable and Perturbation-Resilient Problems
Haris Angelidakis, Konstantin Makarychev, and Yury Makarychev (Toyota Technological Institute at Chicago, USA; Northwestern University, USA) We study the notion of stability and perturbation resilience introduced by Bilu and Linial (2010) and Awasthi, Blum, and Sheffet (2012). A combinatorial optimization problem is α-stable or α-perturbation-resilient if the optimal solution does not change when we perturb all parameters of the problem by a factor of at most α. In this paper, we give improved algorithms for stable instances of various clustering and combinatorial optimization problems. We also prove several hardness results. We first give an exact algorithm for 2-perturbation-resilient instances of clustering problems with natural center-based objectives. The class of clustering problems with natural center-based objectives includes such problems as k-means, k-median, and k-center. Our result improves upon the result of Balcan and Liang (2016), who gave an algorithm for clustering (1+√2 ≈ 2.41)-perturbation-resilient instances. Our result is tight in the sense that no polynomial-time algorithm can solve (2−ε)-perturbation-resilient instances of k-center unless NP = RP, as was shown by Balcan, Haghtalab, and White (2016). We then give an exact algorithm for (2−2/k)-stable instances of Minimum Multiway Cut with k terminals, improving the previous result of Makarychev, Makarychev, and Vijayaraghavan (2014), who gave an algorithm for 4-stable instances. We also give an algorithm for (2−2/k+δ)-weakly-stable instances of Minimum Multiway Cut. Finally, we show that there are no robust polynomial-time algorithms for n^{1−ε}-stable instances of Set Cover, Minimum Vertex Cover, and Min 2-Horn Deletion (unless P = NP). @InProceedings{STOC17p438, author = {Haris Angelidakis and Konstantin Makarychev and Yury Makarychev}, title = {Algorithms for Stable and Perturbation-Resilient Problems}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {438--451}, doi = {}, year = {2017}, }

Anshu, Anurag 
STOC '17: "Exponential Separation of ..."
Exponential Separation of Quantum Communication and Classical Information
Anurag Anshu, Dave Touchette, Penghui Yao, and Nengkun Yu (National University of Singapore, Singapore; University of Waterloo, Canada; Perimeter Institute for Theoretical Physics, Canada; University of Maryland, USA; University of Technology Sydney, Australia) We exhibit a Boolean function for which the quantum communication complexity is exponentially larger than the classical information complexity. An exponential separation in the other direction was already known from the work of Kerenidis et al. [SICOMP 44, pp. 1550–1572], hence our work implies that these two complexity measures are incomparable. As classical information complexity is an upper bound on quantum information complexity, which in turn is equal to amortized quantum communication complexity, our work implies that a tight direct sum result for distributional quantum communication complexity cannot hold. The function we use to present such a separation is the Symmetric k-ary Pointer Jumping function introduced by Rao and Sinha [ECCC TR15-057], whose classical communication complexity is exponentially larger than its classical information complexity. In this paper, we show that the quantum communication complexity of this function is polynomially equivalent to its classical communication complexity. The high-level idea behind our proof is arguably the simplest so far for such an exponential separation between information and communication, driven by a sequence of round-elimination arguments, allowing us to simplify further the approach of Rao and Sinha. As another application of the techniques that we develop, a simple proof for an optimal trade-off between Alice’s and Bob’s communication is given, even when allowing pre-shared entanglement, while computing the related Greater-Than function on n bits: say Bob communicates at most b bits, then Alice must send n/2^{O(b)} bits to Bob. We also present a classical protocol achieving this bound. 
@InProceedings{STOC17p277, author = {Anurag Anshu and Dave Touchette and Penghui Yao and Nengkun Yu}, title = {Exponential Separation of Quantum Communication and Classical Information}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {277--288}, doi = {}, year = {2017}, } 

Arora, Sanjeev 
STOC '17: "Provable Learning of Noisy-or ..."
Provable Learning of Noisy-or Networks
Sanjeev Arora, Rong Ge, Tengyu Ma, and Andrej Risteski (Princeton University, USA; Duke University, USA) Many machine learning applications use latent variable models to explain structure in data, whereby visible variables (= coordinates of the given datapoint) are explained as a probabilistic function of some hidden variables. Learning the model, that is, the mapping from hidden variables to visible ones and vice versa, is NP-hard even in very simple settings. In recent years, provably efficient algorithms were nevertheless developed for models with linear structure: topic models, mixture models, hidden Markov models, etc. These algorithms use matrix or tensor decomposition, and make some reasonable assumptions about the parameters of the underlying model. But matrix or tensor decomposition seems of little use when the latent variable model has nonlinearities. The current paper shows how to make progress: tensor decomposition is applied for learning the single-layer noisy-OR network, which is a textbook example of a Bayes net, and used for example in the classic QMR-DT software for diagnosing which disease(s) a patient may have by observing the symptoms he/she exhibits. The technical novelty here, which should be useful in other settings in the future, is analysis of tensor decomposition in the presence of systematic error (i.e., where the noise/error is correlated with the signal, and doesn't decrease as the number of samples goes to infinity). This requires rethinking all steps of tensor decomposition methods from the ground up. For simplicity our analysis is stated assuming that the network parameters were chosen from a probability distribution, but the method seems more generally applicable. @InProceedings{STOC17p1057, author = {Sanjeev Arora and Rong Ge and Tengyu Ma and Andrej Risteski}, title = {Provable Learning of Noisy-or Networks}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1057--1066}, doi = {}, year = {2017}, }
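The generative model being learned is compact: each present disease independently fails to trigger symptom j with probability 1 − p_{ij}, and the symptom fires unless every active parent and the leak term all fail. A sampler under illustrative parameter names (`p` and `leak` are assumptions, not the paper's notation):

```python
import random

def noisy_or_sample(diseases, p, leak, rng):
    """Single-layer noisy-OR: symptom j stays off with probability
    (1 - leak[j]) * prod over active diseases i of (1 - p[i][j])."""
    m = len(p[0])
    symptoms = []
    for j in range(m):
        q_off = 1.0 - leak[j]
        for i, present in enumerate(diseases):
            if present:
                q_off *= 1.0 - p[i][j]
        symptoms.append(1 if rng.random() > q_off else 0)
    return symptoms
```

Learning means recovering the p_{ij} from samples of the symptom vector alone; the OR nonlinearity is what defeats off-the-shelf tensor decomposition and forces the paper's systematic-error analysis.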

Artmann, Stephan 
STOC '17: "A Strongly Polynomial Algorithm ..."
A Strongly Polynomial Algorithm for Bimodular Integer Linear Programming
Stephan Artmann, Robert Weismantel, and Rico Zenklusen (ETH Zurich, Switzerland) We present a strongly polynomial algorithm to solve integer programs of the form max{c^{T}x : Ax ≤ b, x ∈ ℤ^{n}}, for A ∈ ℤ^{m×n} with rank(A) = n, b ∈ ℤ^{m}, c ∈ ℤ^{n}, and where all determinants of (n×n)-submatrices of A are bounded by 2 in absolute value. In particular, this implies that integer programs max{c^{T}x : Qx ≤ b, x ∈ ℤ_{≥0}^{n}}, where Q ∈ ℤ^{m×n} has the property that all subdeterminants are bounded by 2 in absolute value, can be solved in strongly polynomial time. We thus obtain an extension of the well-known result that integer programs with constraint matrices that are totally unimodular are solvable in strongly polynomial time. @InProceedings{STOC17p1206, author = {Stephan Artmann and Robert Weismantel and Rico Zenklusen}, title = {A Strongly Polynomial Algorithm for Bimodular Integer Linear Programming}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1206--1219}, doi = {}, year = {2017}, }
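The bimodularity condition is concrete: every n × n subdeterminant of the constraint matrix lies in {−2, …, 2} (total unimodularity is the special case with bound 1). A brute-force checker, with a cofactor-expansion determinant that is fine for the small matrices an illustration needs (helper names are hypothetical):

```python
from itertools import combinations

def det_int(M):
    """Exact integer determinant by cofactor expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det_int(minor)
    return total

def is_bimodular(A, n):
    """True iff every n x n subdeterminant of A has absolute value <= 2."""
    return all(abs(det_int([A[i] for i in rows])) <= 2
               for rows in combinations(range(len(A)), n))
```

The paper's algorithm runs in time polynomial in the dimensions alone (strongly polynomial) whenever this structural condition holds.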

Arvind, V. 
STOC '17: "Randomized Polynomial Time ..."
Randomized Polynomial Time Identity Testing for Noncommutative Circuits
V. Arvind, Pushkar S Joglekar, Partha Mukhopadhyay, and S. Raja (Institute of Mathematical Sciences, India; Vishwakarma Institute of Technology Pune, India; Chennai Mathematical Institute, India) In this paper we show that black-box polynomial identity testing for noncommutative polynomials f ∈ F⟨z_{1},z_{2},⋯,z_{n}⟩ of degree D and sparsity t can be done in randomized poly(n, log t, log D) time. As a consequence, given a circuit C of size s computing a polynomial f ∈ F⟨z_{1},z_{2},⋯,z_{n}⟩ with at most t nonzero monomials, testing whether f is identically zero can be done by a randomized algorithm with running time polynomial in s, n, and log t. This makes significant progress on a question that has been open for over ten years. Our algorithm is based on automata-theoretic ideas that can efficiently isolate a monomial in the given polynomial. In particular, we carry out the monomial isolation using nondeterministic automata. In general, noncommutative circuits of size s can compute polynomials of degree exponential in s and number of monomials double-exponential in s. In this paper, we consider a natural class of homogeneous noncommutative circuits, which we call +-regular circuits, and give a white-box polynomial time deterministic polynomial identity test. These circuits can compute noncommutative polynomials with number of monomials double-exponential in the circuit size. Our algorithm combines some new structural results for +-regular circuits with known results for noncommutative ABP identity testing, the rank bound of commutative depth three identities, and the equivalence testing problem for words. Finally, we consider the black-box identity testing problem for depth three +-regular circuits and give a randomized polynomial time identity test. 
In particular, we show that if f ∈ F⟨Z⟩ is a nonzero noncommutative polynomial computed by a depth three +-regular circuit of size s, then f cannot be a polynomial identity for the matrix algebra M_{s}(F) when F is sufficiently large depending on the degree of f. @InProceedings{STOC17p831, author = {V. Arvind and Pushkar S Joglekar and Partha Mukhopadhyay and S. Raja}, title = {Randomized Polynomial Time Identity Testing for Noncommutative Circuits}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {831--841}, doi = {}, year = {2017}, }
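The final result rests on the classical idea that a nonzero noncommutative polynomial of low degree cannot vanish on all matrices of large enough dimension, so substituting (random) matrices yields an identity test. A sketch for sparse polynomials given as (coefficient, word) pairs; the representation and helper names are illustrative, not the paper's:

```python
import random

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def eval_nc_poly(monomials, mats, dim):
    """Evaluate a sparse noncommutative polynomial (list of (coeff, word)
    pairs) by substituting a dim x dim matrix for each variable."""
    I = [[1 if i == j else 0 for j in range(dim)] for i in range(dim)]
    total = [[0] * dim for _ in range(dim)]
    for coeff, word in monomials:
        M = I
        for var in word:            # words multiply left to right; order matters
            M = mat_mul(M, mats[var])
        total = [[total[i][j] + coeff * M[i][j] for j in range(dim)]
                 for i in range(dim)]
    return total

# the commutator xy - yx: identically zero commutatively, nonzero here
poly = [(1, "xy"), (-1, "yx")]
rng = random.Random(0)
mats = {v: [[rng.randrange(100) for _ in range(2)] for _ in range(2)]
        for v in "xy"}
result = eval_nc_poly(poly, mats, 2)
```

A nonzero evaluation certifies f ≠ 0; the paper's contribution is doing this efficiently even when the degree D is exponential, so that plain substitution at dimension > D/2 is unaffordable.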

Azar, Yossi 
STOC '17: "Online Service with Delay ..."
Online Service with Delay
Yossi Azar, Arun Ganesh, Rong Ge, and Debmalya Panigrahi (Tel Aviv University, Israel; Duke University, USA) In this paper, we introduce the online service with delay problem. In this problem, there are n points in a metric space that issue service requests over time, and a server that serves these requests. The goal is to minimize the sum of the distance traveled by the server and the total delay (or a penalty function thereof) in serving the requests. This problem models the fundamental trade-off between batching requests to improve locality and reducing delay to improve response time, which has many applications in operations management, operating systems, logistics, supply chain management, and scheduling. Our main result is a polylogarithmic competitive ratio for the online service with delay problem. This result is obtained by an algorithm that we call the preemptive service algorithm. The salient feature of this algorithm is a process called preemptive service, which uses a novel combination of (recursive) time forwarding and spatial exploration on a metric space. We also generalize our results to k > 1 servers, and obtain stronger results for special metrics such as uniform and star metrics that correspond to (weighted) paging problems. @InProceedings{STOC17p551, author = {Yossi Azar and Arun Ganesh and Rong Ge and Debmalya Panigrahi}, title = {Online Service with Delay}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {551--563}, doi = {}, year = {2017}, }
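The objective is easy to state exactly: total distance moved by the server plus total waiting time of the requests. An evaluator for the line metric (the data layout and names are illustrative; the paper's algorithm is about choosing the schedule online, which this sketch does not attempt):

```python
def service_cost(requests, schedule):
    """requests: list of (arrival_time, point) on the line.
    schedule: list of (serve_time, request_index) in serving order.
    The server starts at 0; cost = total movement + total delay."""
    pos = 0.0
    movement = delay = 0.0
    for serve_time, i in schedule:
        arrival, point = requests[i]
        movement += abs(point - pos)       # distance traveled to the request
        delay += serve_time - arrival      # how long the request waited
        pos = point
    return movement + delay

requests = [(0.0, 2.0), (1.0, 3.0)]
cost = service_cost(requests, [(2.0, 0), (3.0, 1)])
```

Serving immediately minimizes delay but can force long zig-zags; batching nearby requests saves movement at the price of delay, which is exactly the tension the preemptive service algorithm manages.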

Babaioff, Moshe 
STOC '17: "The Menu-Size Complexity of ..."
The Menu-Size Complexity of Revenue Approximation
Moshe Babaioff, Yannai A. Gonczarowski, and Noam Nisan (Microsoft Research, Israel; Hebrew University of Jerusalem, Israel) We consider a monopolist that is selling n items to a single additive buyer, where the buyer’s values for the items are drawn according to independent distributions F_{1},F_{2},…,F_{n} that possibly have unbounded support. It is well known that, unlike in the single-item case, the revenue-optimal auction (a pricing scheme) may be complex, sometimes requiring a continuum of menu entries. It is also known that simple auctions with a finite bounded number of menu entries can extract a constant fraction of the optimal revenue. Nonetheless, the question of the possibility of extracting an arbitrarily high fraction of the optimal revenue via a finite menu size remained open. In this paper, we give an affirmative answer to this open question, showing that for every n and for every ε > 0, there exists a complexity bound C = C(n,ε) such that auctions of menu size at most C suffice for obtaining a (1−ε) fraction of the optimal revenue from any F_{1},…,F_{n}. We prove upper and lower bounds on the revenue approximation complexity C(n,ε), as well as on the deterministic communication complexity required to run an auction that achieves such an approximation. @InProceedings{STOC17p869, author = {Moshe Babaioff and Yannai A. Gonczarowski and Noam Nisan}, title = {The Menu-Size Complexity of Revenue Approximation}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {869--877}, doi = {}, year = {2017}, }
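A menu here is a list of (price, allocation) entries, an allocation giving the probability of receiving each item; the additive buyer picks the entry maximizing value minus price, or walks away. A minimal model of that choice, with hypothetical numbers:

```python
def buyer_choice(menu, values):
    """An additive buyer facing (price, allocation) entries picks the
    utility-maximizing entry, or the outside option (utility 0)."""
    best, best_u = None, 0.0
    for price, alloc in menu:
        u = sum(v * a for v, a in zip(values, alloc)) - price
        if u > best_u:
            best, best_u = (price, alloc), u
    return best, best_u

# two items; one bundle entry and one single-item entry
menu = [(3.0, (1.0, 1.0)), (1.5, (1.0, 0.0))]
choice, utility = buyer_choice(menu, values=(2.0, 2.0))
```

The menu size is the number of such entries, and the theorem bounds how many entries C(n, ε) are needed before expected revenue reaches a (1−ε) fraction of optimal.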

Babichenko, Yakov 
STOC '17: "Communication Complexity of ..."
Communication Complexity of Approximate Nash Equilibria
Yakov Babichenko and Aviad Rubinstein (Technion, Israel; University of California at Berkeley, USA) For a constant ε, we prove a poly(N) lower bound on the (randomized) communication complexity of ε-Nash equilibrium in two-player N×N games. For n-player binary-action games we prove an exp(n) lower bound for the (randomized) communication complexity of (ε,ε)-weak approximate Nash equilibrium, which is a profile of mixed actions such that at least a (1−ε)-fraction of the players are ε-best-replying. @InProceedings{STOC17p878, author = {Yakov Babichenko and Aviad Rubinstein}, title = {Communication Complexity of Approximate Nash Equilibria}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {878--889}, doi = {}, year = {2017}, }

Balkanski, Eric 
STOC '17: "The Limitations of Optimization ..."
The Limitations of Optimization from Samples
Eric Balkanski, Aviad Rubinstein, and Yaron Singer (Harvard University, USA; University of California at Berkeley, USA) In this paper we consider the following question: can we optimize objective functions from the training data we use to learn them? We formalize this question through a novel framework we call optimization from samples (OPS). In OPS, we are given sampled values of a function drawn from some distribution and the objective is to optimize the function under some constraint. While there are interesting classes of functions that can be optimized from samples, our main result is an impossibility. We show that there are classes of functions which are statistically learnable and optimizable, but for which no reasonable approximation for optimization from samples is achievable. In particular, our main result shows that there is no constant-factor approximation for maximizing coverage functions under a cardinality constraint using polynomially many samples drawn from any distribution. We also show tight approximation guarantees for maximization under a cardinality constraint of several interesting classes of functions including unit-demand, additive, and general monotone submodular functions, as well as a constant-factor approximation for monotone submodular functions with bounded curvature. @InProceedings{STOC17p1016, author = {Eric Balkanski and Aviad Rubinstein and Yaron Singer}, title = {The Limitations of Optimization from Samples}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1016--1027}, doi = {}, year = {2017}, }
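The impossibility concerns coverage functions, which are easy to maximize with full access: the classic greedy rule (repeatedly add the set with the largest marginal coverage) is a (1 − 1/e)-approximation under a cardinality constraint. The sketch below shows that full-access baseline, i.e., exactly the access that OPS denies:

```python
def greedy_coverage(sets, k):
    """Greedy max coverage: pick k sets, each time the one covering the
    most still-uncovered elements. (1 - 1/e)-approximation with full access."""
    covered, chosen = set(), []
    for _ in range(k):
        best = max(range(len(sets)), key=lambda i: len(sets[i] - covered))
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered

sets = [{1, 2, 3}, {3, 4}, {4, 5, 6}]
chosen, covered = greedy_coverage(sets, 2)
```

The paper shows that when the algorithm only sees sampled values of the coverage function, rather than the sets themselves, no constant-factor guarantee survives.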

Ball, Marshall 
STOC '17: "Average-Case Fine-Grained ..."
Average-Case Fine-Grained Hardness
Marshall Ball, Alon Rosen, Manuel Sabin, and Prashant Nalini Vasudevan (Columbia University, USA; IDC Herzliya, Israel; University of California at Berkeley, USA; Massachusetts Institute of Technology, USA) We present functions that can be computed in some fixed polynomial time but are hard on average for any algorithm that runs in slightly smaller time, assuming widely conjectured worst-case hardness for problems from the study of fine-grained complexity. Unconditional constructions of such functions are known from before (Goldmann et al., IPL ’94), but these have been canonical functions that have not found further use, while our functions are closely related to well-studied problems and have considerable algebraic structure. Based on the average-case hardness and structural properties of our functions, we outline the construction of a Proof of Work scheme and discuss possible approaches to constructing fine-grained One-Way Functions. We also show how our reductions make conjectures regarding the worst-case hardness of the problems we reduce from (and consequently the Strong Exponential Time Hypothesis) heuristically falsifiable in a sense similar to that of (Naor, CRYPTO ’03). We prove our hardness results in each case by showing fine-grained reductions from solving one of three problems – namely, Orthogonal Vectors (OV), 3SUM, and All-Pairs Shortest Paths (APSP) – in the worst case to computing our function correctly on a uniformly random input. The conjectured hardness of OV and 3SUM then gives us functions that require n^{2−o(1)} time to compute on average, and that of APSP gives us a function that requires n^{3−o(1)} time. Using the same techniques we also obtain a conditional average-case time hierarchy of functions. @InProceedings{STOC17p483, author = {Marshall Ball and Alon Rosen and Manuel Sabin and Prashant Nalini Vasudevan}, title = {Average-Case Fine-Grained Hardness}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {483--496}, doi = {}, year = {2017}, }
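One of the three source problems, Orthogonal Vectors, is short enough to state as code: given two sets of 0/1 vectors, decide whether some pair has inner product zero. The quadratic brute force below is essentially the best known for dimension d = ω(log n), and the conjecture that nothing substantially faster exists is what the paper's average-case functions inherit hardness from.

```python
from itertools import product

def has_orthogonal_pair(A, B):
    """Orthogonal Vectors: is there a in A and b in B with <a, b> = 0?
    Brute force over all pairs: O(n^2 * d) time."""
    return any(all(x * y == 0 for x, y in zip(a, b))
               for a, b in product(A, B))

A = [(1, 0, 1), (0, 1, 1)]
B = [(1, 1, 0), (0, 1, 0)]
```

The paper reduces worst-case OV (and 3SUM, APSP) to computing a structured function on a uniformly random input, transferring the conjectured n^{2−o(1)} hardness to the average case.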

Bansal, Nikhil 
STOC '17: "Algorithmic Discrepancy Beyond ..."
Algorithmic Discrepancy Beyond Partial Coloring
Nikhil Bansal and Shashwat Garg (Eindhoven University of Technology, Netherlands) The partial coloring method is one of the most powerful and widely used methods in combinatorial discrepancy problems. However, in many cases it leads to suboptimal bounds as the partial coloring step must be iterated a logarithmic number of times, and the errors can add up in an adversarial way. We give a new and general algorithmic framework that overcomes the limitations of the partial coloring method and can be applied in a black-box manner to various problems. Using this framework, we give new improved bounds and algorithms for several classic problems in discrepancy. In particular, for Tusnády’s problem, we give an improved O(log^{2} n) bound for discrepancy of axis-parallel rectangles and more generally an O_{d}(log^{d}n) bound for d-dimensional boxes in ℝ^{d}. Previously, even nonconstructively, the best bounds were O(log^{2.5} n) and O_{d}(log^{d+0.5}n) respectively. Similarly, for the Steinitz problem we give the first algorithm that matches the best known nonconstructive bounds due to Banaszczyk in the ℓ_{∞} case, and improves the previous algorithmic bounds substantially in the ℓ_{2} case. Our framework is based upon a substantial generalization of the techniques developed recently in the context of the Komlós discrepancy problem. @InProceedings{STOC17p914, author = {Nikhil Bansal and Shashwat Garg}, title = {Algorithmic Discrepancy Beyond Partial Coloring}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {914--926}, doi = {}, year = {2017}, } STOC '17: "Faster Space-Efficient Algorithms ..." 
Faster Space-Efficient Algorithms for Subset Sum and k-Sum Nikhil Bansal, Shashwat Garg, Jesper Nederlof, and Nikhil Vyas (Eindhoven University of Technology, Netherlands; IIT Bombay, India) We present randomized algorithms that solve Subset Sum and Knapsack instances with n items in O^{*}(2^{0.86n}) time, where the O^{*}(·) notation suppresses factors polynomial in the input size, and polynomial space, assuming random read-only access to exponentially many random bits. These results can be extended to solve Binary Linear Programming on n variables with few constraints in a similar running time. We also show that for any constant k≥ 2, random instances of k-Sum can be solved using O(n^{k−0.5} polylog(n)) time and O(logn) space, without the assumption of random access to random bits. Underlying these results is an algorithm that determines whether two given lists of length n with integers bounded by a polynomial in n share a common value. Assuming random read-only access to random bits, we show that this problem can be solved using O(logn) space significantly faster than the trivial O(n^{2}) time algorithm if no value occurs too often in the same list. @InProceedings{STOC17p198, author = {Nikhil Bansal and Shashwat Garg and Jesper Nederlof and Nikhil Vyas}, title = {Faster Space-Efficient Algorithms for Subset Sum and k-Sum}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {198--209}, doi = {}, year = {2017}, } 
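The common-value subroutine named in the abstract is easy to state; this hedged sketch (names are illustrative, not from the paper) shows the linear-space baseline that the paper's O(logn)-space algorithm competes against:

```python
def share_common_value(xs, ys):
    """Do two length-n integer lists share a value?  Comparing all pairs
    takes O(n^2) time in O(log n) space; the hash set below takes O(n)
    expected time but linear space.  The paper's contribution is beating
    O(n^2) time while keeping O(log n) space, given random read-only
    access to random bits."""
    return not set(xs).isdisjoint(ys)

print(share_common_value([3, 1, 4], [9, 2, 4]))  # True: both lists contain 4
```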

Barak, Boaz 
STOC '17: "Quantum Entanglement, Sum ..."
Quantum Entanglement, Sum of Squares, and the Log Rank Conjecture
Boaz Barak, Pravesh K. Kothari, and David Steurer (Harvard University, USA; Princeton University, USA; IAS, USA; Cornell University, USA) For every constant є>0, we give an exp(Õ(√n))-time algorithm for the 1 vs 1−є Best Separable State (BSS) problem of distinguishing, given an n^{2}× n^{2} matrix corresponding to a quantum measurement, between the case that there is a separable (i.e., non-entangled) state ρ that accepts with probability 1, and the case that every separable state is accepted with probability at most 1−є. Equivalently, our algorithm takes the description of a subspace of F^{n^{2}} (where F can be either the real or complex field) and distinguishes between the case that the subspace contains a rank one matrix, and the case that every rank one matrix is at least є far (in ℓ_{2} distance) from it. To the best of our knowledge, this is the first improvement over the brute-force exp(n)-time algorithm for this problem. Our algorithm is based on the sum-of-squares hierarchy and its analysis is inspired by Lovett’s proof (STOC ’14, JACM ’16) that the communication complexity of every rank-n Boolean matrix is bounded by Õ(√n). @InProceedings{STOC17p975, author = {Boaz Barak and Pravesh K. Kothari and David Steurer}, title = {Quantum Entanglement, Sum of Squares, and the Log Rank Conjecture}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {975--988}, doi = {}, year = {2017}, } 

Bavarian, Mohammad 
STOC '17: "Hardness Amplification for ..."
Hardness Amplification for Entangled Games via Anchoring
Mohammad Bavarian, Thomas Vidick, and Henry Yuen (Massachusetts Institute of Technology, USA; California Institute of Technology, USA; University of California at Berkeley, USA) We study the parallel repetition of one-round games involving players that can use quantum entanglement. A major open question in this area is whether parallel repetition reduces the entangled value of a game at an exponential rate — in other words, does an analogue of Raz’s parallel repetition theorem hold for games with players sharing quantum entanglement? Previous results only apply to special classes of games. We introduce a class of games we call anchored. We then introduce a simple transformation on games called anchoring, inspired in part by the Feige-Kilian transformation, that turns any (multiplayer) game into an anchored game. Unlike the Feige-Kilian transformation, our anchoring transformation is completeness preserving. We prove an exponential-decay parallel repetition theorem for anchored games that involve any number of entangled players. We also prove a threshold version of our parallel repetition theorem for anchored games. Together, our parallel repetition theorems and anchoring transformation provide the first hardness amplification techniques for general entangled games. We give an application to the games version of the Quantum PCP Conjecture. @InProceedings{STOC17p303, author = {Mohammad Bavarian and Thomas Vidick and Henry Yuen}, title = {Hardness Amplification for Entangled Games via Anchoring}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {303--316}, doi = {}, year = {2017}, } 

Ben-Aroya, Avraham 
STOC '17: "An Efficient Reduction from ..."
An Efficient Reduction from Two-Source to Non-malleable Extractors: Achieving Near-Logarithmic Min-entropy
Avraham Ben-Aroya, Dean Doron, and Amnon Ta-Shma (Tel Aviv University, Israel) The breakthrough result of Chattopadhyay and Zuckerman (2016) gives a reduction from the construction of explicit two-source extractors to the construction of explicit non-malleable extractors. However, even assuming the existence of optimal explicit non-malleable extractors only gives a two-source extractor (or a Ramsey graph) for poly(logn) entropy, rather than the optimal O(logn). In this paper we modify the construction to solve the above barrier. Using the currently best explicit non-malleable extractors we get explicit bipartite Ramsey graphs for sets of size 2^{k}, for k=O(logn loglogn). Any further improvement in the construction of non-malleable extractors would immediately yield a corresponding two-source extractor. Intuitively, Chattopadhyay and Zuckerman use an extractor as a sampler, and we observe that one could use a weaker object – a somewhere-random condenser with a small entropy gap and a very short seed. We also show how to explicitly construct this weaker object using the error reduction technique of Raz, Reingold and Vadhan (1999), and the constant-degree dispersers of Zuckerman (2006) that also work against extremely small tests. @InProceedings{STOC17p1185, author = {Avraham Ben-Aroya and Dean Doron and Amnon Ta-Shma}, title = {An Efficient Reduction from Two-Source to Non-malleable Extractors: Achieving Near-Logarithmic Min-entropy}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1185--1194}, doi = {}, year = {2017}, } 

Bouland, Adam 
STOC '17: "The Computational Complexity ..."
The Computational Complexity of Ball Permutations
Scott Aaronson, Adam Bouland, Greg Kuperberg, and Saeed Mehraban (University of Texas at Austin, USA; Massachusetts Institute of Technology, USA; University of California at Davis, USA) We define several models of computation based on permuting distinguishable particles (which we call balls) and characterize their computational complexity. In the quantum setting, we use the representation theory of the symmetric group to find variants of this model which are intermediate between BPP and DQC1 (the class of problems solvable with one clean qubit) and between DQC1 and BQP. Furthermore, we consider a restricted version of this model based on an exactly solvable scattering problem of particles moving on a line. Despite the simplicity of this model from the perspective of mathematical physics, we show that if we allow intermediate destructive measurements and specific input states, then the model cannot be efficiently simulated classically up to multiplicative error unless the polynomial hierarchy collapses. Finally, we define a classical version of this model in which one can probabilistically permute balls. We find this yields a complexity class which is intermediate between L and BPP, and that a nondeterministic version of this model is NP-complete. @InProceedings{STOC17p317, author = {Scott Aaronson and Adam Bouland and Greg Kuperberg and Saeed Mehraban}, title = {The Computational Complexity of Ball Permutations}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {317--327}, doi = {}, year = {2017}, } 

Brakerski, Zvika 
STOC '17: "Non-interactive Delegation ..."
Non-interactive Delegation and Batch NP Verification from Standard Computational Assumptions
Zvika Brakerski, Justin Holmgren, and Yael Kalai (Weizmann Institute of Science, Israel; Massachusetts Institute of Technology, USA; Microsoft Research, USA) We present an adaptive and non-interactive protocol for verifying arbitrary efficient computations in fixed polynomial time. Our protocol is computationally sound and can be based on any computational PIR scheme, which in turn can be based on standard polynomial-time cryptographic assumptions (e.g. the worst case hardness of polynomial-factor approximation of short-vector lattice problems). In our protocol, the verifier sets up a public key ahead of time, and this key can be used by any prover to prove arbitrary statements by simply sending a proof to the verifier. Verification is done using a secret verification key, and soundness relies on this key not being known to the prover. Our protocol further allows proving statements about computations of arbitrary RAM machines. Previous works either relied on knowledge assumptions, or could only offer non-adaptive two-message protocols (where the first message could not be reused), and required either obfuscation-based assumptions or super-polynomial hardness assumptions. We show that our techniques can also be applied to construct a new type of (non-adaptive) 2-message argument for batch NP statements. Specifically, we can simultaneously prove (with computational soundness) the membership of multiple instances in a given NP language, with communication complexity proportional to the length of a single witness. @InProceedings{STOC17p474, author = {Zvika Brakerski and Justin Holmgren and Yael Kalai}, title = {Non-interactive Delegation and Batch NP Verification from Standard Computational Assumptions}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {474--482}, doi = {}, year = {2017}, } 

Braverman, Vladimir 
STOC '17: "Streaming Symmetric Norms ..."
Streaming Symmetric Norms via Measure Concentration
Jarosław Błasiok, Vladimir Braverman, Stephen R. Chestnut, Robert Krauthgamer, and Lin F. Yang (Harvard University, USA; Johns Hopkins University, USA; ETH Zurich, Switzerland; Weizmann Institute of Science, Israel) We characterize the streaming space complexity of every symmetric norm l (a norm on ℝ^{n} invariant under sign-flips and coordinate-permutations), by relating this space complexity to the measure-concentration characteristics of l. Specifically, we provide nearly matching upper and lower bounds on the space complexity of calculating a (1±є)-approximation to the norm of the stream, for every 0<є≤ 1/2. (The bounds match up to (є^{−1} logn) factors.) We further extend those bounds to any large approximation ratio D≥ 1.1, showing that the decrease in space complexity is proportional to D^{2}, and that this factor is the best possible. All of the bounds depend on the median of l(x) when x is drawn uniformly from the l_{2} unit sphere. The same median governs many phenomena in high-dimensional spaces, such as large-deviation bounds and the critical dimension in Dvoretzky’s Theorem. The family of symmetric norms contains several well-studied norms, such as all l_{p} norms, and indeed we provide a new explanation for the disparity in space complexity between p≤ 2 and p>2. In addition, we apply our general results to easily derive bounds for several norms that were not studied before in the streaming model, including the top-k norm and the k-support norm, which was recently employed for machine learning tasks. Overall, these results make progress on two outstanding problems in the area of sublinear algorithms (Problems 5 and 30 in http://sublinear.info). @InProceedings{STOC17p716, author = {Jarosław Błasiok and Vladimir Braverman and Stephen R. Chestnut and Robert Krauthgamer and Lin F. Yang}, title = {Streaming Symmetric Norms via Measure Concentration}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {716--729}, doi = {}, year = {2017}, } 
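The median of l(x) over the unit sphere, which the abstract identifies as the governing quantity, is easy to estimate empirically; a minimal Monte Carlo sketch (the function name, parameters, and the ℓ∞ example are this note's choices, not the paper's):

```python
import random
import statistics

def median_on_sphere(norm, n, trials=2000, seed=0):
    """Monte Carlo estimate of the median of norm(x) for x uniform on
    the l2 unit sphere in R^n -- the quantity the paper shows governs
    the streaming space complexity of a symmetric norm.  A standard
    Gaussian vector rescaled to unit l2 norm is uniform on the sphere."""
    rng = random.Random(seed)
    vals = []
    for _ in range(trials):
        g = [rng.gauss(0.0, 1.0) for _ in range(n)]
        s = sum(x * x for x in g) ** 0.5
        vals.append(norm([x / s for x in g]))
    return statistics.median(vals)

# The l_infinity median on the sphere shrinks roughly like sqrt(log n / n),
# one way to see why l_p streaming behaves so differently for p <= 2 vs p > 2.
print(median_on_sphere(lambda v: max(abs(x) for x in v), 100))
```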

Bubeck, Sébastien 
STOC '17: "Kernel-Based Methods for Bandit ..."
Kernel-Based Methods for Bandit Convex Optimization
Sébastien Bubeck, Yin Tat Lee, and Ronen Eldan (Microsoft Research, USA; Weizmann Institute of Science, Israel) We consider the adversarial convex bandit problem and we build the first poly(T)-time algorithm with poly(n) √T-regret for this problem. To do so we introduce three new ideas in the derivative-free optimization literature: (i) kernel methods, (ii) a generalization of Bernoulli convolutions, and (iii) a new annealing schedule for exponential weights (with increasing learning rate). The basic version of our algorithm achieves Õ(n^{9.5} √T)-regret, and we show that a simple variant of this algorithm can be run in poly(n log(T))-time per step at the cost of an additional poly(n) T^{o(1)} factor in the regret. These results improve upon the Õ(n^{11} √T)-regret and exp(poly(T))-time result of the first two authors, and the log(T)^{poly(n)} √T-regret and log(T)^{poly(n)}-time result of Hazan and Li. Furthermore we conjecture that another variant of the algorithm could achieve Õ(n^{1.5} √T)-regret, and moreover that this regret is unimprovable (the current best lower bound being Ω(n √T) and it is achieved with linear functions). For the simpler situation of zeroth order stochastic convex optimization this corresponds to the conjecture that the optimal query complexity is of order n^{3} / є^{2}. @InProceedings{STOC17p72, author = {Sébastien Bubeck and Yin Tat Lee and Ronen Eldan}, title = {Kernel-Based Methods for Bandit Convex Optimization}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {72--85}, doi = {}, year = {2017}, } STOC '17: "Local Max-Cut in Smoothed ..." Local Max-Cut in Smoothed Polynomial Time Omer Angel, Sébastien Bubeck, Yuval Peres, and Fan Wei (University of British Columbia, Canada; Microsoft Research, USA; Stanford University, USA) In 1988, Johnson, Papadimitriou and Yannakakis wrote that “Practically all the empirical evidence would lead us to conclude that finding locally optimal solutions is much easier than solving NP-hard problems”. 
Since then the empirical evidence has continued to amass, but formal proofs of this phenomenon have remained elusive. A canonical (and indeed complete) example is the local max-cut problem, for which no polynomial time method is known. In a breakthrough paper, Etscheid and Röglin proved that the smoothed complexity of local max-cut is quasipolynomial, i.e., if arbitrary bounded weights are randomly perturbed, a local maximum can be found in φ n^{O(logn)} steps where φ is an upper bound on the random edge weight density. In this paper we prove smoothed polynomial complexity for local max-cut, thus confirming that finding local optima for max-cut is much easier than solving it. @InProceedings{STOC17p429, author = {Omer Angel and Sébastien Bubeck and Yuval Peres and Fan Wei}, title = {Local Max-Cut in Smoothed Polynomial Time}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {429--437}, doi = {}, year = {2017}, } 

Bullins, Brian 
STOC '17: "Finding Approximate Local ..."
Finding Approximate Local Minima Faster than Gradient Descent
Naman Agarwal, Zeyuan Allen-Zhu, Brian Bullins, Elad Hazan, and Tengyu Ma (Princeton University, USA; IAS, USA) We design a non-convex second-order optimization algorithm that is guaranteed to return an approximate local minimum in time which scales linearly in the underlying dimension and the number of training examples. The time complexity of our algorithm to find an approximate local minimum is even faster than that of gradient descent to find a critical point. Our algorithm applies to a general class of optimization problems including training a neural network and other non-convex objectives arising in machine learning. @InProceedings{STOC17p1195, author = {Naman Agarwal and Zeyuan Allen-Zhu and Brian Bullins and Elad Hazan and Tengyu Ma}, title = {Finding Approximate Local Minima Faster than Gradient Descent}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1195--1199}, doi = {}, year = {2017}, } 

Błasiok, Jarosław 
STOC '17: "Streaming Symmetric Norms ..."
Streaming Symmetric Norms via Measure Concentration
Jarosław Błasiok, Vladimir Braverman, Stephen R. Chestnut, Robert Krauthgamer, and Lin F. Yang (Harvard University, USA; Johns Hopkins University, USA; ETH Zurich, Switzerland; Weizmann Institute of Science, Israel) We characterize the streaming space complexity of every symmetric norm l (a norm on ℝ^{n} invariant under sign-flips and coordinate-permutations), by relating this space complexity to the measure-concentration characteristics of l. Specifically, we provide nearly matching upper and lower bounds on the space complexity of calculating a (1±є)-approximation to the norm of the stream, for every 0<є≤ 1/2. (The bounds match up to (є^{−1} logn) factors.) We further extend those bounds to any large approximation ratio D≥ 1.1, showing that the decrease in space complexity is proportional to D^{2}, and that this factor is the best possible. All of the bounds depend on the median of l(x) when x is drawn uniformly from the l_{2} unit sphere. The same median governs many phenomena in high-dimensional spaces, such as large-deviation bounds and the critical dimension in Dvoretzky’s Theorem. The family of symmetric norms contains several well-studied norms, such as all l_{p} norms, and indeed we provide a new explanation for the disparity in space complexity between p≤ 2 and p>2. In addition, we apply our general results to easily derive bounds for several norms that were not studied before in the streaming model, including the top-k norm and the k-support norm, which was recently employed for machine learning tasks. Overall, these results make progress on two outstanding problems in the area of sublinear algorithms (Problems 5 and 30 in http://sublinear.info). @InProceedings{STOC17p716, author = {Jarosław Błasiok and Vladimir Braverman and Stephen R. Chestnut and Robert Krauthgamer and Lin F. Yang}, title = {Streaming Symmetric Norms via Measure Concentration}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {716--729}, doi = {}, year = {2017}, } 

Cai, Jin-Yi 
STOC '17: "Holographic Algorithm with ..."
Holographic Algorithm with Matchgates Is Universal for Planar #CSP over Boolean Domain
Jin-Yi Cai and Zhiguo Fu (University of Wisconsin-Madison, USA; Jilin University, China) We prove a complexity classification theorem that classifies all counting constraint satisfaction problems (#CSP) over Boolean variables into exactly three classes: (1) Polynomial-time solvable; (2) #P-hard for general instances, but solvable in polynomial time over planar structures; and (3) #P-hard over planar structures. The classification applies to all finite sets of complex-valued, not necessarily symmetric, constraint functions on Boolean variables. It is shown that Valiant's holographic algorithm with matchgates is a universal strategy for all problems in class (2). @InProceedings{STOC17p842, author = {Jin-Yi Cai and Zhiguo Fu}, title = {Holographic Algorithm with Matchgates Is Universal for Planar #CSP over Boolean Domain}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {842--855}, doi = {}, year = {2017}, } 

Cai, Yang 
STOC '17: "Simple Mechanisms for Subadditive ..."
Simple Mechanisms for Subadditive Buyers via Duality
Yang Cai and Mingfei Zhao (McGill University, Canada) We provide simple and approximately revenue-optimal mechanisms in the multi-item multi-bidder settings. We unify and improve all previous results, as well as generalize the results to broader cases. In particular, we prove that the better of the following two simple, deterministic and Dominant Strategy Incentive Compatible mechanisms, a sequential posted price mechanism or an anonymous sequential posted price mechanism with entry fee, achieves a constant fraction of the optimal revenue among all randomized, Bayesian Incentive Compatible mechanisms, when buyers’ valuations are XOS over independent items. If the buyers’ valuations are subadditive over independent items, the approximation factor degrades to O(logm), where m is the number of items. We obtain our results by first extending the Cai-Devanur-Weinberg duality framework to derive an effective benchmark of the optimal revenue for subadditive bidders, and then analyzing this upper bound with new techniques. @InProceedings{STOC17p170, author = {Yang Cai and Mingfei Zhao}, title = {Simple Mechanisms for Subadditive Buyers via Duality}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {170--183}, doi = {}, year = {2017}, } 
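For readers unfamiliar with the mechanism format, the following toy sketch simulates a sequential posted price mechanism for unit-demand buyers; the paper's guarantees concern XOS/subadditive valuations and variants with entry fees, which this illustration deliberately omits, and all names and prices here are hypothetical:

```python
def sequential_posted_price(prices, buyers):
    """Toy sequential posted-price mechanism: buyers arrive in order and
    each takes the remaining item maximizing (value - posted price),
    buying nothing if no remaining item gives nonnegative utility.
    Unit-demand buyers only, for simplicity of illustration."""
    sold, revenue = set(), 0
    for vals in buyers:               # vals: dict item -> buyer's value
        best = None
        for item, price in prices.items():
            if item in sold or vals.get(item, 0) < price:
                continue
            if best is None or vals[item] - price > vals[best] - prices[best]:
                best = item
        if best is not None:
            sold.add(best)
            revenue += prices[best]
    return revenue, sold

rev, sold = sequential_posted_price({"a": 2, "b": 3}, [{"a": 5, "b": 1}, {"b": 4}])
print(rev, sorted(sold))  # 5 ['a', 'b']
```

The paper's result is that prices (and, in the anonymous variant, an entry fee) can be chosen so that the better of the two mechanisms extracts a constant fraction of the optimal revenue.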

Calude, Cristian S. 
STOC '17: "Deciding Parity Games in Quasipolynomial ..."
Deciding Parity Games in Quasipolynomial Time
Cristian S. Calude, Sanjay Jain, Bakhadyr Khoussainov, Wei Li, and Frank Stephan (University of Auckland, New Zealand; National University of Singapore, Singapore) It is shown that the parity game can be solved in quasipolynomial time. The parameterised parity game – with n nodes and m distinct values (aka colours or priorities) – is proven to be in the class of fixed parameter tractable (FPT) problems when parameterised over m. Both results improve known bounds, from runtime n^{O(√n)} to O(n^{log(m)+6}) and from an XP-algorithm with runtime O(n^{Θ(m)}) for fixed parameter m to an FPT-algorithm with runtime O(n^{5})+g(m), for some function g depending on m only. As an application it is proven that coloured Muller games with n nodes and m colours can be decided in time O((m^{m} · n)^{5}); it is also shown that this bound cannot be improved to O((2^{m} · n)^{c}), for any c, unless FPT = W[1]. @InProceedings{STOC17p252, author = {Cristian S. Calude and Sanjay Jain and Bakhadyr Khoussainov and Wei Li and Frank Stephan}, title = {Deciding Parity Games in Quasipolynomial Time}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {252--263}, doi = {}, year = {2017}, } 

Canetti, Ran 
STOC '17: "Equivocating Yao: Constant-Round ..."
Equivocating Yao: Constant-Round Adaptively Secure Multiparty Computation in the Plain Model
Ran Canetti, Oxana Poburinnaya, and Muthuramakrishnan Venkitasubramaniam (Boston University, USA; Tel Aviv University, Israel; University of Rochester, USA) Yao's circuit garbling scheme is one of the basic building blocks of cryptographic protocol design. Originally designed to enable two-message, two-party secure computation, the scheme has been extended in many ways and has innumerable applications. Still, a basic question has remained open throughout the years: Can the scheme be extended to guarantee security in the face of an adversary that corrupts both parties, adaptively, as the computation proceeds? We provide a positive answer to this question. We define a new type of encryption, called functionally equivocal encryption (FEE), and show that when Yao's scheme is implemented with an FEE as the underlying encryption mechanism, it becomes secure against such adaptive adversaries. We then show how to implement FEE from any one-way function. Combining our scheme with non-committing encryption, we obtain the first two-message, two-party computation protocol, and the first constant-round multiparty computation protocol, in the plain model, that are secure against semi-honest adversaries who can adaptively corrupt all parties. A number of extensions and applications are described within. @InProceedings{STOC17p497, author = {Ran Canetti and Oxana Poburinnaya and Muthuramakrishnan Venkitasubramaniam}, title = {Equivocating Yao: Constant-Round Adaptively Secure Multiparty Computation in the Plain Model}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {497--509}, doi = {}, year = {2017}, } 

Cevher, Volkan 
STOC '17: "An Adaptive Sublinear-Time ..."
An Adaptive Sublinear-Time Block Sparse Fourier Transform
Volkan Cevher, Michael Kapralov, Jonathan Scarlett, and Amir Zandieh (EPFL, Switzerland) The problem of approximately computing the k dominant Fourier coefficients of a vector X quickly, and using few samples in time domain, is known as the Sparse Fourier Transform (sparse FFT) problem. A long line of work on the sparse FFT has resulted in algorithms with O(klognlog(n/k)) runtime [Hassanieh et al., STOC’12] and O(klogn) sample complexity [Indyk et al., FOCS’14]. This paper revisits the sparse FFT problem with the added twist that the sparse coefficients approximately obey a (k_{0},k_{1})-block sparse model. In this model, signal frequencies are clustered in k_{0} intervals with width k_{1} in Fourier space, and k= k_{0}k_{1} is the total sparsity. Our main result is the first sparse FFT algorithm for (k_{0}, k_{1})-block sparse signals with a sample complexity of O^{*}(k_{0}k_{1} + k_{0}log(1+ k_{0})logn) at constant signal-to-noise ratios, and sublinear runtime. Our algorithm crucially uses adaptivity to achieve the improved sample complexity bound, and we provide a lower bound showing that this is essential in the Fourier setting: Any non-adaptive algorithm must use Ω(k_{0}k_{1}log(n/(k_{0}k_{1}))) samples for the (k_{0},k_{1})-block sparse model, ruling out improvements over the vanilla sparsity assumption. Our main technical innovation for adaptivity is a new randomized energy-based importance sampling technique that may be of independent interest. @InProceedings{STOC17p702, author = {Volkan Cevher and Michael Kapralov and Jonathan Scarlett and Amir Zandieh}, title = {An Adaptive Sublinear-Time Block Sparse Fourier Transform}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {702--715}, doi = {}, year = {2017}, } 

Chakrabarty, Deeparnab 
STOC '17: "Subquadratic Submodular Function ..."
Subquadratic Submodular Function Minimization
Deeparnab Chakrabarty, Yin Tat Lee, Aaron Sidford, and Sam Chiu-wai Wong (Dartmouth College, USA; Microsoft Research, USA; Stanford University, USA; University of California at Berkeley, USA) Submodular function minimization (SFM) is a fundamental discrete optimization problem which generalizes many well known problems, has applications in various fields, and can be solved in polynomial time. Owing to applications in computer vision and machine learning, fast SFM algorithms are highly desirable. The current fastest algorithms [Lee, Sidford, Wong, 2015] run in O(n^{2}log(nM)·EO+n^{3}log^{O(1)}(nM)) time and O(n^{3}log^{2}n·EO+n^{4}log^{O(1)}n) time respectively, where M is the largest absolute value of the function (assuming the range is integers) and EO is the time taken to evaluate the function on any set. The best known lower bound on the query complexity is only Ω(n) [Harvey, 2008]. The main contributions of this paper are subquadratic SFM algorithms. For integer-valued submodular functions, we give an SFM algorithm which runs in O(nM^{3}logn·EO) time, giving the first nearly linear time algorithm in any known regime. For real-valued submodular functions with range in [−1,1], we give an algorithm which in Õ(n^{5/3}·EO/ε^{2}) time returns an ε-additive approximate solution. At the heart of it, our algorithms are projected stochastic subgradient descent methods on the Lovász extension of submodular functions, where we crucially exploit submodularity and data structures to obtain fast, i.e. sublinear time, subgradient updates. 
The latter is crucial for beating the n^{2} bound: we show lower bounds for algorithms which access only subgradients of the Lovász extension, a class that includes the empirically fast Fujishige-Wolfe heuristic [Fujishige, 1980; Wolfe, 1976]. @InProceedings{STOC17p1220, author = {Deeparnab Chakrabarty and Yin Tat Lee and Aaron Sidford and Sam Chiu-wai Wong}, title = {Subquadratic Submodular Function Minimization}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1220--1231}, doi = {}, year = {2017}, } 
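The subgradient oracle for the Lovász extension that such methods build on is classical (Edmonds' greedy algorithm); a minimal sketch, with an example submodular function chosen for illustration (the paper's contribution, sublinear-time subgradient updates, is not shown here):

```python
def lovasz_subgradient(f, x):
    """Subgradient of the Lovasz extension of a submodular set function f
    at point x, via Edmonds' greedy algorithm: visit coordinates in
    decreasing order of x and record the marginal gains of f."""
    n = len(x)
    order = sorted(range(n), key=lambda i: -x[i])
    g = [0.0] * n
    chosen = []
    prev = f(frozenset())
    for i in order:
        chosen.append(i)
        cur = f(frozenset(chosen))
        g[i] = cur - prev           # marginal gain of adding element i
        prev = cur
    return g

# Example: f(S) = sqrt(|S|) is submodular (a concave function of |S|).
print(lovasz_subgradient(lambda S: len(S) ** 0.5, [0.9, 0.1, 0.5]))
```

Projected (stochastic) subgradient descent then iterates x ← Π(x − η·g) over the base polytope's domain; the coordinates of g sum to f of the ground set.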

Chang, Yi-Jun 
STOC '17: "Exponential Separations in ..."
Exponential Separations in the Energy Complexity of Leader Election
Yi-Jun Chang, Tsvi Kopelowitz, Seth Pettie, Ruosong Wang, and Wei Zhan (University of Michigan, USA; Tsinghua University, China) Energy is often the most constrained resource for battery-powered wireless devices and the lion’s share of energy is often spent on transceiver usage (sending/receiving packets), not on computation. In this paper we study the energy complexity of Leader Election and Approximate Counting in several models of wireless radio networks. It turns out that energy complexity is very sensitive to whether the devices can generate random bits and their ability to detect collisions. We consider four collision-detection models: StrongCD (in which transmitters and listeners detect collisions), SenderCD and ReceiverCD (in which only transmitters or only listeners detect collisions), and NoCD (in which no one detects collisions). The takeaway message of our results is quite surprising. For randomized Leader Election algorithms, there is an exponential gap between the energy complexity of SenderCD and ReceiverCD: NoCD = SenderCD ≫ ReceiverCD = StrongCD and for deterministic Leader Election algorithms, there is another exponential gap in energy complexity, but in the reverse direction: NoCD = ReceiverCD ≫ SenderCD = StrongCD In particular, the randomized energy complexity of Leader Election is Θ(log^{*} n) in SenderCD but Θ(log(log^{*} n)) in ReceiverCD, where n is the (unknown) number of devices. Its deterministic complexity is Θ(logN) in ReceiverCD but Θ(loglogN) in SenderCD, where N is the (known) size of the devices’ ID space. There is a tradeoff between time and energy. We give a new upper bound on the time-energy tradeoff curve for randomized Leader Election and Approximate Counting. A critical component of this algorithm is a new deterministic Leader Election algorithm for dense instances, when n=Θ(N), with inverse-Ackermann-type (O(α(N))) energy complexity. 
@InProceedings{STOC17p771, author = {Yi-Jun Chang and Tsvi Kopelowitz and Seth Pettie and Ruosong Wang and Wei Zhan}, title = {Exponential Separations in the Energy Complexity of Leader Election}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {771--783}, doi = {}, year = {2017}, } 
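A toy simulation of collision-detection-based leader election helps fix the model (the transmission probability and termination rule are this sketch's choices, not an algorithm from the paper):

```python
import random

def leader_election(n, seed=1):
    """Toy simulation of randomized leader election with collision
    detection: in each slot every contender transmits independently with
    probability 1/n; a slot with exactly one transmitter elects that
    device (success probability roughly 1/e per slot).  Every device
    stays awake in every slot here, so its energy equals the slot count;
    the paper's algorithms drive that per-device energy far lower."""
    rng = random.Random(seed)
    slots = 0
    while True:
        slots += 1
        transmitters = [i for i in range(n) if rng.random() < 1.0 / n]
        if len(transmitters) == 1:
            return transmitters[0], slots

leader, slots = leader_election(8)
print(leader, slots)
```

The energy measures in the paper count only the slots in which a device listens or transmits, which is exactly what this naive always-awake strategy fails to economize.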

Charikar, Moses 
STOC '17: "Learning from Untrusted Data ..."
Learning from Untrusted Data
Moses Charikar, Jacob Steinhardt, and Gregory Valiant (Stanford University, USA) The vast majority of theoretical results in machine learning and statistics assume that the training data is a reliable reflection of the phenomena to be learned. Similarly, most learning techniques used in practice are brittle to the presence of large amounts of biased or malicious data. Motivated by this, we consider two frameworks for studying estimation, learning, and optimization in the presence of significant fractions of arbitrary data. The first framework, list-decodable learning, asks whether it is possible to return a list of answers such that at least one is accurate. For example, given a dataset of n points for which an unknown subset of α n points are drawn from a distribution of interest, and no assumptions are made about the remaining (1−α)n points, is it possible to return a list of poly(1/α) answers? The second framework, which we term the semi-verified model, asks whether a small dataset of trusted data (drawn from the distribution in question) can be used to extract accurate information from a much larger but untrusted dataset (of which only an α-fraction is drawn from the distribution). We show strong positive results in both settings, and provide an algorithm for robust learning in a very general stochastic optimization setting. This result has immediate implications for robustly estimating the mean of distributions with bounded second moments, robustly learning mixtures of such distributions, and robustly finding planted partitions in random graphs in which significant portions of the graph have been perturbed by an adversary. @InProceedings{STOC17p47, author = {Moses Charikar and Jacob Steinhardt and Gregory Valiant}, title = {Learning from Untrusted Data}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {47--60}, doi = {}, year = {2017}, } 

Chattopadhyay, Eshan 
STOC '17: "Nonmalleable Codes and Extractors ..."
Non-malleable Codes and Extractors for Small-Depth Circuits, and Affine Functions
Eshan Chattopadhyay and Xin Li (IAS, USA; Johns Hopkins University, USA) Non-malleable codes were introduced by Dziembowski, Pietrzak and Wichs as an elegant relaxation of error-correcting codes, where the motivation is to handle more general forms of tampering while still providing meaningful guarantees. This has led to many elegant constructions and applications in cryptography. However, most works so far only studied tampering in the split-state model, where different parts of the codeword are tampered independently, and thus do not apply to many other natural classes of tampering functions. The only exceptions are the work of Agrawal et al., which studied non-malleable codes against bit permutation composed with bitwise tampering, and the works of Faust et al. and Ball et al., which studied non-malleable codes against local functions. However, in both cases each tampered bit only depends on a subset of input bits. In this work, we study the problem of constructing non-malleable codes against more general tampering functions that act on the entire codeword. We give the first efficient constructions of non-malleable codes against small-depth circuit tampering functions and affine tampering functions. These are the first explicit non-malleable codes against tampering functions where each tampered bit can depend on all input bits. We also give efficient non-malleable codes against t-local functions for t=o(√n), where a t-local function has the property that any output bit depends on at most t input bits. In the case of deterministic decoders, this improves upon the results of Ball et al., which can handle t ≤ n^{1/4}. All our results on non-malleable codes are obtained by using the connection between non-malleable codes and seedless non-malleable extractors discovered by Cheraghchi and Guruswami. Therefore, we also give the first efficient constructions of seedless non-malleable extractors against small-depth circuit tampering functions, t-local tampering functions for t=o(√n), and affine tampering functions. 
To derive our results on non-malleable codes, we design efficient algorithms to almost uniformly sample from the preimage of any given output of our non-malleable extractor. @InProceedings{STOC17p1171, author = {Eshan Chattopadhyay and Xin Li}, title = {Non-malleable Codes and Extractors for Small-Depth Circuits, and Affine Functions}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1171--1184}, doi = {}, year = {2017}, } 

Chawla, Shuchi 
STOC '17: "Stability of Service under ..."
Stability of Service under Time-of-Use Pricing
Shuchi Chawla, Nikhil R. Devanur, Alexander E. Holroyd, Anna R. Karlin, James B. Martin, and Balasubramanian Sivan (University of Wisconsin-Madison, USA; Microsoft Research, USA; University of Washington, USA; University of Oxford, UK; Google Research, USA) We consider time-of-use pricing as a technique for matching supply and demand of temporal resources with the goal of maximizing social welfare. Relevant examples include energy, computing resources on a cloud computing platform, and charging stations for electric vehicles, among many others. A client/job in this setting has a window of time during which he needs service, and a particular value for obtaining it. We assume a stochastic model for demand, where each job materializes with some probability via an independent Bernoulli trial. Given a per-time-unit pricing of resources, any realized job will first try to get served by the cheapest available resource in its window and, failing that, will try to find service at the next cheapest available resource, and so on. Thus, the natural stochastic fluctuations in demand have the potential to lead to cascading overload events. Our main result shows that setting prices so as to optimally handle the expected demand works well: with high probability, when the actual demand is instantiated, the system is stable and the expected value of the jobs served is very close to that of the optimal offline algorithm. @InProceedings{STOC17p184, author = {Shuchi Chawla and Nikhil R. Devanur and Alexander E. Holroyd and Anna R. Karlin and James B. Martin and Balasubramanian Sivan}, title = {Stability of Service under Time-of-Use Pricing}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {184--197}, doi = {}, year = {2017}, } 

Chen, Xi 
STOC '17: "Addition Is Exponentially ..."
Addition Is Exponentially Harder Than Counting for Shallow Monotone Circuits
Xi Chen, Igor C. Oliveira, and Rocco A. Servedio (Columbia University, USA; Charles University in Prague, Czechia) Let Add_{k,N} denote the Boolean function which takes as input k strings of N bits each, representing k numbers a^{(1)},…,a^{(k)} in {0,1,…,2^{N}−1}, and outputs 1 if and only if a^{(1)} + ⋯ + a^{(k)} ≥ 2^{N}. Let MAJ_{t,n} denote a monotone unweighted threshold gate, i.e., the Boolean function which takes as input a single string x ∈ {0,1}^{n} and outputs 1 if and only if x_{1} + ⋯ + x_{n} ≥ t. The function Add_{k,N} may be viewed as a monotone function that performs addition, and MAJ_{t,n} may be viewed as a monotone gate that performs counting. We refer to circuits that are composed of MAJ gates as monotone majority circuits. The main result of this paper is an exponential lower bound on the size of bounded-depth monotone majority circuits that compute Add_{k,N}. More precisely, we show that for any constant d ≥ 2, any depth-d monotone majority circuit that computes Add_{d,N} must have size 2^{Ω(N^{1/d})}. As Add_{k,N} can be computed by a single monotone weighted threshold gate (that uses exponentially large weights), our lower bound implies that constant-depth monotone majority circuits require exponential size to simulate monotone weighted threshold gates. This answers a question posed by Goldmann and Karpinski (STOC’93) and recently restated by Håstad (2010, 2014). We also show that our lower bound is essentially best possible, by constructing a depth-d, size-2^{O(N^{1/d})} monotone majority circuit for Add_{d,N}. As a corollary of our lower bound, we significantly strengthen a classical theorem in circuit complexity due to Ajtai and Gurevich (JACM’87). They exhibited a monotone function that is in AC^{0} but requires superpolynomial size for any constant-depth monotone circuit composed of unbounded fan-in AND and OR gates. 
We describe a monotone function that is in depth-3 AC^{0} but requires exponential-size monotone circuits of any constant depth, even if the circuits are composed of MAJ gates. @InProceedings{STOC17p1232, author = {Xi Chen and Igor C. Oliveira and Rocco A. Servedio}, title = {Addition Is Exponentially Harder Than Counting for Shallow Monotone Circuits}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1232--1245}, doi = {}, year = {2017}, } STOC '17: "Beyond Talagrand Functions: ..." Beyond Talagrand Functions: New Lower Bounds for Testing Monotonicity and Unateness Xi Chen, Erik Waingarten, and Jinyu Xie (Columbia University, USA) We prove a lower bound of Ω(n^{1/3}) for the query complexity of any two-sided and adaptive algorithm that tests whether an unknown Boolean function f:{0,1}^{n}→ {0,1} is monotone versus far from monotone. This improves the recent lower bound of Ω(n^{1/4}) for the same problem by Belovs and Blais (STOC’16). Our result builds on a new family of random Boolean functions that can be viewed as a two-level extension of Talagrand’s random DNFs. Beyond monotonicity, we prove a lower bound of Ω(√n) for two-sided, adaptive algorithms and a lower bound of Ω(n) for one-sided, non-adaptive algorithms for testing unateness, a natural generalization of monotonicity. The latter matches the linear upper bounds by Khot and Shinkar (RANDOM’16) and by Baleshzar, Chakrabarty, Pallavoor, Raskhodnikova, and Seshadhri (2017). @InProceedings{STOC17p523, author = {Xi Chen and Erik Waingarten and Jinyu Xie}, title = {Beyond Talagrand Functions: New Lower Bounds for Testing Monotonicity and Unateness}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {523--536}, doi = {}, year = {2017}, } 
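The two functions at the heart of the first abstract above are simple to state in code; the following is a direct, illustrative transcription of the definitions (function names are ours, not from the paper):

```python
def add_kN(strings, N):
    """Add_{k,N}: given k binary strings of N bits each (most significant
    bit first), output 1 iff the represented numbers sum to at least 2^N."""
    total = sum(int(s, 2) for s in strings)
    return 1 if total >= 2 ** N else 0

def maj_tn(x, t):
    """MAJ_{t,n}: a monotone unweighted threshold gate -- output 1 iff
    at least t of the input bits x_1, ..., x_n are 1."""
    return 1 if sum(x) >= t else 0
```

The paper's lower bound concerns circuits of MAJ gates that compute Add_{d,N}, not this direct evaluation, which is of course easy.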

Chestnut, Stephen R. 
STOC '17: "Streaming Symmetric Norms ..."
Streaming Symmetric Norms via Measure Concentration
Jarosław Błasiok, Vladimir Braverman, Stephen R. Chestnut, Robert Krauthgamer, and Lin F. Yang (Harvard University, USA; Johns Hopkins University, USA; ETH Zurich, Switzerland; Weizmann Institute of Science, Israel) We characterize the streaming space complexity of every symmetric norm l (a norm on ℝ^{n} invariant under sign-flips and coordinate-permutations), by relating this space complexity to the measure-concentration characteristics of l. Specifically, we provide nearly matching upper and lower bounds on the space complexity of calculating a (1±є)-approximation to the norm of the stream, for every 0 < є ≤ 1/2. (The bounds match up to poly(є^{−1} log n) factors.) We further extend those bounds to any large approximation ratio D ≥ 1.1, showing that the decrease in space complexity is proportional to D^{2}, and that this factor is the best possible. All of the bounds depend on the median of l(x) when x is drawn uniformly from the l_{2} unit sphere. The same median governs many phenomena in high-dimensional spaces, such as large-deviation bounds and the critical dimension in Dvoretzky’s Theorem. The family of symmetric norms contains several well-studied norms, such as all l_{p} norms, and indeed we provide a new explanation for the disparity in space complexity between p ≤ 2 and p > 2. In addition, we apply our general results to easily derive bounds for several norms that were not studied before in the streaming model, including the top-k norm and the k-support norm, which was recently employed for machine learning tasks. Overall, these results make progress on two outstanding problems in the area of sublinear algorithms (Problems 5 and 30 in http://sublinear.info). @InProceedings{STOC17p716, author = {Jarosław Błasiok and Vladimir Braverman and Stephen R. Chestnut and Robert Krauthgamer and Lin F. Yang}, title = {Streaming Symmetric Norms via Measure Concentration}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {716--729}, doi = {}, year = {2017}, } 
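As a concrete example of a symmetric norm mentioned in the abstract, the top-k norm is the sum of the k largest coordinates in absolute value; a minimal sketch (the function name is ours):

```python
def top_k_norm(x, k):
    """Top-k norm: sum of the k largest coordinates of x in absolute value.
    It is symmetric: invariant under sign-flips and coordinate permutations."""
    return sum(sorted((abs(v) for v in x), reverse=True)[:k])
```

For k = 1 this is the l_infinity norm, and for k = n it is the l_1 norm, which illustrates why a single characterization covering all symmetric norms is nontrivial.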

Christiani, Tobias 
STOC '17: "Set Similarity Search Beyond ..."
Set Similarity Search Beyond MinHash
Tobias Christiani and Rasmus Pagh (IT University of Copenhagen, Denmark) We consider the problem of approximate set similarity search under Braun-Blanquet similarity B(x, y) = |x ∩ y| / max(|x|, |y|). The (b_{1}, b_{2})-approximate Braun-Blanquet similarity search problem is to preprocess a collection of sets P such that, given a query set q, if there exists x ∈ P with B(q, x) ≥ b_{1}, then we can efficiently return x′ ∈ P with B(q, x′) > b_{2}. We present a simple data structure that solves this problem with space usage O(n^{1+ρ} log n + ∑_{x ∈ P} |x|) and query time O(|q| n^{ρ} log n), where n = |P| and ρ = log(1/b_{1})/log(1/b_{2}). Making use of existing lower bounds for locality-sensitive hashing by O’Donnell et al. (TOCT 2014), we show that this value of ρ is tight across the parameter space, i.e., for every choice of constants 0 < b_{2} < b_{1} < 1. In the case where all sets have the same size, our solution strictly improves upon the value of ρ that can be obtained through the use of state-of-the-art data-independent techniques in the Indyk-Motwani locality-sensitive hashing framework (STOC 1998), such as Broder’s MinHash (CCS 1997) for Jaccard similarity and Andoni et al.’s cross-polytope LSH (NIPS 2015) for cosine similarity. Surprisingly, even though our solution is data-independent, for a large part of the parameter space we outperform the currently best data-dependent method by Andoni and Razenshteyn (STOC 2015). @InProceedings{STOC17p1094, author = {Tobias Christiani and Rasmus Pagh}, title = {Set Similarity Search Beyond MinHash}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1094--1107}, doi = {}, year = {2017}, } 
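The Braun-Blanquet similarity and the exponent ρ = log(1/b₁)/log(1/b₂) from the abstract are easy to compute directly; a minimal sketch covering only these two quantities, not the data structure itself (function names are ours):

```python
import math

def braun_blanquet(x, y):
    """B(x, y) = |x ∩ y| / max(|x|, |y|) for finite sets x and y."""
    return len(x & y) / max(len(x), len(y))

def rho(b1, b2):
    """The exponent rho = log(1/b1) / log(1/b2), which governs the
    space n^{1+rho} and query time |q| * n^rho bounds, for 0 < b2 < b1 < 1."""
    return math.log(1 / b1) / math.log(1 / b2)
```

For example, at b₁ = 1/2 and b₂ = 1/4 the exponent is ρ = 1/2.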

Chuzhoy, Julia 
STOC '17: "New Hardness Results for Routing ..."
New Hardness Results for Routing on Disjoint Paths
Julia Chuzhoy, David H. K. Kim, and Rachit Nimavat (Toyota Technological Institute at Chicago, USA; University of Chicago, USA) In the classical Node-Disjoint Paths (NDP) problem, the input consists of an undirected n-vertex graph G, and a collection M = {(s_{1},t_{1}),…,(s_{k},t_{k})} of pairs of its vertices, called source-destination, or demand, pairs. The goal is to route the largest possible number of the demand pairs via node-disjoint paths. The best current approximation for the problem is achieved by a simple greedy algorithm, whose approximation factor is O(√n), while the best current negative result is an Ω(log^{1/2−δ} n)-hardness of approximation for any constant δ, under standard complexity assumptions. Even seemingly simple special cases of the problem are still poorly understood: when the input graph is a grid, the best current algorithm achieves an Õ(n^{1/4})-approximation, and when it is a general planar graph, the best current approximation ratio of an efficient algorithm is Õ(n^{9/19}). The best currently known lower bound for both these versions of the problem is APX-hardness. In this paper we prove that NDP is 2^{Ω(√log n)}-hard to approximate, unless all problems in NP have algorithms with running time n^{O(log n)}. Our result holds even when the underlying graph is a planar graph with maximum vertex degree 4, and all source vertices lie on the boundary of a single face (but the destination vertices may lie anywhere in the graph). We extend this result to the closely related Edge-Disjoint Paths problem, showing the same hardness of approximation ratio even for subcubic planar graphs with all sources lying on the boundary of a single face. @InProceedings{STOC17p86, author = {Julia Chuzhoy and David H. K. Kim and Rachit Nimavat}, title = {New Hardness Results for Routing on Disjoint Paths}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {86--99}, doi = {}, year = {2017}, } 

Cohen, Gil 
STOC '17: "Towards Optimal TwoSource ..."
Towards Optimal Two-Source Extractors and Ramsey Graphs
Gil Cohen (Princeton University, USA) The main contribution of this work is a construction of a two-source extractor for quasi-logarithmic min-entropy. That is, an extractor for two independent n-bit sources with min-entropy O(log n), which is optimal up to the poly(log log n) factor. A strong motivation for constructing two-source extractors for low entropy is Ramsey graph constructions. Our two-source extractor readily yields a (log n)^{(log log log n)^{O(1)}}-Ramsey graph on n vertices. Although there has been exciting progress towards constructing O(log n)-Ramsey graphs in recent years, a line of work that this paper contributes to, it is not clear if current techniques can be pushed so as to match this bound. Interestingly, however, as an artifact of current techniques, one obtains strongly explicit Ramsey graphs, namely, graphs on n vertices where the existence of an edge connecting any pair of vertices can be determined in time poly(log n). On top of our strongly explicit construction, in this work, we consider algorithms that output the entire graph in poly(n) time, and make progress towards matching the desired O(log n) bound in this setting. In our opinion, this is a natural setting in which Ramsey graph constructions should be studied. The main technical novelty of this work lies in an improved construction of an independence-preserving merger (IPM), a variant of the well-studied notion of a merger, which was recently introduced by Cohen and Schulman. Our construction is based on a new connection to correlation breakers with advice. In fact, our IPM satisfies a stronger and more natural property than that required by the original definition, and we believe it may find further applications. @InProceedings{STOC17p1157, author = {Gil Cohen}, title = {Towards Optimal Two-Source Extractors and Ramsey Graphs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1157--1170}, doi = {}, year = {2017}, } 

Cohen, Michael B. 
STOC '17: "AlmostLinearTime Algorithms ..."
Almost-Linear-Time Algorithms for Markov Chains and New Spectral Primitives for Directed Graphs
Michael B. Cohen, Jonathan Kelner, John Peebles, Richard Peng, Anup B. Rao, Aaron Sidford, and Adrian Vladu (Massachusetts Institute of Technology, USA; Georgia Institute of Technology, USA; Stanford University, USA) In this paper, we begin to address the long-standing algorithmic gap between general and reversible Markov chains. We develop directed analogues of several spectral graph-theoretic tools that had previously been available only in the undirected setting, and for which it was not clear that directed versions even existed. In particular, we provide a notion of approximation for directed graphs, prove sparsifiers under this notion always exist, and show how to construct them in almost linear time. Using this notion of approximation, we design the first almost-linear-time directed Laplacian system solver, and, by leveraging the recent framework of [Cohen-Kelner-Peebles-Peng-Sidford-Vladu, FOCS’16], we also obtain almost-linear-time algorithms for computing the stationary distribution of a Markov chain, computing expected commute times in a directed graph, and more. For each problem, our algorithms improve the previous best running times of O((nm^{3/4} + n^{2/3} m) log^{O(1)}(n κ є^{−1})) to O((m + n 2^{O(√(log n log log n))}) log^{O(1)}(n κ є^{−1})), where n is the number of vertices in the graph, m is the number of edges, κ is a natural condition number associated with the problem, and є is the desired accuracy. We hope these results open the door for further studies into directed spectral graph theory, and that they will serve as a stepping stone for designing a new generation of fast algorithms for directed graphs. @InProceedings{STOC17p410, author = {Michael B. Cohen and Jonathan Kelner and John Peebles and Richard Peng and Anup B. Rao and Aaron Sidford and Adrian Vladu}, title = {Almost-Linear-Time Algorithms for Markov Chains and New Spectral Primitives for Directed Graphs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {410--419}, doi = {}, year = {2017}, } 

Coja-Oghlan, Amin 
STOC '17: "InformationTheoretic Thresholds ..."
Information-Theoretic Thresholds from the Cavity Method
Amin Coja-Oghlan, Florent Krzakala, Will Perkins, and Lenka Zdeborová (Goethe University Frankfurt, Germany; CNRS, France; PSL Research University, France; ENS, France; UPMC, France; University of Birmingham, UK; CEA, France; University of Paris-Saclay, France) Vindicating a sophisticated but non-rigorous physics approach called the cavity method, we establish a formula for the mutual information in statistical inference problems induced by random graphs. This general result implies the conjecture on the information-theoretic threshold in the disassortative stochastic block model [Decelle et al.: Phys. Rev. E (2011)] and allows us to pinpoint the exact condensation phase transition in random constraint satisfaction problems such as random graph coloring, thereby proving a conjecture from [Krzakala et al.: PNAS (2007)]. As a further application we establish the formula for the mutual information in Low-Density Generator Matrix codes as conjectured in [Montanari: IEEE Transactions on Information Theory (2005)]. The proofs provide a conceptual underpinning of the replica symmetric variant of the cavity method, and we expect that the approach will find many future applications. @InProceedings{STOC17p146, author = {Amin Coja-Oghlan and Florent Krzakala and Will Perkins and Lenka Zdeborová}, title = {Information-Theoretic Thresholds from the Cavity Method}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {146--157}, doi = {}, year = {2017}, } 

Curticapean, Radu 
STOC '17: "Homomorphisms Are a Good Basis ..."
Homomorphisms Are a Good Basis for Counting Small Subgraphs
Radu Curticapean, Holger Dell, and Dániel Marx (Hungarian Academy of Sciences, Hungary; Saarland University, Germany) We introduce graph motif parameters, a class of graph parameters that depend only on the frequencies of constant-size induced subgraphs. Classical works by Lovász show that many interesting quantities have this form, including, for fixed graphs H, the number of H-copies (induced or not) in an input graph G, and the number of homomorphisms from H to G. We use the framework of graph motif parameters to obtain faster algorithms for counting subgraph copies of fixed graphs H in host graphs G. More precisely, for graphs H on k edges, we show how to count subgraph copies of H in time k^{O(k)}· n^{0.174k + o(k)} by a surprisingly simple algorithm. This improves upon previously known running times, such as O(n^{0.91k + c}) time for k-edge matchings or O(n^{0.46k + c}) time for k-cycles. Furthermore, we prove a general complexity dichotomy for evaluating graph motif parameters: Given a class C of such parameters, we consider the problem of evaluating f ∈ C on input graphs G, parameterized by the number of induced subgraphs that f depends upon. For every recursively enumerable class C, we prove the above problem to be either FPT or #W[1]-hard, with an explicit dichotomy criterion. This allows us to recover known dichotomies for counting subgraphs, induced subgraphs, and homomorphisms in a uniform and simplified way, together with improved lower bounds. Finally, we extend graph motif parameters to colored subgraphs and prove a complexity trichotomy: For vertex-colored graphs H and G, where H is from a fixed class of graphs, we want to count color-preserving H-copies in G. We show that this problem is either polynomial-time solvable or FPT or #W[1]-hard, and that the FPT cases indeed need FPT time under reasonable assumptions. 
@InProceedings{STOC17p210, author = {Radu Curticapean and Holger Dell and Dániel Marx}, title = {Homomorphisms Are a Good Basis for Counting Small Subgraphs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {210--223}, doi = {}, year = {2017}, } 
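To fix the notion of homomorphism counting on which the framework is built, here is a brute-force reference implementation (ours, exponential in |V(H)|, and nothing like the paper's fast algorithm):

```python
from itertools import product

def count_homomorphisms(H, G):
    """Count maps f: V(H) -> V(G) such that every edge {u, v} of H is
    mapped to an edge {f(u), f(v)} of G. H and G are given as edge lists
    over hashable vertices; both are treated as simple undirected graphs."""
    VH = sorted({v for e in H for v in e})
    VG = sorted({v for e in G for v in e})
    EG = {frozenset(e) for e in G}
    count = 0
    for image in product(VG, repeat=len(VH)):
        f = dict(zip(VH, image))
        # A valid homomorphism sends every edge of H onto an edge of G.
        if all(frozenset((f[u], f[v])) in EG for u, v in H):
            count += 1
    return count
```

For instance, the number of homomorphisms from a single edge into any simple graph is twice its edge count, since each edge can be hit in two orientations.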

Dagan, Yuval 
STOC '17: "Twenty (Simple) Questions ..."
Twenty (Simple) Questions
Yuval Dagan, Yuval Filmus, Ariel Gabizon, and Shay Moran (Technion, Israel; Zerocoin Electronic Coin, USA; University of California at San Diego, USA; Simons Institute for the Theory of Computing, Berkeley, USA) A basic combinatorial interpretation of Shannon’s entropy function is via the "20 questions" game. This cooperative game is played by two players, Alice and Bob: Alice picks a distribution π over the numbers {1,…,n}, and announces it to Bob. She then chooses a number x according to π, and Bob attempts to identify x using as few Yes/No queries as possible, on average. An optimal strategy for the "20 questions" game is given by a Huffman code for π: Bob’s questions reveal the codeword for x bit by bit. This strategy finds x using fewer than H(π)+1 questions on average. However, the questions asked by Bob could be arbitrary. In this paper, we investigate the following question: Are there restricted sets of questions that match the performance of Huffman codes, either exactly or approximately? Our first main result shows that for every distribution π, Bob has a strategy that uses only questions of the form "x < c?" and "x = c?", and uncovers x using at most H(π)+1 questions on average, matching the performance of Huffman codes in this sense. We also give a natural set of O(rn^{1/r}) questions that achieve a performance of at most H(π)+r, and show that Ω(rn^{1/r}) questions are required to achieve such a guarantee. Our second main result gives a set Q of 1.25^{n+o(n)} questions such that for every distribution π, Bob can implement an optimal strategy for π using only questions from Q. We also show that 1.25^{n−o(n)} questions are needed, for infinitely many n. If we allow a small slack of r over the optimal strategy, then roughly (rn)^{Θ(1/r)} questions are necessary and sufficient. 
@InProceedings{STOC17p9, author = {Yuval Dagan and Yuval Filmus and Ariel Gabizon and Shay Moran}, title = {Twenty (Simple) Questions}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {9--21}, doi = {}, year = {2017}, } 
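To illustrate the restricted question set "x < c?" / "x = c?", here is a toy strategy (ours) that ignores π and simply binary-searches: it uses about log₂ n questions for every x, whereas the paper's strategy weights the splits by π so as to use at most H(π)+1 questions on average:

```python
def find_x(oracle_less, oracle_eq, n):
    """Identify x in {1, ..., n} using only questions of the form
    'x < c?' (oracle_less) and 'x = c?' (oracle_eq).
    Plain unweighted binary search: ~log2(n) questions, independent of pi."""
    lo, hi = 1, n
    while lo < hi:
        mid = (lo + hi) // 2
        if oracle_less(mid + 1):  # asks: is x < mid + 1, i.e., x <= mid?
            hi = mid
        else:
            lo = mid + 1
    assert oracle_eq(lo)  # one final equality question confirms the answer
    return lo
```

Replacing the midpoint split by a split at a π-weighted median is the natural route toward the entropy bound, but the construction achieving exactly H(π)+1 is the paper's contribution, not this sketch.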

Dahlgaard, Søren 
STOC '17: "Finding Even Cycles Faster ..."
Finding Even Cycles Faster via Capped k-Walks
Søren Dahlgaard, Mathias Bæk Tejs Knudsen, and Morten Stöckel (University of Copenhagen, Denmark) Finding cycles in graphs is a fundamental problem in algorithmic graph theory. In this paper, we consider the problem of finding and reporting a cycle of length 2k in an undirected graph G with n nodes and m edges for constant k ≥ 2. A classic result by Bondy and Simonovits [J. Combinatorial Theory, 1974] implies that if m ≥ 100k n^{1+1/k}, then G contains a 2k-cycle, further implying that one needs to consider only graphs with m = O(n^{1+1/k}). Previously the best known algorithms were an O(n^{2}) algorithm due to Yuster and Zwick [J. Discrete Math 1997] as well as an O(m^{2−(1+⌈k/2⌉^{−1})/(k+1)}) algorithm by Alon et al. [Algorithmica 1997]. We present an algorithm that uses O(m^{2k/(k+1)}) time and finds a 2k-cycle if one exists. This bound is O(n^{2}) exactly when m = Θ(n^{1+1/k}). When finding 4-cycles our new bound coincides with Alon et al., while for every k > 2 our new bound yields a polynomial improvement in m. Yuster and Zwick noted that it is “plausible to conjecture that O(n^{2}) is the best possible bound in terms of n”. We show “conditional optimality”: if this hypothesis holds then our O(m^{2k/(k+1)}) algorithm is tight as well. Furthermore, a folklore reduction implies that no combinatorial algorithm can determine if a graph contains a 6-cycle in time O(m^{3/2−ε}) for any ε > 0 unless boolean matrix multiplication can be solved combinatorially in time O(n^{3−ε′}) for some ε′ > 0, which is widely believed to be false. Coupled with our main result, this gives tight bounds for finding 6-cycles combinatorially and also separates the complexity of finding 4- and 6-cycles, giving evidence that the exponent of m in the running time should indeed increase with k. The key ingredient in our algorithm is a new notion of capped k-walks, which are walks of length k that visit only nodes according to a fixed ordering. 
Our main technical contribution is an involved analysis proving several properties of such walks which may be of independent interest. @InProceedings{STOC17p112, author = {Søren Dahlgaard and Mathias Bæk Tejs Knudsen and Morten Stöckel}, title = {Finding Even Cycles Faster via Capped k-Walks}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {112--120}, doi = {}, year = {2017}, } 

De, Anindya 
STOC '17: "Optimal MeanBased Algorithms ..."
Optimal Mean-Based Algorithms for Trace Reconstruction
Anindya De, Ryan O'Donnell, and Rocco A. Servedio (Northwestern University, USA; Carnegie Mellon University, USA; Columbia University, USA) In the (deletion-channel) trace reconstruction problem, there is an unknown n-bit source string x. An algorithm is given access to independent “traces” of x, where a trace is formed by deleting each bit of x independently with probability δ. The goal of the algorithm is to recover x exactly (with high probability), while minimizing samples (number of traces) and running time. Previously, the best known algorithm for the trace reconstruction problem was due to Holenstein et al. [HMPW08]; it uses exp(O(n^{1/2})) samples and running time for any fixed 0 < δ < 1. It is also what we call a “mean-based algorithm”, meaning that it only uses the empirical means of the individual bits of the traces. Holenstein et al. also gave a lower bound, showing that any mean-based algorithm must use at least n^{Ω(log n)} samples. In this paper we improve both of these results, obtaining matching upper and lower bounds for mean-based trace reconstruction. For any constant deletion rate 0 < δ < 1, we give a mean-based algorithm that uses exp(O(n^{1/3})) time and traces; we also prove that any mean-based algorithm must use at least exp(Ω(n^{1/3})) traces. In fact, we obtain matching upper and lower bounds even for δ subconstant and ρ := 1−δ subconstant: when (log^{3} n)/n ≪ δ ≤ 1/2 the bound is exp(Θ((δn)^{1/3})), and when 1/√n ≪ ρ ≤ 1/2 the bound is exp(Θ((n/ρ)^{1/3})). Our proofs involve estimates for the maxima of Littlewood polynomials on complex disks. We show that these techniques can also be used to perform trace reconstruction with random insertions and bit-flips in addition to deletions. We also find a surprising result: for deletion probabilities δ > 1/2, the presence of insertions can actually help with trace reconstruction. @InProceedings{STOC17p1047, author = {Anindya De and Ryan O'Donnell and Rocco A. Servedio}, title = {Optimal Mean-Based Algorithms for Trace Reconstruction}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1047--1056}, doi = {}, year = {2017}, } 
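The deletion channel and the statistic a mean-based algorithm works from are both simple to simulate; a minimal sketch (function names are ours):

```python
import random

def trace(x, delta, rng):
    """One trace of the bit-string x: each bit is deleted independently
    with probability delta, and the surviving bits are concatenated."""
    return [b for b in x if rng.random() >= delta]

def empirical_bit_means(x, delta, num_traces, rng):
    """The input to a mean-based algorithm: for each position j, the
    average value of the j-th surviving bit over num_traces traces."""
    sums = [0.0] * len(x)
    for _ in range(num_traces):
        for j, b in enumerate(trace(x, delta, rng)):
            sums[j] += b
    return [s / num_traces for s in sums]
```

A mean-based reconstruction algorithm sees only this vector of means (not the individual traces); the paper pins the sample complexity of that restricted class at exp(Θ(n^{1/3})) for constant δ.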

Dell, Holger 
STOC '17: "Homomorphisms Are a Good Basis ..."
Homomorphisms Are a Good Basis for Counting Small Subgraphs
Radu Curticapean, Holger Dell, and Dániel Marx (Hungarian Academy of Sciences, Hungary; Saarland University, Germany) We introduce graph motif parameters, a class of graph parameters that depend only on the frequencies of constant-size induced subgraphs. Classical works by Lovász show that many interesting quantities have this form, including, for fixed graphs H, the number of H-copies (induced or not) in an input graph G, and the number of homomorphisms from H to G. We use the framework of graph motif parameters to obtain faster algorithms for counting subgraph copies of fixed graphs H in host graphs G. More precisely, for graphs H on k edges, we show how to count subgraph copies of H in time k^{O(k)}· n^{0.174k + o(k)} by a surprisingly simple algorithm. This improves upon previously known running times, such as O(n^{0.91k + c}) time for k-edge matchings or O(n^{0.46k + c}) time for k-cycles. Furthermore, we prove a general complexity dichotomy for evaluating graph motif parameters: Given a class C of such parameters, we consider the problem of evaluating f ∈ C on input graphs G, parameterized by the number of induced subgraphs that f depends upon. For every recursively enumerable class C, we prove the above problem to be either FPT or #W[1]-hard, with an explicit dichotomy criterion. This allows us to recover known dichotomies for counting subgraphs, induced subgraphs, and homomorphisms in a uniform and simplified way, together with improved lower bounds. Finally, we extend graph motif parameters to colored subgraphs and prove a complexity trichotomy: For vertex-colored graphs H and G, where H is from a fixed class of graphs, we want to count color-preserving H-copies in G. We show that this problem is either polynomial-time solvable or FPT or #W[1]-hard, and that the FPT cases indeed need FPT time under reasonable assumptions. 
@InProceedings{STOC17p210, author = {Radu Curticapean and Holger Dell and Dániel Marx}, title = {Homomorphisms Are a Good Basis for Counting Small Subgraphs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {210--223}, doi = {}, year = {2017}, } 

Devanur, Nikhil R. 
STOC '17: "Stability of Service under ..."
Stability of Service under Time-of-Use Pricing
Shuchi Chawla, Nikhil R. Devanur, Alexander E. Holroyd, Anna R. Karlin, James B. Martin, and Balasubramanian Sivan (University of Wisconsin-Madison, USA; Microsoft Research, USA; University of Washington, USA; University of Oxford, UK; Google Research, USA) We consider time-of-use pricing as a technique for matching supply and demand of temporal resources with the goal of maximizing social welfare. Relevant examples include energy, computing resources on a cloud computing platform, and charging stations for electric vehicles, among many others. A client/job in this setting has a window of time during which he needs service, and a particular value for obtaining it. We assume a stochastic model for demand, where each job materializes with some probability via an independent Bernoulli trial. Given a per-time-unit pricing of resources, any realized job will first try to get served by the cheapest available resource in its window and, failing that, will try to find service at the next cheapest available resource, and so on. Thus, the natural stochastic fluctuations in demand have the potential to lead to cascading overload events. Our main result shows that setting prices so as to optimally handle the expected demand works well: with high probability, when the actual demand is instantiated, the system is stable and the expected value of the jobs served is very close to that of the optimal offline algorithm. @InProceedings{STOC17p184, author = {Shuchi Chawla and Nikhil R. Devanur and Alexander E. Holroyd and Anna R. Karlin and James B. Martin and Balasubramanian Sivan}, title = {Stability of Service under Time-of-Use Pricing}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {184--197}, doi = {}, year = {2017}, } 

Doron, Dean 
STOC '17: "An Efficient Reduction from ..."
An Efficient Reduction from Two-Source to Non-malleable Extractors: Achieving Near-Logarithmic Min-entropy
Avraham Ben-Aroya, Dean Doron, and Amnon Ta-Shma (Tel Aviv University, Israel) The breakthrough result of Chattopadhyay and Zuckerman (2016) gives a reduction from the construction of explicit two-source extractors to the construction of explicit non-malleable extractors. However, even assuming the existence of optimal explicit non-malleable extractors only gives a two-source extractor (or a Ramsey graph) for poly(log n) entropy, rather than the optimal O(log n). In this paper we modify the construction to overcome the above barrier. Using the currently best explicit non-malleable extractors, we get explicit bipartite Ramsey graphs for sets of size 2^{k}, for k = O(log n · log log n). Any further improvement in the construction of non-malleable extractors would immediately yield a corresponding two-source extractor. Intuitively, Chattopadhyay and Zuckerman use an extractor as a sampler, and we observe that one could use a weaker object – a somewhere-random condenser with a small entropy gap and a very short seed. We also show how to explicitly construct this weaker object using the error-reduction technique of Raz, Reingold, and Vadhan (1999), and the constant-degree dispersers of Zuckerman (2006) that also work against extremely small tests. @InProceedings{STOC17p1185, author = {Avraham Ben-Aroya and Dean Doron and Amnon Ta-Shma}, title = {An Efficient Reduction from Two-Source to Non-malleable Extractors: Achieving Near-Logarithmic Min-entropy}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1185--1194}, doi = {}, year = {2017}, } 

Dughmi, Shaddin 
STOC '17: "Bernoulli Factories and BlackBox ..."
Bernoulli Factories and BlackBox Reductions in Mechanism Design
Shaddin Dughmi, Jason D. Hartline, Robert Kleinberg, and Rad Niazadeh (University of Southern California, USA; Northwestern University, USA; Cornell University, USA) We provide a polynomial-time reduction from Bayesian incentive-compatible mechanism design to Bayesian algorithm design for welfare maximization problems. Unlike prior results, our reduction achieves exact incentive compatibility for problems with multi-dimensional and continuous type spaces. The key technical barrier preventing exact incentive compatibility in prior black-box reductions is that repairing violations of incentive constraints requires understanding the distribution of the mechanism’s output, which is typically #P-hard to compute. Reductions that instead estimate the output distribution by sampling inevitably suffer from sampling error, which typically precludes exact incentive compatibility. We overcome this barrier by employing and generalizing the computational model in the literature on “Bernoulli factories”. In a Bernoulli factory problem, one is given a function mapping the bias of an “input coin” to that of an “output coin”, and the challenge is to efficiently simulate the output coin given only sample access to the input coin. We consider a generalization, which we call the “expectations from samples” computational model, in which a problem instance is specified by a function mapping the expected values of a set of input distributions to a distribution over outcomes. The challenge is to give a polynomial-time algorithm that exactly samples from the distribution over outcomes given only sample access to the input distributions. In this model we give a polynomial-time algorithm for the function given by “exponential weights”: the expected values of the input distributions correspond to the weights of alternatives, and we wish to select an alternative with probability proportional to its weight. 
This algorithm is the key ingredient in designing an incentive compatible mechanism for bipartite matching, which can be used to make the approximately incentive compatible reduction of Hartline-Malekian-Kleinberg [2015] exactly incentive compatible. @InProceedings{STOC17p158, author = {Shaddin Dughmi and Jason D. Hartline and Robert Kleinberg and Rad Niazadeh}, title = {Bernoulli Factories and Black-Box Reductions in Mechanism Design}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {158--169}, doi = {}, year = {2017}, } 
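A minimal instance of the "expectations from samples" flavor described above is selecting an alternative with probability proportional to the unknown coin biases, which a classical Bernoulli race achieves exactly. This is a sketch of that simpler primitive, not the paper's exponential-weights algorithm; biases and sample counts are illustrative.

```python
import random

def bernoulli_race(coins):
    """Return index i with probability exactly proportional to the hidden
    bias p_i, given only sample access to the coins: pick a coin uniformly,
    flip it, and accept on heads (a classical Bernoulli-factory-style trick).
    P(return i) = (p_i / n) / (sum_j p_j / n) = p_i / sum_j p_j."""
    while True:
        i = random.randrange(len(coins))
        if coins[i]():          # one sample from the chosen input coin
            return i

random.seed(0)
coins = [lambda: random.random() < 0.2,   # hidden bias 0.2
         lambda: random.random() < 0.6]   # hidden bias 0.6
counts = [0, 0]
for _ in range(20000):
    counts[bernoulli_race(coins)] += 1
print(counts[1] / sum(counts))  # close to 0.6 / (0.2 + 0.6) = 0.75
```

Note the selection probability is exact, with no sampling error in the distribution itself; only the running time is random, which is exactly the property the reduction above needs for exact incentive compatibility.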

Durfee, David 
STOC '17: "Sampling Random Spanning Trees ..."
Sampling Random Spanning Trees Faster Than Matrix Multiplication
David Durfee, Rasmus Kyng, John Peebles, Anup B. Rao, and Sushant Sachdeva (Georgia Institute of Technology, USA; Yale University, USA; Massachusetts Institute of Technology, USA; Google, USA) We present an algorithm that, with high probability, generates a random spanning tree from an edge-weighted undirected graph in Õ(n^{5/3} m^{1/3}) time. The tree is sampled from a distribution where the probability of each tree is proportional to the product of its edge weights. This improves upon the previous best algorithm due to Colbourn et al. that runs in matrix multiplication time, O(n^{ω}). For the special case of unweighted graphs, this improves upon the best previously known running time of Õ(min{n^{ω}, m√n, m^{4/3}}) for m ≫ n^{7/4} (Colbourn et al. ’96, Kelner-Madry ’09, Madry et al. ’15). The effective resistance metric is essential to our algorithm, as in the work of Madry et al., but we eschew determinant-based and random-walk-based techniques used by previous algorithms. Instead, our algorithm is based on Gaussian elimination, and the fact that effective resistance is preserved in the graph resulting from eliminating a subset of vertices (called a Schur complement). As part of our algorithm, we show how to compute approximate effective resistances for a set S of vertex pairs via approximate Schur complements in Õ(m + (n + |S|)ε^{−2}) time, without using the Johnson-Lindenstrauss lemma, which requires Õ(min{(m + |S|)ε^{−2}, m + nε^{−4} + |S|ε^{−2}}) time. We combine this approximation procedure with an error-correction procedure for handling edges where our estimate isn’t sufficiently accurate. @InProceedings{STOC17p730, author = {David Durfee and Rasmus Kyng and John Peebles and Anup B. Rao and Sushant Sachdeva}, title = {Sampling Random Spanning Trees Faster Than Matrix Multiplication}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {730--742}, doi = {}, year = {2017}, } 
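For contrast with the Schur-complement approach above, the classical random-walk technique the paper moves away from fits in a few lines: the Aldous-Broder sampler returns a uniformly random spanning tree of an unweighted graph. This is a sketch of that baseline, not the paper's algorithm; the test graph is illustrative.

```python
import random

def aldous_broder(adj, rng):
    """First-entry edges of a random walk form a uniformly random spanning
    tree (Aldous 1990, Broder 1989) -- the classical random-walk sampler,
    whose cover-time cost is what faster algorithms aim to beat."""
    nodes = list(adj)
    current = rng.choice(nodes)
    visited = {current}
    tree = set()
    while len(visited) < len(nodes):
        nxt = rng.choice(adj[current])       # one random-walk step
        if nxt not in visited:               # first entry: keep this edge
            visited.add(nxt)
            tree.add(frozenset((current, nxt)))
        current = nxt
    return tree

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}  # small illustrative graph
tree = aldous_broder(adj, random.Random(1))
print(len(tree))  # 3: a spanning tree on 4 vertices always has 3 edges
```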

Eenberg, Kasper 
STOC '17: "DecreaseKeys Are Expensive ..."
DecreaseKeys Are Expensive for External Memory Priority Queues
Kasper Eenberg, Kasper Green Larsen, and Huacheng Yu (Aarhus University, Denmark; Stanford University, USA) One of the biggest open problems in external memory data structures is the priority queue problem with DecreaseKey operations. If only Insert and ExtractMin operations need to be supported, one can design a comparison-based priority queue performing O((N/B)lg_{M/B} N) I/Os over a sequence of N operations, where B is the disk block size in number of words and M is the main memory size in number of words. This matches the lower bound for comparison-based sorting and is hence optimal for comparison-based priority queues. However, if we also need to support DecreaseKeys, the performance of the best known priority queue is only O((N/B) lg_{2} N) I/Os. The big open question is whether a degradation in performance really is necessary. We answer this question affirmatively by proving a lower bound of Ω((N/B) lg_{lg N} B) I/Os for processing a sequence of N intermixed Insert, ExtractMin, and DecreaseKey operations. Our lower bound is proved in the cell probe model and thus holds also for non-comparison-based priority queues. @InProceedings{STOC17p1081, author = {Kasper Eenberg and Kasper Green Larsen and Huacheng Yu}, title = {DecreaseKeys Are Expensive for External Memory Priority Queues}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1081--1093}, doi = {}, year = {2017}, } 
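For readers unfamiliar with the three operations in question, here is a minimal internal-memory priority queue supporting Insert, ExtractMin, and DecreaseKey via lazy invalidation on a binary heap. It is only a sketch of the interface; the paper's question is how expensive DecreaseKey must be once I/Os are counted, which this in-memory toy does not address.

```python
import heapq

class LazyPQ:
    """Insert / ExtractMin / DecreaseKey on a binary heap with lazy
    invalidation: DecreaseKey pushes a fresh entry and ExtractMin
    skips entries whose key no longer matches the current one."""
    def __init__(self):
        self.heap, self.key = [], {}
    def insert(self, item, k):
        self.key[item] = k
        heapq.heappush(self.heap, (k, item))
    def decrease_key(self, item, k):
        assert k <= self.key[item]
        self.key[item] = k
        heapq.heappush(self.heap, (k, item))   # old entry left behind as stale
    def extract_min(self):
        while True:
            k, item = heapq.heappop(self.heap)
            if self.key.get(item) == k:        # skip stale entries
                del self.key[item]
                return item

pq = LazyPQ()
pq.insert('a', 5)
pq.insert('b', 3)
pq.decrease_key('a', 1)
print(pq.extract_min())  # 'a' (its key was decreased below b's)
```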

Ehsani, Soheil 
STOC '17: "Beating 11/e for Ordered ..."
Beating 11/e for Ordered Prophets
Melika Abolhassani, Soheil Ehsani, Hossein Esfandiari, MohammadTaghi HajiAghayi, Robert Kleinberg, and Brendan Lucier (University of Maryland at College Park, USA; Cornell University, USA; Microsoft Research, USA) Hill and Kertz studied the prophet inequality on iid distributions [The Annals of Probability 1982]. They proved a theoretical bound of 1 − 1/e on the approximation factor of their algorithm. They conjectured that the best approximation factor for arbitrarily large n is 1/(1 + 1/e) ≃ 0.731. This conjecture had remained open for over 30 years prior to this paper. In this paper we present a threshold-based algorithm for the prophet inequality with n iid distributions. Using a nontrivial and novel approach we show that our algorithm is a 0.738-approximation algorithm. By beating the bound of 1/(1 + 1/e), this refutes the conjecture of Hill and Kertz. Moreover, we generalize our results to non-uniform distributions and discuss their applications in mechanism design. @InProceedings{STOC17p61, author = {Melika Abolhassani and Soheil Ehsani and Hossein Esfandiari and MohammadTaghi HajiAghayi and Robert Kleinberg and Brendan Lucier}, title = {Beating 1 − 1/e for Ordered Prophets}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {61--71}, doi = {}, year = {2017}, } 
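The gambler-versus-prophet comparison behind the abstract can be checked numerically by backward induction. The sketch below uses n iid Uniform(0,1) values purely for illustration: uniform is far from a worst-case distribution, so the resulting ratio sits well above the paper's 0.738 worst-case guarantee.

```python
# Optimal single-choice gambler vs. the prophet (who sees all values),
# for n iid Uniform(0,1) draws. Illustrative only, not the paper's
# threshold algorithm or its worst-case analysis.
def gambler_value(n):
    """With k draws left, accept X iff X >= V_{k-1}; for Uniform(0,1),
    E[max(X, v)] = P(X < v)*v + E[X | X > v]*(1 - v) = (1 + v*v) / 2."""
    v = 0.0
    for _ in range(n):
        v = (1.0 + v * v) / 2.0
    return v

n = 50
prophet = n / (n + 1.0)        # E[max of n iid Uniform(0,1)] = n/(n+1)
ratio = gambler_value(n) / prophet
print(round(ratio, 3))         # above the 0.738 guarantee, below 1
```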

Eldan, Ronen 
STOC '17: "KernelBased Methods for Bandit ..."
KernelBased Methods for Bandit Convex Optimization
Sébastien Bubeck, Yin Tat Lee, and Ronen Eldan (Microsoft Research, USA; Weizmann Institute of Science, Israel) We consider the adversarial convex bandit problem and we build the first poly(T)-time algorithm with poly(n)√T-regret for this problem. To do so we introduce three new ideas in the derivative-free optimization literature: (i) kernel methods, (ii) a generalization of Bernoulli convolutions, and (iii) a new annealing schedule for exponential weights (with increasing learning rate). The basic version of our algorithm achieves Õ(n^{9.5}√T)-regret, and we show that a simple variant of this algorithm can be run in poly(n log(T))-time per step at the cost of an additional poly(n) T^{o(1)} factor in the regret. These results improve upon the Õ(n^{11}√T)-regret and exp(poly(T))-time result of the first two authors, and the log(T)^{poly(n)}√T-regret and log(T)^{poly(n)}-time result of Hazan and Li. Furthermore we conjecture that another variant of the algorithm could achieve Õ(n^{1.5}√T)-regret, and moreover that this regret is unimprovable (the current best lower bound being Ω(n√T), achieved with linear functions). For the simpler situation of zeroth-order stochastic convex optimization this corresponds to the conjecture that the optimal query complexity is of order n^{3}/є^{2}. @InProceedings{STOC17p72, author = {Sébastien Bubeck and Yin Tat Lee and Ronen Eldan}, title = {Kernel-Based Methods for Bandit Convex Optimization}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {72--85}, doi = {}, year = {2017}, } 

Elkin, Michael 
STOC '17: "Distributed Exact Shortest ..."
Distributed Exact Shortest Paths in Sublinear Time
Michael Elkin (Ben-Gurion University of the Negev, Israel) The distributed single-source shortest paths problem is one of the most fundamental and central problems in message-passing distributed computing. The classical Bellman-Ford algorithm solves it in O(n) time, where n is the number of vertices in the input graph G. Peleg and Rubinovich, FOCS’99, showed a lower bound of Ω(D + √n) for this problem, where D is the hop-diameter of G. Whether or not this problem can be solved in o(n) time when D is relatively small is a notorious major open question. Despite intensive research that yielded near-optimal algorithms for the approximate variant of this problem, no progress was reported for the original problem. In this paper we answer this question in the affirmative. We devise an algorithm that requires O((n log n)^{5/6}) time, for D = O(√(n log n)), and O(D^{1/3} · (n log n)^{2/3}) time, for larger D. This running time is sublinear in n in almost the entire range of parameters, specifically, for D = o(n/log^{2} n). We also generalize our result in two directions. One is when edges have bandwidth b ≥ 1, and the other is the s-sources shortest paths problem. For the former problem, our algorithm provides an improved bound, compared to the unit-bandwidth case. In particular, we provide an all-pairs shortest paths algorithm that requires O(n^{5/3} · log^{2/3} n) time, even for b = 1, for all values of D. For the latter problem (of s sources), our algorithm also provides bounds that improve upon the previous state of the art in the entire range of parameters. From the technical viewpoint, our algorithm computes a hopset G″ of a skeleton graph G′ of G without first computing G′ itself. We then conduct a Bellman-Ford exploration in G′ ∪ G″, while computing the required edges of G′ on the fly. As a result, our algorithm computes exactly those edges of G′ that it really needs, rather than computing approximately the entire G′. 
@InProceedings{STOC17p757, author = {Michael Elkin}, title = {Distributed Exact Shortest Paths in Sublinear Time}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {757--770}, doi = {}, year = {2017}, } 
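The Bellman-Ford algorithm named in the abstract is, in its sequential form, a few lines of code; its distributed version takes O(n) rounds, which is the baseline the paper improves on. A minimal sketch with an illustrative directed graph:

```python
# Sequential Bellman-Ford: n-1 rounds of relaxing every edge suffice,
# since a shortest path uses at most n-1 edges.
def bellman_ford(n, edges, source):
    INF = float('inf')
    dist = [INF] * n
    dist[source] = 0
    for _ in range(n - 1):
        for u, v, w in edges:              # directed edge u -> v of weight w
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return dist

edges = [(0, 1, 4), (0, 2, 1), (2, 1, 2), (1, 3, 1)]  # illustrative graph
print(bellman_ford(4, edges, 0))  # [0, 3, 1, 4]
```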

Esfandiari, Hossein 
STOC '17: "Beating 11/e for Ordered ..."
Beating 11/e for Ordered Prophets
Melika Abolhassani, Soheil Ehsani, Hossein Esfandiari, MohammadTaghi HajiAghayi, Robert Kleinberg, and Brendan Lucier (University of Maryland at College Park, USA; Cornell University, USA; Microsoft Research, USA) Hill and Kertz studied the prophet inequality on iid distributions [The Annals of Probability 1982]. They proved a theoretical bound of 1 − 1/e on the approximation factor of their algorithm. They conjectured that the best approximation factor for arbitrarily large n is 1/1+1/e≃ 0.731. This conjecture remained open prior to this paper for over 30 years. In this paper we present a thresholdbased algorithm for the prophet inequality with n iid distributions. Using a nontrivial and novel approach we show that our algorithm is a 0.738approximation algorithm. By beating the bound of 1/1+1/e, this refutes the conjecture of Hill and Kertz. Moreover, we generalize our results to nonuniform distributions and discuss its applications in mechanism design. @InProceedings{STOC17p61, author = {Melika Abolhassani and Soheil Ehsani and Hossein Esfandiari and MohammadTaghi HajiAghayi and Robert Kleinberg and Brendan Lucier}, title = {Beating 11/e for Ordered Prophets}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {6171}, doi = {}, year = {2017}, } 

Fan, Zhou 
STOC '17: "How Well Do Local Algorithms ..."
How Well Do Local Algorithms Solve Semidefinite Programs?
Zhou Fan and Andrea Montanari (Stanford University, USA) Several probabilistic models from high-dimensional statistics and machine learning reveal an intriguing and yet poorly understood dichotomy. Either simple local algorithms succeed in estimating the object of interest, or even sophisticated semidefinite programming (SDP) relaxations fail. In order to explore this phenomenon, we study a classical SDP relaxation of the minimum graph bisection problem, when applied to Erdős-Rényi random graphs with bounded average degree d > 1, and obtain several types of results. First, we use a dual witness construction (using the so-called non-backtracking matrix of the graph) to upper bound the SDP value. Second, we prove that a simple local algorithm approximately solves the SDP to within a factor 2d^{2}/(2d^{2} + d − 1) of the upper bound. In particular, the local algorithm is at most 8/9 suboptimal, and 1 + O(d^{−1}) suboptimal for large degree. We then analyze a more sophisticated local algorithm, which aggregates information according to the harmonic measure on the limiting Galton-Watson (GW) tree. The resulting lower bound is expressed in terms of the conductance of the GW tree and matches surprisingly well the empirically determined SDP values on large-scale Erdős-Rényi graphs. We finally consider the planted partition model. In this case, purely local algorithms are known to fail, but they do succeed if a small amount of side information is available. Our results imply quantitative bounds on the threshold for partial recovery using SDP in this model. @InProceedings{STOC17p604, author = {Zhou Fan and Andrea Montanari}, title = {How Well Do Local Algorithms Solve Semidefinite Programs?}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {604--614}, doi = {}, year = {2017}, } 

Feige, Uriel 
STOC '17: "Approximate Modularity Revisited ..."
Approximate Modularity Revisited
Uriel Feige, Michal Feldman, and Inbal Talgam-Cohen (Weizmann Institute of Science, Israel; Microsoft Research, Israel; Tel Aviv University, Israel; Hebrew University of Jerusalem, Israel) Set functions with convenient properties (such as submodularity) appear in application areas of current interest, such as algorithmic game theory, and allow for improved optimization algorithms. It is natural to ask (e.g., in the context of data-driven optimization) how robust such properties are, and whether small deviations from them can be tolerated. We consider two such questions in the important special case of linear set functions. One question that we address is whether any set function that approximately satisfies the modularity equation (linear functions satisfy the modularity equation exactly) is close to a linear function. The answer to this is positive (in a precise formal sense), as shown by Kalton and Roberts [1983] (and further improved by Bondarenko, Prymak, and Radchenko [2013]). We revisit their proof idea, which is based on expander graphs, and provide significantly stronger upper bounds by combining it with new techniques. Furthermore, we provide improved lower bounds for this problem. Another question that we address is how to learn a linear function h that is close to an approximately linear function f, while querying the value of f on only a small number of sets. We present a deterministic algorithm that makes only linearly many (in the number of items) nonadaptive queries, thereby improving over a previous algorithm of Chierichetti, Das, Dasgupta, and Kumar [2015] that is randomized and makes more than a quadratic number of queries. Our learning algorithm is based on a Hadamard transform. @InProceedings{STOC17p1028, author = {Uriel Feige and Michal Feldman and Inbal Talgam-Cohen}, title = {Approximate Modularity Revisited}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1028--1041}, doi = {}, year = {2017}, } 
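The modularity equation referenced in the abstract is f(S ∪ T) + f(S ∩ T) = f(S) + f(T). A quick sketch (with illustrative weights and perturbation, not the paper's constructions) verifies that a linear set function satisfies it exactly, while a small pointwise perturbation violates it only boundedly:

```python
from itertools import combinations

# Illustrative weights; linear(S) is modular, perturbed(S) deviates by <= 0.1.
weights = {'a': 2.0, 'b': 5.0, 'c': 1.5}

def linear(S):
    return sum(weights[x] for x in S)

def perturbed(S):
    return linear(S) + (0.1 if len(S) % 2 else 0.0)

for A, B in combinations([{'a'}, {'a', 'b'}, {'b', 'c'}], 2):
    lhs = linear(A | B) + linear(A & B)
    rhs = linear(A) + linear(B)
    assert abs(lhs - rhs) < 1e-12      # linear => exact modularity
    gap = perturbed(A | B) + perturbed(A & B) - perturbed(A) - perturbed(B)
    assert abs(gap) <= 0.4             # four terms, each perturbed by <= 0.1
print("modularity verified")
```

The paper's first question is the converse direction: if every such gap is small, must the function be close to some linear one?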

Feldman, Michal 
STOC '17: "Approximate Modularity Revisited ..."
Approximate Modularity Revisited
Uriel Feige, Michal Feldman, and Inbal Talgam-Cohen (Weizmann Institute of Science, Israel; Microsoft Research, Israel; Tel Aviv University, Israel; Hebrew University of Jerusalem, Israel) Set functions with convenient properties (such as submodularity) appear in application areas of current interest, such as algorithmic game theory, and allow for improved optimization algorithms. It is natural to ask (e.g., in the context of data-driven optimization) how robust such properties are, and whether small deviations from them can be tolerated. We consider two such questions in the important special case of linear set functions. One question that we address is whether any set function that approximately satisfies the modularity equation (linear functions satisfy the modularity equation exactly) is close to a linear function. The answer to this is positive (in a precise formal sense), as shown by Kalton and Roberts [1983] (and further improved by Bondarenko, Prymak, and Radchenko [2013]). We revisit their proof idea, which is based on expander graphs, and provide significantly stronger upper bounds by combining it with new techniques. Furthermore, we provide improved lower bounds for this problem. Another question that we address is how to learn a linear function h that is close to an approximately linear function f, while querying the value of f on only a small number of sets. We present a deterministic algorithm that makes only linearly many (in the number of items) nonadaptive queries, thereby improving over a previous algorithm of Chierichetti, Das, Dasgupta, and Kumar [2015] that is randomized and makes more than a quadratic number of queries. Our learning algorithm is based on a Hadamard transform. @InProceedings{STOC17p1028, author = {Uriel Feige and Michal Feldman and Inbal Talgam-Cohen}, title = {Approximate Modularity Revisited}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1028--1041}, doi = {}, year = {2017}, } 

Filmus, Yuval 
STOC '17: "Twenty (Simple) Questions ..."
Twenty (Simple) Questions
Yuval Dagan, Yuval Filmus, Ariel Gabizon, and Shay Moran (Technion, Israel; Zerocoin Electronic Coin, USA; University of California at San Diego, USA; Simons Institute for the Theory of Computing Berkeley, USA) A basic combinatorial interpretation of Shannon’s entropy function is via the “20 questions” game. This cooperative game is played by two players, Alice and Bob: Alice picks a distribution π over the numbers {1,…,n}, and announces it to Bob. She then chooses a number x according to π, and Bob attempts to identify x using as few Yes/No queries as possible, on average. An optimal strategy for the “20 questions” game is given by a Huffman code for π: Bob’s questions reveal the codeword for x bit by bit. This strategy finds x using fewer than H(π)+1 questions on average. However, the questions asked by Bob could be arbitrary. In this paper, we investigate the following question: *Are there restricted sets of questions that match the performance of Huffman codes, either exactly or approximately?* Our first main result shows that for every distribution π, Bob has a strategy that uses only questions of the form “x < c?” and “x = c?”, and uncovers x using at most H(π)+1 questions on average, matching the performance of Huffman codes in this sense. We also give a natural set of O(rn^{1/r}) questions that achieves a performance of at most H(π)+r, and show that Ω(rn^{1/r}) questions are required to achieve such a guarantee. Our second main result gives a set Q of 1.25^{n+o(n)} questions such that for every distribution π, Bob can implement an optimal strategy for π using only questions from Q. We also show that 1.25^{n−o(n)} questions are needed, for infinitely many n. If we allow a small slack of r over the optimal strategy, then roughly (rn)^{Θ(1/r)} questions are necessary and sufficient. 
@InProceedings{STOC17p9, author = {Yuval Dagan and Yuval Filmus and Ariel Gabizon and Shay Moran}, title = {Twenty (Simple) Questions}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {9--21}, doi = {}, year = {2017}, } 
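The Huffman benchmark in the abstract, an expected codeword length in [H(π), H(π)+1), can be checked directly. The distribution below is an illustrative example; the merge-based length computation is a standard trick, not the paper's construction.

```python
import heapq, math

def huffman_expected_length(probs):
    """Expected codeword length of a Huffman code for probs: each merge of
    two subtrees adds one bit to every leaf below it, so summing the merged
    masses gives sum_i p_i * depth_i without building the tree explicitly."""
    heap = [(p, i) for i, p in enumerate(probs)]  # index breaks ties safely
    heapq.heapify(heap)
    total, next_id = 0.0, len(probs)
    while len(heap) > 1:
        p1, _ = heapq.heappop(heap)
        p2, _ = heapq.heappop(heap)
        total += p1 + p2
        heapq.heappush(heap, (p1 + p2, next_id))
        next_id += 1
    return total

pi = [0.4, 0.3, 0.2, 0.1]                  # illustrative distribution
H = -sum(p * math.log2(p) for p in pi)     # Shannon entropy H(pi)
L = huffman_expected_length(pi)
print(round(L, 3), round(H, 3))            # 1.9 1.846, so H <= L < H+1 holds
```

The paper's first result shows the same H(π)+1 guarantee survives even when Bob is restricted to comparison and equality questions.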

Forbes, Michael A. 
STOC '17: "Succinct Hitting Sets and ..."
Succinct Hitting Sets and Barriers to Proving Algebraic Circuits Lower Bounds
Michael A. Forbes, Amir Shpilka, and Ben Lee Volk (Simons Institute for the Theory of Computing Berkeley, USA; Tel Aviv University, Israel) We formalize a framework of algebraically natural lower bounds for algebraic circuits. Just as with the natural proofs notion of Razborov and Rudich for boolean circuit lower bounds, our notion of algebraically natural lower bounds captures nearly all lower bound techniques known. However, unlike the boolean setting, there has been no concrete evidence demonstrating that this is a barrier to obtaining super-polynomial lower bounds for general algebraic circuits, as there is little understanding of whether algebraic circuits are expressive enough to support "cryptography" secure against algebraic circuits. Following a similar result of Williams in the boolean setting, we show that the existence of an algebraic natural proofs barrier is equivalent to the existence of succinct derandomization of the polynomial identity testing problem. That is, whether the coefficient vectors of polylog(N)-degree polylog(N)-size circuits form a hitting set for the class of poly(N)-degree poly(N)-size circuits. Further, we give an explicit universal construction showing that if such a succinct hitting set exists, then our universal construction suffices. We then assess the existing literature constructing hitting sets for restricted classes of algebraic circuits and observe that none of them are succinct as given. Yet, we show how to modify some of these constructions to obtain succinct hitting sets. This constitutes the first evidence supporting the existence of an algebraic natural proofs barrier. Our framework is similar to the Geometric Complexity Theory (GCT) program of Mulmuley and Sohoni, except that here we emphasize constructiveness of the proofs while the GCT program emphasizes symmetry. 
Nevertheless, our succinct hitting sets have relevance to the GCT program, as they imply lower bounds for the complexity of the defining equations of polynomials computed by small circuits. @InProceedings{STOC17p653, author = {Michael A. Forbes and Amir Shpilka and Ben Lee Volk}, title = {Succinct Hitting Sets and Barriers to Proving Algebraic Circuits Lower Bounds}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {653--664}, doi = {}, year = {2017}, } 

Foster, Nate 
STOC '17: "The Next 700 Network Programming ..."
The Next 700 Network Programming Languages (Invited Talk)
Nate Foster (Cornell University, USA) Specification and verification of computer networks has become a reality in recent years, with the emergence of domain-specific programming languages and automated reasoning tools. But the design of these frameworks has been largely ad hoc, driven more by the needs of applications and the capabilities of hardware than by any foundational principles. This talk will present NetKAT, a language for programming networks based on a well-studied mathematical foundation: regular languages and finite automata. The talk will describe the design of the language, discuss its semantic underpinnings, and present highlights from ongoing work extending the language with stateful and probabilistic features. @InProceedings{STOC17p7, author = {Nate Foster}, title = {The Next 700 Network Programming Languages (Invited Talk)}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {7--7}, doi = {}, year = {2017}, } 

Fu, Zhiguo 
STOC '17: "Holographic Algorithm with ..."
Holographic Algorithm with Matchgates Is Universal for Planar #CSP over Boolean Domain
Jin-Yi Cai and Zhiguo Fu (University of Wisconsin-Madison, USA; Jilin University, China) We prove a complexity classification theorem that classifies all counting constraint satisfaction problems (#CSP) over Boolean variables into exactly three classes: (1) polynomial-time solvable; (2) #P-hard for general instances, but solvable in polynomial time over planar structures; and (3) #P-hard over planar structures. The classification applies to all finite sets of complex-valued, not necessarily symmetric, constraint functions on Boolean variables. It is shown that Valiant's holographic algorithm with matchgates is a universal strategy for all problems in class (2). @InProceedings{STOC17p842, author = {Jin-Yi Cai and Zhiguo Fu}, title = {Holographic Algorithm with Matchgates Is Universal for Planar #CSP over Boolean Domain}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {842--855}, doi = {}, year = {2017}, } 

Gabizon, Ariel 
STOC '17: "Twenty (Simple) Questions ..."
Twenty (Simple) Questions
Yuval Dagan, Yuval Filmus, Ariel Gabizon, and Shay Moran (Technion, Israel; Zerocoin Electronic Coin, USA; University of California at San Diego, USA; Simons Institute for the Theory of Computing Berkeley, USA) A basic combinatorial interpretation of Shannon’s entropy function is via the “20 questions” game. This cooperative game is played by two players, Alice and Bob: Alice picks a distribution π over the numbers {1,…,n}, and announces it to Bob. She then chooses a number x according to π, and Bob attempts to identify x using as few Yes/No queries as possible, on average. An optimal strategy for the “20 questions” game is given by a Huffman code for π: Bob’s questions reveal the codeword for x bit by bit. This strategy finds x using fewer than H(π)+1 questions on average. However, the questions asked by Bob could be arbitrary. In this paper, we investigate the following question: *Are there restricted sets of questions that match the performance of Huffman codes, either exactly or approximately?* Our first main result shows that for every distribution π, Bob has a strategy that uses only questions of the form “x < c?” and “x = c?”, and uncovers x using at most H(π)+1 questions on average, matching the performance of Huffman codes in this sense. We also give a natural set of O(rn^{1/r}) questions that achieves a performance of at most H(π)+r, and show that Ω(rn^{1/r}) questions are required to achieve such a guarantee. Our second main result gives a set Q of 1.25^{n+o(n)} questions such that for every distribution π, Bob can implement an optimal strategy for π using only questions from Q. We also show that 1.25^{n−o(n)} questions are needed, for infinitely many n. If we allow a small slack of r over the optimal strategy, then roughly (rn)^{Θ(1/r)} questions are necessary and sufficient. 
@InProceedings{STOC17p9, author = {Yuval Dagan and Yuval Filmus and Ariel Gabizon and Shay Moran}, title = {Twenty (Simple) Questions}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {9--21}, doi = {}, year = {2017}, } 

Ganesh, Arun 
STOC '17: "Online Service with Delay ..."
Online Service with Delay
Yossi Azar, Arun Ganesh, Rong Ge, and Debmalya Panigrahi (Tel Aviv University, Israel; Duke University, USA) In this paper, we introduce the online service with delay problem. In this problem, there are n points in a metric space that issue service requests over time, and a server that serves these requests. The goal is to minimize the sum of the distance traveled by the server and the total delay (or a penalty function thereof) in serving the requests. This problem models the fundamental trade-off between batching requests to improve locality and reducing delay to improve response time, which has many applications in operations management, operating systems, logistics, supply chain management, and scheduling. Our main result is a polylogarithmic competitive ratio for the online service with delay problem. This result is obtained by an algorithm that we call the preemptive service algorithm. The salient feature of this algorithm is a process called preemptive service, which uses a novel combination of (recursive) time forwarding and spatial exploration on a metric space. We also generalize our results to k > 1 servers, and obtain stronger results for special metrics such as uniform and star metrics that correspond to (weighted) paging problems. @InProceedings{STOC17p551, author = {Yossi Azar and Arun Ganesh and Rong Ge and Debmalya Panigrahi}, title = {Online Service with Delay}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {551--563}, doi = {}, year = {2017}, } 

Garg, Ankit 
STOC '17: "Algorithmic and Optimization ..."
Algorithmic and Optimization Aspects of BrascampLieb Inequalities, via Operator Scaling
Ankit Garg, Leonid Gurvits, Rafael Oliveira, and Avi Wigderson (Microsoft Research, USA; City College of New York, USA; Princeton University, USA; IAS, USA) The celebrated Brascamp-Lieb (BL) inequalities [BL76, Lie90], and their reverse form of Barthe [Bar98], are an important mathematical tool, unifying and generalizing numerous inequalities in analysis, convex geometry, and information theory, with many used in computer science. While their structural theory is very well understood, far less is known about computing their main parameters (defined below). Prior to this work, the best known algorithms for any of these optimization tasks required at least exponential time. In this work, we give polynomial-time algorithms to compute: (1) feasibility of a BL-datum, (2) the optimal BL-constant, (3) a weak separation oracle for BL-polytopes. What is particularly exciting about this progress, beyond the better understanding of BL inequalities, is that the objects above naturally encode rich families of optimization problems which had no prior efficient algorithms. In particular, the BL-constants (which we efficiently compute) are solutions to non-convex optimization problems, and the BL-polytopes (for which we provide efficient membership and separation oracles) are linear programs with exponentially many facets. Thus we hope that new combinatorial optimization problems can be solved via reductions to the ones above, and we make modest initial steps in exploring this possibility. Our algorithms are obtained by a simple efficient reduction of a given BL-datum to an instance of the Operator Scaling problem defined by [Gur04]. To obtain the results above, we utilize the two (very recent and different) algorithms for the operator scaling problem [GGOW16, IQS15a]. Our reduction implies algorithmic versions of many of the known structural results on BL-inequalities, and in some cases provides proofs that are different or simpler than the existing ones. 
Further, the analytic properties of the [GGOW16] algorithm provide new, effective bounds on the magnitude and continuity of BL-constants, with applications to nonlinear versions of BL-inequalities; prior work relied on compactness, and thus provided no bounds. On a higher level, our application of the operator scaling algorithm to BL-inequalities further connects analysis and optimization with the diverse mathematical areas used so far to motivate and solve the operator scaling problem, which include commutative invariant theory, noncommutative algebra, computational complexity, and quantum information theory. @InProceedings{STOC17p397, author = {Ankit Garg and Leonid Gurvits and Rafael Oliveira and Avi Wigderson}, title = {Algorithmic and Optimization Aspects of Brascamp-Lieb Inequalities, via Operator Scaling}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {397--409}, doi = {}, year = {2017}, } 
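Operator scaling generalizes classical matrix (Sinkhorn) scaling, which alternately normalizes the rows and columns of a positive matrix. The sketch below illustrates only this commutative special case, not the reduction from BL-data or the [GGOW16] algorithm itself; all names and parameters are illustrative.

```python
# Sinkhorn matrix scaling: the commutative special case of operator scaling.
# Alternately normalize rows and columns of an entrywise-positive matrix;
# the iteration converges to a doubly stochastic matrix.

def sinkhorn(A, iters=200):
    n = len(A)
    A = [row[:] for row in A]
    for _ in range(iters):
        for i in range(n):                      # row normalization
            s = sum(A[i])
            A[i] = [x / s for x in A[i]]
        for j in range(n):                      # column normalization
            s = sum(A[i][j] for i in range(n))
            for i in range(n):
                A[i][j] /= s
    return A

B = sinkhorn([[2.0, 1.0], [1.0, 3.0]])
row_sums = [sum(r) for r in B]
col_sums = [sum(B[i][j] for i in range(2)) for j in range(2)]
```

The operator version replaces row and column sums by the completely positive maps defined by the input matrices; the reduction in the paper maps a BL-datum to such an instance.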

Garg, Jugal 
STOC '17: "Settling the Complexity of ..."
Settling the Complexity of Leontief and PLC Exchange Markets under Exact and Approximate Equilibria
Jugal Garg, Ruta Mehta, Vijay V. Vazirani, and Sadra Yazdanbod (University of Illinois at Urbana-Champaign, USA; Georgia Institute of Technology, USA) Our first result shows membership in PPAD for the problem of computing approximate equilibria for an Arrow-Debreu exchange market under piecewise-linear concave (PLC) utility functions. As a corollary we also obtain membership in PPAD for Leontief utility functions. This settles an open question of Vazirani and Yannakakis (2011). Next we show FIXP-hardness of computing equilibria in Arrow-Debreu exchange markets under Leontief utility functions, and in Arrow-Debreu markets under linear utility functions and Leontief production sets, thereby settling these open questions of Vazirani and Yannakakis (2011). As corollaries, we obtain FIXP-hardness for PLC utilities and for Arrow-Debreu markets under linear utility functions and polyhedral production sets. In all cases, as required under FIXP, the set of instances mapped onto will admit equilibria, i.e., will be "yes" instances. If all instances are under consideration, then in all cases we prove that the problem of deciding whether a given instance admits an equilibrium is ETR-complete, where ETR is the class Existential Theory of the Reals. As a consequence of the results stated above, and the fact that membership in FIXP has been established for PLC utilities, the entire computational difficulty of Arrow-Debreu markets under PLC utility functions lies in the Leontief utility subcase. This is perhaps the most unexpected aspect of our result, since Leontief utilities are meant for the case in which goods are perfect complements, whereas PLC utilities are very general, capturing not only the cases when goods are complements and substitutes, but also arbitrary combinations of these and much more. Finally, we give a polynomial-time algorithm for finding an equilibrium in Arrow-Debreu exchange markets under Leontief utility functions provided the number of agents is a constant. 
This settles part of an open problem of Devanur and Kannan (2008). @InProceedings{STOC17p890, author = {Jugal Garg and Ruta Mehta and Vijay V. Vazirani and Sadra Yazdanbod}, title = {Settling the Complexity of Leontief and PLC Exchange Markets under Exact and Approximate Equilibria}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {890--901}, doi = {}, year = {2017}, } 
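For intuition, a Leontief agent values goods only in fixed proportions: with coefficients a, the utility is u(x) = min_j x_j / a_j, so at prices p an agent with a given budget spends it all on the bundle in proportion a. The snippet below uses these standard textbook formulas; it is illustrative only and not specific to the paper's constructions.

```python
# Leontief utility u(x) = min_j x_j / a_j : goods are perfect complements,
# consumed only in the fixed proportions a. (Textbook formulas for intuition;
# the paper's hardness results concern equilibria of markets of such agents.)

def leontief_utility(x, a):
    return min(xj / aj for xj, aj in zip(x, a) if aj > 0)

def leontief_demand(p, a, budget):
    # Spending the whole budget in proportion a maximizes min_j x_j / a_j.
    cost_of_bundle = sum(pj * aj for pj, aj in zip(p, a))
    scale = budget / cost_of_bundle
    return [aj * scale for aj in a]

# One unit of each good per "bundle"; the bundle costs 1 + 2 = 3,
# so a budget of 6 buys two bundles.
x = leontief_demand(p=[1.0, 2.0], a=[1.0, 1.0], budget=6.0)
```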

Garg, Shashwat 
STOC '17: "Algorithmic Discrepancy Beyond ..."
Algorithmic Discrepancy Beyond Partial Coloring
Nikhil Bansal and Shashwat Garg (Eindhoven University of Technology, Netherlands) The partial coloring method is one of the most powerful and widely used methods in combinatorial discrepancy problems. However, in many cases it leads to suboptimal bounds, as the partial coloring step must be iterated a logarithmic number of times and the errors can add up in an adversarial way. We give a new and general algorithmic framework that overcomes the limitations of the partial coloring method and can be applied in a black-box manner to various problems. Using this framework, we give new improved bounds and algorithms for several classic problems in discrepancy. In particular, for Tusnady’s problem, we give an improved O(log^{2} n) bound for the discrepancy of axis-parallel rectangles and, more generally, an O_{d}(log^{d} n) bound for d-dimensional boxes in ℝ^{d}. Previously, even nonconstructively, the best bounds were O(log^{2.5} n) and O_{d}(log^{d+0.5} n), respectively. Similarly, for the Steinitz problem we give the first algorithm that matches the best known nonconstructive bounds due to Banaszczyk in the ℓ_{∞} case, and improves the previous algorithmic bounds substantially in the ℓ_{2} case. Our framework is based upon a substantial generalization of the techniques developed recently in the context of the Komlós discrepancy problem. @InProceedings{STOC17p914, author = {Nikhil Bansal and Shashwat Garg}, title = {Algorithmic Discrepancy Beyond Partial Coloring}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {914--926}, doi = {}, year = {2017}, } STOC '17: "Faster Space-Efficient Algorithms ..." 
Faster Space-Efficient Algorithms for Subset Sum and k-Sum Nikhil Bansal, Shashwat Garg, Jesper Nederlof, and Nikhil Vyas (Eindhoven University of Technology, Netherlands; IIT Bombay, India) We present randomized algorithms that solve Subset Sum and Knapsack instances with n items in O^{*}(2^{0.86n}) time, where the O^{*}(·) notation suppresses factors polynomial in the input size, and polynomial space, assuming random read-only access to exponentially many random bits. These results can be extended to solve Binary Linear Programming on n variables with few constraints in a similar running time. We also show that for any constant k≥ 2, random instances of k-Sum can be solved using O(n^{k−0.5} polylog(n)) time and O(log n) space, without the assumption of random access to random bits. Underlying these results is an algorithm that determines whether two given lists of length n with integers bounded by a polynomial in n share a common value. Assuming random read-only access to random bits, we show that this problem can be solved using O(log n) space significantly faster than the trivial O(n^{2}) time algorithm if no value occurs too often in the same list. @InProceedings{STOC17p198, author = {Nikhil Bansal and Shashwat Garg and Jesper Nederlof and Nikhil Vyas}, title = {Faster Space-Efficient Algorithms for Subset Sum and k-Sum}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {198--209}, doi = {}, year = {2017}, } 
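As a point of reference, the classical meet-in-the-middle algorithm of Horowitz and Sahni solves Subset Sum in O*(2^{n/2}) time but also uses O*(2^{n/2}) space; it is this time-space tradeoff that the algorithms above improve. A minimal illustrative sketch of that baseline:

```python
from itertools import combinations

def subset_sum_mitm(items, target):
    # Classical meet-in-the-middle: enumerate all subset sums of each half,
    # then look for a pair summing to the target. O*(2^(n/2)) time, but also
    # exponential space -- the regime the space-efficient algorithms improve.
    half = len(items) // 2
    left, right = items[:half], items[half:]

    def sums(part):
        out = set()
        for r in range(len(part) + 1):
            for combo in combinations(part, r):
                out.add(sum(combo))
        return out

    right_sums = sums(right)
    return any(target - s in right_sums for s in sums(left))

found = subset_sum_mitm([3, 34, 4, 12, 5, 2], 9)   # 4 + 5 = 9
```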

Ge, Rong 
STOC '17: "Online Service with Delay ..."
Online Service with Delay
Yossi Azar, Arun Ganesh, Rong Ge, and Debmalya Panigrahi (Tel Aviv University, Israel; Duke University, USA) In this paper, we introduce the online service with delay problem. In this problem, there are n points in a metric space that issue service requests over time, and a server that serves these requests. The goal is to minimize the sum of the distance traveled by the server and the total delay (or a penalty function thereof) in serving the requests. This problem models the fundamental tradeoff between batching requests to improve locality and reducing delay to improve response time, which has many applications in operations management, operating systems, logistics, supply chain management, and scheduling. Our main result is a polylogarithmic competitive ratio for the online service with delay problem. This result is obtained by an algorithm that we call the preemptive service algorithm. The salient feature of this algorithm is a process called preemptive service, which uses a novel combination of (recursive) time forwarding and spatial exploration on a metric space. We also generalize our results to k > 1 servers, and obtain stronger results for special metrics such as uniform and star metrics that correspond to (weighted) paging problems. @InProceedings{STOC17p551, author = {Yossi Azar and Arun Ganesh and Rong Ge and Debmalya Panigrahi}, title = {Online Service with Delay}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {551--563}, doi = {}, year = {2017}, } STOC '17: "Provable Learning of Noisy-or ..." Provable Learning of Noisy-or Networks Sanjeev Arora, Rong Ge, Tengyu Ma, and Andrej Risteski (Princeton University, USA; Duke University, USA) Many machine learning applications use latent variable models to explain structure in data, whereby visible variables (= coordinates of the given datapoint) are explained as a probabilistic function of some hidden variables. 
Learning the model, that is, the mapping from hidden variables to visible ones and vice versa, is NP-hard even in very simple settings. In recent years, provably efficient algorithms were nevertheless developed for models with linear structure: topic models, mixture models, hidden Markov models, etc. These algorithms use matrix or tensor decomposition, and make some reasonable assumptions about the parameters of the underlying model. But matrix or tensor decomposition seems of little use when the latent variable model has nonlinearities. The current paper shows how to make progress: tensor decomposition is applied to learning the single-layer noisy-OR network, which is a textbook example of a Bayes net, and is used, for example, in the classic QMR-DT software for diagnosing which disease(s) a patient may have by observing the symptoms he/she exhibits. The technical novelty here, which should be useful in other settings in the future, is the analysis of tensor decomposition in the presence of systematic error (i.e., where the noise/error is correlated with the signal, and does not decrease as the number of samples goes to infinity). This requires rethinking all steps of tensor decomposition methods from the ground up. For simplicity, our analysis is stated assuming that the network parameters were chosen from a probability distribution, but the method seems more generally applicable. @InProceedings{STOC17p1057, author = {Sanjeev Arora and Rong Ge and Tengyu Ma and Andrej Risteski}, title = {Provable Learning of Noisy-or Networks}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1057--1066}, doi = {}, year = {2017}, } 
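A single-layer noisy-OR network in its standard textbook parameterization can be sampled as follows. The sketch is illustrative only: the parameter names and the optional leak term are assumptions of this example, not taken from the paper.

```python
import random

random.seed(0)

def sample_noisy_or(prior, W, leak=0.0):
    # Single-layer noisy-OR: hidden binary "diseases" d_i are drawn
    # independently; symptom j fires unless every active cause
    # independently fails to trigger it:
    #   P(s_j = 1 | d) = 1 - (1 - leak) * prod_i (1 - W[i][j])^{d_i}
    d = [1 if random.random() < p else 0 for p in prior]
    s = []
    for j in range(len(W[0])):
        p_off = 1.0 - leak
        for i, di in enumerate(d):
            if di:
                p_off *= 1.0 - W[i][j]
        s.append(1 if random.random() < 1.0 - p_off else 0)
    return d, s

# With identity weights and no leak, each symptom copies its disease.
d, s = sample_noisy_or(prior=[0.5, 0.5], W=[[1.0, 0.0], [0.0, 1.0]], leak=0.0)
```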

Ghaffari, Mohsen 
STOC '17: "On the Complexity of Local ..."
On the Complexity of Local Distributed Graph Problems
Mohsen Ghaffari, Fabian Kuhn, and Yannic Maus (ETH Zurich, Switzerland; University of Freiburg, Germany) This paper is centered on the complexity of graph problems in the well-studied LOCAL model of distributed computing, introduced by Linial [FOCS ’87]. It is widely known that for many of the classic distributed graph problems (including maximal independent set (MIS) and (Δ+1)-vertex coloring), the randomized complexity is at most polylogarithmic in the size n of the network, while the best deterministic complexity is typically 2^{O(√log n)}. Understanding and potentially narrowing down this exponential gap is considered to be one of the central long-standing open questions in the area of distributed graph algorithms. We investigate the problem by introducing a complexity-theoretic framework that allows us to shed some light on the role of randomness in the LOCAL model. We define the SLOCAL model as a sequential version of the LOCAL model. Our framework allows us to prove completeness results with respect to the class of problems which can be solved efficiently in the SLOCAL model, implying that if any of the complete problems can be solved deterministically in log n rounds in the LOCAL model, we can deterministically solve all efficient SLOCAL problems (including MIS and (Δ+1)-coloring) in log n rounds in the LOCAL model. Perhaps most surprisingly, we show that a rather rudimentary-looking graph coloring problem is complete in the above sense: color the nodes of a graph with colors red and blue such that each node of sufficiently large polylogarithmic degree has at least one neighbor of each color. The problem admits a trivial zero-round randomized solution. The result can be viewed as showing that the only obstacle to getting efficient deterministic algorithms in the LOCAL model is an efficient algorithm to approximately round fractional values into integer values. 
In addition, our formal framework also allows us to develop polylogarithmic-time randomized distributed algorithms in a simpler way. As a result, we provide a polylog-time distributed approximation scheme for arbitrary distributed covering and packing integer linear programs. @InProceedings{STOC17p784, author = {Mohsen Ghaffari and Fabian Kuhn and Yannic Maus}, title = {On the Complexity of Local Distributed Graph Problems}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {784--797}, doi = {}, year = {2017}, } 
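The "trivial zero-round randomized solution" mentioned above can be simulated directly: every node independently flips a fair coin, and a node with d neighbors misses one of the two colors only with probability 2^{1−d}, which is negligible once d is polylogarithmic. A toy check on a star graph (graph and sizes are illustrative assumptions):

```python
import random

random.seed(1)

def random_two_coloring(num_nodes):
    # Zero rounds of communication: every node picks red/blue independently.
    return [random.choice(("red", "blue")) for _ in range(num_nodes)]

# A star whose centre (node 0) has 200 leaves. The centre misses a colour
# only with probability 2 * 2^{-200}, so it sees both essentially surely.
n = 201
colors = random_two_coloring(n)
neighbor_colors = {colors[v] for v in range(1, n)}
```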

Gharan, Shayan Oveis 
STOC '17: "A Generalization of Permanent ..."
A Generalization of Permanent Inequalities and Applications in Counting and Optimization
Nima Anari and Shayan Oveis Gharan (Stanford University, USA; University of Washington, USA) A polynomial p∈ℝ[z_{1},…,z_{n}] is real stable if it has no roots in the upper half of the complex plane. Gurvits’s permanent inequality gives a lower bound on the coefficient of the z_{1}z_{2}… z_{n} monomial of a real stable polynomial p with nonnegative coefficients. This fundamental inequality has been used to attack several counting and optimization problems. Here, we study a more general question: given a stable multilinear polynomial p with nonnegative coefficients and a set of monomials S, we show that if the polynomial obtained by summing up all monomials in S is real stable, then we can lower bound the sum of coefficients of monomials of p that are in S. We also prove generalizations of this theorem to (real stable) polynomials that are not multilinear. We use our theorem to give a new proof of Schrijver’s inequality on the number of perfect matchings of a regular bipartite graph, generalize a recent result of Nikolov and Singh, and give deterministic polynomial-time approximation algorithms for several counting problems. @InProceedings{STOC17p384, author = {Nima Anari and Shayan Oveis Gharan}, title = {A Generalization of Permanent Inequalities and Applications in Counting and Optimization}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {384--396}, doi = {}, year = {2017}, } 
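For doubly stochastic matrices, Gurvits's permanent inequality specializes to the van der Waerden bound per(A) ≥ n!/n^n, with equality at the uniform matrix. A brute-force numerical check on a small instance (for intuition only; not the paper's method):

```python
from itertools import permutations
from math import factorial

def permanent(A):
    # Naive O(n!) permanent, fine for tiny matrices.
    n = len(A)
    total = 0.0
    for perm in permutations(range(n)):
        prod = 1.0
        for i in range(n):
            prod *= A[i][perm[i]]
        total += prod
    return total

# The uniform doubly stochastic matrix attains per(A) = n!/n^n exactly,
# the minimum guaranteed by the van der Waerden / Gurvits bound.
n = 3
uniform = [[1.0 / n] * n for _ in range(n)]
bound = factorial(n) / n ** n
per_uniform = permanent(uniform)
```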

Gishboliner, Lior 
STOC '17: "Removal Lemmas with Polynomial ..."
Removal Lemmas with Polynomial Bounds
Lior Gishboliner and Asaf Shapira (Tel Aviv University, Israel) We give new sufficient and necessary criteria guaranteeing that a hereditary graph property can be tested with a polynomial query complexity. Although both are simple combinatorial criteria, they imply almost all prior positive and negative results of this type, as well as many new ones. One striking application of our results is that every semialgebraic graph property (e.g., being an interval graph, a unit-disc graph, etc.) can be tested with a polynomial query complexity. This confirms a conjecture of Alon. The proofs combine probabilistic ideas together with a novel application of a conditional regularity lemma for matrices, due to Alon, Fischer, and Newman. @InProceedings{STOC17p510, author = {Lior Gishboliner and Asaf Shapira}, title = {Removal Lemmas with Polynomial Bounds}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {510--522}, doi = {}, year = {2017}, } 

Gonczarowski, Yannai A. 
STOC '17: "Efficient Empirical Revenue ..."
Efficient Empirical Revenue Maximization in Single-Parameter Auction Environments
Yannai A. Gonczarowski and Noam Nisan (Hebrew University of Jerusalem, Israel; Microsoft Research, Israel) We present a polynomial-time algorithm that, given samples from the unknown valuation distribution of each bidder, learns an auction that approximately maximizes the auctioneer's revenue in a variety of single-parameter auction environments, including matroid environments, position environments, and the public project environment. The valuation distributions may be arbitrary bounded distributions (in particular, they may be irregular, and may differ for the various bidders), thus resolving a problem left open by previous papers. The analysis uses basic tools, is performed in its entirety in value space, and simplifies the analysis of previously known results for special cases. Furthermore, the analysis extends to certain single-parameter auction environments where precise revenue maximization is known to be intractable, such as knapsack environments. @InProceedings{STOC17p856, author = {Yannai A. Gonczarowski and Noam Nisan}, title = {Efficient Empirical Revenue Maximization in Single-Parameter Auction Environments}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {856--868}, doi = {}, year = {2017}, } STOC '17: "The Menu-Size Complexity of ..." The Menu-Size Complexity of Revenue Approximation Moshe Babaioff, Yannai A. Gonczarowski, and Noam Nisan (Microsoft Research, Israel; Hebrew University of Jerusalem, Israel) We consider a monopolist that is selling n items to a single additive buyer, where the buyer’s values for the items are drawn according to independent distributions F_{1},F_{2},…,F_{n} that possibly have unbounded support. It is well known that, unlike in the single-item case, the revenue-optimal auction (a pricing scheme) may be complex, sometimes requiring a continuum of menu entries. It is also known that simple auctions with a finite bounded number of menu entries can extract a constant fraction of the optimal revenue. 
Nonetheless, the question of whether an arbitrarily high fraction of the optimal revenue can be extracted via a finite menu size remained open. In this paper, we give an affirmative answer to this open question, showing that for every n and for every ε>0, there exists a complexity bound C=C(n,ε) such that auctions of menu size at most C suffice for obtaining a (1−ε) fraction of the optimal revenue from any F_{1},…,F_{n}. We prove upper and lower bounds on the revenue approximation complexity C(n,ε), as well as on the deterministic communication complexity required to run an auction that achieves such an approximation. @InProceedings{STOC17p869, author = {Moshe Babaioff and Yannai A. Gonczarowski and Noam Nisan}, title = {The Menu-Size Complexity of Revenue Approximation}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {869--877}, doi = {}, year = {2017}, } 

Grandoni, Fabrizio 
STOC '17: "Surviving in Directed Graphs: ..."
Surviving in Directed Graphs: A Quasi-Polynomial-Time Polylogarithmic Approximation for Two-Connected Directed Steiner Tree
Fabrizio Grandoni and Bundit Laekhanukit (IDSIA, Switzerland; University of Lugano, Switzerland; Weizmann Institute of Science, Israel) Real-world networks are often prone to failures. A reliable network needs to cope with this situation and must provide a backup communication channel. This motivates the study of survivable network design, which has been a focus of research for a few decades. To date, survivable network design problems on undirected graphs are well understood. For example, there is a 2-approximation in the case of edge failures [Jain, FOCS’98/Combinatorica’01]. The problems on directed graphs, in contrast, have seen very little progress. Most techniques for the undirected case, like primal-dual and iterative rounding methods, do not seem to extend to the directed case. Almost no nontrivial approximation algorithm is known even for a simple case where we wish to design a network that tolerates a single failure. In this paper, we study a survivable network design problem on directed graphs, 2-Connected Directed Steiner Tree (2-DST): given an n-vertex weighted directed graph, a root r, and a set of h terminals S, find a min-cost subgraph H that has two edge/vertex-disjoint paths from r to any t∈ S. 2-DST is a natural generalization of the classical Directed Steiner Tree problem (DST), where we have the additional requirement that the network must tolerate one failure. No nontrivial approximation is known for 2-DST. This was left as an open problem by Feldman et al. [SODA’09; JCSS] and has since been studied by Cheriyan et al. [SODA’12; TALG] and Laekhanukit [SODA’14]. However, no positive result was known except for the special case of a D-shallow instance [Laekhanukit, ICALP’16]. We present an O(D^{3}log D· h^{2/D}· log n) approximation algorithm for 2-DST that runs in time O(n^{O(D)}), for any D∈[log_{2}h]. This implies a polynomial-time O(h^{є}log n) approximation for any constant є>0, and a polylogarithmic approximation running in quasi-polynomial time. 
We remark that this is essentially the best known even for the classical DST, and the latter problem is O(log^{2−є}n)-hard to approximate [Halperin and Krauthgamer, STOC’03]. As a byproduct, we obtain an algorithm with the same approximation guarantee for the 2-Connected Directed Steiner Subgraph problem, where the goal is to find a min-cost subgraph such that every pair of terminals is 2-edge/vertex-connected. Our approximation algorithm is based on a careful combination of several techniques. In more detail, we decompose an optimal solution into two (possibly not edge-disjoint) divergent trees that induce two edge-disjoint paths from the root to any given terminal. These divergent trees are then embedded into a shallow tree by means of Zelikovsky’s height reduction theorem. On the latter tree we solve a 2-Connected Group Steiner Tree problem and then map this solution back to the original graph. Crucially, our tree embedding is achieved via a probabilistic mapping guided by an LP: this is the main technical novelty of our approach, and might be useful for future work. @InProceedings{STOC17p420, author = {Fabrizio Grandoni and Bundit Laekhanukit}, title = {Surviving in Directed Graphs: A Quasi-Polynomial-Time Polylogarithmic Approximation for Two-Connected Directed Steiner Tree}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {420--428}, doi = {}, year = {2017}, } 

Guo, Heng 
STOC '17: "Uniform Sampling through the ..."
Uniform Sampling through the Lovász Local Lemma
Heng Guo, Mark Jerrum, and Jingcheng Liu (Queen Mary University of London, UK; University of California at Berkeley, USA) We propose a new algorithmic framework, called “partial rejection sampling”, to draw samples exactly from a product distribution, conditioned on none of a number of bad events occurring. Our framework builds (perhaps surprising) new connections between the variable framework of the Lovász Local Lemma and some classical sampling algorithms, such as the “cycle-popping” algorithm for rooted spanning trees by Wilson. Among other applications, we discover new algorithms to sample satisfying assignments of k-CNF formulas with bounded variable occurrences. @InProceedings{STOC17p342, author = {Heng Guo and Mark Jerrum and Jingcheng Liu}, title = {Uniform Sampling through the Lovász Local Lemma}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {342--355}, doi = {}, year = {2017}, } 
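The core loop of partial rejection sampling is simple: draw every variable independently, and as long as some bad event (here, an unsatisfied clause) occurs, resample only the variables involved in occurring bad events. The paper proves that the output is exactly uniform for "extremal" instances; the sketch below shows only the loop itself on a toy 2-CNF, with no claim of uniformity for general formulas.

```python
import random

random.seed(2)

def partial_rejection_sample(num_vars, clauses):
    # A clause is a tuple of signed literals: (1, -3) means "x1 or not x3".
    # Draw all variables uniformly; while some clause ("bad event") is
    # violated, resample only the variables occurring in violated clauses.
    assign = {v: random.random() < 0.5 for v in range(1, num_vars + 1)}

    def violated(c):
        return not any(assign[abs(l)] == (l > 0) for l in c)

    while True:
        bad = [c for c in clauses if violated(c)]
        if not bad:
            return assign
        for v in {abs(l) for c in bad for l in c}:
            assign[v] = random.random() < 0.5

clauses = [(1, 2), (-2, 3), (3, 4)]
result = partial_rejection_sample(4, clauses)
```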

Gupta, Anupam 
STOC '17: "Online and Dynamic Algorithms ..."
Online and Dynamic Algorithms for Set Cover
Anupam Gupta, Ravishankar Krishnaswamy, Amit Kumar, and Debmalya Panigrahi (Carnegie Mellon University, USA; Microsoft Research, India; IIT Delhi, India; Duke University, USA) In this paper, we give new results for the set cover problem in the fully dynamic model. In this model, the set of “active” elements to be covered changes over time. The goal is to maintain a near-optimal solution for the currently active elements, while making few changes in each timestep. This model is popular in both dynamic and online algorithms: in the former, the goal is to minimize the update time of the solution, while in the latter, the recourse (number of changes) is bounded. We present generic techniques for the dynamic set cover problem inspired by the classic greedy and primal-dual offline algorithms for set cover. The former leads to a competitive ratio of O(log n_{t}), where n_{t} is the number of currently active elements at timestep t, while the latter yields competitive ratios dependent on f_{t}, the maximum number of sets that a currently active element belongs to. We demonstrate that these techniques are useful for obtaining tight results in both settings: update time bounds and limited recourse, exhibiting algorithmic techniques common to these two parallel threads of research. @InProceedings{STOC17p537, author = {Anupam Gupta and Ravishankar Krishnaswamy and Amit Kumar and Debmalya Panigrahi}, title = {Online and Dynamic Algorithms for Set Cover}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {537--550}, doi = {}, year = {2017}, } 
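The classic offline greedy algorithm that inspires the first of these techniques repeatedly picks the set covering the most still-uncovered elements, which yields an H_n ≈ ln n approximation. A minimal sketch of that offline baseline (the instance is illustrative, and the dynamic machinery of the paper is not shown):

```python
def greedy_set_cover(universe, sets):
    # Classic greedy: repeatedly take the set covering the most
    # still-uncovered elements. Assumes the sets do cover the universe.
    uncovered = set(universe)
    chosen = []
    while uncovered:
        i = max(range(len(sets)), key=lambda i: len(sets[i] & uncovered))
        chosen.append(i)
        uncovered -= sets[i]
    return chosen

sets = [{1, 2, 3}, {2, 4}, {3, 4, 5}, {5}]
cover = greedy_set_cover({1, 2, 3, 4, 5}, sets)
```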

Gurjar, Rohit 
STOC '17: "Linear Matroid Intersection ..."
Linear Matroid Intersection Is in Quasi-NC
Rohit Gurjar and Thomas Thierauf (Tel Aviv University, Israel; Aalen University, Germany) Given two matroids on the same ground set, the matroid intersection problem asks to find a common independent set of maximum size. We show that the linear matroid intersection problem is in quasi-NC^{2}. That is, it has uniform circuits of quasi-polynomial size n^{O(log n)} and O(log^{2} n) depth. This generalizes the similar result for the bipartite perfect matching problem. We do this by an almost complete derandomization of the Isolation Lemma for matroid intersection. Our result also implies a black-box singularity test for symbolic matrices of the form A_{0}+A_{1} z_{1} +A_{2} z_{2}+ ⋯+A_{m} z_{m}, where A_{0} is an arbitrary matrix and the matrices A_{1},A_{2},…,A_{m} are of rank 1 over some field. @InProceedings{STOC17p821, author = {Rohit Gurjar and Thomas Thierauf}, title = {Linear Matroid Intersection Is in Quasi-NC}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {821--830}, doi = {}, year = {2017}, } 
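The randomized baseline that such a derandomization replaces is the Schwartz-Zippel test: det(A_0 + Σ_i z_i A_i) is a polynomial of degree at most n in the z_i, so evaluating it at random points detects non-singularity with high probability. A sketch of that randomized baseline, using exact rational arithmetic (illustrative only; not the paper's deterministic construction):

```python
import random
from fractions import Fraction

random.seed(3)

def det(M):
    # Exact determinant via fraction-based Gaussian elimination.
    M = [[Fraction(x) for x in row] for row in M]
    n = len(M)
    d = Fraction(1)
    for c in range(n):
        piv = next((r for r in range(c, n) if M[r][c] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            d = -d
        d *= M[c][c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n):
                M[r][k] -= f * M[c][k]
    return d

def probably_singular(A0, rank1_mats, trials=5, field_size=10 ** 6):
    # Schwartz-Zippel: substitute random z_i into A0 + sum_i z_i * A_i and
    # evaluate the determinant. A nonzero value certifies non-singularity.
    n = len(A0)
    for _ in range(trials):
        z = [random.randrange(1, field_size) for _ in rank1_mats]
        M = [[A0[i][j] + sum(zi * Ai[i][j] for zi, Ai in zip(z, rank1_mats))
              for j in range(n)] for i in range(n)]
        if det(M) != 0:
            return False          # witness of non-singularity found
    return True

A0 = [[0, 0], [0, 0]]
A1 = [[1, 0], [0, 0]]             # rank one
A2 = [[0, 0], [0, 1]]             # rank one
nonsingular_found = not probably_singular(A0, [A1, A2])
```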

Gurvits, Leonid 
STOC '17: "Algorithmic and Optimization ..."
Algorithmic and Optimization Aspects of Brascamp-Lieb Inequalities, via Operator Scaling
Ankit Garg, Leonid Gurvits, Rafael Oliveira, and Avi Wigderson (Microsoft Research, USA; City College of New York, USA; Princeton University, USA; IAS, USA) The celebrated Brascamp-Lieb (BL) inequalities [BL76, Lie90], and their reverse form of Barthe [Bar98], are an important mathematical tool, unifying and generalizing numerous inequalities in analysis, convex geometry, and information theory, with many used in computer science. While their structural theory is very well understood, far less is known about computing their main parameters (which we define later). Prior to this work, the best known algorithms for any of these optimization tasks required at least exponential time. In this work, we give polynomial-time algorithms to compute: (1) feasibility of a BL-datum, (2) the optimal BL-constant, and (3) a weak separation oracle for BL-polytopes. What is particularly exciting about this progress, beyond the better understanding of BL inequalities, is that the objects above naturally encode rich families of optimization problems which had no prior efficient algorithms. In particular, the BL-constants (which we efficiently compute) are solutions to nonconvex optimization problems, and the BL-polytopes (for which we provide efficient membership and separation oracles) are linear programs with exponentially many facets. Thus we hope that new combinatorial optimization problems can be solved via reductions to the ones above, and we make modest initial steps in exploring this possibility. Our algorithms are obtained by a simple efficient reduction of a given BL-datum to an instance of the Operator Scaling problem defined by [Gur04]. To obtain the results above, we utilize the two (very recent and different) algorithms for the operator scaling problem [GGOW16, IQS15a]. Our reduction implies algorithmic versions of many of the known structural results on BL-inequalities, and in some cases provides proofs that are different from or simpler than existing ones. 
Further, the analytic properties of the [GGOW16] algorithm provide new, effective bounds on the magnitude and continuity of BL-constants, with applications to nonlinear versions of BL-inequalities; prior work relied on compactness, and thus provided no bounds. On a higher level, our application of the operator scaling algorithm to BL-inequalities further connects analysis and optimization with the diverse mathematical areas used so far to motivate and solve the operator scaling problem, which include commutative invariant theory, noncommutative algebra, computational complexity, and quantum information theory. @InProceedings{STOC17p397, author = {Ankit Garg and Leonid Gurvits and Rafael Oliveira and Avi Wigderson}, title = {Algorithmic and Optimization Aspects of Brascamp-Lieb Inequalities, via Operator Scaling}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {397--409}, doi = {}, year = {2017}, } 

Haeupler, Bernhard 
STOC '17: "Synchronization Strings: Codes ..."
Synchronization Strings: Codes for Insertions and Deletions Approaching the Singleton Bound
Bernhard Haeupler and Amirbehshad Shahrasbi (Carnegie Mellon University, USA) We introduce synchronization strings, which provide a novel way of efficiently dealing with synchronization errors, i.e., insertions and deletions. Synchronization errors are strictly more general and much harder to deal with than the more commonly considered half-errors, i.e., symbol corruptions and erasures. For every є>0, synchronization strings allow one to index a sequence with an alphabet of size є^{−O(1)} such that one can efficiently transform k synchronization errors into (1 + є)k half-errors. This powerful new technique has many applications. In this paper we focus on designing insdel codes, i.e., error-correcting block codes (ECCs) for insertion-deletion channels. While ECCs for both half-errors and synchronization errors have been intensely studied, the latter have largely resisted progress. As Mitzenmacher puts it in his 2009 survey: “Channels with synchronization errors ... are simply not adequately understood by current theory. Given the near-complete knowledge we have for channels with erasures and errors ... our lack of understanding about channels with synchronization errors is truly remarkable.” Indeed, it took until 1999 for the first insdel codes with constant rate, constant distance, and constant alphabet size to be constructed, and only since 2016 are there constructions of constant-rate insdel codes for asymptotically large noise rates. Even in the asymptotically large or small noise regime these codes are polynomially far from the optimal rate-distance tradeoff. This makes the understanding of insdel codes up to this work equivalent to what was known for regular ECCs after Forney introduced concatenated codes in his doctoral thesis 50 years ago. A straightforward application of our synchronization-string-based indexing method gives a simple black-box construction which transforms any ECC into an equally efficient insdel code with only a small increase in the alphabet size. 
This instantly transfers much of the highly developed understanding for regular ECCs over large constant alphabets into the realm of insdel codes. Most notably, for the complete noise spectrum we obtain efficient “near-MDS” insdel codes which get arbitrarily close to the optimal rate-distance tradeoff given by the Singleton bound. In particular, for any δ ∈ (0,1) and є>0 we give insdel codes achieving a rate of 1 − δ − є over a constant-size alphabet that efficiently correct a δ fraction of insertions or deletions. @InProceedings{STOC17p33, author = {Bernhard Haeupler and Amirbehshad Shahrasbi}, title = {Synchronization Strings: Codes for Insertions and Deletions Approaching the Singleton Bound}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {33--46}, doi = {}, year = {2017}, } 
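The naive version of the indexing idea attaches each symbol's position to it: this blows the alphabet up by a factor of n (Θ(log n) extra bits per symbol) but lets a receiver turn deletions into erasures at known positions, which a half-error decoder can handle. Synchronization strings achieve the same transformation with an alphabet of size only є^{−O(1)}. A toy sketch of the naive baseline (message and deletion pattern are illustrative):

```python
def index_symbols(message):
    # Naive indexing: pair every symbol with its position. The alphabet
    # grows by a factor of n; synchronization strings replace these
    # indices with symbols from an alphabet of size only eps^{-O(1)}.
    return [(i, ch) for i, ch in enumerate(message)]

def deletions_to_erasures(received, n):
    # Surviving indices reveal exactly which positions were deleted,
    # turning each deletion into an erasure ('?') for a half-error decoder.
    out = ["?"] * n
    for i, ch in received:
        out[i] = ch
    return out

sent = index_symbols("synchro")
received = [sent[i] for i in (0, 2, 3, 5, 6)]   # symbols 1 and 4 deleted
recovered = deletions_to_erasures(received, len("synchro"))
```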

HajiAghayi, MohammadTaghi 
STOC '17: "Beating 1-1/e for Ordered ..."
Beating 1-1/e for Ordered Prophets
Melika Abolhassani, Soheil Ehsani, Hossein Esfandiari, MohammadTaghi HajiAghayi, Robert Kleinberg, and Brendan Lucier (University of Maryland at College Park, USA; Cornell University, USA; Microsoft Research, USA) Hill and Kertz studied the prophet inequality on iid distributions [The Annals of Probability 1982]. They proved a theoretical bound of 1 − 1/e on the approximation factor of their algorithm. They conjectured that the best approximation factor for arbitrarily large n is 1/(1+1/e) ≃ 0.731. This conjecture remained open for over 30 years prior to this paper. In this paper we present a threshold-based algorithm for the prophet inequality with n iid distributions. Using a nontrivial and novel approach, we show that our algorithm is a 0.738-approximation algorithm. By beating the bound of 1/(1+1/e), this refutes the conjecture of Hill and Kertz. Moreover, we generalize our results to non-uniform distributions and discuss their applications in mechanism design. @InProceedings{STOC17p61, author = {Melika Abolhassani and Soheil Ehsani and Hossein Esfandiari and MohammadTaghi HajiAghayi and Robert Kleinberg and Brendan Lucier}, title = {Beating 1-1/e for Ordered Prophets}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {61--71}, doi = {}, year = {2017}, } 
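For intuition on threshold-based stopping rules, the toy Monte Carlo below compares a single fixed threshold (accept the first draw exceeding it) against the prophet, who always takes the maximum. This is not the paper's 0.738-approximation, which uses a carefully chosen sequence of thresholds; the distribution, horizon, and threshold are all illustrative assumptions.

```python
import random

random.seed(4)

def threshold_rule(values, tau):
    # Accept the first value exceeding tau; if none does, take the last one.
    for v in values:
        if v >= tau:
            return v
    return values[-1]

# n iid Uniform(0,1) draws per trial; the prophet's benchmark is max(xs).
n, trials, tau = 10, 20000, 0.7
alg = prophet = 0.0
for _ in range(trials):
    xs = [random.random() for _ in range(n)]
    alg += threshold_rule(xs, tau)
    prophet += max(xs)
ratio = alg / prophet
```

Even this crude single-threshold rule recovers a large constant fraction of the prophet's value on this instance.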

Hartline, Jason D. 
STOC '17: "Bernoulli Factories and Black-Box ..."
Bernoulli Factories and Black-Box Reductions in Mechanism Design
Shaddin Dughmi, Jason D. Hartline, Robert Kleinberg, and Rad Niazadeh (University of Southern California, USA; Northwestern University, USA; Cornell University, USA) We provide a polynomial-time reduction from Bayesian incentive-compatible mechanism design to Bayesian algorithm design for welfare maximization problems. Unlike prior results, our reduction achieves exact incentive compatibility for problems with multidimensional and continuous type spaces. The key technical barrier preventing exact incentive compatibility in prior black-box reductions is that repairing violations of incentive constraints requires understanding the distribution of the mechanism’s output, which is typically #P-hard to compute. Reductions that instead estimate the output distribution by sampling inevitably suffer from sampling error, which typically precludes exact incentive compatibility. We overcome this barrier by employing and generalizing the computational model in the literature on “Bernoulli Factories”. In a Bernoulli factory problem, one is given a function mapping the bias of an “input coin” to that of an “output coin”, and the challenge is to efficiently simulate the output coin given only sample access to the input coin. Consider a generalization which we call the “expectations from samples” computational model, in which a problem instance is specified by a function mapping the expected values of a set of input distributions to a distribution over outcomes. The challenge is to give a polynomial-time algorithm that exactly samples from the distribution over outcomes given only sample access to the input distributions. In this model we give a polynomial-time algorithm for the function given by “exponential weights”: expected values of the input distributions correspond to the weights of alternatives, and we wish to select an alternative with probability proportional to its weight. 
This algorithm is the key ingredient in designing an incentive-compatible mechanism for bipartite matching, which can be used to make the approximately incentive-compatible reduction of Hartline-Malekian-Kleinberg [2015] exactly incentive compatible. @InProceedings{STOC17p158, author = {Shaddin Dughmi and Jason D. Hartline and Robert Kleinberg and Rad Niazadeh}, title = {Bernoulli Factories and Black-Box Reductions in Mechanism Design}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {158--169}, doi = {}, year = {2017}, } 

Hazan, Elad 
STOC '17: "Finding Approximate Local ..."
Finding Approximate Local Minima Faster than Gradient Descent
Naman Agarwal, Zeyuan Allen-Zhu, Brian Bullins, Elad Hazan, and Tengyu Ma (Princeton University, USA; IAS, USA) We design a nonconvex second-order optimization algorithm that is guaranteed to return an approximate local minimum in time which scales linearly in the underlying dimension and the number of training examples. The time complexity of our algorithm to find an approximate local minimum is even faster than that of gradient descent to find a critical point. Our algorithm applies to a general class of optimization problems including training a neural network and other nonconvex objectives arising in machine learning. @InProceedings{STOC17p1195, author = {Naman Agarwal and Zeyuan Allen-Zhu and Brian Bullins and Elad Hazan and Tengyu Ma}, title = {Finding Approximate Local Minima Faster than Gradient Descent}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1195--1199}, doi = {}, year = {2017}, } 

Holmgren, Justin 
STOC '17: "Non-interactive Delegation ..."
Non-interactive Delegation and Batch NP Verification from Standard Computational Assumptions
Zvika Brakerski, Justin Holmgren, and Yael Kalai (Weizmann Institute of Science, Israel; Massachusetts Institute of Technology, USA; Microsoft Research, USA) We present an adaptive and non-interactive protocol for verifying arbitrary efficient computations in fixed polynomial time. Our protocol is computationally sound and can be based on any computational PIR scheme, which in turn can be based on standard polynomial-time cryptographic assumptions (e.g., the worst-case hardness of polynomial-factor approximation of short-vector lattice problems). In our protocol, the verifier sets up a public key ahead of time, and this key can be used by any prover to prove arbitrary statements by simply sending a proof to the verifier. Verification is done using a secret verification key, and soundness relies on this key not being known to the prover. Our protocol further allows proving statements about computations of arbitrary RAM machines. Previous works either relied on knowledge assumptions, or could only offer non-adaptive two-message protocols (where the first message could not be reused), and required either obfuscation-based assumptions or superpolynomial hardness assumptions. We show that our techniques can also be applied to construct a new type of (non-adaptive) 2-message argument for batch NP statements. Specifically, we can simultaneously prove (with computational soundness) the membership of multiple instances in a given NP language, with communication complexity proportional to the length of a single witness. @InProceedings{STOC17p474, author = {Zvika Brakerski and Justin Holmgren and Yael Kalai}, title = {Non-interactive Delegation and Batch NP Verification from Standard Computational Assumptions}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {474--482}, doi = {}, year = {2017}, } 

Holroyd, Alexander E. 
STOC '17: "Stability of Service under ..."
Stability of Service under Time-of-Use Pricing
Shuchi Chawla, Nikhil R. Devanur, Alexander E. Holroyd, Anna R. Karlin, James B. Martin, and Balasubramanian Sivan (University of Wisconsin-Madison, USA; Microsoft Research, USA; University of Washington, USA; University of Oxford, UK; Google Research, USA) We consider time-of-use pricing as a technique for matching supply and demand of temporal resources with the goal of maximizing social welfare. Relevant examples include energy, computing resources on a cloud computing platform, and charging stations for electric vehicles, among many others. A client/job in this setting has a window of time during which he needs service, and a particular value for obtaining it. We assume a stochastic model for demand, where each job materializes with some probability via an independent Bernoulli trial. Given a per-time-unit pricing of resources, any realized job will first try to get served by the cheapest available resource in its window and, failing that, will try to find service at the next cheapest available resource, and so on. Thus, the natural stochastic fluctuations in demand have the potential to lead to cascading overload events. Our main result shows that setting prices so as to optimally handle the expected demand works well: with high probability, when the actual demand is instantiated, the system is stable and the expected value of the jobs served is very close to that of the optimal offline algorithm. @InProceedings{STOC17p184, author = {Shuchi Chawla and Nikhil R. Devanur and Alexander E. Holroyd and Anna R. Karlin and James B. Martin and Balasubramanian Sivan}, title = {Stability of Service under Time-of-Use Pricing}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {184--197}, doi = {}, year = {2017}, } 
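As a toy illustration of the demand model in the abstract above (independent Bernoulli job arrivals, each realized job greedily taking the cheapest available resource in its window), here is a hedged sketch under assumed simplifications not from the paper: one unit of capacity per time slot, and welfare counted as the total value of jobs served.

```python
import random

def serve_jobs(prices, jobs, p=0.7, seed=1):
    # prices[t]: per-time-unit price of the single unit of capacity at slot t.
    # jobs: list of (window_start, window_end, value); each job materializes
    # independently with probability p (the Bernoulli demand model) and then
    # greedily takes the cheapest still-available slot in its window.
    rng = random.Random(seed)
    free = set(range(len(prices)))
    welfare = 0.0
    for (s, e, v) in jobs:
        if rng.random() >= p:
            continue  # this job did not materialize
        slots = sorted((t for t in free if s <= t <= e), key=lambda t: prices[t])
        if slots:
            free.remove(slots[0])  # cheapest available slot in the window
            welfare += v
    return welfare
```

Running such a simulation over many seeds is one way to observe the cascading-overload phenomenon: when realized demand exceeds the priced-for expectation, jobs spill onto each other's cheapest slots.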

Hoza, William M. 
STOC '17: "Targeted Pseudorandom Generators, ..."
Targeted Pseudorandom Generators, Simulation Advice Generators, and Derandomizing Logspace
William M. Hoza and Chris Umans (University of Texas at Austin, USA; California Institute of Technology, USA) Assume that for every derandomization result for logspace algorithms, there is a pseudorandom generator strong enough to nearly recover the derandomization by iterating over all seeds and taking a majority vote. We prove under a precise version of this assumption that BPL ⊆ ∩_{α > 0} DSPACE(log^{1 + α} n). We strengthen the theorem to an equivalence by considering two generalizations of the concept of a pseudorandom generator against logspace. A targeted pseudorandom generator against logspace takes as input a short uniform random seed and a finite automaton; it outputs a long bitstring that looks random to that particular automaton. A simulation advice generator for logspace stretches a small uniform random seed into a long advice string; the requirement is that there is some logspace algorithm that, given a finite automaton and this advice string, simulates the automaton reading a long uniform random input. We prove that ∩_{α > 0} prBPSPACE(log^{1 + α} n) = ∩_{α > 0} prDSPACE(log^{1 + α} n) if and only if for every targeted pseudorandom generator against logspace, there is a simulation advice generator for logspace with similar parameters. Finally, we observe that in a certain uniform setting (namely, if we only worry about sequences of automata that can be generated in logspace), targeted pseudorandom generators against logspace can be transformed into simulation advice generators with similar parameters. @InProceedings{STOC17p629, author = {William M. Hoza and Chris Umans}, title = {Targeted Pseudorandom Generators, Simulation Advice Generators, and Derandomizing Logspace}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {629--640}, doi = {}, year = {2017}, } 

Im, Sungjin 
STOC '17: "Efficient Massively Parallel ..."
Efficient Massively Parallel Methods for Dynamic Programming
Sungjin Im, Benjamin Moseley, and Xiaorui Sun (University of California at Merced, USA; Washington University in St. Louis, USA; Simons Institute for the Theory of Computing, Berkeley, USA) Modern science and engineering are driven by massively large data sets, and their advance relies heavily on massively parallel computing platforms such as Spark, MapReduce, and Hadoop. Theoretical models have been proposed to understand the power and limitations of such platforms. Recent study of these theoretical models has led to the discovery of new algorithms that are fast and efficient in both theory and practice, thereby beginning to unlock their underlying power. Given recent promising results, the area has turned its focus to discovering widely applicable algorithmic techniques for solving problems efficiently. In this paper we make progress towards this goal by giving a principled framework for simulating sequential dynamic programs in the distributed setting. In particular, we identify two key properties, monotonicity and decomposability, which allow us to derive efficient distributed algorithms for problems possessing these properties. We showcase our framework by considering several core dynamic programming applications: Longest Increasing Subsequence, Optimal Binary Search Tree, and Weighted Interval Selection. For these problems, we derive algorithms yielding solutions that are arbitrarily close to the optimum, using O(1) rounds and Õ(n/m) memory on each machine, where n is the input size and m is the number of machines available. @InProceedings{STOC17p798, author = {Sungjin Im and Benjamin Moseley and Xiaorui Sun}, title = {Efficient Massively Parallel Methods for Dynamic Programming}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {798--811}, doi = {}, year = {2017}, } 
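For reference, the sequential dynamic program behind one of the showcased applications, Longest Increasing Subsequence, is the classical quadratic-time recurrence below; this is the textbook baseline that a framework like the paper's would simulate, not the paper's distributed algorithm itself.

```python
def lis_length(a):
    # Classical O(n^2) dynamic program: dp[i] = length of the longest
    # strictly increasing subsequence of a that ends at position i.
    n = len(a)
    dp = [1] * n
    for i in range(n):
        for j in range(i):
            if a[j] < a[i]:
                dp[i] = max(dp[i], dp[j] + 1)
    return max(dp, default=0)
```

The recurrence is monotone in the dp values it combines, which is the flavor of property (monotonicity, decomposability) the framework exploits to split the table across machines.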

Italiano, Giuseppe F. 
STOC '17: "Decremental Single-Source ..."
Decremental Single-Source Reachability in Planar Digraphs
Giuseppe F. Italiano, Adam Karczmarz, Jakub Łącki, and Piotr Sankowski (University of Rome Tor Vergata, Italy; University of Warsaw, Poland; Google Research, USA) In this paper we show a new algorithm for the decremental single-source reachability problem in directed planar graphs. It processes any sequence of edge deletions in O(n log^{2} n log log n) total time and explicitly maintains the set of vertices reachable from a fixed source vertex. Hence, if all edges are eventually deleted, the amortized time of processing each edge deletion is only O(log^{2} n log log n), which improves upon a previously known O(√n) solution. We also show an algorithm for decremental maintenance of strongly connected components in directed planar graphs with the same total update time. These results constitute the first almost optimal (up to polylogarithmic factors) algorithms for both problems. To the best of our knowledge, these are the first dynamic algorithms with polylogarithmic update times on general directed planar graphs for nontrivial reachability-type problems, for which only polynomial bounds are known in general graphs. @InProceedings{STOC17p1108, author = {Giuseppe F. Italiano and Adam Karczmarz and Jakub Łącki and Piotr Sankowski}, title = {Decremental Single-Source Reachability in Planar Digraphs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1108--1121}, doi = {}, year = {2017}, } 

Iwata, Satoru 
STOC '17: "A Weighted Linear Matroid ..."
A Weighted Linear Matroid Parity Algorithm
Satoru Iwata and Yusuke Kobayashi (University of Tokyo, Japan; University of Tsukuba, Japan) The matroid parity (or matroid matching) problem, introduced as a common generalization of matching and matroid intersection problems, is so general that it requires an exponential number of oracle calls. Lovász (1980) showed that this problem admits a min-max formula and a polynomial algorithm for linearly represented matroids. Since then, efficient algorithms have been developed for the linear matroid parity problem. In this paper, we present a combinatorial, deterministic, strongly polynomial algorithm for the weighted linear matroid parity problem. The algorithm builds on a polynomial matrix formulation using the Pfaffian and adopts a primal-dual approach with the aid of the augmenting path algorithm of Gabow and Stallmann (1986) for the unweighted problem. @InProceedings{STOC17p264, author = {Satoru Iwata and Yusuke Kobayashi}, title = {A Weighted Linear Matroid Parity Algorithm}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {264--276}, doi = {}, year = {2017}, } 

Jain, Sanjay 
STOC '17: "Deciding Parity Games in Quasipolynomial ..."
Deciding Parity Games in Quasipolynomial Time
Cristian S. Calude, Sanjay Jain, Bakhadyr Khoussainov, Wei Li, and Frank Stephan (University of Auckland, New Zealand; National University of Singapore, Singapore) It is shown that parity games can be solved in quasipolynomial time. The parameterised parity game – with n nodes and m distinct values (aka colours or priorities) – is proven to be in the class of fixed parameter tractable (FPT) problems when parameterised over m. Both results improve known bounds, from runtime n^{O(√n)} to O(n^{log(m)+6}), and from an XP algorithm with runtime O(n^{Θ(m)}) for fixed parameter m to an FPT algorithm with runtime O(n^{5}) + g(m), for some function g depending on m only. As an application, it is proven that coloured Muller games with n nodes and m colours can be decided in time O((m^{m} · n)^{5}); it is also shown that this bound cannot be improved to O((2^{m} · n)^{c}), for any c, unless FPT = W[1]. @InProceedings{STOC17p252, author = {Cristian S. Calude and Sanjay Jain and Bakhadyr Khoussainov and Wei Li and Frank Stephan}, title = {Deciding Parity Games in Quasipolynomial Time}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {252--263}, doi = {}, year = {2017}, } 

Jerrum, Mark 
STOC '17: "Uniform Sampling through the ..."
Uniform Sampling through the Lovász Local Lemma
Heng Guo, Mark Jerrum, and Jingcheng Liu (Queen Mary University of London, UK; University of California at Berkeley, USA) We propose a new algorithmic framework, called “partial rejection sampling”, to draw samples exactly from a product distribution, conditioned on none of a number of bad events occurring. Our framework builds (perhaps surprising) new connections between the variable framework of the Lovász Local Lemma and some classical sampling algorithms, such as the “cycle-popping” algorithm for rooted spanning trees by Wilson. Among other applications, we discover new algorithms to sample satisfying assignments of k-CNF formulas with bounded variable occurrences. @InProceedings{STOC17p342, author = {Heng Guo and Mark Jerrum and Jingcheng Liu}, title = {Uniform Sampling through the Lovász Local Lemma}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {342--355}, doi = {}, year = {2017}, } 
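A minimal sketch of the partial rejection sampling loop the abstract describes: resample exactly the variables appearing in currently-occurring bad events, repeating until none occurs. The helper names are hypothetical; the framework's exactness guarantee applies to extremal instances (occurring bad events variable-disjoint), while the toy instance below merely illustrates the mechanics.

```python
import random

def partial_rejection_sample(n, bad_events, sample_var, seed=0):
    # bad_events: list of (variable_index_set, predicate) pairs, where the
    # predicate tests whether that bad event occurs under assignment x.
    # Unlike plain rejection sampling, only variables involved in occurring
    # bad events are redrawn; all other variables keep their values.
    rng = random.Random(seed)
    x = [sample_var(rng) for _ in range(n)]
    while True:
        occurring = [vs for vs, pred in bad_events if pred(x)]
        if not occurring:
            return x
        for i in set().union(*occurring):
            x[i] = sample_var(rng)

# Toy instance: uniform bits, bad event = two adjacent ones.
bad = [({i, i + 1}, (lambda x, i=i: x[i] == 1 and x[i + 1] == 1))
       for i in range(4)]
assignment = partial_rejection_sample(5, bad, lambda rng: rng.randint(0, 1))
```

Wilson's cycle-popping algorithm arises from the same loop when the bad events are the directed cycles of a random arc assignment.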

Ji, Zhengfeng 
STOC '17: "Compression of Quantum Multiprover ..."
Compression of Quantum Multiprover Interactive Proofs
Zhengfeng Ji (University of Technology Sydney, Australia) We present a protocol that transforms any quantum multiprover interactive proof into a nonlocal game in which questions consist of a logarithmic number of bits and answers of a constant number of bits. As a corollary, it follows that the promise problem corresponding to the approximation of the nonlocal value to inverse polynomial accuracy is complete for QMIP*, and therefore NEXP-hard. This establishes that nonlocal games are provably harder than classical games without any complexity theory assumptions. Our result also indicates that gap amplification for nonlocal games may be impossible in general and provides negative evidence for the feasibility of the gap amplification approach to the multiprover variant of the quantum PCP conjecture. @InProceedings{STOC17p289, author = {Zhengfeng Ji}, title = {Compression of Quantum Multiprover Interactive Proofs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {289--302}, doi = {}, year = {2017}, } 

Joglekar, Pushkar S 
STOC '17: "Randomized Polynomial Time ..."
Randomized Polynomial Time Identity Testing for Noncommutative Circuits
V. Arvind, Pushkar S Joglekar, Partha Mukhopadhyay, and S. Raja (Institute of Mathematical Sciences, India; Vishwakarma Institute of Technology Pune, India; Chennai Mathematical Institute, India) In this paper we show that black-box polynomial identity testing for noncommutative polynomials f∈F⟨ z_{1},z_{2},⋯,z_{n} ⟩ of degree D and sparsity t can be done in randomized poly(n, log t, log D) time. As a consequence, given a circuit C of size s computing a polynomial f∈F⟨ z_{1},z_{2},⋯,z_{n} ⟩ with at most t nonzero monomials, testing whether f is identically zero can be done by a randomized algorithm with running time polynomial in s, n, and log t. This makes significant progress on a question that has been open for over ten years. Our algorithm is based on automata-theoretic ideas that can efficiently isolate a monomial in the given polynomial. In particular, we carry out the monomial isolation using nondeterministic automata. In general, noncommutative circuits of size s can compute polynomials of degree exponential in s and number of monomials double-exponential in s. In this paper, we consider a natural class of homogeneous noncommutative circuits, which we call +-regular circuits, and give a white-box deterministic polynomial-time identity test. These circuits can compute noncommutative polynomials with a number of monomials double-exponential in the circuit size. Our algorithm combines some new structural results for +-regular circuits with known results for noncommutative ABP identity testing, the rank bound of commutative depth three identities, and the equivalence testing problem for words. Finally, we consider the black-box identity testing problem for depth three +-regular circuits and give a randomized polynomial-time identity test. 
In particular, we show that if f∈F⟨Z⟩ is a nonzero noncommutative polynomial computed by a depth three +-regular circuit of size s, then f cannot be a polynomial identity for the matrix algebra M_{s}(F) when F is sufficiently large depending on the degree of f. @InProceedings{STOC17p831, author = {V. Arvind and Pushkar S Joglekar and Partha Mukhopadhyay and S. Raja}, title = {Randomized Polynomial Time Identity Testing for Noncommutative Circuits}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {831--841}, doi = {}, year = {2017}, } 

Kabanets, Valentine 
STOC '17: "A Polynomial Restriction Lemma ..."
A Polynomial Restriction Lemma with Applications
Valentine Kabanets, Daniel M. Kane, and Zhenjian Lu (Simon Fraser University, Canada; University of California at San Diego, USA) A polynomial threshold function (PTF) of degree d is a boolean function of the form f=sgn(p), where p is a degree-d polynomial, and sgn is the sign function. The main result of the paper is an almost optimal bound on the probability that a random restriction of a PTF is not close to a constant function, where a boolean function g is called δ-close to constant if, for some v∈{1,−1}, we have g(x)=v for all but at most a δ fraction of inputs. We show for every PTF f of degree d≥ 1, and parameters 0<δ, r≤ 1/16, that Pr_{ρ∼ R_{r}} [f_{ρ} is not δ-close to constant] ≤ √r · (log r^{−1} · log δ^{−1})^{O(d^{2})}, where ρ∼ R_{r} is a random restriction leaving each variable, independently, free with probability r, and otherwise assigning it 1 or −1 uniformly at random. In fact, we show a more general result for random block restrictions: given an arbitrary partitioning of input variables into m blocks, a random block restriction picks a uniformly random block ℓ∈ [m] and assigns 1 or −1, uniformly at random, to all variables outside the chosen block ℓ. We prove the Block Restriction Lemma saying that a PTF f of degree d becomes δ-close to constant when hit with a random block restriction, except with probability at most m^{−1/2} · (log m · log δ^{−1})^{O(d^{2})}. As an application of our Restriction Lemma, we prove lower bounds against constant-depth circuits with PTF gates of any degree 1≤ d≪ √(log n/log log n), generalizing the recent bounds against constant-depth circuits with linear threshold gates (LTF gates) proved by Kane and Williams (STOC, 2016) and Chen, Santhanam, and Srinivasan (CCC, 2016). In particular, we show that there is an n-variate boolean function F_{n} ∈ P such that every depth-2 circuit with PTF gates of degree d≥ 1 that computes F_{n} must have at least n^{3/2+1/d} · (log n)^{−O(d^{2})} wires. 
For constant depths greater than 2, we also show average-case lower bounds for such circuits with a superlinear number of wires. These are the first superlinear bounds on the number of wires for circuits with PTF gates. We also give short proofs of the optimal-exponent average sensitivity bound for degree-d PTFs due to Kane (Computational Complexity, 2014), and the Littlewood-Offord type anticoncentration bound for degree-d multilinear polynomials due to Meka, Nguyen, and Vu (Theory of Computing, 2016). Finally, we give derandomized versions of our Block Restriction Lemma and Littlewood-Offord type anticoncentration bounds, using a pseudorandom generator for PTFs due to Meka and Zuckerman (SICOMP, 2013). @InProceedings{STOC17p615, author = {Valentine Kabanets and Daniel M. Kane and Zhenjian Lu}, title = {A Polynomial Restriction Lemma with Applications}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {615--628}, doi = {}, year = {2017}, } 

Kalai, Yael 
STOC '17: "Non-interactive Delegation ..."
Non-interactive Delegation and Batch NP Verification from Standard Computational Assumptions
Zvika Brakerski, Justin Holmgren, and Yael Kalai (Weizmann Institute of Science, Israel; Massachusetts Institute of Technology, USA; Microsoft Research, USA) We present an adaptive and non-interactive protocol for verifying arbitrary efficient computations in fixed polynomial time. Our protocol is computationally sound and can be based on any computational PIR scheme, which in turn can be based on standard polynomial-time cryptographic assumptions (e.g., the worst-case hardness of polynomial-factor approximation of short-vector lattice problems). In our protocol, the verifier sets up a public key ahead of time, and this key can be used by any prover to prove arbitrary statements by simply sending a proof to the verifier. Verification is done using a secret verification key, and soundness relies on this key not being known to the prover. Our protocol further allows proving statements about computations of arbitrary RAM machines. Previous works either relied on knowledge assumptions, or could only offer non-adaptive two-message protocols (where the first message could not be reused), and required either obfuscation-based assumptions or superpolynomial hardness assumptions. We show that our techniques can also be applied to construct a new type of (non-adaptive) 2-message argument for batch NP statements. Specifically, we can simultaneously prove (with computational soundness) the membership of multiple instances in a given NP language, with communication complexity proportional to the length of a single witness. @InProceedings{STOC17p474, author = {Zvika Brakerski and Justin Holmgren and Yael Kalai}, title = {Non-interactive Delegation and Batch NP Verification from Standard Computational Assumptions}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {474--482}, doi = {}, year = {2017}, } 

Kane, Daniel M. 
STOC '17: "A Polynomial Restriction Lemma ..."
A Polynomial Restriction Lemma with Applications
Valentine Kabanets, Daniel M. Kane, and Zhenjian Lu (Simon Fraser University, Canada; University of California at San Diego, USA) A polynomial threshold function (PTF) of degree d is a boolean function of the form f=sgn(p), where p is a degree-d polynomial, and sgn is the sign function. The main result of the paper is an almost optimal bound on the probability that a random restriction of a PTF is not close to a constant function, where a boolean function g is called δ-close to constant if, for some v∈{1,−1}, we have g(x)=v for all but at most a δ fraction of inputs. We show for every PTF f of degree d≥ 1, and parameters 0<δ, r≤ 1/16, that Pr_{ρ∼ R_{r}} [f_{ρ} is not δ-close to constant] ≤ √r · (log r^{−1} · log δ^{−1})^{O(d^{2})}, where ρ∼ R_{r} is a random restriction leaving each variable, independently, free with probability r, and otherwise assigning it 1 or −1 uniformly at random. In fact, we show a more general result for random block restrictions: given an arbitrary partitioning of input variables into m blocks, a random block restriction picks a uniformly random block ℓ∈ [m] and assigns 1 or −1, uniformly at random, to all variables outside the chosen block ℓ. We prove the Block Restriction Lemma saying that a PTF f of degree d becomes δ-close to constant when hit with a random block restriction, except with probability at most m^{−1/2} · (log m · log δ^{−1})^{O(d^{2})}. As an application of our Restriction Lemma, we prove lower bounds against constant-depth circuits with PTF gates of any degree 1≤ d≪ √(log n/log log n), generalizing the recent bounds against constant-depth circuits with linear threshold gates (LTF gates) proved by Kane and Williams (STOC, 2016) and Chen, Santhanam, and Srinivasan (CCC, 2016). In particular, we show that there is an n-variate boolean function F_{n} ∈ P such that every depth-2 circuit with PTF gates of degree d≥ 1 that computes F_{n} must have at least n^{3/2+1/d} · (log n)^{−O(d^{2})} wires. 
For constant depths greater than 2, we also show average-case lower bounds for such circuits with a superlinear number of wires. These are the first superlinear bounds on the number of wires for circuits with PTF gates. We also give short proofs of the optimal-exponent average sensitivity bound for degree-d PTFs due to Kane (Computational Complexity, 2014), and the Littlewood-Offord type anticoncentration bound for degree-d multilinear polynomials due to Meka, Nguyen, and Vu (Theory of Computing, 2016). Finally, we give derandomized versions of our Block Restriction Lemma and Littlewood-Offord type anticoncentration bounds, using a pseudorandom generator for PTFs due to Meka and Zuckerman (SICOMP, 2013). @InProceedings{STOC17p615, author = {Valentine Kabanets and Daniel M. Kane and Zhenjian Lu}, title = {A Polynomial Restriction Lemma with Applications}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {615--628}, doi = {}, year = {2017}, } 

Kapralov, Michael 
STOC '17: "An Adaptive Sublinear-Time ..."
An Adaptive Sublinear-Time Block Sparse Fourier Transform
Volkan Cevher, Michael Kapralov, Jonathan Scarlett, and Amir Zandieh (EPFL, Switzerland) The problem of approximately computing the k dominant Fourier coefficients of a vector X quickly, and using few samples in the time domain, is known as the Sparse Fourier Transform (sparse FFT) problem. A long line of work on the sparse FFT has resulted in algorithms with O(k log n log(n/k)) runtime [Hassanieh et al., STOC’12] and O(k log n) sample complexity [Indyk et al., FOCS’14]. This paper revisits the sparse FFT problem with the added twist that the sparse coefficients approximately obey a (k_{0},k_{1})-block sparse model. In this model, signal frequencies are clustered in k_{0} intervals with width k_{1} in Fourier space, and k = k_{0}k_{1} is the total sparsity. Our main result is the first sparse FFT algorithm for (k_{0},k_{1})-block sparse signals with a sample complexity of O^{*}(k_{0}k_{1} + k_{0} log(1+k_{0}) log n) at constant signal-to-noise ratios, and sublinear runtime. Our algorithm crucially uses adaptivity to achieve the improved sample complexity bound, and we provide a lower bound showing that this is essential in the Fourier setting: any non-adaptive algorithm must use Ω(k_{0}k_{1} log(n/(k_{0}k_{1}))) samples for the (k_{0},k_{1})-block sparse model, ruling out improvements over the vanilla sparsity assumption. Our main technical innovation for adaptivity is a new randomized energy-based importance sampling technique that may be of independent interest. @InProceedings{STOC17p702, author = {Volkan Cevher and Michael Kapralov and Jonathan Scarlett and Amir Zandieh}, title = {An Adaptive Sublinear-Time Block Sparse Fourier Transform}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {702--715}, doi = {}, year = {2017}, } 

Karczmarz, Adam 
STOC '17: "Decremental Single-Source ..."
Decremental Single-Source Reachability in Planar Digraphs
Giuseppe F. Italiano, Adam Karczmarz, Jakub Łącki, and Piotr Sankowski (University of Rome Tor Vergata, Italy; University of Warsaw, Poland; Google Research, USA) In this paper we show a new algorithm for the decremental single-source reachability problem in directed planar graphs. It processes any sequence of edge deletions in O(n log^{2} n log log n) total time and explicitly maintains the set of vertices reachable from a fixed source vertex. Hence, if all edges are eventually deleted, the amortized time of processing each edge deletion is only O(log^{2} n log log n), which improves upon a previously known O(√n) solution. We also show an algorithm for decremental maintenance of strongly connected components in directed planar graphs with the same total update time. These results constitute the first almost optimal (up to polylogarithmic factors) algorithms for both problems. To the best of our knowledge, these are the first dynamic algorithms with polylogarithmic update times on general directed planar graphs for nontrivial reachability-type problems, for which only polynomial bounds are known in general graphs. @InProceedings{STOC17p1108, author = {Giuseppe F. Italiano and Adam Karczmarz and Jakub Łącki and Piotr Sankowski}, title = {Decremental Single-Source Reachability in Planar Digraphs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1108--1121}, doi = {}, year = {2017}, } 

Karlin, Anna R. 
STOC '17: "Stability of Service under ..."
Stability of Service under Time-of-Use Pricing
Shuchi Chawla, Nikhil R. Devanur, Alexander E. Holroyd, Anna R. Karlin, James B. Martin, and Balasubramanian Sivan (University of Wisconsin-Madison, USA; Microsoft Research, USA; University of Washington, USA; University of Oxford, UK; Google Research, USA) We consider time-of-use pricing as a technique for matching supply and demand of temporal resources with the goal of maximizing social welfare. Relevant examples include energy, computing resources on a cloud computing platform, and charging stations for electric vehicles, among many others. A client/job in this setting has a window of time during which he needs service, and a particular value for obtaining it. We assume a stochastic model for demand, where each job materializes with some probability via an independent Bernoulli trial. Given a per-time-unit pricing of resources, any realized job will first try to get served by the cheapest available resource in its window and, failing that, will try to find service at the next cheapest available resource, and so on. Thus, the natural stochastic fluctuations in demand have the potential to lead to cascading overload events. Our main result shows that setting prices so as to optimally handle the expected demand works well: with high probability, when the actual demand is instantiated, the system is stable and the expected value of the jobs served is very close to that of the optimal offline algorithm. @InProceedings{STOC17p184, author = {Shuchi Chawla and Nikhil R. Devanur and Alexander E. Holroyd and Anna R. Karlin and James B. Martin and Balasubramanian Sivan}, title = {Stability of Service under Time-of-Use Pricing}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {184--197}, doi = {}, year = {2017}, } 

Kelner, Jonathan 
STOC '17: "Almost-Linear-Time Algorithms ..."
Almost-Linear-Time Algorithms for Markov Chains and New Spectral Primitives for Directed Graphs
Michael B. Cohen, Jonathan Kelner, John Peebles, Richard Peng, Anup B. Rao, Aaron Sidford, and Adrian Vladu (Massachusetts Institute of Technology, USA; Georgia Institute of Technology, USA; Stanford University, USA) In this paper, we begin to address the longstanding algorithmic gap between general and reversible Markov chains. We develop directed analogues of several spectral graph-theoretic tools that had previously been available only in the undirected setting, and for which it was not clear that directed versions even existed. In particular, we provide a notion of approximation for directed graphs, prove that sparsifiers under this notion always exist, and show how to construct them in almost linear time. Using this notion of approximation, we design the first almost-linear-time directed Laplacian system solver, and, by leveraging the recent framework of [Cohen-Kelner-Peebles-Peng-Sidford-Vladu, FOCS’16], we also obtain almost-linear-time algorithms for computing the stationary distribution of a Markov chain, computing expected commute times in a directed graph, and more. For each problem, our algorithms improve the previous best running times of O((nm^{3/4} + n^{2/3} m) log^{O(1)} (n κ є^{−1})) to O((m + n·2^{O(√(log n loglog n))}) log^{O(1)} (n κ є^{−1})), where n is the number of vertices in the graph, m is the number of edges, κ is a natural condition number associated with the problem, and є is the desired accuracy. We hope these results open the door for further studies into directed spectral graph theory, and that they will serve as a stepping stone for designing a new generation of fast algorithms for directed graphs. @InProceedings{STOC17p410, author = {Michael B. Cohen and Jonathan Kelner and John Peebles and Richard Peng and Anup B. Rao and Aaron Sidford and Adrian Vladu}, title = {Almost-Linear-Time Algorithms for Markov Chains and New Spectral Primitives for Directed Graphs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {410--419}, doi = {}, year = {2017}, } 

Khot, Subhash 
STOC '17: "On Independent Sets, 2-to-2 ..."
On Independent Sets, 2-to-2 Games, and Grassmann Graphs
Subhash Khot, Dor Minzer, and Muli Safra (New York University, USA; Tel Aviv University, Israel) We present a candidate reduction from the 3-Lin problem to the 2-to-2 Games problem and present a combinatorial hypothesis about Grassmann graphs which, if correct, is sufficient to show the soundness of the reduction in a certain non-standard sense. A reduction that is sound in this non-standard sense implies that it is NP-hard to distinguish whether an n-vertex graph has an independent set of size (1 − 1/√2)n − o(n) or whether every independent set has size o(n), and consequently, that it is NP-hard to approximate the Vertex Cover problem within a factor √2 − o(1). @InProceedings{STOC17p576, author = {Subhash Khot and Dor Minzer and Muli Safra}, title = {On Independent Sets, 2-to-2 Games, and Grassmann Graphs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {576--589}, doi = {}, year = {2017}, } 

Khoussainov, Bakhadyr 
STOC '17: "Deciding Parity Games in Quasipolynomial ..."
Deciding Parity Games in Quasipolynomial Time
Cristian S. Calude, Sanjay Jain, Bakhadyr Khoussainov, Wei Li, and Frank Stephan (University of Auckland, New Zealand; National University of Singapore, Singapore) It is shown that the parity game can be solved in quasipolynomial time. The parameterised parity game – with n nodes and m distinct values (aka colours or priorities) – is proven to be in the class of fixed parameter tractable (FPT) problems when parameterised over m. Both results improve known bounds, from runtime n^{O(√n)} to O(n^{log(m)+6}) and from an XP algorithm with runtime O(n^{Θ(m)}) for fixed parameter m to an FPT algorithm with runtime O(n^{5}) + g(m), for some function g depending on m only. As an application it is proven that coloured Muller games with n nodes and m colours can be decided in time O((m^{m} · n)^{5}); it is also shown that this bound cannot be improved to O((2^{m} · n)^{c}), for any c, unless FPT = W[1]. @InProceedings{STOC17p252, author = {Cristian S. Calude and Sanjay Jain and Bakhadyr Khoussainov and Wei Li and Frank Stephan}, title = {Deciding Parity Games in Quasipolynomial Time}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {252--263}, doi = {}, year = {2017}, } 

Kim, David H. K. 
STOC '17: "New Hardness Results for Routing ..."
New Hardness Results for Routing on Disjoint Paths
Julia Chuzhoy, David H. K. Kim, and Rachit Nimavat (Toyota Technological Institute at Chicago, USA; University of Chicago, USA) In the classical Node-Disjoint Paths (NDP) problem, the input consists of an undirected n-vertex graph G, and a collection M = {(s_{1},t_{1}),…,(s_{k},t_{k})} of pairs of its vertices, called source-destination, or demand, pairs. The goal is to route the largest possible number of the demand pairs via node-disjoint paths. The best current approximation for the problem is achieved by a simple greedy algorithm, whose approximation factor is O(√n), while the best current negative result is an Ω(log^{1/2−δ} n)-hardness of approximation for any constant δ, under standard complexity assumptions. Even seemingly simple special cases of the problem are still poorly understood: when the input graph is a grid, the best current algorithm achieves an Õ(n^{1/4})-approximation, and when it is a general planar graph, the best current approximation ratio of an efficient algorithm is Õ(n^{9/19}). The best currently known lower bound for both these versions of the problem is APX-hardness. In this paper we prove that NDP is 2^{Ω(√log n)}-hard to approximate, unless all problems in NP have algorithms with running time n^{O(log n)}. Our result holds even when the underlying graph is a planar graph with maximum vertex degree 4, and all source vertices lie on the boundary of a single face (but the destination vertices may lie anywhere in the graph). We extend this result to the closely related Edge-Disjoint Paths problem, showing the same hardness of approximation ratio even for subcubic planar graphs with all sources lying on the boundary of a single face. @InProceedings{STOC17p86, author = {Julia Chuzhoy and David H. K. Kim and Rachit Nimavat}, title = {New Hardness Results for Routing on Disjoint Paths}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {86--99}, doi = {}, year = {2017}, } 

Kleinberg, Robert 
STOC '17: "Bernoulli Factories and Black-Box ..."
Bernoulli Factories and Black-Box Reductions in Mechanism Design
Shaddin Dughmi, Jason D. Hartline, Robert Kleinberg, and Rad Niazadeh (University of Southern California, USA; Northwestern University, USA; Cornell University, USA) We provide a polynomial-time reduction from Bayesian incentive-compatible mechanism design to Bayesian algorithm design for welfare maximization problems. Unlike prior results, our reduction achieves exact incentive compatibility for problems with multidimensional and continuous type spaces. The key technical barrier preventing exact incentive compatibility in prior black-box reductions is that repairing violations of incentive constraints requires understanding the distribution of the mechanism’s output, which is typically #P-hard to compute. Reductions that instead estimate the output distribution by sampling inevitably suffer from sampling error, which typically precludes exact incentive compatibility. We overcome this barrier by employing and generalizing the computational model in the literature on “Bernoulli Factories”. In a Bernoulli factory problem, one is given a function mapping the bias of an “input coin” to that of an “output coin”, and the challenge is to efficiently simulate the output coin given only sample access to the input coin. We consider a generalization, which we call the “expectations from samples” computational model, in which a problem instance is specified by a function mapping the expected values of a set of input distributions to a distribution over outcomes. The challenge is to give a polynomial-time algorithm that exactly samples from the distribution over outcomes given only sample access to the input distributions. In this model we give a polynomial-time algorithm for the function given by “exponential weights”: expected values of the input distributions correspond to the weights of alternatives and we wish to select an alternative with probability proportional to its weight. 
This algorithm is the key ingredient in designing an incentive-compatible mechanism for bipartite matching, which can be used to make the approximately incentive-compatible reduction of Hartline-Malekian-Kleinberg [2015] exactly incentive compatible. @InProceedings{STOC17p158, author = {Shaddin Dughmi and Jason D. Hartline and Robert Kleinberg and Rad Niazadeh}, title = {Bernoulli Factories and Black-Box Reductions in Mechanism Design}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {158--169}, doi = {}, year = {2017}, } STOC '17: "Beating 1-1/e for Ordered ..." Beating 1-1/e for Ordered Prophets Melika Abolhassani, Soheil Ehsani, Hossein Esfandiari, MohammadTaghi HajiAghayi, Robert Kleinberg, and Brendan Lucier (University of Maryland at College Park, USA; Cornell University, USA; Microsoft Research, USA) Hill and Kertz studied the prophet inequality on i.i.d. distributions [The Annals of Probability 1982]. They proved a theoretical bound of 1 − 1/e on the approximation factor of their algorithm. They conjectured that the best approximation factor for arbitrarily large n is 1/(1+1/e) ≃ 0.731. This conjecture had remained open for over 30 years prior to this paper. In this paper we present a threshold-based algorithm for the prophet inequality with n i.i.d. distributions. Using a nontrivial and novel approach we show that our algorithm is a 0.738-approximation algorithm. By beating the bound of 1/(1+1/e), this refutes the conjecture of Hill and Kertz. Moreover, we generalize our results to non-uniform distributions and discuss their applications in mechanism design. @InProceedings{STOC17p61, author = {Melika Abolhassani and Soheil Ehsani and Hossein Esfandiari and MohammadTaghi HajiAghayi and Robert Kleinberg and Brendan Lucier}, title = {Beating 1-1/e for Ordered Prophets}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {61--71}, doi = {}, year = {2017}, } 
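The single-threshold idea behind prophet inequalities is easy to illustrate with a small simulation. This is only a sketch, not the paper's algorithm: it uses Uniform(0,1) samples and the simple rule "accept the first value above τ = 1 − 1/n, else take the last", whereas the 0.738 bound relies on a more refined threshold choice. The names `run_trial` and `estimate_ratio` are illustrative.

```python
import random

def run_trial(rng, n, tau):
    # Observe n i.i.d. Uniform(0,1) values in order; accept the first
    # one exceeding the threshold tau, falling back to the last value.
    samples = [rng.random() for _ in range(n)]
    accepted = next((x for x in samples if x >= tau), samples[-1])
    return accepted, max(samples)

def estimate_ratio(n=50, trials=20000, seed=1):
    rng = random.Random(seed)
    tau = 1.0 - 1.0 / n  # chosen so that P(X >= tau) = 1/n for Uniform(0,1)
    got = best = 0.0
    for _ in range(trials):
        a, m = run_trial(rng, n, tau)
        got += a
        best += m
    return got / best  # algorithm's value vs. the prophet's E[max]

ratio = estimate_ratio()
```

For this particular distribution the empirical ratio lands well above the worst-case guarantees discussed in the abstract; worst-case instances require more adversarial distributions.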

Knudsen, Mathias Bæk Tejs 
STOC '17: "Finding Even Cycles Faster ..."
Finding Even Cycles Faster via Capped k-Walks
Søren Dahlgaard, Mathias Bæk Tejs Knudsen, and Morten Stöckel (University of Copenhagen, Denmark) Finding cycles in graphs is a fundamental problem in algorithmic graph theory. In this paper, we consider the problem of finding and reporting a cycle of length 2k in an undirected graph G with n nodes and m edges for constant k ≥ 2. A classic result by Bondy and Simonovits [J. Combinatorial Theory, 1974] implies that if m ≥ 100k n^{1+1/k}, then G contains a 2k-cycle, further implying that one needs to consider only graphs with m = O(n^{1+1/k}). Previously the best known algorithms were an O(n^{2}) algorithm due to Yuster and Zwick [J. Discrete Math 1997] as well as an O(m^{2−(1+⌈k/2⌉^{−1})/(k+1)}) algorithm by Alon et al. [Algorithmica 1997]. We present an algorithm that uses O(m^{2k/(k+1)}) time and finds a 2k-cycle if one exists. This bound is O(n^{2}) exactly when m = Θ(n^{1+1/k}). When finding 4-cycles our new bound coincides with that of Alon et al., while for every k > 2 our new bound yields a polynomial improvement in m. Yuster and Zwick noted that it is “plausible to conjecture that O(n^{2}) is the best possible bound in terms of n”. We show “conditional optimality”: if this hypothesis holds then our O(m^{2k/(k+1)}) algorithm is tight as well. Furthermore, a folklore reduction implies that no combinatorial algorithm can determine if a graph contains a 6-cycle in time O(m^{3/2−ε}) for any ε > 0 unless boolean matrix multiplication can be solved combinatorially in time O(n^{3−ε′}) for some ε′ > 0, which is widely believed to be false. Coupled with our main result, this gives tight bounds for finding 6-cycles combinatorially and also separates the complexity of finding 4- and 6-cycles, giving evidence that the exponent of m in the running time should indeed increase with k. The key ingredient in our algorithm is a new notion of capped k-walks, which are walks of length k that visit only nodes according to a fixed ordering. 
Our main technical contribution is an involved analysis proving several properties of such walks which may be of independent interest. @InProceedings{STOC17p112, author = {Søren Dahlgaard and Mathias Bæk Tejs Knudsen and Morten Stöckel}, title = {Finding Even Cycles Faster via Capped k-Walks}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {112--120}, doi = {}, year = {2017}, } 

Kobayashi, Yusuke 
STOC '17: "A Weighted Linear Matroid ..."
A Weighted Linear Matroid Parity Algorithm
Satoru Iwata and Yusuke Kobayashi (University of Tokyo, Japan; University of Tsukuba, Japan) The matroid parity (or matroid matching) problem, introduced as a common generalization of the matching and matroid intersection problems, is so general that it requires an exponential number of oracle calls. Lovász (1980) showed that this problem admits a min-max formula and a polynomial algorithm for linearly represented matroids. Since then, efficient algorithms have been developed for the linear matroid parity problem. In this paper, we present a combinatorial, deterministic, strongly polynomial algorithm for the weighted linear matroid parity problem. The algorithm builds on a polynomial matrix formulation using the Pfaffian and adopts a primal-dual approach with the aid of the augmenting path algorithm of Gabow and Stallmann (1986) for the unweighted problem. @InProceedings{STOC17p264, author = {Satoru Iwata and Yusuke Kobayashi}, title = {A Weighted Linear Matroid Parity Algorithm}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {264--276}, doi = {}, year = {2017}, } 

Kokainis, Martins 
STOC '17: "Quantum Algorithm for Tree ..."
Quantum Algorithm for Tree Size Estimation, with Applications to Backtracking and 2-Player Games
Andris Ambainis and Martins Kokainis (University of Latvia, Latvia) We study quantum algorithms on search trees of unknown structure, in a model where the tree can be discovered by local exploration. That is, we are given the root of the tree and access to a black box which, given a vertex v, outputs the children of v. We construct a quantum algorithm which, given such access to a search tree of depth at most n, estimates the size T of the tree within a factor of 1 ± δ in Õ(√(nT)) steps. More generally, the same algorithm can be used to estimate the size of directed acyclic graphs (DAGs) in a similar model. We then show two applications of this result: a) We show how to transform a classical backtracking search algorithm which examines T nodes of a search tree into an Õ(√T n^{3/2})-time quantum algorithm, improving over an earlier quantum backtracking algorithm of Montanaro (arXiv:1509.02374). b) We give a quantum algorithm for evaluating AND-OR formulas in a model where the formula can be discovered by local exploration (modeling position trees in 2-player games), which evaluates formulas of size T and depth T^{o(1)} in time O(T^{1/2+o(1)}). Thus, the quantum speedup is essentially the same as in the case when the formula is known in advance. @InProceedings{STOC17p989, author = {Andris Ambainis and Martins Kokainis}, title = {Quantum Algorithm for Tree Size Estimation, with Applications to Backtracking and 2-Player Games}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {989--1002}, doi = {}, year = {2017}, } 

Kol, Gillat 
STOC '17: "Time-Space Hardness of Learning ..."
Time-Space Hardness of Learning Sparse Parities
Gillat Kol, Ran Raz, and Avishay Tal (Princeton University, USA; IAS, USA) We define a concept class F to be time-space hard (or memory-samples hard) if any learning algorithm for F requires either a memory of size superlinear in n or a number of samples superpolynomial in n, where n is the length of one sample. A recent work shows that the class of all parity functions is time-space hard [Raz, FOCS’16]. Building on [Raz, FOCS’16], we show that the class of all sparse parities of Hamming weight ℓ is time-space hard, as long as ℓ ≥ ω(log n / loglog n). Consequently, linear-size DNF Formulas, linear-size Decision Trees, and logarithmic-size Juntas are all time-space hard. Our result is more general and provides time-space lower bounds for learning any concept class of parity functions. We give applications of our results in the field of bounded-storage cryptography. For example, for every ω(log n) ≤ k ≤ n, we obtain an encryption scheme that requires a private key of length k, and time complexity of n per encryption/decryption of each bit, and is provably and unconditionally secure as long as the attacker uses at most o(nk) memory bits and the scheme is used at most 2^{o(k)} times. Previously, this was known only for k = n [Raz, FOCS’16]. @InProceedings{STOC17p1067, author = {Gillat Kol and Ran Raz and Avishay Tal}, title = {Time-Space Hardness of Learning Sparse Parities}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1067--1080}, doi = {}, year = {2017}, } 

Kopelowitz, Tsvi 
STOC '17: "Exponential Separations in ..."
Exponential Separations in the Energy Complexity of Leader Election
Yi-Jun Chang, Tsvi Kopelowitz, Seth Pettie, Ruosong Wang, and Wei Zhan (University of Michigan, USA; Tsinghua University, China) Energy is often the most constrained resource for battery-powered wireless devices, and the lion’s share of energy is often spent on transceiver usage (sending/receiving packets), not on computation. In this paper we study the energy complexity of Leader Election and Approximate Counting in several models of wireless radio networks. It turns out that energy complexity is very sensitive to whether the devices can generate random bits and to their ability to detect collisions. We consider four collision-detection models: Strong-CD (in which transmitters and listeners detect collisions), Sender-CD and Receiver-CD (in which only transmitters or only listeners detect collisions), and No-CD (in which no one detects collisions). The takeaway message of our results is quite surprising. For randomized Leader Election algorithms, there is an exponential gap between the energy complexity of Sender-CD and Receiver-CD: No-CD = Sender-CD ≫ Receiver-CD = Strong-CD, and for deterministic Leader Election algorithms, there is another exponential gap in energy complexity, but in the reverse direction: No-CD = Receiver-CD ≫ Sender-CD = Strong-CD. In particular, the randomized energy complexity of Leader Election is Θ(log^{*} n) in Sender-CD but Θ(log(log^{*} n)) in Receiver-CD, where n is the (unknown) number of devices. Its deterministic complexity is Θ(log N) in Receiver-CD but Θ(loglog N) in Sender-CD, where N is the (known) size of the devices’ ID space. There is a tradeoff between time and energy. We give a new upper bound on the time-energy tradeoff curve for randomized Leader Election and Approximate Counting. A critical component of this algorithm is a new deterministic Leader Election algorithm for dense instances, when n = Θ(N), with inverse-Ackermann-type (O(α(N))) energy complexity. 
@InProceedings{STOC17p771, author = {Yi-Jun Chang and Tsvi Kopelowitz and Seth Pettie and Ruosong Wang and Wei Zhan}, title = {Exponential Separations in the Energy Complexity of Leader Election}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {771--783}, doi = {}, year = {2017}, } 

Kothari, Pravesh K. 
STOC '17: "Quantum Entanglement, Sum ..."
Quantum Entanglement, Sum of Squares, and the Log Rank Conjecture
Boaz Barak, Pravesh K. Kothari, and David Steurer (Harvard University, USA; Princeton University, USA; IAS, USA; Cornell University, USA) For every constant є > 0, we give an exp(Õ(√n))-time algorithm for the 1 vs 1−є Best Separable State (BSS) problem of distinguishing, given an n^{2} × n^{2} matrix corresponding to a quantum measurement, between the case that there is a separable (i.e., non-entangled) state ρ that accepts with probability 1, and the case that every separable state is accepted with probability at most 1−є. Equivalently, our algorithm takes the description of a subspace W ⊆ F^{n^{2}} (where F can be either the real or complex field) and distinguishes between the case that W contains a rank-one matrix, and the case that every rank-one matrix is at least є-far (in ℓ_{2} distance) from W. To the best of our knowledge, this is the first improvement over the brute-force exp(n)-time algorithm for this problem. Our algorithm is based on the sum-of-squares hierarchy and its analysis is inspired by Lovett’s proof (STOC ’14, JACM ’16) that the communication complexity of every rank-n Boolean matrix is bounded by Õ(√n). @InProceedings{STOC17p975, author = {Boaz Barak and Pravesh K. Kothari and David Steurer}, title = {Quantum Entanglement, Sum of Squares, and the Log Rank Conjecture}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {975--988}, doi = {}, year = {2017}, } STOC '17: "Approximating Rectangles by ..." Approximating Rectangles by Juntas and Weakly-Exponential Lower Bounds for LP Relaxations of CSPs Pravesh K. Kothari, Raghu Meka, and Prasad Raghavendra (Princeton University, USA; IAS, USA; University of California at Los Angeles, USA; University of California at Berkeley, USA) We show that for constraint satisfaction problems (CSPs), subexponential size linear programming relaxations are as powerful as n^{Ω(1)} rounds of the Sherali-Adams linear programming hierarchy. 
As a corollary, we obtain subexponential size lower bounds for linear programming relaxations that beat random guessing for many CSPs such as MAX-CUT and MAX-3-SAT. This is a nearly-exponential improvement over previous results; previously, the best known lower bounds were quasipolynomial in n (Chan, Lee, Raghavendra, Steurer 2013). Our bounds are obtained by exploiting and extending the recent progress in communication complexity for “lifting” query lower bounds to communication problems. The main ingredient in our results is a new structural result on “high-entropy rectangles” that may be of independent interest in communication complexity. @InProceedings{STOC17p590, author = {Pravesh K. Kothari and Raghu Meka and Prasad Raghavendra}, title = {Approximating Rectangles by Juntas and Weakly-Exponential Lower Bounds for LP Relaxations of CSPs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {590--603}, doi = {}, year = {2017}, } STOC '17: "Sum of Squares Lower Bounds ..." Sum of Squares Lower Bounds for Refuting any CSP Pravesh K. Kothari, Ryuhei Mori, Ryan O'Donnell, and David Witmer (Princeton University, USA; IAS, USA; Tokyo Institute of Technology, Japan; Carnegie Mellon University, USA) Let P : {0,1}^{k} → {0,1} be a nontrivial k-ary predicate. Consider a random instance of the constraint satisfaction problem CSP(P) on n variables with Δn constraints, each being P applied to k randomly chosen literals. Provided the constraint density satisfies Δ ≫ 1, such an instance is unsatisfiable with high probability. The refutation problem is to efficiently find a proof of unsatisfiability. We show that whenever the predicate P supports a t-wise uniform probability distribution on its satisfying assignments, the sum of squares (SOS) algorithm of degree d = Θ(n/Δ^{2/(t−1)} log Δ) (which runs in time n^{O(d)}) cannot refute a random instance of CSP(P). 
In particular, the polynomial-time SOS algorithm requires Ω(n^{(t+1)/2}) constraints to refute random instances of CSP(P) when P supports a t-wise uniform distribution on its satisfying assignments. Together with recent work of Lee et al. (Lee, Raghavendra, Steurer 2015), our result also implies that any polynomial-size semidefinite programming relaxation for refutation requires at least Ω(n^{(t+1)/2}) constraints. More generally, we consider the δ-refutation problem, in which the goal is to certify that at most a (1−δ)-fraction of constraints can be simultaneously satisfied. We show that if P is δ-close to supporting a t-wise uniform distribution on satisfying assignments, then the degree-Ω(n/Δ^{2/(t−1)} log Δ) SOS algorithm cannot (δ+o(1))-refute a random instance of CSP(P). This is the first result to show a distinction between the degree SOS needs to solve the refutation problem and the degree it needs to solve the harder δ-refutation problem. Our results (which also extend with no change to CSPs over larger alphabets) subsume all previously known lower bounds for semialgebraic refutation of random CSPs. For every constraint predicate P, they give a three-way hardness tradeoff between the density of constraints, the SOS degree (hence running time), and the strength of the refutation. By recent algorithmic results of Allen, O’Donnell, Witmer (2015) and Raghavendra, Rao, Schramm (2016), this full three-way tradeoff is tight, up to lower-order factors. @InProceedings{STOC17p132, author = {Pravesh K. Kothari and Ryuhei Mori and Ryan O'Donnell and David Witmer}, title = {Sum of Squares Lower Bounds for Refuting any CSP}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {132--145}, doi = {}, year = {2017}, } 

Krauthgamer, Robert 
STOC '17: "Streaming Symmetric Norms ..."
Streaming Symmetric Norms via Measure Concentration
Jarosław Błasiok, Vladimir Braverman, Stephen R. Chestnut, Robert Krauthgamer, and Lin F. Yang (Harvard University, USA; Johns Hopkins University, USA; ETH Zurich, Switzerland; Weizmann Institute of Science, Israel) We characterize the streaming space complexity of every symmetric norm l (a norm on ℝ^{n} invariant under sign-flips and coordinate-permutations), by relating this space complexity to the measure-concentration characteristics of l. Specifically, we provide nearly matching upper and lower bounds on the space complexity of calculating a (1±є)-approximation to the norm of the stream, for every 0 < є ≤ 1/2. (The bounds match up to (є^{−1} log n) factors.) We further extend those bounds to any large approximation ratio D ≥ 1.1, showing that the decrease in space complexity is proportional to D^{2}, and that this factor is the best possible. All of the bounds depend on the median of l(x) when x is drawn uniformly from the l_{2} unit sphere. The same median governs many phenomena in high-dimensional spaces, such as large-deviation bounds and the critical dimension in Dvoretzky’s Theorem. The family of symmetric norms contains several well-studied norms, such as all l_{p} norms, and indeed we provide a new explanation for the disparity in space complexity between p ≤ 2 and p > 2. In addition, we apply our general results to easily derive bounds for several norms that were not studied before in the streaming model, including the top-k norm and the k-support norm, which was recently employed for machine learning tasks. Overall, these results make progress on two outstanding problems in the area of sublinear algorithms (Problems 5 and 30 in http://sublinear.info). @InProceedings{STOC17p716, author = {Jarosław Błasiok and Vladimir Braverman and Stephen R. Chestnut and Robert Krauthgamer and Lin F. Yang}, title = {Streaming Symmetric Norms via Measure Concentration}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {716--729}, doi = {}, year = {2017}, } 
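The median of l(x) over the l_{2} unit sphere, which governs the characterization above, is easy to probe empirically. A minimal Monte Carlo sketch (the dimensions, sample counts, and function names are illustrative assumptions, not taken from the paper):

```python
import math
import random

def random_unit_vector(rng, n):
    # A normalized Gaussian vector is uniform on the l_2 unit sphere.
    g = [rng.gauss(0.0, 1.0) for _ in range(n)]
    s = math.sqrt(sum(x * x for x in g))
    return [x / s for x in g]

def median_of_norm(norm, n, samples=801, seed=0):
    # Empirical median of norm(x) for x uniform on the unit sphere in R^n.
    rng = random.Random(seed)
    vals = sorted(norm(random_unit_vector(rng, n)) for _ in range(samples))
    return vals[samples // 2]

l1 = lambda v: sum(abs(x) for x in v)      # l_1 norm
linf = lambda v: max(abs(x) for x in v)    # l_inf norm

# For x uniform on the sphere in R^n, the median of ||x||_1 grows like
# sqrt(2n/pi), while the median of ||x||_inf shrinks like sqrt(2 ln n / n):
# one norm concentrates high, the other low, hinting at why space
# complexity differs so sharply across symmetric norms.
m1 = median_of_norm(l1, 400)
minf = median_of_norm(linf, 400)
```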

Krishnaswamy, Ravishankar 
STOC '17: "Online and Dynamic Algorithms ..."
Online and Dynamic Algorithms for Set Cover
Anupam Gupta, Ravishankar Krishnaswamy, Amit Kumar, and Debmalya Panigrahi (Carnegie Mellon University, USA; Microsoft Research, India; IIT Delhi, India; Duke University, USA) In this paper, we give new results for the set cover problem in the fully dynamic model. In this model, the set of “active” elements to be covered changes over time. The goal is to maintain a near-optimal solution for the currently active elements, while making few changes in each time step. This model is popular in both dynamic and online algorithms: in the former, the goal is to minimize the update time of the solution, while in the latter, the recourse (number of changes) is bounded. We present generic techniques for the dynamic set cover problem inspired by the classic greedy and primal-dual offline algorithms for set cover. The former leads to a competitive ratio of O(log n_{t}), where n_{t} is the number of currently active elements at time step t, while the latter yields competitive ratios dependent on f_{t}, the maximum number of sets that a currently active element belongs to. We demonstrate that these techniques are useful for obtaining tight results in both settings: update time bounds and limited recourse, exhibiting algorithmic techniques common to these two parallel threads of research. @InProceedings{STOC17p537, author = {Anupam Gupta and Ravishankar Krishnaswamy and Amit Kumar and Debmalya Panigrahi}, title = {Online and Dynamic Algorithms for Set Cover}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {537--550}, doi = {}, year = {2017}, } 
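For reference, the classic offline greedy rule that inspires the dynamic techniques can be stated in a few lines. This sketch is only the static H_n = O(log n) approximation with none of the update-time or recourse machinery from the paper; the names `greedy_set_cover` and `cover` are illustrative.

```python
def greedy_set_cover(universe, sets):
    # Classic offline greedy: repeatedly pick the set that covers the most
    # still-uncovered elements; yields an H_n = O(log n) approximation.
    uncovered = set(universe)
    chosen = []
    while uncovered:
        i, gain = max(
            ((j, len(s & uncovered)) for j, s in enumerate(sets)),
            key=lambda p: p[1],
        )
        if gain == 0:
            raise ValueError("some elements cannot be covered")
        chosen.append(i)
        uncovered -= sets[i]
    return chosen

# Elements {1..5}: greedy first takes {1,2,3} (covers 3), then {4,5}.
cover = greedy_set_cover({1, 2, 3, 4, 5},
                         [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}])
```

In the dynamic model of the abstract, the difficulty is not this selection rule itself but maintaining a comparable guarantee as active elements arrive and depart, with bounded update time or recourse.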

Krzakala, Florent 
STOC '17: "Information-Theoretic Thresholds ..."
Information-Theoretic Thresholds from the Cavity Method
Amin Coja-Oghlan, Florent Krzakala, Will Perkins, and Lenka Zdeborova (Goethe University Frankfurt, Germany; CNRS, France; PSL Research University, France; ENS, France; UPMC, France; University of Birmingham, UK; CEA, France; University of Paris-Saclay, France) Vindicating a sophisticated but non-rigorous physics approach called the cavity method, we establish a formula for the mutual information in statistical inference problems induced by random graphs. This general result implies the conjecture on the information-theoretic threshold in the disassortative stochastic block model [Decelle et al.: Phys. Rev. E (2011)] and allows us to pinpoint the exact condensation phase transition in random constraint satisfaction problems such as random graph coloring, thereby proving a conjecture from [Krzakala et al.: PNAS (2007)]. As a further application we establish the formula for the mutual information in Low-Density Generator Matrix codes as conjectured in [Montanari: IEEE Transactions on Information Theory (2005)]. The proofs provide a conceptual underpinning of the replica symmetric variant of the cavity method, and we expect that the approach will find many future applications. @InProceedings{STOC17p146, author = {Amin Coja-Oghlan and Florent Krzakala and Will Perkins and Lenka Zdeborova}, title = {Information-Theoretic Thresholds from the Cavity Method}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {146--157}, doi = {}, year = {2017}, } 

Kuhn, Fabian 
STOC '17: "On the Complexity of Local ..."
On the Complexity of Local Distributed Graph Problems
Mohsen Ghaffari, Fabian Kuhn, and Yannic Maus (ETH Zurich, Switzerland; University of Freiburg, Germany) This paper is centered on the complexity of graph problems in the wellstudied LOCAL model of distributed computing, introduced by Linial [FOCS ’87]. It is widely known that for many of the classic distributed graph problems (including maximal independent set (MIS) and (Δ+1)vertex coloring), the randomized complexity is at most polylogarithmic in the size n of the network, while the best deterministic complexity is typically 2^{O(√logn)}. Understanding and potentially narrowing down this exponential gap is considered to be one of the central longstanding open questions in the area of distributed graph algorithms. We investigate the problem by introducing a complexitytheoretic framework that allows us to shed some light on the role of randomness in the LOCAL model. We define the SLOCAL model as a sequential version of the LOCAL model. Our framework allows us to prove completeness results with respect to the class of problems which can be solved efficiently in the SLOCAL model, implying that if any of the complete problems can be solved deterministically in logn rounds in the LOCAL model, we can deterministically solve all efficient SLOCALproblems (including MIS and (Δ+1)coloring) in logn rounds in the LOCAL model. Perhaps most surprisingly, we show that a rather rudimentary looking graph coloring problem is complete in the above sense: Color the nodes of a graph with colors red and blue such that each node of sufficiently large polylogarithmic degree has at least one neighbor of each color. The problem admits a trivial zeroround randomized solution. The result can be viewed as showing that the only obstacle to getting efficient determinstic algorithms in the LOCAL model is an efficient algorithm to approximately round fractional values into integer values. 
In addition, our formal framework also allows us to develop polylogarithmic-time randomized distributed algorithms in a simpler way. As a result, we provide a polylog-time distributed approximation scheme for arbitrary distributed covering and packing integer linear programs. @InProceedings{STOC17p784, author = {Mohsen Ghaffari and Fabian Kuhn and Yannic Maus}, title = {On the Complexity of Local Distributed Graph Problems}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {784--797}, doi = {}, year = {2017}, } 
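The trivial zero-round randomized solution mentioned in the abstract is easy to sketch: every node independently flips a fair coin, and a node of degree d then lacks a red or a blue neighbor with probability 2^{1−d}, which is polynomially small once d is logarithmic. A minimal Python illustration (the graph representation and function names are ours, not the paper's):

```python
import random

def random_two_coloring(adj, seed=0):
    """Zero-round randomized solution: every node independently flips a
    fair coin.  A node of degree d misses one of the two colours among
    its neighbours with probability 2^(1-d), so nodes of logarithmic or
    larger degree fail only with polynomially small probability."""
    rng = random.Random(seed)
    return {v: rng.choice(("red", "blue")) for v in adj}

def bad_nodes(adj, colour, min_degree):
    """Nodes of degree >= min_degree lacking a red or a blue neighbour."""
    return [v for v in adj
            if len(adj[v]) >= min_degree
            and len({colour[w] for w in adj[v]}) < 2]
```

Finding such a coloring deterministically and locally is, per the paper's completeness result, as hard as the general derandomization question for the LOCAL model.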

Kumar, Amit 
STOC '17: "Online and Dynamic Algorithms ..."
Online and Dynamic Algorithms for Set Cover
Anupam Gupta, Ravishankar Krishnaswamy, Amit Kumar, and Debmalya Panigrahi (Carnegie Mellon University, USA; Microsoft Research, India; IIT Delhi, India; Duke University, USA) In this paper, we give new results for the set cover problem in the fully dynamic model. In this model, the set of “active” elements to be covered changes over time. The goal is to maintain a near-optimal solution for the currently active elements, while making few changes in each timestep. This model is popular in both dynamic and online algorithms: in the former, the goal is to minimize the update time of the solution, while in the latter, the recourse (number of changes) is bounded. We present generic techniques for the dynamic set cover problem inspired by the classic greedy and primal-dual offline algorithms for set cover. The former leads to a competitive ratio of O(log n_{t}), where n_{t} is the number of currently active elements at timestep t, while the latter yields competitive ratios dependent on f_{t}, the maximum number of sets that a currently active element belongs to. We demonstrate that these techniques are useful for obtaining tight results in both settings: update time bounds and limited recourse, exhibiting algorithmic techniques common to these two parallel threads of research. @InProceedings{STOC17p537, author = {Anupam Gupta and Ravishankar Krishnaswamy and Amit Kumar and Debmalya Panigrahi}, title = {Online and Dynamic Algorithms for Set Cover}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {537--550}, doi = {}, year = {2017}, } 
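For context, the classic offline greedy algorithm that inspires the first of the two techniques can be sketched in a few lines. This is the textbook H(n)-approximation, not the paper's dynamic algorithm; names are illustrative:

```python
def greedy_set_cover(universe, sets):
    """Textbook greedy: repeatedly take the set covering the most
    still-uncovered elements.  Achieves an H(n) <= ln n + 1 factor,
    the benchmark the dynamic variants compete with."""
    uncovered = set(universe)
    cover = []
    while uncovered:
        best = max(sets, key=lambda s: len(uncovered & s))
        if not uncovered & best:
            raise ValueError("universe is not coverable by the given sets")
        cover.append(best)
        uncovered -= best
    return cover
```

The dynamic challenge addressed by the paper is to maintain a solution of comparable quality while elements arrive and depart, without recomputing from scratch.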

Kuperberg, Greg 
STOC '17: "The Computational Complexity ..."
The Computational Complexity of Ball Permutations
Scott Aaronson, Adam Bouland, Greg Kuperberg, and Saeed Mehraban (University of Texas at Austin, USA; Massachusetts Institute of Technology, USA; University of California at Davis, USA) We define several models of computation based on permuting distinguishable particles (which we call balls) and characterize their computational complexity. In the quantum setting, we use the representation theory of the symmetric group to find variants of this model which are intermediate between BPP and DQC1 (the class of problems solvable with one clean qubit) and between DQC1 and BQP. Furthermore, we consider a restricted version of this model based on an exactly solvable scattering problem of particles moving on a line. Despite the simplicity of this model from the perspective of mathematical physics, we show that if we allow intermediate destructive measurements and specific input states, then the model cannot be efficiently simulated classically up to multiplicative error unless the polynomial hierarchy collapses. Finally, we define a classical version of this model in which one can probabilistically permute balls. We find this yields a complexity class which is intermediate between L and BPP, and that a nondeterministic version of this model is NP-complete. @InProceedings{STOC17p317, author = {Scott Aaronson and Adam Bouland and Greg Kuperberg and Saeed Mehraban}, title = {The Computational Complexity of Ball Permutations}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {317--327}, doi = {}, year = {2017}, } 

Kupferman, Orna 
STOC '17: "Examining Classical Graph-Theory ..."
Examining Classical Graph-Theory Problems from the Viewpoint of Formal-Verification Methods (Invited Talk)
Orna Kupferman (Hebrew University of Jerusalem, Israel) The talk surveys a series of works that lift the rich semantics and structure of graphs, and the experience of the formal-verification community in reasoning about them, to classical graph-theoretical problems. @InProceedings{STOC17p6, author = {Orna Kupferman}, title = {Examining Classical Graph-Theory Problems from the Viewpoint of Formal-Verification Methods (Invited Talk)}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {6--6}, doi = {}, year = {2017}, } 

Kyng, Rasmus 
STOC '17: "Sampling Random Spanning Trees ..."
Sampling Random Spanning Trees Faster Than Matrix Multiplication
David Durfee, Rasmus Kyng, John Peebles, Anup B. Rao, and Sushant Sachdeva (Georgia Institute of Technology, USA; Yale University, USA; Massachusetts Institute of Technology, USA; Google, USA) We present an algorithm that, with high probability, generates a random spanning tree from an edge-weighted undirected graph in Õ(n^{5/3} m^{1/3}) time. The tree is sampled from a distribution where the probability of each tree is proportional to the product of its edge weights. This improves upon the previous best algorithm due to Colbourn et al. that runs in matrix multiplication time, O(n^{ω}). For the special case of unweighted graphs, this improves upon the best previously known running time of Õ(min{n^{ω}, m√n, m^{4/3}}) for m ≫ n^{7/4} (Colbourn et al. ’96, Kelner-Madry ’09, Madry et al. ’15). The effective resistance metric is essential to our algorithm, as in the work of Madry et al., but we eschew determinant-based and random-walk-based techniques used by previous algorithms. Instead, our algorithm is based on Gaussian elimination, and the fact that effective resistance is preserved in the graph resulting from eliminating a subset of vertices (called a Schur complement). As part of our algorithm, we show how to compute approximate effective resistances for a set S of vertex pairs via approximate Schur complements in Õ(m + (n + |S|)ε^{−2}) time, without using the Johnson-Lindenstrauss lemma, which requires Õ(min{(m + |S|)ε^{−2}, m + nε^{−4} + |S|ε^{−2}}) time. We combine this approximation procedure with an error correction procedure for handling edges where our estimate isn’t sufficiently accurate. @InProceedings{STOC17p730, author = {David Durfee and Rasmus Kyng and John Peebles and Anup B. Rao and Sushant Sachdeva}, title = {Sampling Random Spanning Trees Faster Than Matrix Multiplication}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {730--742}, doi = {}, year = {2017}, } 
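The fact the algorithm rests on, that eliminating vertices of the graph Laplacian (taking a Schur complement) preserves effective resistances among the remaining vertices, can be checked directly on small graphs. A rough pure-Python sketch (dense, O(n^3), purely illustrative; the paper's approximate Schur complements are far more sophisticated):

```python
def effective_resistance(n, edges, u, v):
    """Effective resistance between u and v via Gaussian elimination on
    the Laplacian: eliminating a vertex forms the Schur complement,
    which preserves effective resistances among remaining vertices.
    After eliminating all vertices except u and v, the remaining
    off-diagonal entry -w gives R_eff = 1/w.  Assumes a connected graph."""
    # Build the dense weighted Laplacian.
    L = [[0.0] * n for _ in range(n)]
    for a, b, w in edges:
        L[a][a] += w; L[b][b] += w
        L[a][b] -= w; L[b][a] -= w
    for k in (i for i in range(n) if i not in (u, v)):
        pivot = L[k][k]
        for i in range(n):
            if i == k or L[i][k] == 0:
                continue
            factor = L[i][k] / pivot
            for j in range(n):
                L[i][j] -= factor * L[k][j]
        for j in range(n):          # zero out eliminated row/column
            L[k][j] = L[j][k] = 0.0
    return -1.0 / L[u][v]
```

On a unit-weight path 0-1-2 this returns 2 (resistors in series); on a unit triangle it returns 2/3 (series/parallel), matching the standard electrical computations.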

Łącki, Jakub 
STOC '17: "Decremental Single-Source ..."
Decremental Single-Source Reachability in Planar Digraphs
Giuseppe F. Italiano, Adam Karczmarz, Jakub Łącki, and Piotr Sankowski (University of Rome Tor Vergata, Italy; University of Warsaw, Poland; Google Research, USA) In this paper we show a new algorithm for the decremental single-source reachability problem in directed planar graphs. It processes any sequence of edge deletions in O(n log^{2} n log log n) total time and explicitly maintains the set of vertices reachable from a fixed source vertex. Hence, if all edges are eventually deleted, the amortized time of processing each edge deletion is only O(log^{2} n log log n), which improves upon a previously known O(√n) solution. We also show an algorithm for decremental maintenance of strongly connected components in directed planar graphs with the same total update time. These results constitute the first almost optimal (up to polylogarithmic factors) algorithms for both problems. To the best of our knowledge, these are the first dynamic algorithms with polylogarithmic update times on general directed planar graphs for nontrivial reachability-type problems, for which only polynomial bounds are known in general graphs. @InProceedings{STOC17p1108, author = {Giuseppe F. Italiano and Adam Karczmarz and Jakub Łącki and Piotr Sankowski}, title = {Decremental Single-Source Reachability in Planar Digraphs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1108--1121}, doi = {}, year = {2017}, } 

Laekhanukit, Bundit 
STOC '17: "Surviving in Directed Graphs: ..."
Surviving in Directed Graphs: A Quasi-Polynomial-Time Polylogarithmic Approximation for Two-Connected Directed Steiner Tree
Fabrizio Grandoni and Bundit Laekhanukit (IDSIA, Switzerland; University of Lugano, Switzerland; Weizmann Institute of Science, Israel) Real-world networks are often prone to failures. A reliable network needs to cope with this situation and must provide a backup communication channel. This motivates the study of survivable network design, which has been a focus of research for a few decades. To date, survivable network design problems on undirected graphs are well understood. For example, there is a 2-approximation in the case of edge failures [Jain, FOCS’98/Combinatorica’01]. The problems on directed graphs, in contrast, have seen very little progress. Most techniques for the undirected case, like primal-dual and iterative rounding methods, do not seem to extend to the directed case. Almost no nontrivial approximation algorithm is known even for a simple case where we wish to design a network that tolerates a single failure. In this paper, we study a survivable network design problem on directed graphs, 2-Connected Directed Steiner Tree (2-DST): given an n-vertex weighted directed graph, a root r, and a set of h terminals S, find a min-cost subgraph H that has two edge/vertex-disjoint paths from r to any t ∈ S. 2-DST is a natural generalization of the classical Directed Steiner Tree problem (DST), where we have the additional requirement that the network must tolerate one failure. No nontrivial approximation is known for 2-DST. This was left as an open problem by Feldman et al. [SODA’09; JCSS] and has since been studied by Cheriyan et al. [SODA’12; TALG] and Laekhanukit [SODA’14]. However, no positive result was known except for the special case of a D-shallow instance [Laekhanukit, ICALP’16]. We present an O(D^{3} log D · h^{2/D} · log n)-approximation algorithm for 2-DST that runs in time n^{O(D)}, for any D ∈ [log_{2} h]. This implies a polynomial-time O(h^{є} log n)-approximation for any constant є > 0, and a polylogarithmic approximation running in quasi-polynomial time. 
We remark that this is essentially the best known even for the classical DST, and the latter problem is O(log^{2−є} n)-hard to approximate [Halperin and Krauthgamer, STOC’03]. As a byproduct, we obtain an algorithm with the same approximation guarantee for the 2-Connected Directed Steiner Subgraph problem, where the goal is to find a min-cost subgraph such that every pair of terminals is 2-edge/vertex-connected. Our approximation algorithm is based on a careful combination of several techniques. In more detail, we decompose an optimal solution into two (possibly not edge-disjoint) divergent trees that induce two edge-disjoint paths from the root to any given terminal. These divergent trees are then embedded into a shallow tree by means of Zelikovsky’s height-reduction theorem. On the latter tree we solve a 2-Connected Group Steiner Tree problem and then map this solution back to the original graph. Crucially, our tree embedding is achieved via a probabilistic mapping guided by an LP: this is the main technical novelty of our approach, and might be useful for future work. @InProceedings{STOC17p420, author = {Fabrizio Grandoni and Bundit Laekhanukit}, title = {Surviving in Directed Graphs: A Quasi-Polynomial-Time Polylogarithmic Approximation for Two-Connected Directed Steiner Tree}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {420--428}, doi = {}, year = {2017}, } 

Larsen, Kasper Green 
STOC '17: "DecreaseKeys Are Expensive ..."
DecreaseKeys Are Expensive for External Memory Priority Queues
Kasper Eenberg, Kasper Green Larsen, and Huacheng Yu (Aarhus University, Denmark; Stanford University, USA) One of the biggest open problems in external memory data structures is the priority queue problem with DecreaseKey operations. If only Insert and ExtractMin operations need to be supported, one can design a comparison-based priority queue performing O((N/B)lg_{M/B} N) I/Os over a sequence of N operations, where B is the disk block size in number of words and M is the main memory size in number of words. This matches the lower bound for comparison-based sorting and is hence optimal for comparison-based priority queues. However, if we also need to support DecreaseKeys, the performance of the best known priority queue is only O((N/B) lg_{2} N) I/Os. The big open question is whether a degradation in performance really is necessary. We answer this question affirmatively by proving a lower bound of Ω((N/B) lg_{lg N} B) I/Os for processing a sequence of N intermixed Insert, ExtractMin and DecreaseKey operations. Our lower bound is proved in the cell probe model and thus holds also for non-comparison-based priority queues. @InProceedings{STOC17p1081, author = {Kasper Eenberg and Kasper Green Larsen and Huacheng Yu}, title = {DecreaseKeys Are Expensive for External Memory Priority Queues}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1081--1093}, doi = {}, year = {2017}, } 
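In internal memory, the standard way to sidestep an expensive DecreaseKey is lazy invalidation on top of a binary heap, as sketched below; the paper's lower bound says no comparably cheap trick can exist for external-memory priority queues. Class and method names are ours:

```python
import heapq

class DecreaseKeyPQ:
    """RAM-model folklore: DecreaseKey pushes a fresh entry and
    ExtractMin skips stale ones.  Assumes each item is inserted once
    and keys only ever decrease."""
    def __init__(self):
        self.heap = []
        self.key = {}                       # current best key per item

    def insert(self, item, key):
        self.key[item] = key
        heapq.heappush(self.heap, (key, item))

    def decrease_key(self, item, key):
        if key < self.key[item]:
            self.key[item] = key
            heapq.heappush(self.heap, (key, item))

    def extract_min(self):
        while self.heap:
            key, item = heapq.heappop(self.heap)
            if self.key.get(item) == key:   # skip stale (superseded) entries
                del self.key[item]
                return item, key
        raise IndexError("extract_min from empty queue")
```

Each operation costs O(log N) comparisons; the point of the paper is that the analogous I/O cost in external memory must be strictly worse than the sorting bound.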

Lee, Yin Tat 
STOC '17: "Kernel-Based Methods for Bandit ..."
Kernel-Based Methods for Bandit Convex Optimization
Sébastien Bubeck, Yin Tat Lee, and Ronen Eldan (Microsoft Research, USA; Weizmann Institute of Science, Israel) We consider the adversarial convex bandit problem and we build the first poly(T)-time algorithm with poly(n)√T-regret for this problem. To do so we introduce three new ideas in the derivative-free optimization literature: (i) kernel methods, (ii) a generalization of Bernoulli convolutions, and (iii) a new annealing schedule for exponential weights (with increasing learning rate). The basic version of our algorithm achieves Õ(n^{9.5}√T)-regret, and we show that a simple variant of this algorithm can be run in poly(n log(T)) time per step at the cost of an additional poly(n) T^{o(1)} factor in the regret. These results improve upon the Õ(n^{11}√T)-regret and exp(poly(T))-time result of the first two authors, and the log(T)^{poly(n)}√T-regret and log(T)^{poly(n)}-time result of Hazan and Li. Furthermore we conjecture that another variant of the algorithm could achieve Õ(n^{1.5}√T)-regret, and moreover that this regret is unimprovable (the current best lower bound being Ω(n√T), achieved with linear functions). For the simpler situation of zeroth-order stochastic convex optimization this corresponds to the conjecture that the optimal query complexity is of order n^{3}/є^{2}. @InProceedings{STOC17p72, author = {Sébastien Bubeck and Yin Tat Lee and Ronen Eldan}, title = {Kernel-Based Methods for Bandit Convex Optimization}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {72--85}, doi = {}, year = {2017}, } STOC '17: "An SDP-Based Algorithm for ..." 
An SDP-Based Algorithm for Linear-Sized Spectral Sparsification Yin Tat Lee and He Sun (Microsoft Research, USA; University of Bristol, UK) For any undirected and weighted graph G=(V,E,w) with n vertices and m edges, we call a sparse subgraph H of G, with proper reweighting of the edges, a (1+ε)-spectral sparsifier if (1−ε)x^{T}L_{G}x ≤ x^{T}L_{H}x ≤ (1+ε)x^{T}L_{G}x holds for any x∈ℝ^{n}, where L_{G} and L_{H} are the respective Laplacian matrices of G and H. Noticing that Ω(m) time is needed for any algorithm to construct a spectral sparsifier and that a spectral sparsifier of G requires Ω(n) edges, a natural question is to investigate, for any constant ε, whether a (1+ε)-spectral sparsifier of G with O(n) edges can be constructed in Õ(m) time, where the Õ notation suppresses polylogarithmic factors. All previous constructions on spectral sparsification require either a superlinear number of edges or m^{1+Ω(1)} time. In this work we answer this question affirmatively by presenting an algorithm that, for any undirected graph G and ε>0, outputs a (1+ε)-spectral sparsifier of G with O(n/ε^{2}) edges in Õ(m/ε^{O(1)}) time. Our algorithm is based on three novel techniques: (1) a new potential function which is much easier to compute yet has similar guarantees as the potential functions used in previous references; (2) an efficient reduction from a two-sided spectral sparsifier to a one-sided spectral sparsifier; (3) constructing a one-sided spectral sparsifier by a semidefinite program. @InProceedings{STOC17p678, author = {Yin Tat Lee and He Sun}, title = {An SDP-Based Algorithm for Linear-Sized Spectral Sparsification}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {678--687}, doi = {}, year = {2017}, } STOC '17: "Subquadratic Submodular Function ..." 
Subquadratic Submodular Function Minimization Deeparnab Chakrabarty, Yin Tat Lee, Aaron Sidford, and Sam Chiu-wai Wong (Dartmouth College, USA; Microsoft Research, USA; Stanford University, USA; University of California at Berkeley, USA) Submodular function minimization (SFM) is a fundamental discrete optimization problem which generalizes many well-known problems, has applications in various fields, and can be solved in polynomial time. Owing to applications in computer vision and machine learning, fast SFM algorithms are highly desirable. The current fastest algorithms [Lee, Sidford, Wong, 2015] run in O(n^{2} log nM · EO + n^{3} log^{O(1)} nM) time and O(n^{3} log^{2} n · EO + n^{4} log^{O(1)} n) time respectively, where M is the largest absolute value of the function (assuming the range is integers) and EO is the time taken to evaluate the function on any set. Although the best known lower bound on the query complexity is only Ω(n) [Harvey, 2008], the above running times are at least quadratic in n. The main contributions of this paper are subquadratic SFM algorithms. For integer-valued submodular functions, we give an SFM algorithm which runs in O(nM^{3} log n · EO) time, giving the first nearly linear time algorithm in any known regime. For real-valued submodular functions with range in [−1,1], we give an algorithm which in Õ(n^{5/3} · EO/ε^{2}) time returns an ε-additive approximate solution. At the heart of it, our algorithms are projected stochastic subgradient descent methods on the Lovász extension of submodular functions, where we crucially exploit submodularity and data structures to obtain fast, i.e. sublinear time, subgradient updates. 
The latter is crucial for beating the n^{2} bound – we show that algorithms which access only subgradients of the Lovász extension, and these include the empirically fast Fujishige-Wolfe heuristic [Fujishige, 1980; Wolfe, 1976] @InProceedings{STOC17p1220, author = {Deeparnab Chakrabarty and Yin Tat Lee and Aaron Sidford and Sam Chiu-wai Wong}, title = {Subquadratic Submodular Function Minimization}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1220--1231}, doi = {}, year = {2017}, } STOC '17: "Geodesic Walks in Polytopes ..." Geodesic Walks in Polytopes Yin Tat Lee and Santosh S. Vempala (Microsoft Research, USA; University of Washington, USA; Georgia Institute of Technology, USA) We introduce the geodesic walk for sampling Riemannian manifolds and apply it to the problem of generating uniform random points from the interior of polytopes in ℝ^{n} specified by m inequalities. The walk is a discrete-time simulation of a stochastic differential equation (SDE) on the Riemannian manifold equipped with the metric induced by the Hessian of a convex function; each step is the solution of an ordinary differential equation (ODE). The resulting sampling algorithm for polytopes mixes in O^{*}(mn^{3/4}) steps. This is the first walk that breaks the quadratic barrier for mixing in high dimension, improving on the previous best bound of O^{*}(mn) by Kannan and Narayanan for the Dikin walk. We also show that each step of the geodesic walk (solving an ODE) can be implemented efficiently, thus improving the time complexity for sampling polytopes. Our analysis of the geodesic walk for general Hessian manifolds does not assume positive curvature and might be of independent interest. @InProceedings{STOC17p927, author = {Yin Tat Lee and Santosh S. Vempala}, title = {Geodesic Walks in Polytopes}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {927--940}, doi = {}, year = {2017}, } 
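The subgradient oracle at the heart of the subquadratic SFM entry above is simple to state: sort the coordinates and take marginal gains along Edmonds' greedy ordering. A naive O(n log n + n·EO) sketch, with an illustrative set function (not the paper's data-structure-accelerated version):

```python
def lovasz_subgradient(f, x):
    """Subgradient of the Lovász extension of set function f at x.

    Sort coordinates of x in decreasing order; the entry for the i-th
    largest coordinate is the marginal value of adding that element to
    the prefix of larger coordinates (Edmonds' greedy ordering).
    Assumes f(empty set) = 0."""
    n = len(x)
    order = sorted(range(n), key=lambda i: -x[i])
    g = [0.0] * n
    prefix = set()
    prev = f(frozenset())
    for i in order:
        prefix.add(i)
        cur = f(frozenset(prefix))
        g[i] = cur - prev       # marginal gain of element i
        prev = cur
    return g
```

For x in [0,1]^n the inner product of this subgradient with x equals the Lovász extension value, which can be cross-checked against the level-set integral definition.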

Li, Wei 
STOC '17: "Deciding Parity Games in Quasipolynomial ..."
Deciding Parity Games in Quasipolynomial Time
Cristian S. Calude, Sanjay Jain, Bakhadyr Khoussainov, Wei Li, and Frank Stephan (University of Auckland, New Zealand; National University of Singapore, Singapore) It is shown that the parity game can be solved in quasipolynomial time. The parameterised parity game – with n nodes and m distinct values (aka colours or priorities) – is proven to be in the class of fixed-parameter tractable (FPT) problems when parameterised over m. Both results improve known bounds, from runtime n^{O(√n)} to O(n^{log(m)+6}) and from an XP-algorithm with runtime O(n^{Θ(m)}) for fixed parameter m to an FPT-algorithm with runtime O(n^{5}) + g(m), for some function g depending on m only. As an application it is proven that coloured Muller games with n nodes and m colours can be decided in time O((m^{m} · n)^{5}); it is also shown that this bound cannot be improved to O((2^{m} · n)^{c}), for any c, unless FPT = W[1]. @InProceedings{STOC17p252, author = {Cristian S. Calude and Sanjay Jain and Bakhadyr Khoussainov and Wei Li and Frank Stephan}, title = {Deciding Parity Games in Quasipolynomial Time}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {252--263}, doi = {}, year = {2017}, } 
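For contrast with the quasipolynomial bound, the classical recursive algorithm of Zielonka (exponential in the worst case) fits in a few dozen lines. A sketch assuming a total game where every node has an outgoing edge; the representation and names are ours, and this is not the paper's algorithm:

```python
def attractor(nodes, owner, succ, target, player):
    """Nodes from which `player` can force the play into `target`."""
    attr = set(target)
    changed = True
    while changed:
        changed = False
        for v in nodes:
            if v in attr:
                continue
            outs = succ[v]
            if (owner[v] == player and any(w in attr for w in outs)) or \
               (owner[v] != player and all(w in attr for w in outs)):
                attr.add(v)
                changed = True
    return attr

def zielonka(nodes, owner, priority, succ):
    """Return (W0, W1): the winning regions of players 0 and 1.
    Player 0 wins iff the highest priority seen infinitely often is even."""
    if not nodes:
        return set(), set()
    p = max(priority[v] for v in nodes)
    player = p % 2                        # who likes the top priority
    top = {v for v in nodes if priority[v] == p}
    A = attractor(nodes, owner, succ, top, player)
    sub = nodes - A
    W0, W1 = zielonka(sub, owner, priority,
                      {v: [w for w in succ[v] if w in sub] for v in sub})
    Wp, Wo = (W0, W1) if player == 0 else (W1, W0)
    if not Wo:                            # opponent wins nothing: player takes all
        return (set(nodes), set()) if player == 0 else (set(), set(nodes))
    B = attractor(nodes, owner, succ, Wo, 1 - player)
    rest = nodes - B
    W0r, W1r = zielonka(rest, owner, priority,
                        {v: [w for w in succ[v] if w in rest] for v in rest})
    return (W0r, W1r | B) if player == 0 else (W0r | B, W1r)
```

On a two-node game of self-loops with priorities 2 and 1, player 0 wins the even-priority node and player 1 the odd one, as expected.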

Li, Xin 
STOC '17: "Non-malleable Codes and Extractors ..."
Non-malleable Codes and Extractors for Small-Depth Circuits, and Affine Functions
Eshan Chattopadhyay and Xin Li (IAS, USA; Johns Hopkins University, USA) Non-malleable codes were introduced by Dziembowski, Pietrzak, and Wichs as an elegant relaxation of error-correcting codes, where the motivation is to handle more general forms of tampering while still providing meaningful guarantees. This has led to many elegant constructions and applications in cryptography. However, most works so far only studied tampering in the split-state model where different parts of the codeword are tampered independently, and thus do not apply to many other natural classes of tampering functions. The only exceptions are the work of Agrawal et al., which studied non-malleable codes against bit permutation composed with bitwise tampering, and the works of Faust et al. and Ball et al., which studied non-malleable codes against local functions. However, in both cases each tampered bit only depends on a subset of input bits. In this work, we study the problem of constructing non-malleable codes against more general tampering functions that act on the entire codeword. We give the first efficient constructions of non-malleable codes against AC^{0} tampering functions and affine tampering functions. These are the first explicit non-malleable codes against tampering functions where each tampered bit can depend on all input bits. We also give efficient non-malleable codes against t-local functions for t=o(√n), where a t-local function has the property that any output bit depends on at most t input bits. In the case of deterministic decoders, this improves upon the results of Ball et al., which can handle t ≤ n^{1/4}. All our results on non-malleable codes are obtained by using the connection between non-malleable codes and seedless non-malleable extractors discovered by Cheraghchi and Guruswami. Therefore, we also give the first efficient constructions of seedless non-malleable extractors against AC^{0} tampering functions, t-local tampering functions for t=o(√n), and affine tampering functions. 
To derive our results on non-malleable codes, we design efficient algorithms to almost uniformly sample from the pre-image of any given output of our non-malleable extractor. @InProceedings{STOC17p1171, author = {Eshan Chattopadhyay and Xin Li}, title = {Non-malleable Codes and Extractors for Small-Depth Circuits, and Affine Functions}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1171--1184}, doi = {}, year = {2017}, } STOC '17: "Improved Non-malleable Extractors, ..." Improved Non-malleable Extractors, Non-malleable Codes and Independent Source Extractors Xin Li (Johns Hopkins University, USA) In this paper we give improved constructions of several central objects in the literature of randomness extraction and tamper-resilient cryptography. Our main results are: (1) An explicit seeded non-malleable extractor with error є and seed length d = O(log n) + O(log(1/є) log log(1/є)) that supports min-entropy k = Ω(d) and outputs Ω(k) bits. Combined with the protocol by Dodis and Wichs, this gives a two-round privacy amplification protocol with optimal entropy loss in the presence of an active adversary, for all security parameters up to Ω(k/log k), where k is the min-entropy of the shared weak random source. Previously, the best known seeded non-malleable extractors require seed length and min-entropy O(log n) + log(1/є)·2^{O(√log log(1/є))}, and only give two-round privacy amplification protocols with optimal entropy loss for security parameters up to k/2^{O(√log k)}. (2) An explicit non-malleable two-source extractor for min-entropy k ≥ (1−γ)n, for some constant γ > 0, that outputs Ω(k) bits with error 2^{−Ω(n/log n)}. We further show that we can efficiently and uniformly sample from the pre-image of any output of the extractor. Combined with the connection found by Cheraghchi and Guruswami, this gives a non-malleable code in the two-split-state model with relative rate Ω(1/log n). This exponentially improves previous constructions, all of which only achieve rate n^{−Ω(1)}. 
(3) Combined with the techniques of Ben-Aroya et al., our non-malleable extractors give a two-source extractor for min-entropy O(log n log log n), which also implies a K-Ramsey graph on N vertices with K = (log N)^{O(log log log N)}. Previously, the best known two-source extractor of Ben-Aroya et al. requires min-entropy log n · 2^{O(√log log n)}, which gives a Ramsey graph with K = (log N)^{2^{O(√log log log N)}}. We further show a way to reduce the problem of constructing seeded non-malleable extractors to the problem of constructing non-malleable independent-source extractors. Using the non-malleable 10-source extractor with optimal error by Chattopadhyay and Zuckerman, we give a 10-source extractor for min-entropy O(log n). Previously, the best known extractor for such min-entropy, by Cohen and Schulman, requires O(log log n) sources. Independent of our work, Cohen obtained similar results to (1) and the two-source extractor, except that the dependence on є is log(1/є)·poly(log log(1/є)) and the two-source extractor requires min-entropy log n · poly(log log n). @InProceedings{STOC17p1144, author = {Xin Li}, title = {Improved Non-malleable Extractors, Non-malleable Codes and Independent Source Extractors}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1144--1156}, doi = {}, year = {2017}, } 

Liu, Jingcheng 
STOC '17: "Uniform Sampling through the ..."
Uniform Sampling through the Lovász Local Lemma
Heng Guo, Mark Jerrum, and Jingcheng Liu (Queen Mary University of London, UK; University of California at Berkeley, USA) We propose a new algorithmic framework, called “partial rejection sampling”, to draw samples exactly from a product distribution, conditioned on none of a number of bad events occurring. Our framework builds (perhaps surprising) new connections between the variable framework of the Lovász Local Lemma and some classical sampling algorithms, such as the “cycle-popping” algorithm for rooted spanning trees by Wilson. Among other applications, we discover new algorithms to sample satisfying assignments of k-CNF formulas with bounded variable occurrences. @InProceedings{STOC17p342, author = {Heng Guo and Mark Jerrum and Jingcheng Liu}, title = {Uniform Sampling through the Lovász Local Lemma}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {342--355}, doi = {}, year = {2017}, } 
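Wilson's cycle-popping algorithm, one of the classical algorithms the framework reinterprets, can be implemented with loop-erased random walks in a dozen lines. An unweighted sketch (variable names are ours):

```python
import random

def wilson_ust(adj, root=0, seed=0):
    """Sample a uniform spanning tree via Wilson's cycle-popping
    algorithm: repeated random walks to the current tree, with loops
    erased by remembering only the last exit from each vertex.
    adj maps vertex -> list of neighbours; the graph must be connected."""
    rng = random.Random(seed)
    in_tree = {root}
    parent = {}
    for start in adj:
        if start in in_tree:
            continue
        # Walk until hitting the tree; overwriting nxt[u] pops cycles.
        u, nxt = start, {}
        while u not in in_tree:
            nxt[u] = rng.choice(adj[u])
            u = nxt[u]
        # Retrace the loop-erased path and attach it to the tree.
        u = start
        while u not in in_tree:
            parent[u] = nxt[u]
            in_tree.add(u)
            u = nxt[u]
    return parent
```

The partial-rejection-sampling view of the paper is that overwriting `nxt[u]` resamples exactly the variables involved in a violated "no cycle through u" event, which is why the output distribution is exactly uniform.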

Lokshtanov, Daniel 
STOC '17: "Lossy Kernelization ..."
Lossy Kernelization
Daniel Lokshtanov, Fahad Panolan, M. S. Ramanujan, and Saket Saurabh (University of Bergen, Norway; Vienna University of Technology, Austria; Institute of Mathematical Sciences, India) In this paper we propose a new framework for analyzing the performance of preprocessing algorithms. Our framework builds on the notion of kernelization from parameterized complexity. However, as opposed to the original notion of kernelization, our definitions combine well with approximation algorithms and heuristics. The key new definition is that of a polynomial-size α-approximate kernel. Loosely speaking, a polynomial-size α-approximate kernel is a polynomial-time preprocessing algorithm that takes as input an instance (I, k) of a parameterized problem, and outputs another instance (I′, k′) of the same problem, such that |I′| + k′ ≤ k^{O(1)}. Additionally, for every c ≥ 1, a c-approximate solution s′ to the preprocessed instance (I′, k′) can be turned in polynomial time into a (c · α)-approximate solution s to the original instance (I, k). Amongst our main technical contributions are α-approximate kernels of polynomial size for three problems, namely Connected Vertex Cover, Disjoint Cycle Packing and Disjoint Factors. These problems are known not to admit any polynomial-size kernels unless NP ⊆ coNP/Poly. Our approximate kernels simultaneously beat both the lower bounds on the (normal) kernel size and the hardness-of-approximation lower bounds for all three problems. On the negative side, we prove that Longest Path parameterized by the length of the path and Set Cover parameterized by the universe size do not admit even an α-approximate kernel of polynomial size, for any α ≥ 1, unless NP ⊆ coNP/Poly. In order to prove this lower bound we need to combine in a nontrivial way the techniques used for showing kernelization lower bounds with the methods for showing hardness of approximation. @InProceedings{STOC17p224, author = {Daniel Lokshtanov and Fahad Panolan and M. S. Ramanujan and Saket Saurabh}, title = {Lossy Kernelization}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {224--237}, doi = {}, year = {2017}, } 

Lu, Zhenjian 
STOC '17: "A Polynomial Restriction Lemma ..."
A Polynomial Restriction Lemma with Applications
Valentine Kabanets, Daniel M. Kane, and Zhenjian Lu (Simon Fraser University, Canada; University of California at San Diego, USA) A polynomial threshold function (PTF) of degree d is a boolean function of the form f = sgn(p), where p is a degree-d polynomial and sgn is the sign function. The main result of the paper is an almost optimal bound on the probability that a random restriction of a PTF is not close to a constant function, where a boolean function g is called δ-close to constant if, for some v ∈ {1,−1}, we have g(x) = v for all but at most a δ fraction of inputs. We show for every PTF f of degree d ≥ 1, and parameters 0 < δ, r ≤ 1/16, that Pr_{ρ∼ R_r} [f_{ρ} is not δ-close to constant] ≤ √r · (log r^{−1} · log δ^{−1})^{O(d^2)}, where ρ ∼ R_r is a random restriction leaving each variable, independently, free with probability r, and otherwise assigning it 1 or −1 uniformly at random. In fact, we show a more general result for random block restrictions: given an arbitrary partitioning of input variables into m blocks, a random block restriction picks a uniformly random block ℓ ∈ [m] and assigns 1 or −1, uniformly at random, to all variables outside the chosen block ℓ. We prove the Block Restriction Lemma saying that a PTF f of degree d becomes δ-close to constant when hit with a random block restriction, except with probability at most m^{−1/2} · (log m · log δ^{−1})^{O(d^2)}. As an application of our Restriction Lemma, we prove lower bounds against constant-depth circuits with PTF gates of any degree 1 ≤ d ≪ √(log n/log log n), generalizing the recent bounds against constant-depth circuits with linear threshold gates (LTF gates) proved by Kane and Williams (STOC, 2016) and Chen, Santhanam, and Srinivasan (CCC, 2016). In particular, we show that there is an n-variate boolean function F_n ∈ P such that every depth-2 circuit with PTF gates of degree d ≥ 1 that computes F_n must have at least n^{3/2+1/d} · (log n)^{−O(d^2)} wires. 
For constant depths greater than 2, we also show average-case lower bounds for such circuits with a superlinear number of wires. These are the first superlinear bounds on the number of wires for circuits with PTF gates. We also give short proofs of the optimal-exponent average sensitivity bound for degree-d PTFs due to Kane (Computational Complexity, 2014), and the Littlewood–Offord type anti-concentration bound for degree-d multilinear polynomials due to Meka, Nguyen, and Vu (Theory of Computing, 2016). Finally, we give derandomized versions of our Block Restriction Lemma and Littlewood–Offord type anti-concentration bounds, using a pseudorandom generator for PTFs due to Meka and Zuckerman (SICOMP, 2013). @InProceedings{STOC17p615, author = {Valentine Kabanets and Daniel M. Kane and Zhenjian Lu}, title = {A Polynomial Restriction Lemma with Applications}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {615--628}, doi = {}, year = {2017}, } 
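The restriction process in the lemma above is easy to simulate numerically. Below is a small, hypothetical Python sketch (the degree-2 polynomial and all parameters are illustrative, not taken from the paper) that estimates by Monte Carlo how often a random r-restriction of a PTF fails to be δ-close to constant:

```python
import itertools, random

def ptf(coeffs, x):
    # degree-2 polynomial threshold function over {-1, 1}^n
    val = sum(c * x[i] * x[j] for (i, j), c in coeffs.items())
    return 1 if val >= 0 else -1

def is_delta_close_to_constant(f, free_vars, assignment, delta):
    # enumerate all settings of the free variables of the restricted function
    vals = []
    for bits in itertools.product([-1, 1], repeat=len(free_vars)):
        x = dict(assignment)
        for v, b in zip(free_vars, bits):
            x[v] = b
        vals.append(f(x))
    frac_pos = vals.count(1) / len(vals)
    return frac_pos >= 1 - delta or frac_pos <= delta

def restriction_failure_rate(f, n, r, delta, trials=2000, seed=0):
    # Monte Carlo estimate of Pr[restricted f is not delta-close to constant]
    rng = random.Random(seed)
    fails = 0
    for _ in range(trials):
        free, fixed = [], {}
        for i in range(n):
            if rng.random() < r:
                free.append(i)            # variable left free with prob. r
            else:
                fixed[i] = rng.choice([-1, 1])
        if not is_delta_close_to_constant(f, free, fixed, delta):
            fails += 1
    return fails / trials

n = 8
coeffs = {(0, 1): 1.0, (2, 3): 1.0, (4, 5): -1.0}  # hypothetical coefficients
f = lambda x: ptf(coeffs, x)
print(restriction_failure_rate(f, n, r=1/16, delta=1/16))
```

The lemma predicts that the failure rate shrinks like √r up to polylogarithmic factors; the sketch only illustrates the quantity being bounded.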

Lucier, Brendan 
STOC '17: "Beating 1-1/e for Ordered ..."
Beating 1-1/e for Ordered Prophets
Melika Abolhassani, Soheil Ehsani, Hossein Esfandiari, MohammadTaghi HajiAghayi, Robert Kleinberg, and Brendan Lucier (University of Maryland at College Park, USA; Cornell University, USA; Microsoft Research, USA) Hill and Kertz studied the prophet inequality on i.i.d. distributions [The Annals of Probability 1982]. They proved a theoretical bound of 1 − 1/e on the approximation factor of their algorithm. They conjectured that the best approximation factor for arbitrarily large n is 1/(1+1/e) ≈ 0.731. This conjecture had remained open for over 30 years prior to this paper. In this paper we present a threshold-based algorithm for the prophet inequality with n i.i.d. distributions. Using a nontrivial and novel approach we show that our algorithm is a 0.738-approximation algorithm. By beating the bound of 1/(1+1/e), this refutes the conjecture of Hill and Kertz. Moreover, we generalize our results to nonuniform distributions and discuss their applications in mechanism design. @InProceedings{STOC17p61, author = {Melika Abolhassani and Soheil Ehsani and Hossein Esfandiari and MohammadTaghi HajiAghayi and Robert Kleinberg and Brendan Lucier}, title = {Beating 1-1/e for Ordered Prophets}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {61--71}, doi = {}, year = {2017}, } 
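For intuition about threshold rules, the gambler can be simulated against the prophet. The sketch below is only a single-threshold toy for Uniform[0,1] draws, not the paper's 0.738-approximation algorithm (which uses a more refined sequence of thresholds):

```python
import random

def threshold_rule(values, tau):
    # accept the first value above the threshold; if none exceeds it, take the last
    for v in values:
        if v > tau:
            return v
    return values[-1]

def simulate(n, trials=20000, seed=0):
    rng = random.Random(seed)
    # tau = (1/2)^(1/n) makes P(max of n Uniform[0,1] draws <= tau) = 1/2
    tau = 0.5 ** (1.0 / n)
    alg, prophet = 0.0, 0.0
    for _ in range(trials):
        xs = [rng.random() for _ in range(n)]
        alg += threshold_rule(xs, tau)
        prophet += max(xs)     # the prophet always takes the maximum
    return alg / prophet       # empirical approximation ratio

print(simulate(10))
```

Even this crude rule beats the general-distribution guarantee of 1/2 on i.i.d. uniforms, illustrating why the i.i.d. case admits better factors.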

Ma, Tengyu 
STOC '17: "Provable Learning of Noisy-or ..."
Provable Learning of Noisy-or Networks
Sanjeev Arora, Rong Ge, Tengyu Ma, and Andrej Risteski (Princeton University, USA; Duke University, USA) Many machine learning applications use latent variable models to explain structure in data, whereby visible variables (= coordinates of the given datapoint) are explained as a probabilistic function of some hidden variables. Learning the model (that is, the mapping from hidden variables to visible ones and vice versa) is NP-hard even in very simple settings. In recent years, provably efficient algorithms were nevertheless developed for models with linear structure: topic models, mixture models, hidden Markov models, etc. These algorithms use matrix or tensor decomposition, and make some reasonable assumptions about the parameters of the underlying model. But matrix or tensor decomposition seems of little use when the latent variable model has nonlinearities. The current paper shows how to make progress: tensor decomposition is applied for learning the single-layer noisy-OR network, which is a textbook example of a Bayes net, and is used for example in the classic QMR-DT software for diagnosing which disease(s) a patient may have by observing the symptoms he/she exhibits. The technical novelty here, which should be useful in other settings in the future, is an analysis of tensor decomposition in the presence of systematic error (i.e., where the noise/error is correlated with the signal, and does not decrease as the number of samples goes to infinity). This requires rethinking all steps of tensor decomposition methods from the ground up. For simplicity, our analysis is stated assuming that the network parameters were chosen from a probability distribution, but the method seems more generally applicable. @InProceedings{STOC17p1057, author = {Sanjeev Arora and Rong Ge and Tengyu Ma and Andrej Risteski}, title = {Provable Learning of Noisy-or Networks}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1057--1066}, doi = {}, year = {2017}, } STOC '17: "Finding Approximate Local ..." 
Finding Approximate Local Minima Faster than Gradient Descent
Naman Agarwal, Zeyuan Allen-Zhu, Brian Bullins, Elad Hazan, and Tengyu Ma (Princeton University, USA; IAS, USA) We design a nonconvex second-order optimization algorithm that is guaranteed to return an approximate local minimum in time which scales linearly in the underlying dimension and the number of training examples. The time complexity of our algorithm to find an approximate local minimum is even faster than that of gradient descent to find a critical point. Our algorithm applies to a general class of optimization problems including training a neural network and other nonconvex objectives arising in machine learning. @InProceedings{STOC17p1195, author = {Naman Agarwal and Zeyuan Allen-Zhu and Brian Bullins and Elad Hazan and Tengyu Ma}, title = {Finding Approximate Local Minima Faster than Gradient Descent}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1195--1199}, doi = {}, year = {2017}, } 
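A toy illustration of why second-order information helps (this numpy sketch is not the paper's algorithm, which deliberately avoids forming Hessians explicitly): at a saddle point of f(x,y) = x² − y², the gradient vanishes, so gradient descent stalls, yet stepping along the most negative Hessian eigendirection still decreases f:

```python
import numpy as np

def f(p):
    x, y = p
    return x**2 - y**2  # classic saddle: critical point at the origin

def grad(p):
    x, y = p
    return np.array([2 * x, -2 * y])

def hessian(p):
    return np.array([[2.0, 0.0], [0.0, -2.0]])

p = np.zeros(2)                    # the saddle point itself
assert np.allclose(grad(p), 0)     # first-order methods see a critical point

# Second-order escape: step along the eigenvector of the most negative
# Hessian eigenvalue, a guaranteed local descent direction.
w, V = np.linalg.eigh(hessian(p))
direction = V[:, np.argmin(w)]
p_new = p + 0.1 * direction
assert f(p_new) < f(p)             # the saddle is escaped
print(f(p_new))
```

Fast algorithms in this line of work extract such negative-curvature directions implicitly, via Hessian-vector products, rather than by full eigendecomposition.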

Makarychev, Konstantin 
STOC '17: "Algorithms for Stable and ..."
Algorithms for Stable and Perturbation-Resilient Problems
Haris Angelidakis, Konstantin Makarychev, and Yury Makarychev (Toyota Technological Institute at Chicago, USA; Northwestern University, USA) We study the notion of stability and perturbation resilience introduced by Bilu and Linial (2010) and Awasthi, Blum, and Sheffet (2012). A combinatorial optimization problem is α-stable or α-perturbation-resilient if the optimal solution does not change when we perturb all parameters of the problem by a factor of at most α. In this paper, we give improved algorithms for stable instances of various clustering and combinatorial optimization problems. We also prove several hardness results. We first give an exact algorithm for 2-perturbation-resilient instances of clustering problems with natural center-based objectives. The class of clustering problems with natural center-based objectives includes such problems as k-means, k-median, and k-center. Our result improves upon the result of Balcan and Liang (2016), who gave an algorithm for clustering (1+√2 ≈ 2.41)-perturbation-resilient instances. Our result is tight in the sense that no polynomial-time algorithm can solve (2−ε)-perturbation-resilient instances of k-center unless NP = RP, as was shown by Balcan, Haghtalab, and White (2016). We then give an exact algorithm for (2−2/k)-stable instances of Minimum Multiway Cut with k terminals, improving the previous result of Makarychev, Makarychev, and Vijayaraghavan (2014), who gave an algorithm for 4-stable instances. We also give an algorithm for (2−2/k+δ)-weakly stable instances of Minimum Multiway Cut. Finally, we show that there are no robust polynomial-time algorithms for n^{1−ε}-stable instances of Set Cover, Minimum Vertex Cover, and Min 2-Horn Deletion (unless P = NP). @InProceedings{STOC17p438, author = {Haris Angelidakis and Konstantin Makarychev and Yury Makarychev}, title = {Algorithms for Stable and Perturbation-Resilient Problems}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {438--451}, doi = {}, year = {2017}, } 
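A small numerical sanity check of the perturbation-resilience definition on an illustrative instance (not the paper's algorithm): for a 2-perturbation-resilient k-center instance, multiplying every pairwise distance by an independent factor in [1,2] never changes the optimal clustering. Brute force verifies this for two well-separated pairs on a line:

```python
import itertools, random

def kcenter_partition(dist, n, k):
    # brute-force k-center: try every set of k centers, assign each point
    # to its nearest center, keep the assignment minimizing the max distance
    best, best_parts = None, None
    for centers in itertools.combinations(range(n), k):
        radius = max(min(dist[p][c] for c in centers) for p in range(n))
        if best is None or radius < best:
            best = radius
            best_parts = tuple(min(centers, key=lambda c: dist[p][c])
                               for p in range(n))
    return best_parts

def clusters(parts):
    # forget center labels, keep only the induced partition of the points
    groups = {}
    for p, c in enumerate(parts):
        groups.setdefault(c, set()).add(p)
    return frozenset(frozenset(g) for g in groups.values())

pts = [0.0, 1.0, 10.0, 11.0]          # two well-separated pairs on a line
n, k = 4, 2
base = [[abs(a - b) for b in pts] for a in pts]
opt = clusters(kcenter_partition(base, n, k))

rng = random.Random(0)
for _ in range(200):
    pert = [row[:] for row in base]
    for i in range(n):
        for j in range(i):
            factor = 1 + rng.random()  # independent factor in [1, 2]
            pert[i][j] = pert[j][i] = base[i][j] * factor
    assert clusters(kcenter_partition(pert, n, k)) == opt
print(sorted(sorted(g) for g in opt))
```

The instance is chosen so every 2-perturbation keeps intra-pair distances below cross-pair distances, which is exactly the invariance the definition demands.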

Makarychev, Yury 
STOC '17: "Algorithms for Stable and ..."
Algorithms for Stable and Perturbation-Resilient Problems
Haris Angelidakis, Konstantin Makarychev, and Yury Makarychev (Toyota Technological Institute at Chicago, USA; Northwestern University, USA) We study the notion of stability and perturbation resilience introduced by Bilu and Linial (2010) and Awasthi, Blum, and Sheffet (2012). A combinatorial optimization problem is α-stable or α-perturbation-resilient if the optimal solution does not change when we perturb all parameters of the problem by a factor of at most α. In this paper, we give improved algorithms for stable instances of various clustering and combinatorial optimization problems. We also prove several hardness results. We first give an exact algorithm for 2-perturbation-resilient instances of clustering problems with natural center-based objectives. The class of clustering problems with natural center-based objectives includes such problems as k-means, k-median, and k-center. Our result improves upon the result of Balcan and Liang (2016), who gave an algorithm for clustering (1+√2 ≈ 2.41)-perturbation-resilient instances. Our result is tight in the sense that no polynomial-time algorithm can solve (2−ε)-perturbation-resilient instances of k-center unless NP = RP, as was shown by Balcan, Haghtalab, and White (2016). We then give an exact algorithm for (2−2/k)-stable instances of Minimum Multiway Cut with k terminals, improving the previous result of Makarychev, Makarychev, and Vijayaraghavan (2014), who gave an algorithm for 4-stable instances. We also give an algorithm for (2−2/k+δ)-weakly stable instances of Minimum Multiway Cut. Finally, we show that there are no robust polynomial-time algorithms for n^{1−ε}-stable instances of Set Cover, Minimum Vertex Cover, and Min 2-Horn Deletion (unless P = NP). @InProceedings{STOC17p438, author = {Haris Angelidakis and Konstantin Makarychev and Yury Makarychev}, title = {Algorithms for Stable and Perturbation-Resilient Problems}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {438--451}, doi = {}, year = {2017}, } 

Manurangsi, Pasin 
STOC '17: "Almost-Polynomial Ratio ETH-Hardness ..."
Almost-Polynomial Ratio ETH-Hardness of Approximating Densest k-Subgraph
Pasin Manurangsi (University of California at Berkeley, USA) In the Densest k-Subgraph (DkS) problem, given an undirected graph G and an integer k, the goal is to find a subgraph of G on k vertices that contains the maximum number of edges. Even though Bhaskara et al.'s state-of-the-art algorithm for the problem achieves only an O(n^{1/4 + ε}) approximation ratio, previous attempts at proving hardness of approximation, including those under average-case assumptions, fail to achieve a polynomial ratio; the best ratios ruled out under any worst-case assumption and any average-case assumption are only any constant (Raghavendra and Steurer) and 2^{Ω(log^{2/3} n)} (Alon et al.), respectively. In this work, we show, assuming the exponential time hypothesis (ETH), that there is no polynomial-time algorithm that approximates Densest k-Subgraph to within an n^{1/(log log n)^c} factor of the optimum, where c > 0 is a universal constant independent of n. In addition, our result has perfect completeness, meaning that we prove that it is ETH-hard to even distinguish in polynomial time between the case in which G contains a k-clique and the case in which every induced k-subgraph of G has density at most n^{−1/(log log n)^c}. Moreover, if we make the stronger assumption that there is some constant ε > 0 such that no subexponential-time algorithm can distinguish between a satisfiable 3-SAT formula and one which is only (1 − ε)-satisfiable (also known as Gap-ETH), then the ratio above can be improved to n^{f(n)} for any function f whose limit is zero as n goes to infinity (i.e., f ∈ o(1)). @InProceedings{STOC17p954, author = {Pasin Manurangsi}, title = {Almost-Polynomial Ratio ETH-Hardness of Approximating Densest k-Subgraph}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {954--961}, doi = {}, year = {2017}, } 
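For small graphs the DkS objective can of course be computed exactly by brute force; the hardness result above is about polynomial-time approximation on large graphs. A minimal illustrative sketch of the objective (the example graph is arbitrary):

```python
import itertools

def densest_k_subgraph(n, edges, k):
    # exact brute force over all k-subsets of vertices; exponential in n,
    # which is why polynomial-time approximability is the interesting question
    eset = {frozenset(e) for e in edges}
    best_set, best_edges = None, -1
    for S in itertools.combinations(range(n), k):
        m = sum(1 for e in itertools.combinations(S, 2) if frozenset(e) in eset)
        if m > best_edges:
            best_set, best_edges = S, m
    return best_set, best_edges

# a triangle with a pendant path: the densest 3-subgraph is the triangle
edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4)]
S, m = densest_k_subgraph(5, edges, 3)
print(S, m)
```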

Martens, Wim 
STOC '17: "Optimizing Tree Pattern Queries: ..."
Optimizing Tree Pattern Queries: Why Cutting Is Not Enough (Invited Talk)
Wim Martens (University of Bayreuth, Germany) Tree pattern queries are a natural language for querying graph and tree-structured data. A central question for understanding their optimization problem was whether they can be minimized by cutting away redundant parts. This question has been studied since the early 2000s and was recently resolved. @InProceedings{STOC17p3, author = {Wim Martens}, title = {Optimizing Tree Pattern Queries: Why Cutting Is Not Enough (Invited Talk)}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {3--3}, doi = {}, year = {2017}, } 

Martin, James B. 
STOC '17: "Stability of Service under ..."
Stability of Service under Time-of-Use Pricing
Shuchi Chawla, Nikhil R. Devanur, Alexander E. Holroyd, Anna R. Karlin, James B. Martin, and Balasubramanian Sivan (University of Wisconsin-Madison, USA; Microsoft Research, USA; University of Washington, USA; University of Oxford, UK; Google Research, USA) We consider time-of-use pricing as a technique for matching supply and demand of temporal resources with the goal of maximizing social welfare. Relevant examples include energy, computing resources on a cloud computing platform, and charging stations for electric vehicles, among many others. A client/job in this setting has a window of time during which he needs service, and a particular value for obtaining it. We assume a stochastic model for demand, where each job materializes with some probability via an independent Bernoulli trial. Given a per-time-unit pricing of resources, any realized job will first try to get served by the cheapest available resource in its window and, failing that, will try to find service at the next cheapest available resource, and so on. Thus, the natural stochastic fluctuations in demand have the potential to lead to cascading overload events. Our main result shows that setting prices so as to optimally handle the expected demand works well: with high probability, when the actual demand is instantiated, the system is stable and the expected value of the jobs served is very close to that of the optimal offline algorithm. @InProceedings{STOC17p184, author = {Shuchi Chawla and Nikhil R. Devanur and Alexander E. Holroyd and Anna R. Karlin and James B. Martin and Balasubramanian Sivan}, title = {Stability of Service under Time-of-Use Pricing}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {184--197}, doi = {}, year = {2017}, } 

Marx, Dániel 
STOC '17: "Homomorphisms Are a Good Basis ..."
Homomorphisms Are a Good Basis for Counting Small Subgraphs
Radu Curticapean, Holger Dell, and Dániel Marx (Hungarian Academy of Sciences, Hungary; Saarland University, Germany) We introduce graph motif parameters, a class of graph parameters that depend only on the frequencies of constant-size induced subgraphs. Classical works by Lovász show that many interesting quantities have this form, including, for fixed graphs H, the number of H-copies (induced or not) in an input graph G, and the number of homomorphisms from H to G. We use the framework of graph motif parameters to obtain faster algorithms for counting subgraph copies of fixed graphs H in host graphs G. More precisely, for graphs H on k edges, we show how to count subgraph copies of H in time k^{O(k)} · n^{0.174k + o(k)} by a surprisingly simple algorithm. This improves upon previously known running times, such as O(n^{0.91k + c}) time for k-edge matchings or O(n^{0.46k + c}) time for k-cycles. Furthermore, we prove a general complexity dichotomy for evaluating graph motif parameters: Given a class C of such parameters, we consider the problem of evaluating f ∈ C on input graphs G, parameterized by the number of induced subgraphs that f depends upon. For every recursively enumerable class C, we prove the above problem to be either FPT or #W[1]-hard, with an explicit dichotomy criterion. This allows us to recover known dichotomies for counting subgraphs, induced subgraphs, and homomorphisms in a uniform and simplified way, together with improved lower bounds. Finally, we extend graph motif parameters to colored subgraphs and prove a complexity trichotomy: For vertex-colored graphs H and G, where H is from a fixed class of graphs, we want to count color-preserving H-copies in G. We show that this problem is either polynomial-time solvable or FPT or #W[1]-hard, and that the FPT cases indeed need FPT time under reasonable assumptions. 
@InProceedings{STOC17p210, author = {Radu Curticapean and Holger Dell and Dániel Marx}, title = {Homomorphisms Are a Good Basis for Counting Small Subgraphs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {210--223}, doi = {}, year = {2017}, } 
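The "homomorphisms as a basis" idea can be checked by hand on a tiny case: the number of subgraph copies of the 3-vertex path P3 in G equals (hom(P3, G) − hom(K2, G))/2, a linear combination of homomorphism counts. A brute-force verification (the example graph is arbitrary):

```python
import itertools

def hom(H_edges, h_n, G_adj):
    # count homomorphisms: all vertex maps V(H) -> V(G) preserving edges
    g_n = len(G_adj)
    return sum(
        all(phi[v] in G_adj[phi[u]] for u, v in H_edges)
        for phi in itertools.product(range(g_n), repeat=h_n)
    )

def sub_p3(G_adj):
    # direct count of subgraph copies of P3: one per pair of neighbors of a vertex
    return sum(len(nbrs) * (len(nbrs) - 1) // 2 for nbrs in G_adj)

# a small test graph: a 4-cycle with one chord
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n = 4
adj = [set() for _ in range(n)]
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

p3 = [(0, 1), (1, 2)]   # path on 3 vertices
k2 = [(0, 1)]           # single edge
lhs = sub_p3(adj)
rhs = (hom(p3, 3, adj) - hom(k2, 2, adj)) // 2
print(lhs, rhs)
assert lhs == rhs       # sub(P3, G) = (hom(P3, G) - hom(K2, G)) / 2
```

Here hom(P3, G) counts walks of length 2 (Σ deg(v)²) and hom(K2, G) counts ordered edges (Σ deg(v)); subtracting removes the non-injective maps and dividing by |Aut(P3)| = 2 yields the subgraph count.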

Maus, Yannic 
STOC '17: "On the Complexity of Local ..."
On the Complexity of Local Distributed Graph Problems
Mohsen Ghaffari, Fabian Kuhn, and Yannic Maus (ETH Zurich, Switzerland; University of Freiburg, Germany) This paper is centered on the complexity of graph problems in the well-studied LOCAL model of distributed computing, introduced by Linial [FOCS '87]. It is widely known that for many of the classic distributed graph problems (including maximal independent set (MIS) and (Δ+1)-vertex coloring), the randomized complexity is at most polylogarithmic in the size n of the network, while the best deterministic complexity is typically 2^{O(√log n)}. Understanding and potentially narrowing down this exponential gap is considered to be one of the central long-standing open questions in the area of distributed graph algorithms. We investigate the problem by introducing a complexity-theoretic framework that allows us to shed some light on the role of randomness in the LOCAL model. We define the SLOCAL model as a sequential version of the LOCAL model. Our framework allows us to prove completeness results with respect to the class of problems which can be solved efficiently in the SLOCAL model, implying that if any of the complete problems can be solved deterministically in log n rounds in the LOCAL model, we can deterministically solve all efficient SLOCAL problems (including MIS and (Δ+1)-coloring) in log n rounds in the LOCAL model. Perhaps most surprisingly, we show that a rather rudimentary-looking graph coloring problem is complete in the above sense: Color the nodes of a graph with colors red and blue such that each node of sufficiently large polylogarithmic degree has at least one neighbor of each color. The problem admits a trivial zero-round randomized solution. The result can be viewed as showing that the only obstacle to getting efficient deterministic algorithms in the LOCAL model is an efficient algorithm to approximately round fractional values into integer values. 
In addition, our formal framework also allows us to develop polylogarithmic-time randomized distributed algorithms in a simpler way. As a result, we provide a polylogarithmic-time distributed approximation scheme for arbitrary distributed covering and packing integer linear programs. @InProceedings{STOC17p784, author = {Mohsen Ghaffari and Fabian Kuhn and Yannic Maus}, title = {On the Complexity of Local Distributed Graph Problems}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {784--797}, doi = {}, year = {2017}, } 
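The trivial zero-round randomized solution to the red/blue coloring problem mentioned above is easy to verify empirically (on an illustrative graph with constant rather than polylogarithmic degrees): color every node uniformly at random, and a degree-40 node misses one of the two colors among its neighbors only with probability 2^{−39}:

```python
import random

# an illustrative graph: n nodes on a cycle, each adjacent to the d nodes
# on either side, so every node has degree 2d = 40
n, d = 100, 20
nbrs = [[(v + i) % n for i in range(1, d + 1)] +
        [(v - i) % n for i in range(1, d + 1)] for v in range(n)]

rng = random.Random(0)
color = [rng.choice(("red", "blue")) for _ in range(n)]

# each node sees 40 independently colored neighbors; by a union bound over
# all n nodes, the random coloring fails only with negligible probability
ok = all({color[u] for u in nbrs[v]} == {"red", "blue"} for v in range(n))
print(ok)
```

The paper's point is that derandomizing even this trivially easy randomized step appears to capture the whole LOCAL-vs-deterministic gap.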

Mehraban, Saeed 
STOC '17: "The Computational Complexity ..."
The Computational Complexity of Ball Permutations
Scott Aaronson, Adam Bouland, Greg Kuperberg, and Saeed Mehraban (University of Texas at Austin, USA; Massachusetts Institute of Technology, USA; University of California at Davis, USA) We define several models of computation based on permuting distinguishable particles (which we call balls) and characterize their computational complexity. In the quantum setting, we use the representation theory of the symmetric group to find variants of this model which are intermediate between BPP and DQC1 (the class of problems solvable with one clean qubit) and between DQC1 and BQP. Furthermore, we consider a restricted version of this model based on an exactly solvable scattering problem of particles moving on a line. Despite the simplicity of this model from the perspective of mathematical physics, we show that if we allow intermediate destructive measurements and specific input states, then the model cannot be efficiently simulated classically up to multiplicative error unless the polynomial hierarchy collapses. Finally, we define a classical version of this model in which one can probabilistically permute balls. We find this yields a complexity class which is intermediate between L and BPP, and that a nondeterministic version of this model is NP-complete. @InProceedings{STOC17p317, author = {Scott Aaronson and Adam Bouland and Greg Kuperberg and Saeed Mehraban}, title = {The Computational Complexity of Ball Permutations}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {317--327}, doi = {}, year = {2017}, } 

Mehta, Ruta 
STOC '17: "Settling the Complexity of ..."
Settling the Complexity of Leontief and PLC Exchange Markets under Exact and Approximate Equilibria
Jugal Garg, Ruta Mehta, Vijay V. Vazirani, and Sadra Yazdanbod (University of Illinois at Urbana-Champaign, USA; Georgia Institute of Technology, USA) Our first result shows membership in PPAD for the problem of computing approximate equilibria for an Arrow-Debreu exchange market for piecewise-linear concave (PLC) utility functions. As a corollary we also obtain membership in PPAD for Leontief utility functions. This settles an open question of Vazirani and Yannakakis (2011). Next we show FIXP-hardness of computing equilibria in Arrow-Debreu exchange markets under Leontief utility functions, and Arrow-Debreu markets under linear utility functions and Leontief production sets, thereby settling these open questions of Vazirani and Yannakakis (2011). As corollaries, we obtain FIXP-hardness for PLC utilities and for Arrow-Debreu markets under linear utility functions and polyhedral production sets. In all cases, as required under FIXP, the set of instances mapped onto will admit equilibria, i.e., will be "yes" instances. If all instances are under consideration, then in all cases we prove that the problem of deciding if a given instance admits an equilibrium is ETR-complete, where ETR is the class Existential Theory of Reals. As a consequence of the results stated above, and the fact that membership in FIXP has been established for PLC utilities, the entire computational difficulty of Arrow-Debreu markets under PLC utility functions lies in the Leontief utility subcase. This is perhaps the most unexpected aspect of our result, since Leontief utilities are meant for the case that goods are perfect complements, whereas PLC utilities are very general, capturing not only the cases when goods are complements and substitutes, but also arbitrary combinations of these and much more. Finally, we give a polynomial-time algorithm for finding an equilibrium in Arrow-Debreu exchange markets under Leontief utility functions provided the number of agents is a constant. 
This settles part of an open problem of Devanur and Kannan (2008). @InProceedings{STOC17p890, author = {Jugal Garg and Ruta Mehta and Vijay V. Vazirani and Sadra Yazdanbod}, title = {Settling the Complexity of Leontief and PLC Exchange Markets under Exact and Approximate Equilibria}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {890--901}, doi = {}, year = {2017}, } 

Meka, Raghu 
STOC '17: "Approximating Rectangles by ..."
Approximating Rectangles by Juntas and Weakly-Exponential Lower Bounds for LP Relaxations of CSPs
Pravesh K. Kothari, Raghu Meka, and Prasad Raghavendra (Princeton University, USA; IAS, USA; University of California at Los Angeles, USA; University of California at Berkeley, USA) We show that for constraint satisfaction problems (CSPs), subexponential-size linear programming relaxations are as powerful as n^{Ω(1)} rounds of the Sherali-Adams linear programming hierarchy. As a corollary, we obtain subexponential-size lower bounds for linear programming relaxations that beat random guessing for many CSPs such as MAX-CUT and MAX-3-SAT. This is a nearly exponential improvement over previous results; previously, the best known lower bounds were quasipolynomial in n (Chan, Lee, Raghavendra, Steurer 2013). Our bounds are obtained by exploiting and extending the recent progress in communication complexity for "lifting" query lower bounds to communication problems. The main ingredient in our results is a new structural result on "high-entropy rectangles" that may be of independent interest in communication complexity. @InProceedings{STOC17p590, author = {Pravesh K. Kothari and Raghu Meka and Prasad Raghavendra}, title = {Approximating Rectangles by Juntas and Weakly-Exponential Lower Bounds for LP Relaxations of CSPs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {590--603}, doi = {}, year = {2017}, } 

Meunier, Pierre-Étienne 
STOC '17: "The Non-cooperative Tile Assembly ..."
The Non-cooperative Tile Assembly Model Is Not Intrinsically Universal or Capable of Bounded Turing Machine Simulation
Pierre-Étienne Meunier and Damien Woods (Inria, France) The field of algorithmic self-assembly is concerned with the computational and expressive power of nanoscale self-assembling molecular systems. In the well-studied cooperative, or temperature 2, abstract tile assembly model it is known that there is a tile set to simulate any Turing machine and an intrinsically universal tile set that simulates the shapes and dynamics of any instance of the model, up to spatial rescaling. It has been an open question as to whether the seemingly simpler non-cooperative, or temperature 1, model is capable of such behaviour. Here we show that this is not the case by showing that there is no tile set in the non-cooperative model that is intrinsically universal, nor one capable of time-bounded Turing machine simulation within a bounded region of the plane. Although the non-cooperative model intuitively seems to lack the complexity and power of the cooperative model it has been exceedingly hard to prove this. One reason is that there have been few tools to analyse the structure of complicated paths in the plane. This paper provides a number of such tools. A second reason is that almost every obvious and small generalisation to the model (e.g. allowing error, 3D, non-square tiles, signals/wires on tiles, tiles that repel each other, parallel synchronous growth) endows it with great computational, and sometimes simulation, power. Our main results show that all of these generalisations provably increase computational and/or simulation power. Our results hold for both deterministic and nondeterministic non-cooperative systems. Our first main result stands in stark contrast with the fact that for both the cooperative tile assembly model, and for 3D non-cooperative tile assembly, there are respective intrinsically universal tile sets. Our second main result gives a new technique (reduction to simulation) for proving negative results about computation in tile assembly. 
@InProceedings{STOC17p328, author = {Pierre-Étienne Meunier and Damien Woods}, title = {The Non-cooperative Tile Assembly Model Is Not Intrinsically Universal or Capable of Bounded Turing Machine Simulation}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {328--341}, doi = {}, year = {2017}, } 

Minzer, Dor 
STOC '17: "On Independent Sets, 2-to-2 ..."
On Independent Sets, 2-to-2 Games, and Grassmann Graphs
Subhash Khot, Dor Minzer, and Muli Safra (New York University, USA; Tel Aviv University, Israel) We present a candidate reduction from the 3-Lin problem to the 2-to-2 Games problem and present a combinatorial hypothesis about Grassmann graphs which, if correct, is sufficient to show the soundness of the reduction in a certain nonstandard sense. A reduction that is sound in this nonstandard sense implies that it is NP-hard to distinguish whether an n-vertex graph has an independent set of size (1 − 1/√2)n − o(n) or whether every independent set has size o(n), and consequently, that it is NP-hard to approximate the Vertex Cover problem within a factor √2 − o(1). @InProceedings{STOC17p576, author = {Subhash Khot and Dor Minzer and Muli Safra}, title = {On Independent Sets, 2-to-2 Games, and Grassmann Graphs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {576--589}, doi = {}, year = {2017}, } 

Moitra, Ankur 
STOC '17: "Approximate Counting, the ..."
Approximate Counting, the Lovász Local Lemma, and Inference in Graphical Models
Ankur Moitra (Massachusetts Institute of Technology, USA) In this paper we introduce a new approach for approximately counting in bounded-degree systems with higher-order constraints. Our main result is an algorithm to approximately count the number of solutions to a CNF formula Φ when the width is logarithmic in the maximum degree. This closes an exponential gap between the known upper and lower bounds. Moreover our algorithm extends straightforwardly to approximate sampling, which shows that under Lovász Local Lemma-like conditions it is not only possible to find a satisfying assignment, it is also possible to generate one approximately uniformly at random from the set of all satisfying assignments. Our approach is a significant departure from earlier techniques in approximate counting, and is based on a framework to bootstrap an oracle for computing marginal probabilities on individual variables. Finally, we give an application of our results to show that it is algorithmically possible to sample from the posterior distribution in an interesting class of graphical models. @InProceedings{STOC17p356, author = {Ankur Moitra}, title = {Approximate Counting, the Lovász Local Lemma, and Inference in Graphical Models}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {356--369}, doi = {}, year = {2017}, } 

Montanari, Andrea 
STOC '17: "How Well Do Local Algorithms ..."
How Well Do Local Algorithms Solve Semidefinite Programs?
Zhou Fan and Andrea Montanari (Stanford University, USA) Several probabilistic models from high-dimensional statistics and machine learning reveal an intriguing and yet poorly understood dichotomy. Either simple local algorithms succeed in estimating the object of interest, or even sophisticated semidefinite programming (SDP) relaxations fail. In order to explore this phenomenon, we study a classical SDP relaxation of the minimum graph bisection problem, when applied to Erdős–Rényi random graphs with bounded average degree d > 1, and obtain several types of results. First, we use a dual witness construction (using the so-called non-backtracking matrix of the graph) to upper bound the SDP value. Second, we prove that a simple local algorithm approximately solves the SDP to within a factor 2d^2/(2d^2 + d − 1) of the upper bound. In particular, the local algorithm is at most 8/9 suboptimal, and 1 + O(d^{−1}) suboptimal for large degree. We then analyze a more sophisticated local algorithm, which aggregates information according to the harmonic measure on the limiting Galton–Watson (GW) tree. The resulting lower bound is expressed in terms of the conductance of the GW tree and matches surprisingly well the empirically determined SDP values on large-scale Erdős–Rényi graphs. We finally consider the planted partition model. In this case, purely local algorithms are known to fail, but they do succeed if a small amount of side information is available. Our results imply quantitative bounds on the threshold for partial recovery using SDP in this model. @InProceedings{STOC17p604, author = {Zhou Fan and Andrea Montanari}, title = {How Well Do Local Algorithms Solve Semidefinite Programs?}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {604--614}, doi = {}, year = {2017}, } 

Moran, Shay 
STOC '17: "Twenty (Simple) Questions ..."
Twenty (Simple) Questions
Yuval Dagan, Yuval Filmus, Ariel Gabizon, and Shay Moran (Technion, Israel; Zerocoin Electronic Coin, USA; University of California at San Diego, USA; Simons Institute for the Theory of Computing, Berkeley, USA) A basic combinatorial interpretation of Shannon's entropy function is via the "20 questions" game. This cooperative game is played by two players, Alice and Bob: Alice picks a distribution π over the numbers {1,…,n}, and announces it to Bob. She then chooses a number x according to π, and Bob attempts to identify x using as few Yes/No queries as possible, on average. An optimal strategy for the "20 questions" game is given by a Huffman code for π: Bob's questions reveal the codeword for x bit by bit. This strategy finds x using fewer than H(π)+1 questions on average. However, the questions asked by Bob could be arbitrary. In this paper, we investigate the following question: *Are there restricted sets of questions that match the performance of Huffman codes, either exactly or approximately?* Our first main result shows that for every distribution π, Bob has a strategy that uses only questions of the form "x < c?" and "x = c?", and uncovers x using at most H(π)+1 questions on average, matching the performance of Huffman codes in this sense. We also give a natural set of O(rn^{1/r}) questions that achieve a performance of at most H(π)+r, and show that Ω(rn^{1/r}) questions are required to achieve such a guarantee. Our second main result gives a set Q of 1.25^{n+o(n)} questions such that for every distribution π, Bob can implement an optimal strategy for π using only questions from Q. We also show that 1.25^{n−o(n)} questions are needed, for infinitely many n. If we allow a small slack of r over the optimal strategy, then roughly (rn)^{Θ(1/r)} questions are necessary and sufficient. 
@InProceedings{STOC17p9, author = {Yuval Dagan and Yuval Filmus and Ariel Gabizon and Shay Moran}, title = {Twenty (Simple) Questions}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {9--21}, doi = {}, year = {2017}, } 
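The Huffman guarantee the abstract compares against — an average of fewer than H(π)+1 questions — is easy to check numerically. A minimal sketch in Python (function names are ours, not from the paper); it computes the expected codeword length of a Huffman code, which equals Bob's average question count under the unrestricted strategy:

```python
import heapq
import math

def huffman_avg_length(pi):
    """Expected codeword length of a Huffman code for distribution pi.
    Each question Bob asks reveals one codeword bit, so this is his
    average number of Yes/No questions under the Huffman strategy."""
    heap = [(p, i) for i, p in enumerate(pi)]
    heapq.heapify(heap)
    total = 0.0
    counter = len(pi)
    while len(heap) > 1:
        p1, _ = heapq.heappop(heap)
        p2, _ = heapq.heappop(heap)
        total += p1 + p2  # each merge adds one bit to every leaf below it
        heapq.heappush(heap, (p1 + p2, counter))
        counter += 1
    return total

def entropy(pi):
    """Shannon entropy H(pi) in bits."""
    return -sum(p * math.log2(p) for p in pi if p > 0)

pi = [0.4, 0.3, 0.2, 0.1]
L, H = huffman_avg_length(pi), entropy(pi)
assert H <= L < H + 1  # the classical bound H(pi) <= avg < H(pi) + 1
```

The paper's first result says Bob can match the H(π)+1 bound using only comparison ("x < c?") and equality ("x = c?") questions; the sketch above only verifies the unrestricted Huffman benchmark.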

Mori, Ryuhei 
STOC '17: "Sum of Squares Lower Bounds ..."
Sum of Squares Lower Bounds for Refuting any CSP
Pravesh K. Kothari, Ryuhei Mori, Ryan O'Donnell, and David Witmer (Princeton University, USA; IAS, USA; Tokyo Institute of Technology, Japan; Carnegie Mellon University, USA) Let P:{0,1}^{k} → {0,1} be a nontrivial k-ary predicate. Consider a random instance of the constraint satisfaction problem CSP(P) on n variables with Δn constraints, each being P applied to k randomly chosen literals. Provided the constraint density satisfies Δ ≫ 1, such an instance is unsatisfiable with high probability. The refutation problem is to efficiently find a proof of unsatisfiability. We show that whenever the predicate P supports a t-wise uniform probability distribution on its satisfying assignments, the sum of squares (SOS) algorithm of degree d = Θ(n/Δ^{2/(t−1)} logΔ) (which runs in time n^{O(d)}) cannot refute a random instance of CSP(P). In particular, the polynomial-time SOS algorithm requires Ω(n^{(t+1)/2}) constraints to refute random instances of CSP(P) when P supports a t-wise uniform distribution on its satisfying assignments. Together with recent work of Lee et al. (Lee, Raghavendra, Steurer 2015), our result also implies that any polynomial-size semidefinite programming relaxation for refutation requires at least Ω(n^{(t+1)/2}) constraints. More generally, we consider the δ-refutation problem, in which the goal is to certify that at most a (1−δ)-fraction of constraints can be simultaneously satisfied. We show that if P is δ-close to supporting a t-wise uniform distribution on satisfying assignments, then the degree-Ω(n/Δ^{2/(t−1)} logΔ) SOS algorithm cannot (δ+o(1))-refute a random instance of CSP(P). This is the first result to show a distinction between the degree SOS needs to solve the refutation problem and the degree it needs to solve the harder δ-refutation problem. Our results (which also extend with no change to CSPs over larger alphabets) subsume all previously known lower bounds for semialgebraic refutation of random CSPs. 
For every constraint predicate P, they give a three-way hardness tradeoff between the density of constraints, the SOS degree (hence running time), and the strength of the refutation. By recent algorithmic results of Allen, O’Donnell, Witmer (2015) and Raghavendra, Rao, Schramm (2016), this full three-way tradeoff is tight, up to lower-order factors. @InProceedings{STOC17p132, author = {Pravesh K. Kothari and Ryuhei Mori and Ryan O'Donnell and David Witmer}, title = {Sum of Squares Lower Bounds for Refuting any CSP}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {132--145}, doi = {}, year = {2017}, } 

Moseley, Benjamin 
STOC '17: "Efficient Massively Parallel ..."
Efficient Massively Parallel Methods for Dynamic Programming
Sungjin Im, Benjamin Moseley, and Xiaorui Sun (University of California at Merced, USA; Washington University at St. Louis, USA; Simons Institute for the Theory of Computing Berkeley, USA) Modern science and engineering are driven by massive data sets, and their advance relies heavily on massively parallel computing platforms such as Spark, MapReduce, and Hadoop. Theoretical models have been proposed to understand the power and limitations of such platforms. Recent study of these theoretical models has led to the discovery of new algorithms that are fast and efficient in both theory and practice, thereby beginning to unlock their underlying power. Given recent promising results, the area has turned its focus to discovering widely applicable algorithmic techniques for solving problems efficiently. In this paper we make progress towards this goal by giving a principled framework for simulating sequential dynamic programs in the distributed setting. In particular, we identify two key properties, monotonicity and decomposability, which allow us to derive efficient distributed algorithms for problems possessing these properties. We showcase our framework by considering several core dynamic programming applications: Longest Increasing Subsequence, Optimal Binary Search Tree, and Weighted Interval Selection. For these problems, we derive algorithms yielding solutions that are arbitrarily close to the optimum, using O(1) rounds and Õ(n/m) memory on each machine, where n is the input size and m is the number of machines available. @InProceedings{STOC17p798, author = {Sungjin Im and Benjamin Moseley and Xiaorui Sun}, title = {Efficient Massively Parallel Methods for Dynamic Programming}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {798--811}, doi = {}, year = {2017}, } 
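For context, here is the sequential dynamic program for one of the showcased applications, Longest Increasing Subsequence, in its standard O(n log n) patience-sorting form. This is only the sequential baseline; the paper's contribution is a framework for simulating such monotone, decomposable DPs in O(1) distributed rounds:

```python
from bisect import bisect_left

def lis_length(seq):
    """Length of a longest strictly increasing subsequence.
    tails[k] holds the smallest possible tail value of an increasing
    subsequence of length k+1; each element updates it by binary search.
    """
    tails = []
    for x in seq:
        i = bisect_left(tails, x)
        if i == len(tails):
            tails.append(x)   # extends the longest subsequence so far
        else:
            tails[i] = x      # found a smaller tail for length i+1
    return len(tails)

assert lis_length([3, 1, 4, 1, 5, 9, 2, 6]) == 4  # e.g. 1, 4, 5, 9
```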

Mukhopadhyay, Partha 
STOC '17: "Randomized Polynomial Time ..."
Randomized Polynomial Time Identity Testing for Noncommutative Circuits
V. Arvind, Pushkar S Joglekar, Partha Mukhopadhyay, and S. Raja (Institute of Mathematical Sciences, India; Vishwakarma Institute of Technology Pune, India; Chennai Mathematical Institute, India) In this paper we show that black-box polynomial identity testing for noncommutative polynomials f∈F⟨ z_{1},z_{2},⋯,z_{n} ⟩ of degree D and sparsity t can be done in randomized poly(n, log t, log D) time. As a consequence, given a circuit C of size s computing a polynomial f∈F⟨ z_{1},z_{2},⋯,z_{n} ⟩ with at most t nonzero monomials, testing whether f is identically zero can be done by a randomized algorithm with running time polynomial in s, n, and log t. This makes significant progress on a question that has been open for over ten years. Our algorithm is based on automata-theoretic ideas that can efficiently isolate a monomial in the given polynomial. In particular, we carry out the monomial isolation using nondeterministic automata. In general, noncommutative circuits of size s can compute polynomials of degree exponential in s and number of monomials double-exponential in s. In this paper, we consider a natural class of homogeneous noncommutative circuits, which we call +-regular circuits, and give a white-box polynomial-time deterministic polynomial identity test. These circuits can compute noncommutative polynomials with a number of monomials double-exponential in the circuit size. Our algorithm combines some new structural results for +-regular circuits with known results for noncommutative ABP identity testing, the rank bound of commutative depth-three identities, and the equivalence testing problem for words. Finally, we consider the black-box identity testing problem for depth-three +-regular circuits and give a randomized polynomial-time identity test. 
In particular, we show that if f∈F⟨ Z ⟩ is a nonzero noncommutative polynomial computed by a depth-three +-regular circuit of size s, then f cannot be a polynomial identity for the matrix algebra M_{s}(F) when F is sufficiently large depending on the degree of f. @InProceedings{STOC17p831, author = {V. Arvind and Pushkar S Joglekar and Partha Mukhopadhyay and S. Raja}, title = {Randomized Polynomial Time Identity Testing for Noncommutative Circuits}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {831--841}, doi = {}, year = {2017}, } 

Nanongkai, Danupon 
STOC '17: "Dynamic Spanning Forest with ..."
Dynamic Spanning Forest with Worst-Case Update Time: Adaptive, Las Vegas, and O(n^{1/2 − ε})-Time
Danupon Nanongkai and Thatchaphol Saranurak (KTH, Sweden) We present two algorithms for dynamically maintaining a spanning forest of a graph undergoing edge insertions and deletions. Our algorithms guarantee worst-case update time and work against an adaptive adversary, meaning that an edge update can depend on previous outputs of the algorithms. We provide the first polynomial improvement over the long-standing O(√n) bound of [Frederickson STOC’84, Eppstein, Galil, Italiano and Nissenzweig FOCS’92] for this type of algorithm. The previously best improvement was O(√n (loglogn)^{2}/logn) [Kejlberg-Rasmussen, Kopelowitz, Pettie and Thorup ESA’16]. We note however that these bounds were obtained by deterministic algorithms, while our algorithms are randomized. Our first algorithm is Monte Carlo and guarantees an O(n^{0.4+o(1)}) worst-case update time, where the o(1) term hides the O(√loglogn/logn) factor. Our second algorithm is Las Vegas and guarantees an O(n^{0.49306}) worst-case update time with high probability. Algorithms with better update time either needed to assume that the adversary is oblivious (e.g. [Kapron, King and Mountjoy SODA’13]) or could only guarantee an amortized update time. Our second result answers an open problem by Kapron et al. To the best of our knowledge, our algorithms are among the few nontrivial randomized dynamic algorithms that work against adaptive adversaries. The key to our results is a decomposition of graphs into subgraphs that either have high expansion or are sparse. This decomposition serves as an interface between recent developments on (static) flow computation and many old ideas in dynamic graph algorithms: On the one hand, we can combine previous dynamic graph techniques to get faster dynamic spanning forest algorithms if such a decomposition is given. On the other hand, we can adapt flow-related techniques (e.g. 
those from [Khandekar, Rao and Vazirani STOC’06], [Peng SODA’16], and [Orecchia and Zhu SODA’14]) to maintain such a decomposition. To the best of our knowledge, this is the first time these flow techniques are used in fully dynamic graph algorithms. @InProceedings{STOC17p1122, author = {Danupon Nanongkai and Thatchaphol Saranurak}, title = {Dynamic Spanning Forest with Worst-Case Update Time: Adaptive, Las Vegas, and O(n<sup>1/2 − ε</sup>)-Time}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1122--1129}, doi = {}, year = {2017}, } 

Naor, Assaf 
STOC '17: "The Integrality Gap of the ..."
The Integrality Gap of the Goemans–Linial SDP Relaxation for Sparsest Cut Is at Least a Constant Multiple of √log n
Assaf Naor and Robert Young (Princeton University, USA; New York University, USA) We prove that the integrality gap of the Goemans–Linial semidefinite programming relaxation for the Sparsest Cut Problem is Ω(√logn) on inputs with n vertices, thus matching the previously best known upper bound (logn)^{1/2+o(1)} up to lower-order factors. This statement is a consequence of the following new isoperimetric-type inequality. Consider the 8-regular graph whose vertex set is the 5-dimensional integer grid ℤ^{5} and where each vertex (a,b,c,d,e)∈ ℤ^{5} is connected to the 8 vertices (a± 1,b,c,d,e), (a,b± 1,c,d,e), (a,b,c± 1,d,e± a), (a,b,c,d± 1,e± b). This graph is known as the Cayley graph of the 5-dimensional discrete Heisenberg group. Given Ω⊂ ℤ^{5}, denote the size of its edge boundary in this graph (a.k.a. the horizontal perimeter of Ω) by |∂_{h}Ω|. For t∈ ℕ, denote by |∂_{v}^{t}Ω| the number of (a,b,c,d,e)∈ ℤ^{5} such that exactly one of the two vectors (a,b,c,d,e), (a,b,c,d,e+t) is in Ω. The vertical perimeter of Ω is defined to be |∂_{v}Ω| = √(∑_{t=1}^{∞} |∂_{v}^{t}Ω|^{2}/t^{2}). We show that every subset Ω⊂ ℤ^{5} satisfies |∂_{v}Ω| = O(|∂_{h}Ω|). This vertical-versus-horizontal isoperimetric inequality yields the above-stated integrality gap for Sparsest Cut and answers several geometric and analytic questions of independent interest. The theorem stated above is the culmination of a program whose aim is to understand the performance of the Goemans–Linial semidefinite program through the embeddability properties of Heisenberg groups. These investigations have mathematical significance even beyond their established relevance to approximation algorithms and combinatorial optimization. In particular they contribute to a range of mathematical disciplines including functional analysis, geometric group theory, harmonic analysis, sub-Riemannian geometry, geometric measure theory, ergodic theory, group representations, and metric differentiation. 
This article builds on the above-cited works, with the “twist” that while those works were equally valid for any finite-dimensional Heisenberg group, our result holds for the Heisenberg group of dimension 5 (or higher) but fails for the 3-dimensional Heisenberg group. This insight leads to our core contribution, which is a deduction of an endpoint L_{1}-boundedness of a certain singular integral on ℝ^{5} from the (local) L_{2}-boundedness of the corresponding singular integral on ℝ^{3}. To do this, we devise a corona-type decomposition of subsets of a Heisenberg group, in the spirit of the construction that David and Semmes performed in ℝ^{n}, but with two main conceptual differences (in addition to more technical differences that arise from the peculiarities of the geometry of the Heisenberg group). Firstly, the “atoms” of our decomposition are perturbations of intrinsic Lipschitz graphs in the sense of Franchi, Serapioni, and Serra Cassano (plus the requisite “wild” regions that satisfy a Carleson packing condition). Secondly, we control the local overlap of our corona decomposition by using quantitative monotonicity rather than Jones-type β-numbers. @InProceedings{STOC17p564, author = {Assaf Naor and Robert Young}, title = {The Integrality Gap of the Goemans-Linial SDP Relaxation for Sparsest Cut Is at Least a Constant Multiple of √log n}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {564--575}, doi = {}, year = {2017}, } 

Natarajan, Anand 
STOC '17: "A Quantum Linearity Test for ..."
A Quantum Linearity Test for Robustly Verifying Entanglement
Anand Natarajan and Thomas Vidick (Massachusetts Institute of Technology, USA; California Institute of Technology, USA) We introduce a simple two-player test which certifies that the players apply tensor products of Pauli σ_{X} and σ_{Z} observables on the tensor product of n EPR pairs. The test has constant robustness: any strategy achieving success probability within an additive ε of the optimal must be poly(ε)-close, in the appropriate distance measure, to the honest n-qubit strategy. The test involves 2n-bit questions and 2-bit answers. The key technical ingredient is a quantum version of the classical linearity test of Blum, Luby, and Rubinfeld. As applications of our result we give (i) the first robust self-test for n EPR pairs; (ii) a quantum multiprover interactive proof system for the local Hamiltonian problem with a constant number of provers and classical questions and answers, and a constant completeness-soundness gap independent of system size; (iii) a robust protocol for verifiable delegated quantum computation with a constant number of quantum polynomial-time provers sharing entanglement. @InProceedings{STOC17p1003, author = {Anand Natarajan and Thomas Vidick}, title = {A Quantum Linearity Test for Robustly Verifying Entanglement}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1003--1015}, doi = {}, year = {2017}, } 

Nazarov, Fedor 
STOC '17: "Trace Reconstruction with ..."
Trace Reconstruction with exp(O(n^{1/3})) Samples
Fedor Nazarov and Yuval Peres (Kent State University, USA; Microsoft Research, USA) In the trace reconstruction problem, an unknown bit string x ∈ {0,1}^{n} is observed through the deletion channel, which deletes each bit of x with some constant probability q, yielding a contracted string x̃. How many independent copies of x̃ are needed to reconstruct x with high probability? Prior to this work, the best upper bound, due to Holenstein, Mitzenmacher, Panigrahy, and Wieder (2008), was exp(O(n^{1/2})). We improve this bound to exp(O(n^{1/3})) using statistics of individual bits in the output, and show that this bound is sharp in the restricted model where this is the only information used. Our method, which uses elementary complex analysis, can also handle insertions. Similar results were obtained independently and simultaneously by Anindya De, Ryan O’Donnell and Rocco Servedio. @InProceedings{STOC17p1042, author = {Fedor Nazarov and Yuval Peres}, title = {Trace Reconstruction with exp(O(n<sup>1/3</sup>)) Samples}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1042--1046}, doi = {}, year = {2017}, } 
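The deletion channel and the single-bit statistics the bound is based on are simple to simulate. A minimal Python sketch (function names are ours; this only illustrates the channel model, not the reconstruction algorithm itself):

```python
import random

def trace(x, q, rng=random):
    """Pass bit string x through the deletion channel:
    each bit is deleted independently with probability q."""
    return [b for b in x if rng.random() >= q]

def bit_means(x, q, samples, rng=random):
    """Empirical mean of each output position over many traces --
    the per-position statistics that individual-bit methods use."""
    counts = [0] * len(x)
    for _ in range(samples):
        for i, b in enumerate(trace(x, q, rng)):
            counts[i] += b
    return [c / samples for c in counts]

rng = random.Random(0)
x = [1, 0, 1, 1, 0, 0, 1, 0]
means = bit_means(x, 0.3, 2000, rng)  # one estimate per output position
```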

Nederlof, Jesper 
STOC '17: "Faster Space-Efficient Algorithms ..."
Faster Space-Efficient Algorithms for Subset Sum and k-Sum
Nikhil Bansal, Shashwat Garg, Jesper Nederlof, and Nikhil Vyas (Eindhoven University of Technology, Netherlands; IIT Bombay, India) We present randomized algorithms that solve Subset Sum and Knapsack instances with n items in O^{*}(2^{0.86n}) time, where the O^{*}(·) notation suppresses factors polynomial in the input size, and polynomial space, assuming random read-only access to exponentially many random bits. These results can be extended to solve Binary Linear Programming on n variables with few constraints in a similar running time. We also show that for any constant k≥ 2, random instances of k-Sum can be solved using O(n^{k−0.5} polylog(n)) time and O(logn) space, without the assumption of random access to random bits. Underlying these results is an algorithm that determines whether two given lists of length n with integers bounded by a polynomial in n share a common value. Assuming random read-only access to random bits, we show that this problem can be solved using O(logn) space significantly faster than the trivial O(n^{2})-time algorithm if no value occurs too often in the same list. @InProceedings{STOC17p198, author = {Nikhil Bansal and Shashwat Garg and Jesper Nederlof and Nikhil Vyas}, title = {Faster Space-Efficient Algorithms for Subset Sum and k-Sum}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {198--209}, doi = {}, year = {2017}, } 
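For contrast with the paper's polynomial-space result, here is the classical meet-in-the-middle baseline for Subset Sum: O^{*}(2^{n/2}) time but also O^{*}(2^{n/2}) space. A minimal sketch (not from the paper) that makes the time/space trade-off being improved concrete:

```python
from itertools import combinations

def subset_sum_mitm(items, target):
    """Meet-in-the-middle Subset Sum: enumerate all subset sums of each
    half, then look for a pair of half-sums adding to the target.
    Runs in O*(2^{n/2}) time but needs O*(2^{n/2}) space, whereas the
    paper trades time (O*(2^{0.86n})) for polynomial space."""
    half = len(items) // 2
    left, right = items[:half], items[half:]

    def sums(part):
        # All achievable subset sums of one half.
        return {sum(c) for r in range(len(part) + 1)
                       for c in combinations(part, r)}

    right_sums = sums(right)
    return any(target - s in right_sums for s in sums(left))

assert subset_sum_mitm([3, 34, 4, 12, 5, 2], 9)       # 4 + 5
assert not subset_sum_mitm([3, 34, 4, 12, 5, 2], 1)
```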

Nguyen, Danny 
STOC '17: "Complexity of Short Presburger ..."
Complexity of Short Presburger Arithmetic
Danny Nguyen and Igor Pak (University of California at Los Angeles, USA) We study the complexity of short sentences in Presburger arithmetic (Short-PA). Here by “short” we mean sentences with a bounded number of variables, quantifiers, inequalities and Boolean operations; the input consists only of the integers involved in the inequalities. We prove that, assuming Kannan’s partition can be found in polynomial time, the satisfiability of Short-PA sentences can be decided in polynomial time. Furthermore, under the same assumption, we show that the number of satisfying assignments of short Presburger sentences can also be computed in polynomial time. @InProceedings{STOC17p812, author = {Danny Nguyen and Igor Pak}, title = {Complexity of Short Presburger Arithmetic}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {812--820}, doi = {}, year = {2017}, } 

Nguyen, Huy L. 
STOC '17: "Approximate Near Neighbors ..."
Approximate Near Neighbors for General Symmetric Norms
Alexandr Andoni, Huy L. Nguyen, Aleksandar Nikolov, Ilya Razenshteyn, and Erik Waingarten (Columbia University, USA; Northeastern University, USA; University of Toronto, Canada; Massachusetts Institute of Technology, USA) We show that every *symmetric* normed space admits an efficient nearest neighbor search data structure with doubly-logarithmic approximation. Specifically, for every n, d = n^{o(1)}, and every d-dimensional symmetric norm ‖·‖, there exists a data structure for (loglogn)-approximate nearest neighbor search over ‖·‖ for n-point datasets achieving n^{o(1)} query time and n^{1+o(1)} space. The main technical ingredient of the algorithm is a low-distortion embedding of a symmetric norm into a low-dimensional iterated product of top-k norms. We also show that our techniques cannot be extended to *general* norms. @InProceedings{STOC17p902, author = {Alexandr Andoni and Huy L. Nguyen and Aleksandar Nikolov and Ilya Razenshteyn and Erik Waingarten}, title = {Approximate Near Neighbors for General Symmetric Norms}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {902--913}, doi = {}, year = {2017}, } 
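The top-k norms that serve as the embedding target are easy to state concretely: the top-k norm of a vector is the sum of its k largest absolute entries, interpolating between ℓ_∞ (k=1) and ℓ_1 (k=d). A minimal sketch (function name is ours):

```python
def top_k_norm(v, k):
    """Top-k norm of v: the sum of the k largest absolute entries.
    k=1 gives the l_infinity norm; k=len(v) gives the l_1 norm."""
    return sum(sorted((abs(x) for x in v), reverse=True)[:k])

v = [3.0, -1.0, 4.0, -1.0, 5.0]
assert top_k_norm(v, 1) == 5.0        # max-abs entry
assert top_k_norm(v, 2) == 9.0        # 5 + 4
assert top_k_norm(v, len(v)) == 14.0  # l_1 norm
```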

Niazadeh, Rad 
STOC '17: "Bernoulli Factories and Black-Box ..."
Bernoulli Factories and Black-Box Reductions in Mechanism Design
Shaddin Dughmi, Jason D. Hartline, Robert Kleinberg, and Rad Niazadeh (University of Southern California, USA; Northwestern University, USA; Cornell University, USA) We provide a polynomial-time reduction from Bayesian incentive-compatible mechanism design to Bayesian algorithm design for welfare maximization problems. Unlike prior results, our reduction achieves exact incentive compatibility for problems with multi-dimensional and continuous type spaces. The key technical barrier preventing exact incentive compatibility in prior black-box reductions is that repairing violations of incentive constraints requires understanding the distribution of the mechanism’s output, which is typically #P-hard to compute. Reductions that instead estimate the output distribution by sampling inevitably suffer from sampling error, which typically precludes exact incentive compatibility. We overcome this barrier by employing and generalizing the computational model in the literature on "Bernoulli Factories". In a Bernoulli factory problem, one is given a function mapping the bias of an "input coin" to that of an "output coin", and the challenge is to efficiently simulate the output coin given only sample access to the input coin. Consider a generalization which we call the "expectations from samples" computational model, in which a problem instance is specified by a function mapping the expected values of a set of input distributions to a distribution over outcomes. The challenge is to give a polynomial-time algorithm that exactly samples from the distribution over outcomes given only sample access to the input distributions. In this model we give a polynomial-time algorithm for the function given by "exponential weights": expected values of the input distributions correspond to the weights of alternatives and we wish to select an alternative with probability proportional to an exponential function of its weight. 
This algorithm is the key ingredient in designing an incentive-compatible mechanism for bipartite matching, which can be used to make the approximately incentive-compatible reduction of Hartline-Malekian-Kleinberg [2015] exactly incentive compatible. @InProceedings{STOC17p158, author = {Shaddin Dughmi and Jason D. Hartline and Robert Kleinberg and Rad Niazadeh}, title = {Bernoulli Factories and Black-Box Reductions in Mechanism Design}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {158--169}, doi = {}, year = {2017}, } 
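A toy instance of the "expectations from samples" model can be implemented in a few lines: given only sample access to coins with unknown biases w_1,…,w_k, the rejection sampler below outputs index i with probability exactly w_i/(w_1+⋯+w_k). This is a simpler target function than the paper's exponential weights, and the code is our illustration, not the paper's algorithm:

```python
import random

def sample_proportional(coins, rng=random):
    """Exact sampler in the 'expectations from samples' spirit:
    pick a coin uniformly, flip it, accept on heads.  Each round,
    index i is returned with probability w_i / k, so conditioned on
    acceptance the output is i with probability w_i / sum(w)."""
    k = len(coins)
    while True:
        i = rng.randrange(k)
        if coins[i](rng):  # one sample from input distribution i
            return i

# Coins with biases 0.1 and 0.3: index 1 should appear ~75% of the time.
rng = random.Random(1)
coins = [lambda r, p=p: r.random() < p for p in (0.1, 0.3)]
freq = sum(sample_proportional(coins, rng) for _ in range(5000)) / 5000
```

Note that the sampler is exact despite never learning the biases; sampling error shows up only in the (random) running time, which is the hallmark of Bernoulli-factory-style constructions.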

Nikolaenko, Valeria 
STOC '17: "Practical Post-quantum Key ..."
Practical Post-quantum Key Agreement from Generic Lattices (Invited Talk)
Valeria Nikolaenko (Stanford University, USA) Lattice-based cryptography offers some of the most attractive primitives believed to be resistant to quantum computers. This work introduces "Frodo", a concrete instantiation of a key agreement mechanism based on hard problems in generic lattices. @InProceedings{STOC17p8, author = {Valeria Nikolaenko}, title = {Practical Post-quantum Key Agreement from Generic Lattices (Invited Talk)}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {8--8}, doi = {}, year = {2017}, } 

Nikolov, Aleksandar 
STOC '17: "Approximate Near Neighbors ..."
Approximate Near Neighbors for General Symmetric Norms
Alexandr Andoni, Huy L. Nguyen, Aleksandar Nikolov, Ilya Razenshteyn, and Erik Waingarten (Columbia University, USA; Northeastern University, USA; University of Toronto, Canada; Massachusetts Institute of Technology, USA) We show that every *symmetric* normed space admits an efficient nearest neighbor search data structure with doubly-logarithmic approximation. Specifically, for every n, d = n^{o(1)}, and every d-dimensional symmetric norm ‖·‖, there exists a data structure for (loglogn)-approximate nearest neighbor search over ‖·‖ for n-point datasets achieving n^{o(1)} query time and n^{1+o(1)} space. The main technical ingredient of the algorithm is a low-distortion embedding of a symmetric norm into a low-dimensional iterated product of top-k norms. We also show that our techniques cannot be extended to *general* norms. @InProceedings{STOC17p902, author = {Alexandr Andoni and Huy L. Nguyen and Aleksandar Nikolov and Ilya Razenshteyn and Erik Waingarten}, title = {Approximate Near Neighbors for General Symmetric Norms}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {902--913}, doi = {}, year = {2017}, } 

Nimavat, Rachit 
STOC '17: "New Hardness Results for Routing ..."
New Hardness Results for Routing on Disjoint Paths
Julia Chuzhoy, David H. K. Kim, and Rachit Nimavat (Toyota Technological Institute at Chicago, USA; University of Chicago, USA) In the classical Node-Disjoint Paths (NDP) problem, the input consists of an undirected n-vertex graph G, and a collection M={(s_{1},t_{1}),…,(s_{k},t_{k})} of pairs of its vertices, called source-destination, or demand, pairs. The goal is to route the largest possible number of the demand pairs via node-disjoint paths. The best current approximation for the problem is achieved by a simple greedy algorithm, whose approximation factor is O(√n), while the best current negative result is an Ω(log^{1/2−δ}n)-hardness of approximation for any constant δ, under standard complexity assumptions. Even seemingly simple special cases of the problem are still poorly understood: when the input graph is a grid, the best current algorithm achieves an Õ(n^{1/4})-approximation, and when it is a general planar graph, the best current approximation ratio of an efficient algorithm is Õ(n^{9/19}). The best currently known lower bound for both these versions of the problem is APX-hardness. In this paper we prove that NDP is 2^{Ω(√logn)}-hard to approximate, unless all problems in NP have algorithms with running time n^{O(logn)}. Our result holds even when the underlying graph is a planar graph with maximum vertex degree 4, and all source vertices lie on the boundary of a single face (but the destination vertices may lie anywhere in the graph). We extend this result to the closely related Edge-Disjoint Paths problem, showing the same hardness of approximation ratio even for subcubic planar graphs with all sources lying on the boundary of a single face. @InProceedings{STOC17p86, author = {Julia Chuzhoy and David H. K. Kim and Rachit Nimavat}, title = {New Hardness Results for Routing on Disjoint Paths}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {86--99}, doi = {}, year = {2017}, } 
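The simple greedy the abstract refers to is commonly implemented as shortest-path-first: repeatedly route the demand pair with the shortest path in the remaining graph, then delete that path's vertices. A minimal sketch (our illustrative implementation, with hypothetical helper `bfs_path`; not code from the paper):

```python
from collections import deque

def bfs_path(adj, alive, s, t):
    """Shortest s-t path using only vertices in `alive`, or None."""
    parent = {s: None}
    q = deque([s])
    while q:
        u = q.popleft()
        if u == t:
            path = []
            while u is not None:
                path.append(u)
                u = parent[u]
            return path[::-1]
        for v in adj[u]:
            if v in alive and v not in parent:
                parent[v] = u
                q.append(v)
    return None

def greedy_ndp(adj, pairs):
    """Shortest-path-first greedy for Node-Disjoint Paths: route the
    pair with the currently shortest path, delete its vertices, repeat."""
    alive = set(adj)
    routed = []
    while True:
        best = None
        for (s, t) in pairs:
            if s not in alive or t not in alive:
                continue
            path = bfs_path(adj, alive, s, t)
            if path and (best is None or len(path) < len(best[1])):
                best = ((s, t), path)
        if best is None:
            return routed
        pair, path = best
        routed.append((pair, path))
        alive -= set(path)          # consume the routed vertices
        pairs = [p for p in pairs if p != pair]
```

On a path graph 1-2-3-4-5 with demands (1,2) and (4,5) it routes both; with demands (1,3) and (3,5), routing one destroys the other.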

Nisan, Noam 
STOC '17: "Efficient Empirical Revenue ..."
Efficient Empirical Revenue Maximization in Single-Parameter Auction Environments
Yannai A. Gonczarowski and Noam Nisan (Hebrew University of Jerusalem, Israel; Microsoft Research, Israel) We present a polynomial-time algorithm that, given samples from the unknown valuation distribution of each bidder, learns an auction that approximately maximizes the auctioneer's revenue in a variety of single-parameter auction environments including matroid environments, position environments, and the public project environment. The valuation distributions may be arbitrary bounded distributions (in particular, they may be irregular, and may differ for the various bidders), thus resolving a problem left open by previous papers. The analysis uses basic tools, is performed in its entirety in value-space, and simplifies the analysis of previously known results for special cases. Furthermore, the analysis extends to certain single-parameter auction environments where precise revenue maximization is known to be intractable, such as knapsack environments. @InProceedings{STOC17p856, author = {Yannai A. Gonczarowski and Noam Nisan}, title = {Efficient Empirical Revenue Maximization in Single-Parameter Auction Environments}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {856--868}, doi = {}, year = {2017}, } STOC '17: "The Menu-Size Complexity of ..." The Menu-Size Complexity of Revenue Approximation Moshe Babaioff, Yannai A. Gonczarowski, and Noam Nisan (Microsoft Research, Israel; Hebrew University of Jerusalem, Israel) We consider a monopolist that is selling n items to a single additive buyer, where the buyer’s values for the items are drawn according to independent distributions F_{1},F_{2},…,F_{n} that possibly have unbounded support. It is well known that — unlike in the single-item case — the revenue-optimal auction (a pricing scheme) may be complex, sometimes requiring a continuum of menu entries. It is also known that simple auctions with a finite bounded number of menu entries can extract a constant fraction of the optimal revenue. 
Nonetheless, the question of whether an arbitrarily high fraction of the optimal revenue can be extracted via a finite menu size remained open. In this paper, we give an affirmative answer to this open question, showing that for every n and for every ε>0, there exists a complexity bound C=C(n,ε) such that auctions of menu size at most C suffice for obtaining a (1−ε) fraction of the optimal revenue from any F_{1},…,F_{n}. We prove upper and lower bounds on the revenue approximation complexity C(n,ε), as well as on the deterministic communication complexity required to run an auction that achieves such an approximation. @InProceedings{STOC17p869, author = {Moshe Babaioff and Yannai A. Gonczarowski and Noam Nisan}, title = {The Menu-Size Complexity of Revenue Approximation}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {869--877}, doi = {}, year = {2017}, } 
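The idea of learning an auction from valuation samples can be seen in miniature in the simplest setting: one bidder, one item, and a posted price chosen to maximize revenue on the empirical distribution. The sketch below is our toy illustration of that empirical approach; the paper handles far more general single-parameter environments:

```python
def empirical_reserve(samples):
    """Choose the posted price maximizing revenue on the empirical
    distribution of valuation samples (single bidder, single item).
    Only prices equal to observed values need to be considered."""
    best_price, best_rev = 0.0, 0.0
    vals = sorted(samples, reverse=True)
    for i, v in enumerate(vals):
        rev = v * (i + 1) / len(vals)  # price v sells to i+1 of n samples
        if rev > best_rev:
            best_price, best_rev = v, rev
    return best_price, best_rev

# With samples [1, 2, 3, 10], price 10 sells 1/4 of the time for
# empirical revenue 2.5, beating prices 3, 2, and 1.
assert empirical_reserve([1.0, 2.0, 3.0, 10.0]) == (10.0, 2.5)
```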

O'Donnell, Ryan 
STOC '17: "Optimal Mean-Based Algorithms ..."
Optimal Mean-Based Algorithms for Trace Reconstruction
Anindya De, Ryan O'Donnell, and Rocco A. Servedio (Northwestern University, USA; Carnegie Mellon University, USA; Columbia University, USA) In the (deletion-channel) trace reconstruction problem, there is an unknown n-bit source string x. An algorithm is given access to independent “traces” of x, where a trace is formed by deleting each bit of x independently with probability δ. The goal of the algorithm is to recover x exactly (with high probability), while minimizing samples (number of traces) and running time. Previously, the best known algorithm for the trace reconstruction problem was due to Holenstein et al. [HMPW08]; it uses exp(O(n^{1/2})) samples and running time for any fixed 0 < δ < 1. It is also what we call a “mean-based algorithm”, meaning that it only uses the empirical means of the individual bits of the traces. Holenstein et al. also gave a lower bound, showing that any mean-based algorithm must use at least n^{Ω(logn)} samples. In this paper we improve both of these results, obtaining matching upper and lower bounds for mean-based trace reconstruction. For any constant deletion rate 0 < δ < 1, we give a mean-based algorithm that uses exp(O(n^{1/3})) time and traces; we also prove that any mean-based algorithm must use at least exp(Ω(n^{1/3})) traces. In fact, we obtain matching upper and lower bounds even for δ subconstant and ρ = 1−δ subconstant: when (log^{3} n)/n ≪ δ ≤ 1/2 the bound is exp(Θ((δ n)^{1/3})), and when 1/√n ≪ ρ ≤ 1/2 the bound is exp(Θ((n/ρ)^{1/3})). Our proofs involve estimates for the maxima of Littlewood polynomials on complex disks. We show that these techniques can also be used to perform trace reconstruction with random insertions and bit-flips in addition to deletions. We also find a surprising result: for deletion probabilities δ > 1/2, the presence of insertions can actually help with trace reconstruction. @InProceedings{STOC17p1047, author = {Anindya De and Ryan O'Donnell and Rocco A. 
Servedio}, title = {Optimal Mean-Based Algorithms for Trace Reconstruction}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1047--1056}, doi = {}, year = {2017}, } STOC '17: "Efficient Quantum Tomography ..." Efficient Quantum Tomography II Ryan O'Donnell and John Wright (Carnegie Mellon University, USA; Massachusetts Institute of Technology, USA) We continue our analysis of: (i) “Quantum tomography”, i.e., learning a quantum state, i.e., the quantum generalization of learning a discrete probability distribution; (ii) The distribution of Young diagrams output by the RSK algorithm on random words. Regarding (ii), we introduce two powerful new tools: first, a precise upper bound on the expected length of the longest union of k disjoint increasing subsequences in a random length-n word with letter distribution α_{1} ≥ α_{2} ≥ ⋯ ≥ α_{d}. Our bound has the correct main term and second-order term, and holds for all n, not just in the large-n limit. Second, a new majorization property of the RSK algorithm that allows one to analyze the Young diagram formed by the lower rows λ_{k}, λ_{k+1}, … of its output. These tools allow us to prove several new theorems concerning the distribution of random Young diagrams in the non-asymptotic regime, giving concrete error bounds that are optimal, or nearly so, in all parameters. As one example, we give a fundamentally new proof of the celebrated fact that the expected length of the longest increasing sequence in a random length-n permutation is bounded by 2√n. This is the k = 1, α_{i} ≡ 1/d, d → ∞ special case of a much more general result we prove: the expected length of the k-th Young diagram row produced by an α-random word is α_{k} n ± 2√(α_{k}d n). 
From our new analyses of random Young diagrams we derive several new results in quantum tomography, including: (i) learning the eigenvalues of an unknown state to є-accuracy in Hellinger-squared, chi-squared, or KL distance, using n = O(d^{2}/є) copies; (ii) learning the top-k eigenvalues of an unknown state to є-accuracy in Hellinger-squared or chi-squared distance using n = O(kd/є) copies or in ℓ_{2}^{2} distance using n = O(k/є) copies; (iii) learning the optimal rank-k approximation of an unknown state to є-fidelity (Hellinger-squared distance) using n = O(kd/є) copies. We believe our new techniques will lead to further advances in quantum learning; indeed, they have already subsequently been used for efficient von Neumann entropy estimation. @InProceedings{STOC17p962, author = {Ryan O'Donnell and John Wright}, title = {Efficient Quantum Tomography II}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {962-974}, doi = {}, year = {2017}, } STOC '17: "Sum of Squares Lower Bounds ..." Sum of Squares Lower Bounds for Refuting any CSP Pravesh K. Kothari, Ryuhei Mori, Ryan O'Donnell, and David Witmer (Princeton University, USA; IAS, USA; Tokyo Institute of Technology, Japan; Carnegie Mellon University, USA) Let P:{0,1}^{k} → {0,1} be a nontrivial k-ary predicate. Consider a random instance of the constraint satisfaction problem CSP(P) on n variables with Δ n constraints, each being P applied to k randomly chosen literals. Provided the constraint density satisfies Δ ≫ 1, such an instance is unsatisfiable with high probability. The refutation problem is to efficiently find a proof of unsatisfiability. We show that whenever the predicate P supports a t-wise uniform probability distribution on its satisfying assignments, the sum of squares (SOS) algorithm of degree d = Θ(n/Δ^{2/(t−1)} logΔ) (which runs in time n^{O(d)}) cannot refute a random instance of CSP(P). 
In particular, the polynomial-time SOS algorithm requires Ω(n^{(t+1)/2}) constraints to refute random instances of CSP(P) when P supports a t-wise uniform distribution on its satisfying assignments. Together with recent work of Lee et al. (Lee, Raghavendra, Steurer 2015), our result also implies that any polynomial-size semidefinite programming relaxation for refutation requires at least Ω(n^{(t+1)/2}) constraints. More generally, we consider the δ-refutation problem, in which the goal is to certify that at most a (1−δ)-fraction of constraints can be simultaneously satisfied. We show that if P is δ-close to supporting a t-wise uniform distribution on satisfying assignments, then the degree-Ω(n/Δ^{2/(t−1)} logΔ) SOS algorithm cannot (δ+o(1))-refute a random instance of CSP(P). This is the first result to show a distinction between the degree SOS needs to solve the refutation problem and the degree it needs to solve the harder δ-refutation problem. Our results (which also extend with no change to CSPs over larger alphabets) subsume all previously known lower bounds for semialgebraic refutation of random CSPs. For every constraint predicate P, they give a three-way hardness tradeoff between the density of constraints, the SOS degree (hence running time), and the strength of the refutation. By recent algorithmic results of Allen, O’Donnell, Witmer (2015) and Raghavendra, Rao, Schramm (2016), this full three-way tradeoff is tight, up to lower-order factors. @InProceedings{STOC17p132, author = {Pravesh K. Kothari and Ryuhei Mori and Ryan O'Donnell and David Witmer}, title = {Sum of Squares Lower Bounds for Refuting any CSP}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {132-145}, doi = {}, year = {2017}, } 
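The random instances in the Kothari-Mori-O'Donnell-Witmer abstract above are easy to describe concretely: Δn constraints, each applying the k-ary predicate P to k randomly chosen literals. A minimal sketch of this instance model (function names, the literal encoding, and the `fraction_satisfied` helper are our own, for illustration only):

```python
import random

def random_csp_instance(n, k, density, rng=random.Random(0)):
    """Sample a random CSP(P) instance: density*n constraints, each a
    list of k literals over distinct variables, where a literal is a
    pair (variable index, negated?) with the sign chosen uniformly."""
    m = int(density * n)
    return [[(v, rng.random() < 0.5) for v in rng.sample(range(n), k)]
            for _ in range(m)]

def fraction_satisfied(instance, P, assignment):
    """Fraction of constraints satisfied: each constraint applies the
    predicate P to the (possibly negated) values of its k variables."""
    def lit(v, neg):
        return assignment[v] ^ neg  # flip the bit if the literal is negated
    sat = sum(bool(P(tuple(lit(v, neg) for v, neg in c))) for c in instance)
    return sat / len(instance)
```

For density Δ ≫ 1 a random instance is unsatisfiable with high probability, and the refutation problem is to certify this efficiently; the code above only samples instances and checks assignments, which is the easy direction.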

Oliveira, Igor C. 
STOC '17: "Addition Is Exponentially ..."
Addition Is Exponentially Harder Than Counting for Shallow Monotone Circuits
Xi Chen, Igor C. Oliveira, and Rocco A. Servedio (Columbia University, USA; Charles University in Prague, Czechia) Let Add_{k,N} denote the Boolean function which takes as input k strings of N bits each, representing k numbers a^{(1)},…,a^{(k)} in {0,1,…,2^{N}−1}, and outputs 1 if and only if a^{(1)} + ⋯ + a^{(k)} ≥ 2^{N}. Let MAJ_{t,n} denote a monotone unweighted threshold gate, i.e., the Boolean function which takes as input a single string x ∈ {0,1}^{n} and outputs 1 if and only if x_{1} + ⋯ + x_{n} ≥ t. The function Add_{k,N} may be viewed as a monotone function that performs addition, and MAJ_{t,n} may be viewed as a monotone gate that performs counting. We refer to circuits that are composed of MAJ gates as monotone majority circuits. The main result of this paper is an exponential lower bound on the size of bounded-depth monotone majority circuits that compute Add_{k,N}. More precisely, we show that for any constant d ≥ 2, any depth-d monotone majority circuit that computes Add_{d,N} must have size 2^{Ω(N^{1/d})}. As Add_{k,N} can be computed by a single monotone weighted threshold gate (that uses exponentially large weights), our lower bound implies that constant-depth monotone majority circuits require exponential size to simulate monotone weighted threshold gates. This answers a question posed by Goldmann and Karpinski (STOC’93) and recently restated by Håstad (2010, 2014). We also show that our lower bound is essentially best possible, by constructing a depth-d, size 2^{O(N^{1/d})} monotone majority circuit for Add_{d,N}. As a corollary of our lower bound, we significantly strengthen a classical theorem in circuit complexity due to Ajtai and Gurevich (JACM’87). They exhibited a monotone function that is in AC^{0} but requires superpolynomial size for any constant-depth monotone circuit composed of unbounded fan-in AND and OR gates. 
We describe a monotone function that is in depth-3 AC^{0} but requires exponential size monotone circuits of any constant depth, even if the circuits are composed of MAJ gates. @InProceedings{STOC17p1232, author = {Xi Chen and Igor C. Oliveira and Rocco A. Servedio}, title = {Addition Is Exponentially Harder Than Counting for Shallow Monotone Circuits}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1232-1245}, doi = {}, year = {2017}, } STOC '17: "Pseudodeterministic Constructions ..." Pseudodeterministic Constructions in Subexponential Time Igor C. Oliveira and Rahul Santhanam (Charles University in Prague, Czechia; University of Oxford, UK) We study pseudodeterministic constructions, i.e., randomized algorithms which output the same solution on most computation paths. We establish unconditionally that there is an infinite sequence {p_{n}} of primes and a randomized algorithm A running in expected subexponential time such that for each n, on input 1^{p_{n}}, A outputs p_{n} with probability 1. In other words, our result provides a pseudodeterministic construction of primes in subexponential time which works infinitely often. This result follows from a more general theorem about pseudodeterministic constructions. A property Q ⊆ {0,1}^{*} is γ-dense if for large enough n, |Q ∩ {0,1}^{n}| ≥ γ 2^{n}. We show that for each c > 0 at least one of the following holds: (1) There is a pseudodeterministic polynomial time construction of a family {H_{n}} of sets, H_{n} ⊆ {0,1}^{n}, such that for each (1/n^{c})-dense property Q ∈ DTIME(n^{c}) and every large enough n, H_{n} ∩ Q ≠ ∅; or (2) There is a deterministic subexponential time construction of a family {H′_{n}} of sets, H′_{n} ⊆ {0,1}^{n}, such that for each (1/n^{c})-dense property Q ∈ DTIME(n^{c}) and for infinitely many values of n, H′_{n} ∩ Q ≠ ∅. We provide further algorithmic applications that might be of independent interest. 
Perhaps intriguingly, while our main results are unconditional, they have a non-constructive element, arising from a sequence of applications of the hardness versus randomness paradigm. @InProceedings{STOC17p665, author = {Igor C. Oliveira and Rahul Santhanam}, title = {Pseudodeterministic Constructions in Subexponential Time}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {665-677}, doi = {}, year = {2017}, } 
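The two functions contrasted in the Chen-Oliveira-Servedio abstract above, Add_{k,N} and MAJ_{t,n}, are simple to state directly. A minimal sketch (function names and the MSB-first bit-order convention are our own choices):

```python
def add_kN(strings, N):
    """Add_{k,N}: given k binary strings of N bits each (most significant
    bit first), output 1 iff the sum of the represented numbers is >= 2**N."""
    total = sum(int("".join(map(str, s)), 2) for s in strings)
    return 1 if total >= 2 ** N else 0

def maj_tn(t, x):
    """MAJ_{t,n}: monotone unweighted threshold gate -- outputs 1 iff at
    least t of the n input bits are set."""
    return 1 if sum(x) >= t else 0
```

Both functions are monotone; the paper's lower bound says that simulating the weighted-threshold view of Add_{k,N} by constant-depth circuits of unweighted MAJ gates forces exponential size.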

Oliveira, Rafael 
STOC '17: "Algorithmic and Optimization ..."
Algorithmic and Optimization Aspects of Brascamp-Lieb Inequalities, via Operator Scaling
Ankit Garg, Leonid Gurvits, Rafael Oliveira, and Avi Wigderson (Microsoft Research, USA; City College of New York, USA; Princeton University, USA; IAS, USA) The celebrated Brascamp-Lieb (BL) inequalities [BL76, Lie90], and their reverse form of Barthe [Bar98], are an important mathematical tool, unifying and generalizing numerous inequalities in analysis, convex geometry and information theory, with many used in computer science. While their structural theory is very well understood, far less is known about computing their main parameters below (which we later define). Prior to this work, the best known algorithms for any of these optimization tasks required at least exponential time. In this work, we give polynomial time algorithms to compute: (1) Feasibility of BL-datum, (2) Optimal BL constant, (3) Weak separation oracle for BL-polytopes. What is particularly exciting about this progress, beyond the better understanding of BL inequalities, is that the objects above naturally encode rich families of optimization problems which had no prior efficient algorithms. In particular, the BL-constants (which we efficiently compute) are solutions to non-convex optimization problems, and the BL-polytopes (for which we provide efficient membership and separation oracles) are linear programs with exponentially many facets. Thus we hope that new combinatorial optimization problems can be solved via reductions to the ones above, and make modest initial steps in exploring this possibility. Our algorithms are obtained by a simple efficient reduction of a given BL-datum to an instance of the Operator Scaling problem defined by [Gur04]. To obtain the results above, we utilize the two (very recent and different) algorithms for the operator scaling problem [GGOW16, IQS15a]. Our reduction implies algorithmic versions of many of the known structural results on BL-inequalities, and in some cases provide proofs that are different or simpler than existing ones. 
Further, the analytic properties of the [GGOW16] algorithm provide new, effective bounds on the magnitude and continuity of BL-constants, with applications to nonlinear versions of BL-inequalities; prior work relied on compactness, and thus provided no bounds. On a higher level, our application of the operator scaling algorithm to BL-inequalities further connects analysis and optimization with the diverse mathematical areas used so far to motivate and solve the operator scaling problem, which include commutative invariant theory, non-commutative algebra, computational complexity and quantum information theory. @InProceedings{STOC17p397, author = {Ankit Garg and Leonid Gurvits and Rafael Oliveira and Avi Wigderson}, title = {Algorithmic and Optimization Aspects of Brascamp-Lieb Inequalities, via Operator Scaling}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {397-409}, doi = {}, year = {2017}, } 
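The operator scaling problem that the abstract above reduces to generalizes classical matrix (Sinkhorn) scaling, its commutative special case: alternately normalize the rows and columns of a positive matrix until it is approximately doubly stochastic. A minimal sketch of that commutative analogue, for intuition only (this is not the [GGOW16] or [IQS15a] algorithm, and the function name is our own):

```python
def sinkhorn(A, iters=200):
    """Sinkhorn iteration: alternately normalize the rows and then the
    columns of a positive square matrix A (list of lists of floats).
    Converges toward a doubly stochastic matrix."""
    n = len(A)
    B = [row[:] for row in A]
    for _ in range(iters):
        for i in range(n):                        # row normalization
            s = sum(B[i])
            B[i] = [v / s for v in B[i]]
        for j in range(n):                        # column normalization
            s = sum(B[i][j] for i in range(n))
            for i in range(n):
                B[i][j] /= s
    return B
```

In the operator setting the rows/columns are replaced by a completely positive map and its dual, which is what makes the BL-constant computation non-commutative.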

Olver, Neil 
STOC '17: "A Simpler and Faster Strongly ..."
A Simpler and Faster Strongly Polynomial Algorithm for Generalized Flow Maximization
Neil Olver and László A. Végh (VU University Amsterdam, Netherlands; CWI, Netherlands; London School of Economics, UK) We present a new strongly polynomial algorithm for generalized flow maximization. The first strongly polynomial algorithm for this problem was given very recently by Végh; our new algorithm is much simpler, and much faster. The complexity bound O((m + n log n)mn log(n^{2}/m)) improves on the previous estimate obtained by Végh by almost a factor O(n^{2}). Even for small numerical parameter values, our algorithm is essentially as fast as the best weakly polynomial algorithms. The key new technical idea is relaxing primal feasibility conditions. This allows us to work almost exclusively with integral flows, in contrast to all previous algorithms. @InProceedings{STOC17p100, author = {Neil Olver and László A. Végh}, title = {A Simpler and Faster Strongly Polynomial Algorithm for Generalized Flow Maximization}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {100-111}, doi = {}, year = {2017}, } 

Pagh, Rasmus 
STOC '17: "Set Similarity Search Beyond ..."
Set Similarity Search Beyond MinHash
Tobias Christiani and Rasmus Pagh (IT University of Copenhagen, Denmark) We consider the problem of approximate set similarity search under Braun-Blanquet similarity B(x, y) = |x ∩ y| / max(|x|, |y|). The (b_{1}, b_{2})-approximate Braun-Blanquet similarity search problem is to preprocess a collection of sets P such that, given a query set q, if there exists x ∈ P with B(q, x) ≥ b_{1}, then we can efficiently return x′ ∈ P with B(q, x′) > b_{2}. We present a simple data structure that solves this problem with space usage O(n^{1+ρ} log n + ∑_{x ∈ P}|x|) and query time O(|q| n^{ρ} log n) where n = |P| and ρ = log(1/b_{1})/log(1/b_{2}). Making use of existing lower bounds for locality-sensitive hashing by O’Donnell et al. (TOCT 2014) we show that this value of ρ is tight across the parameter space, i.e., for every choice of constants 0 < b_{2} < b_{1} < 1. In the case where all sets have the same size our solution strictly improves upon the value of ρ that can be obtained through the use of state-of-the-art data-independent techniques in the Indyk-Motwani locality-sensitive hashing framework (STOC 1998) such as Broder’s MinHash (CCS 1997) for Jaccard similarity and Andoni et al.’s cross-polytope LSH (NIPS 2015) for cosine similarity. Surprisingly, even though our solution is data-independent, for a large part of the parameter space we outperform the currently best data-dependent method by Andoni and Razenshteyn (STOC 2015). @InProceedings{STOC17p1094, author = {Tobias Christiani and Rasmus Pagh}, title = {Set Similarity Search Beyond MinHash}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1094-1107}, doi = {}, year = {2017}, } 
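The Braun-Blanquet similarity from the abstract above is a one-liner on finite sets; a minimal sketch, with a naive linear-scan search that captures only the problem's semantics (the real data structure is hashing-based; function names are our own):

```python
def braun_blanquet(x, y):
    """B(x, y) = |x intersect y| / max(|x|, |y|) for finite sets x, y."""
    if not x or not y:
        return 0.0
    return len(x & y) / max(len(x), len(y))

def naive_search(P, q, b2):
    """Brute-force stand-in for the (b1, b2)-approximate search problem:
    return any stored set whose similarity to the query exceeds b2."""
    for x in P:
        if braun_blanquet(q, x) > b2:
            return x
    return None
```

Note that B(x, y) equals the Jaccard similarity when |x| = |y|, which is why the equal-size case admits a direct comparison with MinHash.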

Pak, Igor 
STOC '17: "Complexity of Short Presburger ..."
Complexity of Short Presburger Arithmetic
Danny Nguyen and Igor Pak (University of California at Los Angeles, USA) We study complexity of short sentences in Presburger arithmetic (Short-PA). Here by “short” we mean sentences with a bounded number of variables, quantifiers, inequalities and Boolean operations; the input consists only of the integers involved in the inequalities. We prove that assuming Kannan’s partition can be found in polynomial time, the satisfiability of Short-PA sentences can be decided in polynomial time. Furthermore, under the same assumption, we show that the number of satisfying assignments of short Presburger sentences can also be computed in polynomial time. @InProceedings{STOC17p812, author = {Danny Nguyen and Igor Pak}, title = {Complexity of Short Presburger Arithmetic}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {812-820}, doi = {}, year = {2017}, } 

Pandurangan, Gopal 
STOC '17: "A Time and Message-Optimal ..."
A Time and Message-Optimal Distributed Algorithm for Minimum Spanning Trees
Gopal Pandurangan, Peter Robinson, and Michele Scquizzato (University of Houston, USA; Royal Holloway University of London, UK) This paper presents a randomized (Las Vegas) distributed algorithm that constructs a minimum spanning tree (MST) in weighted networks with optimal (up to polylogarithmic factors) time and message complexity. This algorithm runs in Õ(D + √n) time and exchanges Õ(m) messages (both with high probability), where n is the number of nodes of the network, D is the diameter, and m is the number of edges. This is the first distributed MST algorithm that matches simultaneously the time lower bound of Ω(D + √n) [Elkin, SIAM J. Comput. 2006] and the message lower bound of Ω(m) [Kutten et al., J. ACM 2015], which both apply to randomized Monte Carlo algorithms. The prior time and message lower bounds are derived using two completely different graph constructions; the existing lower bound construction that shows one lower bound does not work for the other. To complement our algorithm, we present a new lower bound graph construction for which any distributed MST algorithm requires both Ω(D + √n) rounds and Ω(m) messages. @InProceedings{STOC17p743, author = {Gopal Pandurangan and Peter Robinson and Michele Scquizzato}, title = {A Time and Message-Optimal Distributed Algorithm for Minimum Spanning Trees}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {743-756}, doi = {}, year = {2017}, } 

Panigrahi, Debmalya 
STOC '17: "Online Service with Delay ..."
Online Service with Delay
Yossi Azar, Arun Ganesh, Rong Ge, and Debmalya Panigrahi (Tel Aviv University, Israel; Duke University, USA) In this paper, we introduce the online service with delay problem. In this problem, there are n points in a metric space that issue service requests over time, and a server that serves these requests. The goal is to minimize the sum of distance traveled by the server and the total delay (or a penalty function thereof) in serving the requests. This problem models the fundamental tradeoff between batching requests to improve locality and reducing delay to improve response time, a tradeoff that has many applications in operations management, operating systems, logistics, supply chain management, and scheduling. Our main result is to show a polylogarithmic competitive ratio for the online service with delay problem. This result is obtained by an algorithm that we call the preemptive service algorithm. The salient feature of this algorithm is a process called preemptive service, which uses a novel combination of (recursive) time forwarding and spatial exploration on a metric space. We also generalize our results to k > 1 servers, and obtain stronger results for special metrics such as uniform and star metrics that correspond to (weighted) paging problems. @InProceedings{STOC17p551, author = {Yossi Azar and Arun Ganesh and Rong Ge and Debmalya Panigrahi}, title = {Online Service with Delay}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {551-563}, doi = {}, year = {2017}, } STOC '17: "Online and Dynamic Algorithms ..." Online and Dynamic Algorithms for Set Cover Anupam Gupta, Ravishankar Krishnaswamy, Amit Kumar, and Debmalya Panigrahi (Carnegie Mellon University, USA; Microsoft Research, India; IIT Delhi, India; Duke University, USA) In this paper, we give new results for the set cover problem in the fully dynamic model. In this model, the set of “active” elements to be covered changes over time. 
The goal is to maintain a near-optimal solution for the currently active elements, while making few changes in each time step. This model is popular in both dynamic and online algorithms: in the former, the goal is to minimize the update time of the solution, while in the latter, the recourse (number of changes) is bounded. We present generic techniques for the dynamic set cover problem inspired by the classic greedy and primal-dual offline algorithms for set cover. The former leads to a competitive ratio of O(log n_{t}), where n_{t} is the number of currently active elements at time step t, while the latter yields competitive ratios dependent on f_{t}, the maximum number of sets that a currently active element belongs to. We demonstrate that these techniques are useful for obtaining tight results in both settings: update time bounds and limited recourse, exhibiting algorithmic techniques common to these two parallel threads of research. @InProceedings{STOC17p537, author = {Anupam Gupta and Ravishankar Krishnaswamy and Amit Kumar and Debmalya Panigrahi}, title = {Online and Dynamic Algorithms for Set Cover}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {537-550}, doi = {}, year = {2017}, } 
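The classic offline greedy algorithm that inspires the first dynamic technique in the abstract above can be sketched in a few lines (an illustrative implementation of the textbook algorithm, not the paper's dynamic data structure; the function name and interface are our own):

```python
def greedy_set_cover(universe, sets):
    """Classic greedy set cover: repeatedly pick the set covering the
    most still-uncovered elements.  Returns the indices of the chosen
    sets; achieves an O(log n) approximation ratio."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        i, best = max(enumerate(sets),
                      key=lambda p: len(p[1] & uncovered))
        if not (best & uncovered):
            raise ValueError("universe is not coverable by the given sets")
        chosen.append(i)
        uncovered -= best
    return chosen
```

The dynamic problem is to maintain a solution of comparable quality as elements arrive and depart, with few changes (recourse) or low update time per step.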

Panolan, Fahad 
STOC '17: "Lossy Kernelization ..."
Lossy Kernelization
Daniel Lokshtanov, Fahad Panolan, M. S. Ramanujan, and Saket Saurabh (University of Bergen, Norway; Vienna University of Technology, Austria; Institute of Mathematical Sciences, India) In this paper we propose a new framework for analyzing the performance of preprocessing algorithms. Our framework builds on the notion of kernelization from parameterized complexity. However, as opposed to the original notion of kernelization, our definitions combine well with approximation algorithms and heuristics. The key new definition is that of a polynomial size α-approximate kernel. Loosely speaking, a polynomial size α-approximate kernel is a polynomial time preprocessing algorithm that takes as input an instance (I, k) to a parameterized problem, and outputs another instance (I′, k′) to the same problem, such that |I′| + k′ ≤ k^{O(1)}. Additionally, for every c ≥ 1, a c-approximate solution s′ to the preprocessed instance (I′, k′) can be turned in polynomial time into a (c · α)-approximate solution s to the original instance (I, k). Amongst our main technical contributions are α-approximate kernels of polynomial size for three problems, namely Connected Vertex Cover, Disjoint Cycle Packing and Disjoint Factors. These problems are known not to admit any polynomial size kernels unless NP ⊆ coNP/Poly. Our approximate kernels simultaneously beat both the lower bounds on the (normal) kernel size, and the hardness of approximation lower bounds for all three problems. On the negative side we prove that Longest Path parameterized by the length of the path and Set Cover parameterized by the universe size do not admit even an α-approximate kernel of polynomial size, for any α ≥ 1, unless NP ⊆ coNP/Poly. In order to prove this lower bound we need to combine in a non-trivial way the techniques used for showing kernelization lower bounds with the methods for showing hardness of approximation. @InProceedings{STOC17p224, author = {Daniel Lokshtanov and Fahad Panolan and M. S. 
Ramanujan and Saket Saurabh}, title = {Lossy Kernelization}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {224-237}, doi = {}, year = {2017}, } 

Peebles, John 
STOC '17: "Sampling Random Spanning Trees ..."
Sampling Random Spanning Trees Faster Than Matrix Multiplication
David Durfee, Rasmus Kyng, John Peebles, Anup B. Rao, and Sushant Sachdeva (Georgia Institute of Technology, USA; Yale University, USA; Massachusetts Institute of Technology, USA; Google, USA) We present an algorithm that, with high probability, generates a random spanning tree from an edge-weighted undirected graph in Õ(n^{5/3} m^{1/3}) time. The tree is sampled from a distribution where the probability of each tree is proportional to the product of its edge weights. This improves upon the previous best algorithm due to Colbourn et al. that runs in matrix multiplication time, O(n^{ω}). For the special case of unweighted graphs, this improves upon the best previously known running time of Õ(min{n^{ω}, m√n, m^{4/3}}) for m ≫ n^{7/4} (Colbourn et al. ’96, Kelner-Madry ’09, Madry et al. ’15). The effective resistance metric is essential to our algorithm, as in the work of Madry et al., but we eschew determinant-based and random walk-based techniques used by previous algorithms. Instead, our algorithm is based on Gaussian elimination, and the fact that effective resistance is preserved in the graph resulting from eliminating a subset of vertices (called a Schur complement). As part of our algorithm, we show how to compute approximate effective resistances for a set S of vertex pairs via approximate Schur complements in Õ(m + (n + |S|)є^{−2}) time, without using the Johnson-Lindenstrauss lemma which requires Õ(min{(m + |S|)є^{−2}, m + nє^{−4} + |S|є^{−2}}) time. We combine this approximation procedure with an error correction procedure for handling edges where our estimate isn’t sufficiently accurate. @InProceedings{STOC17p730, author = {David Durfee and Rasmus Kyng and John Peebles and Anup B. Rao and Sushant Sachdeva}, title = {Sampling Random Spanning Trees Faster Than Matrix Multiplication}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {730-742}, doi = {}, year = {2017}, } STOC '17: "Almost-Linear-Time Algorithms ..." 
Almost-Linear-Time Algorithms for Markov Chains and New Spectral Primitives for Directed Graphs Michael B. Cohen, Jonathan Kelner, John Peebles, Richard Peng, Anup B. Rao, Aaron Sidford, and Adrian Vladu (Massachusetts Institute of Technology, USA; Georgia Institute of Technology, USA; Stanford University, USA) In this paper, we begin to address the longstanding algorithmic gap between general and reversible Markov chains. We develop directed analogues of several spectral graph-theoretic tools that had previously been available only in the undirected setting, and for which it was not clear that directed versions even existed. In particular, we provide a notion of approximation for directed graphs, prove sparsifiers under this notion always exist, and show how to construct them in almost linear time. Using this notion of approximation, we design the first almost-linear-time directed Laplacian system solver, and, by leveraging the recent framework of [Cohen-Kelner-Peebles-Peng-Sidford-Vladu, FOCS’16], we also obtain almost-linear-time algorithms for computing the stationary distribution of a Markov chain, computing expected commute times in a directed graph, and more. For each problem, our algorithms improve the previous best running times of O((nm^{3/4} + n^{2/3} m) log^{O(1)} (n κ є^{−1})) to O((m + n·2^{O(√(log n log log n))}) log^{O(1)} (n κ є^{−1})) where n is the number of vertices in the graph, m is the number of edges, κ is a natural condition number associated with the problem, and є is the desired accuracy. We hope these results open the door for further studies into directed spectral graph theory, and that they will serve as a stepping stone for designing a new generation of fast algorithms for directed graphs. @InProceedings{STOC17p410, author = {Michael B. Cohen and Jonathan Kelner and John Peebles and Richard Peng and Anup B. 
Rao and Aaron Sidford and Adrian Vladu}, title = {Almost-Linear-Time Algorithms for Markov Chains and New Spectral Primitives for Directed Graphs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {410-419}, doi = {}, year = {2017}, } 
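One of the core computations the abstract above accelerates, the stationary distribution of a (possibly non-reversible) Markov chain, can be stated very simply by power iteration. This is the naive dense baseline, not the paper's almost-linear-time solver; the function name and interface are our own:

```python
def stationary(P, iters=1000):
    """Approximate the stationary distribution pi of a row-stochastic
    transition matrix P (list of lists) by power iteration: repeatedly
    apply pi <- pi P starting from the uniform distribution.  Converges
    for irreducible aperiodic chains."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi
```

Each iteration costs O(n^2) on a dense matrix (O(m) on a sparse one), and the number of iterations needed depends on the chain's mixing; the directed-Laplacian approach avoids this dependence on mixing time.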

Peikert, Chris 
STOC '17: "Pseudorandomness of Ring-LWE ..."
Pseudorandomness of Ring-LWE for Any Ring and Modulus
Chris Peikert, Oded Regev, and Noah Stephens-Davidowitz (University of Michigan, USA; New York University, USA) We give a polynomial-time quantum reduction from worst-case (ideal) lattice problems directly to decision (Ring-)LWE. This extends to decision all the worst-case hardness results that were previously known for the search version, for the same or even better parameters and with no algebraic restrictions on the modulus or number field. Indeed, our reduction is the first that works for decision Ring-LWE with any number field and any modulus. @InProceedings{STOC17p461, author = {Chris Peikert and Oded Regev and Noah Stephens-Davidowitz}, title = {Pseudorandomness of Ring-LWE for Any Ring and Modulus}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {461-473}, doi = {}, year = {2017}, } 

Peng, Richard 
STOC '17: "Almost-Linear-Time Algorithms ..."
Almost-Linear-Time Algorithms for Markov Chains and New Spectral Primitives for Directed Graphs
Michael B. Cohen, Jonathan Kelner, John Peebles, Richard Peng, Anup B. Rao, Aaron Sidford, and Adrian Vladu (Massachusetts Institute of Technology, USA; Georgia Institute of Technology, USA; Stanford University, USA) In this paper, we begin to address the longstanding algorithmic gap between general and reversible Markov chains. We develop directed analogues of several spectral graph-theoretic tools that had previously been available only in the undirected setting, and for which it was not clear that directed versions even existed. In particular, we provide a notion of approximation for directed graphs, prove sparsifiers under this notion always exist, and show how to construct them in almost linear time. Using this notion of approximation, we design the first almost-linear-time directed Laplacian system solver, and, by leveraging the recent framework of [Cohen-Kelner-Peebles-Peng-Sidford-Vladu, FOCS’16], we also obtain almost-linear-time algorithms for computing the stationary distribution of a Markov chain, computing expected commute times in a directed graph, and more. For each problem, our algorithms improve the previous best running times of O((nm^{3/4} + n^{2/3} m) log^{O(1)} (n κ є^{−1})) to O((m + n·2^{O(√(log n log log n))}) log^{O(1)} (n κ є^{−1})) where n is the number of vertices in the graph, m is the number of edges, κ is a natural condition number associated with the problem, and є is the desired accuracy. We hope these results open the door for further studies into directed spectral graph theory, and that they will serve as a stepping stone for designing a new generation of fast algorithms for directed graphs. @InProceedings{STOC17p410, author = {Michael B. Cohen and Jonathan Kelner and John Peebles and Richard Peng and Anup B. Rao and Aaron Sidford and Adrian Vladu}, title = {Almost-Linear-Time Algorithms for Markov Chains and New Spectral Primitives for Directed Graphs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {410-419}, doi = {}, year = {2017}, } 

Peres, Yuval 
STOC '17: "Trace Reconstruction with ..."
Trace Reconstruction with exp(O(n^{1/3})) Samples
Fedor Nazarov and Yuval Peres (Kent State University, USA; Microsoft Research, USA) In the trace reconstruction problem, an unknown bit string x ∈ {0,1}^{n} is observed through the deletion channel, which deletes each bit of x with some constant probability q, yielding a contracted string x̃. How many independent copies of x̃ are needed to reconstruct x with high probability? Prior to this work, the best upper bound, due to Holenstein, Mitzenmacher, Panigrahy, and Wieder (2008), was exp(O(n^{1/2})). We improve this bound to exp(O(n^{1/3})) using statistics of individual bits in the output and show that this bound is sharp in the restricted model where this is the only information used. Our method, that uses elementary complex analysis, can also handle insertions. Similar results were obtained independently and simultaneously by Anindya De, Ryan O’Donnell and Rocco Servedio. @InProceedings{STOC17p1042, author = {Fedor Nazarov and Yuval Peres}, title = {Trace Reconstruction with exp(O(n^{1/3})) Samples}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1042-1046}, doi = {}, year = {2017}, } STOC '17: "Local Max-Cut in Smoothed ..." Local Max-Cut in Smoothed Polynomial Time Omer Angel, Sébastien Bubeck, Yuval Peres, and Fan Wei (University of British Columbia, Canada; Microsoft Research, USA; Stanford University, USA) In 1988, Johnson, Papadimitriou and Yannakakis wrote that “Practically all the empirical evidence would lead us to conclude that finding locally optimal solutions is much easier than solving NP-hard problems”. Since then the empirical evidence has continued to amass, but formal proofs of this phenomenon have remained elusive. A canonical (and indeed complete) example is the local max-cut problem, for which no polynomial time method is known. 
In a breakthrough paper, Etscheid and Röglin proved that the smoothed complexity of local max-cut is quasi-polynomial, i.e., if arbitrary bounded weights are randomly perturbed, a local maximum can be found in φ n^{O(log n)} steps where φ is an upper bound on the random edge weight density. In this paper we prove smoothed polynomial complexity for local max-cut, thus confirming that finding local optima for max-cut is much easier than solving it. @InProceedings{STOC17p429, author = {Omer Angel and Sébastien Bubeck and Yuval Peres and Fan Wei}, title = {Local Max-Cut in Smoothed Polynomial Time}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {429-437}, doi = {}, year = {2017}, } 
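The local max-cut problem analyzed above asks for a cut that no single-vertex move can improve; the FLIP local search whose step count the smoothed analysis bounds can be sketched directly (an illustrative implementation; the function name, edge-list encoding, and scan order are our own choices):

```python
def local_max_cut(n, edges, side=None):
    """FLIP local search for max-cut: while some vertex can switch
    sides and strictly increase the cut weight, switch it.
    edges: list of (u, v, w) triples.  Returns a locally optimal
    0/1 side assignment for the n vertices."""
    side = side or [0] * n
    improved = True
    while improved:
        improved = False
        for v in range(n):
            # gain of flipping v: incident uncut edges become cut (+w),
            # incident cut edges become uncut (-w)
            gain = sum(w if side[u] == side[x] else -w
                       for u, x, w in edges if v in (u, x))
            if gain > 0:
                side[v] ^= 1
                improved = True
    return side
```

Each flip strictly increases the cut weight, so the search terminates; the question the paper answers is how many flips occur when the weights are randomly perturbed.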

Perkins, Will 
STOC '17: "Information-Theoretic Thresholds ..."
Information-Theoretic Thresholds from the Cavity Method
Amin Coja-Oghlan, Florent Krzakala, Will Perkins, and Lenka Zdeborová (Goethe University Frankfurt, Germany; CNRS, France; PSL Research University, France; ENS, France; UPMC, France; University of Birmingham, UK; CEA, France; University of Paris-Saclay, France) Vindicating a sophisticated but non-rigorous physics approach called the cavity method, we establish a formula for the mutual information in statistical inference problems induced by random graphs. This general result implies the conjecture on the information-theoretic threshold in the disassortative stochastic block model [Decelle et al.: Phys. Rev. E (2011)] and allows us to pinpoint the exact condensation phase transition in random constraint satisfaction problems such as random graph coloring, thereby proving a conjecture from [Krzakala et al.: PNAS (2007)]. As a further application we establish the formula for the mutual information in Low-Density Generator Matrix codes as conjectured in [Montanari: IEEE Transactions on Information Theory (2005)]. The proofs provide a conceptual underpinning of the replica symmetric variant of the cavity method, and we expect that the approach will find many future applications. @InProceedings{STOC17p146, author = {Amin Coja-Oghlan and Florent Krzakala and Will Perkins and Lenka Zdeborová}, title = {Information-Theoretic Thresholds from the Cavity Method}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {146-157}, doi = {}, year = {2017}, } 

Pettie, Seth 
STOC '17: "Exponential Separations in ..."
Exponential Separations in the Energy Complexity of Leader Election
Yi-Jun Chang, Tsvi Kopelowitz, Seth Pettie, Ruosong Wang, and Wei Zhan (University of Michigan, USA; Tsinghua University, China) Energy is often the most constrained resource for battery-powered wireless devices and the lion’s share of energy is often spent on transceiver usage (sending/receiving packets), not on computation. In this paper we study the energy complexity of Leader Election and Approximate Counting in several models of wireless radio networks. It turns out that energy complexity is very sensitive to whether the devices can generate random bits and their ability to detect collisions. We consider four collision-detection models: Strong-CD (in which transmitters and listeners detect collisions), Sender-CD and Receiver-CD (in which only transmitters or only listeners detect collisions), and No-CD (in which no one detects collisions). The takeaway message of our results is quite surprising. For randomized Leader Election algorithms, there is an exponential gap between the energy complexity of Sender-CD and Receiver-CD: No-CD = Sender-CD ≫ Receiver-CD = Strong-CD, and for deterministic Leader Election algorithms, there is another exponential gap in energy complexity, but in the reverse direction: No-CD = Receiver-CD ≫ Sender-CD = Strong-CD. In particular, the randomized energy complexity of Leader Election is Θ(log^{*} n) in Sender-CD but Θ(log(log^{*} n)) in Receiver-CD, where n is the (unknown) number of devices. Its deterministic complexity is Θ(log N) in Receiver-CD but Θ(log log N) in Sender-CD, where N is the (known) size of the devices’ ID space. There is a tradeoff between time and energy. We give a new upper bound on the time-energy tradeoff curve for randomized Leader Election and Approximate Counting. A critical component of this algorithm is a new deterministic Leader Election algorithm for dense instances, when n = Θ(N), with inverse-Ackermann-type (O(α(N))) energy complexity. @InProceedings{STOC17p771, author = {Yi-Jun Chang and Tsvi Kopelowitz and Seth Pettie and Ruosong Wang and Wei Zhan}, title = {Exponential Separations in the Energy Complexity of Leader Election}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {771--783}, doi = {}, year = {2017}, } 

Pitassi, Toniann 
STOC '17: "Strongly Exponential Lower ..."
Strongly Exponential Lower Bounds for Monotone Computation
Toniann Pitassi and Robert Robere (University of Toronto, Canada) We prove size lower bounds of 2^{Ω(n)} for an explicit function in monotone NP in the following models of computation: monotone formulas, monotone switching networks, monotone span programs, and monotone comparator circuits, where n is the number of variables of the underlying function. Our lower bounds improve on the best previous bounds in each of these models, and are the best possible for any function up to constant factors in the exponent. Moreover, we give one unified proof that is short and fairly elementary. @InProceedings{STOC17p1246, author = {Toniann Pitassi and Robert Robere}, title = {Strongly Exponential Lower Bounds for Monotone Computation}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {12461255}, doi = {}, year = {2017}, } 

Poburinnaya, Oxana 
STOC '17: "Equivocating Yao: Constant-Round ..."
Equivocating Yao: Constant-Round Adaptively Secure Multiparty Computation in the Plain Model
Ran Canetti, Oxana Poburinnaya, and Muthuramakrishnan Venkitasubramaniam (Boston University, USA; Tel Aviv University, Israel; University of Rochester, USA) Yao's circuit garbling scheme is one of the basic building blocks of cryptographic protocol design. Originally designed to enable two-message, two-party secure computation, the scheme has been extended in many ways and has innumerable applications. Still, a basic question has remained open throughout the years: Can the scheme be extended to guarantee security in the face of an adversary that corrupts both parties, adaptively, as the computation proceeds? We provide a positive answer to this question. We define a new type of encryption, called functionally equivocal encryption (FEE), and show that when Yao's scheme is implemented with an FEE as the underlying encryption mechanism, it becomes secure against such adaptive adversaries. We then show how to implement FEE from any one-way function. Combining our scheme with non-committing encryption, we obtain the first two-message, two-party computation protocol, and the first constant-round multiparty computation protocol, in the plain model, that are secure against semi-honest adversaries who can adaptively corrupt all parties. A number of extensions and applications are described within. @InProceedings{STOC17p497, author = {Ran Canetti and Oxana Poburinnaya and Muthuramakrishnan Venkitasubramaniam}, title = {Equivocating Yao: Constant-Round Adaptively Secure Multiparty Computation in the Plain Model}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {497--509}, doi = {}, year = {2017}, } 

Raghavendra, Prasad 
STOC '17: "Strongly Refuting Random CSPs ..."
Strongly Refuting Random CSPs Below the Spectral Threshold
Prasad Raghavendra, Satish Rao, and Tselil Schramm (University of California at Berkeley, USA) Random constraint satisfaction problems (CSPs) are known to exhibit threshold phenomena: given a uniformly random instance of a CSP with n variables and m clauses, there is a value of m = Ω(n) beyond which the CSP will be unsatisfiable with high probability. Strong refutation is the problem of certifying that no variable assignment satisfies more than a constant fraction of clauses; this is the natural algorithmic problem in the unsatisfiable regime (when m/n = ω(1)). Intuitively, strong refutation should become easier as the clause density m/n grows, because the contradictions introduced by the random clauses become more locally apparent. For CSPs such as k-SAT and k-XOR, there is a longstanding gap between the clause density at which efficient strong refutation algorithms are known, m/n ≥ Õ(n^{k/2−1}), and the clause density at which instances become unsatisfiable with high probability, m/n = ω(1). In this paper, we give spectral and sum-of-squares algorithms for strongly refuting random k-XOR instances with clause density m/n ≥ Õ(n^{(k/2−1)(1−δ)}) in time exp(Õ(n^{δ})) or in Õ(n^{δ}) rounds of the sum-of-squares hierarchy, for any δ ∈ [0,1) and any integer k ≥ 3. Our algorithms provide a smooth transition between the clause density at which polynomial-time algorithms are known at δ = 0, and brute-force refutation at the satisfiability threshold when δ = 1. We also leverage our k-XOR results to obtain strong refutation algorithms for SAT (or any other Boolean CSP) at similar clause densities. Our algorithms match the known sum-of-squares lower bounds due to Grigoriev and Schoenebeck, up to logarithmic factors. 
@InProceedings{STOC17p121, author = {Prasad Raghavendra and Satish Rao and Tselil Schramm}, title = {Strongly Refuting Random CSPs Below the Spectral Threshold}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {121--131}, doi = {}, year = {2017}, } STOC '17: "Approximating Rectangles by ..." Approximating Rectangles by Juntas and Weakly-Exponential Lower Bounds for LP Relaxations of CSPs Pravesh K. Kothari, Raghu Meka, and Prasad Raghavendra (Princeton University, USA; IAS, USA; University of California at Los Angeles, USA; University of California at Berkeley, USA) We show that for constraint satisfaction problems (CSPs), subexponential size linear programming relaxations are as powerful as n^{Ω(1)} rounds of the Sherali-Adams linear programming hierarchy. As a corollary, we obtain subexponential size lower bounds for linear programming relaxations that beat random guessing for many CSPs such as MAX-CUT and MAX-3-SAT. This is a nearly-exponential improvement over previous results; previously, the best known lower bounds were quasipolynomial in n (Chan, Lee, Raghavendra, Steurer 2013). Our bounds are obtained by exploiting and extending the recent progress in communication complexity for “lifting” query lower bounds to communication problems. The main ingredient in our results is a new structural result on “high-entropy rectangles” that may be of independent interest in communication complexity. @InProceedings{STOC17p590, author = {Pravesh K. Kothari and Raghu Meka and Prasad Raghavendra}, title = {Approximating Rectangles by Juntas and Weakly-Exponential Lower Bounds for LP Relaxations of CSPs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {590--603}, doi = {}, year = {2017}, } 

Raja, S. 
STOC '17: "Randomized Polynomial Time ..."
Randomized Polynomial Time Identity Testing for Noncommutative Circuits
V. Arvind, Pushkar S. Joglekar, Partha Mukhopadhyay, and S. Raja (Institute of Mathematical Sciences, India; Vishwakarma Institute of Technology Pune, India; Chennai Mathematical Institute, India) In this paper we show that black-box polynomial identity testing for noncommutative polynomials f ∈ F⟨z_{1},z_{2},⋯,z_{n}⟩ of degree D and sparsity t can be done in randomized poly(n, log t, log D) time. As a consequence, given a circuit C of size s computing a polynomial f ∈ F⟨z_{1},z_{2},⋯,z_{n}⟩ with at most t nonzero monomials, testing if f is identically zero can be done by a randomized algorithm with running time polynomial in s, n, and log t. This makes significant progress on a question that has been open for over ten years. Our algorithm is based on automata-theoretic ideas that can efficiently isolate a monomial in the given polynomial. In particular, we carry out the monomial isolation using nondeterministic automata. In general, noncommutative circuits of size s can compute polynomials of degree exponential in s and number of monomials double-exponential in s. In this paper, we consider a natural class of homogeneous noncommutative circuits, which we call +-regular circuits, and give a white-box polynomial time deterministic polynomial identity test. These circuits can compute noncommutative polynomials with number of monomials double-exponential in the circuit size. Our algorithm combines some new structural results for +-regular circuits with known results for noncommutative ABP identity testing, the rank bound of commutative depth three identities, and the equivalence testing problem for words. Finally, we consider the black-box identity testing problem for depth three +-regular circuits and give a randomized polynomial time identity test. In particular, we show that if f ∈ F⟨Z⟩ is a nonzero noncommutative polynomial computed by a depth three +-regular circuit of size s, then f cannot be a polynomial identity for the matrix algebra M_{s}(F) when F is sufficiently large depending on the degree of f. @InProceedings{STOC17p831, author = {V. Arvind and Pushkar S. Joglekar and Partha Mukhopadhyay and S. Raja}, title = {Randomized Polynomial Time Identity Testing for Noncommutative Circuits}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {831--841}, doi = {}, year = {2017}, } 
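The closing remark, that a nonzero polynomial computed by such a circuit is not an identity of a large enough matrix algebra, is what makes substituting matrices a natural randomized test. A toy illustration of the underlying phenomenon (not the paper's automata-based algorithm): the commutator xy − yx vanishes at every pair of scalars but not on 2×2 matrices.

```python
# A nonzero *noncommutative* polynomial can vanish on all scalars yet be
# detected by matrix substitution: xy - yx is identically zero over a
# field, but not as a function on the 2x2 matrix algebra.
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_sub(A, B):
    return [[A[i][j] - B[i][j] for j in range(2)] for i in range(2)]

def commutator(A, B):
    return mat_sub(mat_mul(A, B), mat_mul(B, A))

# standard matrix units E_{12} and E_{21}
A = [[0, 1], [0, 0]]
B = [[0, 0], [1, 0]]
C = commutator(A, B)  # AB - BA is a nonzero matrix
```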

Ramanujan, M. S. 
STOC '17: "Lossy Kernelization ..."
Lossy Kernelization
Daniel Lokshtanov, Fahad Panolan, M. S. Ramanujan, and Saket Saurabh (University of Bergen, Norway; Vienna University of Technology, Austria; Institute of Mathematical Sciences, India) In this paper we propose a new framework for analyzing the performance of preprocessing algorithms. Our framework builds on the notion of kernelization from parameterized complexity. However, as opposed to the original notion of kernelization, our definitions combine well with approximation algorithms and heuristics. The key new definition is that of a polynomial size α-approximate kernel. Loosely speaking, a polynomial size α-approximate kernel is a polynomial time preprocessing algorithm that takes as input an instance (I, k) to a parameterized problem, and outputs another instance (I′, k′) to the same problem, such that |I′| + k′ ≤ k^{O(1)}. Additionally, for every c ≥ 1, a c-approximate solution s′ to the preprocessed instance (I′, k′) can be turned in polynomial time into a (c · α)-approximate solution s to the original instance (I, k). Amongst our main technical contributions are α-approximate kernels of polynomial size for three problems, namely Connected Vertex Cover, Disjoint Cycle Packing and Disjoint Factors. These problems are known not to admit any polynomial size kernels unless NP ⊆ coNP/Poly. Our approximate kernels simultaneously beat both the lower bounds on the (normal) kernel size, and the hardness of approximation lower bounds for all three problems. On the negative side we prove that Longest Path parameterized by the length of the path and Set Cover parameterized by the universe size do not admit even an α-approximate kernel of polynomial size, for any α ≥ 1, unless NP ⊆ coNP/Poly. In order to prove this lower bound we need to combine in a nontrivial way the techniques used for showing kernelization lower bounds with the methods for showing hardness of approximation. @InProceedings{STOC17p224, author = {Daniel Lokshtanov and Fahad Panolan and M. S. Ramanujan and Saket Saurabh}, title = {Lossy Kernelization}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {224--237}, doi = {}, year = {2017}, } 

Rao, Anup B. 
STOC '17: "Sampling Random Spanning Trees ..."
Sampling Random Spanning Trees Faster Than Matrix Multiplication
David Durfee, Rasmus Kyng, John Peebles, Anup B. Rao, and Sushant Sachdeva (Georgia Institute of Technology, USA; Yale University, USA; Massachusetts Institute of Technology, USA; Google, USA) We present an algorithm that, with high probability, generates a random spanning tree from an edge-weighted undirected graph in Õ(n^{5/3} m^{1/3}) time. The tree is sampled from a distribution where the probability of each tree is proportional to the product of its edge weights. This improves upon the previous best algorithm due to Colbourn et al. that runs in matrix multiplication time, O(n^{ω}). For the special case of unweighted graphs, this improves upon the best previously known running time of Õ(min{n^{ω}, m√n, m^{4/3}}) for m ≫ n^{7/4} (Colbourn et al. ’96, Kelner-Madry ’09, Madry et al. ’15). The effective resistance metric is essential to our algorithm, as in the work of Madry et al., but we eschew determinant-based and random-walk-based techniques used by previous algorithms. Instead, our algorithm is based on Gaussian elimination, and the fact that effective resistance is preserved in the graph resulting from eliminating a subset of vertices (called a Schur complement). As part of our algorithm, we show how to compute approximate effective resistances for a set S of vertex pairs via approximate Schur complements in Õ(m + (n + |S|) є^{−2}) time, without using the Johnson-Lindenstrauss lemma, which requires Õ(min{(m + |S|) є^{−2}, m + n є^{−4} + |S| є^{−2}}) time. We combine this approximation procedure with an error correction procedure for handling edges where our estimate isn’t sufficiently accurate. @InProceedings{STOC17p730, author = {David Durfee and Rasmus Kyng and John Peebles and Anup B. Rao and Sushant Sachdeva}, title = {Sampling Random Spanning Trees Faster Than Matrix Multiplication}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {730--742}, doi = {}, year = {2017}, } STOC '17: "Almost-Linear-Time Algorithms ..." 
Almost-Linear-Time Algorithms for Markov Chains and New Spectral Primitives for Directed Graphs Michael B. Cohen, Jonathan Kelner, John Peebles, Richard Peng, Anup B. Rao, Aaron Sidford, and Adrian Vladu (Massachusetts Institute of Technology, USA; Georgia Institute of Technology, USA; Stanford University, USA) In this paper, we begin to address the longstanding algorithmic gap between general and reversible Markov chains. We develop directed analogues of several spectral graph-theoretic tools that had previously been available only in the undirected setting, and for which it was not clear that directed versions even existed. In particular, we provide a notion of approximation for directed graphs, prove sparsifiers under this notion always exist, and show how to construct them in almost linear time. Using this notion of approximation, we design the first almost-linear-time directed Laplacian system solver, and, by leveraging the recent framework of [Cohen-Kelner-Peebles-Peng-Sidford-Vladu, FOCS’16], we also obtain almost-linear-time algorithms for computing the stationary distribution of a Markov chain, computing expected commute times in a directed graph, and more. For each problem, our algorithms improve the previous best running times of O((nm^{3/4} + n^{2/3} m) log^{O(1)} (n κ є^{−1})) to O((m + n · 2^{O(√(log n log log n))}) log^{O(1)} (n κ є^{−1})), where n is the number of vertices in the graph, m is the number of edges, κ is a natural condition number associated with the problem, and є is the desired accuracy. We hope these results open the door for further studies into directed spectral graph theory, and that they will serve as a stepping stone for designing a new generation of fast algorithms for directed graphs. @InProceedings{STOC17p410, author = {Michael B. Cohen and Jonathan Kelner and John Peebles and Richard Peng and Anup B. Rao and Aaron Sidford and Adrian Vladu}, title = {Almost-Linear-Time Algorithms for Markov Chains and New Spectral Primitives for Directed Graphs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {410--419}, doi = {}, year = {2017}, } 
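The Schur-complement fact the spanning-tree algorithm above rests on, that Gaussian elimination of a vertex from a graph Laplacian preserves effective resistances among the remaining vertices, is easy to check on a toy instance. A small sketch, where the unit-weight path graph and exact rational arithmetic are illustrative choices, not the authors' code:

```python
from fractions import Fraction

# Laplacian of the unit-weight path 0 - 1 - 2. The effective resistance
# between vertices 0 and 2 is 1 + 1 = 2 (two unit resistors in series).
L = [[Fraction(1), Fraction(-1), Fraction(0)],
     [Fraction(-1), Fraction(2), Fraction(-1)],
     [Fraction(0), Fraction(-1), Fraction(1)]]

def eliminate(L, k):
    """One step of Gaussian elimination on row/column k: the Schur
    complement of L onto the remaining indices, itself a Laplacian."""
    rest = [i for i in range(len(L)) if i != k]
    return [[L[i][j] - L[i][k] * L[k][j] / L[k][k] for j in rest]
            for i in rest]

S = eliminate(L, 1)
# S is the Laplacian of a single edge of weight 1/2 between the old
# endpoints, so the effective resistance 1/(1/2) = 2 is preserved.
```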

Rao, Satish 
STOC '17: "Strongly Refuting Random CSPs ..."
Strongly Refuting Random CSPs Below the Spectral Threshold
Prasad Raghavendra, Satish Rao, and Tselil Schramm (University of California at Berkeley, USA) Random constraint satisfaction problems (CSPs) are known to exhibit threshold phenomena: given a uniformly random instance of a CSP with n variables and m clauses, there is a value of m = Ω(n) beyond which the CSP will be unsatisfiable with high probability. Strong refutation is the problem of certifying that no variable assignment satisfies more than a constant fraction of clauses; this is the natural algorithmic problem in the unsatisfiable regime (when m/n = ω(1)). Intuitively, strong refutation should become easier as the clause density m/n grows, because the contradictions introduced by the random clauses become more locally apparent. For CSPs such as k-SAT and k-XOR, there is a longstanding gap between the clause density at which efficient strong refutation algorithms are known, m/n ≥ Õ(n^{k/2−1}), and the clause density at which instances become unsatisfiable with high probability, m/n = ω(1). In this paper, we give spectral and sum-of-squares algorithms for strongly refuting random k-XOR instances with clause density m/n ≥ Õ(n^{(k/2−1)(1−δ)}) in time exp(Õ(n^{δ})) or in Õ(n^{δ}) rounds of the sum-of-squares hierarchy, for any δ ∈ [0,1) and any integer k ≥ 3. Our algorithms provide a smooth transition between the clause density at which polynomial-time algorithms are known at δ = 0, and brute-force refutation at the satisfiability threshold when δ = 1. We also leverage our k-XOR results to obtain strong refutation algorithms for SAT (or any other Boolean CSP) at similar clause densities. Our algorithms match the known sum-of-squares lower bounds due to Grigoriev and Schoenebeck, up to logarithmic factors. @InProceedings{STOC17p121, author = {Prasad Raghavendra and Satish Rao and Tselil Schramm}, title = {Strongly Refuting Random CSPs Below the Spectral Threshold}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {121--131}, doi = {}, year = {2017}, } 

Raz, Ran 
STOC '17: "Time-Space Hardness of Learning ..."
Time-Space Hardness of Learning Sparse Parities
Gillat Kol, Ran Raz, and Avishay Tal (Princeton University, USA; IAS, USA) We define a concept class F to be time-space hard (or memory-samples hard) if any learning algorithm for F requires either a memory of size superlinear in n or a number of samples superpolynomial in n, where n is the length of one sample. A recent work shows that the class of all parity functions is time-space hard [Raz, FOCS’16]. Building on [Raz, FOCS’16], we show that the class of all sparse parities of Hamming weight ℓ is time-space hard, as long as ℓ ≥ ω(log n / log log n). Consequently, linear-size DNF Formulas, linear-size Decision Trees and logarithmic-size Juntas are all time-space hard. Our result is more general and provides time-space lower bounds for learning any concept class of parity functions. We give applications of our results in the field of bounded-storage cryptography. For example, for every ω(log n) ≤ k ≤ n, we obtain an encryption scheme that requires a private key of length k, and time complexity of n per encryption/decryption of each bit, and is provably and unconditionally secure as long as the attacker uses at most o(nk) memory bits and the scheme is used at most 2^{o(k)} times. Previously, this was known only for k = n [Raz, FOCS’16]. @InProceedings{STOC17p1067, author = {Gillat Kol and Ran Raz and Avishay Tal}, title = {Time-Space Hardness of Learning Sparse Parities}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1067--1080}, doi = {}, year = {2017}, } 

Razenshteyn, Ilya 
STOC '17: "Approximate Near Neighbors ..."
Approximate Near Neighbors for General Symmetric Norms
Alexandr Andoni, Huy L. Nguyen, Aleksandar Nikolov, Ilya Razenshteyn, and Erik Waingarten (Columbia University, USA; Northeastern University, USA; University of Toronto, Canada; Massachusetts Institute of Technology, USA) We show that every *symmetric* normed space admits an efficient nearest neighbor search data structure with doubly-logarithmic approximation. Specifically, for every n, d = n^{o(1)}, and every d-dimensional symmetric norm ‖·‖, there exists a data structure for (log log n)-approximate nearest neighbor search over ‖·‖ for n-point datasets achieving n^{o(1)} query time and n^{1+o(1)} space. The main technical ingredient of the algorithm is a low-distortion embedding of a symmetric norm into a low-dimensional iterated product of top-k norms. We also show that our techniques cannot be extended to *general* norms. @InProceedings{STOC17p902, author = {Alexandr Andoni and Huy L. Nguyen and Aleksandar Nikolov and Ilya Razenshteyn and Erik Waingarten}, title = {Approximate Near Neighbors for General Symmetric Norms}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {902--913}, doi = {}, year = {2017}, } 

Regev, Oded 
STOC '17: "Pseudorandomness of Ring-LWE ..."
Pseudorandomness of Ring-LWE for Any Ring and Modulus
Chris Peikert, Oded Regev, and Noah Stephens-Davidowitz (University of Michigan, USA; New York University, USA) We give a polynomial-time quantum reduction from worst-case (ideal) lattice problems directly to decision (Ring-)LWE. This extends to decision all the worst-case hardness results that were previously known for the search version, for the same or even better parameters and with no algebraic restrictions on the modulus or number field. Indeed, our reduction is the first that works for decision Ring-LWE with any number field and any modulus. @InProceedings{STOC17p461, author = {Chris Peikert and Oded Regev and Noah Stephens-Davidowitz}, title = {Pseudorandomness of Ring-LWE for Any Ring and Modulus}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {461--473}, doi = {}, year = {2017}, } STOC '17: "A Reverse Minkowski Theorem ..." A Reverse Minkowski Theorem Oded Regev and Noah Stephens-Davidowitz (New York University, USA) We prove a conjecture due to Dadush, showing that if L ⊂ ℝ^{n} is a lattice such that det(L′) ≥ 1 for all sublattices L′ ⊆ L, then ∑_{y ∈ L} e^{−t^{2} ‖y‖^{2}} ≤ 3/2, where t := 10(log n + 2). This implies bounds on the number of lattice points in Euclidean balls for various radii, which can be seen as a reverse form of Minkowski’s First Theorem. @InProceedings{STOC17p941, author = {Oded Regev and Noah Stephens-Davidowitz}, title = {A Reverse Minkowski Theorem}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {941--953}, doi = {}, year = {2017}, } 

Risteski, Andrej 
STOC '17: "Provable Learning of Noisy-or ..."
Provable Learning of Noisy-or Networks
Sanjeev Arora, Rong Ge, Tengyu Ma, and Andrej Risteski (Princeton University, USA; Duke University, USA) Many machine learning applications use latent variable models to explain structure in data, whereby visible variables (= coordinates of the given datapoint) are explained as a probabilistic function of some hidden variables. Learning the model, that is, the mapping from hidden variables to visible ones and vice versa, is NP-hard even in very simple settings. In recent years, provably efficient algorithms were nevertheless developed for models with linear structure: topic models, mixture models, hidden Markov models, etc. These algorithms use matrix or tensor decomposition, and make some reasonable assumptions about the parameters of the underlying model. But matrix or tensor decomposition seems of little use when the latent variable model has nonlinearities. The current paper shows how to make progress: tensor decomposition is applied for learning the single-layer noisy-OR network, which is a textbook example of a Bayes net, and used for example in the classic QMR-DT software for diagnosing which disease(s) a patient may have by observing the symptoms he/she exhibits. The technical novelty here, which should be useful in other settings in future, is analysis of tensor decomposition in presence of systematic error (i.e., where the noise/error is correlated with the signal, and doesn't decrease as the number of samples goes to infinity). This requires rethinking all steps of tensor decomposition methods from the ground up. For simplicity our analysis is stated assuming that the network parameters were chosen from a probability distribution but the method seems more generally applicable. @InProceedings{STOC17p1057, author = {Sanjeev Arora and Rong Ge and Tengyu Ma and Andrej Risteski}, title = {Provable Learning of Noisy-or Networks}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1057--1066}, doi = {}, year = {2017}, } 

Robere, Robert 
STOC '17: "Strongly Exponential Lower ..."
Strongly Exponential Lower Bounds for Monotone Computation
Toniann Pitassi and Robert Robere (University of Toronto, Canada) We prove size lower bounds of 2^{Ω(n)} for an explicit function in monotone NP in the following models of computation: monotone formulas, monotone switching networks, monotone span programs, and monotone comparator circuits, where n is the number of variables of the underlying function. Our lower bounds improve on the best previous bounds in each of these models, and are the best possible for any function up to constant factors in the exponent. Moreover, we give one unified proof that is short and fairly elementary. @InProceedings{STOC17p1246, author = {Toniann Pitassi and Robert Robere}, title = {Strongly Exponential Lower Bounds for Monotone Computation}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {12461255}, doi = {}, year = {2017}, } 

Robinson, Peter 
STOC '17: "A Time and Message-Optimal ..."
A Time and Message-Optimal Distributed Algorithm for Minimum Spanning Trees
Gopal Pandurangan, Peter Robinson, and Michele Scquizzato (University of Houston, USA; Royal Holloway University of London, UK) This paper presents a randomized (Las Vegas) distributed algorithm that constructs a minimum spanning tree (MST) in weighted networks with optimal (up to polylogarithmic factors) time and message complexity. This algorithm runs in Õ(D + √n) time and exchanges Õ(m) messages (both with high probability), where n is the number of nodes of the network, D is the diameter, and m is the number of edges. This is the first distributed MST algorithm that matches simultaneously the time lower bound of Ω(D + √n) [Elkin, SIAM J. Comput. 2006] and the message lower bound of Ω(m) [Kutten et al., J. ACM 2015], which both apply to randomized Monte Carlo algorithms. The prior time and message lower bounds are derived using two completely different graph constructions; the existing lower bound construction that shows one lower bound does not work for the other. To complement our algorithm, we present a new lower bound graph construction for which any distributed MST algorithm requires both Ω(D + √n) rounds and Ω(m) messages. @InProceedings{STOC17p743, author = {Gopal Pandurangan and Peter Robinson and Michele Scquizzato}, title = {A Time and Message-Optimal Distributed Algorithm for Minimum Spanning Trees}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {743--756}, doi = {}, year = {2017}, } 

Rosen, Alon 
STOC '17: "Average-Case Fine-Grained ..."
Average-Case Fine-Grained Hardness
Marshall Ball, Alon Rosen, Manuel Sabin, and Prashant Nalini Vasudevan (Columbia University, USA; IDC Herzliya, Israel; University of California at Berkeley, USA; Massachusetts Institute of Technology, USA) We present functions that can be computed in some fixed polynomial time but are hard on average for any algorithm that runs in slightly smaller time, assuming widely-conjectured worst-case hardness for problems from the study of fine-grained complexity. Unconditional constructions of such functions are known from before (Goldmann et al., IPL ’94), but these have been canonical functions that have not found further use, while our functions are closely related to well-studied problems and have considerable algebraic structure. Based on the average-case hardness and structural properties of our functions, we outline the construction of a Proof of Work scheme and discuss possible approaches to constructing fine-grained One-Way Functions. We also show how our reductions make conjectures regarding the worst-case hardness of the problems we reduce from (and consequently the Strong Exponential Time Hypothesis) heuristically falsifiable in a sense similar to that of (Naor, CRYPTO ’03). We prove our hardness results in each case by showing fine-grained reductions from solving one of three problems – namely, Orthogonal Vectors (OV), 3SUM, and All-Pairs Shortest Paths (APSP) – in the worst case to computing our function correctly on a uniformly random input. The conjectured hardness of OV and 3SUM then gives us functions that require n^{2−o(1)} time to compute on average, and that of APSP gives us a function that requires n^{3−o(1)} time. Using the same techniques we also obtain a conditional average-case time hierarchy of functions. @InProceedings{STOC17p483, author = {Marshall Ball and Alon Rosen and Manuel Sabin and Prashant Nalini Vasudevan}, title = {Average-Case Fine-Grained Hardness}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {483--496}, doi = {}, year = {2017}, } 
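The Orthogonal Vectors problem the reductions above start from is simple to state; a minimal brute-force checker (illustrative instances, not from the paper) runs in O(n² · d) time, and the OV conjecture says n^{2−o(1)} time is essentially required:

```python
# Orthogonal Vectors (OV): given two sets of d-dimensional 0/1 vectors,
# decide whether some pair (u, v) has inner product <u, v> = 0. This
# brute-force scan takes O(n^2 * d) time; beating n^{2 - o(1)} is
# conjectured impossible and would refute SETH.
def has_orthogonal_pair(U, V):
    return any(all(u_i * v_i == 0 for u_i, v_i in zip(u, v))
               for u in U for v in V)

U = [(1, 0, 1), (0, 1, 1)]
V = [(1, 1, 0), (0, 1, 0)]
# (1, 0, 1) and (0, 1, 0) are orthogonal, so a pair exists here
```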

Roughgarden, Tim 
STOC '17: "Why Prices Need Algorithms ..."
Why Prices Need Algorithms (Invited Talk)
Tim Roughgarden and Inbal Talgam-Cohen (Stanford University, USA; Hebrew University of Jerusalem, Israel) Computational complexity has already had plenty to say about the computation of economic equilibria. However, understanding when equilibria are guaranteed to exist is a central theme in economic theory, seemingly unrelated to computation. In this talk we survey our main results presented at EC’15, which show that the existence of equilibria in markets is inextricably connected to the computational complexity of related optimization problems, such as revenue or welfare maximization. We demonstrate how this relationship implies, under suitable complexity assumptions, a host of impossibility results. We also suggest a complexity-theoretic explanation for the lack of useful extensions of the Walrasian equilibrium concept: such extensions seem to require the invention of novel polynomial-time algorithms for welfare maximization. @InProceedings{STOC17p2, author = {Tim Roughgarden and Inbal Talgam-Cohen}, title = {Why Prices Need Algorithms (Invited Talk)}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {2--2}, doi = {}, year = {2017}, } 

Rubinstein, Aviad 
STOC '17: "The Limitations of Optimization ..."
The Limitations of Optimization from Samples
Eric Balkanski, Aviad Rubinstein, and Yaron Singer (Harvard University, USA; University of California at Berkeley, USA) In this paper we consider the following question: can we optimize objective functions from the training data we use to learn them? We formalize this question through a novel framework we call optimization from samples (OPS). In OPS, we are given sampled values of a function drawn from some distribution and the objective is to optimize the function under some constraint. While there are interesting classes of functions that can be optimized from samples, our main result is an impossibility. We show that there are classes of functions which are statistically learnable and optimizable, but for which no reasonable approximation for optimization from samples is achievable. In particular, our main result shows that there is no constant factor approximation for maximizing coverage functions under a cardinality constraint using polynomiallymany samples drawn from any distribution. We also show tight approximation guarantees for maximization under a cardinality constraint of several interesting classes of functions including unitdemand, additive, and general monotone submodular functions, as well as a constant factor approximation for monotone submodular functions with bounded curvature. @InProceedings{STOC17p1016, author = {Eric Balkanski and Aviad Rubinstein and Yaron Singer}, title = {The Limitations of Optimization from Samples}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {10161027}, doi = {}, year = {2017}, } STOC '17: "Communication Complexity of ..." Communication Complexity of Approximate Nash Equilibria Yakov Babichenko and Aviad Rubinstein (Technion, Israel; University of California at Berkeley, USA) For a constant є, we prove a (N) lower bound on the (randomized) communication complexity of єNash equilibrium in twoplayer N× N games. 
For n-player binary-action games we prove an exp(n) lower bound for the (randomized) communication complexity of (є,є)-weak approximate Nash equilibrium, which is a profile of mixed actions such that at least a (1−є)-fraction of the players are є-best replying. @InProceedings{STOC17p878, author = {Yakov Babichenko and Aviad Rubinstein}, title = {Communication Complexity of Approximate Nash Equilibria}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {878--889}, doi = {}, year = {2017}, } 

Rudra, Atri 
STOC '17: "Answering FAQs in CSPs, Probabilistic ..."
Answering FAQs in CSPs, Probabilistic Graphical Models, Databases, Logic and Matrix Operations (Invited Talk)
Atri Rudra (SUNY Buffalo, USA) In this talk we will discuss a general framework to solve certain sums of products of functions over semirings. This captures many well-known problems in disparate areas such as CSPs, Probabilistic Graphical Models, Databases, Logic and Matrix Operations. This talk is based on joint work titled FAQ: Questions Asked Frequently with Mahmoud Abo Khamis and Hung Q. Ngo, which appeared in PODS 2016. @InProceedings{STOC17p4, author = {Atri Rudra}, title = {Answering FAQs in CSPs, Probabilistic Graphical Models, Databases, Logic and Matrix Operations (Invited Talk)}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {4--4}, doi = {}, year = {2017}, } 
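A toy instance of the sums-of-products computations the talk describes (the two factors below are hypothetical): over the ordinary (+, ×) semiring, pushing sums inside the product — variable elimination — gives the same answer as brute force over all assignments.

```python
from itertools import product

# Two hypothetical factors over binary variables a, b, c.
f = {(a, b): a + b + 1 for a in (0, 1) for b in (0, 1)}
g = {(b, c): 2 * b + c + 1 for b in (0, 1) for c in (0, 1)}

# Brute force over all 2^3 assignments: sum of products.
brute = sum(f[a, b] * g[b, c] for a, b, c in product((0, 1), repeat=3))

# Variable elimination: sum_{a,b,c} f(a,b)g(b,c) = sum_b (sum_a f)(sum_c g).
elim = sum(sum(f[a, b] for a in (0, 1)) * sum(g[b, c] for c in (0, 1))
           for b in (0, 1))

assert brute == elim == 44
```

Swapping in a different semiring (e.g. min for + and + for ×) turns the same elimination pattern into a shortest-path-style computation, which is the unification the framework exploits.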

Sabin, Manuel 
STOC '17: "Average-Case Fine-Grained ..."
Average-Case Fine-Grained Hardness
Marshall Ball, Alon Rosen, Manuel Sabin, and Prashant Nalini Vasudevan (Columbia University, USA; IDC Herzliya, Israel; University of California at Berkeley, USA; Massachusetts Institute of Technology, USA) We present functions that can be computed in some fixed polynomial time but are hard on average for any algorithm that runs in slightly smaller time, assuming widely conjectured worst-case hardness for problems from the study of fine-grained complexity. Unconditional constructions of such functions are known from before (Goldmann et al., IPL ’94), but these have been canonical functions that have not found further use, while our functions are closely related to well-studied problems and have considerable algebraic structure. Based on the average-case hardness and structural properties of our functions, we outline the construction of a Proof of Work scheme and discuss possible approaches to constructing fine-grained One-Way Functions. We also show how our reductions make conjectures regarding the worst-case hardness of the problems we reduce from (and consequently the Strong Exponential Time Hypothesis) heuristically falsifiable in a sense similar to that of (Naor, CRYPTO ’03). We prove our hardness results in each case by showing fine-grained reductions from solving one of three problems – namely, Orthogonal Vectors (OV), 3SUM, and All-Pairs Shortest Paths (APSP) – in the worst case to computing our function correctly on a uniformly random input. The conjectured hardness of OV and 3SUM then gives us functions that require n^{2−o(1)} time to compute on average, and that of APSP gives us a function that requires n^{3−o(1)} time. Using the same techniques we also obtain a conditional average-case time hierarchy of functions. @InProceedings{STOC17p483, author = {Marshall Ball and Alon Rosen and Manuel Sabin and Prashant Nalini Vasudevan}, title = {Average-Case Fine-Grained Hardness}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {483--496}, doi = {}, year = {2017}, } 

Sachdeva, Sushant 
STOC '17: "Sampling Random Spanning Trees ..."
Sampling Random Spanning Trees Faster Than Matrix Multiplication
David Durfee, Rasmus Kyng, John Peebles, Anup B. Rao, and Sushant Sachdeva (Georgia Institute of Technology, USA; Yale University, USA; Massachusetts Institute of Technology, USA; Google, USA) We present an algorithm that, with high probability, generates a random spanning tree from an edge-weighted undirected graph in Õ(n^{5/3}m^{1/3}) time. The tree is sampled from a distribution where the probability of each tree is proportional to the product of its edge weights. This improves upon the previous best algorithm due to Colbourn et al. that runs in matrix multiplication time, O(n^{ω}). For the special case of unweighted graphs, this improves upon the best previously known running time of Õ(min{n^{ω}, m√n, m^{4/3}}) for m ≫ n^{7/4} (Colbourn et al. ’96, Kelner-Madry ’09, Madry et al. ’15). The effective resistance metric is essential to our algorithm, as in the work of Madry et al., but we eschew determinant-based and random-walk-based techniques used by previous algorithms. Instead, our algorithm is based on Gaussian elimination, and the fact that effective resistance is preserved in the graph resulting from eliminating a subset of vertices (called a Schur complement). As part of our algorithm, we show how to compute approximate effective resistances for a set S of vertex pairs via approximate Schur complements in Õ(m + (n + |S|)є^{−2}) time, without using the Johnson-Lindenstrauss lemma, which requires Õ(min{(m + |S|)є^{−2}, m + nє^{−4} + |S|є^{−2}}) time. We combine this approximation procedure with an error correction procedure for handling edges where our estimate isn’t sufficiently accurate. @InProceedings{STOC17p730, author = {David Durfee and Rasmus Kyng and John Peebles and Anup B. Rao and Sushant Sachdeva}, title = {Sampling Random Spanning Trees Faster Than Matrix Multiplication}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {730--742}, doi = {}, year = {2017}, } 
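For contrast with the paper's elimination-based approach, the classical random-walk sampler it deliberately avoids is easy to state: the Aldous-Broder walk, which yields a uniformly random spanning tree in the unweighted case (the small test graph below is hypothetical).

```python
import random

def aldous_broder(adj, seed=0):
    """Classical Aldous-Broder sampler: walk randomly until all vertices
    are visited; the first-entry edges form a uniformly random spanning
    tree of a connected unweighted graph.  (Shown only for contrast --
    the paper's algorithm eschews random-walk techniques.)"""
    rng = random.Random(seed)
    nodes = list(adj)
    cur = nodes[0]
    visited = {cur}
    tree = set()
    while len(visited) < len(nodes):
        nxt = rng.choice(adj[cur])
        if nxt not in visited:        # first entry into nxt: keep this edge
            visited.add(nxt)
            tree.add(frozenset((cur, nxt)))
        cur = nxt
    return tree

# A 4-cycle with a chord: every spanning tree has exactly n - 1 = 3 edges.
adj = {0: [1, 3], 1: [0, 2, 3], 2: [1, 3], 3: [0, 1, 2]}
tree = aldous_broder(adj)
```

The walk's expected running time is the cover time, which can be as large as Θ(mn); the point of the paper is to beat such bounds (and matrix multiplication) by entirely different means.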

Safra, Muli 
STOC '17: "On Independent Sets, 2-to-2 ..."
On Independent Sets, 2-to-2 Games, and Grassmann Graphs
Subhash Khot, Dor Minzer, and Muli Safra (New York University, USA; Tel Aviv University, Israel) We present a candidate reduction from the 3-Lin problem to the 2-to-2 Games problem and present a combinatorial hypothesis about Grassmann graphs which, if correct, is sufficient to show the soundness of the reduction in a certain nonstandard sense. A reduction that is sound in this nonstandard sense implies that it is NP-hard to distinguish whether an n-vertex graph has an independent set of size (1 − 1/√2)n − o(n) or whether every independent set has size o(n), and consequently, that it is NP-hard to approximate the Vertex Cover problem within a factor √2 − o(1). @InProceedings{STOC17p576, author = {Subhash Khot and Dor Minzer and Muli Safra}, title = {On Independent Sets, 2-to-2 Games, and Grassmann Graphs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {576--589}, doi = {}, year = {2017}, } 

Sankowski, Piotr 
STOC '17: "Decremental Single-Source ..."
Decremental Single-Source Reachability in Planar Digraphs
Giuseppe F. Italiano, Adam Karczmarz, Jakub Łącki, and Piotr Sankowski (University of Rome Tor Vergata, Italy; University of Warsaw, Poland; Google Research, USA) In this paper we show a new algorithm for the decremental single-source reachability problem in directed planar graphs. It processes any sequence of edge deletions in O(n log^{2} n log log n) total time and explicitly maintains the set of vertices reachable from a fixed source vertex. Hence, if all edges are eventually deleted, the amortized time of processing each edge deletion is only O(log^{2} n log log n), which improves upon a previously known O(√n) solution. We also show an algorithm for decremental maintenance of strongly connected components in directed planar graphs with the same total update time. These results constitute the first almost optimal (up to polylogarithmic factors) algorithms for both problems. To the best of our knowledge, these are the first dynamic algorithms with polylogarithmic update times on general directed planar graphs for nontrivial reachability-type problems, for which only polynomial bounds are known in general graphs. @InProceedings{STOC17p1108, author = {Giuseppe F. Italiano and Adam Karczmarz and Jakub Łącki and Piotr Sankowski}, title = {Decremental Single-Source Reachability in Planar Digraphs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1108--1121}, doi = {}, year = {2017}, } 

Santhanam, Rahul 
STOC '17: "Pseudodeterministic Constructions ..."
Pseudodeterministic Constructions in Subexponential Time
Igor C. Oliveira and Rahul Santhanam (Charles University in Prague, Czechia; University of Oxford, UK) We study pseudodeterministic constructions, i.e., randomized algorithms which output the same solution on most computation paths. We establish unconditionally that there is an infinite sequence {p_{n}} of primes and a randomized algorithm A running in expected subexponential time such that for each n, on input 1^{|p_{n}|}, A outputs p_{n} with probability 1. In other words, our result provides a pseudodeterministic construction of primes in subexponential time which works infinitely often. This result follows from a more general theorem about pseudodeterministic constructions. A property Q ⊆ {0,1}^{*} is γ-dense if for large enough n, |Q ∩ {0,1}^{n}| ≥ γ2^{n}. We show that for each c > 0 at least one of the following holds: (1) There is a pseudodeterministic polynomial time construction of a family {H_{n}} of sets, H_{n} ⊆ {0,1}^{n}, such that for each (1/n^{c})-dense property Q ∈ DTIME(n^{c}) and every large enough n, H_{n} ∩ Q ≠ ∅; or (2) There is a deterministic subexponential time construction of a family {H′_{n}} of sets, H′_{n} ⊆ {0,1}^{n}, such that for each (1/n^{c})-dense property Q ∈ DTIME(n^{c}) and for infinitely many values of n, H′_{n} ∩ Q ≠ ∅. We provide further algorithmic applications that might be of independent interest. Perhaps intriguingly, while our main results are unconditional, they have a nonconstructive element, arising from a sequence of applications of the hardness versus randomness paradigm. @InProceedings{STOC17p665, author = {Igor C. Oliveira and Rahul Santhanam}, title = {Pseudodeterministic Constructions in Subexponential Time}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {665--677}, doi = {}, year = {2017}, } 
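A familiar small-scale illustration of pseudodeterminism (this sketch is unrelated to the paper's subexponential-time construction): a prime search driven by Miller-Rabin testing is randomized, yet almost every choice of random coins leads to the same output.

```python
import random

def miller_rabin(n, rng, rounds=16):
    """Randomized primality test.  Different computation paths pick
    different witnesses, but with overwhelming probability they all
    return the same answer -- the defining property of pseudodeterminism."""
    for p in (2, 3, 5, 7):
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = rng.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False              # a is a witness of compositeness
    return True

def first_prime_at_least(m, seed=0):
    """Pseudodeterministic search: the output is the same fixed prime on
    almost all random seeds, even though the algorithm flips coins."""
    rng = random.Random(seed)
    n = m
    while not miller_rabin(n, rng):
        n += 1
    return n
```

For example, `first_prime_at_least(1000)` returns 1009 for essentially every seed; the paper's challenge is achieving this kind of canonical output for n-bit primes in subexponential time with a correctness guarantee.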

Saranurak, Thatchaphol 
STOC '17: "Dynamic Spanning Forest with ..."
Dynamic Spanning Forest with Worst-Case Update Time: Adaptive, Las Vegas, and O(n^{1/2 − ε})-Time
Danupon Nanongkai and Thatchaphol Saranurak (KTH, Sweden) We present two algorithms for dynamically maintaining a spanning forest of a graph undergoing edge insertions and deletions. Our algorithms guarantee worst-case update time and work against an adaptive adversary, meaning that an edge update can depend on previous outputs of the algorithms. We provide the first polynomial improvement over the long-standing O(√n) bound of [Frederickson STOC’84; Eppstein, Galil, Italiano and Nissenzweig FOCS’92] for this type of algorithm. The previously best improvement was O(√n (log log n)^{2}/log n) [Kejlberg-Rasmussen, Kopelowitz, Pettie and Thorup ESA’16]. We note however that these bounds were obtained by deterministic algorithms while our algorithms are randomized. Our first algorithm is Monte Carlo and guarantees an O(n^{0.4+o(1)}) worst-case update time, where the o(1) term hides an O(√(log log n/log n)) factor. Our second algorithm is Las Vegas and guarantees an O(n^{0.49306}) worst-case update time with high probability. Algorithms with better update time either needed to assume that the adversary is oblivious (e.g. [Kapron, King and Mountjoy SODA’13]) or could only guarantee an amortized update time. Our second result answers an open problem by Kapron et al. To the best of our knowledge, our algorithms are among the few nontrivial randomized dynamic algorithms that work against adaptive adversaries. The key to our results is a decomposition of graphs into subgraphs that either have high expansion or are sparse. This decomposition serves as an interface between recent developments on (static) flow computation and many old ideas in dynamic graph algorithms: On the one hand, we can combine previous dynamic graph techniques to get faster dynamic spanning forest algorithms if such a decomposition is given. On the other hand, we can adapt flow-related techniques (e.g. those from [Khandekar, Rao and Vazirani STOC’06], [Peng SODA’16], and [Orecchia and Zhu SODA’14]) to maintain such a decomposition. To the best of our knowledge, this is the first time these flow techniques are used in fully dynamic graph algorithms. @InProceedings{STOC17p1122, author = {Danupon Nanongkai and Thatchaphol Saranurak}, title = {Dynamic Spanning Forest with Worst-Case Update Time: Adaptive, Las Vegas, and O(n<sup>1/2 − ε</sup>)-Time}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1122--1129}, doi = {}, year = {2017}, } 

Saurabh, Saket 
STOC '17: "Lossy Kernelization ..."
Lossy Kernelization
Daniel Lokshtanov, Fahad Panolan, M. S. Ramanujan, and Saket Saurabh (University of Bergen, Norway; Vienna University of Technology, Austria; Institute of Mathematical Sciences, India) In this paper we propose a new framework for analyzing the performance of preprocessing algorithms. Our framework builds on the notion of kernelization from parameterized complexity. However, as opposed to the original notion of kernelization, our definitions combine well with approximation algorithms and heuristics. The key new definition is that of a polynomial size α-approximate kernel. Loosely speaking, a polynomial size α-approximate kernel is a polynomial time preprocessing algorithm that takes as input an instance (I, k) of a parameterized problem, and outputs another instance (I′, k′) of the same problem, such that |I′| + k′ ≤ k^{O(1)}. Additionally, for every c ≥ 1, a c-approximate solution s′ to the preprocessed instance (I′, k′) can be turned in polynomial time into a (c · α)-approximate solution s to the original instance (I, k). Amongst our main technical contributions are α-approximate kernels of polynomial size for three problems, namely Connected Vertex Cover, Disjoint Cycle Packing and Disjoint Factors. These problems are known not to admit any polynomial size kernels unless NP ⊆ coNP/Poly. Our approximate kernels simultaneously beat both the lower bounds on the (normal) kernel size and the hardness of approximation lower bounds for all three problems. On the negative side we prove that Longest Path parameterized by the length of the path and Set Cover parameterized by the universe size do not admit even an α-approximate kernel of polynomial size, for any α ≥ 1, unless NP ⊆ coNP/Poly. In order to prove this lower bound we need to combine in a nontrivial way the techniques used for showing kernelization lower bounds with the methods for showing hardness of approximation. @InProceedings{STOC17p224, author = {Daniel Lokshtanov and Fahad Panolan and M. S. Ramanujan and Saket Saurabh}, title = {Lossy Kernelization}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {224--237}, doi = {}, year = {2017}, } 

Scarlett, Jonathan 
STOC '17: "An Adaptive Sublinear-Time ..."
An Adaptive Sublinear-Time Block Sparse Fourier Transform
Volkan Cevher, Michael Kapralov, Jonathan Scarlett, and Amir Zandieh (EPFL, Switzerland) The problem of approximately computing the k dominant Fourier coefficients of a vector X quickly, and using few samples in the time domain, is known as the Sparse Fourier Transform (sparse FFT) problem. A long line of work on the sparse FFT has resulted in algorithms with O(k log n log(n/k)) runtime [Hassanieh et al., STOC’12] and O(k log n) sample complexity [Indyk et al., FOCS’14]. This paper revisits the sparse FFT problem with the added twist that the sparse coefficients approximately obey a (k_{0},k_{1})-block sparse model. In this model, signal frequencies are clustered in k_{0} intervals with width k_{1} in Fourier space, and k = k_{0}k_{1} is the total sparsity. Our main result is the first sparse FFT algorithm for (k_{0},k_{1})-block sparse signals with a sample complexity of O^{*}(k_{0}k_{1} + k_{0}log(1+ k_{0})log n) at constant signal-to-noise ratios, and sublinear runtime. Our algorithm crucially uses adaptivity to achieve the improved sample complexity bound, and we provide a lower bound showing that this is essential in the Fourier setting: any non-adaptive algorithm must use Ω(k_{0}k_{1}log(n/(k_{0}k_{1}))) samples for the (k_{0},k_{1})-block sparse model, ruling out improvements over the vanilla sparsity assumption. Our main technical innovation for adaptivity is a new randomized energy-based importance sampling technique that may be of independent interest. @InProceedings{STOC17p702, author = {Volkan Cevher and Michael Kapralov and Jonathan Scarlett and Amir Zandieh}, title = {An Adaptive Sublinear-Time Block Sparse Fourier Transform}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {702--715}, doi = {}, year = {2017}, } 

Schramm, Tselil 
STOC '17: "Strongly Refuting Random CSPs ..."
Strongly Refuting Random CSPs Below the Spectral Threshold
Prasad Raghavendra, Satish Rao, and Tselil Schramm (University of California at Berkeley, USA) Random constraint satisfaction problems (CSPs) are known to exhibit threshold phenomena: given a uniformly random instance of a CSP with n variables and m clauses, there is a value of m = Ω(n) beyond which the CSP will be unsatisfiable with high probability. Strong refutation is the problem of certifying that no variable assignment satisfies more than a constant fraction of clauses; this is the natural algorithmic problem in the unsatisfiable regime (when m/n = ω(1)). Intuitively, strong refutation should become easier as the clause density m/n grows, because the contradictions introduced by the random clauses become more locally apparent. For CSPs such as k-SAT and k-XOR, there is a long-standing gap between the clause density at which efficient strong refutation algorithms are known, m/n ≥ Õ(n^{k/2−1}), and the clause density at which instances become unsatisfiable with high probability, m/n = ω(1). In this paper, we give spectral and sum-of-squares algorithms for strongly refuting random k-XOR instances with clause density m/n ≥ Õ(n^{(k/2−1)(1−δ)}) in time exp(Õ(n^{δ})) or in Õ(n^{δ}) rounds of the sum-of-squares hierarchy, for any δ ∈ [0,1) and any integer k ≥ 3. Our algorithms provide a smooth transition between the clause density at which polynomial-time algorithms are known at δ = 0, and brute-force refutation at the satisfiability threshold when δ = 1. We also leverage our k-XOR results to obtain strong refutation algorithms for SAT (or any other Boolean CSP) at similar clause densities. Our algorithms match the known sum-of-squares lower bounds due to Grigoriev and Schonebeck, up to logarithmic factors. @InProceedings{STOC17p121, author = {Prasad Raghavendra and Satish Rao and Tselil Schramm}, title = {Strongly Refuting Random CSPs Below the Spectral Threshold}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {121--131}, doi = {}, year = {2017}, } 

Scquizzato, Michele 
STOC '17: "A Time and Message-Optimal ..."
A Time and Message-Optimal Distributed Algorithm for Minimum Spanning Trees
Gopal Pandurangan, Peter Robinson, and Michele Scquizzato (University of Houston, USA; Royal Holloway University of London, UK) This paper presents a randomized (Las Vegas) distributed algorithm that constructs a minimum spanning tree (MST) in weighted networks with optimal (up to polylogarithmic factors) time and message complexity. The algorithm runs in Õ(D + √n) time and exchanges Õ(m) messages (both with high probability), where n is the number of nodes of the network, D is the diameter, and m is the number of edges. This is the first distributed MST algorithm that simultaneously matches the time lower bound of Ω(D + √n) [Elkin, SIAM J. Comput. 2006] and the message lower bound of Ω(m) [Kutten et al., J. ACM 2015], both of which apply to randomized Monte Carlo algorithms. The prior time and message lower bounds are derived using two completely different graph constructions; the construction that establishes one lower bound does not work for the other. To complement our algorithm, we present a new lower bound graph construction for which any distributed MST algorithm requires both Ω(D + √n) rounds and Ω(m) messages. @InProceedings{STOC17p743, author = {Gopal Pandurangan and Peter Robinson and Michele Scquizzato}, title = {A Time and Message-Optimal Distributed Algorithm for Minimum Spanning Trees}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {743--756}, doi = {}, year = {2017}, } 
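Distributed MST algorithms in this line grow and merge tree fragments in parallel; the underlying pattern is Borůvka's algorithm, sketched sequentially below on a hypothetical edge list (the sketch does not model the rounds/messages accounting that is the paper's actual contribution).

```python
def boruvka_mst_weight(n, edges):
    """MST weight via Boruvka's fragment-merging phases: in each phase,
    every fragment selects its lightest outgoing edge, and fragments
    joined by selected edges merge.  Distinct weights assumed (avoids
    tie-breaking cycles)."""
    comp = list(range(n))

    def find(x):
        while comp[x] != x:
            comp[x] = comp[comp[x]]   # path halving
            x = comp[x]
        return x

    total, merged = 0.0, True
    while merged:
        merged = False
        cheapest = {}
        for u, v, w in edges:         # lightest outgoing edge per fragment
            ru, rv = find(u), find(v)
            if ru != rv:
                for r in (ru, rv):
                    if r not in cheapest or w < cheapest[r][2]:
                        cheapest[r] = (u, v, w)
        for u, v, w in cheapest.values():
            ru, rv = find(u), find(v)
            if ru != rv:
                comp[ru] = rv
                total += w
                merged = True
    return total

edges = [(0, 1, 1.0), (1, 2, 2.0), (2, 3, 1.5), (3, 0, 4.0), (0, 2, 3.0)]
assert boruvka_mst_weight(4, edges) == 4.5
```

Each phase at least halves the number of fragments, so there are O(log n) phases; the distributed difficulty is executing each phase with few rounds and few messages simultaneously.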

Servedio, Rocco A. 
STOC '17: "Addition Is Exponentially ..."
Addition Is Exponentially Harder Than Counting for Shallow Monotone Circuits
Xi Chen, Igor C. Oliveira, and Rocco A. Servedio (Columbia University, USA; Charles University in Prague, Czechia) Let Add_{k,N} denote the Boolean function which takes as input k strings of N bits each, representing k numbers a^{(1)},…,a^{(k)} in {0,1,…,2^{N}−1}, and outputs 1 if and only if a^{(1)} + ⋯ + a^{(k)} ≥ 2^{N}. Let MAJ_{t,n} denote a monotone unweighted threshold gate, i.e., the Boolean function which takes as input a single string x ∈ {0,1}^{n} and outputs 1 if and only if x_{1} + ⋯ + x_{n} ≥ t. The function Add_{k,N} may be viewed as a monotone function that performs addition, and MAJ_{t,n} may be viewed as a monotone gate that performs counting. We refer to circuits that are composed of MAJ gates as monotone majority circuits. The main result of this paper is an exponential lower bound on the size of bounded-depth monotone majority circuits that compute Add_{k,N}. More precisely, we show that for any constant d ≥ 2, any depth-d monotone majority circuit that computes Add_{d,N} must have size 2^{Ω(N^{1/d})}. As Add_{k,N} can be computed by a single monotone weighted threshold gate (that uses exponentially large weights), our lower bound implies that constant-depth monotone majority circuits require exponential size to simulate monotone weighted threshold gates. This answers a question posed by Goldmann and Karpinski (STOC’93) and recently restated by Håstad (2010, 2014). We also show that our lower bound is essentially best possible, by constructing a depth-d, size-2^{O(N^{1/d})} monotone majority circuit for Add_{d,N}. As a corollary of our lower bound, we significantly strengthen a classical theorem in circuit complexity due to Ajtai and Gurevich (JACM’87). They exhibited a monotone function that is in AC^{0} but requires superpolynomial size for any constant-depth monotone circuit composed of unbounded fan-in AND and OR gates. 
We describe a monotone function that is in depth-3 AC^{0} but requires exponential-size monotone circuits of any constant depth, even if the circuits are composed of MAJ gates. @InProceedings{STOC17p1232, author = {Xi Chen and Igor C. Oliveira and Rocco A. Servedio}, title = {Addition Is Exponentially Harder Than Counting for Shallow Monotone Circuits}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1232--1245}, doi = {}, year = {2017}, } STOC '17: "Optimal Mean-Based Algorithms ..." Optimal Mean-Based Algorithms for Trace Reconstruction Anindya De, Ryan O'Donnell, and Rocco A. Servedio (Northwestern University, USA; Carnegie Mellon University, USA; Columbia University, USA) In the (deletion-channel) trace reconstruction problem, there is an unknown n-bit source string x. An algorithm is given access to independent “traces” of x, where a trace is formed by deleting each bit of x independently with probability δ. The goal of the algorithm is to recover x exactly (with high probability), while minimizing samples (number of traces) and running time. Previously, the best known algorithm for the trace reconstruction problem was due to Holenstein et al. [HMPW08]; it uses exp(O(n^{1/2})) samples and running time for any fixed 0 < δ < 1. It is also what we call a “mean-based algorithm”, meaning that it only uses the empirical means of the individual bits of the traces. Holenstein et al. also gave a lower bound, showing that any mean-based algorithm must use at least n^{Ω(log n)} samples. In this paper we improve both of these results, obtaining matching upper and lower bounds for mean-based trace reconstruction. For any constant deletion rate 0 < δ < 1, we give a mean-based algorithm that uses exp(O(n^{1/3})) time and traces; we also prove that any mean-based algorithm must use at least exp(Ω(n^{1/3})) traces. 
In fact, we obtain matching upper and lower bounds even for subconstant δ and subconstant ρ = 1−δ: when (log^{3} n)/n ≪ δ ≤ 1/2 the bound is exp(Θ((δn)^{1/3})), and when 1/√n ≪ ρ ≤ 1/2 the bound is exp(Θ((n/ρ)^{1/3})). Our proofs involve estimates for the maxima of Littlewood polynomials on complex disks. We show that these techniques can also be used to perform trace reconstruction with random insertions and bit-flips in addition to deletions. We also find a surprising result: for deletion probabilities δ > 1/2, the presence of insertions can actually help with trace reconstruction. @InProceedings{STOC17p1047, author = {Anindya De and Ryan O'Donnell and Rocco A. Servedio}, title = {Optimal Mean-Based Algorithms for Trace Reconstruction}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1047--1056}, doi = {}, year = {2017}, } 
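The objects involved are easy to simulate (the parameters below are arbitrary illustrations): a trace is a random subsequence of x, and a mean-based algorithm may only look at the coordinate-wise empirical means of the traces. This sketch only computes that statistic; it does not implement the paper's reconstruction procedure.

```python
import random

def trace(x, delta, rng):
    """One trace: each bit of x is deleted independently with prob. delta."""
    return [b for b in x if rng.random() >= delta]

def empirical_means(x, delta, num_traces, seed=0):
    """Coordinate-wise empirical means of zero-padded traces -- the only
    statistic a 'mean-based' algorithm is allowed to use."""
    rng = random.Random(seed)
    sums = [0.0] * len(x)
    for _ in range(num_traces):
        for j, b in enumerate(trace(x, delta, rng)):
            sums[j] += b              # positions past a trace's end count as 0
    return [s / num_traces for s in sums]

x = [1, 0, 1, 1, 0, 0, 1, 0]
means = empirical_means(x, delta=0.3, num_traces=2000)
```

Deletions shift surviving bits leftward, so each mean mixes contributions from many source positions; disentangling them is where the Littlewood-polynomial analysis comes in.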

Shahrasbi, Amirbehshad 
STOC '17: "Synchronization Strings: Codes ..."
Synchronization Strings: Codes for Insertions and Deletions Approaching the Singleton Bound
Bernhard Haeupler and Amirbehshad Shahrasbi (Carnegie Mellon University, USA) We introduce synchronization strings, which provide a novel way of efficiently dealing with synchronization errors, i.e., insertions and deletions. Synchronization errors are strictly more general and much harder to deal with than the more commonly considered half-errors, i.e., symbol corruptions and erasures. For every є > 0, synchronization strings allow one to index a sequence with an є^{−O(1)}-size alphabet such that one can efficiently transform k synchronization errors into (1 + є)k half-errors. This powerful new technique has many applications. In this paper we focus on designing insdel codes, i.e., error correcting block codes (ECCs) for insertion-deletion channels. While ECCs for both half-errors and synchronization errors have been intensely studied, the latter has largely resisted progress. As Mitzenmacher puts it in his 2009 survey: “Channels with synchronization errors ... are simply not adequately understood by current theory. Given the near-complete knowledge we have for channels with erasures and errors ... our lack of understanding about channels with synchronization errors is truly remarkable.” Indeed, it took until 1999 for the first insdel codes with constant rate, constant distance, and constant alphabet size to be constructed, and only since 2016 are there constructions of constant-rate insdel codes for asymptotically large noise rates. Even in the asymptotically large or small noise regimes these codes are polynomially far from the optimal rate-distance tradeoff. This makes the understanding of insdel codes up to this work equivalent to what was known for regular ECCs after Forney introduced concatenated codes in his doctoral thesis 50 years ago. A straightforward application of our synchronization-string-based indexing method gives a simple black-box construction which transforms any ECC into an equally efficient insdel code with only a small increase in the alphabet size. 
This instantly transfers much of the highly developed understanding for regular ECCs over large constant alphabets into the realm of insdel codes. Most notably, for the complete noise spectrum we obtain efficient “near-MDS” insdel codes which get arbitrarily close to the optimal rate-distance tradeoff given by the Singleton bound. In particular, for any δ ∈ (0,1) and є > 0 we give insdel codes achieving a rate of 1 − δ − є over a constant-size alphabet that efficiently correct a δ fraction of insertions or deletions. @InProceedings{STOC17p33, author = {Bernhard Haeupler and Amirbehshad Shahrasbi}, title = {Synchronization Strings: Codes for Insertions and Deletions Approaching the Singleton Bound}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {33--46}, doi = {}, year = {2017}, } 
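The indexing idea has a naive baseline that makes the trade-off concrete: attach each symbol's explicit position, so deletions become erasures at known positions, at the cost of an alphabet that grows with the message length. Synchronization strings achieve essentially the same effect with only an є^{−O(1)}-size index alphabet.

```python
def index_codeword(symbols):
    """Naive indexing: pair each symbol with its position.  Deletions of
    indexed symbols become erasures at known positions, but the effective
    alphabet now grows with the message length -- the blow-up that
    synchronization strings avoid."""
    return [(i, s) for i, s in enumerate(symbols)]

def deletions_to_erasures(received, n):
    """Rebuild a length-n string with erasures (None) from the indexed
    symbols that survived the deletion channel."""
    out = [None] * n
    for i, s in received:
        out[i] = s
    return out

sent = index_codeword(list("synchro"))
received = [sent[i] for i in (0, 2, 3, 6)]    # positions 1, 4, 5 deleted
recovered = deletions_to_erasures(received, 7)
# recovered == ['s', None, 'n', 'c', None, None, 'o']
```

Once deletions are reduced to erasures this way, any standard ECC for erasures finishes the job, which is the black-box transformation the abstract describes.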

Shapira, Asaf 
STOC '17: "Removal Lemmas with Polynomial ..."
Removal Lemmas with Polynomial Bounds
Lior Gishboliner and Asaf Shapira (Tel Aviv University, Israel) We give new sufficient and necessary criteria guaranteeing that a hereditary graph property can be tested with a polynomial query complexity. Although both are simple combinatorial criteria, they imply almost all prior positive and negative results of this type, as well as many new ones. One striking application of our results is that every semi-algebraic graph property (e.g., being an interval graph, a unit-disc graph, etc.) can be tested with a polynomial query complexity. This confirms a conjecture of Alon. The proofs combine probabilistic ideas together with a novel application of a conditional regularity lemma for matrices, due to Alon, Fischer and Newman. @InProceedings{STOC17p510, author = {Lior Gishboliner and Asaf Shapira}, title = {Removal Lemmas with Polynomial Bounds}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {510--522}, doi = {}, year = {2017}, } 

Sherman, Jonah 
STOC '17: "Area-Convexity, ℓ_{∞} ..."
Area-Convexity, ℓ_{∞} Regularization, and Undirected Multicommodity Flow
Jonah Sherman (University of California at Berkeley, USA) We show the strong-convexity assumption of regularization-based methods for solving bilinear saddle point problems may be relaxed to a weaker notion of area-convexity with respect to an alternating bilinear form. This allows bypassing the infamous ℓ_{∞} barrier for strongly convex regularizers that has stalled progress on a number of algorithmic problems. Applying area-convex regularization, we present a nearly-linear time approximation algorithm for solving matrix inequality systems AX ≤ B over right-stochastic matrices X. By combining that algorithm with existing work on preconditioning maximum flow, we obtain a nearly-linear time approximation algorithm for maximum concurrent flow in undirected graphs: given an undirected, capacitated graph with m edges and k demand vectors, the algorithm takes Õ(mkє^{−1}) time and outputs k flows routing the specified demands with total congestion at most (1+є) times optimal. @InProceedings{STOC17p452, author = {Jonah Sherman}, title = {Area-Convexity, ℓ<sub>∞</sub> Regularization, and Undirected Multicommodity Flow}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {452--460}, doi = {}, year = {2017}, } 

Shpilka, Amir 
STOC '17: "Succinct Hitting Sets and ..."
Succinct Hitting Sets and Barriers to Proving Algebraic Circuits Lower Bounds
Michael A. Forbes, Amir Shpilka, and Ben Lee Volk (Simons Institute for the Theory of Computing Berkeley, USA; Tel Aviv University, Israel) We formalize a framework of algebraically natural lower bounds for algebraic circuits. Just as with the natural proofs notion of Razborov and Rudich for boolean circuit lower bounds, our notion of algebraically natural lower bounds captures nearly all lower bound techniques known. However, unlike the boolean setting, there has been no concrete evidence demonstrating that this is a barrier to obtaining superpolynomial lower bounds for general algebraic circuits, as there is little understanding whether algebraic circuits are expressive enough to support "cryptography" secure against algebraic circuits. Following a similar result of Williams in the boolean setting, we show that the existence of an algebraic natural proofs barrier is equivalent to the existence of succinct derandomization of the polynomial identity testing problem. That is, whether the coefficient vectors of polylog(N)degree polylog(N)size circuits is a hitting set for the class of poly(N)degree poly(N)size circuits. Further, we give an explicit universal construction showing that if such a succinct hitting set exists, then our universal construction suffices. Further, we assess the existing literature constructing hitting sets for restricted classes of algebraic circuits and observe that none of them are succinct as given. Yet, we show how to modify some of these constructions to obtain succinct hitting sets. This constitutes the first evidence supporting the existence of an algebraic natural proofs barrier. Our framework is similar to the Geometric Complexity Theory (GCT) program of Mulmuley and Sohoni, except that here we emphasize constructiveness of the proofs while the GCT program emphasizes symmetry. 
Nevertheless, our succinct hitting sets have relevance to the GCT program as they imply lower bounds for the complexity of the defining equations of polynomials computed by small circuits. @InProceedings{STOC17p653, author = {Michael A. Forbes and Amir Shpilka and Ben Lee Volk}, title = {Succinct Hitting Sets and Barriers to Proving Algebraic Circuits Lower Bounds}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {653--664}, doi = {}, year = {2017}, } 

Sidford, Aaron 
STOC '17: "Subquadratic Submodular Function ..."
Subquadratic Submodular Function Minimization
Deeparnab Chakrabarty, Yin Tat Lee, Aaron Sidford, and Sam Chiu-wai Wong (Dartmouth College, USA; Microsoft Research, USA; Stanford University, USA; University of California at Berkeley, USA) Submodular function minimization (SFM) is a fundamental discrete optimization problem which generalizes many well-known problems, has applications in various fields, and can be solved in polynomial time. Owing to applications in computer vision and machine learning, fast SFM algorithms are highly desirable. The current fastest algorithms [Lee, Sidford, Wong, 2015] run in O(n^{2} log nM · EO + n^{3} log^{O(1)} nM) time and O(n^{3} log^{2} n · EO + n^{4} log^{O(1)} n) time respectively, where M is the largest absolute value of the function (assuming the range is integers) and EO is the time taken to evaluate the function on any set. The best known lower bound on the query complexity, however, is only Ω(n) [Harvey, 2008]. The main contributions of this paper are subquadratic SFM algorithms. For integer-valued submodular functions, we give an SFM algorithm which runs in O(nM^{3} log n · EO) time, giving the first nearly linear time algorithm in any known regime. For real-valued submodular functions with range in [−1,1], we give an algorithm which in Õ(n^{5/3} · EO/ε^{2}) time returns an ε-additive approximate solution. At their heart, our algorithms are projected stochastic subgradient descent methods on the Lovász extension of submodular functions, where we crucially exploit submodularity and data structures to obtain fast, i.e. sublinear time, subgradient updates. 
The latter is crucial for beating the n^{2} bound: we show that this barrier applies to algorithms which access only subgradients of the Lovász extension, a class that includes the empirically fast Fujishige–Wolfe heuristic [Fujishige, 1980; Wolfe, 1976]. @InProceedings{STOC17p1220, author = {Deeparnab Chakrabarty and Yin Tat Lee and Aaron Sidford and Sam Chiu-wai Wong}, title = {Subquadratic Submodular Function Minimization}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1220--1231}, doi = {}, year = {2017}, } STOC '17: "Almost-Linear-Time Algorithms ..." Almost-Linear-Time Algorithms for Markov Chains and New Spectral Primitives for Directed Graphs Michael B. Cohen, Jonathan Kelner, John Peebles, Richard Peng, Anup B. Rao, Aaron Sidford, and Adrian Vladu (Massachusetts Institute of Technology, USA; Georgia Institute of Technology, USA; Stanford University, USA) In this paper, we begin to address the long-standing algorithmic gap between general and reversible Markov chains. We develop directed analogues of several spectral graph-theoretic tools that had previously been available only in the undirected setting, and for which it was not clear that directed versions even existed. In particular, we provide a notion of approximation for directed graphs, prove that sparsifiers under this notion always exist, and show how to construct them in almost linear time. Using this notion of approximation, we design the first almost-linear-time directed Laplacian system solver, and, by leveraging the recent framework of [Cohen-Kelner-Peebles-Peng-Sidford-Vladu, FOCS'16], we also obtain almost-linear-time algorithms for computing the stationary distribution of a Markov chain, computing expected commute times in a directed graph, and more. 
For each problem, our algorithms improve the previous best running times of O((nm^{3/4} + n^{2/3} m) log^{O(1)} (n κ є^{−1})) to O((m + n · 2^{O(√(log n · log log n))}) log^{O(1)} (n κ є^{−1})), where n is the number of vertices in the graph, m is the number of edges, κ is a natural condition number associated with the problem, and є is the desired accuracy. We hope these results open the door for further studies into directed spectral graph theory, and that they will serve as a stepping stone for designing a new generation of fast algorithms for directed graphs. @InProceedings{STOC17p410, author = {Michael B. Cohen and Jonathan Kelner and John Peebles and Richard Peng and Anup B. Rao and Aaron Sidford and Adrian Vladu}, title = {Almost-Linear-Time Algorithms for Markov Chains and New Spectral Primitives for Directed Graphs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {410--419}, doi = {}, year = {2017}, } 
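The subgradient oracle at the core of the subquadratic SFM entry above admits a compact illustration: the Lovász extension's subgradient at a point comes from Edmonds' greedy ordering of the coordinates. The sketch below is our own toy (the helper names and the path-cut example are assumptions for illustration), not the paper's data-structure-accelerated method.

```python
import numpy as np

def lovasz_subgradient(f, x):
    """Subgradient of the Lovasz extension of a submodular f at x, via the
    greedy (Edmonds) construction: sort coordinates of x in decreasing
    order and record marginal gains f(S + i) - f(S) along that order."""
    n = len(x)
    order = sorted(range(n), key=lambda i: -x[i])
    g = np.zeros(n)
    S, prev = set(), f(set())
    for i in order:
        S.add(i)
        cur = f(S)
        g[i] = cur - prev
        prev = cur
    return g

# Cut function of the path 0-1-2, a standard submodular example.
edges = [(0, 1), (1, 2)]
def cut(S):
    return sum(1 for u, v in edges if (u in S) != (v in S))

x = np.array([0.9, 0.5, 0.1])
g = lovasz_subgradient(cut, x)
# The greedy subgradient also evaluates the extension: <g, x> equals the
# Lovasz extension at x, here |0.9 - 0.5| + |0.5 - 0.1| = 0.8.
print(round(float(np.dot(g, x)), 6))  # 0.8
```

A useful sanity check is that on indicator vectors the extension agrees with the set function itself, which is exactly what makes it a faithful continuous relaxation for minimization.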

Singer, Yaron 
STOC '17: "The Limitations of Optimization ..."
The Limitations of Optimization from Samples
Eric Balkanski, Aviad Rubinstein, and Yaron Singer (Harvard University, USA; University of California at Berkeley, USA) In this paper we consider the following question: can we optimize objective functions from the training data we use to learn them? We formalize this question through a novel framework we call optimization from samples (OPS). In OPS, we are given sampled values of a function drawn from some distribution and the objective is to optimize the function under some constraint. While there are interesting classes of functions that can be optimized from samples, our main result is an impossibility. We show that there are classes of functions which are statistically learnable and optimizable, but for which no reasonable approximation for optimization from samples is achievable. In particular, our main result shows that there is no constant factor approximation for maximizing coverage functions under a cardinality constraint using polynomially many samples drawn from any distribution. We also show tight approximation guarantees for maximization under a cardinality constraint of several interesting classes of functions including unit-demand, additive, and general monotone submodular functions, as well as a constant factor approximation for monotone submodular functions with bounded curvature. @InProceedings{STOC17p1016, author = {Eric Balkanski and Aviad Rubinstein and Yaron Singer}, title = {The Limitations of Optimization from Samples}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1016--1027}, doi = {}, year = {2017}, } 

Sivan, Balasubramanian 
STOC '17: "Stability of Service under ..."
Stability of Service under Time-of-Use Pricing
Shuchi Chawla, Nikhil R. Devanur, Alexander E. Holroyd, Anna R. Karlin, James B. Martin, and Balasubramanian Sivan (University of Wisconsin-Madison, USA; Microsoft Research, USA; University of Washington, USA; University of Oxford, UK; Google Research, USA) We consider time-of-use pricing as a technique for matching supply and demand of temporal resources with the goal of maximizing social welfare. Relevant examples include energy, computing resources on a cloud computing platform, and charging stations for electric vehicles, among many others. A client/job in this setting has a window of time during which he needs service, and a particular value for obtaining it. We assume a stochastic model for demand, where each job materializes with some probability via an independent Bernoulli trial. Given a per-time-unit pricing of resources, any realized job will first try to get served by the cheapest available resource in its window and, failing that, will try to find service at the next cheapest available resource, and so on. Thus, the natural stochastic fluctuations in demand have the potential to lead to cascading overload events. Our main result shows that setting prices so as to optimally handle the expected demand works well: with high probability, when the actual demand is instantiated, the system is stable and the expected value of the jobs served is very close to that of the optimal offline algorithm. @InProceedings{STOC17p184, author = {Shuchi Chawla and Nikhil R. Devanur and Alexander E. Holroyd and Anna R. Karlin and James B. Martin and Balasubramanian Sivan}, title = {Stability of Service under Time-of-Use Pricing}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {184--197}, doi = {}, year = {2017}, } 

Song, Zhao 
STOC '17: "Low Rank Approximation with ..."
Low Rank Approximation with Entrywise ℓ₁-Norm Error
Zhao Song, David P. Woodruff, and Peilin Zhong (University of Texas at Austin, USA; IBM Research, USA; Columbia University, USA) We study the ℓ_{1}-low rank approximation problem, where for a given n × d matrix A and approximation factor α ≥ 1, the goal is to output a rank-k matrix Â for which ‖A−Â‖_{1} ≤ α · min_{rank-k matrices A′} ‖A−A′‖_{1}, where for an n × d matrix C, we let ‖C‖_{1} = ∑_{i=1}^{n} ∑_{j=1}^{d} |C_{i,j}|. This error measure is known to be more robust than the Frobenius norm in the presence of outliers and is indicated in models where Gaussian assumptions on the noise may not apply. The problem was shown to be NP-hard by Gillis and Vavasis, and a number of heuristics have been proposed. It was asked in multiple places if there are any approximation algorithms. We give the first provable approximation algorithms for ℓ_{1}-low rank approximation, showing that it is possible to achieve approximation factor α = (log d) · poly(k) in nnz(A) + (n+d) poly(k) time, where nnz(A) denotes the number of non-zero entries of A. If k is constant, we further improve the approximation ratio to O(1) with a poly(nd)-time algorithm. Under the Exponential Time Hypothesis, we show there is no poly(nd)-time algorithm achieving a (1+1/log^{1+γ}(nd))-approximation, for γ > 0 an arbitrarily small constant, even when k = 1. We give a number of additional results for ℓ_{1}-low rank approximation: nearly tight upper and lower bounds for column subset selection, CUR decompositions, extensions to low rank approximation with respect to ℓ_{p}-norms for 1 ≤ p < 2 and earth-mover distance, low-communication distributed protocols and low-memory streaming algorithms, algorithms with limited randomness, and bicriteria algorithms. We also give a preliminary empirical evaluation. @InProceedings{STOC17p688, author = {Zhao Song and David P. 
Woodruff and Peilin Zhong}, title = {Low Rank Approximation with Entrywise ℓ₁-Norm Error}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {688--701}, doi = {}, year = {2017}, } 
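To make the error measure above concrete, the following small sketch (our own illustration, not code from the paper) computes the entrywise ℓ₁ norm of a residual matrix and contrasts how a single outlier contributes only linearly to the ℓ₁ error, while it dominates the squared Frobenius error that least-squares fitting minimizes.

```python
import numpy as np

def entrywise_l1(C):
    # ||C||_1 = sum over all i, j of |C_{i,j}| (entrywise, not an induced norm).
    return np.abs(C).sum()

# A residual with 99 "inlier" entries of size 1 and one outlier of size 100.
C = np.ones((10, 10))
C[0, 0] = 100.0

l1 = entrywise_l1(C)       # 99 * 1 + 100      = 199
fro_sq = (C ** 2).sum()    # 99 * 1 + 100 ** 2 = 10099

print(100.0 / l1)            # outlier's share of the l1 error: ~0.50
print(100.0 ** 2 / fro_sq)   # outlier's share of the squared Frobenius error: ~0.99
```

This linear (rather than quadratic) weighting of large residual entries is the sense in which the ℓ₁ measure is less sensitive to outliers.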

Steinhardt, Jacob 
STOC '17: "Learning from Untrusted Data ..."
Learning from Untrusted Data
Moses Charikar, Jacob Steinhardt, and Gregory Valiant (Stanford University, USA) The vast majority of theoretical results in machine learning and statistics assume that the training data is a reliable reflection of the phenomena to be learned. Similarly, most learning techniques used in practice are brittle to the presence of large amounts of biased or malicious data. Motivated by this, we consider two frameworks for studying estimation, learning, and optimization in the presence of significant fractions of arbitrary data. The first framework, list-decodable learning, asks whether it is possible to return a list of answers such that at least one is accurate. For example, given a dataset of n points for which an unknown subset of αn points are drawn from a distribution of interest, and no assumptions are made about the remaining (1−α)n points, is it possible to return a list of poly(1/α) answers? The second framework, which we term the semi-verified model, asks whether a small dataset of trusted data (drawn from the distribution in question) can be used to extract accurate information from a much larger but untrusted dataset (of which only an α-fraction is drawn from the distribution). We show strong positive results in both settings, and provide an algorithm for robust learning in a very general stochastic optimization setting. This result has immediate implications for robustly estimating the mean of distributions with bounded second moments, robustly learning mixtures of such distributions, and robustly finding planted partitions in random graphs in which significant portions of the graph have been perturbed by an adversary. @InProceedings{STOC17p47, author = {Moses Charikar and Jacob Steinhardt and Gregory Valiant}, title = {Learning from Untrusted Data}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {47--60}, doi = {}, year = {2017}, } 

Stephan, Frank 
STOC '17: "Deciding Parity Games in Quasipolynomial ..."
Deciding Parity Games in Quasipolynomial Time
Cristian S. Calude, Sanjay Jain, Bakhadyr Khoussainov, Wei Li, and Frank Stephan (University of Auckland, New Zealand; National University of Singapore, Singapore) It is shown that the parity game can be solved in quasipolynomial time. The parameterised parity game – with n nodes and m distinct values (aka colours or priorities) – is proven to be in the class of fixed parameter tractable (FPT) problems when parameterised over m. Both results improve known bounds, from runtime n^{O(√n)} to O(n^{log(m)+6}) and from an XP-algorithm with runtime O(n^{Θ(m)}) for fixed parameter m to an FPT-algorithm with runtime O(n^{5})+g(m), for some function g depending on m only. As an application it is proven that coloured Muller games with n nodes and m colours can be decided in time O((m^{m} · n)^{5}); it is also shown that this bound cannot be improved to O((2^{m} · n)^{c}), for any c, unless FPT = W[1]. @InProceedings{STOC17p252, author = {Cristian S. Calude and Sanjay Jain and Bakhadyr Khoussainov and Wei Li and Frank Stephan}, title = {Deciding Parity Games in Quasipolynomial Time}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {252--263}, doi = {}, year = {2017}, } 

Stephens-Davidowitz, Noah 
STOC '17: "Pseudorandomness of Ring-LWE ..."
Pseudorandomness of Ring-LWE for Any Ring and Modulus
Chris Peikert, Oded Regev, and Noah Stephens-Davidowitz (University of Michigan, USA; New York University, USA) We give a polynomial-time quantum reduction from worst-case (ideal) lattice problems directly to decision (Ring-)LWE. This extends to decision all the worst-case hardness results that were previously known for the search version, for the same or even better parameters and with no algebraic restrictions on the modulus or number field. Indeed, our reduction is the first that works for decision Ring-LWE with any number field and any modulus. @InProceedings{STOC17p461, author = {Chris Peikert and Oded Regev and Noah Stephens-Davidowitz}, title = {Pseudorandomness of Ring-LWE for Any Ring and Modulus}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {461--473}, doi = {}, year = {2017}, } STOC '17: "A Reverse Minkowski Theorem ..." A Reverse Minkowski Theorem Oded Regev and Noah Stephens-Davidowitz (New York University, USA) We prove a conjecture due to Dadush, showing that if L ⊂ ℝ^{n} is a lattice such that det(L′) ≥ 1 for all sublattices L′ ⊆ L, then ∑_{y ∈ L} e^{−t^{2}‖y‖^{2}} ≤ 3/2, where t := 10(log n + 2). This implies bounds on the number of lattice points in Euclidean balls for various different radii, which can be seen as a reverse form of Minkowski's First Theorem. @InProceedings{STOC17p941, author = {Oded Regev and Noah Stephens-Davidowitz}, title = {A Reverse Minkowski Theorem}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {941--953}, doi = {}, year = {2017}, } 

Steurer, David 
STOC '17: "Quantum Entanglement, Sum ..."
Quantum Entanglement, Sum of Squares, and the Log Rank Conjecture
Boaz Barak, Pravesh K. Kothari, and David Steurer (Harvard University, USA; Princeton University, USA; IAS, USA; Cornell University, USA) For every constant є>0, we give an exp(Õ(√n))-time algorithm for the 1 vs 1−є Best Separable State (BSS) problem of distinguishing, given an n^{2} × n^{2} matrix corresponding to a quantum measurement, between the case that there is a separable (i.e., non-entangled) state ρ that accepts with probability 1, and the case that every separable state is accepted with probability at most 1−є. Equivalently, our algorithm takes the description of a subspace W ⊆ F^{n^{2}} (where F can be either the real or complex field) and distinguishes between the case that W contains a rank one matrix, and the case that every rank one matrix is at least є far (in ℓ_{2} distance) from W. To the best of our knowledge, this is the first improvement over the brute-force exp(n)-time algorithm for this problem. Our algorithm is based on the sum-of-squares hierarchy and its analysis is inspired by Lovett's proof (STOC '14, JACM '16) that the communication complexity of every rank-n Boolean matrix is bounded by Õ(√n). @InProceedings{STOC17p975, author = {Boaz Barak and Pravesh K. Kothari and David Steurer}, title = {Quantum Entanglement, Sum of Squares, and the Log Rank Conjecture}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {975--988}, doi = {}, year = {2017}, } 

Stöckel, Morten 
STOC '17: "Finding Even Cycles Faster ..."
Finding Even Cycles Faster via Capped k-Walks
Søren Dahlgaard, Mathias Bæk Tejs Knudsen, and Morten Stöckel (University of Copenhagen, Denmark) Finding cycles in graphs is a fundamental problem in algorithmic graph theory. In this paper, we consider the problem of finding and reporting a cycle of length 2k in an undirected graph G with n nodes and m edges for constant k ≥ 2. A classic result by Bondy and Simonovits [J. Combinatorial Theory, 1974] implies that if m ≥ 100k n^{1+1/k}, then G contains a 2k-cycle, further implying that one needs to consider only graphs with m = O(n^{1+1/k}). Previously the best known algorithms were an O(n^{2}) algorithm due to Yuster and Zwick [SIAM J. Discrete Math., 1997] as well as an O(m^{2−(1+⌈k/2⌉^{−1})/(k+1)}) algorithm by Alon et al. [Algorithmica, 1997]. We present an algorithm that uses O(m^{2k/(k+1)}) time and finds a 2k-cycle if one exists. This bound is O(n^{2}) exactly when m = Θ(n^{1+1/k}). When finding 4-cycles our new bound coincides with that of Alon et al., while for every k > 2 our new bound yields a polynomial improvement in m. Yuster and Zwick noted that it is “plausible to conjecture that O(n^{2}) is the best possible bound in terms of n”. We show “conditional optimality”: if this hypothesis holds then our O(m^{2k/(k+1)}) algorithm is tight as well. Furthermore, a folklore reduction implies that no combinatorial algorithm can determine if a graph contains a 6-cycle in time O(m^{3/2−ε}) for any ε > 0 unless boolean matrix multiplication can be solved combinatorially in time O(n^{3−ε′}) for some ε′ > 0, which is widely believed to be false. Coupled with our main result, this gives tight bounds for finding 6-cycles combinatorially and also separates the complexity of finding 4- and 6-cycles, giving evidence that the exponent of m in the running time should indeed increase with k. The key ingredient in our algorithm is a new notion of capped k-walks, which are walks of length k that visit only nodes according to a fixed ordering. 
Our main technical contribution is an involved analysis proving several properties of such walks which may be of independent interest. @InProceedings{STOC17p112, author = {Søren Dahlgaard and Mathias Bæk Tejs Knudsen and Morten Stöckel}, title = {Finding Even Cycles Faster via Capped k-Walks}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {112--120}, doi = {}, year = {2017}, } 
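For intuition on the smallest case k=2: a 4-cycle exists exactly when some pair of distinct vertices has at least two common neighbours. A brute-force check of this characterization (our own naive baseline, unrelated to the capped-walk algorithm of the paper) can be written as:

```python
from itertools import combinations

def has_4cycle(adj):
    """Return True iff the undirected simple graph (dict of adjacency sets)
    contains a C4: two distinct vertices u, v with at least two common
    neighbours w1, w2 close the 4-cycle u-w1-v-w2-u."""
    for u, v in combinations(list(adj), 2):
        if len(adj[u] & adj[v]) >= 2:
            return True
    return False

# The 4-cycle 0-1-2-3-0 itself.
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
# A star (a tree) has no cycles at all.
star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
print(has_4cycle(c4), has_4cycle(star))  # True False
```

This pairwise check takes O(n^2 · n) time in the worst case; the point of the paper is that such detection can be done in O(m^{2k/(k+1)}) time for every 2k-cycle length.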

Straszak, Damian 
STOC '17: "Real Stable Polynomials and ..."
Real Stable Polynomials and Matroids: Optimization and Counting
Damian Straszak and Nisheeth K. Vishnoi (EPFL, Switzerland) Several fundamental optimization and counting problems arising in computer science, mathematics and physics can be reduced to one of the following computational tasks involving polynomials and set systems: given oracle access to an m-variate real polynomial g and to a family of (multi-)subsets B of [m], (1) compute the sum of coefficients of monomials in g corresponding to all the sets that appear in B, or (2) find S ∈ B such that the monomial in g corresponding to S has the largest coefficient in g. Special cases of these problems, such as computing permanents and mixed discriminants, sampling from determinantal point processes, and maximizing subdeterminants with combinatorial constraints, have been topics of much recent interest in theoretical computer science. In this paper we present a general convex programming framework geared to solve both of these problems. Subsequently, we show that, roughly, when g is a real stable polynomial with non-negative coefficients and B is a matroid, the integrality gap of our convex relaxation is finite and depends only on m (and not on the coefficients of g). Prior to this work, such results were known only in important but sporadic cases that relied heavily on the structure of either g or B; it was not even a priori clear if one could formulate a convex relaxation that has a finite integrality gap beyond these special cases. Two notable examples are a result by Gurvits for real stable polynomials g when B contains one element, and a result by Nikolov and Singh for a family of multilinear real stable polynomials when B is the partition matroid. This work, which encapsulates almost all interesting cases of g and B, benefits from both – it is inspired by the latter in coming up with the right convex programming relaxation and the former in deriving the integrality gap. 
However, proving our results requires extensions of both; in that process we come up with new notions and connections between real stable polynomials and matroids which might be of independent interest. @InProceedings{STOC17p370, author = {Damian Straszak and Nisheeth K. Vishnoi}, title = {Real Stable Polynomials and Matroids: Optimization and Counting}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {370--383}, doi = {}, year = {2017}, } 

Sun, He 
STOC '17: "An SDP-Based Algorithm for ..."
An SDP-Based Algorithm for Linear-Sized Spectral Sparsification
Yin Tat Lee and He Sun (Microsoft Research, USA; University of Bristol, UK) For any undirected and weighted graph G=(V,E,w) with n vertices and m edges, we call a sparse subgraph H of G, with proper reweighting of the edges, a (1+ε)-spectral sparsifier if (1−ε)x^{T}L_{G}x ≤ x^{T}L_{H}x ≤ (1+ε)x^{T}L_{G}x holds for any x ∈ ℝ^{n}, where L_{G} and L_{H} are the respective Laplacian matrices of G and H. Noticing that Ω(m) time is needed for any algorithm to construct a spectral sparsifier and a spectral sparsifier of G requires Ω(n) edges, a natural question is to investigate, for any constant ε, if a (1+ε)-spectral sparsifier of G with O(n) edges can be constructed in Õ(m) time, where the Õ notation suppresses polylogarithmic factors. All previous constructions of spectral sparsifiers require either a superlinear number of edges or m^{1+Ω(1)} time. In this work we answer this question affirmatively by presenting an algorithm that, for any undirected graph G and ε>0, outputs a (1+ε)-spectral sparsifier of G with O(n/ε^{2}) edges in Õ(m/ε^{O(1)}) time. Our algorithm is based on three novel techniques: (1) a new potential function which is much easier to compute yet has similar guarantees as the potential functions used in previous references; (2) an efficient reduction from a two-sided spectral sparsifier to a one-sided spectral sparsifier; (3) constructing a one-sided spectral sparsifier by a semidefinite program. @InProceedings{STOC17p678, author = {Yin Tat Lee and He Sun}, title = {An SDP-Based Algorithm for Linear-Sized Spectral Sparsification}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {678--687}, doi = {}, year = {2017}, } 
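The defining inequality can be checked directly on a toy instance. The sketch below is our own illustration (the choice of K6 and a reweighted 6-cycle is an assumption, not from the paper): for x orthogonal to the all-ones vector, x^T L_{K6} x = 6‖x‖² since L_{K_n} = nI − J, so the quadratic-form ratios reduce to eigenvalue ratios.

```python
import numpy as np

def laplacian(n, weighted_edges):
    # Build a graph Laplacian from (u, v, weight) triples.
    L = np.zeros((n, n))
    for u, v, w in weighted_edges:
        L[u, u] += w; L[v, v] += w
        L[u, v] -= w; L[v, u] -= w
    return L

n = 6
L_G = laplacian(n, [(u, v, 1.0) for u in range(n) for v in range(u + 1, n)])  # K6
# Candidate sparsifier H: the 6-cycle, reweighted to balance the smallest
# and largest quadratic-form ratios against K6.
L_H = laplacian(n, [(i, (i + 1) % n, 2.4) for i in range(n)])

# On the space orthogonal to the all-ones kernel, x^T L_G x = n ||x||^2, so
# x^T L_H x / x^T L_G x ranges over the nonzero eigenvalues of L_H / n.
ratios = np.sort(np.linalg.eigvalsh(L_H))[1:] / n
eps = max(1.0 - ratios.min(), ratios.max() - 1.0)
print(round(ratios.min(), 6), round(ratios.max(), 6), round(eps, 6))  # 0.4 1.6 0.6
```

So this particular reweighted cycle is a (1+ε)-spectral sparsifier of K6 with ε = 0.6; the theorem above says O(n/ε²)-edge sparsifiers with any constant ε exist and can be found in nearly linear time.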

Sun, Xiaorui 
STOC '17: "Efficient Massively Parallel ..."
Efficient Massively Parallel Methods for Dynamic Programming
Sungjin Im, Benjamin Moseley, and Xiaorui Sun (University of California at Merced, USA; Washington University in St. Louis, USA; Simons Institute for the Theory of Computing, Berkeley, USA) Modern science and engineering are driven by massively large data sets, and their advance relies heavily on massively parallel computing platforms such as Spark, MapReduce, and Hadoop. Theoretical models have been proposed to understand the power and limitations of such platforms. Recent study of these theoretical models has led to the discovery of new algorithms that are fast and efficient in both theory and practice, thereby beginning to unlock their underlying power. Given recent promising results, the area has turned its focus on discovering widely applicable algorithmic techniques for solving problems efficiently. In this paper we make progress towards this goal by giving a principled framework for simulating sequential dynamic programs in the distributed setting. In particular, we identify two key properties, monotonicity and decomposability, which allow us to derive efficient distributed algorithms for problems possessing the properties. We showcase our framework by considering several core dynamic programming applications: Longest Increasing Subsequence, Optimal Binary Search Tree, and Weighted Interval Selection. For these problems, we derive algorithms yielding solutions that are arbitrarily close to the optimum, using O(1) rounds and Õ(n/m) memory on each machine, where n is the input size and m is the number of machines available. @InProceedings{STOC17p798, author = {Sungjin Im and Benjamin Moseley and Xiaorui Sun}, title = {Efficient Massively Parallel Methods for Dynamic Programming}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {798--811}, doi = {}, year = {2017}, } 
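For reference, the sequential dynamic program for one of the showcased applications, Longest Increasing Subsequence, runs in O(n log n) time via patience sorting. This baseline (our own sketch, not the paper's distributed simulation) is the kind of sequential DP the framework above parallelizes.

```python
import bisect

def lis_length(seq):
    """Length of a longest strictly increasing subsequence in O(n log n).
    tails[k] holds the smallest possible tail value of an increasing
    subsequence of length k+1 seen so far; tails stays sorted, so each
    element is placed by binary search."""
    tails = []
    for x in seq:
        i = bisect.bisect_left(tails, x)
        if i == len(tails):
            tails.append(x)   # x extends the longest subsequence so far
        else:
            tails[i] = x      # x gives a smaller tail for length i+1
    return len(tails)

print(lis_length([3, 1, 4, 1, 5, 9, 2, 6]))  # 4  (e.g. 1, 4, 5, 9)
```

Note this sequential DP is inherently order-dependent, which is exactly why simulating it in O(1) rounds of a massively parallel model is nontrivial.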

Syrgkanis, Vasilis 
STOC '17: "Fast Convergence of Learning ..."
Fast Convergence of Learning in Games (Invited Talk)
Vasilis Syrgkanis (Microsoft Research, USA) A plethora of recent work has analyzed properties of outcomes in games when each player employs a no-regret learning algorithm. Many algorithms achieve regret against the best fixed action in hindsight that decays at a rate of O(1/√T), when the game is played for T iterations. The latter rate is optimal in adversarial settings. However, in a game a player's opponents are minimizing their own regret, rather than maximizing the player's regret. (Daskalakis et al. 2014) and (Rakhlin and Sridharan 2013) showed that in two-player zero-sum games O(1/T) rates are achievable. In (Syrgkanis et al. 2015), we show that O(1/T^{3/4}) rates are achievable in general multi-player games and also analyze convergence of the dynamics to approximately optimal social welfare, where we show a convergence rate of O(1/T). The latter result was subsequently generalized to a broader class of learning algorithms by (Foster et al. 2016). This talk is based on joint work with Alekh Agarwal, Haipeng Luo and Robert E. Schapire. @InProceedings{STOC17p5, author = {Vasilis Syrgkanis}, title = {Fast Convergence of Learning in Games (Invited Talk)}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {5--5}, doi = {}, year = {2017}, } 

Tal, Avishay 
STOC '17: "Formula Lower Bounds via the ..."
Formula Lower Bounds via the Quantum Method
Avishay Tal (IAS, USA) A de Morgan formula over Boolean variables x_{1}, …, x_{n} is a binary tree whose internal nodes are marked with AND or OR gates and whose leaves are marked with variables or their negations. We define the size of the formula as the number of leaves in it. Proving that some explicit function (in P or NP) requires a large formula is a central open question in computational complexity. While we believe that some explicit functions require exponential formula size, currently the best lower bound for an explicit function is the Ω(n^{3}) lower bound for Andreev's function. A long line of work in quantum query complexity, culminating in the work of Reichardt [SODA, 2011], proved that for any formula of size s, there exists a polynomial of degree at most O(√s) that approximates the formula up to a small pointwise error. This is a classical theorem, arguing about polynomials and formulae; however, the only known proof for it involves quantum algorithms. We apply Reichardt's result to obtain the following: (1) We show how to trade average-case hardness in exchange for size. More precisely, we show that if a function f cannot be computed correctly on more than 1/2 + 2^{−k} of the inputs by any formula of size at most s, then computing f exactly requires formula size at least Ω(k) · s. As an application, we improve the state-of-the-art formula size lower bounds for explicit functions by a factor of Ω(log n). (2) We prove that the bipartite formula size of the Inner-Product function is Ω(n^{2}). (A bipartite formula on Boolean variables x_{1}, …, x_{n} and y_{1}, …, y_{n} is a binary tree whose internal nodes are marked with AND or OR gates and whose leaves can compute any function of either the x or y variables.) We show that any bipartite formula for the Inner-Product modulo 2 function, namely IP(x,y) = ∑_{i=1}^{n} x_{i} y_{i} (mod 2), must be of size Ω(n^{2}), which is tight up to logarithmic factors. 
To the best of our knowledge, this is the first super-linear lower bound on the bipartite formula complexity of any explicit function. @InProceedings{STOC17p1256, author = {Avishay Tal}, title = {Formula Lower Bounds via the Quantum Method}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1256--1268}, doi = {}, year = {2017}, } STOC '17: "Time-Space Hardness of Learning ..." Time-Space Hardness of Learning Sparse Parities Gillat Kol, Ran Raz, and Avishay Tal (Princeton University, USA; IAS, USA) We define a concept class F to be time-space hard (or memory-samples hard) if any learning algorithm for F requires either a memory of size super-linear in n or a number of samples super-polynomial in n, where n is the length of one sample. A recent work shows that the class of all parity functions is time-space hard [Raz, FOCS'16]. Building on [Raz, FOCS'16], we show that the class of all sparse parities of Hamming weight ℓ is time-space hard, as long as ℓ ≥ ω(log n / log log n). Consequently, linear-size DNF formulas, linear-size decision trees and logarithmic-size juntas are all time-space hard. Our result is more general and provides time-space lower bounds for learning any concept class of parity functions. We give applications of our results in the field of bounded-storage cryptography. For example, for every ω(log n) ≤ k ≤ n, we obtain an encryption scheme that requires a private key of length k, and time complexity of n per encryption/decryption of each bit, and is provably and unconditionally secure as long as the attacker uses at most o(nk) memory bits and the scheme is used at most 2^{o(k)} times. Previously, this was known only for k=n [Raz, FOCS'16]. @InProceedings{STOC17p1067, author = {Gillat Kol and Ran Raz and Avishay Tal}, title = {Time-Space Hardness of Learning Sparse Parities}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1067--1080}, doi = {}, year = {2017}, } 

Talgam-Cohen, Inbal 
STOC '17: "Why Prices Need Algorithms ..."
Why Prices Need Algorithms (Invited Talk)
Tim Roughgarden and Inbal Talgam-Cohen (Stanford University, USA; Hebrew University of Jerusalem, Israel) Computational complexity has already had plenty to say about the computation of economic equilibria. However, understanding when equilibria are guaranteed to exist is a central theme in economic theory, seemingly unrelated to computation. In this talk we survey our main results presented at EC'15, which show that the existence of equilibria in markets is inextricably connected to the computational complexity of related optimization problems, such as revenue or welfare maximization. We demonstrate how this relationship implies, under suitable complexity assumptions, a host of impossibility results. We also suggest a complexity-theoretic explanation for the lack of useful extensions of the Walrasian equilibrium concept: such extensions seem to require the invention of novel polynomial-time algorithms for welfare maximization. @InProceedings{STOC17p2, author = {Tim Roughgarden and Inbal Talgam-Cohen}, title = {Why Prices Need Algorithms (Invited Talk)}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {2--2}, doi = {}, year = {2017}, } STOC '17: "Approximate Modularity Revisited ..." Approximate Modularity Revisited Uriel Feige, Michal Feldman, and Inbal Talgam-Cohen (Weizmann Institute of Science, Israel; Microsoft Research, Israel; Tel Aviv University, Israel; Hebrew University of Jerusalem, Israel) Set functions with convenient properties (such as submodularity) appear in application areas of current interest, such as algorithmic game theory, and allow for improved optimization algorithms. It is natural to ask (e.g., in the context of data-driven optimization) how robust such properties are, and whether small deviations from them can be tolerated. We consider two such questions in the important special case of linear set functions. 
One question that we address is whether any set function that approximately satisfies the modularity equation (linear functions satisfy the modularity equation exactly) is close to a linear function. The answer to this is positive (in a precise formal sense), as shown by Kalton and Roberts [1983] (and further improved by Bondarenko, Prymak, and Radchenko [2013]). We revisit their proof idea, which is based on expander graphs, and provide significantly stronger upper bounds by combining it with new techniques. Furthermore, we provide improved lower bounds for this problem. Another question that we address is that of how to learn a linear function h that is close to an approximately linear function f, while querying the value of f on only a small number of sets. We present a deterministic algorithm that makes only linearly many (in the number of items) non-adaptive queries, thereby improving on a previous algorithm of Chierichetti, Das, Dasgupta and Kumar [2015] that is randomized and makes more than a quadratic number of queries. Our learning algorithm is based on a Hadamard transform. @InProceedings{STOC17p1028, author = {Uriel Feige and Michal Feldman and Inbal Talgam-Cohen}, title = {Approximate Modularity Revisited}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1028--1041}, doi = {}, year = {2017}, } 
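As a warm-up to the learning question above, a modular (linear) set function is determined by its values on the empty set and the singletons, so n+1 non-adaptive queries already suffice to build a hypothesis when f is exactly modular. The sketch below illustrates this baseline; it is not the Hadamard-transform algorithm of the paper, and the example function f with its small perturbation is invented for illustration.

```python
def learn_linear(f, n):
    """Build a modular hypothesis h from n+1 non-adaptive queries:
    f on the empty set and on each singleton {i}. If f is close to
    modular, h stays close to f (simplified illustration only)."""
    base = f(frozenset())
    weight = {i: f(frozenset([i])) - base for i in range(n)}
    return lambda S: base + sum(weight[i] for i in S)

# Hypothetical example: an exactly modular function plus a tiny
# parity-dependent perturbation.
f = lambda S: 3 * len(S) + (0.01 if len(S) % 2 else -0.01)
h = learn_linear(f, 4)
```

Since the perturbation is bounded by 0.01, the hypothesis h agrees with f up to a small additive error on every set, e.g. on {0, 2} the gap is a few hundredths.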

Ta-Shma, Amnon 
STOC '17: "An Efficient Reduction from ..."
An Efficient Reduction from Two-Source to Non-malleable Extractors: Achieving Near-Logarithmic Min-entropy
Avraham Ben-Aroya, Dean Doron, and Amnon Ta-Shma (Tel Aviv University, Israel) The breakthrough result of Chattopadhyay and Zuckerman (2016) gives a reduction from the construction of explicit two-source extractors to the construction of explicit non-malleable extractors. However, even assuming the existence of optimal explicit non-malleable extractors only gives a two-source extractor (or a Ramsey graph) for poly(log n) entropy, rather than the optimal O(log n). In this paper we modify the construction to overcome the above barrier. Using the currently best explicit non-malleable extractors, we get explicit bipartite Ramsey graphs for sets of size 2^{k}, for k = O(log n log log n). Any further improvement in the construction of non-malleable extractors would immediately yield a corresponding two-source extractor. Intuitively, Chattopadhyay and Zuckerman use an extractor as a sampler, and we observe that one could use a weaker object – a somewhere-random condenser with a small entropy gap and a very short seed. We also show how to explicitly construct this weaker object using the error reduction technique of Raz, Reingold and Vadhan (1999), and the constant-degree dispersers of Zuckerman (2006) that also work against extremely small tests. @InProceedings{STOC17p1185, author = {Avraham Ben-Aroya and Dean Doron and Amnon Ta-Shma}, title = {An Efficient Reduction from Two-Source to Non-malleable Extractors: Achieving Near-Logarithmic Min-entropy}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1185--1194}, doi = {}, year = {2017}, } STOC '17: "Explicit, Almost Optimal, ..." Explicit, Almost Optimal, Epsilon-Balanced Codes Amnon Ta-Shma (Tel Aviv University, Israel) The question of finding an epsilon-biased set with close to optimal support size, or, equivalently, finding an explicit binary code with distance 1/2 − є/2 and rate close to the Gilbert-Varshamov bound, attracted a lot of attention in recent decades. 
In this paper we solve the problem almost optimally and show an explicit є-biased set over k bits with support size O(k/є^{2+o(1)}). This improves upon all previous explicit constructions, which were of order k^{2}/є^{2}, k/є^{3} or k^{5/4}/є^{5/2}. The result is close to the Gilbert-Varshamov bound, which is O(k/є^{2}), and to the lower bound, which is Ω(k/(є^{2} log(1/є))). The main technical tool we use is bias amplification with the s-wide replacement product. The sum of two independent samples from an є-biased set is є^{2}-biased. Rozenman and Wigderson showed how to amplify the bias more economically by choosing the two samples via a random walk on an expander. Based on this, they suggested a recursive construction that achieves sample size O(k/є^{4}). We show that amplification with a long random walk over the s-wide replacement product reduces the bias almost optimally. @InProceedings{STOC17p238, author = {Amnon Ta-Shma}, title = {Explicit, Almost Optimal, Epsilon-Balanced Codes}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {238--251}, doi = {}, year = {2017}, } 
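For intuition about the objects in this abstract: the bias of a set S ⊆ {0,1}^k is the maximum, over nonzero linear tests α, of |E_{x∈S}(−1)^{⟨α,x⟩}|, and XOR-ing two independent samples squares this quantity exactly, which is the starting point of the amplification story. A minimal brute-force sketch (the test set S is an arbitrary made-up example; this is feasible only for tiny k):

```python
from itertools import product

def bias(S, k):
    """Bias of a multiset S of k-bit strings: max over nonzero tests
    alpha of |E_{x in S} (-1)^{<alpha, x>}| (brute force, tiny k only)."""
    best = 0.0
    for alpha in product([0, 1], repeat=k):
        if not any(alpha):
            continue  # skip the trivial all-zero test
        s = sum((-1) ** sum(a & x for a, x in zip(alpha, xs)) for xs in S)
        best = max(best, abs(s) / len(S))
    return best

def xor_pairs(S):
    """All XORs of two independent samples from S; its bias is bias(S)**2."""
    return [tuple(a ^ b for a, b in zip(x, y)) for x in S for y in S]

k = 3
S = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 1)]  # made-up example set
e1 = bias(S, k)            # bias of S
e2 = bias(xor_pairs(S), k) # bias after one XOR step: exactly e1**2
```

The squaring follows because E_{x,y}(−1)^{⟨α,x⊕y⟩} factors into (E_x(−1)^{⟨α,x⟩})²; the abstract's point is that a random walk does the same amplification far more economically than independent sampling.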

Thierauf, Thomas 
STOC '17: "Linear Matroid Intersection ..."
Linear Matroid Intersection Is in Quasi-NC
Rohit Gurjar and Thomas Thierauf (Tel Aviv University, Israel; Aalen University, Germany) Given two matroids on the same ground set, the matroid intersection problem asks to find a common independent set of maximum size. We show that the linear matroid intersection problem is in quasi-NC^{2}. That is, it has uniform circuits of quasi-polynomial size n^{O(log n)} and O(log^{2} n) depth. This generalizes the similar result for the bipartite perfect matching problem. We do this by an almost complete derandomization of the Isolation Lemma for matroid intersection. Our result also implies a black-box singularity test for symbolic matrices of the form A_{0}+A_{1} z_{1} +A_{2} z_{2}+ ⋯ +A_{m} z_{m}, where A_{0} is an arbitrary matrix and the matrices A_{1},A_{2},…,A_{m} are of rank 1 over some field. @InProceedings{STOC17p821, author = {Rohit Gurjar and Thomas Thierauf}, title = {Linear Matroid Intersection Is in Quasi-NC}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {821--830}, doi = {}, year = {2017}, } 

Touchette, Dave 
STOC '17: "Exponential Separation of ..."
Exponential Separation of Quantum Communication and Classical Information
Anurag Anshu, Dave Touchette, Penghui Yao, and Nengkun Yu (National University of Singapore, Singapore; University of Waterloo, Canada; Perimeter Institute for Theoretical Physics, Canada; University of Maryland, USA; University of Technology Sydney, Australia) We exhibit a Boolean function for which the quantum communication complexity is exponentially larger than the classical information complexity. An exponential separation in the other direction was already known from the work of Kerenidis et al. [SICOMP 44, pp. 1550–1572], hence our work implies that these two complexity measures are incomparable. As classical information complexity is an upper bound on quantum information complexity, which in turn is equal to amortized quantum communication complexity, our work implies that a tight direct sum result for distributional quantum communication complexity cannot hold. The function we use to present such a separation is the Symmetric k-ary Pointer Jumping function introduced by Rao and Sinha [ECCC TR15-057], whose classical communication complexity is exponentially larger than its classical information complexity. In this paper, we show that the quantum communication complexity of this function is polynomially equivalent to its classical communication complexity. The high-level idea behind our proof is arguably the simplest so far for such an exponential separation between information and communication, driven by a sequence of round-elimination arguments, allowing us to simplify further the approach of Rao and Sinha. As another application of the techniques that we develop, a simple proof for an optimal tradeoff between Alice’s and Bob’s communication is given, even when allowing pre-shared entanglement, while computing the related Greater-Than function on n bits: say Bob communicates at most b bits, then Alice must send n/2^{O(b)} bits to Bob. We also present a classical protocol achieving this bound. 
@InProceedings{STOC17p277, author = {Anurag Anshu and Dave Touchette and Penghui Yao and Nengkun Yu}, title = {Exponential Separation of Quantum Communication and Classical Information}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {277--288}, doi = {}, year = {2017}, } 

Umans, Chris 
STOC '17: "Targeted Pseudorandom Generators, ..."
Targeted Pseudorandom Generators, Simulation Advice Generators, and Derandomizing Logspace
William M. Hoza and Chris Umans (University of Texas at Austin, USA; California Institute of Technology, USA) Assume that for every derandomization result for logspace algorithms, there is a pseudorandom generator strong enough to nearly recover the derandomization by iterating over all seeds and taking a majority vote. We prove under a precise version of this assumption that BPL ⊆ ∩_{α > 0} DSPACE(log^{1 + α} n). We strengthen the theorem to an equivalence by considering two generalizations of the concept of a pseudorandom generator against logspace. A targeted pseudorandom generator against logspace takes as input a short uniform random seed and a finite automaton; it outputs a long bitstring that looks random to that particular automaton. A simulation advice generator for logspace stretches a small uniform random seed into a long advice string; the requirement is that there is some logspace algorithm that, given a finite automaton and this advice string, simulates the automaton reading a long uniform random input. We prove that ∩_{α > 0} prBPSPACE(log^{1 + α} n) = ∩_{α > 0} prDSPACE(log^{1 + α} n) if and only if for every targeted pseudorandom generator against logspace, there is a simulation advice generator for logspace with similar parameters. Finally, we observe that in a certain uniform setting (namely, if we only worry about sequences of automata that can be generated in logspace), targeted pseudorandom generators against logspace can be transformed into simulation advice generators with similar parameters. @InProceedings{STOC17p629, author = {William M. Hoza and Chris Umans}, title = {Targeted Pseudorandom Generators, Simulation Advice Generators, and Derandomizing Logspace}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {629--640}, doi = {}, year = {2017}, } 

Valiant, Gregory 
STOC '17: "Learning from Untrusted Data ..."
Learning from Untrusted Data
Moses Charikar, Jacob Steinhardt, and Gregory Valiant (Stanford University, USA) The vast majority of theoretical results in machine learning and statistics assume that the training data is a reliable reflection of the phenomena to be learned. Similarly, most learning techniques used in practice are brittle to the presence of large amounts of biased or malicious data. Motivated by this, we consider two frameworks for studying estimation, learning, and optimization in the presence of significant fractions of arbitrary data. The first framework, list-decodable learning, asks whether it is possible to return a list of answers such that at least one is accurate. For example, given a dataset of n points for which an unknown subset of αn points are drawn from a distribution of interest, and no assumptions are made about the remaining (1−α)n points, is it possible to return a list of poly(1/α) answers? The second framework, which we term the semi-verified model, asks whether a small dataset of trusted data (drawn from the distribution in question) can be used to extract accurate information from a much larger but untrusted dataset (of which only an α-fraction is drawn from the distribution). We show strong positive results in both settings, and provide an algorithm for robust learning in a very general stochastic optimization setting. This result has immediate implications for robustly estimating the mean of distributions with bounded second moments, robustly learning mixtures of such distributions, and robustly finding planted partitions in random graphs in which significant portions of the graph have been perturbed by an adversary. @InProceedings{STOC17p47, author = {Moses Charikar and Jacob Steinhardt and Gregory Valiant}, title = {Learning from Untrusted Data}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {47--60}, doi = {}, year = {2017}, } 

Vasudevan, Prashant Nalini 
STOC '17: "Average-Case Fine-Grained ..."
Average-Case Fine-Grained Hardness
Marshall Ball, Alon Rosen, Manuel Sabin, and Prashant Nalini Vasudevan (Columbia University, USA; IDC Herzliya, Israel; University of California at Berkeley, USA; Massachusetts Institute of Technology, USA) We present functions that can be computed in some fixed polynomial time but are hard on average for any algorithm that runs in slightly smaller time, assuming widely conjectured worst-case hardness for problems from the study of fine-grained complexity. Unconditional constructions of such functions are known from before (Goldmann et al., IPL ’94), but these have been canonical functions that have not found further use, while our functions are closely related to well-studied problems and have considerable algebraic structure. Based on the average-case hardness and structural properties of our functions, we outline the construction of a Proof of Work scheme and discuss possible approaches to constructing fine-grained One-Way Functions. We also show how our reductions make conjectures regarding the worst-case hardness of the problems we reduce from (and consequently the Strong Exponential Time Hypothesis) heuristically falsifiable in a sense similar to that of (Naor, CRYPTO ’03). We prove our hardness results in each case by showing fine-grained reductions from solving one of three problems – namely, Orthogonal Vectors (OV), 3SUM, and All-Pairs Shortest Paths (APSP) – in the worst case to computing our function correctly on a uniformly random input. The conjectured hardness of OV and 3SUM then gives us functions that require n^{2−o(1)} time to compute on average, and that of APSP gives us a function that requires n^{3−o(1)} time. Using the same techniques we also obtain a conditional average-case time hierarchy of functions. @InProceedings{STOC17p483, author = {Marshall Ball and Alon Rosen and Manuel Sabin and Prashant Nalini Vasudevan}, title = {Average-Case Fine-Grained Hardness}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {483--496}, doi = {}, year = {2017}, } 
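For readers unfamiliar with the problems these reductions start from, Orthogonal Vectors is easy to state concretely: the naive quadratic scan below is essentially the best known, and the conjecture that no n^{2−ε}-time algorithm exists is one of the worst-case assumptions underlying the n^{2−o(1)} average-case bound above. A minimal sketch (the instances are made up for illustration):

```python
def has_orthogonal_pair(A, B):
    """Orthogonal Vectors (OV): given two lists of d-dimensional 0/1
    vectors, decide whether some a in A and b in B satisfy <a, b> = 0.
    This is the naive O(n^2 * d) scan; fine-grained complexity
    conjectures that no O(n^{2-eps}) algorithm exists."""
    return any(all(x * y == 0 for x, y in zip(a, b)) for a in A for b in B)

# Hypothetical tiny instance: (1,0,1) and (0,1,0) are orthogonal.
A = [(1, 0, 1), (0, 1, 1)]
B = [(1, 1, 0), (0, 1, 0)]
```

The paper's reductions turn a worst-case OV instance like this into the task of evaluating an algebraically structured function on uniformly random inputs.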

Vazirani, Vijay V. 
STOC '17: "Settling the Complexity of ..."
Settling the Complexity of Leontief and PLC Exchange Markets under Exact and Approximate Equilibria
Jugal Garg, Ruta Mehta, Vijay V. Vazirani, and Sadra Yazdanbod (University of Illinois at Urbana-Champaign, USA; Georgia Institute of Technology, USA) Our first result shows membership in PPAD for the problem of computing approximate equilibria for an Arrow-Debreu exchange market for piecewise-linear concave (PLC) utility functions. As a corollary we also obtain membership in PPAD for Leontief utility functions. This settles an open question of Vazirani and Yannakakis (2011). Next we show FIXP-hardness of computing equilibria in Arrow-Debreu exchange markets under Leontief utility functions, and Arrow-Debreu markets under linear utility functions and Leontief production sets, thereby settling these open questions of Vazirani and Yannakakis (2011). As corollaries, we obtain FIXP-hardness for PLC utilities and for Arrow-Debreu markets under linear utility functions and polyhedral production sets. In all cases, as required under FIXP, the set of instances mapped onto will admit equilibria, i.e., will be "yes" instances. If all instances are under consideration, then in all cases we prove that the problem of deciding if a given instance admits an equilibrium is ETR-complete, where ETR is the class Existential Theory of the Reals. As a consequence of the results stated above, and the fact that membership in FIXP has been established for PLC utilities, the entire computational difficulty of Arrow-Debreu markets under PLC utility functions lies in the Leontief utility subcase. This is perhaps the most unexpected aspect of our result, since Leontief utilities are meant for the case that goods are perfect complements, whereas PLC utilities are very general, capturing not only the cases when goods are complements and substitutes, but also arbitrary combinations of these and much more. Finally, we give a polynomial-time algorithm for finding an equilibrium in Arrow-Debreu exchange markets under Leontief utility functions provided the number of agents is a constant. 
This settles part of an open problem of Devanur and Kannan (2008). @InProceedings{STOC17p890, author = {Jugal Garg and Ruta Mehta and Vijay V. Vazirani and Sadra Yazdanbod}, title = {Settling the Complexity of Leontief and PLC Exchange Markets under Exact and Approximate Equilibria}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {890--901}, doi = {}, year = {2017}, } 

Végh, László A. 
STOC '17: "A Simpler and Faster Strongly ..."
A Simpler and Faster Strongly Polynomial Algorithm for Generalized Flow Maximization
Neil Olver and László A. Végh (VU University Amsterdam, Netherlands; CWI, Netherlands; London School of Economics, UK) We present a new strongly polynomial algorithm for generalized flow maximization. The first strongly polynomial algorithm for this problem was given very recently by Végh; our new algorithm is much simpler and much faster. The complexity bound O((m + n log n)mn log(n^{2}/m)) improves on the previous estimate obtained by Végh by almost a factor of O(n^{2}). Even for small numerical parameter values, our algorithm is essentially as fast as the best weakly polynomial algorithms. The key new technical idea is relaxing the primal feasibility conditions. This allows us to work almost exclusively with integral flows, in contrast to all previous algorithms. @InProceedings{STOC17p100, author = {Neil Olver and László A. Végh}, title = {A Simpler and Faster Strongly Polynomial Algorithm for Generalized Flow Maximization}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {100--111}, doi = {}, year = {2017}, } 

Vempala, Santosh S. 
STOC '17: "Geodesic Walks in Polytopes ..."
Geodesic Walks in Polytopes
Yin Tat Lee and Santosh S. Vempala (Microsoft Research, USA; University of Washington, USA; Georgia Institute of Technology, USA) We introduce the geodesic walk for sampling Riemannian manifolds and apply it to the problem of generating uniform random points from the interior of polytopes in R^{n} specified by m inequalities. The walk is a discrete-time simulation of a stochastic differential equation (SDE) on the Riemannian manifold equipped with the metric induced by the Hessian of a convex function; each step is the solution of an ordinary differential equation (ODE). The resulting sampling algorithm for polytopes mixes in O^{*}(mn^{3/4}) steps. This is the first walk that breaks the quadratic barrier for mixing in high dimension, improving on the previous best bound of O^{*}(mn) by Kannan and Narayanan for the Dikin walk. We also show that each step of the geodesic walk (solving an ODE) can be implemented efficiently, thus improving the time complexity for sampling polytopes. Our analysis of the geodesic walk for general Hessian manifolds does not assume positive curvature and might be of independent interest. @InProceedings{STOC17p927, author = {Yin Tat Lee and Santosh S. Vempala}, title = {Geodesic Walks in Polytopes}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {927--940}, doi = {}, year = {2017}, } 

Venkitasubramaniam, Muthuramakrishnan 
STOC '17: "Equivocating Yao: Constant-Round ..."
Equivocating Yao: Constant-Round Adaptively Secure Multiparty Computation in the Plain Model
Ran Canetti, Oxana Poburinnaya, and Muthuramakrishnan Venkitasubramaniam (Boston University, USA; Tel Aviv University, Israel; University of Rochester, USA) Yao's circuit garbling scheme is one of the basic building blocks of cryptographic protocol design. Originally designed to enable two-message, two-party secure computation, the scheme has been extended in many ways and has innumerable applications. Still, a basic question has remained open throughout the years: Can the scheme be extended to guarantee security in the face of an adversary that corrupts both parties, adaptively, as the computation proceeds? We provide a positive answer to this question. We define a new type of encryption, called functionally equivocal encryption (FEE), and show that when Yao's scheme is implemented with an FEE as the underlying encryption mechanism, it becomes secure against such adaptive adversaries. We then show how to implement FEE from any one-way function. Combining our scheme with non-committing encryption, we obtain the first two-message, two-party computation protocol, and the first constant-round multiparty computation protocol, in the plain model, that are secure against semi-honest adversaries who can adaptively corrupt all parties. A number of extensions and applications are described within. @InProceedings{STOC17p497, author = {Ran Canetti and Oxana Poburinnaya and Muthuramakrishnan Venkitasubramaniam}, title = {Equivocating Yao: Constant-Round Adaptively Secure Multiparty Computation in the Plain Model}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {497--509}, doi = {}, year = {2017}, } 

Vidick, Thomas 
STOC '17: "Hardness Amplification for ..."
Hardness Amplification for Entangled Games via Anchoring
Mohammad Bavarian, Thomas Vidick, and Henry Yuen (Massachusetts Institute of Technology, USA; California Institute of Technology, USA; University of California at Berkeley, USA) We study the parallel repetition of one-round games involving players that can use quantum entanglement. A major open question in this area is whether parallel repetition reduces the entangled value of a game at an exponential rate — in other words, does an analogue of Raz’s parallel repetition theorem hold for games with players sharing quantum entanglement? Previous results only apply to special classes of games. We introduce a class of games we call anchored. We then introduce a simple transformation on games called anchoring, inspired in part by the Feige-Kilian transformation, that turns any (multiplayer) game into an anchored game. Unlike the Feige-Kilian transformation, our anchoring transformation is completeness preserving. We prove an exponential-decay parallel repetition theorem for anchored games that involve any number of entangled players. We also prove a threshold version of our parallel repetition theorem for anchored games. Together, our parallel repetition theorems and anchoring transformation provide the first hardness amplification techniques for general entangled games. We give an application to the games version of the Quantum PCP Conjecture. @InProceedings{STOC17p303, author = {Mohammad Bavarian and Thomas Vidick and Henry Yuen}, title = {Hardness Amplification for Entangled Games via Anchoring}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {303--316}, doi = {}, year = {2017}, } STOC '17: "A Quantum Linearity Test for ..." A Quantum Linearity Test for Robustly Verifying Entanglement Anand Natarajan and Thomas Vidick (Massachusetts Institute of Technology, USA; California Institute of Technology, USA) We introduce a simple two-player test which certifies that the players apply tensor products of Pauli σ_{X} and σ_{Z} observables on the tensor product of n EPR pairs. 
The test has constant robustness: any strategy achieving success probability within an additive ε of the optimal must be poly(ε)-close, in the appropriate distance measure, to the honest n-qubit strategy. The test involves 2n-bit questions and 2-bit answers. The key technical ingredient is a quantum version of the classical linearity test of Blum, Luby, and Rubinfeld. As applications of our result we give (i) the first robust self-test for n EPR pairs; (ii) a quantum multi-prover interactive proof system for the local Hamiltonian problem with a constant number of provers and classical questions and answers, and a constant completeness-soundness gap independent of system size; (iii) a robust protocol for verifiable delegated quantum computation with a constant number of quantum polynomial-time provers sharing entanglement. @InProceedings{STOC17p1003, author = {Anand Natarajan and Thomas Vidick}, title = {A Quantum Linearity Test for Robustly Verifying Entanglement}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1003--1015}, doi = {}, year = {2017}, } 
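The classical Blum-Luby-Rubinfeld test that this paper "quantizes" is easy to state in code: sample x and y uniformly and check f(x) ⊕ f(y) = f(x ⊕ y); linear functions always pass, and functions far from linear fail with constant probability per trial. A minimal sketch (the trial count, seed, and example functions are arbitrary illustrative choices, not from the paper):

```python
import random

def blr_test(f, n, trials=100, seed=0):
    """Classical Blum-Luby-Rubinfeld linearity test over GF(2)^n:
    sample x, y uniformly from n-bit strings (encoded as ints) and
    check f(x) ^ f(y) == f(x ^ y). Returns True iff every sampled
    triple is consistent with linearity."""
    rng = random.Random(seed)
    for _ in range(trials):
        x = rng.getrandbits(n)
        y = rng.getrandbits(n)
        if f(x) ^ f(y) != f(x ^ y):
            return False
    return True

# Hypothetical examples on 3 bits: a linear function (parity of a
# fixed subset of bits) and a copy corrupted at one input point.
linear = lambda x: bin(x & 0b101).count("1") % 2
corrupted = lambda x: 1 - linear(x) if x == 0b011 else linear(x)
```

A single-point corruption on {0,1}^3 violates the check on a constant fraction of sampled triples, so 100 trials detect it with overwhelming probability.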

Vishnoi, Nisheeth K. 
STOC '17: "Real Stable Polynomials and ..."
Real Stable Polynomials and Matroids: Optimization and Counting
Damian Straszak and Nisheeth K. Vishnoi (EPFL, Switzerland) Several fundamental optimization and counting problems arising in computer science, mathematics and physics can be reduced to one of the following computational tasks involving polynomials and set systems: given oracle access to an m-variate real polynomial g and to a family of (multi-)subsets B of [m], (1) compute the sum of coefficients of monomials in g corresponding to all the sets that appear in B, or (2) find S ∈ B such that the monomial in g corresponding to S has the largest coefficient in g. Special cases of these problems, such as computing permanents and mixed discriminants, sampling from determinantal point processes, and maximizing subdeterminants with combinatorial constraints, have been topics of much recent interest in theoretical computer science. In this paper we present a general convex programming framework geared to solve both of these problems. Subsequently, we show that, roughly, when g is a real stable polynomial with nonnegative coefficients and B is a matroid, the integrality gap of our convex relaxation is finite and depends only on m (and not on the coefficients of g). Prior to this work, such results were known only in important but sporadic cases that relied heavily on the structure of either g or B; it was not even a priori clear if one could formulate a convex relaxation that has a finite integrality gap beyond these special cases. Two notable examples are a result by Gurvits for real stable polynomials g when B contains one element, and a result by Nikolov and Singh for a family of multilinear real stable polynomials when B is the partition matroid. This work, which encapsulates almost all interesting cases of g and B, benefits from both – it is inspired by the latter in coming up with the right convex programming relaxation and the former in deriving the integrality gap. 
However, proving our results requires extensions of both; in that process we come up with new notions and connections between real stable polynomials and matroids which might be of independent interest. @InProceedings{STOC17p370, author = {Damian Straszak and Nisheeth K. Vishnoi}, title = {Real Stable Polynomials and Matroids: Optimization and Counting}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {370--383}, doi = {}, year = {2017}, } 

Vladu, Adrian 
STOC '17: "Almost-Linear-Time Algorithms ..."
Almost-Linear-Time Algorithms for Markov Chains and New Spectral Primitives for Directed Graphs
Michael B. Cohen, Jonathan Kelner, John Peebles, Richard Peng, Anup B. Rao, Aaron Sidford, and Adrian Vladu (Massachusetts Institute of Technology, USA; Georgia Institute of Technology, USA; Stanford University, USA) In this paper, we begin to address the long-standing algorithmic gap between general and reversible Markov chains. We develop directed analogues of several spectral graph-theoretic tools that had previously been available only in the undirected setting, and for which it was not clear that directed versions even existed. In particular, we provide a notion of approximation for directed graphs, prove sparsifiers under this notion always exist, and show how to construct them in almost linear time. Using this notion of approximation, we design the first almost-linear-time directed Laplacian system solver, and, by leveraging the recent framework of [Cohen-Kelner-Peebles-Peng-Sidford-Vladu, FOCS’16], we also obtain almost-linear-time algorithms for computing the stationary distribution of a Markov chain, computing expected commute times in a directed graph, and more. For each problem, our algorithms improve the previous best running times of O((nm^{3/4} + n^{2/3} m) log^{O(1)} (n κ є^{−1})) to O((m + n 2^{O(√(log n log log n))}) log^{O(1)} (n κ є^{−1})), where n is the number of vertices in the graph, m is the number of edges, κ is a natural condition number associated with the problem, and є is the desired accuracy. We hope these results open the door for further studies into directed spectral graph theory, and that they will serve as a stepping stone for designing a new generation of fast algorithms for directed graphs. @InProceedings{STOC17p410, author = {Michael B. Cohen and Jonathan Kelner and John Peebles and Richard Peng and Anup B. Rao and Aaron Sidford and Adrian Vladu}, title = {Almost-Linear-Time Algorithms for Markov Chains and New Spectral Primitives for Directed Graphs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {410--419}, doi = {}, year = {2017}, } 

Volk, Ben Lee 
STOC '17: "Succinct Hitting Sets and ..."
Succinct Hitting Sets and Barriers to Proving Algebraic Circuits Lower Bounds
Michael A. Forbes, Amir Shpilka, and Ben Lee Volk (Simons Institute for the Theory of Computing, Berkeley, USA; Tel Aviv University, Israel) We formalize a framework of algebraically natural lower bounds for algebraic circuits. Just as with the natural proofs notion of Razborov and Rudich for boolean circuit lower bounds, our notion of algebraically natural lower bounds captures nearly all lower bound techniques known. However, unlike the boolean setting, there has been no concrete evidence demonstrating that this is a barrier to obtaining superpolynomial lower bounds for general algebraic circuits, as there is little understanding whether algebraic circuits are expressive enough to support "cryptography" secure against algebraic circuits. Following a similar result of Williams in the boolean setting, we show that the existence of an algebraic natural proofs barrier is equivalent to the existence of succinct derandomization of the polynomial identity testing problem. That is, whether the coefficient vectors of polylog(N)-degree polylog(N)-size circuits form a hitting set for the class of poly(N)-degree poly(N)-size circuits. Further, we give an explicit universal construction showing that if such a succinct hitting set exists, then our universal construction suffices. Further, we assess the existing literature constructing hitting sets for restricted classes of algebraic circuits and observe that none of them are succinct as given. Yet, we show how to modify some of these constructions to obtain succinct hitting sets. This constitutes the first evidence supporting the existence of an algebraic natural proofs barrier. Our framework is similar to the Geometric Complexity Theory (GCT) program of Mulmuley and Sohoni, except that here we emphasize constructiveness of the proofs while the GCT program emphasizes symmetry. 
Nevertheless, our succinct hitting sets have relevance to the GCT program as they imply lower bounds for the complexity of the defining equations of polynomials computed by small circuits. @InProceedings{STOC17p653, author = {Michael A. Forbes and Amir Shpilka and Ben Lee Volk}, title = {Succinct Hitting Sets and Barriers to Proving Algebraic Circuits Lower Bounds}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {653--664}, doi = {}, year = {2017}, } 

Vyas, Nikhil 
STOC '17: "Faster Space-Efficient Algorithms ..."
Faster Space-Efficient Algorithms for Subset Sum and k-Sum
Nikhil Bansal, Shashwat Garg, Jesper Nederlof, and Nikhil Vyas (Eindhoven University of Technology, Netherlands; IIT Bombay, India) We present randomized algorithms that solve Subset Sum and Knapsack instances with n items in O^{*}(2^{0.86n}) time, where the O^{*}(·) notation suppresses factors polynomial in the input size, and polynomial space, assuming random read-only access to exponentially many random bits. These results can be extended to solve Binary Linear Programming on n variables with few constraints in a similar running time. We also show that for any constant k ≥ 2, random instances of k-Sum can be solved using O(n^{k−0.5} polylog(n)) time and O(log n) space, without the assumption of random access to random bits. Underlying these results is an algorithm that determines whether two given lists of length n with integers bounded by a polynomial in n share a common value. Assuming random read-only access to random bits, we show that this problem can be solved using O(log n) space significantly faster than the trivial O(n^{2}) time algorithm if no value occurs too often in the same list. @InProceedings{STOC17p198, author = {Nikhil Bansal and Shashwat Garg and Jesper Nederlof and Nikhil Vyas}, title = {Faster Space-Efficient Algorithms for Subset Sum and k-Sum}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {198--209}, doi = {}, year = {2017}, } 

Waingarten, Erik 
STOC '17: "Approximate Near Neighbors ..."
Approximate Near Neighbors for General Symmetric Norms
Alexandr Andoni, Huy L. Nguyen, Aleksandar Nikolov, Ilya Razenshteyn, and Erik Waingarten (Columbia University, USA; Northeastern University, USA; University of Toronto, Canada; Massachusetts Institute of Technology, USA) We show that every *symmetric* normed space admits an efficient nearest neighbor search data structure with doubly-logarithmic approximation. Specifically, for every n, d = n^{o(1)}, and every d-dimensional symmetric norm ‖·‖, there exists a data structure for (log log n)-approximate nearest neighbor search over ‖·‖ for n-point datasets achieving n^{o(1)} query time and n^{1+o(1)} space. The main technical ingredient of the algorithm is a low-distortion embedding of a symmetric norm into a low-dimensional iterated product of top-k norms. We also show that our techniques cannot be extended to *general* norms. @InProceedings{STOC17p902, author = {Alexandr Andoni and Huy L. Nguyen and Aleksandar Nikolov and Ilya Razenshteyn and Erik Waingarten}, title = {Approximate Near Neighbors for General Symmetric Norms}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {902--913}, doi = {}, year = {2017}, } STOC '17: "Beyond Talagrand Functions: ..." Beyond Talagrand Functions: New Lower Bounds for Testing Monotonicity and Unateness Xi Chen, Erik Waingarten, and Jinyu Xie (Columbia University, USA) We prove a lower bound of Ω(n^{1/3}) for the query complexity of any two-sided and adaptive algorithm that tests whether an unknown Boolean function f:{0,1}^{n}→ {0,1} is monotone versus far from monotone. This improves the recent lower bound of Ω(n^{1/4}) for the same problem by Belovs and Blais (STOC’16). Our result builds on a new family of random Boolean functions that can be viewed as a two-level extension of Talagrand’s random DNFs. Beyond monotonicity we prove a lower bound of Ω(√n) for two-sided, adaptive algorithms and a lower bound of Ω(n) for one-sided, non-adaptive algorithms for testing unateness, a natural generalization of monotonicity. 
The latter matches the linear upper bounds by Khot and Shinkar (RANDOM’16) and by Baleshzar, Chakrabarty, Pallavoor, Raskhodnikova, and Seshadhri (2017). @InProceedings{STOC17p523, author = {Xi Chen and Erik Waingarten and Jinyu Xie}, title = {Beyond Talagrand Functions: New Lower Bounds for Testing Monotonicity and Unateness}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {523--536}, doi = {}, year = {2017}, } 

Wang, Ruosong 
STOC '17: "Exponential Separations in ..."
Exponential Separations in the Energy Complexity of Leader Election
Yi-Jun Chang, Tsvi Kopelowitz, Seth Pettie, Ruosong Wang, and Wei Zhan (University of Michigan, USA; Tsinghua University, China) Energy is often the most constrained resource for battery-powered wireless devices, and the lion’s share of energy is often spent on transceiver usage (sending/receiving packets), not on computation. In this paper we study the energy complexity of Leader Election and Approximate Counting in several models of wireless radio networks. It turns out that energy complexity is very sensitive to whether the devices can generate random bits and to their ability to detect collisions. We consider four collision-detection models: Strong-CD (in which transmitters and listeners detect collisions), Sender-CD and Receiver-CD (in which only transmitters or only listeners detect collisions), and No-CD (in which no one detects collisions). The takeaway message of our results is quite surprising. For randomized Leader Election algorithms, there is an exponential gap between the energy complexity of Sender-CD and Receiver-CD: No-CD = Sender-CD ≫ Receiver-CD = Strong-CD, and for deterministic Leader Election algorithms, there is another exponential gap in energy complexity, but in the reverse direction: No-CD = Receiver-CD ≫ Sender-CD = Strong-CD. In particular, the randomized energy complexity of Leader Election is Θ(log^{*} n) in Sender-CD but Θ(log(log^{*} n)) in Receiver-CD, where n is the (unknown) number of devices. Its deterministic complexity is Θ(log N) in Receiver-CD but Θ(log log N) in Sender-CD, where N is the (known) size of the devices’ ID space. There is a tradeoff between time and energy. We give a new upper bound on the time-energy tradeoff curve for randomized Leader Election and Approximate Counting. A critical component of this algorithm is a new deterministic Leader Election algorithm for dense instances, when n=Θ(N), with inverse-Ackermann-type (O(α(N))) energy complexity. 
@InProceedings{STOC17p771, author = {Yi-Jun Chang and Tsvi Kopelowitz and Seth Pettie and Ruosong Wang and Wei Zhan}, title = {Exponential Separations in the Energy Complexity of Leader Election}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {771--783}, doi = {}, year = {2017}, } 

Wei, Fan 
STOC '17: "Local Max-Cut in Smoothed ..."
Local Max-Cut in Smoothed Polynomial Time
Omer Angel, Sébastien Bubeck, Yuval Peres, and Fan Wei (University of British Columbia, Canada; Microsoft Research, USA; Stanford University, USA) In 1988, Johnson, Papadimitriou and Yannakakis wrote that “Practically all the empirical evidence would lead us to conclude that finding locally optimal solutions is much easier than solving NP-hard problems”. Since then the empirical evidence has continued to amass, but formal proofs of this phenomenon have remained elusive. A canonical (and indeed complete) example is the local max-cut problem, for which no polynomial time method is known. In a breakthrough paper, Etscheid and Röglin proved that the smoothed complexity of local max-cut is quasi-polynomial, i.e., if arbitrary bounded weights are randomly perturbed, a local maximum can be found in φ n^{O(log n)} steps where φ is an upper bound on the random edge weight density. In this paper we prove smoothed polynomial complexity for local max-cut, thus confirming that finding local optima for max-cut is much easier than solving it. @InProceedings{STOC17p429, author = {Omer Angel and Sébastien Bubeck and Yuval Peres and Fan Wei}, title = {Local Max-Cut in Smoothed Polynomial Time}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {429--437}, doi = {}, year = {2017}, } 
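For context, local max-cut asks for a partition in which no single vertex can switch sides and increase the cut weight; the FLIP local search below is the standard dynamics whose step count the smoothed analysis bounds. This is an illustrative Python sketch under our own graph encoding, not the paper's analysis.

```python
def flip_local_max_cut(n, weights, side):
    """Run FLIP until no single-vertex move improves the cut.

    n: number of vertices; weights: dict mapping (u, v) with u < v to an
    edge weight; side: initial 0/1 assignment (modified in place).
    Returns the locally optimal assignment and its cut weight.
    """
    def gain(v):
        # Moving v across gains +wt for each currently-uncut edge at v
        # (it becomes cut) and -wt for each currently-cut edge at v.
        g = 0.0
        for (u, w), wt in weights.items():
            if v in (u, w):
                other = w if u == v else u
                g += wt if side[v] == side[other] else -wt
        return g

    improved = True
    while improved:
        improved = False
        for v in range(n):
            if gain(v) > 0:
                side[v] ^= 1        # flip v to the other side
                improved = True
    cut = sum(wt for (u, w), wt in weights.items() if side[u] != side[w])
    return side, cut
```

On a unit-weight triangle, FLIP stops at a cut of weight 2, which is both locally and globally optimal; the smoothed-analysis question is how many flips such runs can take in the worst case.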

Weismantel, Robert 
STOC '17: "A Strongly Polynomial Algorithm ..."
A Strongly Polynomial Algorithm for Bimodular Integer Linear Programming
Stephan Artmann, Robert Weismantel, and Rico Zenklusen (ETH Zurich, Switzerland) We present a strongly polynomial algorithm to solve integer programs of the form max{c^{T} x∶ Ax≤ b, x∈ℤ^{n} }, for A∈ℤ^{m×n} with rank(A)=n, b∈ℤ^{m}, c∈ℤ^{n}, and where all determinants of (n×n)-submatrices of A are bounded by 2 in absolute value. In particular, this implies that integer programs max{c^{T} x : Qx≤ b, x∈ ℤ_{≥0}^{n}}, where Q∈ ℤ^{m×n} has the property that all subdeterminants are bounded by 2 in absolute value, can be solved in strongly polynomial time. We thus obtain an extension of the well-known result that integer programs with constraint matrices that are totally unimodular are solvable in strongly polynomial time. @InProceedings{STOC17p1206, author = {Stephan Artmann and Robert Weismantel and Rico Zenklusen}, title = {A Strongly Polynomial Algorithm for Bimodular Integer Linear Programming}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1206--1219}, doi = {}, year = {2017}, } 
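The bimodularity condition itself is easy to state in code: every n×n subdeterminant of the constraint matrix must have absolute value at most 2. The brute-force check below (our own naming, exponential time, for tiny matrices only) is just to make the definition concrete; it is not part of the paper's algorithm.

```python
from itertools import combinations

def det(m):
    """Integer determinant by cofactor expansion (fine for tiny matrices)."""
    if len(m) == 1:
        return m[0][0]
    total = 0
    for j in range(len(m)):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

def is_bimodular(A, n):
    """Check that every n x n subdeterminant of the m x n integer matrix A
    is at most 2 in absolute value (the condition in the abstract)."""
    return all(abs(det([A[i] for i in rows])) <= 2
               for rows in combinations(range(len(A)), n))
```

Totally unimodular matrices (all subdeterminants in {−1, 0, 1}) pass this check a fortiori, matching the abstract's remark that the result extends the totally unimodular case.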

Wigderson, Avi 
STOC '17: "Algorithmic and Optimization ..."
Algorithmic and Optimization Aspects of Brascamp-Lieb Inequalities, via Operator Scaling
Ankit Garg, Leonid Gurvits, Rafael Oliveira, and Avi Wigderson (Microsoft Research, USA; City College of New York, USA; Princeton University, USA; IAS, USA) The celebrated Brascamp-Lieb (BL) inequalities [BL76, Lie90], and their reverse form of Barthe [Bar98], are an important mathematical tool, unifying and generalizing numerous inequalities in analysis, convex geometry and information theory, with many used in computer science. While their structural theory is very well understood, far less is known about computing their main parameters (which we define below). Prior to this work, the best known algorithms for any of these optimization tasks required at least exponential time. In this work, we give polynomial time algorithms to compute: (1) feasibility of a BL-datum, (2) the optimal BL-constant, (3) a weak separation oracle for BL-polytopes. What is particularly exciting about this progress, beyond the better understanding of BL inequalities, is that the objects above naturally encode rich families of optimization problems which had no prior efficient algorithms. In particular, the BL-constants (which we efficiently compute) are solutions to non-convex optimization problems, and the BL-polytopes (for which we provide efficient membership and separation oracles) are linear programs with exponentially many facets. Thus we hope that new combinatorial optimization problems can be solved via reductions to the ones above, and make modest initial steps in exploring this possibility. Our algorithms are obtained by a simple efficient reduction of a given BL-datum to an instance of the Operator Scaling problem defined by [Gur04]. To obtain the results above, we utilize the two (very recent and different) algorithms for the operator scaling problem [GGOW16, IQS15a]. Our reduction implies algorithmic versions of many of the known structural results on BL-inequalities, and in some cases provides proofs that are different or simpler than existing ones. 
Further, the analytic properties of the [GGOW16] algorithm provide new, effective bounds on the magnitude and continuity of BL-constants, with applications to non-linear versions of BL-inequalities; prior work relied on compactness, and thus provided no bounds. On a higher level, our application of the operator scaling algorithm to BL-inequalities further connects analysis and optimization with the diverse mathematical areas used so far to motivate and solve the operator scaling problem, which include commutative invariant theory, non-commutative algebra, computational complexity and quantum information theory. @InProceedings{STOC17p397, author = {Ankit Garg and Leonid Gurvits and Rafael Oliveira and Avi Wigderson}, title = {Algorithmic and Optimization Aspects of Brascamp-Lieb Inequalities, via Operator Scaling}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {397--409}, doi = {}, year = {2017}, } 

Williams, Ryan 
STOC '17: "Probabilistic Rank and Matrix ..."
Probabilistic Rank and Matrix Rigidity
Josh Alman and Ryan Williams (Massachusetts Institute of Technology, USA) We consider a notion of probabilistic rank and probabilistic sign-rank of a matrix, which measure the extent to which a matrix can be probabilistically represented by low-rank matrices. We demonstrate several connections with matrix rigidity, communication complexity, and circuit lower bounds. The most interesting outcomes are: The Walsh-Hadamard Transform is Not Very Rigid. We give surprising upper bounds on the rigidity of a family of matrices whose rigidity has been extensively studied, and was conjectured to be highly rigid. For the 2^{n} × 2^{n} Walsh-Hadamard transform H_{n} (a.k.a. Sylvester matrices, a.k.a. the communication matrix of Inner Product modulo 2), we show how to modify only 2^{εn} entries in each row and make the rank of H_{n} drop below 2^{n(1−Ω(ε^{2}/log(1/ε)))}, for all small ε > 0, over any field. That is, it is not possible to prove arithmetic circuit lower bounds on Hadamard matrices such as H_{n}, via L. Valiant’s matrix rigidity approach. We also show nontrivial rigidity upper bounds for H_{n} with smaller target rank. Matrix Rigidity and Threshold Circuit Lower Bounds. We give new consequences of rigid matrices for Boolean circuit complexity. First, we show that explicit n × n Boolean matrices which maintain rank at least 2^{(log n)^{1−δ}} after n^{2}/2^{(log n)^{δ/2}} modified entries (over any field, for any δ > 0) would yield an explicit function that does not have subquadratic-size AC^{0} circuits with two layers of arbitrary linear threshold gates. Second, we prove that explicit 0/1 matrices over the reals which are modestly more rigid than the best known rigidity lower bounds for sign-rank would imply exponential-gate lower bounds for the infamously difficult class of depth-two linear threshold circuits with arbitrary weights on both layers. 
In particular, we show that matrices defined by these seemingly difficult circuit classes actually have low probabilistic rank and sign-rank, respectively. An Equivalence Between Communication, Probabilistic Rank, and Rigidity. It has been known since Razborov [1989] that explicit rigidity lower bounds would resolve long-standing lower-bound problems in communication complexity, but it seemed possible that communication lower bounds could be proved without making progress on matrix rigidity. We show that for every function f which is randomly self-reducible in a natural way (the inner product mod 2 is an example), bounding the communication complexity of f (in a precise technical sense) is equivalent to bounding the rigidity of the matrix of f, via an equivalence with probabilistic rank. @InProceedings{STOC17p641, author = {Josh Alman and Ryan Williams}, title = {Probabilistic Rank and Matrix Rigidity}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {641--652}, doi = {}, year = {2017}, } 
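The matrix family at the center of the first result is concrete: the Sylvester construction gives H_{n} with entries (−1)^{⟨x,y⟩ mod 2}, i.e. exactly the communication matrix of Inner Product modulo 2 mentioned in the abstract. A small sketch (our own helper, for illustration):

```python
def hadamard(n):
    """Sylvester construction of the 2^n x 2^n Walsh-Hadamard matrix H_n,
    via the recursion H_{k+1} = [[H_k, H_k], [H_k, -H_k]].
    Entry (x, y) equals (-1)^{<x, y> mod 2} for the bit-vectors x, y."""
    H = [[1]]
    for _ in range(n):
        top = [row + row for row in H]                     # [H_k, H_k]
        bottom = [row + [-v for v in row] for row in H]    # [H_k, -H_k]
        H = top + bottom
    return H
```

Over any field of characteristic ≠ 2 this matrix has full rank 2^{n}; the surprise in the abstract is how few entry changes per row suffice to make the rank collapse.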

Witmer, David 
STOC '17: "Sum of Squares Lower Bounds ..."
Sum of Squares Lower Bounds for Refuting any CSP
Pravesh K. Kothari, Ryuhei Mori, Ryan O'Donnell, and David Witmer (Princeton University, USA; IAS, USA; Tokyo Institute of Technology, Japan; Carnegie Mellon University, USA) Let P:{0,1}^{k} → {0,1} be a nontrivial k-ary predicate. Consider a random instance of the constraint satisfaction problem CSP(P) on n variables with Δn constraints, each being P applied to k randomly chosen literals. Provided the constraint density satisfies Δ ≫ 1, such an instance is unsatisfiable with high probability. The refutation problem is to efficiently find a proof of unsatisfiability. We show that whenever the predicate P supports a t-wise uniform probability distribution on its satisfying assignments, the sum of squares (SOS) algorithm of degree d = Θ(n/Δ^{2/(t−1)} log Δ) (which runs in time n^{O(d)}) cannot refute a random instance of CSP(P). In particular, the polynomial-time SOS algorithm requires Ω(n^{(t+1)/2}) constraints to refute random instances of CSP(P) when P supports a t-wise uniform distribution on its satisfying assignments. Together with recent work of Lee et al. (Lee, Raghavendra, Steurer 2015), our result also implies that any polynomial-size semidefinite programming relaxation for refutation requires at least Ω(n^{(t+1)/2}) constraints. More generally, we consider the δ-refutation problem, in which the goal is to certify that at most a (1−δ)-fraction of constraints can be simultaneously satisfied. We show that if P is δ-close to supporting a t-wise uniform distribution on satisfying assignments, then the degree-Ω(n/Δ^{2/(t−1)} log Δ) SOS algorithm cannot (δ+o(1))-refute a random instance of CSP(P). This is the first result to show a distinction between the degree SOS needs to solve the refutation problem and the degree it needs to solve the harder δ-refutation problem. Our results (which also extend with no change to CSPs over larger alphabets) subsume all previously known lower bounds for semi-algebraic refutation of random CSPs. 
For every constraint predicate P, they give a three-way hardness tradeoff between the density of constraints, the SOS degree (hence running time), and the strength of the refutation. By recent algorithmic results of Allen, O’Donnell, Witmer (2015) and Raghavendra, Rao, Schramm (2016), this full three-way tradeoff is tight, up to lower-order factors. @InProceedings{STOC17p132, author = {Pravesh K. Kothari and Ryuhei Mori and Ryan O'Donnell and David Witmer}, title = {Sum of Squares Lower Bounds for Refuting any CSP}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {132--145}, doi = {}, year = {2017}, } 

Wong, Sam Chiu-wai 
STOC '17: "Subquadratic Submodular Function ..."
Subquadratic Submodular Function Minimization
Deeparnab Chakrabarty, Yin Tat Lee, Aaron Sidford, and Sam Chiu-wai Wong (Dartmouth College, USA; Microsoft Research, USA; Stanford University, USA; University of California at Berkeley, USA) Submodular function minimization (SFM) is a fundamental discrete optimization problem which generalizes many well known problems, has applications in various fields, and can be solved in polynomial time. Owing to applications in computer vision and machine learning, fast SFM algorithms are highly desirable. The current fastest algorithms [Lee, Sidford, Wong, 2015] run in O(n^{2} log nM · EO + n^{3} log^{O(1)} nM) time and O(n^{3} log^{2} n · EO + n^{4} log^{O(1)} n) time respectively, where M is the largest absolute value of the function (assuming the range is integers) and EO is the time taken to evaluate the function on any set. Although the best known lower bound on the query complexity is only Ω(n) [Harvey, 2008], the main contributions of this paper are subquadratic SFM algorithms. For integer-valued submodular functions, we give an SFM algorithm which runs in O(nM^{3} log n · EO) time, giving the first nearly linear time algorithm in any known regime. For real-valued submodular functions with range in [−1,1], we give an algorithm which in Õ(n^{5/3} · EO/ε^{2}) time returns an ε-additive approximate solution. At the heart of it, our algorithms are projected stochastic subgradient descent methods on the Lovász extension of submodular functions, where we crucially exploit submodularity and data structures to obtain fast, i.e. sublinear time, subgradient updates. 
The latter is crucial for beating the n^{2} bound – we show that algorithms which access only subgradients of the Lovász extension, and these include the empirically fast Fujishige-Wolfe heuristic [Fujishige, 1980; Wolfe, 1976], cannot run in subquadratic time. @InProceedings{STOC17p1220, author = {Deeparnab Chakrabarty and Yin Tat Lee and Aaron Sidford and Sam Chiu-wai Wong}, title = {Subquadratic Submodular Function Minimization}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1220--1231}, doi = {}, year = {2017}, } 
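The subgradient of the Lovász extension that these methods descend on has a classical closed form via Edmonds' greedy ordering: sort the coordinates of x in decreasing order and take marginal gains of f along the resulting chain of sets. The sketch below is that textbook construction (our naming), not the paper's fast data-structure version.

```python
def lovasz_subgradient(f, x):
    """Subgradient of the Lovász extension of a submodular set function f
    (with f(empty set) = 0) at the point x, via Edmonds' greedy ordering."""
    n = len(x)
    order = sorted(range(n), key=lambda i: -x[i])  # coordinates, decreasing
    g = [0.0] * n
    S = set()
    prev = f(S)
    for i in order:
        S.add(i)
        cur = f(S)
        g[i] = cur - prev       # marginal gain of adding i along the chain
        prev = cur
    return g
```

On the indicator vector of a set S, the inner product of this subgradient with x recovers f(S), which is the defining property the stochastic subgradient method exploits.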

Woodruff, David P. 
STOC '17: "Low Rank Approximation with ..."
Low Rank Approximation with Entrywise ℓ₁-Norm Error
Zhao Song, David P. Woodruff, and Peilin Zhong (University of Texas at Austin, USA; IBM Research, USA; Columbia University, USA) We study the ℓ_{1}-low rank approximation problem, where for a given n × d matrix A and approximation factor α ≥ 1, the goal is to output a rank-k matrix Â for which ‖A−Â‖_{1} ≤ α · min_{rank-k matrices A′} ‖A−A′‖_{1}, where for an n × d matrix C, we let ‖C‖_{1} = ∑_{i=1}^{n} ∑_{j=1}^{d} |C_{i,j}|. This error measure is known to be more robust than the Frobenius norm in the presence of outliers and is indicated in models where Gaussian assumptions on the noise may not apply. The problem was shown to be NP-hard by Gillis and Vavasis and a number of heuristics have been proposed. It was asked in multiple places if there are any approximation algorithms. We give the first provable approximation algorithms for ℓ_{1}-low rank approximation, showing that it is possible to achieve approximation factor α = (log d) · poly(k) in nnz(A) + (n+d) poly(k) time, where nnz(A) denotes the number of nonzero entries of A. If k is constant, we further improve the approximation ratio to O(1) with a poly(nd)-time algorithm. Under the Exponential Time Hypothesis, we show there is no poly(nd)-time algorithm achieving a (1+1/log^{1+γ}(nd))-approximation, for γ > 0 an arbitrarily small constant, even when k = 1. We give a number of additional results for ℓ_{1}-low rank approximation: nearly tight upper and lower bounds for column subset selection, CUR decompositions, extensions to low rank approximation with respect to ℓ_{p}-norms for 1 ≤ p < 2 and earth-mover distance, low-communication distributed protocols and low-memory streaming algorithms, algorithms with limited randomness, and bicriteria algorithms. We also give a preliminary empirical evaluation. @InProceedings{STOC17p688, author = {Zhao Song and David P. 
Woodruff and Peilin Zhong}, title = {Low Rank Approximation with Entrywise ℓ₁-Norm Error}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {688--701}, doi = {}, year = {2017}, } 

Woods, Damien 
STOC '17: "The Non-cooperative Tile Assembly ..."
The Non-cooperative Tile Assembly Model Is Not Intrinsically Universal or Capable of Bounded Turing Machine Simulation
Pierre-Étienne Meunier and Damien Woods (Inria, France) The field of algorithmic self-assembly is concerned with the computational and expressive power of nanoscale self-assembling molecular systems. In the well-studied cooperative, or temperature 2, abstract tile assembly model it is known that there is a tile set to simulate any Turing machine and an intrinsically universal tile set that simulates the shapes and dynamics of any instance of the model, up to spatial rescaling. It has been an open question as to whether the seemingly simpler non-cooperative, or temperature 1, model is capable of such behaviour. Here we show that this is not the case, by showing that there is no tile set in the non-cooperative model that is intrinsically universal, nor one capable of time-bounded Turing machine simulation within a bounded region of the plane. Although the non-cooperative model intuitively seems to lack the complexity and power of the cooperative model, it has been exceedingly hard to prove this. One reason is that there have been few tools to analyse the structure of complicated paths in the plane. This paper provides a number of such tools. A second reason is that almost every obvious and small generalisation to the model (e.g. allowing error, 3D, non-square tiles, signals/wires on tiles, tiles that repel each other, parallel synchronous growth) endows it with great computational, and sometimes simulation, power. Our main results show that all of these generalisations provably increase computational and/or simulation power. Our results hold for both deterministic and nondeterministic non-cooperative systems. Our first main result stands in stark contrast with the fact that for both the cooperative tile assembly model, and for 3D non-cooperative tile assembly, there are respective intrinsically universal tilesets. Our second main result gives a new technique (reduction to simulation) for proving negative results about computation in tile assembly. 
@InProceedings{STOC17p328, author = {Pierre-Étienne Meunier and Damien Woods}, title = {The Non-cooperative Tile Assembly Model Is Not Intrinsically Universal or Capable of Bounded Turing Machine Simulation}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {328--341}, doi = {}, year = {2017}, } 

Wright, John 
STOC '17: "Efficient Quantum Tomography ..."
Efficient Quantum Tomography II
Ryan O'Donnell and John Wright (Carnegie Mellon University, USA; Massachusetts Institute of Technology, USA) We continue our analysis of: (i) “Quantum tomography”, i.e., learning a quantum state, i.e., the quantum generalization of learning a discrete probability distribution; (ii) the distribution of Young diagrams output by the RSK algorithm on random words. Regarding (ii), we introduce two powerful new tools: first, a precise upper bound on the expected length of the longest union of k disjoint increasing subsequences in a random length-n word with letter distribution α_{1} ≥ α_{2} ≥ ⋯ ≥ α_{d}. Our bound has the correct main term and second-order term, and holds for all n, not just in the large-n limit. Second, a new majorization property of the RSK algorithm that allows one to analyze the Young diagram formed by the lower rows λ_{k}, λ_{k+1}, … of its output. These tools allow us to prove several new theorems concerning the distribution of random Young diagrams in the non-asymptotic regime, giving concrete error bounds that are optimal, or nearly so, in all parameters. As one example, we give a fundamentally new proof of the celebrated fact that the expected length of the longest increasing sequence in a random length-n permutation is bounded by 2√n. This is the k = 1, α_{i} ≡ 1/d, d → ∞ special case of a much more general result we prove: the expected length of the kth Young diagram row produced by an α-random word is α_{k} n ± 2√(α_{k} d n). 
From our new analyses of random Young diagrams we derive several new results in quantum tomography, including: (i) learning the eigenvalues of an unknown state to є-accuracy in Hellinger-squared, chi-squared, or KL distance, using n = O(d^{2}/є) copies; (ii) learning the top-k eigenvalues of an unknown state to є-accuracy in Hellinger-squared or chi-squared distance using n = O(kd/є) copies or in ℓ_{2}^{2} distance using n = O(k/є) copies; (iii) learning the optimal rank-k approximation of an unknown state to є-fidelity (Hellinger-squared distance) using n = O(kd/є) copies. We believe our new techniques will lead to further advances in quantum learning; indeed, they have already subsequently been used for efficient von Neumann entropy estimation. @InProceedings{STOC17p962, author = {Ryan O'Donnell and John Wright}, title = {Efficient Quantum Tomography II}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {962--974}, doi = {}, year = {2017}, } 
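The 2√n fact quoted above concerns the first row of the RSK output, i.e. the length of the longest increasing subsequence of a random permutation. That length is cheap to compute by patience sorting, which makes the concentration around 2√n easy to observe empirically; the sketch below (our own helper) is illustration only.

```python
import bisect
import random

def lis_length(seq):
    """Length of the longest (strictly) increasing subsequence via patience
    sorting: tails[j] holds the smallest possible tail of an increasing
    subsequence of length j+1. Runs in O(n log n)."""
    tails = []
    for v in seq:
        j = bisect.bisect_left(tails, v)
        if j == len(tails):
            tails.append(v)     # extend the longest subsequence found so far
        else:
            tails[j] = v        # found a smaller tail for length j+1
    return len(tails)
```

For a random permutation of length 400, the expected LIS length is close to 2√400 = 40, in line with the bound the abstract reproves.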

Wulff-Nilsen, Christian 
STOC '17: "Fully-Dynamic Minimum Spanning ..."
Fully-Dynamic Minimum Spanning Forest with Improved Worst-Case Update Time
Christian Wulff-Nilsen (University of Copenhagen, Denmark) We give a Las Vegas data structure which maintains a minimum spanning forest in an n-vertex edge-weighted undirected dynamic graph undergoing updates consisting of any mixture of edge insertions and deletions. Each update is supported in O(n^{1/2 − c}) worst-case time w.h.p., where c > 0 is some constant, and this bound also holds in expectation. This is the first data structure achieving an improvement over the O(√n) deterministic worst-case update time of Eppstein et al., a bound that has been standing for 25 years. In fact, it was previously not even known how to maintain a spanning forest of an unweighted graph in worst-case time polynomially faster than Θ(√n). Our result is achieved by first giving a reduction from fully-dynamic to decremental minimum spanning forest preserving worst-case update time up to logarithmic factors. Then decremental minimum spanning forest is solved using several novel techniques, one of which involves keeping track of low-conductance cuts in a dynamic graph. An immediate corollary of our result is the first Las Vegas data structure for fully-dynamic connectivity where each update is handled in worst-case time polynomially faster than Θ(√n) w.h.p.; this data structure has O(1) worst-case query time. @InProceedings{STOC17p1130, author = {Christian Wulff-Nilsen}, title = {Fully-Dynamic Minimum Spanning Forest with Improved Worst-Case Update Time}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1130--1143}, doi = {}, year = {2017}, } 

Xie, Jinyu 
STOC '17: "Beyond Talagrand Functions: ..."
Beyond Talagrand Functions: New Lower Bounds for Testing Monotonicity and Unateness
Xi Chen, Erik Waingarten, and Jinyu Xie (Columbia University, USA) We prove a lower bound of Ω(n^{1/3}) for the query complexity of any two-sided and adaptive algorithm that tests whether an unknown Boolean function f:{0,1}^{n}→ {0,1} is monotone versus far from monotone. This improves the recent lower bound of Ω(n^{1/4}) for the same problem by Belovs and Blais (STOC’16). Our result builds on a new family of random Boolean functions that can be viewed as a two-level extension of Talagrand’s random DNFs. Beyond monotonicity we prove a lower bound of Ω(√n) for two-sided, adaptive algorithms and a lower bound of Ω(n) for one-sided, non-adaptive algorithms for testing unateness, a natural generalization of monotonicity. The latter matches the linear upper bounds by Khot and Shinkar (RANDOM’16) and by Baleshzar, Chakrabarty, Pallavoor, Raskhodnikova, and Seshadhri (2017). @InProceedings{STOC17p523, author = {Xi Chen and Erik Waingarten and Jinyu Xie}, title = {Beyond Talagrand Functions: New Lower Bounds for Testing Monotonicity and Unateness}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {523--536}, doi = {}, year = {2017}, } 
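For contrast with these lower bounds, the simplest upper-bound-side algorithm is the classical non-adaptive edge tester: sample random edges of the hypercube and reject on any monotonicity violation. The sketch below is that folklore tester (our own encoding and parameters, not the constructions of this paper), shown only to fix ideas about what a one-sided, non-adaptive tester looks like.

```python
import random

def edge_tester(f, n, queries, rng=random.Random(0)):
    """Non-adaptive edge tester for monotonicity of f: {0,1}^n -> {0,1}.
    Samples random hypercube edges (x with bit i = 0, x with bit i = 1)
    and rejects if f decreases along any sampled edge."""
    for _ in range(queries):
        x = rng.getrandbits(n)
        i = rng.randrange(n)
        lo = x & ~(1 << i)      # endpoint with bit i cleared
        hi = lo | (1 << i)      # endpoint with bit i set
        if f(lo) > f(hi):
            return False        # violated edge: certainly not monotone
    return True                 # consistent with monotone so far
```

Rejection is one-sided: a monotone function is never rejected, while a function far from monotone is caught with probability depending on its fraction of violated edges.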

Yang, Lin F. 
STOC '17: "Streaming Symmetric Norms ..."
Streaming Symmetric Norms via Measure Concentration
Jarosław Błasiok, Vladimir Braverman, Stephen R. Chestnut, Robert Krauthgamer, and Lin F. Yang (Harvard University, USA; Johns Hopkins University, USA; ETH Zurich, Switzerland; Weizmann Institute of Science, Israel) We characterize the streaming space complexity of every symmetric norm l (a norm on ℝ^{n} invariant under sign-flips and coordinate-permutations), by relating this space complexity to the measure-concentration characteristics of l. Specifically, we provide nearly matching upper and lower bounds on the space complexity of calculating a (1±є)-approximation to the norm of the stream, for every 0<є≤ 1/2. (The bounds match up to (є^{−1} log n) factors.) We further extend those bounds to any large approximation ratio D≥ 1.1, showing that the decrease in space complexity is proportional to D^{2}, and that this factor is the best possible. All of the bounds depend on the median of l(x) when x is drawn uniformly from the l_{2} unit sphere. The same median governs many phenomena in high-dimensional spaces, such as large-deviation bounds and the critical dimension in Dvoretzky’s Theorem. The family of symmetric norms contains several well-studied norms, such as all l_{p} norms, and indeed we provide a new explanation for the disparity in space complexity between p≤ 2 and p>2. In addition, we apply our general results to easily derive bounds for several norms that were not studied before in the streaming model, including the top-k norm and the k-support norm, which was recently employed for machine learning tasks. Overall, these results make progress on two outstanding problems in the area of sublinear algorithms (Problems 5 and 30 in http://sublinear.info). @InProceedings{STOC17p716, author = {Jarosław Błasiok and Vladimir Braverman and Stephen R. Chestnut and Robert Krauthgamer and Lin F. Yang}, title = {Streaming Symmetric Norms via Measure Concentration}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {716--729}, doi = {}, year = {2017}, } 
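The top-k norm mentioned above is simple to state: the sum of the k largest absolute coordinates, so k=1 recovers l_∞ and k=n recovers l_1. A small sketch with our own function name, illustrating both the definition and the symmetry (invariance under sign-flips and permutations) that the characterization requires:

```python
def top_k_norm(x, k):
    """Top-k norm of a vector: sum of the k largest absolute coordinate
    values. Symmetric by construction, since it depends only on the
    multiset of absolute values."""
    return sum(sorted((abs(v) for v in x), reverse=True)[:k])
```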

Yao, Penghui 
STOC '17: "Exponential Separation of ..."
Exponential Separation of Quantum Communication and Classical Information
Anurag Anshu, Dave Touchette, Penghui Yao, and Nengkun Yu (National University of Singapore, Singapore; University of Waterloo, Canada; Perimeter Institute for Theoretical Physics, Canada; University of Maryland, USA; University of Technology Sydney, Australia) We exhibit a Boolean function for which the quantum communication complexity is exponentially larger than the classical information complexity. An exponential separation in the other direction was already known from the work of Kerenidis et al. [SICOMP 44, pp. 1550–1572], hence our work implies that these two complexity measures are incomparable. As classical information complexity is an upper bound on quantum information complexity, which in turn is equal to amortized quantum communication complexity, our work implies that a tight direct sum result for distributional quantum communication complexity cannot hold. The function we use to present such a separation is the Symmetric k-ary Pointer Jumping function introduced by Rao and Sinha [ECCC TR15-057], whose classical communication complexity is exponentially larger than its classical information complexity. In this paper, we show that the quantum communication complexity of this function is polynomially equivalent to its classical communication complexity. The high-level idea behind our proof is arguably the simplest so far for such an exponential separation between information and communication, driven by a sequence of round-elimination arguments, allowing us to simplify further the approach of Rao and Sinha. As another application of the techniques that we develop, a simple proof for an optimal trade-off between Alice’s and Bob’s communication is given, even when allowing pre-shared entanglement, while computing the related Greater-Than function on n bits: if Bob communicates at most b bits, then Alice must send n/2^{O(b)} bits to Bob. We also present a classical protocol achieving this bound. 
@InProceedings{STOC17p277, author = {Anurag Anshu and Dave Touchette and Penghui Yao and Nengkun Yu}, title = {Exponential Separation of Quantum Communication and Classical Information}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {277--288}, doi = {}, year = {2017}, } 

Yazdanbod, Sadra 
STOC '17: "Settling the Complexity of ..."
Settling the Complexity of Leontief and PLC Exchange Markets under Exact and Approximate Equilibria
Jugal Garg, Ruta Mehta, Vijay V. Vazirani, and Sadra Yazdanbod (University of Illinois at Urbana-Champaign, USA; Georgia Institute of Technology, USA) Our first result shows membership in PPAD for the problem of computing approximate equilibria for an Arrow-Debreu exchange market for piecewise-linear concave (PLC) utility functions. As a corollary we also obtain membership in PPAD for Leontief utility functions. This settles an open question of Vazirani and Yannakakis (2011). Next we show FIXP-hardness of computing equilibria in Arrow-Debreu exchange markets under Leontief utility functions, and Arrow-Debreu markets under linear utility functions and Leontief production sets, thereby settling these open questions of Vazirani and Yannakakis (2011). As corollaries, we obtain FIXP-hardness for PLC utilities and for Arrow-Debreu markets under linear utility functions and polyhedral production sets. In all cases, as required under FIXP, the set of instances mapped onto will admit equilibria, i.e., will be "yes" instances. If all instances are under consideration, then in all cases we prove that the problem of deciding if a given instance admits an equilibrium is ETR-complete, where ETR is the class Existential Theory of Reals. As a consequence of the results stated above, and the fact that membership in FIXP has been established for PLC utilities, the entire computational difficulty of Arrow-Debreu markets under PLC utility functions lies in the Leontief utility subcase. This is perhaps the most unexpected aspect of our result, since Leontief utilities are meant for the case that goods are perfect complements, whereas PLC utilities are very general, capturing not only the cases when goods are complements and substitutes, but also arbitrary combinations of these and much more. Finally, we give a polynomial time algorithm for finding an equilibrium in Arrow-Debreu exchange markets under Leontief utility functions provided the number of agents is a constant. 
This settles part of an open problem of Devanur and Kannan (2008). @InProceedings{STOC17p890, author = {Jugal Garg and Ruta Mehta and Vijay V. Vazirani and Sadra Yazdanbod}, title = {Settling the Complexity of Leontief and PLC Exchange Markets under Exact and Approximate Equilibria}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {890--901}, doi = {}, year = {2017}, } 
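For reference, the Leontief utilities at the center of the paper above model goods as perfect complements: u(x) = min_j x_j/a_j for a vector a of required proportions. The following sketch is purely illustrative (function and variable names are ours, not from the paper):

```python
def leontief_utility(x, a):
    """Leontief utility u(x) = min_j x_j / a_j: goods are perfect
    complements, consumed in the fixed proportions given by a."""
    return min(xj / aj for xj, aj in zip(x, a))

# A surplus of one good gains nothing beyond the binding ratio:
print(leontief_utility([2.0, 6.0], [1.0, 2.0]))  # -> 2.0, i.e. min(2/1, 6/2)
```

This non-smoothness (the min) is what distinguishes the Leontief subcase from generic PLC utilities and drives the hardness results above.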

Young, Robert 
STOC '17: "The Integrality Gap of the ..."
The Integrality Gap of the Goemans-Linial SDP Relaxation for Sparsest Cut Is at Least a Constant Multiple of √log n
Assaf Naor and Robert Young (Princeton University, USA; New York University, USA) We prove that the integrality gap of the Goemans–Linial semidefinite programming relaxation for the Sparsest Cut Problem is Ω(√log n) on inputs with n vertices, thus matching the previously best known upper bound (log n)^{1/2+o(1)} up to lower-order factors. This statement is a consequence of the following new isoperimetric-type inequality. Consider the 8-regular graph whose vertex set is the 5-dimensional integer grid ℤ^{5} and where each vertex (a,b,c,d,e)∈ ℤ^{5} is connected to the 8 vertices (a±1,b,c,d,e), (a,b±1,c,d,e), (a,b,c±1,d,e±a), (a,b,c,d±1,e±b). This graph is known as the Cayley graph of the 5-dimensional discrete Heisenberg group. Given Ω⊂ ℤ^{5}, denote the size of its edge boundary in this graph (a.k.a. the horizontal perimeter of Ω) by ∂_{h}Ω. For t∈ ℕ, denote by ∂_{v}^{t}Ω the number of (a,b,c,d,e)∈ ℤ^{5} such that exactly one of the two vectors (a,b,c,d,e),(a,b,c,d,e+t) is in Ω. The vertical perimeter of Ω is defined to be ∂_{v}Ω = √(∑_{t=1}^{∞} (∂_{v}^{t}Ω)^{2}/t^{2}). We show that every subset Ω⊂ ℤ^{5} satisfies ∂_{v}Ω = O(∂_{h}Ω). This vertical-versus-horizontal isoperimetric inequality yields the above-stated integrality gap for Sparsest Cut and answers several geometric and analytic questions of independent interest. The theorem stated above is the culmination of a program whose aim is to understand the performance of the Goemans–Linial semidefinite program through the embeddability properties of Heisenberg groups. These investigations have mathematical significance even beyond their established relevance to approximation algorithms and combinatorial optimization. In particular they contribute to a range of mathematical disciplines including functional analysis, geometric group theory, harmonic analysis, sub-Riemannian geometry, geometric measure theory, ergodic theory, group representations, and metric differentiation. 
This article builds on the above cited works, with the “twist” that while those works were equally valid for any finite-dimensional Heisenberg group, our result holds for the Heisenberg group of dimension 5 (or higher) but fails for the 3-dimensional Heisenberg group. This insight leads to our core contribution, which is a deduction of an endpoint L_{1}-boundedness of a certain singular integral on ℝ^{5} from the (local) L_{2}-boundedness of the corresponding singular integral on ℝ^{3}. To do this, we devise a corona-type decomposition of subsets of a Heisenberg group, in the spirit of the construction that David and Semmes performed in ℝ^{n}, but with two main conceptual differences (in addition to more technical differences that arise from the peculiarities of the geometry of the Heisenberg group). Firstly, the “atoms” of our decomposition are perturbations of intrinsic Lipschitz graphs in the sense of Franchi, Serapioni, and Serra Cassano (plus the requisite “wild” regions that satisfy a Carleson packing condition). Secondly, we control the local overlap of our corona decomposition by using quantitative monotonicity rather than Jones-type β-numbers. @InProceedings{STOC17p564, author = {Assaf Naor and Robert Young}, title = {The Integrality Gap of the Goemans-Linial SDP Relaxation for Sparsest Cut Is at Least a Constant Multiple of √log n}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {564--575}, doi = {}, year = {2017}, } 
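The edge rule of the Cayley graph in the abstract above is concrete enough to simulate directly. The following sketch (the function names are ours; the signs in each ± pair are taken as coupled) enumerates the 8 neighbors of a vertex and computes the horizontal perimeter of a finite set:

```python
def heisenberg_neighbors(v):
    """The 8 neighbors of v = (a,b,c,d,e) in the Cayley graph of the
    5-dimensional discrete Heisenberg group, per the edge rule
    (a±1,b,c,d,e), (a,b±1,c,d,e), (a,b,c±1,d,e±a), (a,b,c,d±1,e±b)."""
    a, b, c, d, e = v
    out = []
    for s in (1, -1):
        out.append((a + s, b, c, d, e))
        out.append((a, b + s, c, d, e))
        out.append((a, b, c + s, d, e + s * a))
        out.append((a, b, c, d + s, e + s * b))
    return out

def horizontal_perimeter(S):
    """|∂_h S|: number of graph edges with exactly one endpoint in S."""
    S = set(S)
    return sum(1 for v in S for w in heisenberg_neighbors(v) if w not in S)

origin = (0, 0, 0, 0, 0)
print(len(heisenberg_neighbors(origin)))  # -> 8
print(horizontal_perimeter({origin}))     # -> 8: all edges leave a singleton
```

The vertical perimeter ∂_{v}Ω of the abstract would additionally sum (∂_{v}^{t}Ω)²/t² over vertical displacements t; the theorem bounds it by O(∂_{h}Ω) for every finite Ω ⊂ ℤ^5.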

Yu, Huacheng 
STOC '17: "DecreaseKeys Are Expensive ..."
DecreaseKeys Are Expensive for External Memory Priority Queues
Kasper Eenberg, Kasper Green Larsen, and Huacheng Yu (Aarhus University, Denmark; Stanford University, USA) One of the biggest open problems in external memory data structures is the priority queue problem with DecreaseKey operations. If only Insert and ExtractMin operations need to be supported, one can design a comparison-based priority queue performing O((N/B) lg_{M/B} N) I/Os over a sequence of N operations, where B is the disk block size in number of words and M is the main memory size in number of words. This matches the lower bound for comparison-based sorting and is hence optimal for comparison-based priority queues. However, if we also need to support DecreaseKeys, the performance of the best known priority queue is only O((N/B) lg_{2} N) I/Os. The big open question is whether a degradation in performance really is necessary. We answer this question affirmatively by proving a lower bound of Ω((N/B) lg_{lg N} B) I/Os for processing a sequence of N intermixed Insert, ExtractMin and DecreaseKey operations. Our lower bound is proved in the cell probe model and thus holds also for non-comparison-based priority queues. @InProceedings{STOC17p1081, author = {Kasper Eenberg and Kasper Green Larsen and Huacheng Yu}, title = {DecreaseKeys Are Expensive for External Memory Priority Queues}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1081--1093}, doi = {}, year = {2017}, } 
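For context, the three operations in question can be sketched in memory with a binary heap and lazy deletion. This illustrates only the interface (class and method names are ours), not the external-memory structures whose I/O complexity the paper analyzes:

```python
import heapq

class DecreaseKeyPQ:
    """Priority queue with Insert, ExtractMin and DecreaseKey via lazy
    deletion: a decreased item is re-pushed and its stale heap entry is
    skipped on extraction."""
    def __init__(self):
        self.heap = []   # (key, item) pairs, possibly stale
        self.key = {}    # current key of each live item

    def insert(self, item, key):
        self.key[item] = key
        heapq.heappush(self.heap, (key, item))

    def decrease_key(self, item, new_key):
        if new_key < self.key[item]:
            self.key[item] = new_key
            heapq.heappush(self.heap, (new_key, item))  # stale copy remains

    def extract_min(self):
        while True:
            key, item = heapq.heappop(self.heap)
            if self.key.get(item) == key:  # skip stale entries
                del self.key[item]
                return item

pq = DecreaseKeyPQ()
pq.insert("a", 5)
pq.insert("b", 3)
pq.decrease_key("a", 1)
print(pq.extract_min())  # -> a
```

In RAM, DecreaseKey costs only an extra heap push; the lower bound above shows that no such cheap trick exists once block transfers to external memory are what is counted.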

Yu, Nengkun 
STOC '17: "Exponential Separation of ..."
Exponential Separation of Quantum Communication and Classical Information
Anurag Anshu, Dave Touchette, Penghui Yao, and Nengkun Yu (National University of Singapore, Singapore; University of Waterloo, Canada; Perimeter Institute for Theoretical Physics, Canada; University of Maryland, USA; University of Technology Sydney, Australia) We exhibit a Boolean function for which the quantum communication complexity is exponentially larger than the classical information complexity. An exponential separation in the other direction was already known from the work of Kerenidis et al. [SICOMP 44, pp. 1550–1572], hence our work implies that these two complexity measures are incomparable. As classical information complexity is an upper bound on quantum information complexity, which in turn is equal to amortized quantum communication complexity, our work implies that a tight direct sum result for distributional quantum communication complexity cannot hold. The function we use to present such a separation is the Symmetric k-ary Pointer Jumping function introduced by Rao and Sinha [ECCC TR15-057], whose classical communication complexity is exponentially larger than its classical information complexity. In this paper, we show that the quantum communication complexity of this function is polynomially equivalent to its classical communication complexity. The high-level idea behind our proof is arguably the simplest so far for such an exponential separation between information and communication, driven by a sequence of round-elimination arguments, allowing us to simplify further the approach of Rao and Sinha. As another application of the techniques that we develop, a simple proof for an optimal trade-off between Alice’s and Bob’s communication is given, even when allowing pre-shared entanglement, while computing the related Greater-Than function on n bits: say Bob communicates at most b bits, then Alice must send n/2^{O(b)} bits to Bob. We also present a classical protocol achieving this bound. 
@InProceedings{STOC17p277, author = {Anurag Anshu and Dave Touchette and Penghui Yao and Nengkun Yu}, title = {Exponential Separation of Quantum Communication and Classical Information}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {277--288}, doi = {}, year = {2017}, } 

Yuen, Henry 
STOC '17: "Hardness Amplification for ..."
Hardness Amplification for Entangled Games via Anchoring
Mohammad Bavarian, Thomas Vidick, and Henry Yuen (Massachusetts Institute of Technology, USA; California Institute of Technology, USA; University of California at Berkeley, USA) We study the parallel repetition of one-round games involving players that can use quantum entanglement. A major open question in this area is whether parallel repetition reduces the entangled value of a game at an exponential rate — in other words, does an analogue of Raz’s parallel repetition theorem hold for games with players sharing quantum entanglement? Previous results only apply to special classes of games. We introduce a class of games we call anchored. We then introduce a simple transformation on games called anchoring, inspired in part by the Feige-Kilian transformation, that turns any (multiplayer) game into an anchored game. Unlike the Feige-Kilian transformation, our anchoring transformation is completeness-preserving. We prove an exponential-decay parallel repetition theorem for anchored games that involve any number of entangled players. We also prove a threshold version of our parallel repetition theorem for anchored games. Together, our parallel repetition theorems and anchoring transformation provide the first hardness amplification techniques for general entangled games. We give an application to the games version of the Quantum PCP Conjecture. @InProceedings{STOC17p303, author = {Mohammad Bavarian and Thomas Vidick and Henry Yuen}, title = {Hardness Amplification for Entangled Games via Anchoring}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {303--316}, doi = {}, year = {2017}, } 

Zandieh, Amir 
STOC '17: "An Adaptive Sublinear-Time ..."
An Adaptive Sublinear-Time Block Sparse Fourier Transform
Volkan Cevher, Michael Kapralov, Jonathan Scarlett, and Amir Zandieh (EPFL, Switzerland) The problem of approximately computing the k dominant Fourier coefficients of a vector X quickly, and using few samples in time domain, is known as the Sparse Fourier Transform (sparse FFT) problem. A long line of work on the sparse FFT has resulted in algorithms with O(k log n log(n/k)) runtime [Hassanieh et al., STOC’12] and O(k log n) sample complexity [Indyk et al., FOCS’14]. This paper revisits the sparse FFT problem with the added twist that the sparse coefficients approximately obey a (k_{0},k_{1})-block sparse model. In this model, signal frequencies are clustered in k_{0} intervals with width k_{1} in Fourier space, and k = k_{0}k_{1} is the total sparsity. Our main result is the first sparse FFT algorithm for (k_{0},k_{1})-block sparse signals with a sample complexity of O^{*}(k_{0}k_{1} + k_{0} log(1+k_{0}) log n) at constant signal-to-noise ratios, and sublinear runtime. Our algorithm crucially uses adaptivity to achieve the improved sample complexity bound, and we provide a lower bound showing that this is essential in the Fourier setting: any non-adaptive algorithm must use Ω(k_{0}k_{1} log(n/(k_{0}k_{1}))) samples for the (k_{0},k_{1})-block sparse model, ruling out improvements over the vanilla sparsity assumption. Our main technical innovation for adaptivity is a new randomized energy-based importance sampling technique that may be of independent interest. @InProceedings{STOC17p702, author = {Volkan Cevher and Michael Kapralov and Jonathan Scarlett and Amir Zandieh}, title = {An Adaptive Sublinear-Time Block Sparse Fourier Transform}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {702--715}, doi = {}, year = {2017}, } 
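The (k_{0},k_{1})-block sparse model above can be made concrete with a short sketch that samples a support consisting of k_{0} disjoint intervals of width k_{1} in [0, n). The construction and names are ours, for illustration only; it is not the authors' algorithm:

```python
import random

def block_sparse_support(n, k0, k1, seed=0):
    """Sample a (k0,k1)-block sparse frequency support: k0 disjoint
    intervals of width k1 in [0, n), total sparsity k = k0*k1."""
    rng = random.Random(seed)
    support = set()
    while len(support) < k0 * k1:
        start = rng.randrange(0, n - k1 + 1)
        block = set(range(start, start + k1))
        if block.isdisjoint(support):  # keep the k0 intervals disjoint
            support |= block
    return sorted(support)

S = block_sparse_support(n=1024, k0=4, k1=8)
print(len(S))  # -> 32, i.e. k = k0*k1
```

A signal drawn from this model concentrates its Fourier mass on such a support; the clustering is what the adaptive algorithm exploits to beat the O(k log n) sample complexity of the vanilla k-sparse setting.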

Zdeborova, Lenka 
STOC '17: "Information-Theoretic Thresholds ..."
Information-Theoretic Thresholds from the Cavity Method
Amin Coja-Oghlan, Florent Krzakala, Will Perkins, and Lenka Zdeborova (Goethe University Frankfurt, Germany; CNRS, France; PSL Research University, France; ENS, France; UPMC, France; University of Birmingham, UK; CEA, France; University of Paris-Saclay, France) Vindicating a sophisticated but non-rigorous physics approach called the cavity method, we establish a formula for the mutual information in statistical inference problems induced by random graphs. This general result implies the conjecture on the information-theoretic threshold in the disassortative stochastic block model [Decelle et al.: Phys. Rev. E (2011)] and allows us to pinpoint the exact condensation phase transition in random constraint satisfaction problems such as random graph coloring, thereby proving a conjecture from [Krzakala et al.: PNAS (2007)]. As a further application we establish the formula for the mutual information in Low-Density Generator Matrix codes as conjectured in [Montanari: IEEE Transactions on Information Theory (2005)]. The proofs provide a conceptual underpinning of the replica symmetric variant of the cavity method, and we expect that the approach will find many future applications. @InProceedings{STOC17p146, author = {Amin Coja-Oghlan and Florent Krzakala and Will Perkins and Lenka Zdeborova}, title = {Information-Theoretic Thresholds from the Cavity Method}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {146--157}, doi = {}, year = {2017}, } 

Zenklusen, Rico 
STOC '17: "A Strongly Polynomial Algorithm ..."
A Strongly Polynomial Algorithm for Bimodular Integer Linear Programming
Stephan Artmann, Robert Weismantel, and Rico Zenklusen (ETH Zurich, Switzerland) We present a strongly polynomial algorithm to solve integer programs of the form max{c^{T} x∶ Ax≤ b, x∈ℤ^{n} }, for A∈ℤ^{m× n} with rank(A)=n, b∈ℤ^{m}, c∈ℤ^{n}, and where all determinants of (< 