
49th Annual ACM SIGACT Symposium on the Theory of Computing (STOC 2017), June 19–23, 2017, Montreal, Canada

STOC 2017 – Proceedings



Title Page

Message from the Chairs
The papers in this volume were presented in Montreal, Quebec, Canada at the Forty-Ninth Annual ACM Symposium on Theory of Computing (STOC 2017), sponsored by the ACM Special Interest Group on Algorithms and Computation Theory (SIGACT). The papers were given as both talks and posters during the first four days of a five-day Theory Fest, held Monday through Friday, June 19–23, 2017. Monday's program included tutorials. On Tuesday and Thursday there were two evening poster sessions for other contributed posters as well as STOC papers. A panel discussion on the future of theoretical computer science was held on Wednesday, as was the Knuth Prize presentation and lecture. Workshops were held on Friday. During the week, there were invited talks on previously published work and keynote talks, whose titles and short abstracts are included in this volume.

In response to the Call for Papers, 422 submissions were received by the submission deadline of November 2, 2016. The Program Committee deliberations continued electronically until the in-person meeting at the University of British Columbia in Vancouver, Canada on February 4–5, 2017, where final decisions were made. The Program Committee meeting was attended by 23 of the committee's 25 members. The committee selected 103 papers for presentation; one pair was merged for this volume and two pairs were merged for the talks.
STOC 2017 Conference Organization
Committee listings

Invited Talks

Recent Trends in Decentralized Cryptocurrencies (Invited Talk)
Aviv Zohar
(Hebrew University of Jerusalem, Israel)
Following Bitcoin's introduction, decentralized cryptocurrencies began to emerge as a new application domain for computer science. Bitcoin's protocol has been researched and improved upon along many fronts: from its underlying incentives, through to its cryptographic primitives and its security. Many research questions and challenges still remain as cryptocurrencies and other financial systems that rely on similar principles gain wider adoption.
Why Prices Need Algorithms (Invited Talk)
Tim Roughgarden and Inbal Talgam-Cohen
(Stanford University, USA; Hebrew University of Jerusalem, Israel)

Computational complexity has already had plenty to say about the computation of economic equilibria. However, understanding when equilibria are guaranteed to exist is a central theme in economic theory, seemingly unrelated to computation. In this talk we survey our main results presented at EC’15, which show that the existence of equilibria in markets is inextricably connected to the computational complexity of related optimization problems, such as revenue or welfare maximization. We demonstrate how this relationship implies, under suitable complexity assumptions, a host of impossibility results. We also suggest a complexity-theoretic explanation for the lack of useful extensions of the Walrasian equilibrium concept: such extensions seem to require the invention of novel polynomial-time algorithms for welfare maximization.

Optimizing Tree Pattern Queries: Why Cutting Is Not Enough (Invited Talk)
Wim Martens
(University of Bayreuth, Germany)
Tree pattern queries are a natural language for querying graph- and tree-structured data. A central question for understanding their optimization problem was whether they can be minimized by cutting away redundant parts. This question has been studied since the early 2000s and was recently resolved.
Answering FAQs in CSPs, Probabilistic Graphical Models, Databases, Logic and Matrix Operations (Invited Talk)
Atri Rudra
(SUNY Buffalo, USA)
In this talk we will discuss a general framework for solving certain sums of products of functions over semirings. This captures many well-known problems in disparate areas such as CSPs, probabilistic graphical models, databases, logic, and matrix operations. This talk is based on the joint work "FAQ: Questions Asked Frequently" with Mahmoud Abo Khamis and Hung Q. Ngo, which appeared in PODS 2016.
Fast Convergence of Learning in Games (Invited Talk)
Vasilis Syrgkanis
(Microsoft Research, USA)

A plethora of recent work has analyzed properties of outcomes in games when each player employs a no-regret learning algorithm. Many algorithms achieve regret against the best fixed action in hindsight that decays at a rate of O(1/√T) when the game is played for T iterations. The latter rate is optimal in adversarial settings. However, in a game a player's opponents are minimizing their own regret, rather than maximizing the player's regret. (Daskalakis et al. 2014) and (Rakhlin and Sridharan 2013) showed that in two-player zero-sum games O(1/T) rates are achievable. In (Syrgkanis et al. 2015), we show that O(1/T^{3/4}) rates are achievable in general multi-player games and also analyze convergence of the dynamics to approximately optimal social welfare, where we show a convergence rate of O(1/T). The latter result was subsequently generalized to a broader class of learning algorithms by (Foster et al. 2016). This is based on joint work with Alekh Agarwal, Haipeng Luo, and Robert E. Schapire.

Examining Classical Graph-Theory Problems from the Viewpoint of Formal-Verification Methods (Invited Talk)
Orna Kupferman
(Hebrew University of Jerusalem, Israel)
The talk surveys a series of works that lift the rich semantics and structure of graphs, and the experience of the formal-verification community in reasoning about them, to classical graph-theoretical problems.
The Next 700 Network Programming Languages (Invited Talk)
Nate Foster
(Cornell University, USA)
Specification and verification of computer networks has become a reality in recent years, with the emergence of domain-specific programming languages and automated reasoning tools. But the design of these frameworks has been largely ad hoc, driven more by the needs of applications and the capabilities of hardware than by any foundational principles. This talk will present NetKAT, a language for programming networks based on a well-studied mathematical foundation: regular languages and finite automata. The talk will describe the design of the language, discuss its semantic underpinnings, and present highlights from ongoing work extending the language with stateful and probabilistic features.
Practical Post-quantum Key Agreement from Generic Lattices (Invited Talk)
Valeria Nikolaenko
(Stanford University, USA)
Lattice-based cryptography offers some of the most attractive primitives believed to be resistant to quantum computers. This work introduces "Frodo", a concrete instantiation of a key agreement mechanism based on hard problems in generic lattices.


Session 1A

Twenty (Simple) Questions
Yuval Dagan, Yuval Filmus, Ariel Gabizon, and Shay Moran
(Technion, Israel; Zerocoin Electronic Coin, USA; University of California at San Diego, USA; Simons Institute for the Theory of Computing, Berkeley, USA)

A basic combinatorial interpretation of Shannon’s entropy function is via the "20 questions" game. This cooperative game is played by two players, Alice and Bob: Alice picks a distribution π over the numbers {1,…,n}, and announces it to Bob. She then chooses a number x according to π, and Bob attempts to identify x using as few Yes/No queries as possible, on average. An optimal strategy for the "20 questions" game is given by a Huffman code for π: Bob’s questions reveal the codeword for x bit by bit. This strategy finds x using fewer than H(π)+1 questions on average. However, the questions asked by Bob could be arbitrary. In this paper, we investigate the following question: *Are there restricted sets of questions that match the performance of Huffman codes, either exactly or approximately?* Our first main result shows that for every distribution π, Bob has a strategy that uses only questions of the form "x < c?" and "x = c?", and uncovers x using at most H(π)+1 questions on average, matching the performance of Huffman codes in this sense. We also give a natural set of O(rn^{1/r}) questions that achieve a performance of at most H(π)+r, and show that Ω(rn^{1/r}) questions are required to achieve such a guarantee. Our second main result gives a set Q of 1.25^{n+o(n)} questions such that for every distribution π, Bob can implement an optimal strategy for π using only questions from Q. We also show that 1.25^{n−o(n)} questions are needed, for infinitely many n. If we allow a small slack of r over the optimal strategy, then roughly (rn)^{Θ(1/r)} questions are necessary and sufficient.
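To make the Huffman benchmark concrete, here is a minimal Python sketch (an editorial illustration, not the paper's restricted-question strategy): it builds a Huffman tree for a random distribution π and checks that the expected number of bit-by-bit questions stays below H(π)+1.

    import heapq, math, random
    from itertools import count

    def huffman_expected_cost(pi):
        # Expected number of Yes/No questions when Bob reveals the Huffman
        # codeword bit by bit: each merge adds one question for all the
        # probability mass below it.
        tie = count()                          # tie-breaker for equal probabilities
        heap = [(p, next(tie)) for p in pi]
        heapq.heapify(heap)
        cost = 0.0
        while len(heap) > 1:
            p1, _ = heapq.heappop(heap)
            p2, _ = heapq.heappop(heap)
            cost += p1 + p2
            heapq.heappush(heap, (p1 + p2, next(tie)))
        return cost

    random.seed(0)
    w = [random.random() for _ in range(16)]
    pi = [x / sum(w) for x in w]
    H = -sum(p * math.log2(p) for p in pi)
    print(huffman_expected_cost(pi) < H + 1)   # True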

Kolmogorov Complexity Version of Slepian-Wolf Coding
Marius Zimand
(Towson University, USA)

Alice and Bob are given two correlated n-bit strings x1 and, respectively, x2, which they want to losslessly compress and send to Zack. They can either collaborate by sharing their strings, or work separately. We show that there is no disadvantage in the second scenario: Alice and Bob, without knowing the other party’s string, can compress their strings to almost minimal description length in the sense of Kolmogorov complexity. Furthermore, compression takes polynomial time and can be made at any combination of lengths that satisfy some necessary conditions (modulo additive polylogarithmic terms). More precisely, there exist probabilistic algorithms E1, E2, and D, with E1 and E2 running in polynomial time, having the following behavior: if n1, n2 are two integers satisfying n1 + n2 ≥ C(x1, x2), n1 ≥ C(x1 | x2), n2 ≥ C(x2 | x1), then for i ∈ {1,2}, Ei on input xi and ni outputs a string of length ni + O(log^3 n) such that D on input E1(x1), E2(x2) reconstructs (x1, x2) with high probability (where C(x) denotes the plain Kolmogorov complexity of x, and C(x | y) is the complexity of x conditioned on y). Our main result is more general, as it deals with the compression of any constant number of correlated strings. It is an analog in the framework of algorithmic information theory of the classic Slepian-Wolf Theorem, a fundamental result in network information theory, in which x1 and x2 are realizations of two discrete random variables representing n independent draws from a joint distribution. In the classical result, the decompressor needs to know the joint distribution of the sources. In our result no type of independence is assumed and the decompressor does not have any prior information about the sources that are compressed.

Synchronization Strings: Codes for Insertions and Deletions Approaching the Singleton Bound
Bernhard Haeupler and Amirbehshad Shahrasbi
(Carnegie Mellon University, USA)

We introduce synchronization strings, which provide a novel way of efficiently dealing with synchronization errors, i.e., insertions and deletions. Synchronization errors are strictly more general and much harder to deal with than the more commonly considered half-errors, i.e., symbol corruptions and erasures. For every ε > 0, synchronization strings allow one to index a sequence with an alphabet of size ε^{−O(1)} such that k synchronization errors can be efficiently transformed into (1 + ε)k half-errors. This powerful new technique has many applications. In this paper we focus on designing insdel codes, i.e., error correcting block codes (ECCs) for insertion-deletion channels.

While ECCs for both half-errors and synchronization errors have been intensely studied, the latter has largely resisted progress. As Mitzenmacher puts it in his 2009 survey: “Channels with synchronization errors ... are simply not adequately understood by current theory. Given the near-complete knowledge we have for channels with erasures and errors ... our lack of understanding about channels with synchronization errors is truly remarkable.” Indeed, it took until 1999 for the first insdel codes with constant rate, constant distance, and constant alphabet size to be constructed, and only since 2016 are there constructions of constant-rate insdel codes for asymptotically large noise rates. Even in the asymptotically large or small noise regime these codes are polynomially far from the optimal rate-distance tradeoff. This makes the understanding of insdel codes prior to this work equivalent to what was known for regular ECCs after Forney introduced concatenated codes in his doctoral thesis 50 years ago.

A straightforward application of our synchronization-string-based indexing method gives a simple black-box construction which transforms any ECC into an equally efficient insdel code with only a small increase in the alphabet size. This instantly transfers much of the highly developed understanding for regular ECCs over large constant alphabets into the realm of insdel codes. Most notably, for the complete noise spectrum we obtain efficient “near-MDS” insdel codes which get arbitrarily close to the optimal rate-distance tradeoff given by the Singleton bound. In particular, for any δ ∈ (0,1) and ε > 0 we give insdel codes achieving a rate of 1 − δ − ε over a constant-size alphabet that efficiently correct a δ fraction of insertions or deletions.
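As an editorial illustration of the indexing idea (with explicit positions standing in for the constant-alphabet synchronization string of the paper), the Python sketch below shows how attaching an index to each symbol turns a deletion into an erasure that an ordinary ECC could then repair:

    def index_encode(codeword):
        # Pair each symbol with its position; synchronization strings achieve
        # the same effect with an alphabet size independent of the length.
        return [(i, c) for i, c in enumerate(codeword)]

    def index_decode(received, n):
        # Put each surviving symbol at its claimed position; missing or
        # doubly claimed positions become erasures (None) for the ECC.
        slots = [[] for _ in range(n)]
        for i, c in received:
            if 0 <= i < n:
                slots[i].append(c)
        return [s[0] if len(s) == 1 else None for s in slots]

    sent = index_encode("HELLOWORLD")
    corrupted = sent[:3] + sent[4:]        # one deletion
    print(index_decode(corrupted, 10))     # the deletion surfaces as one erasure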


Session 1B

Learning from Untrusted Data
Moses Charikar, Jacob Steinhardt, and Gregory Valiant
(Stanford University, USA)

The vast majority of theoretical results in machine learning and statistics assume that the training data is a reliable reflection of the phenomena to be learned. Similarly, most learning techniques used in practice are brittle to the presence of large amounts of biased or malicious data. Motivated by this, we consider two frameworks for studying estimation, learning, and optimization in the presence of significant fractions of arbitrary data.

The first framework, list-decodable learning, asks whether it is possible to return a list of answers such that at least one is accurate. For example, given a dataset of n points for which an unknown subset of α n points are drawn from a distribution of interest, and no assumptions are made about the remaining (1−α)n points, is it possible to return a list of poly(1/α) answers? The second framework, which we term the semi-verified model, asks whether a small dataset of trusted data (drawn from the distribution in question) can be used to extract accurate information from a much larger but untrusted dataset (of which only an α-fraction is drawn from the distribution).

We show strong positive results in both settings, and provide an algorithm for robust learning in a very general stochastic optimization setting. This result has immediate implications for robustly estimating the mean of distributions with bounded second moments, robustly learning mixtures of such distributions, and robustly finding planted partitions in random graphs in which significant portions of the graph have been perturbed by an adversary.

Beating 1-1/e for Ordered Prophets
Melika Abolhassani, Soheil Ehsani, Hossein Esfandiari, MohammadTaghi HajiAghayi, Robert Kleinberg, and Brendan Lucier
(University of Maryland at College Park, USA; Cornell University, USA; Microsoft Research, USA)

Hill and Kertz studied the prophet inequality on iid distributions [The Annals of Probability 1982]. They proved a theoretical bound of 1 − 1/e on the approximation factor of their algorithm, and conjectured that the best approximation factor for arbitrarily large n is 1/(1+1/e) ≈ 0.731. This conjecture had remained open for over 30 years. In this paper we present a threshold-based algorithm for the prophet inequality with n iid distributions. Using a nontrivial and novel approach we show that our algorithm is a 0.738-approximation algorithm. By beating the bound of 1/(1+1/e), this refutes the conjecture of Hill and Kertz. Moreover, we generalize our results to non-uniform distributions and discuss their applications in mechanism design.
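For intuition, the Monte-Carlo sketch below (a hedged illustration; the paper's 0.738 bound relies on a carefully chosen sequence of thresholds, not this single one) runs a one-threshold stopping rule against iid Exponential(1) draws and reports its empirical fraction of the prophet's value:

    import math, random

    def trial(n, tau):
        vals = [random.expovariate(1.0) for _ in range(n)]
        taken = next((v for v in vals if v >= tau), 0.0)   # stop at first value above tau
        return taken, max(vals)

    random.seed(1)
    n = 20
    tau = math.log(n)                        # for Exp(1), P(X >= tau) = 1/n
    res = [trial(n, tau) for _ in range(100000)]
    print(sum(t for t, _ in res) / sum(m for _, m in res))   # roughly 0.7 here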

Kernel-Based Methods for Bandit Convex Optimization
Sébastien Bubeck, Yin Tat Lee, and Ronen Eldan
(Microsoft Research, USA; Weizmann Institute of Science, Israel)

We consider the adversarial convex bandit problem and we build the first poly(T)-time algorithm with poly(n)√T-regret for this problem. To do so we introduce three new ideas in the derivative-free optimization literature: (i) kernel methods, (ii) a generalization of Bernoulli convolutions, and (iii) a new annealing schedule for exponential weights (with increasing learning rate). The basic version of our algorithm achieves Õ(n^{9.5}√T)-regret, and we show that a simple variant of this algorithm can be run in poly(n log(T))-time per step at the cost of an additional poly(n) T^{o(1)} factor in the regret. These results improve upon the Õ(n^{11}√T)-regret and exp(poly(T))-time result of the first two authors, and the log(T)^{poly(n)}√T-regret and log(T)^{poly(n)}-time result of Hazan and Li. Furthermore we conjecture that another variant of the algorithm could achieve Õ(n^{1.5}√T)-regret, and moreover that this regret is unimprovable (the current best lower bound being Ω(n√T), achieved with linear functions). For the simpler situation of zeroth-order stochastic convex optimization this corresponds to the conjecture that the optimal query complexity is of order n^3/ε^2.


Session 1C

New Hardness Results for Routing on Disjoint Paths
Julia Chuzhoy, David H. K. Kim, and Rachit Nimavat
(Toyota Technological Institute at Chicago, USA; University of Chicago, USA)

In the classical Node-Disjoint Paths (NDP) problem, the input consists of an undirected n-vertex graph G, and a collection M={(s1,t1),…,(sk,tk)} of pairs of its vertices, called source-destination, or demand, pairs. The goal is to route the largest possible number of the demand pairs via node-disjoint paths. The best current approximation for the problem is achieved by a simple greedy algorithm, whose approximation factor is O(√n), while the best current negative result is an Ω(log^{1/2−δ} n)-hardness of approximation for any constant δ, under standard complexity assumptions. Even seemingly simple special cases of the problem are still poorly understood: when the input graph is a grid, the best current algorithm achieves an Õ(n^{1/4})-approximation, and when it is a general planar graph, the best current approximation ratio of an efficient algorithm is Õ(n^{9/19}). The best currently known lower bound for both these versions of the problem is APX-hardness.

In this paper we prove that NDP is 2^{Ω(√log n)}-hard to approximate, unless all problems in NP have algorithms with running time n^{O(log n)}. Our result holds even when the underlying graph is a planar graph with maximum vertex degree 4, and all source vertices lie on the boundary of a single face (but the destination vertices may lie anywhere in the graph). We extend this result to the closely related Edge-Disjoint Paths problem, showing the same hardness-of-approximation ratio even for sub-cubic planar graphs with all sources lying on the boundary of a single face.

A Simpler and Faster Strongly Polynomial Algorithm for Generalized Flow Maximization
Neil Olver and László A. Végh
(VU University Amsterdam, Netherlands; CWI, Netherlands; London School of Economics, UK)

We present a new strongly polynomial algorithm for generalized flow maximization. The first strongly polynomial algorithm for this problem was given very recently by Végh; our new algorithm is much simpler, and much faster. The complexity bound O((m + n log n) m n log(n^2/m)) improves on Végh's previous estimate by almost a factor of O(n^2). Even for small numerical parameter values, our algorithm is essentially as fast as the best weakly polynomial algorithms. The key new technical idea is relaxing the primal feasibility conditions. This allows us to work almost exclusively with integral flows, in contrast to all previous algorithms.

Finding Even Cycles Faster via Capped k-Walks
Søren Dahlgaard, Mathias Bæk Tejs Knudsen, and Morten Stöckel
(University of Copenhagen, Denmark)

Finding cycles in graphs is a fundamental problem in algorithmic graph theory. In this paper, we consider the problem of finding and reporting a cycle of length 2k in an undirected graph G with n nodes and m edges for constant k ≥ 2. A classic result by Bondy and Simonovits [J. Combinatorial Theory, 1974] implies that if m ≥ 100k · n^{1+1/k}, then G contains a 2k-cycle, further implying that one needs to consider only graphs with m = O(n^{1+1/k}).

Previously the best known algorithms were an O(n^2) algorithm due to Yuster and Zwick [J. Discrete Math 1997] as well as an O(m^{2−(1+⌈k/2⌉^{−1})/(k+1)}) algorithm by Alon et al. [Algorithmica 1997].

We present an algorithm that uses O(m^{2k/(k+1)}) time and finds a 2k-cycle if one exists. This bound is O(n^2) exactly when m = Θ(n^{1+1/k}). When finding 4-cycles our new bound coincides with that of Alon et al., while for every k > 2 our new bound yields a polynomial improvement in m.
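For contrast with these bounds, here is the folklore common-neighbour method for the k = 2 case (4-cycles) in Python; because it stops at the first repeated pair, it examines O(n^2) pairs in the worst case. This is the classical baseline flavour, not the paper's capped-walk algorithm:

    from itertools import combinations

    def find_c4(adj):
        # adj: dict node -> set of neighbours
        seen = {}                               # pair (u, w) -> middle vertex
        for v, nbrs in adj.items():
            for u, w in combinations(sorted(nbrs), 2):
                if (u, w) in seen:
                    return (u, seen[(u, w)], w, v)   # 4-cycle u-x-w-v-u
                seen[(u, w)] = v
        return None

    adj = {1: {2, 4}, 2: {1, 3}, 3: {2, 4}, 4: {1, 3}}
    print(find_c4(adj))                         # (2, 1, 4, 3)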

Yuster and Zwick noted that it is “plausible to conjecture that O(n^2) is the best possible bound in terms of n”. We show “conditional optimality”: if this hypothesis holds then our O(m^{2k/(k+1)}) algorithm is tight as well. Furthermore, a folklore reduction implies that no combinatorial algorithm can determine if a graph contains a 6-cycle in time O(m^{3/2−ε}) for any ε > 0 unless boolean matrix multiplication can be solved combinatorially in time O(n^{3−ε′}) for some ε′ > 0, which is widely believed to be false. Coupled with our main result, this gives tight bounds for finding 6-cycles combinatorially and also separates the complexity of finding 4- and 6-cycles, giving evidence that the exponent of m in the running time should indeed increase with k.

The key ingredient in our algorithm is a new notion of capped k-walks, which are walks of length k that visit only nodes according to a fixed ordering. Our main technical contribution is an involved analysis proving several properties of such walks which may be of independent interest.


Session 2A

Strongly Refuting Random CSPs Below the Spectral Threshold
Prasad Raghavendra, Satish Rao, and Tselil Schramm
(University of California at Berkeley, USA)

Random constraint satisfaction problems (CSPs) are known to exhibit threshold phenomena: given a uniformly random instance of a CSP with n variables and m clauses, there is a value of m = Ω(n) beyond which the CSP will be unsatisfiable with high probability. Strong refutation is the problem of certifying that no variable assignment satisfies more than a constant fraction of clauses; this is the natural algorithmic problem in the unsatisfiable regime (when m/n = ω(1)).

Intuitively, strong refutation should become easier as the clause density m/n grows, because the contradictions introduced by the random clauses become more locally apparent. For CSPs such as k-SAT and k-XOR, there is a long-standing gap between the clause density at which efficient strong refutation algorithms are known, m/n ≥ Õ(n^{k/2−1}), and the clause density at which instances become unsatisfiable with high probability, m/n = ω(1).

In this paper, we give spectral and sum-of-squares algorithms for strongly refuting random k-XOR instances with clause density m/n ≥ Õ(n^{(k/2−1)(1−δ)}) in time exp(Õ(n^δ)) or in Õ(n^δ) rounds of the sum-of-squares hierarchy, for any δ ∈ [0,1) and any integer k ≥ 3. Our algorithms provide a smooth transition between the clause density at which polynomial-time algorithms are known at δ = 0, and brute-force refutation at the satisfiability threshold when δ = 1. We also leverage our k-XOR results to obtain strong refutation algorithms for SAT (or any other Boolean CSP) at similar clause densities.
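The k = 2 analogue of the spectral approach is easy to state and run; the sketch below (illustrative only, since the paper's contribution is the k ≥ 3 regime and the density/time trade-off) certifies an upper bound on the satisfiable fraction of a random 2-XOR instance from one eigenvalue computation:

    import numpy as np

    def certify_2xor(n, m, rng):
        # Signed adjacency matrix of a random 2-XOR instance over x in {-1,1}^n.
        A = np.zeros((n, n))
        for _ in range(m):
            i, j = rng.choice(n, size=2, replace=False)
            s = rng.choice([-1.0, 1.0])        # constraint x_i * x_j = s
            A[i, j] += s
            A[j, i] += s
        lam = np.linalg.eigvalsh(A)[-1]        # top eigenvalue
        # For any x in {-1,1}^n: (#sat - #unsat) = x^T A x / 2 <= lam*n/2,
        # so the satisfiable fraction is at most 1/2 + lam*n/(4m).
        return 0.5 + lam * n / (4 * m)

    rng = np.random.default_rng(0)
    print(certify_2xor(200, 8000, rng))        # certified bound well below 1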

Our algorithms match the known sum-of-squares lower bounds due to Grigoriev and Schoenebeck, up to logarithmic factors.

Sum of Squares Lower Bounds for Refuting any CSP
Pravesh K. Kothari, Ryuhei Mori, Ryan O'Donnell, and David Witmer
(Princeton University, USA; IAS, USA; Tokyo Institute of Technology, Japan; Carnegie Mellon University, USA)

Let P : {0,1}^k → {0,1} be a nontrivial k-ary predicate. Consider a random instance of the constraint satisfaction problem CSP(P) on n variables with Δn constraints, each being P applied to k randomly chosen literals. Provided the constraint density satisfies Δ ≫ 1, such an instance is unsatisfiable with high probability. The refutation problem is to efficiently find a proof of unsatisfiability.

We show that whenever the predicate P supports a t-wise uniform probability distribution on its satisfying assignments, the sum of squares (SOS) algorithm of degree d = Θ(n/Δ^{2/(t−1)} · log Δ) (which runs in time n^{O(d)}) cannot refute a random instance of CSP(P). In particular, the polynomial-time SOS algorithm requires Ω(n^{(t+1)/2}) constraints to refute random instances of CSP(P) when P supports a t-wise uniform distribution on its satisfying assignments. Together with recent work of Lee, Raghavendra, and Steurer (2015), our result also implies that any polynomial-size semidefinite programming relaxation for refutation requires at least Ω(n^{(t+1)/2}) constraints.

More generally, we consider the δ-refutation problem, in which the goal is to certify that at most a (1−δ)-fraction of constraints can be simultaneously satisfied. We show that if P is δ-close to supporting a t-wise uniform distribution on satisfying assignments, then the degree-Ω(n/Δ^{2/(t−1)} · log Δ) SOS algorithm cannot (δ+o(1))-refute a random instance of CSP(P). This is the first result to show a distinction between the degree SOS needs to solve the refutation problem and the degree it needs to solve the harder δ-refutation problem.

Our results (which also extend with no change to CSPs over larger alphabets) subsume all previously known lower bounds for semialgebraic refutation of random CSPs. For every constraint predicate P, they give a three-way hardness tradeoff between the density of constraints, the SOS degree (hence running time), and the strength of the refutation. By recent algorithmic results of Allen, O’Donnell, Witmer (2015) and Raghavendra, Rao, Schramm (2016), this full three-way tradeoff is tight, up to lower-order factors.

Information-Theoretic Thresholds from the Cavity Method
Amin Coja-Oghlan, Florent Krzakala, Will Perkins, and Lenka Zdeborova
(Goethe University Frankfurt, Germany; CNRS, France; PSL Research University, France; ENS, France; UPMC, France; University of Birmingham, UK; CEA, France; University of Paris-Saclay, France)
Vindicating a sophisticated but non-rigorous physics approach called the cavity method, we establish a formula for the mutual information in statistical inference problems induced by random graphs. This general result implies the conjecture on the information-theoretic threshold in the disassortative stochastic block model [Decelle et al.: Phys. Rev. E (2011)] and allows us to pinpoint the exact condensation phase transition in random constraint satisfaction problems such as random graph coloring, thereby proving a conjecture from [Krzakala et al.: PNAS (2007)]. As a further application we establish the formula for the mutual information in Low-Density Generator Matrix codes as conjectured in [Montanari: IEEE Transactions on Information Theory (2005)]. The proofs provide a conceptual underpinning of the replica symmetric variant of the cavity method, and we expect that the approach will find many future applications.

Session 2B

Bernoulli Factories and Black-Box Reductions in Mechanism Design
Shaddin Dughmi, Jason D. Hartline, Robert Kleinberg, and Rad Niazadeh
(University of Southern California, USA; Northwestern University, USA; Cornell University, USA)

We provide a polynomial-time reduction from Bayesian incentive-compatible mechanism design to Bayesian algorithm design for welfare maximization problems. Unlike prior results, our reduction achieves exact incentive compatibility for problems with multi-dimensional and continuous type spaces.

The key technical barrier preventing exact incentive compatibility in prior black-box reductions is that repairing violations of incentive constraints requires understanding the distribution of the mechanism’s output, which is typically #P-hard to compute. Reductions that instead estimate the output distribution by sampling inevitably suffer from sampling error, which typically precludes exact incentive compatibility.

We overcome this barrier by employing and generalizing the computational model in the literature on "Bernoulli Factories". In a Bernoulli factory problem, one is given a function mapping the bias of an “input coin” to that of an “output coin”, and the challenge is to efficiently simulate the output coin given only sample access to the input coin. Consider a generalization which we call the "expectations from samples" computational model, in which a problem instance is specified by a function mapping the expected values of a set of input distributions to a distribution over outcomes. The challenge is to give a polynomial time algorithm that exactly samples from the distribution over outcomes given only sample access to the input distributions.
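A minimal example of the flavour of this model (von Neumann's classical trick, not the exponential-weights factory the paper actually needs): given only sample access to a coin of unknown bias p, produce exact samples of a coin with bias f(p) = 1/2.

    import random

    def input_coin(p=0.3):
        # Sample access only; the factory never sees p itself.
        return random.random() < p

    def fair_coin(coin):
        while True:
            a, b = coin(), coin()
            if a != b:          # HT and TH each occur with probability p(1-p)
                return a

    random.seed(5)
    print(sum(fair_coin(input_coin) for _ in range(10000)) / 10000)  # ~0.5

The point mirrors the abstract: the output distribution is exactly f(p), with no sampling error, which is what exact incentive compatibility requires.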

In this model we give a polynomial time algorithm for the function given by "exponential weights": expected values of the input distributions correspond to the weights of alternatives and we wish to select an alternative with probability proportional to its weight. This algorithm is the key ingredient in designing an incentive compatible mechanism for bipartite matching, which can be used to make the approximately incentive compatible reduction of Hartline-Malekian-Kleinberg [2015] exactly incentive compatible.

Simple Mechanisms for Subadditive Buyers via Duality
Yang Cai and Mingfei Zhao
(McGill University, Canada)

We provide simple and approximately revenue-optimal mechanisms in multi-item multi-bidder settings. We unify and improve all previous results, as well as generalize the results to broader cases. In particular, we prove that the better of the following two simple, deterministic, and Dominant Strategy Incentive Compatible mechanisms, a sequential posted price mechanism or an anonymous sequential posted price mechanism with entry fee, achieves a constant fraction of the optimal revenue among all randomized, Bayesian Incentive Compatible mechanisms, when buyers’ valuations are XOS over independent items. If the buyers’ valuations are subadditive over independent items, the approximation factor degrades to O(log m), where m is the number of items. We obtain our results by first extending the Cai-Devanur-Weinberg duality framework to derive an effective benchmark of the optimal revenue for subadditive bidders, and then analyzing this upper bound with new techniques.
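To fix ideas, here is a toy sequential posted price mechanism in Python for additive buyers (a special case of XOS), with arbitrary placeholder prices; the theorem's guarantee of course relies on prices derived from the duality benchmark, which this sketch does not compute:

    import random

    def spm(values, prices):
        # values[i][j]: buyer i's value for item j; buyers arrive in order
        # and take every remaining item priced below their value.
        revenue, sold = 0.0, set()
        for vals in values:
            for j, v in enumerate(vals):
                if j not in sold and v >= prices[j]:
                    revenue += prices[j]
                    sold.add(j)
        return revenue

    random.seed(6)
    values = [[random.random() for _ in range(3)] for _ in range(5)]
    print(spm(values, prices=[0.5, 0.5, 0.5]))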

Stability of Service under Time-of-Use Pricing
Shuchi Chawla, Nikhil R. Devanur, Alexander E. Holroyd, Anna R. Karlin, James B. Martin, and Balasubramanian Sivan
(University of Wisconsin-Madison, USA; Microsoft Research, USA; University of Washington, USA; University of Oxford, UK; Google Research, USA)

We consider time-of-use pricing as a technique for matching supply and demand of temporal resources with the goal of maximizing social welfare. Relevant examples include energy, computing resources on a cloud computing platform, and charging stations for electric vehicles, among many others. A client/job in this setting has a window of time during which he needs service, and a particular value for obtaining it. We assume a stochastic model for demand, where each job materializes with some probability via an independent Bernoulli trial. Given a per-time-unit pricing of resources, any realized job will first try to get served by the cheapest available resource in its window and, failing that, will try to find service at the next cheapest available resource, and so on. Thus, the natural stochastic fluctuations in demand have the potential to lead to cascading overload events. Our main result shows that setting prices so as to optimally handle the expected demand works well: with high probability, when the actual demand is instantiated, the system is stable and the expected value of the jobs served is very close to that of the optimal offline algorithm.
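A toy simulation of this service model (an editorial sketch under simplifying assumptions: unit capacity per time slot and unit-length jobs) shows the greedy behaviour described above:

    import random

    def serve(jobs, prices):
        # jobs: (window_start, window_end, value, prob)
        free = set(range(len(prices)))
        welfare = 0.0
        for s, e, v, q in jobs:
            if random.random() > q:
                continue                        # job did not materialize
            window = sorted((t for t in free if s <= t <= e),
                            key=lambda t: prices[t])
            if window and v >= prices[window[0]]:
                free.discard(window[0])         # cheapest available slot wins
                welfare += v
        return welfare

    random.seed(7)
    prices = [1.0, 2.0, 1.5, 3.0]
    jobs = [(0, 2, 5.0, 0.9), (1, 3, 2.5, 0.5), (0, 1, 4.0, 0.7)]
    print(serve(jobs, prices))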


Session 2C

Faster Space-Efficient Algorithms for Subset Sum and k-Sum
Nikhil Bansal, Shashwat Garg, Jesper Nederlof, and Nikhil Vyas
(Eindhoven University of Technology, Netherlands; IIT Bombay, India)

We present randomized algorithms that solve Subset Sum and Knapsack instances with n items in O*(2^{0.86n}) time, where the O*(·) notation suppresses factors polynomial in the input size, and polynomial space, assuming random read-only access to exponentially many random bits. These results can be extended to solve Binary Linear Programming on n variables with few constraints in a similar running time. We also show that for any constant k ≥ 2, random instances of k-Sum can be solved using O(n^{k−0.5} polylog(n)) time and O(log n) space, without the assumption of random access to random bits.

Underlying these results is an algorithm that determines whether two given lists of length n with integers bounded by a polynomial in n share a common value. Assuming random read-only access to random bits, we show that this problem can be solved using O(log n) space significantly faster than the trivial O(n^2)-time algorithm if no value occurs too often in the same list.
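For scale, the classical meet-in-the-middle baseline (Horowitz-Sahni) against which such results are measured runs in O*(2^{n/2}) time but also exponential space; a short Python version:

    from itertools import combinations

    def subset_sum(items, target):
        # Enumerate all subset sums of each half, then look for a match.
        half = len(items) // 2
        def all_sums(part):
            return {sum(c) for r in range(len(part) + 1)
                    for c in combinations(part, r)}
        left, right = all_sums(items[:half]), all_sums(items[half:])
        return any(target - s in right for s in left)

    print(subset_sum([3, 34, 4, 12, 5, 2], 9))   # True: 4 + 5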

Homomorphisms Are a Good Basis for Counting Small Subgraphs
Radu Curticapean, Holger Dell, and Dániel Marx
(Hungarian Academy of Sciences, Hungary; Saarland University, Germany)

We introduce graph motif parameters, a class of graph parameters that depend only on the frequencies of constant-size induced subgraphs. Classical works by Lovász show that many interesting quantities have this form, including, for fixed graphs H, the number of H-copies (induced or not) in an input graph G, and the number of homomorphisms from H to G.

We use the framework of graph motif parameters to obtain faster algorithms for counting subgraph copies of fixed graphs H in host graphs G. More precisely, for graphs H on k edges, we show how to count subgraph copies of H in time k^{O(k)} · n^{0.174k + o(k)} by a surprisingly simple algorithm. This improves upon previously known running times, such as O(n^{0.91k + c}) time for k-edge matchings or O(n^{0.46k + c}) time for k-cycles.
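The smallest instance of the homomorphism basis is worth seeing: for H a triangle, hom(H, G) = trace(A^3), every such homomorphism into a loopless G is automatically injective, and the subgraph count falls out by dividing by the 6 vertex orderings. A Python sketch:

    import numpy as np

    A = np.array([[0, 1, 1, 0],
                  [1, 0, 1, 1],
                  [1, 1, 0, 1],
                  [0, 1, 1, 0]])                  # triangles {0,1,2} and {1,2,3}
    hom = np.trace(np.linalg.matrix_power(A, 3))  # hom(K3, G) = closed 3-walks
    print(hom // 6)                               # -> 2 triangle subgraphs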

Furthermore, we prove a general complexity dichotomy for evaluating graph motif parameters: given a class C of such parameters, we consider the problem of evaluating a parameter f ∈ C on input graphs G, parameterized by the number of induced subgraphs that f depends upon. For every recursively enumerable class C, we prove the above problem to be either FPT or #W[1]-hard, with an explicit dichotomy criterion. This allows us to recover known dichotomies for counting subgraphs, induced subgraphs, and homomorphisms in a uniform and simplified way, together with improved lower bounds.

Finally, we extend graph motif parameters to colored subgraphs and prove a complexity trichotomy: For vertex-colored graphs H and G, where H is from a fixed class of graphs, we want to count color-preserving H-copies in G. We show that this problem is either polynomial-time solvable or FPT or #W[1]-hard, and that the FPT cases indeed need FPT time under reasonable assumptions.

Lossy Kernelization
Daniel Lokshtanov, Fahad Panolan, M. S. Ramanujan, and Saket Saurabh
(University of Bergen, Norway; Vienna University of Technology, Austria; Institute of Mathematical Sciences, India)

In this paper we propose a new framework for analyzing the performance of preprocessing algorithms. Our framework builds on the notion of kernelization from parameterized complexity. However, as opposed to the original notion of kernelization, our definitions combine well with approximation algorithms and heuristics. The key new definition is that of a polynomial size α-approximate kernel. Loosely speaking, a polynomial size α-approximate kernel is a polynomial time pre-processing algorithm that takes as input an instance (I, k) of a parameterized problem, and outputs another instance (I′, k′) of the same problem, such that |I′| + k′ ≤ k^{O(1)}. Additionally, for every c ≥ 1, a c-approximate solution s′ to the pre-processed instance (I′, k′) can be turned in polynomial time into a (c · α)-approximate solution s to the original instance (I, k).

Amongst our main technical contributions are α-approximate kernels of polynomial size for three problems, namely Connected Vertex Cover, Disjoint Cycle Packing, and Disjoint Factors. These problems are known not to admit any polynomial size kernels unless NP ⊆ coNP/poly. Our approximate kernels simultaneously beat both the lower bounds on the (normal) kernel size, and the hardness of approximation lower bounds for all three problems. On the negative side we prove that Longest Path parameterized by the length of the path and Set Cover parameterized by the universe size do not admit even an α-approximate kernel of polynomial size, for any α ≥ 1, unless NP ⊆ coNP/poly. In order to prove this lower bound we need to combine in a non-trivial way the techniques used for showing kernelization lower bounds with the methods for showing hardness of approximation.


Session 3: STOC Best Papers

Explicit, Almost Optimal, Epsilon-Balanced Codes
Amnon Ta-Shma
(Tel Aviv University, Israel)

The question of finding an epsilon-biased set with close to optimal support size, or, equivalently, finding an explicit binary code with distance 1/2 − ε/2 and rate close to the Gilbert-Varshamov bound, attracted a lot of attention in recent decades. In this paper we solve the problem almost optimally and show an explicit ε-biased set over k bits with support size O(k/ε^{2+o(1)}). This improves upon all previous explicit constructions which were in the order of k^2/ε^2, k/ε^3, or k^{5/4}/ε^{5/2}. The result is close to the Gilbert-Varshamov bound, which is O(k/ε^2), and the lower bound, which is Ω(k/(ε^2 log(1/ε))).

The main technical tool we use is bias amplification with the s-wide replacement product. The sum of two independent samples from an ε-biased set is ε^2-biased. Rozenman and Wigderson showed how to amplify the bias more economically by choosing two samples with an expander. Based on that they suggested a recursive construction that achieves sample size O(k/ε^4). We show that amplification with a long random walk over the s-wide replacement product reduces the bias almost optimally.
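The squaring step is easy to check numerically; the Python sketch below computes the bias of a random multiset S over F_2^k and of the multiset of pairwise sums, whose bias is exactly the square (the expensive part the construction avoids is doing such amplification without squaring the support size):

    import itertools, random

    def bias(S, k):
        # bias(S) = max over nonzero tests t of |avg of (-1)^{<t,s>}|
        return max(abs(sum((-1) ** (sum(ti & si for ti, si in zip(t, s)) % 2)
                           for s in S)) / len(S)
                   for t in itertools.product([0, 1], repeat=k) if any(t))

    random.seed(8)
    k = 6
    S = [tuple(random.randint(0, 1) for _ in range(k)) for _ in range(40)]
    S2 = [tuple(a ^ b for a, b in zip(s, t)) for s in S for t in S]
    print(bias(S, k), bias(S2, k))   # the second is the square of the first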

Deciding Parity Games in Quasipolynomial Time
Cristian S. Calude, Sanjay Jain, Bakhadyr Khoussainov, Wei Li, and Frank Stephan
(University of Auckland, New Zealand; National University of Singapore, Singapore)

It is shown that the parity game can be solved in quasipolynomial time. The parameterised parity game – with n nodes and m distinct values (aka colours or priorities) – is proven to be in the class of fixed parameter tractable (FPT) problems when parameterised over m. Both results improve known bounds, from runtime n^{O(√n)} to O(n^{log(m)+6}) and from an XP-algorithm with runtime O(n^{Θ(m)}) for fixed parameter m to an FPT-algorithm with runtime O(n^5) + g(m), for some function g depending on m only. As an application it is proven that coloured Muller games with n nodes and m colours can be decided in time O((m^m · n)^5); it is also shown that this bound cannot be improved to O((2^m · n)^c), for any c, unless FPT = W[1].

A Weighted Linear Matroid Parity Algorithm
Satoru Iwata and Yusuke Kobayashi
(University of Tokyo, Japan; University of Tsukuba, Japan)

The matroid parity (or matroid matching) problem, introduced as a common generalization of matching and matroid intersection problems, is so general that it requires an exponential number of oracle calls. Lovász (1980) showed that this problem admits a min-max formula and a polynomial algorithm for linearly represented matroids. Since then efficient algorithms have been developed for the linear matroid parity problem.

In this paper, we present a combinatorial, deterministic, strongly polynomial algorithm for the weighted linear matroid parity problem. The algorithm builds on a polynomial matrix formulation using Pfaffian and adopts a primal-dual approach with the aid of the augmenting path algorithm of Gabow and Stallmann (1986) for the unweighted problem.


Session 4A

Exponential Separation of Quantum Communication and Classical Information
Anurag Anshu, Dave Touchette, Penghui Yao, and Nengkun Yu
(National University of Singapore, Singapore; University of Waterloo, Canada; Perimeter Institute for Theoretical Physics, Canada; University of Maryland, USA; University of Technology Sydney, Australia)

We exhibit a Boolean function for which the quantum communication complexity is exponentially larger than the classical information complexity. An exponential separation in the other direction was already known from the work of Kerenidis et al. [SICOMP 44, pp. 1550–1572], hence our work implies that these two complexity measures are incomparable.

As classical information complexity is an upper bound on quantum information complexity, which in turn is equal to amortized quantum communication complexity, our work implies that a tight direct sum result for distributional quantum communication complexity cannot hold.

The function we use to present such a separation is the Symmetric k-ary Pointer Jumping function introduced by Rao and Sinha [ECCC TR15-057], whose classical communication complexity is exponentially larger than its classical information complexity. In this paper, we show that the quantum communication complexity of this function is polynomially equivalent to its classical communication complexity. The high-level idea behind our proof is arguably the simplest so far for such an exponential separation between information and communication, driven by a sequence of round-elimination arguments, allowing us to simplify further the approach of Rao and Sinha.

As another application of the techniques that we develop, a simple proof for an optimal trade-off between Alice’s and Bob’s communication is given, even when allowing pre-shared entanglement, while computing the related Greater-Than function on n bits: say Bob communicates at most b bits, then Alice must send n/2^{O(b)} bits to Bob. We also present a classical protocol achieving this bound.

Compression of Quantum Multi-prover Interactive Proofs
Zhengfeng Ji
(University of Technology Sydney, Australia)
We present a protocol that transforms any quantum multi-prover interactive proof into a nonlocal game in which questions consist of a logarithmic number of bits and answers of a constant number of bits. As a corollary, it follows that the promise problem corresponding to the approximation of the nonlocal value to inverse polynomial accuracy is complete for QMIP*, and therefore NEXP-hard. This establishes that nonlocal games are provably harder than classical games without any complexity-theoretic assumptions. Our result also indicates that gap amplification for nonlocal games may be impossible in general and provides negative evidence for the feasibility of the gap amplification approach to the multi-prover variant of the quantum PCP conjecture.
Hardness Amplification for Entangled Games via Anchoring
Mohammad Bavarian, Thomas Vidick, and Henry Yuen
(Massachusetts Institute of Technology, USA; California Institute of Technology, USA; University of California at Berkeley, USA)

We study the parallel repetition of one-round games involving players that can use quantum entanglement. A major open question in this area is whether parallel repetition reduces the entangled value of a game at an exponential rate — in other words, does an analogue of Raz’s parallel repetition theorem hold for games with players sharing quantum entanglement? Previous results only apply to special classes of games.

We introduce a class of games we call anchored. We then introduce a simple transformation on games called anchoring, inspired in part by the Feige-Kilian transformation, that turns any (multiplayer) game into an anchored game. Unlike the Feige-Kilian transformation, our anchoring transformation is completeness preserving.

We prove an exponential-decay parallel repetition theorem for anchored games that involve any number of entangled players. We also prove a threshold version of our parallel repetition theorem for anchored games.

Together, our parallel repetition theorems and anchoring transformation provide the first hardness amplification techniques for general entangled games. We give an application to the games version of the Quantum PCP Conjecture.

The Computational Complexity of Ball Permutations
Scott Aaronson, Adam Bouland, Greg Kuperberg, and Saeed Mehraban
(University of Texas at Austin, USA; Massachusetts Institute of Technology, USA; University of California at Davis, USA)
We define several models of computation based on permuting distinguishable particles (which we call balls) and characterize their computational complexity. In the quantum setting, we use the representation theory of the symmetric group to find variants of this model which are intermediate between BPP and DQC1 (the class of problems solvable with one clean qubit) and between DQC1 and BQP. Furthermore, we consider a restricted version of this model based on an exactly solvable scattering problem of particles moving on a line. Despite the simplicity of this model from the perspective of mathematical physics, we show that if we allow intermediate destructive measurements and specific input states, then the model cannot be efficiently simulated classically up to multiplicative error unless the polynomial hierarchy collapses. Finally, we define a classical version of this model in which one can probabilistically permute balls. We find this yields a complexity class which is intermediate between L and BPP, and that a nondeterministic version of this model is NP-complete.
The Non-cooperative Tile Assembly Model Is Not Intrinsically Universal or Capable of Bounded Turing Machine Simulation
Pierre-Étienne Meunier and Damien Woods
(Inria, France)

The field of algorithmic self-assembly is concerned with the computational and expressive power of nanoscale self-assembling molecular systems. In the well-studied cooperative, or temperature 2, abstract tile assembly model it is known that there is a tile set to simulate any Turing machine and an intrinsically universal tile set that simulates the shapes and dynamics of any instance of the model, up to spatial rescaling. It has been an open question as to whether the seemingly simpler noncooperative, or temperature 1, model is capable of such behaviour. Here we show that this is not the case by showing that there is no tile set in the noncooperative model that is intrinsically universal, nor one capable of time-bounded Turing machine simulation within a bounded region of the plane.

Although the noncooperative model intuitively seems to lack the complexity and power of the cooperative model it has been exceedingly hard to prove this. One reason is that there have been few tools to analyse the structure of complicated paths in the plane. This paper provides a number of such tools. A second reason is that almost every obvious and small generalisation to the model (e.g. allowing error, 3D, non-square tiles, signals/wires on tiles, tiles that repel each other, parallel synchronous growth) endows it with great computational, and sometimes simulation, power. Our main results show that all of these generalisations provably increase computational and/or simulation power. Our results hold for both deterministic and nondeterministic noncooperative systems. Our first main result stands in stark contrast with the fact that for both the cooperative tile assembly model, and for 3D noncooperative tile assembly, there are respective intrinsically universal tilesets. Our second main result gives a new technique (reduction to simulation) for proving negative results about computation in tile assembly.


Session 4B

Uniform Sampling through the Lovász Local Lemma
Heng Guo, Mark Jerrum, and Jingcheng Liu
(Queen Mary University of London, UK; University of California at Berkeley, USA)
We propose a new algorithmic framework, called “partial rejection sampling”, to draw samples exactly from a product distribution, conditioned on none of a number of bad events occurring. Our framework builds (perhaps surprising) new connections between the variable framework of the Lovász Local Lemma and some classical sampling algorithms, such as the “cycle-popping” algorithm for rooted spanning trees by Wilson. Among other applications, we discover new algorithms to sample satisfying assignments of k-CNF formulas with bounded variable occurrences.
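A minimal Python rendering of the partial rejection idea for a small CNF (a sketch only: exact uniformity of the output holds under extremal/LLL-type conditions discussed in the paper, which this toy instance does not verify):

    import random

    def partial_rejection(n_vars, clauses):
        # clauses: lists of signed literals, e.g. [1, -2, 3] means x1 or not-x2 or x3
        x = [random.randint(0, 1) for _ in range(n_vars + 1)]   # x[0] unused
        def violated(cl):
            return all((x[l] == 0) if l > 0 else (x[-l] == 1) for l in cl)
        while True:
            bad = [cl for cl in clauses if violated(cl)]
            if not bad:
                return x[1:]
            for v in {abs(l) for cl in bad for l in cl}:
                x[v] = random.randint(0, 1)    # resample only variables in bad clauses

    random.seed(9)
    print(partial_rejection(4, [[1, 2, 3], [-1, -2, 4], [2, -3, -4]]))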
Approximate Counting, the Lovász Local Lemma, and Inference in Graphical Models
Ankur Moitra
(Massachusetts Institute of Technology, USA)

In this paper we introduce a new approach for approximately counting in bounded degree systems with higher-order constraints. Our main result is an algorithm to approximately count the number of solutions to a CNF formula Φ when the width is logarithmic in the maximum degree. This closes an exponential gap between the known upper and lower bounds.

Moreover our algorithm extends straightforwardly to approximate sampling, which shows that under Lovász Local Lemma-like conditions it is not only possible to find a satisfying assignment, it is also possible to generate one approximately uniformly at random from the set of all satisfying assignments. Our approach is a significant departure from earlier techniques in approximate counting, and is based on a framework to bootstrap an oracle for computing marginal probabilities on individual variables. Finally, we give an application of our results to show that it is algorithmically possible to sample from the posterior distribution in an interesting class of graphical models.

Real Stable Polynomials and Matroids: Optimization and Counting
Damian Straszak and Nisheeth K. Vishnoi
(EPFL, Switzerland)

Several fundamental optimization and counting problems arising in computer science, mathematics, and physics can be reduced to one of the following computational tasks involving polynomials and set systems: given oracle access to an m-variate real polynomial g and to a family B of (multi-)subsets of [m], (1) compute the sum of coefficients of monomials in g corresponding to all the sets that appear in B, or (2) find S ∈ B such that the monomial in g corresponding to S has the largest coefficient in g. Special cases of these problems, such as computing permanents and mixed discriminants, sampling from determinantal point processes, and maximizing sub-determinants with combinatorial constraints, have been topics of much recent interest in theoretical computer science.

In this paper we present a general convex programming framework geared to solve both of these problems. Subsequently, we show that roughly, when g is a real stable polynomial with non-negative coefficients and B is a matroid, the integrality gap of our convex relaxation is finite and depends only on m (and not on the coefficients of g).

Prior to this work, such results were known only in important but sporadic cases that relied heavily on the structure of either g or B; it was not even a priori clear if one could formulate a convex relaxation with a finite integrality gap beyond these special cases. Two notable examples are a result by Gurvits for real stable polynomials g when B contains one element, and a result by Nikolov and Singh for a family of multi-linear real stable polynomials when B is the partition matroid. This work, which encapsulates almost all interesting cases of g and B, benefits from both: it is inspired by the latter in coming up with the right convex programming relaxation and by the former in deriving the integrality gap. However, proving our results requires extensions of both; in that process we come up with new notions and connections between real stable polynomials and matroids which might be of independent interest.

A Generalization of Permanent Inequalities and Applications in Counting and Optimization
Nima Anari and Shayan Oveis Gharan
(Stanford University, USA; University of Washington, USA)

A polynomial p ∈ ℝ[z1,…,zn] is real stable if it has no roots in the upper-half complex plane. Gurvits’s permanent inequality gives a lower bound on the coefficient of the z1 z2 ⋯ zn monomial of a real stable polynomial p with nonnegative coefficients. This fundamental inequality has been used to attack several counting and optimization problems. Here, we study a more general question: given a stable multilinear polynomial p with nonnegative coefficients and a set of monomials S, we show that if the polynomial obtained by summing up all monomials in S is real stable, then we can lower bound the sum of coefficients of monomials of p that are in S. We also prove generalizations of this theorem to (real stable) polynomials that are not multilinear. We use our theorem to give a new proof of Schrijver’s inequality on the number of perfect matchings of a regular bipartite graph, generalize a recent result of Nikolov and Singh, and give deterministic polynomial time approximation algorithms for several counting problems.
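One concrete consequence of Gurvits's inequality is the van der Waerden bound perm(A) ≥ n!/n^n for doubly stochastic A, which is easy to probe numerically (an editorial sketch; Sinkhorn scaling is used here only to manufacture a doubly stochastic matrix):

    import numpy as np
    from itertools import permutations
    from math import factorial

    def perm(A):
        # Brute-force permanent; fine for tiny n.
        n = A.shape[0]
        return sum(np.prod([A[i, s[i]] for i in range(n)])
                   for s in permutations(range(n)))

    rng = np.random.default_rng(10)
    n = 4
    M = rng.random((n, n))
    for _ in range(500):                     # Sinkhorn: alternately normalize
        M /= M.sum(axis=1, keepdims=True)    # rows...
        M /= M.sum(axis=0, keepdims=True)    # ...and columns
    print(perm(M), ">=", factorial(n) / n ** n)   # n!/n^n = 0.09375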

Algorithmic and Optimization Aspects of Brascamp-Lieb Inequalities, via Operator Scaling
Ankit Garg, Leonid Gurvits, Rafael Oliveira, and Avi Wigderson
(Microsoft Research, USA; City College of New York, USA; Princeton University, USA; IAS, USA)
The celebrated Brascamp-Lieb (BL) inequalities [BL76, Lie90], and their reverse form of Barthe [Bar98], are an important mathematical tool, unifying and generalizing numerous inequalities in analysis, convex geometry, and information theory, with many used in computer science. While their structural theory is very well understood, far less is known about computing their main parameters (which we define below). Prior to this work, the best known algorithms for any of these optimization tasks required at least exponential time. In this work, we give polynomial time algorithms to compute: (1) feasibility of a BL-datum, (2) the optimal BL-constant, and (3) a weak separation oracle for BL-polytopes.

What is particularly exciting about this progress, beyond the better understanding of BL-inequalities, is that the objects above naturally encode rich families of optimization problems which had no prior efficient algorithms. In particular, the BL-constants (which we efficiently compute) are solutions to non-convex optimization problems, and the BL-polytopes (for which we provide efficient membership and separation oracles) are linear programs with exponentially many facets. Thus we hope that new combinatorial optimization problems can be solved via reductions to the ones above, and we make modest initial steps in exploring this possibility.

Our algorithms are obtained by a simple efficient reduction of a given BL-datum to an instance of the Operator Scaling problem defined by [Gur04]. To obtain the results above, we utilize the two (very recent and different) algorithms for the operator scaling problem [GGOW16, IQS15a]. Our reduction implies algorithmic versions of many of the known structural results on BL-inequalities, and in some cases provides proofs that are different or simpler than existing ones. Further, the analytic properties of the [GGOW16] algorithm provide new, effective bounds on the magnitude and continuity of BL-constants, with applications to non-linear versions of BL-inequalities; prior work relied on compactness, and thus provided no bounds.

On a higher level, our application of the operator scaling algorithm to BL-inequalities further connects analysis and optimization with the diverse mathematical areas used so far to motivate and solve the operator scaling problem, which include commutative invariant theory, non-commutative algebra, computational complexity, and quantum information theory.
Publisher's Version Article Search

Session 4C

Almost-Linear-Time Algorithms for Markov Chains and New Spectral Primitives for Directed Graphs
Michael B. Cohen, Jonathan Kelner, John Peebles, Richard Peng, Anup B. Rao, Aaron Sidford, and Adrian Vladu
(Massachusetts Institute of Technology, USA; Georgia Institute of Technology, USA; Stanford University, USA)

In this paper, we begin to address the longstanding algorithmic gap between general and reversible Markov chains. We develop directed analogues of several spectral graph-theoretic tools that had previously been available only in the undirected setting, and for which it was not clear that directed versions even existed. In particular, we provide a notion of approximation for directed graphs, prove that sparsifiers under this notion always exist, and show how to construct them in almost linear time. Using this notion of approximation, we design the first almost-linear-time directed Laplacian system solver, and, by leveraging the recent framework of [Cohen-Kelner-Peebles-Peng-Sidford-Vladu, FOCS’16], we also obtain almost-linear-time algorithms for computing the stationary distribution of a Markov chain, computing expected commute times in a directed graph, and more. For each problem, our algorithms improve the previous best running times of O((nm^{3/4} + n^{2/3}m) · log^{O(1)}(nκε^{−1})) to O((m + n·2^{O(√(log n · log log n))}) · log^{O(1)}(nκε^{−1})), where n is the number of vertices in the graph, m is the number of edges, κ is a natural condition number associated with the problem, and ε is the desired accuracy. We hope these results open the door for further studies into directed spectral graph theory, and that they will serve as a stepping stone for designing a new generation of fast algorithms for directed graphs.
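For contrast with the almost-linear-time solvers above, here is the dense baseline (ours) for one of the target problems: computing the stationary distribution by solving the Laplacian-type system (I − P^T)π = 0 with a normalization row.

    import numpy as np

    def stationary(P):
        # pi^T P = pi^T with sum(pi) = 1; replace one equation of the
        # singular system (I - P^T) pi = 0 by the normalization.
        n = P.shape[0]
        A = np.eye(n) - P.T
        A[-1, :] = 1.0
        b = np.zeros(n); b[-1] = 1.0
        return np.linalg.solve(A, b)

    P = np.array([[0.1, 0.9, 0.0],
                  [0.0, 0.2, 0.8],
                  [0.7, 0.0, 0.3]])
    pi = stationary(P)
    print(pi, pi @ P)    # pi @ P equals pi up to rounding

This takes cubic time on dense instances; the point of the paper is almost-linear time in the number of edges.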

Publisher's Version Article Search
Surviving in Directed Graphs: A Quasi-Polynomial-Time Polylogarithmic Approximation for Two-Connected Directed Steiner Tree
Fabrizio Grandoni and Bundit Laekhanukit
(IDSIA, Switzerland; University of Lugano, Switzerland; Weizmann Institute of Science, Israel)

Real-world networks are often prone to failures. A reliable network needs to cope with this situation and must provide a backup communication channel. This motivates the study of survivable network design, which has been a focus of research for a few decades. To date, survivable network design problems on undirected graphs are well-understood. For example, there is a 2-approximation in the case of edge failures [Jain, FOCS’98/Combinatorica’01]. The problems on directed graphs, in contrast, have seen very little progress. Most techniques for the undirected case, like primal-dual and iterative rounding methods, do not seem to extend to the directed case. Almost no non-trivial approximation algorithm is known even for a simple case where we wish to design a network that tolerates a single failure.

In this paper, we study a survivable network design problem on directed graphs, 2-Connected Directed Steiner Tree (2-DST): given an n-vertex weighted directed graph, a root r, and a set of h terminals S, find a min-cost subgraph H that has two edge/vertex disjoint paths from r to any t ∈ S. 2-DST is a natural generalization of the classical Directed Steiner Tree problem (DST), where we have the additional requirement that the network must tolerate one failure. No non-trivial approximation is known for 2-DST. This was left as an open problem by Feldman et al. [SODA’09; JCSS] and has since been studied by Cheriyan et al. [SODA’12; TALG] and Laekhanukit [SODA’14]. However, no positive result was known except for the special case of a D-shallow instance [Laekhanukit, ICALP’16].

We present an O(D^3 log D · h^{2/D} · log n)-approximation algorithm for 2-DST that runs in n^{O(D)} time, for any D ∈ [log_2 h]. This implies a polynomial-time O(h^ε log n) approximation for any constant ε > 0, and a poly-logarithmic approximation running in quasi-polynomial time. We remark that this is essentially the best known even for the classical DST, and the latter problem is O(log^{2−ε} n)-hard to approximate [Halperin and Krauthgamer, STOC’03]. As a by-product, we obtain an algorithm with the same approximation guarantee for the 2-Connected Directed Steiner Subgraph problem, where the goal is to find a min-cost subgraph such that every pair of terminals is 2-edge/vertex connected.

Our approximation algorithm is based on a careful combination of several techniques. In more detail, we decompose an optimal solution into two (possibly not edge disjoint) divergent trees that induce two edge disjoint paths from the root to any given terminal. These divergent trees are then embedded into a shallow tree by means of Zelikovsky’s height reduction theorem. On the latter tree we solve a 2-Connected Group Steiner Tree problem and then map this solution back to the original graph. Crucially, our tree embedding is achieved via a probabilistic mapping guided by an LP: this is the main technical novelty of our approach, and might be useful for future work.

Publisher's Version Article Search
Local Max-Cut in Smoothed Polynomial Time
Omer Angel, Sébastien Bubeck, Yuval Peres, and Fan Wei
(University of British Columbia, Canada; Microsoft Research, USA; Stanford University, USA)

In 1988, Johnson, Papadimitriou and Yannakakis wrote that “Practically all the empirical evidence would lead us to conclude that finding locally optimal solutions is much easier than solving NP-hard problems.” Since then the empirical evidence has continued to amass, but formal proofs of this phenomenon have remained elusive. A canonical (and indeed complete) example is the local max-cut problem, for which no polynomial time method is known. In a breakthrough paper, Etscheid and Röglin proved that the smoothed complexity of local max-cut is quasi-polynomial, i.e., if arbitrary bounded weights are randomly perturbed, a local maximum can be found in φ·n^{O(log n)} steps, where φ is an upper bound on the density of the random edge weights. In this paper we prove smoothed polynomial complexity for local max-cut, thus confirming that finding local optima for max-cut is much easier than solving it.
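A self-contained sketch (ours) of the FLIP dynamics in the smoothed setting: unit weights perturbed by noise of density at most φ, with single-vertex flips applied until no move improves the cut.

    import random

    def flip_local_max_cut(n=30, p_edge=0.5, phi=100.0, seed=1):
        rng = random.Random(seed)
        w = {(i, j): 1.0 + rng.uniform(0, 1.0 / phi)   # perturbed weights
             for i in range(n) for j in range(i + 1, n)
             if rng.random() < p_edge}
        side = [rng.choice([-1, 1]) for _ in range(n)]

        def gain(v):
            # Change in cut value if v switches sides.
            return sum((1 if side[a] == side[b] else -1) * wt
                       for (a, b), wt in w.items() if v in (a, b))

        steps, improved = 0, True
        while improved:
            improved = False
            for v in range(n):
                if gain(v) > 0:
                    side[v] = -side[v]
                    steps += 1
                    improved = True
        return steps

    print("FLIP converged after", flip_local_max_cut(), "improving moves")

The theorem bounds the expected number of such improving moves by a polynomial in n and φ.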

Publisher's Version Article Search
Algorithms for Stable and Perturbation-Resilient Problems
Haris Angelidakis, Konstantin Makarychev, and Yury Makarychev
(Toyota Technological Institute at Chicago, USA; Northwestern University, USA)

We study the notion of stability and perturbation resilience introduced by Bilu and Linial (2010) and Awasthi, Blum, and Sheffet (2012). A combinatorial optimization problem is α-stable or α-perturbation-resilient if the optimal solution does not change when we perturb all parameters of the problem by a factor of at most α. In this paper, we give improved algorithms for stable instances of various clustering and combinatorial optimization problems. We also prove several hardness results.

We first give an exact algorithm for 2-perturbation resilient instances of clustering problems with natural center-based objectives. The class of clustering problems with natural center-based objectives includes such problems as k-means, k-median, and k-center. Our result improves upon the result of Balcan and Liang (2016), who gave an algorithm for clustering (1+√2 ≈ 2.41)-perturbation-resilient instances. Our result is tight in the sense that no polynomial-time algorithm can solve (2−ε)-perturbation resilient instances of k-center unless NP = RP, as was shown by Balcan, Haghtalab, and White (2016).

We then give an exact algorithm for (2−2/k)-stable instances of Minimum Multiway Cut with k terminals, improving the previous result of Makarychev, Makarychev, and Vijayaraghavan (2014), who gave an algorithm for 4-stable instances. We also give an algorithm for (2−2/k+δ)-weakly stable instances of Minimum Multiway Cut.

Finally, we show that there are no robust polynomial-time algorithms for n^{1−ε}-stable instances of Set Cover, Minimum Vertex Cover, and Min 2-Horn Deletion (unless P = NP).
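
A toy numerical check (ours) of the perturbation-resilience definition for k-center: on a well-separated instance, the optimal clustering should be invariant under every factor-2 perturbation of the distances.

    import itertools, random

    pts = [0.0, 1.0, 10.0, 11.0, 20.0, 21.0]    # three well-separated pairs
    idx = range(len(pts))

    def partition(dist, centers):
        # Each point joins its closest center (ties broken by index).
        return frozenset(
            frozenset(p for p in idx
                      if min(centers, key=lambda c: (dist[p][c], c)) == c0)
            for c0 in centers)

    def opt_partition(dist, k=3):
        best = min(itertools.combinations(idx, k),
                   key=lambda cs: max(min(dist[p][c] for c in cs) for p in idx))
        return partition(dist, best)

    base = [[abs(a - b) for b in pts] for a in pts]
    ref = opt_partition(base)
    rng = random.Random(0)
    for _ in range(200):
        pert = [row[:] for row in base]
        for i in idx:                      # inflate each distance by <= 2x
            for j in idx:
                if i < j:
                    f = 1 + rng.random()
                    pert[i][j] = pert[j][i] = base[i][j] * f
        assert opt_partition(pert) == ref
    print("optimal clustering unchanged under 200 random 2-perturbations")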

Publisher's Version Article Search
Area-Convexity, ℓ∞ Regularization, and Undirected Multicommodity Flow
Jonah Sherman
(University of California at Berkeley, USA)

We show the strong-convexity assumption of regularization-based methods for solving bilinear saddle point problems may be relaxed to a weaker notion of area-convexity with respect to an alternating bilinear form. This allows bypassing the infamous ℓ∞ barrier for strongly convex regularizers that has stalled progress on a number of algorithmic problems.

Applying area-convex regularization, we present a nearly-linear time approximation algorithm for solving matrix inequality systems AX ≤ B over right-stochastic matrices X. By combining that algorithm with existing work on preconditioning maximum-flow, we obtain a nearly-linear time approximation algorithm for maximum concurrent flow in undirected graphs: given an undirected, capacitated graph with m edges and k demand vectors, the algorithm takes Õ(mkε^{−1}) time and outputs k flows routing the specified demands with total congestion at most (1+ε) times optimal.

Publisher's Version Article Search

Session 5A

Pseudorandomness of Ring-LWE for Any Ring and Modulus
Chris Peikert, Oded Regev, and Noah Stephens-Davidowitz
(University of Michigan, USA; New York University, USA)
We give a polynomial-time quantum reduction from worst-case (ideal) lattice problems directly to decision (Ring-)LWE. This extends to decision all the worst-case hardness results that were previously known for the search version, for the same or even better parameters and with no algebraic restrictions on the modulus or number field. Indeed, our reduction is the first that works for decision Ring-LWE with any number field and any modulus.
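
A toy instantiation (ours; parameters far too small to be secure) of the decision Ring-LWE distribution in R_q = Z_q[x]/(x^N + 1), the object whose pseudorandomness the reduction establishes:

    import random

    N, Q = 8, 97      # toy ring Z_Q[x]/(x^N + 1)

    def ring_mul(a, b):
        # Multiply two coefficient vectors modulo (x^N + 1, Q): x^N = -1.
        c = [0] * N
        for i, ai in enumerate(a):
            for j, bj in enumerate(b):
                k = i + j
                if k < N:
                    c[k] = (c[k] + ai * bj) % Q
                else:
                    c[k - N] = (c[k - N] - ai * bj) % Q
        return c

    rng = random.Random(0)
    s = [rng.randrange(-1, 2) for _ in range(N)]          # small secret
    def rlwe_sample():
        a = [rng.randrange(Q) for _ in range(N)]          # uniform a
        e = [rng.randrange(-2, 3) for _ in range(N)]      # small error
        b = [(x + y) % Q for x, y in zip(ring_mul(a, s), e)]
        return a, b    # decision problem: distinguish (a, b) from uniform

    print(rlwe_sample())
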
Publisher's Version Article Search
Non-interactive Delegation and Batch NP Verification from Standard Computational Assumptions
Zvika Brakerski, Justin Holmgren, and Yael Kalai
(Weizmann Institute of Science, Israel; Massachusetts Institute of Technology, USA; Microsoft Research, USA)
We present an adaptive and non-interactive protocol for verifying arbitrary efficient computations in fixed polynomial time. Our protocol is computationally sound and can be based on any computational PIR scheme, which in turn can be based on standard polynomial-time cryptographic assumptions (e.g., the worst-case hardness of polynomial-factor approximation of short-vector lattice problems). In our protocol, the verifier sets up a public key ahead of time, and this key can be used by any prover to prove arbitrary statements by simply sending a proof to the verifier. Verification is done using a secret verification key, and soundness relies on this key not being known to the prover. Our protocol further allows proving statements about computations of arbitrary RAM machines. Previous works either relied on knowledge assumptions, or could only offer non-adaptive two-message protocols (where the first message could not be re-used), and required either obfuscation-based assumptions or super-polynomial hardness assumptions. We show that our techniques can also be applied to construct a new type of (non-adaptive) 2-message argument for batch NP-statements. Specifically, we can simultaneously prove (with computational soundness) the membership of multiple instances in a given NP language, with communication complexity proportional to the length of a single witness.
Publisher's Version Article Search
Average-Case Fine-Grained Hardness
Marshall Ball, Alon Rosen, Manuel Sabin, and Prashant Nalini Vasudevan
(Columbia University, USA; IDC Herzliya, Israel; University of California at Berkeley, USA; Massachusetts Institute of Technology, USA)

We present functions that can be computed in some fixed polynomial time but are hard on average for any algorithm that runs in slightly smaller time, assuming widely-conjectured worst-case hardness for problems from the study of fine-grained complexity. Unconditional constructions of such functions are known (Goldmann et al., IPL ’94), but these are canonical functions that have not found further use, while our functions are closely related to well-studied problems and have considerable algebraic structure.

Based on the average-case hardness and structural properties of our functions, we outline the construction of a Proof of Work scheme and discuss possible approaches to constructing fine-grained One-Way Functions. We also show how our reductions make conjectures regarding the worst-case hardness of the problems we reduce from (and consequently the Strong Exponential Time Hypothesis) heuristically falsifiable in a sense similar to that of (Naor, CRYPTO ’03).

We prove our hardness results in each case by showing fine-grained reductions from solving one of three problems – namely, Orthogonal Vectors (OV), 3SUM, and All-Pairs Shortest Paths (APSP) – in the worst case to computing our function correctly on a uniformly random input. The conjectured hardness of OV and 3SUM then gives us functions that require n^{2−o(1)} time to compute on average, and that of APSP gives us a function that requires n^{3−o(1)} time. Using the same techniques we also obtain a conditional average-case time hierarchy of functions.
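
For reference, the Orthogonal Vectors problem reduced from is simple to state; below is a brute-force O(n²·d) solver (ours), which the OV conjecture asserts cannot be improved to n^{2−ε} time for d = polylog(n):

    def orthogonal_vectors(U, V):
        return any(all(u_i * v_i == 0 for u_i, v_i in zip(u, v))
                   for u in U for v in V)

    U = [(1, 0, 1), (0, 1, 1)]
    V = [(1, 1, 0), (0, 1, 0)]
    print(orthogonal_vectors(U, V))   # True: (1,0,1) is orthogonal to (0,1,0)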

Publisher's Version Article Search
Equivocating Yao: Constant-Round Adaptively Secure Multiparty Computation in the Plain Model
Ran Canetti, Oxana Poburinnaya, and Muthuramakrishnan Venkitasubramaniam
(Boston University, USA; Tel Aviv University, Israel; University of Rochester, USA)
Yao's circuit garbling scheme is one of the basic building blocks of cryptographic protocol design. Originally designed to enable two-message, two-party secure computation, the scheme has been extended in many ways and has innumerable applications. Still, a basic question has remained open throughout the years: Can the scheme be extended to guarantee security in the face of an adversary that corrupts both parties, adaptively, as the computation proceeds? We provide a positive answer to this question. We define a new type of encryption, called functionally equivocal encryption (FEE), and show that when Yao's scheme is implemented with an FEE as the underlying encryption mechanism, it becomes secure against such adaptive adversaries. We then show how to implement FEE from any one-way function. Combining our scheme with non-committing encryption, we obtain the first two-message, two-party computation protocol, and the first constant-round multiparty computation protocol, in the plain model, that are secure against semi-honest adversaries who can adaptively corrupt all parties. A number of extensions and applications are described within.
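
For readers unfamiliar with the underlying object, here is a textbook-style garbling of a single AND gate (ours; hash-based, omitting the point-and-permute and row-permutation details, and not the paper's FEE-based variant):

    import os, hashlib

    def H(*keys):
        return hashlib.sha256(b"".join(keys)).digest()

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    # One random label per wire value; the garbled gate is four ciphertexts.
    wa = {0: os.urandom(32), 1: os.urandom(32)}
    wb = {0: os.urandom(32), 1: os.urandom(32)}
    wc = {0: os.urandom(32), 1: os.urandom(32)}
    table = [xor(H(wa[x], wb[y]), wc[x & y]) for x in (0, 1) for y in (0, 1)]

    # Evaluating with input labels only (here x = 1, y = 1).
    la, lb = wa[1], wb[1]
    assert wc[1] in [xor(row, H(la, lb)) for row in table]
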
Publisher's Version Article Search Info

Session 5B

Removal Lemmas with Polynomial Bounds
Lior Gishboliner and Asaf Shapira
(Tel Aviv University, Israel)
We give new necessary and sufficient criteria guaranteeing that a hereditary graph property can be tested with a polynomial query complexity. Although both are simple combinatorial criteria, they imply almost all prior positive and negative results of this type, as well as many new ones. One striking application of our results is that every semi-algebraic graph property (e.g., being an interval graph, a unit-disc graph, etc.) can be tested with a polynomial query complexity. This confirms a conjecture of Alon. The proofs combine probabilistic ideas with a novel application of a conditional regularity lemma for matrices, due to Alon, Fischer and Newman.
Publisher's Version Article Search
Beyond Talagrand Functions: New Lower Bounds for Testing Monotonicity and Unateness
Xi Chen, Erik Waingarten, and Jinyu Xie
(Columbia University, USA)
Publisher's Version Article Search
Online and Dynamic Algorithms for Set Cover
Anupam Gupta, Ravishankar Krishnaswamy, Amit Kumar, and Debmalya Panigrahi
(Carnegie Mellon University, USA; Microsoft Research, India; IIT Delhi, India; Duke University, USA)
Publisher's Version Article Search
Online Service with Delay
Yossi Azar, Arun Ganesh, Rong Ge, and Debmalya Panigrahi
(Tel Aviv University, Israel; Duke University, USA)
Publisher's Version Article Search

Session 5C

The Integrality Gap of the Goemans-Linial SDP Relaxation for Sparsest Cut Is at Least a Constant Multiple of √log n
Assaf Naor and Robert Young
(Princeton University, USA; New York University, USA)

We prove that the integrality gap of the Goemans–Linial semidefinite programming relaxation for the Sparsest Cut Problem is Ω(√(log n)) on inputs with n vertices, thus matching the previously best known upper bound (log n)^{1/2+o(1)} up to lower-order factors. This statement is a consequence of the following new isoperimetric-type inequality. Consider the 8-regular graph whose vertex set is the 5-dimensional integer grid ℤ^5 and where each vertex (a,b,c,d,e) ∈ ℤ^5 is connected to the 8 vertices (a±1,b,c,d,e), (a,b±1,c,d,e), (a,b,c±1,d,e±a), (a,b,c,d±1,e±b). This graph is known as the Cayley graph of the 5-dimensional discrete Heisenberg group. Given Ω ⊂ ℤ^5, denote the size of its edge boundary in this graph (a.k.a. the horizontal perimeter of Ω) by |∂_h Ω|. For t ∈ ℕ, denote by |∂_v^t Ω| the number of (a,b,c,d,e) ∈ ℤ^5 such that exactly one of the two vectors (a,b,c,d,e), (a,b,c,d,e+t) is in Ω. The vertical perimeter of Ω is defined to be |∂_v Ω| = √(Σ_{t=1}^∞ |∂_v^t Ω|²/t²). We show that every subset Ω ⊂ ℤ^5 satisfies |∂_v Ω| = O(|∂_h Ω|). This vertical-versus-horizontal isoperimetric inequality yields the above-stated integrality gap for Sparsest Cut and answers several geometric and analytic questions of independent interest.

The theorem stated above is the culmination of a program whose aim is to understand the performance of the Goemans–Linial semidefinite program through the embeddability properties of Heisenberg groups. These investigations have mathematical significance even beyond their established relevance to approximation algorithms and combinatorial optimization. In particular they contribute to a range of mathematical disciplines including functional analysis, geometric group theory, harmonic analysis, sub-Riemannian geometry, geometric measure theory, ergodic theory, group representations, and metric differentiation. This article builds on the above cited works, with the “twist” that while those works were equally valid for any finite dimensional Heisenberg group, our result holds for the Heisenberg group of dimension 5 (or higher) but fails for the 3-dimensional Heisenberg group. This insight leads to our core contribution, which is a deduction of an endpoint L_1-boundedness of a certain singular integral on ℝ^5 from the (local) L_2-boundedness of the corresponding singular integral on ℝ^3. To do this, we devise a corona-type decomposition of subsets of a Heisenberg group, in the spirit of the construction that David and Semmes performed in ℝ^n, but with two main conceptual differences (in addition to more technical differences that arise from the peculiarities of the geometry of the Heisenberg group). Firstly, the “atoms” of our decomposition are perturbations of intrinsic Lipschitz graphs in the sense of Franchi, Serapioni, and Serra Cassano (plus the requisite “wild” regions that satisfy a Carleson packing condition). Secondly, we control the local overlap of our corona decomposition by using quantitative monotonicity rather than Jones-type β-numbers.
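
The graph and the two perimeters defined in the first paragraph are concrete enough to compute; a small sketch (ours), with the vertical sum truncated for illustration:

    from itertools import product

    def neighbors(v):
        # The 8 Cayley-graph neighbors of v = (a, b, c, d, e) in Z^5.
        a, b, c, d, e = v
        return [(a + 1, b, c, d, e), (a - 1, b, c, d, e),
                (a, b + 1, c, d, e), (a, b - 1, c, d, e),
                (a, b, c + 1, d, e + a), (a, b, c - 1, d, e - a),
                (a, b, c, d + 1, e + b), (a, b, c, d - 1, e - b)]

    Omega = set(product(range(4), repeat=5))     # a small box in Z^5
    horiz = sum(1 for v in Omega for u in neighbors(v) if u not in Omega)

    def dv(t):
        # Points v where exactly one of v, v + t*e5 lies in Omega.
        cands = Omega | {(a, b, c, d, e - t) for (a, b, c, d, e) in Omega}
        return sum(((a, b, c, d, e) in Omega) != ((a, b, c, d, e + t) in Omega)
                   for (a, b, c, d, e) in cands)

    vert = sum(dv(t) ** 2 / t ** 2 for t in range(1, 9)) ** 0.5   # truncated
    print(horiz, vert)        # the inequality: vert = O(horiz)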

Publisher's Version Article Search
On Independent Sets, 2-to-2 Games, and Grassmann Graphs
Subhash Khot, Dor Minzer, and Muli Safra
(New York University, USA; Tel Aviv University, Israel)
Publisher's Version Article Search
Approximating Rectangles by Juntas and Weakly-Exponential Lower Bounds for LP Relaxations of CSPs
Pravesh K. Kothari, Raghu Meka, and Prasad Raghavendra
(Princeton University, USA; IAS, USA; University of California at Los Angeles, USA; University of California at Berkeley, USA)
Publisher's Version Article Search
How Well Do Local Algorithms Solve Semidefinite Programs?
Zhou Fan and Andrea Montanari
(Stanford University, USA)

Several probabilistic models from high-dimensional statistics and machine learning reveal an intriguing and yet poorly understood dichotomy. Either simple local algorithms succeed in estimating the object of interest, or even sophisticated semi-definite programming (SDP) relaxations fail. In order to explore this phenomenon, we study a classical SDP relaxation of the minimum graph bisection problem, when applied to Erdős–Rényi random graphs with bounded average degree d > 1, and obtain several types of results. First, we use a dual witness construction (using the so-called non-backtracking matrix of the graph) to upper bound the SDP value. Second, we prove that a simple local algorithm approximately solves the SDP to within a factor 2d²/(2d² + d − 1) of the upper bound. In particular, the local algorithm is at most 8/9 suboptimal, and 1 + O(d^{−1}) suboptimal for large degree.

We then analyze a more sophisticated local algorithm, which aggregates information according to the harmonic measure on the limiting Galton-Watson (GW) tree. The resulting lower bound is expressed in terms of the conductance of the GW tree and matches the empirically determined SDP values on large-scale Erdős–Rényi graphs surprisingly well.

We finally consider the planted partition model. In this case, purely local algorithms are known to fail, but they do succeed if a small amount of side information is available. Our results imply quantitative bounds on the threshold for partial recovery using SDP in this model.

Publisher's Version Article Search

Session 6A

A Polynomial Restriction Lemma with Applications
Valentine Kabanets, Daniel M. Kane, and Zhenjian Lu
(Simon Fraser University, Canada; University of California at San Diego, USA)

A polynomial threshold function (PTF) of degree d is a boolean function of the form f = sgn(p), where p is a degree-d polynomial, and sgn is the sign function. The main result of the paper is an almost optimal bound on the probability that a random restriction of a PTF is not close to a constant function, where a boolean function g is called δ-close to constant if, for some v ∈ {1,−1}, we have g(x) = v for all but at most a δ fraction of inputs. We show for every PTF f of degree d ≥ 1, and parameters 0 < δ, r ≤ 1/16, that

Pr_{ρ∼R_r}[ f_ρ is not δ-close to constant ] ≤ √r · (log r^{−1} · log δ^{−1})^{O(d²)},

where ρ ∼ R_r is a random restriction leaving each variable, independently, free with probability r, and otherwise assigning it 1 or −1 uniformly at random. In fact, we show a more general result for random block restrictions: given an arbitrary partitioning of input variables into m blocks, a random block restriction picks a uniformly random block ℓ ∈ [m] and assigns 1 or −1, uniformly at random, to all variables outside the chosen block ℓ. We prove the Block Restriction Lemma saying that a PTF f of degree d becomes δ-close to constant when hit with a random block restriction, except with probability at most m^{−1/2} · (log m · log δ^{−1})^{O(d²)}. As an application of our Restriction Lemma, we prove lower bounds against constant-depth circuits with PTF gates of any degree 1 ≤ d ≪ √(log n)/log log n, generalizing the recent bounds against constant-depth circuits with linear threshold gates (LTF gates) proved by Kane and Williams (STOC, 2016) and Chen, Santhanam, and Srinivasan (CCC, 2016). In particular, we show that there is an n-variate boolean function F_n ∈ P such that every depth-2 circuit with PTF gates of degree d ≥ 1 that computes F_n must have at least n^{3/2+1/d} · (log n)^{O(d²)} wires. For constant depths greater than 2, we also show average-case lower bounds for such circuits with a super-linear number of wires. These are the first super-linear bounds on the number of wires for circuits with PTF gates. We also give short proofs of the optimal-exponent average sensitivity bound for degree-d PTFs due to Kane (Computational Complexity, 2014), and the Littlewood-Offord type anticoncentration bound for degree-d multilinear polynomials due to Meka, Nguyen, and Vu (Theory of Computing, 2016). Finally, we give derandomized versions of our Block Restriction Lemma and Littlewood-Offord type anticoncentration bounds, using a pseudorandom generator for PTFs due to Meka and Zuckerman (SICOMP, 2013).
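
A brute-force sanity check (ours) of the statement on a tiny degree-2 PTF: sample ρ ∼ R_r and test δ-closeness of the restricted function by enumeration.

    import itertools, random

    def sgn(x):
        return 1 if x >= 0 else -1

    def ptf(x):   # a degree-2 PTF on 6 variables
        return sgn(x[0] * x[1] + x[2] * x[3] + x[4] + x[5] - 0.5)

    def restrict(n, r, rng):
        # rho ~ R_r: keep each variable free w.p. r, else fix to +/-1.
        return [None if rng.random() < r else rng.choice([-1, 1])
                for _ in range(n)]

    def close_to_constant(f, rho, delta):
        free = [i for i, v in enumerate(rho) if v is None]
        vals = []
        for bits in itertools.product([-1, 1], repeat=len(free)):
            x = rho[:]
            for i, b in zip(free, bits):
                x[i] = b
            vals.append(f(x))
        return any(sum(u != v for u in vals) <= delta * len(vals)
                   for v in (-1, 1))

    rng = random.Random(0)
    bad = sum(not close_to_constant(ptf, restrict(6, 1 / 16, rng), 1 / 16)
              for _ in range(2000))
    print("empirical failure probability:", bad / 2000)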

Publisher's Version Article Search
Targeted Pseudorandom Generators, Simulation Advice Generators, and Derandomizing Logspace
William M. Hoza and Chris Umans
(University of Texas at Austin, USA; California Institute of Technology, USA)
Publisher's Version Article Search
Probabilistic Rank and Matrix Rigidity
Josh Alman and Ryan Williams
(Massachusetts Institute of Technology, USA)

We consider a notion of probabilistic rank and probabilistic sign-rank of a matrix, which measure the extent to which a matrix can be probabilistically represented by low-rank matrices. We demonstrate several connections with matrix rigidity, communication complexity, and circuit lower bounds. The most interesting outcomes are:

The Walsh-Hadamard Transform is Not Very Rigid. We give surprising upper bounds on the rigidity of a family of matrices whose rigidity has been extensively studied, and was conjectured to be highly rigid. For the 2^n × 2^n Walsh-Hadamard transform H_n (a.k.a. Sylvester matrices, a.k.a. the communication matrix of Inner Product modulo 2), we show how to modify only 2^{εn} entries in each row and make the rank of H_n drop below 2^{n(1−Ω(ε²/log(1/ε)))}, for all small ε > 0, over any field. That is, it is not possible to prove arithmetic circuit lower bounds on Hadamard matrices such as H_n via L. Valiant’s matrix rigidity approach. We also show non-trivial rigidity upper bounds for H_n with smaller target rank.

Matrix Rigidity and Threshold Circuit Lower Bounds. We give new consequences of rigid matrices for Boolean circuit complexity. First, we show that explicit n × n Boolean matrices which maintain rank at least 2^{(log n)^{1−δ}} after n²/2^{(log n)^{δ/2}} modified entries (over any field, for any δ > 0) would yield an explicit function that does not have sub-quadratic-size AC0 circuits with two layers of arbitrary linear threshold gates. Second, we prove that explicit 0/1 matrices over the reals which are modestly more rigid than the best known rigidity lower bounds for sign-rank would imply exponential-gate lower bounds for the infamously difficult class of depth-two linear threshold circuits with arbitrary weights on both layers. In particular, we show that matrices defined by these seemingly-difficult circuit classes actually have low probabilistic rank and sign-rank, respectively.

An Equivalence Between Communication, Probabilistic Rank, and Rigidity. It has been known since Razborov [1989] that explicit rigidity lower bounds would resolve longstanding lower-bound problems in communication complexity, but it seemed possible that communication lower bounds could be proved without making progress on matrix rigidity. We show that for every function f which is randomly self-reducible in a natural way (the inner product mod 2 is an example), bounding the communication complexity of f (in a precise technical sense) is equivalent to bounding the rigidity of the matrix of f, via an equivalence with probabilistic rank.

Publisher's Version Article Search
Succinct Hitting Sets and Barriers to Proving Algebraic Circuits Lower Bounds
Michael A. Forbes, Amir Shpilka, and Ben Lee Volk
(Simons Institute for the Theory of Computing Berkeley, USA; Tel Aviv University, Israel)
We formalize a framework of algebraically natural lower bounds for algebraic circuits. Just as with the natural proofs notion of Razborov and Rudich for boolean circuit lower bounds, our notion of algebraically natural lower bounds captures nearly all lower bound techniques known. However, unlike the boolean setting, there has been no concrete evidence demonstrating that this is a barrier to obtaining super-polynomial lower bounds for general algebraic circuits, as there is little understanding of whether algebraic circuits are expressive enough to support "cryptography" secure against algebraic circuits. Following a similar result of Williams in the boolean setting, we show that the existence of an algebraic natural proofs barrier is equivalent to the existence of succinct derandomization of the polynomial identity testing problem: that is, whether the coefficient vectors of polylog(N)-degree polylog(N)-size circuits are a hitting set for the class of poly(N)-degree poly(N)-size circuits. Further, we give an explicit universal construction showing that if such a succinct hitting set exists, then our universal construction suffices. We then assess the existing literature constructing hitting sets for restricted classes of algebraic circuits and observe that none of them are succinct as given. Yet, we show how to modify some of these constructions to obtain succinct hitting sets. This constitutes the first evidence supporting the existence of an algebraic natural proofs barrier. Our framework is similar to the Geometric Complexity Theory (GCT) program of Mulmuley and Sohoni, except that here we emphasize constructiveness of the proofs while the GCT program emphasizes symmetry. Nevertheless, our succinct hitting sets have relevance to the GCT program, as they imply lower bounds for the complexity of the defining equations of polynomials computed by small circuits.
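
Recall what a hitting set derandomizes: the standard Schwartz–Zippel black-box identity test. A minimal sketch (ours):

    import random

    def black_box_pit(poly, nvars, trials=20, p=2**31 - 1):
        # A nonzero polynomial of degree D vanishes at a uniformly
        # random point of F^n with probability at most D/|F|.
        rng = random.Random(0)
        for _ in range(trials):
            pt = [rng.randrange(p) for _ in range(nvars)]
            if poly(pt) % p != 0:
                return "nonzero"        # certified
        return "zero (whp)"

    # (x + y)^2 - x^2 - 2xy - y^2 is identically zero.
    print(black_box_pit(lambda v: (v[0] + v[1])**2 - v[0]**2
                        - 2*v[0]*v[1] - v[1]**2, nvars=2))

A hitting set replaces the random points by a fixed set on which no nonzero circuit in the class vanishes everywhere; succinctness additionally asks that this set itself be described by small circuits.
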
Publisher's Version Article Search
Pseudodeterministic Constructions in Subexponential Time
Igor C. Oliveira and Rahul Santhanam
(Charles University in Prague, Czechia; University of Oxford, UK)
Publisher's Version Article Search

Session 6B

An SDP-Based Algorithm for Linear-Sized Spectral Sparsification
Yin Tat Lee and He Sun
(Microsoft Research, USA; University of Bristol, UK)

For any undirected and weighted graph G=(V,E,w) with n vertices and m edges, we call a sparse subgraph H of G, with proper reweighting of the edges, a (1+ε)-spectral sparsifier if

(1−ε) · x^T L_G x ≤ x^T L_H x ≤ (1+ε) · x^T L_G x

holds for any x ∈ ℝ^n, where L_G and L_H are the respective Laplacian matrices of G and H. Noticing that Ω(m) time is needed for any algorithm to construct a spectral sparsifier, and that a spectral sparsifier of G requires Ω(n) edges, a natural question is to investigate, for any constant ε, whether a (1+ε)-spectral sparsifier of G with O(n) edges can be constructed in Õ(m) time, where the Õ notation suppresses polylogarithmic factors. All previous constructions of spectral sparsifiers require either a super-linear number of edges or m^{1+Ω(1)} time.

In this work we answer this question affirmatively by presenting an algorithm that, for any undirected graph G and ε > 0, outputs a (1+ε)-spectral sparsifier of G with O(n/ε²) edges in Õ(m/ε^{O(1)}) time. Our algorithm is based on three novel techniques: (1) a new potential function which is much easier to compute yet has similar guarantees as the potential functions used in previous work; (2) an efficient reduction from a two-sided spectral sparsifier to a one-sided spectral sparsifier; (3) constructing a one-sided spectral sparsifier by a semi-definite program.
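
A direct numerical check (ours) of the sparsifier definition on a toy pair of graphs, using generalized eigenvalues in place of quantifying over all x:

    import numpy as np

    def laplacian(n, edges):
        L = np.zeros((n, n))
        for u, v, w in edges:
            L[u, u] += w; L[v, v] += w
            L[u, v] -= w; L[v, u] -= w
        return L

    def is_sparsifier(LG, LH, eps):
        # The two-sided quadratic-form condition holds iff all nonzero
        # eigenvalues of pinv(LG) @ LH lie in [1 - eps, 1 + eps].
        ev = np.linalg.eigvals(np.linalg.pinv(LG) @ LH).real
        ev = ev[ev > 1e-9]      # drop the shared all-ones kernel
        return ev.min() >= 1 - eps and ev.max() <= 1 + eps

    K4 = [(u, v, 1.0) for u in range(4) for v in range(u + 1, 4)]
    C4 = [(0, 1, 1.5), (1, 2, 1.5), (2, 3, 1.5), (3, 0, 1.5)]
    print(is_sparsifier(laplacian(4, K4), laplacian(4, C4), eps=0.6))  # True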

Publisher's Version Article Search
Low Rank Approximation with Entrywise ℓ₁-Norm Error
Zhao Song, David P. Woodruff, and Peilin Zhong
(University of Texas at Austin, USA; IBM Research, USA; Columbia University, USA)

We study the ℓ1-low rank approximation problem, where for a given n × d matrix A and approximation factor α ≥ 1, the goal is to output a rank-k matrix Â for which

‖A − Â‖1 ≤ α · min_{rank-k matrices A′} ‖A − A′‖1,

where for an n × d matrix C, we let ‖C‖1 = Σ_{i=1}^{n} Σ_{j=1}^{d} |C_{i,j}|. This error measure is known to be more robust than the Frobenius norm in the presence of outliers and is indicated in models where Gaussian assumptions on the noise may not apply. The problem was shown to be NP-hard by Gillis and Vavasis and a number of heuristics have been proposed. It was asked in multiple places if there are any approximation algorithms.

We give the first provable approximation algorithms for ℓ1-low rank approximation, showing that it is possible to achieve approximation factor α = (log d) · poly(k) in nnz(A) + (n+d)·poly(k) time, where nnz(A) denotes the number of non-zero entries of A. If k is constant, we further improve the approximation ratio to O(1) with a poly(nd)-time algorithm. Under the Exponential Time Hypothesis, we show there is no poly(nd)-time algorithm achieving a (1 + 1/log^{1+γ}(nd))-approximation, for γ > 0 an arbitrarily small constant, even when k = 1.

We give a number of additional results for ℓ1-low rank approximation: nearly tight upper and lower bounds for column subset selection, CUR decompositions, extensions to low rank approximation with respect to ℓp-norms for 1 ≤ p < 2 and earthmover distance, low-communication distributed protocols and low-memory streaming algorithms, algorithms with limited randomness, and bicriteria algorithms. We also give a preliminary empirical evaluation.
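
To make the objective concrete, a short sketch (ours) computing the entrywise ℓ1 error of the Frobenius-optimal (SVD) rank-k baseline, which need not be close to ℓ1-optimal in the presence of outliers:

    import numpy as np

    def entrywise_l1(A, B):
        # ||A - B||_1 = sum_ij |A_ij - B_ij|
        return float(np.abs(A - B).sum())

    def svd_rank_k(A, k):
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        return (U[:, :k] * s[:k]) @ Vt[:k]

    rng = np.random.default_rng(0)
    A = np.outer(rng.standard_normal(20), rng.standard_normal(30))
    A += (rng.random(A.shape) < 0.05) * 10.0    # sprinkle sparse outliers
    print(entrywise_l1(A, svd_rank_k(A, 1)))

The paper's algorithms carry guarantees against the ℓ1-optimal rank-k matrix rather than this Frobenius surrogate.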

Publisher's Version Article Search Info
An Adaptive Sublinear-Time Block Sparse Fourier Transform
Volkan Cevher, Michael Kapralov, Jonathan Scarlett, and Amir Zandieh
(EPFL, Switzerland)
Publisher's Version Article Search
Streaming Symmetric Norms via Measure Concentration
Jarosław Błasiok, Vladimir Braverman, Stephen R. Chestnut, Robert Krauthgamer, and Lin F. Yang
(Harvard University, USA; Johns Hopkins University, USA; ETH Zurich, Switzerland; Weizmann Institute of Science, Israel)
Publisher's Version Article Search
Sampling Random Spanning Trees Faster Than Matrix Multiplication
David Durfee, Rasmus Kyng, John Peebles, Anup B. Rao, and Sushant Sachdeva
(Georgia Institute of Technology, USA; Yale University, USA; Massachusetts Institute of Technology, USA; Google, USA)
Publisher's Version Article Search Info

Session 6C

A Time- and Message-Optimal Distributed Algorithm for Minimum Spanning Trees
Gopal Pandurangan, Peter Robinson, and Michele Scquizzato
(University of Houston, USA; Royal Holloway University of London, UK)
Publisher's Version Article Search
Distributed Exact Shortest Paths in Sublinear Time
Michael Elkin
(Ben-Gurion University of the Negev, Israel)

The distributed single-source shortest paths problem is one of the most fundamental and central problems in message-passing distributed computing. The classical Bellman–Ford algorithm solves it in O(n) time, where n is the number of vertices in the input graph G. Peleg and Rubinovich, FOCS’99, showed a lower bound of Ω(D + √n) for this problem, where D is the hop-diameter of G.

Whether or not this problem can be solved in o(n) time when D is relatively small is a notorious open question. Despite intensive research that yielded near-optimal algorithms for the approximate variant of this problem, no progress was reported for the original problem.

In this paper we answer this question in the affirmative. We devise an algorithm that requires O((n log n)^{5/6}) time, for D = O(√(n log n)), and O(D^{1/3} · (n log n)^{2/3}) time, for larger D. This running time is sublinear in n in almost the entire range of parameters, specifically, for D = o(n/log² n).

We also generalize our result in two directions. One is when edges have bandwidth b ≥ 1, and the other is the s-sources shortest paths problem. For the former problem, our algorithm provides an improved bound, compared to the unit-bandwidth case. In particular, we provide an all-pairs shortest paths algorithm that requires O(n^{5/3} · log^{2/3} n) time, even for b = 1, for all values of D.

For the latter problem (of s sources), our algorithm also provides bounds that improve upon the previous state-of-the-art in the entire range of parameters.

From the technical viewpoint, our algorithm computes a hopset G″ of a skeleton graph G′ of G without first computing G′ itself.

We then conduct a Bellman-Ford exploration in G′ ∪ G″, while computing the required edges of G′ on the fly. As a result, our algorithm computes exactly those edges of G′ that it really needs, rather than computing approximately the entire G′.
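
As a baseline for the model, a synchronous message-passing simulation (ours) of the Bellman–Ford algorithm referenced above; in each round every vertex forwards its current distance estimate to its out-neighbors, and n rounds suffice (hence the classical O(n) bound):

    def distributed_bellman_ford(n, edges, source, rounds):
        INF = float("inf")
        dist = [INF] * n
        dist[source] = 0
        for _ in range(rounds):
            # One synchronous round of messages along every edge.
            msgs = [(v, dist[u] + w) for u, v, w in edges if dist[u] < INF]
            for v, d in msgs:
                dist[v] = min(dist[v], d)
        return dist

    edges = [(0, 1, 3.0), (1, 2, 1.0), (0, 2, 5.0), (2, 3, 2.0)]
    print(distributed_bellman_ford(4, edges, source=0, rounds=4))
    # [0, 3.0, 4.0, 6.0]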

Publisher's Version Article Search
Exponential Separations in the Energy Complexity of Leader Election
Yi-Jun Chang, Tsvi Kopelowitz, Seth Pettie, Ruosong Wang, and Wei Zhan
(University of Michigan, USA; Tsinghua University, China)

Energy is often the most constrained resource for battery-powered wireless devices and the lion’s share of energy is often spent on transceiver usage (sending/receiving packets), not on computation. In this paper we study the energy complexity of Leader Election and Approximate Counting in several models of wireless radio networks. It turns out that energy complexity is very sensitive to whether the devices can generate random bits and to their ability to detect collisions. We consider four collision-detection models: Strong-CD (in which transmitters and listeners detect collisions), Sender-CD and Receiver-CD (in which only transmitters or only listeners detect collisions), and No-CD (in which no one detects collisions).

The take-away message of our results is quite surprising. For randomized Leader Election algorithms, there is an exponential gap between the energy complexity of Sender-CD and Receiver-CD:

No-CD = Sender-CD ≫ Receiver-CD = Strong-CD

and for deterministic Leader Election algorithms, there is another exponential gap in energy complexity, but in the reverse direction:

No-CD = Receiver-CD ≫ Sender-CD = Strong-CD

In particular, the randomized energy complexity of Leader Election is Θ(log* n) in Sender-CD but Θ(log(log* n)) in Receiver-CD, where n is the (unknown) number of devices. Its deterministic complexity is Θ(log N) in Receiver-CD but Θ(log log N) in Sender-CD, where N is the (known) size of the devices’ ID space.

There is a tradeoff between time and energy. We give a new upper bound on the time-energy tradeoff curve for randomized Leader Election and Approximate Counting. A critical component of this algorithm is a new deterministic Leader Election algorithm for dense instances, when n = Θ(N), with inverse-Ackermann-type O(α(N)) energy complexity.

Publisher's Version Article Search
On the Complexity of Local Distributed Graph Problems
Mohsen Ghaffari, Fabian Kuhn, and Yannic Maus
(ETH Zurich, Switzerland; University of Freiburg, Germany)
Publisher's Version Article Search
Efficient Massively Parallel Methods for Dynamic Programming
Sungjin Im, Benjamin Moseley, and Xiaorui Sun
(University of California at Merced, USA; Washington University at St. Louis, USA; Simons Institute for the Theory of Computing Berkeley, USA)
Publisher's Version Article Search

Session 7A

Complexity of Short Presburger Arithmetic
Danny Nguyen and Igor Pak
(University of California at Los Angeles, USA)
We study the complexity of short sentences in Presburger arithmetic (Short-PA). Here by “short” we mean sentences with a bounded number of variables, quantifiers, inequalities and Boolean operations; the input consists only of the integers involved in the inequalities. We prove that, assuming Kannan’s partition can be found in polynomial time, the satisfiability of Short-PA sentences can be decided in polynomial time. Furthermore, under the same assumption, we show that the numbers of satisfying assignments of short Presburger sentences can also be computed in polynomial time.
Publisher's Version Article Search
Linear Matroid Intersection Is in Quasi-NC
Rohit Gurjar and Thomas Thierauf
(Tel Aviv University, Israel; Aalen University, Germany)
Publisher's Version Article Search
Randomized Polynomial Time Identity Testing for Noncommutative Circuits
V. Arvind, Pushkar S Joglekar, Partha Mukhopadhyay, and S. Raja
(Institute of Mathematical Sciences, India; Vishwakarma Institute of Technology Pune, India; Chennai Mathematical Institute, India)

In this paper we show that black-box polynomial identity testing for noncommutative polynomials f ∈ F⟨z1,z2,⋯,zn⟩ of degree D and sparsity t can be done in randomized poly(n, log t, log D) time. As a consequence, given a circuit C of size s computing a polynomial f ∈ F⟨z1,z2,⋯,zn⟩ with at most t non-zero monomials, testing whether f is identically zero can be done by a randomized algorithm with running time polynomial in s, n, and log t. This makes significant progress on a question that has been open for over ten years. Our algorithm is based on automata-theoretic ideas that can efficiently isolate a monomial in the given polynomial. In particular, we carry out the monomial isolation using nondeterministic automata.

In general, noncommutative circuits of size s can compute polynomials of degree exponential in s with a number of monomials double-exponential in s. In this paper, we consider a natural class of homogeneous noncommutative circuits, which we call +-regular circuits, and give a white-box deterministic polynomial-time polynomial identity test. These circuits can compute noncommutative polynomials whose number of monomials is double-exponential in the circuit size. Our algorithm combines some new structural results for +-regular circuits with known results for noncommutative ABP identity testing, the rank bound for commutative depth-three identities, and the equivalence testing problem for words. Finally, we consider the black-box identity testing problem for depth-three +-regular circuits and give a randomized polynomial time identity test. In particular, we show that if f ∈ F⟨Z⟩ is a nonzero noncommutative polynomial computed by a depth-three +-regular circuit of size s, then f cannot be a polynomial identity for the matrix algebra M_s(F) when F is sufficiently large depending on the degree of f.
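
The black-box tests above boil down to evaluating the polynomial on matrices; a toy version (ours) for a polynomial given as a black box:

    import numpy as np

    def nc_pit(poly, nvars, dim=4, trials=10, seed=0):
        # Substitute independent random dim x dim matrices; a nonzero
        # evaluation certifies that the noncommutative polynomial is
        # nonzero (a low-degree polynomial cannot vanish on all of
        # M_dim(F) once dim is large enough, by Amitsur-Levitzki).
        rng = np.random.default_rng(seed)
        for _ in range(trials):
            mats = [rng.integers(-5, 6, (dim, dim)).astype(float)
                    for _ in range(nvars)]
            if not np.allclose(poly(mats), 0):
                return "nonzero"
        return "probably zero"

    # f(z1, z2) = z1 z2 - z2 z1 vanishes on commuting inputs but is a
    # nonzero noncommutative polynomial.
    print(nc_pit(lambda m: m[0] @ m[1] - m[1] @ m[0], nvars=2))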

Publisher's Version Article Search
Holographic Algorithm with Matchgates Is Universal for Planar #CSP over Boolean Domain
Jin-Yi Cai and Zhiguo Fu
(University of Wisconsin-Madison, USA; Jilin University, China)
We prove a complexity classification theorem that classifies all counting constraint satisfaction problems (#CSP) over Boolean variables into exactly three classes: (1) polynomial-time solvable; (2) #P-hard for general instances, but solvable in polynomial time over planar structures; and (3) #P-hard over planar structures. The classification applies to all finite sets of complex-valued, not necessarily symmetric, constraint functions on Boolean variables. It is shown that Valiant's holographic algorithm with matchgates is a universal strategy for all problems in class (2).
Publisher's Version Article Search

Session 7B

Efficient Empirical Revenue Maximization in Single-Parameter Auction Environments
Yannai A. Gonczarowski and Noam Nisan
(Hebrew University of Jerusalem, Israel; Microsoft Research, Israel)
We present a polynomial-time algorithm that, given samples from the unknown valuation distribution of each bidder, learns an auction that approximately maximizes the auctioneer's revenue in a variety of single-parameter auction environments including matroid environments, position environments, and the public project environment. The valuation distributions may be arbitrary bounded distributions (in particular, they may be irregular, and may differ for the various bidders), thus resolving a problem left open by previous papers. The analysis uses basic tools, is performed in its entirety in value-space, and simplifies the analysis of previously known results for special cases. Furthermore, the analysis extends to certain single-parameter auction environments where precise revenue maximization is known to be intractable, such as knapsack environments.
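
A minimal single-bidder, single-item instance of the empirical approach (ours): pick the reserve price maximizing revenue on the samples; only sampled values need be considered as candidates.

    import random

    def empirical_reserve(samples):
        return max(set(samples),
                   key=lambda p: p * sum(v >= p for v in samples))

    rng = random.Random(0)
    samples = [rng.uniform(0, 1) for _ in range(2000)]
    print(empirical_reserve(samples))   # near 1/2, the optimum for U[0,1]
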
Publisher's Version Article Search
The Menu-Size Complexity of Revenue Approximation
Moshe Babaioff, Yannai A. Gonczarowski, and Noam Nisan
(Microsoft Research, Israel; Hebrew University of Jerusalem, Israel)
Publisher's Version Article Search
Communication Complexity of Approximate Nash Equilibria
Yakov Babichenko and Aviad Rubinstein
(Technion, Israel; University of California at Berkeley, USA)
Publisher's Version Article Search
Settling the Complexity of Leontief and PLC Exchange Markets under Exact and Approximate Equilibria
Jugal Garg, Ruta Mehta, Vijay V. Vazirani, and Sadra Yazdanbod
(University of Illinois at Urbana-Champaign, USA; Georgia Institute of Technology, USA)
Our first result shows membership in PPAD for the problem of computing approximate equilibria for an Arrow-Debreu exchange market for piecewise-linear concave (PLC) utility functions. As a corollary we also obtain membership in PPAD for Leontief utility functions. This settles an open question of Vazirani and Yannakakis (2011). Next we show FIXP-hardness of computing equilibria in Arrow-Debreu exchange markets under Leontief utility functions, and Arrow-Debreu markets under linear utility functions and Leontief production sets, thereby settling these open questions of Vazirani and Yannakakis (2011). As corollaries, we obtain FIXP-hardness for PLC utilities and for Arrow-Debreu markets under linear utility functions and polyhedral production sets. In all cases, as required under FIXP, the set of instances mapped onto will admit equilibria, i.e., will be "yes" instances. If all instances are under consideration, then in all cases we prove that the problem of deciding if a given instance admits an equilibrium is ETR-complete, where ETR is the class Existential Theory of Reals. As a consequence of the results stated above, and the fact that membership in FIXP has been established for PLC utilities, the entire computational difficulty of Arrow-Debreu markets under PLC utility functions lies in the Leontief utility subcase. This is perhaps the most unexpected aspect of our result, since Leontief utilities are meant for the case that goods are perfect complements, whereas PLC utilities are very general, capturing not only the cases when goods are complements and substitutes, but also arbitrary combinations of these and much more. Finally, we give a polynomial time algorithm for finding an equilibrium in Arrow-Debreu exchange markets under Leontief utility functions provided the number of agents is a constant. This settles part of an open problem of Devanur and Kannan (2008).
Publisher's Version Article Search

Session 7C

Approximate Near Neighbors for General Symmetric Norms
Alexandr Andoni, Huy L. Nguyen, Aleksandar Nikolov, Ilya Razenshteyn, and Erik Waingarten
(Columbia University, USA; Northeastern University, USA; University of Toronto, Canada; Massachusetts Institute of Technology, USA)
Publisher's Version Article Search
Algorithmic Discrepancy Beyond Partial Coloring
Nikhil Bansal and Shashwat Garg
(Eindhoven University of Technology, Netherlands)
Publisher's Version Article Search
Geodesic Walks in Polytopes
Yin Tat Lee and Santosh S. Vempala
(Microsoft Research, USA; University of Washington, USA; Georgia Institute of Technology, USA)
Publisher's Version Article Search
A Reverse Minkowski Theorem
Oded Regev and Noah Stephens-Davidowitz
(New York University, USA)
Publisher's Version Article Search

Session 8: Danny Lewin Prize STOC Best Student Paper

Almost-Polynomial Ratio ETH-Hardness of Approximating Densest k-Subgraph
Pasin Manurangsi
(University of California at Berkeley, USA)
Publisher's Version Article Search

Session 9A

Efficient Quantum Tomography II
Ryan O'Donnell and John Wright
(Carnegie Mellon University, USA; Massachusetts Institute of Technology, USA)
Publisher's Version Article Search
Quantum Entanglement, Sum of Squares, and the Log Rank Conjecture
Boaz Barak, Pravesh K. Kothari, and David Steurer
(Harvard University, USA; Princeton University, USA; IAS, USA; Cornell University, USA)
Publisher's Version Article Search
Quantum Algorithm for Tree Size Estimation, with Applications to Backtracking and 2-Player Games
Andris Ambainis and Martins Kokainis
(University of Latvia, Latvia)
Publisher's Version Article Search
A Quantum Linearity Test for Robustly Verifying Entanglement
Anand Natarajan and Thomas Vidick
(Massachusetts Institute of Technology, USA; California Institute of Technology, USA)
Publisher's Version Article Search

Session 9B

The Limitations of Optimization from Samples
Eric Balkanski, Aviad Rubinstein, and Yaron Singer
(Harvard University, USA; University of California at Berkeley, USA)
In this paper we consider the following question: can we optimize objective functions from the training data we use to learn them? We formalize this question through a novel framework we call optimization from samples (OPS). In OPS, we are given sampled values of a function drawn from some distribution and the objective is to optimize the function under some constraint. While there are interesting classes of functions that can be optimized from samples, our main result is an impossibility. We show that there are classes of functions which are statistically learnable and optimizable, but for which no reasonable approximation for optimization from samples is achievable. In particular, our main result shows that there is no constant factor approximation for maximizing coverage functions under a cardinality constraint using polynomially-many samples drawn from any distribution. We also show tight approximation guarantees for maximization under a cardinality constraint of several interesting classes of functions including unit-demand, additive, and general monotone submodular functions, as well as a constant factor approximation for monotone submodular functions with bounded curvature.
Publisher's Version Article Search
Approximate Modularity Revisited
Uriel Feige, Michal Feldman, and Inbal Talgam-Cohen
(Weizmann Institute of Science, Israel; Microsoft Research, Israel; Tel Aviv University, Israel; Hebrew University of Jerusalem, Israel)
Publisher's Version Article Search
Trace Reconstruction with exp(O(n1/3)) Samples
Fedor Nazarov and Yuval Peres
(Kent State University, USA; Microsoft Research, USA)
Publisher's Version Article Search
Optimal Mean-Based Algorithms for Trace Reconstruction
Anindya De, Ryan O'Donnell, and Rocco A. Servedio
(Northwestern University, USA; Carnegie Mellon University, USA; Columbia University, USA)
Publisher's Version Article Search
Provable Learning of Noisy-or Networks
Sanjeev Arora, Rong Ge, Tengyu Ma, and Andrej Risteski
(Princeton University, USA; Duke University, USA)
Many machine learning applications use latent variable models to explain structure in data, whereby visible variables (= coordinates of the given datapoint) are explained as a probabilistic function of some hidden variables. Learning the model, that is, the mapping from hidden variables to visible ones and vice versa, is NP-hard even in very simple settings. In recent years, provably efficient algorithms were nevertheless developed for models with linear structure: topic models, mixture models, hidden Markov models, etc. These algorithms use matrix or tensor decomposition, and make some reasonable assumptions about the parameters of the underlying model. But matrix or tensor decomposition seems of little use when the latent variable model has nonlinearities. The current paper shows how to make progress: tensor decomposition is applied for learning the single-layer noisy-OR network, which is a textbook example of a Bayes net, and used for example in the classic QMR-DT software for diagnosing which disease(s) a patient may have by observing the symptoms he/she exhibits. The technical novelty here, which should be useful in other settings in the future, is analysis of tensor decomposition in the presence of systematic error (i.e., where the noise/error is correlated with the signal, and doesn't decrease as the number of samples goes to infinity). This requires rethinking all steps of tensor decomposition methods from the ground up. For simplicity our analysis is stated assuming that the network parameters were chosen from a probability distribution, but the method seems more generally applicable.
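
A generative sketch (ours) of the single-layer noisy-OR model being learned, in QMR-DT style:

    import random

    def sample_noisy_or(prior, Q, rng):
        # Disease j occurs w.p. prior[j]; symptom i stays off despite an
        # active disease j w.p. (1 - Q[i][j]) independently, so
        #   P(symptom i = 0 | d) = prod_j (1 - Q[i][j])**d_j.
        d = [int(rng.random() < p) for p in prior]
        s = []
        for row in Q:
            p_off = 1.0
            for q, dj in zip(row, d):
                if dj:
                    p_off *= 1 - q
            s.append(int(rng.random() >= p_off))
        return d, s

    Q = [[0.9, 0.0, 0.3],    # 2 symptoms, 3 diseases
         [0.0, 0.8, 0.5]]
    print(sample_noisy_or([0.2, 0.2, 0.2], Q, random.Random(0)))
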
Publisher's Version Article Search
Time-Space Hardness of Learning Sparse Parities
Gillat Kol, Ran Raz, and Avishay Tal
(Princeton University, USA; IAS, USA)
Publisher's Version Article Search

Session 9C

DecreaseKeys Are Expensive for External Memory Priority Queues
Kasper Eenberg, Kasper Green Larsen, and Huacheng Yu
(Aarhus University, Denmark; Stanford University, USA)
Publisher's Version Article Search
Set Similarity Search Beyond MinHash
Tobias Christiani and Rasmus Pagh
(IT University of Copenhagen, Denmark)

We consider the problem of approximate set similarity search under Braun-Blanquet similarity B(x, y) = |x ∩ y| / max(|x|, |y|). The (b1, b2)-approximate Braun-Blanquet similarity search problem is to preprocess a collection of sets P such that, given a query set q, if there exists x ∈ P with B(q, x) ≥ b1, then we can efficiently return x′ ∈ P with B(q, x′) > b2.

We present a simple data structure that solves this problem with space usage O(n^{1+ρ} log n + Σ_{x∈P} |x|) and query time O(|q| · n^ρ log n), where n = |P| and ρ = log(1/b1)/log(1/b2). Making use of existing lower bounds for locality-sensitive hashing by O’Donnell et al. (TOCT 2014), we show that this value of ρ is tight across the parameter space, i.e., for every choice of constants 0 < b2 < b1 < 1.

In the case where all sets have the same size our solution strictly improves upon the value of ρ that can be obtained through the use of state-of-the-art data-independent techniques in the Indyk-Motwani locality-sensitive hashing framework (STOC 1998) such as Broder’s MinHash (CCS 1997) for Jaccard similarity and Andoni et al.’s cross-polytope LSH (NIPS 2015) for cosine similarity. Surprisingly, even though our solution is data-independent, for a large part of the parameter space we outperform the currently best data-dependent method by Andoni and Razenshteyn (STOC 2015).
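
The similarity measure and the exponent ρ are easy to compute directly (sketch ours):

    from math import log

    def braun_blanquet(x, y):
        # B(x, y) = |x ∩ y| / max(|x|, |y|)
        return len(x & y) / max(len(x), len(y))

    def jaccard(x, y):
        return len(x & y) / len(x | y)

    x, y = {1, 2, 3, 4}, {3, 4, 5, 6}
    b1, b2 = 0.8, 0.5
    rho = log(1 / b1) / log(1 / b2)    # the paper's tight exponent
    print(braun_blanquet(x, y), jaccard(x, y), rho)   # 0.5 0.333... 0.32...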

Publisher's Version Article Search
Decremental Single-Source Reachability in Planar Digraphs
Giuseppe F. Italiano, Adam Karczmarz, Jakub Łącki, and Piotr Sankowski
(University of Rome Tor Vergata, Italy; University of Warsaw, Poland; Google Research, USA)
Publisher's Version Article Search
Dynamic Spanning Forest with Worst-Case Update Time: Adaptive, Las Vegas, and O(n1/2 - ε)-Time
Danupon Nanongkai and Thatchaphol Saranurak
(KTH, Sweden)
Publisher's Version Article Search
Fully-Dynamic Minimum Spanning Forest with Improved Worst-Case Update Time
Christian Wulff-Nilsen
(University of Copenhagen, Denmark)
Publisher's Version Article Search

Session 10A

Improved Non-malleable Extractors, Non-malleable Codes and Independent Source Extractors
Xin Li
(Johns Hopkins University, USA)
Publisher's Version Article Search
Towards Optimal Two-Source Extractors and Ramsey Graphs
Gil Cohen
(Princeton University, USA)

The main contribution of this work is a construction of a two-source extractor for quasi-logarithmic min-entropy. That is, an extractor for two independent n-bit sources with min-entropy Õ(log n), which is optimal up to the poly(log log n) factor. A strong motivation for constructing two-source extractors for low entropy is for Ramsey graphs constructions. Our two-source extractor readily yields a (log n)^{(log log log n)^{O(1)}}-Ramsey graph on n vertices.

Although there has been exciting progress towards constructing O(logn)-Ramsey graphs in recent years, a line of work that this paper contributes to, it is not clear if current techniques can be pushed so as to match this bound. Interestingly, however, as an artifact of current techniques, one obtains strongly explicit Ramsey graphs, namely, graphs on n vertices where the existence of an edge connecting any pair of vertices can be determined in time poly(logn). On top of our strongly explicit construction, in this work, we consider algorithms that output the entire graph in poly(n)-time, and make progress towards matching the desired O(logn) bound in this setting. In our opinion, this is a natural setting in which Ramsey graphs constructions should be studied.

The main technical novelty of this work lies in an improved construction of an independence-preserving merger (IPM), a variant of the well-studied notion of a merger, which was recently introduced by Cohen and Schulman. Our construction is based on a new connection to correlation breakers with advice. In fact, our IPM satisfies a stronger and more natural property than that required by the original definition, and we believe it may find further applications.
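
For orientation, the classical baseline that the construction improves far below (sketch ours): the inner-product extractor outputs a nearly uniform bit from two independent n-bit sources once both min-entropies exceed roughly n/2, whereas the construction above works at quasi-logarithmic min-entropy.

    def ip_extractor(x, y):
        # Chor-Goldreich two-source extractor: <x, y> mod 2.
        return sum(a & b for a, b in zip(x, y)) % 2

    print(ip_extractor([1, 0, 1, 1], [0, 1, 1, 0]))   # one output bit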

Publisher's Version Article Search
Non-malleable Codes and Extractors for Small-Depth Circuits, and Affine Functions
Eshan Chattopadhyay and Xin Li
(IAS, USA; Johns Hopkins University, USA)
Publisher's Version Article Search Video
An Efficient Reduction from Two-Source to Non-malleable Extractors: Achieving Near-Logarithmic Min-entropy
Avraham Ben-Aroya, Dean Doron, and Amnon Ta-Shma
(Tel Aviv University, Israel)
Publisher's Version Article Search

Session 10B

Finding Approximate Local Minima Faster than Gradient Descent
Naman Agarwal, Zeyuan Allen-Zhu, Brian Bullins, Elad Hazan, and Tengyu Ma
(Princeton University, USA; IAS, USA)
Publisher's Version Article Search
Katyusha: The First Direct Acceleration of Stochastic Gradient Methods
Zeyuan Allen-Zhu
(IAS, USA; Princeton University, USA)
Publisher's Version Article Search
A Strongly Polynomial Algorithm for Bimodular Integer Linear Programming
Stephan Artmann, Robert Weismantel, and Rico Zenklusen
(ETH Zurich, Switzerland)
Publisher's Version Article Search
Subquadratic Submodular Function Minimization
Deeparnab Chakrabarty, Yin Tat Lee, Aaron Sidford, and Sam Chiu-wai Wong
(Dartmouth College, USA; Microsoft Research, USA; Stanford University, USA; University of California at Berkeley, USA)
Publisher's Version Article Search

Session 10C

Addition Is Exponentially Harder Than Counting for Shallow Monotone Circuits
Xi Chen, Igor C. Oliveira, and Rocco A. Servedio
(Columbia University, USA; Charles University in Prague, Czechia)
Publisher's Version Article Search
Strongly Exponential Lower Bounds for Monotone Computation
Toniann Pitassi and Robert Robere
(University of Toronto, Canada)
Publisher's Version Article Search
Formula Lower Bounds via the Quantum Method
Avishay Tal
Publisher's Version Article Search
