STOC 2024 – Author Index 

Abboud, Amir 
STOC '24: "New Graph Decompositions and ..."
New Graph Decompositions and Combinatorial Boolean Matrix Multiplication Algorithms
Amir Abboud, Nick Fischer, Zander Kelley, Shachar Lovett, and Raghu Meka (Weizmann Institute of Science, Israel; University of Illinois at Urbana-Champaign, USA; University of California at San Diego, USA; University of California at Los Angeles, USA) We revisit the fundamental Boolean Matrix Multiplication (BMM) problem. With the invention of algebraic fast matrix multiplication over 50 years ago, it also became known that BMM can be solved in truly subcubic O(n^{ω}) time, where ω<3; much work has gone into bringing ω closer to 2. Since then, a parallel line of work has sought comparably fast combinatorial algorithms but with limited success. The naïve O(n^{3})-time algorithm was initially improved by a log^{2} n factor [Arlazarov et al.; RAS'70], then by log^{2.25} n [Bansal and Williams; FOCS'09], then by log^{3} n [Chan; SODA'15], and finally by log^{4} n [Yu; ICALP'15]. We design a combinatorial algorithm for BMM running in time n^{3}/2^{Ω((log n)^{1/7})} – a speedup over cubic time that is stronger than any polylog factor. This comes tantalizingly close to refuting the conjecture from the 90s that truly subcubic combinatorial algorithms for BMM are impossible. This popular conjecture is the basis for dozens of fine-grained hardness results. Our main technical contribution is a new regularity decomposition theorem for Boolean matrices (or equivalently, bipartite graphs) under a notion of regularity that was recently introduced and analyzed analytically in the context of communication complexity [Kelley, Lovett, Meka; STOC'24], and is related to a similar notion from the recent work on 3-term arithmetic progression free sets [Kelley, Meka; FOCS'23].
@InProceedings{STOC24p935, author = {Amir Abboud and Nick Fischer and Zander Kelley and Shachar Lovett and Raghu Meka}, title = {New Graph Decompositions and Combinatorial Boolean Matrix Multiplication Algorithms}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {935--943}, doi = {10.1145/3618260.3649696}, year = {2024}, }
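The combinatorial speedups traced in the abstract all exploit word-level parallelism in one form or another. As an illustration only (this is the classic bitset trick underlying the log-factor algorithms, not the paper's regularity-based method; the function name is ours), one can pack each row of B into an integer so a whole row is OR-combined per step:

```python
def bmm_bitset(A, B):
    """Boolean matrix product C = A*B with rows of B packed as Python ints.

    A, B: n x n 0/1 matrices given as lists of lists. Packing row j of B
    into one integer lets us OR an entire row into the accumulator at
    once, the word-level trick behind "Four Russians"-style speedups.
    Not the paper's algorithm; a toy sketch only.
    """
    n = len(A)
    # pack row j of B into an int whose k-th bit is B[j][k]
    Brows = [sum(bit << k for k, bit in enumerate(row)) for row in B]
    C = []
    for i in range(n):
        acc = 0
        for j in range(n):
            if A[i][j]:
                acc |= Brows[j]  # OR in all n entries of row j in O(n/w) word ops
        C.append([(acc >> k) & 1 for k in range(n)])
    return C
```

Packing yields roughly a factor-w speedup for word size w; the paper's n^{3}/2^{Ω((log n)^{1/7})} bound goes far beyond any such polylog-factor gain.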

Abrahamsen, Mikkel 
STOC '24: "Minimum Star Partitions of ..."
Minimum Star Partitions of Simple Polygons in Polynomial Time
Mikkel Abrahamsen, Joakim Blikstad, André Nusser, and Hanwen Zhang (University of Copenhagen, Copenhagen, Denmark; KTH Royal Institute of Technology, Stockholm, Sweden; MPI-INF, Germany) We devise a polynomial-time algorithm for partitioning a simple polygon P into a minimum number of star-shaped polygons. The question of whether such an algorithm exists has been open for more than four decades [Avis and Toussaint, Pattern Recognit., 1981] and it has been repeated frequently, for example in O'Rourke's famous book [Art Gallery Theorems and Algorithms, 1987]. In addition to its strong theoretical motivation, the problem is also motivated by practical domains such as CNC pocket milling, motion planning, and shape parameterization. The only previously known algorithm for a nontrivial special case is for P being both monotone and rectilinear [Liu and Ntafos, Algorithmica, 1991]. For general polygons, an algorithm was only known for the restricted version in which Steiner points are disallowed [Keil, SIAM J. Comput., 1985], meaning that each corner of a piece in the partition must also be a corner of P. Interestingly, the solution size for the restricted version may be linear for instances where the unrestricted solution has constant size. The covering variant in which the pieces are star-shaped but allowed to overlap—known as the Art Gallery Problem—was recently shown to be ∃ℝ-complete and is thus likely not in NP [Abrahamsen, Adamaszek, and Miltzow, STOC 2018 & J. ACM 2022]; this is in stark contrast to our result. Arguably the work most closely related to ours is the polynomial-time algorithm to partition a simple polygon into a minimum number of convex pieces by Chazelle and Dobkin [STOC, 1979 & Comp. Geom., 1985].
@InProceedings{STOC24p904, author = {Mikkel Abrahamsen and Joakim Blikstad and André Nusser and Hanwen Zhang}, title = {Minimum Star Partitions of Simple Polygons in Polynomial Time}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {904--910}, doi = {10.1145/3618260.3649756}, year = {2024}, }

Ahmadi, Ali 
STOC '24: "Prize-Collecting Steiner Tree: ..."
Prize-Collecting Steiner Tree: A 1.79 Approximation
Ali Ahmadi, Iman Gholami, MohammadTaghi Hajiaghayi, Peyman Jabbarzade, and Mohammad Mahdavi (University of Maryland, USA) Prize-Collecting Steiner Tree (PCST) is a generalization of the Steiner Tree problem, a fundamental problem in computer science. In the classic Steiner Tree problem, we aim to connect a set of vertices known as terminals using the minimum-weight tree in a given weighted graph. In this generalized version, each vertex has a penalty, and there is flexibility to decide whether to connect each vertex or pay its associated penalty, making the problem more realistic and practical. Both the Steiner Tree problem and its Prize-Collecting version had long-standing 2-approximation algorithms, matching the integrality gap of the natural LP formulations for both. This barrier for both problems has been surpassed, with algorithms achieving approximation factors below 2. While research on the Steiner Tree problem has led to a series of reductions in the approximation ratio below 2, culminating in a ln(4)+ε approximation by Byrka, Grandoni, Rothvoß, and Sanità [STOC'10], the Prize-Collecting version has not seen improvements in the past 15 years since the work of Archer, Bateni, Hajiaghayi, and Karloff [FOCS'09, SIAM J. Comput.'11], which reduced the approximation factor for this problem from 2 to 1.9672. Interestingly, even the Prize-Collecting TSP approximation, which was first improved below 2 in the same paper, has seen several advancements since then (see, e.g., Blauth and Nägele [STOC'23]). In this paper, we reduce the approximation factor for the PCST problem substantially to 1.7994 via a novel iterative approach. @InProceedings{STOC24p1641, author = {Ali Ahmadi and Iman Gholami and MohammadTaghi Hajiaghayi and Peyman Jabbarzade and Mohammad Mahdavi}, title = {Prize-Collecting Steiner Tree: A 1.79 Approximation}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1641--1652}, doi = {10.1145/3618260.3649789}, year = {2024}, }

Akshima 
STOC '24: "Tight Time-Space Tradeoffs ..."
Tight Time-Space Tradeoffs for the Decisional Diffie-Hellman Problem
Akshima, Tyler Besselman, Siyao Guo, Zhiye Xie, and Yuping Ye (NYU Shanghai, China; East China Normal University, China) In the (preprocessing) Decisional Diffie-Hellman (DDH) problem, we are given a cyclic group G with a generator g and a prime order N, and want to prepare some advice of size S, such that we can efficiently distinguish (g^{x},g^{y},g^{xy}) from (g^{x},g^{y},g^{z}) in time T for uniformly and independently chosen x,y,z from [N]. This is a central cryptographic problem whose computational hardness underpins many widely deployed schemes such as the Diffie–Hellman key exchange protocol. We prove that any generic preprocessing DDH algorithm (operating in any cyclic group) achieves advantage at most O(ST^{2}/N). This bound matches the best known attack up to polylog factors, and confirms that DDH is as secure as the (seemingly harder) discrete logarithm problem against preprocessing attacks. Our result resolves an open question by Corrigan-Gibbs and Kogan (EUROCRYPT 2018), which proved optimal bounds for many variants of discrete logarithm problems except DDH (with an O(√(ST^{2}/N)) bound). We obtain our results by adopting and refining the approach of Gravin, Guo, Kwok, and Lu (SODA 2021) and of Yun (EUROCRYPT 2015). Along the way, we significantly simplify and extend the above techniques, which may be of independent interest. The highlights of our techniques are the following: 1. We obtain a simpler reduction from decisional problems against S-bit advice to their S-wise XOR lemmas against zero advice, recovering the reduction by Gravin, Guo, Kwok, and Lu (SODA 2021). 2. We show how to reduce generic hardness of decisional problems to their variants in the simpler hyperplane model proposed by Yun (EUROCRYPT 2015). This is the first work analyzing a decisional problem in Yun's model, answering an open problem proposed by Auerbach, Hoffman, and Pascual-Perez (TCC 2023). 3. We prove an S-wise XOR lemma of DDH in Yun's model.
As a corollary, we obtain the generic hardness of the S-XOR DDH problem. @InProceedings{STOC24p1739, author = {Akshima and Tyler Besselman and Siyao Guo and Zhiye Xie and Yuping Ye}, title = {Tight Time-Space Tradeoffs for the Decisional Diffie-Hellman Problem}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1739--1749}, doi = {10.1145/3618260.3649752}, year = {2024}, }
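The two distributions the DDH distinguisher must tell apart are easy to picture in code. A minimal sketch (the helper name and the toy parameters are ours; nothing here is cryptographically sized):

```python
import random

def sample_ddh(g, q, p, real, rng=random):
    """Sample a triple (g^x, g^y, g^z) in the order-q subgroup of Z_p^*.

    With real=True, z = x*y mod q, giving a genuine DDH triple;
    otherwise z is an independent uniform exponent. Distinguishing the
    two cases is exactly the DDH problem stated in the abstract.
    Toy sketch only: use p = 23, q = 11, g = 4 below for illustration.
    """
    x, y = rng.randrange(q), rng.randrange(q)
    z = (x * y) % q if real else rng.randrange(q)
    return pow(g, x, p), pow(g, y, p), pow(g, z, p)
```

With p = 23, q = 11, g = 4 (g generates the order-11 subgroup of Z_23^*), both distributions live in the same subgroup; the paper's bound says a preprocessing attacker with S bits of advice and T generic group operations distinguishes them with advantage at most O(ST^{2}/N).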

Alrabiah, Omar 
STOC '24: "Randomly Punctured Reed–Solomon ..."
Randomly Punctured Reed–Solomon Codes Achieve List-Decoding Capacity over Linear-Sized Fields
Omar Alrabiah, Venkatesan Guruswami, and Ray Li (University of California at Berkeley, USA; Santa Clara University, USA) Reed–Solomon codes are a classic family of error-correcting codes consisting of evaluations of low-degree polynomials over a finite field on some sequence of distinct field elements. They are widely known for their optimal unique-decoding capabilities, but their list-decoding capabilities are not fully understood. Given the prevalence of Reed–Solomon codes, a fundamental question in coding theory is determining whether Reed–Solomon codes can optimally achieve list-decoding capacity. A recent breakthrough by Brakensiek, Gopi, and Makam established that Reed–Solomon codes are combinatorially list-decodable all the way to capacity. However, their results hold for randomly-punctured Reed–Solomon codes over an exponentially large field size 2^{O(n)}, where n is the block length of the code. A natural question is whether Reed–Solomon codes can still achieve capacity over smaller fields. Recently, Guo and Zhang showed that Reed–Solomon codes are list-decodable to capacity with field size O(n^{2}). We show that Reed–Solomon codes are list-decodable to capacity with linear field size O(n), which is optimal up to the constant factor. We also give evidence that the ratio between the alphabet size q and the code length n cannot be bounded by an absolute constant. Our techniques also show that random linear codes are list-decodable up to (the alphabet-independent) capacity with optimal list size O(1/ε) and near-optimal alphabet size 2^{O(1/ε^{2})}, where ε is the gap to capacity. As far as we are aware, list-decoding up to capacity with optimal list size O(1/ε) was not known to be achievable with any linear code over a constant alphabet size (even non-constructively), and it was also not known to be achievable for random linear codes over any alphabet size. Our proofs are based on the ideas of Guo and Zhang, and we additionally exploit symmetries of reduced intersection matrices.
With our proof, which maintains a hypergraph perspective of the list-decoding problem, we include an alternate presentation of ideas from Brakensiek, Gopi, and Makam that more directly connects the list-decoding problem to the GM-MDS theorem via a hypergraph orientation theorem. @InProceedings{STOC24p1458, author = {Omar Alrabiah and Venkatesan Guruswami and Ray Li}, title = {Randomly Punctured Reed–Solomon Codes Achieve List-Decoding Capacity over Linear-Sized Fields}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1458--1469}, doi = {10.1145/3618260.3649634}, year = {2024}, }
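As a concrete reminder of the objects involved (a minimal prime-field sketch; the function name is ours), a Reed–Solomon codeword is just the evaluation table of a low-degree polynomial, and puncturing keeps a subset of the evaluation points:

```python
def rs_encode(message, points, p):
    """Reed-Solomon encoding over GF(p), p prime.

    `message` holds the coefficients of a polynomial of degree < k;
    the codeword is its evaluation at the distinct field elements
    `points`. A randomly punctured code, as in the abstract, keeps a
    random subset of the evaluation points.
    """
    return [sum(c * pow(a, i, p) for i, c in enumerate(message)) % p
            for a in points]
```

Since two distinct polynomials of degree < k agree on fewer than k points, codewords of an [n, k] Reed–Solomon code differ in at least n − k + 1 positions, which is the source of the optimal unique-decoding radius the abstract mentions.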

Amir, Daniel 
STOC '24: "Breaking the VLB Barrier for ..."
Breaking the VLB Barrier for Oblivious Reconfigurable Networks
Tegan Wilson, Daniel Amir, Nitika Saran, Robert Kleinberg, Vishal Shrivastav, and Hakim Weatherspoon (Cornell University, USA; Purdue University, USA) In a landmark 1981 paper, Valiant and Brebner gave birth to the study of oblivious routing and, simultaneously, introduced its most powerful and ubiquitous method: Valiant load balancing (VLB). By routing messages through a randomly sampled intermediate node, VLB lengthens routing paths by a factor of two but gains the crucial property of obliviousness: it balances load in a completely decentralized manner, with no global knowledge of the communication pattern. Forty years later, with datacenters handling workloads whose communication pattern varies too rapidly to allow centralized coordination, oblivious routing is as relevant as ever, and VLB continues to take center stage as a widely used — and in some settings, provably optimal — way to balance load in the network obliviously to the traffic demands. However, the ability of the network to rapidly reconfigure its interconnection topology gives rise to new possibilities. In this work we revisit the question of whether VLB remains optimal in the novel setting of reconfigurable networks. Prior work showed that VLB achieves the optimal tradeoff between latency and guaranteed throughput. In this work we show that a strictly superior latency-throughput tradeoff is achievable when the throughput bound is relaxed to hold with high probability. The same improved tradeoff is also achievable with guaranteed throughput under time-stationary demands, provided the latency bound is relaxed to hold with high probability and the network is allowed to be semi-oblivious, using an oblivious (randomized) connection schedule but demand-aware routing. We prove that the latter result is not achievable by any fully-oblivious reconfigurable network design, marking a rare case in which semi-oblivious routing has a provable asymptotic advantage over oblivious routing.
Our results are enabled by a novel oblivious routing scheme that improves on VLB by stretching routing paths the minimum possible amount — an additive stretch of 1 rather than a multiplicative stretch of 2 — yet still manages to balance load with high probability when either the traffic demand matrix or the network's interconnection schedule is shuffled by a uniformly random permutation. To analyze our routing scheme we prove an exponential tail bound which may be of independent interest, concerning the distribution of values of a bilinear form on an orbit of a permutation group action. @InProceedings{STOC24p1865, author = {Tegan Wilson and Daniel Amir and Nitika Saran and Robert Kleinberg and Vishal Shrivastav and Hakim Weatherspoon}, title = {Breaking the VLB Barrier for Oblivious Reconfigurable Networks}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1865--1876}, doi = {10.1145/3618260.3649608}, year = {2024}, }
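Classic VLB, the baseline the paper improves on, fits in a few lines. A toy simulation (our own sketch, not the paper's reconfigurable-network scheme): each message is routed src -> mid -> dst through a uniformly random intermediate node, so intermediate load is balanced for any demand pattern:

```python
import random

def vlb_transit_loads(n, demands, rng=random):
    """Return per-node transit load under Valiant load balancing.

    demands: list of (src, dst) pairs. Each message picks a uniformly
    random intermediate node `mid` and travels src -> mid -> dst. The
    choice of mid ignores the demand pattern entirely, which is the
    obliviousness property, bought at the factor-2 path stretch the
    abstract refers to (the paper reduces this to additive stretch 1).
    """
    load = [0] * n
    for _src, _dst in demands:
        load[rng.randrange(n)] += 1  # transit hop lands on a random node
    return load
```

Even under a worst-case demand pattern (e.g., all messages between one pair of nodes), the transit load spreads near-uniformly over the n nodes, which is why VLB guarantees throughput obliviously.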

Amireddy, Prashanth 
STOC '24: "Local Correction of Linear ..."
Local Correction of Linear Functions over the Boolean Cube
Prashanth Amireddy, Amik Raj Behera, Manaswi Paraashar, Srikanth Srinivasan, and Madhu Sudan (Harvard University, USA; Aarhus University, Aarhus, Denmark; University of Copenhagen, Copenhagen, Denmark) We consider the task of locally correcting, and locally list-correcting, multivariate linear functions over the domain {0,1}^{n} over arbitrary fields and more generally Abelian groups. Such functions form error-correcting codes of relative distance 1/2, and we give local-correction algorithms correcting up to nearly 1/4-fraction errors making O(log n) queries. This query complexity is optimal up to poly(log log n) factors. We also give local list-correcting algorithms correcting (1/2 − ε)-fraction errors with O_{ε}(log n) queries. These results may be viewed as natural generalizations of the classical work of Goldreich and Levin, which addresses the special case where the underlying group is ℤ_{2}. By extending to the case where the underlying group is, say, the reals, we give the first nontrivial locally correctable codes (LCCs) over the reals (with query complexity sublinear in the dimension, also known as the message length). Previous works in the area mostly focused on the case where the domain is a vector space or a group, which lends itself to tools that exploit symmetry. Since our domains lack such symmetries, we encounter new challenges whose resolution may be of independent interest. The central challenge in constructing the local corrector is constructing "nearly balanced vectors" over {−1,1}^{n} that span 1^{n} — we show how to construct O(log n) vectors that do so, with entries in each vector summing to ±1. The challenge for the local list-correction algorithms, given the local corrector, is principally combinatorial, i.e., in proving that the number of linear functions within any Hamming ball of radius (1/2−ε) is O_{ε}(1).
Getting this general result covering every Abelian group requires integrating a variety of known methods with some new combinatorial ingredients analyzing the structural properties of codewords that lie within small Hamming balls. @InProceedings{STOC24p764, author = {Prashanth Amireddy and Amik Raj Behera and Manaswi Paraashar and Srikanth Srinivasan and Madhu Sudan}, title = {Local Correction of Linear Functions over the Boolean Cube}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {764--775}, doi = {10.1145/3618260.3649746}, year = {2024}, }

Anand, Aditya 
STOC '24: "Approximating Small Sparse ..."
Approximating Small Sparse Cuts
Aditya Anand, Euiwoong Lee, Jason Li, and Thatchaphol Saranurak (University of Michigan, USA; Carnegie Mellon University, USA) We study polynomial-time approximation algorithms for edge and vertex Sparsest Cut and Small Set Expansion in terms of k, the number of edges or vertices cut in the optimal solution. Our main results are O(polylog k)-approximation algorithms for various versions in this setting. Our techniques involve an extension of the notion of sample sets (Feige and Mahdian, STOC'06), originally developed for small balanced cuts, to sparse cuts in general. We then show how to combine this notion of sample sets with two algorithms, one based on an existing framework of LP rounding and another new algorithm based on the cut-matching game, to get such approximation algorithms. Our cut-matching game algorithm can be viewed as a local version of the cut-matching game by Khandekar, Khot, Orecchia, and Vishnoi, and certifies an expansion of every vertex set of size s in O(log s) rounds. These techniques may be of independent interest. As corollaries of our results, we also obtain an O(log opt) approximation for min-max graph partitioning, where opt is the min-max value of the optimal cut, and improve the bound on the size of multicut mimicking networks computable in polynomial time. @InProceedings{STOC24p319, author = {Aditya Anand and Euiwoong Lee and Jason Li and Thatchaphol Saranurak}, title = {Approximating Small Sparse Cuts}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {319--330}, doi = {10.1145/3618260.3649747}, year = {2024}, }

Anari, Nima 
STOC '24: "Trickle-Down in Localization ..."
Trickle-Down in Localization Schemes and Applications
Nima Anari, Frederic Koehler, and Thuy-Duong Vuong (Stanford University, USA; University of Chicago, USA) Trickle-down is a phenomenon in high-dimensional expanders with many important applications — for example, it is a key ingredient in various constructions of high-dimensional expanders, in the proof of rapid mixing for the basis exchange walk on matroids, and in the analysis of log-concave polynomials. We formulate a generalized trickle-down equation in the abstract context of linear-tilt localization schemes. Building on this generalization, we improve the best-known results for several Markov chain mixing or sampling problems — for example, we improve the threshold up to which Glauber dynamics is known to mix rapidly in the Sherrington–Kirkpatrick spin glass model. Other applications of our framework include near-linear time sampling algorithms for the antiferromagnetic Ising model and the fixed-magnetization (antiferromagnetic or ferromagnetic) Ising model on expanders. For this application, we use a new dynamics inspired by polarization, a technique from the theory of stable polynomials. @InProceedings{STOC24p1094, author = {Nima Anari and Frederic Koehler and Thuy-Duong Vuong}, title = {Trickle-Down in Localization Schemes and Applications}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1094--1105}, doi = {10.1145/3618260.3649622}, year = {2024}, }
STOC '24: "Parallel Sampling via Counting ..."
Parallel Sampling via Counting
Nima Anari, Ruiquan Gao, and Aviad Rubinstein (Stanford University, USA) We show how to use parallelization to speed up sampling from an arbitrary distribution µ on a product space [q]^{n}, given oracle access to counting queries: ℙ_{X∼µ}[X_{S}=σ_{S}] for any S⊆[n] and σ_{S} ∈ [q]^{S}. Our algorithm takes O(n^{2/3}·polylog(n,q)) parallel time; to the best of our knowledge, this is the first runtime sublinear in n for arbitrary distributions. Our results have implications for sampling in autoregressive models.
Our algorithm directly works with an equivalent oracle that answers conditional marginal queries ℙ_{X∼µ}[X_{i}=σ_{i} | X_{S}=σ_{S}], whose role is played by a trained neural network in autoregressive models. This suggests a roughly n^{1/3}-factor speedup is possible for sampling in any-order autoregressive models. We complement our positive result by showing a lower bound of Ω(n^{1/3}) for the runtime of any parallel sampling algorithm making at most poly(n) queries to the counting oracle, even for q=2. @InProceedings{STOC24p537, author = {Nima Anari and Ruiquan Gao and Aviad Rubinstein}, title = {Parallel Sampling via Counting}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {537--548}, doi = {10.1145/3618260.3649744}, year = {2024}, }
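The oracle interface is easiest to picture via the sequential baseline the paper speeds up: sample the coordinates one at a time from the conditional marginals (our own sketch; in an autoregressive model the role of cond_marginal is played by the trained network):

```python
import random

def sample_autoregressive(n, q, cond_marginal, rng=random):
    """Sequentially sample x in [q]^n from conditional marginals.

    cond_marginal(i, prefix) must return a length-q probability vector
    for P[X_i = s | X_0..X_{i-1} = prefix]. This n-round baseline is
    what the paper's counting-oracle algorithm parallelizes down to
    roughly n^{2/3} rounds. Toy sketch, not the paper's algorithm.
    """
    x = []
    for i in range(n):
        probs = cond_marginal(i, tuple(x))
        r, acc = rng.random(), 0.0
        for s, ps in enumerate(probs):  # inverse-CDF draw from probs
            acc += ps
            if r <= acc:
                x.append(s)
                break
        else:
            x.append(q - 1)  # guard against float rounding
    return x
```

Each conditional marginal is expressible through two counting queries (prefix with and without X_i fixed), which is why the counting oracle of the abstract suffices.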

Anshu, Anurag 
STOC '24: "Learning Shallow Quantum Circuits ..."
Learning Shallow Quantum Circuits
Hsin-Yuan Huang, Yunchao Liu, Michael Broughton, Isaac Kim, Anurag Anshu, Zeph Landau, and Jarrod R. McClean (California Institute of Technology, USA; Google Quantum AI, USA; University of California at Berkeley, USA; University of California at Davis, USA; Harvard University, USA) Despite fundamental interest in learning quantum circuits, the existence of a computationally efficient algorithm for learning shallow quantum circuits remains an open question. Because shallow quantum circuits can generate distributions that are classically hard to sample from, existing learning algorithms do not apply. In this work, we present a polynomial-time classical algorithm for learning the description of any unknown n-qubit shallow quantum circuit U (with arbitrary unknown architecture) within a small diamond distance using single-qubit measurement data on the output states of U. We also provide a polynomial-time classical algorithm for learning the description of any unknown n-qubit state |ψ⟩ = U|0^{n}⟩ prepared by a shallow quantum circuit U (on a 2D lattice) within a small trace distance using single-qubit measurements on copies of |ψ⟩. Our approach uses a quantum circuit representation based on local inversions and a technique to combine these inversions. This circuit representation yields an optimization landscape that can be efficiently navigated and enables efficient learning of quantum circuits that are classically hard to simulate. @InProceedings{STOC24p1343, author = {Hsin-Yuan Huang and Yunchao Liu and Michael Broughton and Isaac Kim and Anurag Anshu and Zeph Landau and Jarrod R. McClean}, title = {Learning Shallow Quantum Circuits}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1343--1351}, doi = {10.1145/3618260.3649722}, year = {2024}, }
STOC '24: "Circuit-to-Hamiltonian from ..."
Circuit-to-Hamiltonian from Tensor Networks and Fault Tolerance
Anurag Anshu, Nikolas P. Breuckmann, and Quynh T. Nguyen (Harvard University, USA; University of Bristol, United Kingdom) We define a map from an arbitrary quantum circuit to a local Hamiltonian whose ground state encodes the quantum computation. All previous maps relied on the Feynman-Kitaev construction, which introduces an ancillary "clock register" to track the computational steps. Our construction, on the other hand, relies on injective tensor networks with associated parent Hamiltonians, avoiding the introduction of a clock register. This comes at the cost of the ground state containing only a noisy version of the quantum computation, with independent stochastic noise. We can remedy this, making our construction robust, by using quantum fault tolerance. In addition to the stochastic noise, we show that any state with energy density exponentially small in the circuit depth encodes a noisy version of the quantum computation with adversarial noise. We also show that any "combinatorial state" with energy density polynomially small in depth encodes the quantum computation with adversarial noise. This serves as evidence that any state with energy density polynomially small in depth has a similar property. As an application, we show that contracting injective tensor networks to additive error is BQP-hard. We also discuss the implication of our construction for the quantum PCP conjecture, combined with an observation that QMA verification can be done in logarithmic depth. @InProceedings{STOC24p585, author = {Anurag Anshu and Nikolas P. Breuckmann and Quynh T. Nguyen}, title = {Circuit-to-Hamiltonian from Tensor Networks and Fault Tolerance}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {585--595}, doi = {10.1145/3618260.3649690}, year = {2024}, }

Arad, Itai 
STOC '24: "An Area Law for the Maximally-Mixed ..."
An Area Law for the Maximally-Mixed Ground State in Arbitrarily Degenerate Systems with Good AGSP
Itai Arad, Raz Firanko, and Rahul Jain (Centre for Quantum Technologies, Singapore; Technion, Israel; National University of Singapore, Singapore) We show an area law in the mutual information for the maximally-mixed state Ω in the ground space of general Hamiltonians, which is independent of the underlying ground space degeneracy. Our result assumes the existence of a ‘good’ approximation to the ground state projector (a good AGSP), a crucial ingredient in former area-law proofs. Such approximations have been explicitly derived for 1D gapped local Hamiltonians and 2D frustration-free locally-gapped local Hamiltonians. As a corollary, we show that in 1D gapped local Hamiltonians, for any ε>0 and any bipartition L∪L^{c} of the system, I_{max}^{ε}(L:L^{c})_{Ω} ≤ O(log|L| + log(1/ε)), where |L| represents the number of sites in L and I_{max}^{ε}(L:L^{c})_{Ω} represents the ε-smoothed maximum mutual information with respect to the L:L^{c} partition in Ω. From this bound we then conclude I(L:L^{c})_{Ω} ≤ O(log|L|) – an area law for the mutual information in 1D systems with a logarithmic correction. In addition, we show that Ω can be approximated up to ε in trace norm by a state of Schmidt rank at most poly(|L|/ε). Similar corollaries are derived for the mutual information of 2D frustration-free and locally-gapped local Hamiltonians. @InProceedings{STOC24p1311, author = {Itai Arad and Raz Firanko and Rahul Jain}, title = {An Area Law for the Maximally-Mixed Ground State in Arbitrarily Degenerate Systems with Good AGSP}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1311--1322}, doi = {10.1145/3618260.3649612}, year = {2024}, }

Arvind, V. 
STOC '24: "Black-Box Identity Testing ..."
Black-Box Identity Testing of Noncommutative Rational Formulas in Deterministic Quasipolynomial Time
V. Arvind, Abhranil Chatterjee, and Partha Mukhopadhyay (Institute of Mathematical Sciences, India; Chennai Mathematical Institute, India; Indian Statistical Institute, Kolkata, India) Rational Identity Testing (RIT) is the decision problem of determining whether or not a noncommutative rational formula computes zero in the free skew field. It admits a deterministic polynomial-time white-box algorithm [Garg, Gurvits, Oliveira, and Wigderson (2016); Ivanyos, Qiao, Subrahmanyam (2018); Hamada and Hirai (2021)], and a randomized polynomial-time algorithm [Derksen and Makam (2017)] in the black-box setting, via singularity testing of linear matrices over the free skew field. Indeed, a randomized NC algorithm for RIT in the white-box setting follows from the result of Derksen and Makam (2017). Designing an efficient deterministic black-box algorithm for RIT and understanding the parallel complexity of RIT are major open problems in this area. Despite being open since the work of Garg, Gurvits, Oliveira, and Wigderson (2016), these questions have seen limited progress. In fact, the only known result in this direction is the construction of a quasipolynomial-size hitting set for rational formulas of only inversion height two [Arvind, Chatterjee, and Mukhopadhyay (2022)]. In this paper, we significantly improve the black-box complexity of this problem and obtain the first quasipolynomial-size hitting set for all rational formulas of polynomial size. Our construction also yields the first deterministic quasi-NC upper bound for RIT in the white-box setting. @InProceedings{STOC24p106, author = {V. Arvind and Abhranil Chatterjee and Partha Mukhopadhyay}, title = {Black-Box Identity Testing of Noncommutative Rational Formulas in Deterministic Quasipolynomial Time}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {106--117}, doi = {10.1145/3618260.3649693}, year = {2024}, }

Assadi, Sepehr 
STOC '24: "O(log log n) Passes Is Optimal ..."
O(log log n) Passes Is Optimal for Semi-streaming Maximal Independent Set
Sepehr Assadi , Christian Konrad , Kheeran K. Naidu , and Janani Sundaresan (University of Waterloo, Canada; Rutgers University, USA; University of Bristol, United Kingdom) In the semistreaming model for processing massive graphs, an algorithm makes multiple passes over the edges of a given nvertex graph and is tasked with computing the solution to a problem using O(n · log(n)) space. Semistreaming algorithms for Maximal Independent Set (MIS) that run in O(loglogn) passes have been known for almost a decade, however, the best lower bounds can only rule out singlepass algorithms. We close this large gap by proving that the current algorithms are optimal: Any semistreaming algorithm for finding an MIS with constant probability of success requires Ω(loglogn) passes. This settles the complexity of this fundamental problem in the semistreaming model, and constitutes one of the first optimal multipass lower bounds in this model. We establish our result by proving an optimal round vs communication tradeoff for the (multiparty) communication complexity of MIS. The key ingredient of this result is a new technique, called hierarchical embedding, for performing round elimination: we show how to pack many but small hard (r−1)round instances of the problem into a single rround instance, in a way that enforces any rround protocol to effectively solve all these (r−1)round instances also. These embeddings are obtained via a novel application of results from extremal graph theory—in particular dense graphs with many disjoint unique shortest paths—together with a newly designed graph product, and are analyzed via informationtheoretic tools such as directsum and message compression arguments. @InProceedings{STOC24p847, author = {Sepehr Assadi and Christian Konrad and Kheeran K. 
Naidu and Janani Sundaresan}, title = {O(log log n) Passes Is Optimal for Semistreaming Maximal Independent Set}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {847858}, doi = {10.1145/3618260.3649763}, year = {2024}, } Publisher's Version STOC '24: "Optimal Multipass Lower Bounds ..." Optimal Multipass Lower Bounds for MST in Dynamic Streams Sepehr Assadi , Gillat Kol , and Zhijun Zhang (University of Waterloo, Canada; Rutgers University, USA; Princeton University, USA) The seminal work of Ahn, Guha, and McGregor in 2012 introduced the graph sketching technique and used it to present the first streaming algorithms for various graph problems over dynamic streams with both insertions and deletions of edges. This includes algorithms for cut sparsification, spanners, matchings, and minimum spanning trees (MSTs). These results have since been improved or generalized in various directions, leading to a vastly rich host of efficient algorithms for processing dynamic graph streams. A curious omission from the list of improvements has been the MST problem. The best algorithm for this problem remains the original AGM algorithm that for every integer p ≥ 1, uses n^{1+O(1/p)} space in p passes on nvertex graphs, and thus achieves the desired semistreaming space of Õ(n) at a relatively high cost of O(logn/loglogn) passes. On the other hand, no lower bound beyond a folklore onepass lower bound is known for this problem. We provide a simple explanation for this lack of improvements: The AGM algorithm for MSTs is optimal for the entire range of its number of passes! We prove that even for the simplest decision version of the problem — deciding whether the weight of MSTs is at least a given threshold or not — any ppass dynamic streaming algorithm requires n^{1+Ω(1/p)} space. This implies that semistreaming algorithms do need Ω(logn/loglogn) passes. 
Our result relies on proving new multi-round communication complexity lower bounds for a variant of the universal relation problem that has been instrumental in proving prior lower bounds for single-pass dynamic streaming algorithms. The proof also involves proving new composition theorems in communication complexity, including majority lemmas and multiparty XOR lemmas, via information complexity approaches. @InProceedings{STOC24p835, author = {Sepehr Assadi and Gillat Kol and Zhijun Zhang}, title = {Optimal Multi-pass Lower Bounds for MST in Dynamic Streams}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {835--846}, doi = {10.1145/3618260.3649755}, year = {2024}, } Publisher's Version

Babichenko, Yakov 
STOC '24: "Fair Division via Quantile ..."
Fair Division via Quantile Shares
Yakov Babichenko, Michal Feldman, Ron Holzman, and Vishnu V. Narayan (Technion, Israel; Tel Aviv University, Israel) We consider the problem of fair division, where a set of indivisible goods should be distributed fairly among a set of agents with combinatorial valuations. To capture fairness, we adopt the notion of shares, where each agent is entitled to a fair share, based on some fairness criterion, and an allocation is considered fair if the value of every agent (weakly) exceeds her fair share. A share-based notion is considered universally feasible if it admits a fair allocation for every profile of monotone valuations. A major question arises: is there a non-trivial share-based notion that is universally feasible? The most well-known share-based notions, namely the proportional share and the maximin share, are not universally feasible, nor are any constant approximations of them. We propose a novel share notion, where an agent assesses the fairness of a bundle by comparing it to her valuation in a random allocation. In this framework, a bundle is considered q-quantile fair, for q ∈ [0,1], if it is at least as good as a bundle obtained in a uniformly random allocation with probability at least q. Our main question is whether there exists a constant value of q for which the q-quantile share is universally feasible. Our main result establishes a strong connection between the feasibility of quantile shares and the classical Erdős Matching Conjecture. Specifically, we show that if a version of this conjecture is true, then the (1/2e)-quantile share is universally feasible. Furthermore, we provide unconditional feasibility results for additive, unit-demand and matroid-rank valuations for constant values of q. Finally, we discuss the implications of our results for other share notions. @InProceedings{STOC24p1235, author = {Yakov Babichenko and Michal Feldman and Ron Holzman and Vishnu V. Narayan}, title = {Fair Division via Quantile Shares}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1235--1246}, doi = {10.1145/3618260.3649728}, year = {2024}, } Publisher's Version
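The q-quantile criterion above can be checked empirically for additive valuations. A minimal Monte Carlo sketch — the function names, the example numbers, and the sampling setup are illustrative, not from the paper:

```python
import random

def bundle_value(valuation, bundle):
    # Additive valuation: the value of a bundle is the sum of per-good values.
    return sum(valuation[g] for g in bundle)

def quantile_fairness(valuation, bundle, n_agents, n_goods, trials=10000, rng=None):
    """Estimate Pr[`bundle` is at least as good as the bundle the agent
    receives in a uniformly random allocation], for an additive valuation.
    The bundle is q-quantile fair for any q below this probability."""
    rng = rng or random.Random(0)
    v = bundle_value(valuation, bundle)
    hits = 0
    for _ in range(trials):
        # Uniformly random allocation: each good goes to a uniformly random
        # agent; w.l.o.g. we track agent 0.
        random_bundle = [g for g in range(n_goods) if rng.randrange(n_agents) == 0]
        if v >= bundle_value(random_bundle and valuation or valuation, random_bundle):
            hits += 1
    return hits / trials

# Agent 0 values goods as [3, 1, 1, 1]; with 2 agents, receiving just the
# big good is q-quantile fair for q up to roughly 0.56.
q = quantile_fairness([3, 1, 1, 1], [0], n_agents=2, n_goods=4)
```

The exact probability here is 9/16 = 0.5625: the random bundle beats value 3 only if it contains the big good plus at least one other good.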

Bafna, Mitali 
STOC '24: "Characterizing Direct Product ..."
Characterizing Direct Product Testing via Coboundary Expansion
Mitali Bafna and Dor Minzer (Massachusetts Institute of Technology, USA) A d-dimensional simplicial complex X is said to support a direct product tester if any locally consistent function defined on its k-faces (where k ≪ d) necessarily comes from a function over its vertices. More precisely, a direct product tester has a distribution µ over pairs of k-faces (A, A′), and given query access to F: X(k) → {0,1}^{k} it samples (A, A′) ∼ µ and checks that F[A]_{A∩A′} = F[A′]_{A∩A′}. The tester should have (1) the "completeness property", meaning that any assignment F which is a direct product assignment passes the test with probability 1, and (2) the "soundness property", meaning that if F passes the test with probability s, then F must be correlated with a direct product function. Dinur and Kaufman showed that a sufficiently good spectrally expanding complex X admits a direct product tester in the "high soundness" regime where s is close to 1. They asked whether there are high-dimensional expanders that support direct product tests in the "low soundness" regime, when s is close to 0. We give a characterization of high-dimensional expanders that support a direct product tester in the low soundness regime. We show that spectral expansion is insufficient, and the complex must additionally satisfy a variant of coboundary expansion, which we refer to as "Unique-Games coboundary expansion". Conversely, we show that this property is also sufficient to get direct product testers. This property can be seen as a high-dimensional generalization of the standard notion of coboundary expansion over non-Abelian groups for 2-dimensional complexes. It asserts that any locally consistent Unique-Games instance obtained using the low-level faces of the complex must admit a good global solution.
@InProceedings{STOC24p1978, author = {Mitali Bafna and Dor Minzer}, title = {Characterizing Direct Product Testing via Coboundary Expansion}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1978--1989}, doi = {10.1145/3618260.3649714}, year = {2024}, } Publisher's Version

Bakshi, Ainesh 
STOC '24: "Learning Quantum Hamiltonians ..."
Learning Quantum Hamiltonians at Any Temperature in Polynomial Time
Ainesh Bakshi, Allen Liu, Ankur Moitra, and Ewin Tang (Massachusetts Institute of Technology, USA; University of California at Berkeley, USA) We study the problem of learning a local quantum Hamiltonian H given copies of its Gibbs state ρ = e^{−βH}/tr(e^{−βH}) at a known inverse temperature β > 0. Anshu, Arunachalam, Kuwahara, and Soleimanifar gave an algorithm to learn a Hamiltonian on n qubits to precision ε with only polynomially many copies of the Gibbs state, but which takes exponential time. Obtaining a computationally efficient algorithm has been a major open problem, with prior work only resolving this in the limited cases of high temperature or commuting terms. We fully resolve this problem, giving a polynomial-time algorithm for learning H to precision ε from polynomially many copies of the Gibbs state at any constant β > 0. Our main technical contribution is a new flat polynomial approximation to the exponential function, and a translation between multivariate scalar polynomials and nested commutators. This enables us to formulate Hamiltonian learning as a polynomial system. We then show that solving a low-degree sum-of-squares relaxation of this polynomial system suffices to accurately learn the Hamiltonian. @InProceedings{STOC24p1470, author = {Ainesh Bakshi and Allen Liu and Ankur Moitra and Ewin Tang}, title = {Learning Quantum Hamiltonians at Any Temperature in Polynomial Time}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1470--1477}, doi = {10.1145/3618260.3649619}, year = {2024}, } Publisher's Version

Bangachev, Kiril 
STOC '24: "On the Fourier Coefficients ..."
On the Fourier Coefficients of High-Dimensional Random Geometric Graphs
Kiril Bangachev and Guy Bresler (Massachusetts Institute of Technology, USA) The random geometric graph RGG(n,S^{d−1},p) is formed by sampling n i.i.d. vectors {V_{i}}_{i=1}^{n} uniformly on S^{d−1} and placing an edge between pairs of vertices i and j for which ⟨V_{i},V_{j}⟩ ≥ τ_{d}^{p}, where τ_{d}^{p} is such that the expected density is p. We study the low-degree Fourier coefficients of the distribution RGG(n,S^{d−1},p) and its Gaussian analogue. Our main conceptual contribution is a novel two-step strategy for bounding Fourier coefficients which we believe is more widely applicable to studying latent space distributions. First, we localize the dependence among edges to few fragile edges. Second, we partition the space of latent vector configurations (S^{d−1})^{⊗n} based on the set of fragile edges and, on each subset of configurations, we define a noise operator acting independently on edges not incident (in an appropriate sense) to fragile edges. We apply the resulting bounds to: 1) settle the low-degree polynomial complexity of distinguishing spherical and Gaussian random geometric graphs from Erdős–Rényi, both in the case of observing a complete set of edges and in the non-adaptively chosen mask M model recently introduced by Mardia, Verchand, and Wein; 2) exhibit a statistical-computational gap for distinguishing RGG and a planted coloring model in a regime when RGG is distinguishable from Erdős–Rényi; 3) reprove known bounds on the second eigenvalue of random geometric graphs. @InProceedings{STOC24p549, author = {Kiril Bangachev and Guy Bresler}, title = {On the Fourier Coefficients of High-Dimensional Random Geometric Graphs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {549--560}, doi = {10.1145/3618260.3649676}, year = {2024}, } Publisher's Version
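The sampling model RGG(n, S^{d−1}, p) itself is simple to simulate. A minimal sketch — the empirical quantile stands in for the exact threshold τ_d^p, and all function names are mine:

```python
import math
import random

def sample_rgg(n, d, p, rng=None):
    """Sample a random geometric graph on the sphere S^{d-1}:
    n i.i.d. uniform unit vectors, with an edge {i, j} whenever
    <V_i, V_j> >= tau. Here tau is the empirical (1-p)-quantile of
    the pairwise inner products, so the realized density is ~p
    (a stand-in for the exact threshold tau_d^p)."""
    rng = rng or random.Random(42)

    def unit_vector():
        # A normalized Gaussian vector is uniform on the sphere.
        v = [rng.gauss(0, 1) for _ in range(d)]
        norm = math.sqrt(sum(x * x for x in v))
        return [x / norm for x in v]

    vs = [unit_vector() for _ in range(n)]
    dots = sorted(
        sum(a * b for a, b in zip(vs[i], vs[j]))
        for i in range(n) for j in range(i + 1, n)
    )
    tau = dots[int((1 - p) * len(dots))]  # empirical (1-p)-quantile
    return {(i, j) for i in range(n) for j in range(i + 1, n)
            if sum(a * b for a, b in zip(vs[i], vs[j])) >= tau}

edges = sample_rgg(n=50, d=10, p=0.1)
```

By construction the edge density lands close to p; the paper's analysis concerns the full distribution over such graphs, not a single sample.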

Barooti, Khashayar 
STOC '24: "Nonlocality under Computational ..."
Nonlocality under Computational Assumptions
Grzegorz Gluch, Khashayar Barooti, Alexandru Gheorghiu, and Marc-Olivier Renou (EPFL, Lausanne, Switzerland; Aztec Labs, United Kingdom; Chalmers University of Technology, Sweden; Inria - Université Paris-Saclay - CPHT - École Polytechnique - Institut Polytechnique de Paris, France) Nonlocality and its connections to entanglement are fundamental features of quantum mechanics that have found numerous applications in quantum information science. A set of correlations is said to be nonlocal if it cannot be reproduced by spacelike-separated parties sharing randomness and performing local operations. An important practical consideration is that the runtime of the parties has to be shorter than the time it takes light to travel between them. One way to model this restriction is to assume that the parties are computationally bounded. We therefore initiate the study of nonlocality under computational assumptions and derive the following results: (a) We define the set NEL (not-efficiently-local) as consisting of all bipartite states whose correlations arising from local measurements cannot be reproduced with shared randomness and polynomial-time local operations. (b) Under the assumption that the Learning With Errors problem cannot be solved in quantum polynomial time, we show that NEL = ENT, where ENT is the set of all bipartite entangled states (pure and mixed). This is in contrast to the standard notion of nonlocality, where it is known that some entangled states, e.g. Werner states, are local. In essence, we show that there exist (efficient) local measurements producing correlations that cannot be reproduced through shared randomness and quantum polynomial-time computation. (c) We prove that if NEL = ENT unconditionally, then BQP ≠ PP. In other words, the ability to certify all bipartite entangled states against computationally bounded adversaries gives a non-trivial separation of complexity classes.
(d) Using (c), we show that a certain natural class of 1-round delegated quantum computation protocols that are sound against PP provers cannot exist. @InProceedings{STOC24p1018, author = {Grzegorz Gluch and Khashayar Barooti and Alexandru Gheorghiu and Marc-Olivier Renou}, title = {Nonlocality under Computational Assumptions}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1018--1026}, doi = {10.1145/3618260.3649750}, year = {2024}, } Publisher's Version

Bartusek, James 
STOC '24: "Quantum State Obfuscation ..."
Quantum State Obfuscation from Classical Oracles
James Bartusek, Zvika Brakerski, and Vinod Vaikuntanathan (University of California at Berkeley, USA; Weizmann Institute of Science, Israel; Massachusetts Institute of Technology, USA) A major unresolved question in quantum cryptography is whether it is possible to obfuscate arbitrary quantum computation. Indeed, there is much yet to understand about the feasibility of quantum obfuscation even in the classical oracle model, where one is given for free the ability to obfuscate any classical circuit. In this work, we develop a new array of techniques that we use to construct a quantum state obfuscator, a powerful notion formalized recently by Coladangelo and Gunn (arXiv:2311.07794) in their pursuit of better software copy-protection schemes. Quantum state obfuscation refers to the task of compiling a quantum program, consisting of a quantum circuit C with a classical description and an auxiliary quantum state ψ, into a functionally-equivalent obfuscated quantum program that hides as much as possible about C and ψ. We prove the security of our obfuscator when applied to any pseudo-deterministic quantum program, i.e., one that computes a (nearly) deterministic classical-input/classical-output functionality. Our security proof is with respect to an efficient classical oracle, which may be heuristically instantiated using quantum-secure indistinguishability obfuscation for classical circuits. Our result improves upon the recent work of Bartusek, Kitagawa, Nishimaki and Yamakawa (STOC 2023), who also showed how to obfuscate pseudo-deterministic quantum circuits in the classical oracle model, but only ones with a completely classical description. Furthermore, our result answers a question of Coladangelo and Gunn, who provide a construction of quantum state indistinguishability obfuscation with respect to a quantum oracle, but leave the existence of a concrete real-world candidate as an open problem.
Indeed, our quantum state obfuscator together with Coladangelo–Gunn gives the first candidate realization of a “best-possible” copy-protection scheme for all polynomial-time functionalities. Our techniques deviate significantly from previous works on quantum obfuscation. We develop several novel technical tools which we expect to be broadly useful in quantum cryptography. These tools include a publicly-verifiable, linearly-homomorphic quantum authentication scheme with classically-decodable ZX measurements (which we build from coset states), and a method for compiling any quantum circuit into a “linear + measurement” quantum program: an alternating sequence of CNOT operations and partial ZX measurements. @InProceedings{STOC24p1009, author = {James Bartusek and Zvika Brakerski and Vinod Vaikuntanathan}, title = {Quantum State Obfuscation from Classical Oracles}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1009--1017}, doi = {10.1145/3618260.3649673}, year = {2024}, } Publisher's Version

Beame, Paul 
STOC '24: "Quantum Time-Space Tradeoffs ..."
Quantum Time-Space Tradeoffs for Matrix Problems
Paul Beame, Niels Kornerup, and Michael Whitmeyer (University of Washington, USA; University of Texas at Austin, USA) We prove lower bounds on the time and space required for quantum computers to solve a wide variety of problems involving matrices, many of which have only been analyzed classically in prior work. Using a novel way of applying recording query methods, we show that for many linear algebra problems—including matrix-vector product, matrix inversion, matrix multiplication and powering—existing classical time-space tradeoffs also apply to quantum algorithms with at most a constant factor loss. For example, for almost all fixed matrices A, including the discrete Fourier transform (DFT) matrix, we prove that quantum circuits with at most T input queries and S qubits of memory require T = Ω(n^{2}/S) to compute the matrix-vector product Ax for x ∈ {0,1}^{n}. We similarly prove that matrix multiplication for n×n binary matrices requires T = Ω(n^{3}/√S). Because many of our lower bounds are matched by deterministic algorithms with the same time and space complexity, our results show that quantum computers cannot provide any asymptotic advantage for these problems at any space bound. We also improve the previous quantum time-space tradeoff lower bounds for n×n Boolean (i.e., AND-OR) matrix multiplication from T = Ω(n^{2.5}/S^{1/2}) to T = Ω(n^{2.5}/S^{1/4}), which has optimal exponents for the powerful query algorithms to which it applies. Our method also yields improved lower bounds for classical algorithms. @InProceedings{STOC24p596, author = {Paul Beame and Niels Kornerup and Michael Whitmeyer}, title = {Quantum Time-Space Tradeoffs for Matrix Problems}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {596--607}, doi = {10.1145/3618260.3649700}, year = {2024}, } Publisher's Version

Behera, Amik Raj 
STOC '24: "Local Correction of Linear ..."
Local Correction of Linear Functions over the Boolean Cube
Prashanth Amireddy, Amik Raj Behera, Manaswi Paraashar, Srikanth Srinivasan, and Madhu Sudan (Harvard University, USA; Aarhus University, Aarhus, Denmark; University of Copenhagen, Copenhagen, Denmark) We consider the task of locally correcting, and locally list-correcting, multivariate linear functions over the domain {0,1}^{n} over arbitrary fields and, more generally, Abelian groups. Such functions form error-correcting codes of relative distance 1/2, and we give local-correction algorithms correcting up to nearly a 1/4-fraction of errors making O(log n) queries. This query complexity is optimal up to poly(log log n) factors. We also give local list-correcting algorithms correcting a (1/2 − ε)-fraction of errors with O_{ε}(log n) queries. These results may be viewed as natural generalizations of the classical work of Goldreich and Levin, whose work addresses the special case where the underlying group is ℤ_{2}. By extending to the case where the underlying group is, say, the reals, we give the first non-trivial locally correctable codes (LCCs) over the reals (with query complexity being sublinear in the dimension, also known as the message length). Previous works in the area mostly focused on the case where the domain is a vector space or a group, and this lends itself to tools that exploit symmetry. Since our domains lack such symmetries, we encounter new challenges whose resolution may be of independent interest. The central challenge in constructing the local corrector is constructing “nearly balanced vectors” over {−1,1}^{n} that span 1^{n} — we show how to construct O(log n) vectors that do so, with entries in each vector summing to ±1. The challenge in the local list-correction algorithms, given the local corrector, is principally combinatorial, i.e., in proving that the number of linear functions within any Hamming ball of radius (1/2−ε) is O_{ε}(1).
Getting this general result covering every Abelian group requires integrating a variety of known methods with some new combinatorial ingredients analyzing the structural properties of codewords that lie within small Hamming balls. @InProceedings{STOC24p764, author = {Prashanth Amireddy and Amik Raj Behera and Manaswi Paraashar and Srikanth Srinivasan and Madhu Sudan}, title = {Local Correction of Linear Functions over the Boolean Cube}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {764--775}, doi = {10.1145/3618260.3649746}, year = {2024}, } Publisher's Version

Behnezhad, Soheil 
STOC '24: "Approximating Maximum Matching ..."
Approximating Maximum Matching Requires Almost Quadratic Time
Soheil Behnezhad, Mohammad Roghani, and Aviad Rubinstein (Northeastern University, USA; Stanford University, USA) We study algorithms for estimating the size of a maximum matching. This problem has been subject to extensive research. For n-vertex graphs, Bhattacharya, Kiss, and Saranurak [FOCS’23] (BKS) showed that an estimate that is within εn of the optimal solution can be achieved in n^{2−Ω_{ε}(1)} time. While this is subquadratic in n for any fixed ε > 0, it gets closer and closer to the trivial Θ(n^{2})-time algorithm that reads the entire input as ε is made smaller and smaller. In this work, we close this gap and show that the algorithm of BKS is close to optimal. In particular, we prove that for any fixed δ > 0, there is another fixed ε = ε(δ) > 0 such that estimating the size of a maximum matching within an additive error of εn requires Ω(n^{2−δ}) time in the adjacency list model. @InProceedings{STOC24p444, author = {Soheil Behnezhad and Mohammad Roghani and Aviad Rubinstein}, title = {Approximating Maximum Matching Requires Almost Quadratic Time}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {444--454}, doi = {10.1145/3618260.3649785}, year = {2024}, } Publisher's Version

Bérczi, Kristóf 
STOC '24: "Reconfiguration of Basis Pairs ..."
Reconfiguration of Basis Pairs in Regular Matroids
Kristóf Bérczi, Bence Mátravölgyi, and Tamás Schwarcz (Eötvös Loránd University, Hungary; ETH Zurich, Switzerland) In recent years, combinatorial reconfiguration problems have attracted great attention due to their connection to various topics such as optimization, counting, enumeration, or sampling. One of the most intriguing open questions concerns the exchange distance of two matroid basis sequences, a problem that appears in several areas of computer science and mathematics. In 1980, White proposed a conjecture for the characterization of two basis sequences being reachable from each other by symmetric exchanges, which received significant interest also in algebra due to its connection to toric ideals and Gröbner bases. In this work, we verify White’s conjecture for basis sequences of length two in regular matroids, a problem that was formulated as a separate question by Farber, Richter, and Shank, and by Andres, Hochstättler, and Merkel. Most previous work on White’s conjecture has not considered the question from an algorithmic perspective. We study the problem from an optimization point of view: our proof implies a polynomial algorithm for determining a sequence of symmetric exchanges that transforms a basis pair into another, thus providing the first polynomial upper bound on the exchange distance of basis pairs in regular matroids. As a byproduct, we verify a conjecture of Gabow from 1976 on the serial symmetric exchange property of matroids for the regular case. @InProceedings{STOC24p1653, author = {Kristóf Bérczi and Bence Mátravölgyi and Tamás Schwarcz}, title = {Reconfiguration of Basis Pairs in Regular Matroids}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1653--1664}, doi = {10.1145/3618260.3649660}, year = {2024}, } Publisher's Version

Berendsohn, Benjamin Aram 
STOC '24: "Optimization with Pattern-Avoiding ..."
Optimization with Pattern-Avoiding Input
Benjamin Aram Berendsohn, László Kozma, and Michal Opler (Freie Universität Berlin, Berlin, Germany; Czech Technical University, Prague, Czechia) Permutation pattern-avoidance is a central concept of both enumerative and extremal combinatorics. In this paper we study the effect of permutation pattern-avoidance on the complexity of optimization problems. In the context of the dynamic optimality conjecture (Sleator, Tarjan, STOC 1983), Chalermsook, Goswami, Kozma, Mehlhorn, and Saranurak (FOCS 2015) conjectured that the amortized search cost of an optimal binary search tree (BST) is constant whenever the search sequence is pattern-avoiding. The best known bound to date is 2^{α(n)(1+o(1))}, recently obtained by Chalermsook, Pettie, and Yingchareonthawornchai (SODA 2024); here n is the BST size and α(·) the inverse-Ackermann function. In this paper we resolve the conjecture, showing a tight O(1) bound. This indicates a barrier to dynamic optimality: any candidate online BST (e.g., splay trees or greedy trees) must match this optimum, but current analysis techniques only give superconstant bounds. More broadly, we argue that the easiness of pattern-avoiding input is a general phenomenon, not limited to BSTs or even to data structures. To illustrate this, we show that when the input avoids an arbitrary, fixed, a priori unknown pattern, one can efficiently compute: (1) a k-server solution of n requests from a unit interval, with total cost n^{O(1/log k)}, in contrast to the worst-case Θ(n/k) bound, and (2) a traveling salesman tour of n points from a unit box, of length O(log n), in contrast to the worst-case Θ(√n) bound; similar results hold for the Euclidean minimum spanning tree, Steiner tree, and nearest-neighbor graphs. We show both results to be tight. Our techniques build on the Marcus–Tardos proof of the Stanley–Wilf conjecture, and on the recently emerging concept of twin-width.
@InProceedings{STOC24p671, author = {Benjamin Aram Berendsohn and László Kozma and Michal Opler}, title = {Optimization with Pattern-Avoiding Input}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {671--682}, doi = {10.1145/3618260.3649631}, year = {2024}, } Publisher's Version
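Pattern containment, the notion underlying all of these bounds, can be tested by brute force on small instances. A hypothetical helper — this is not the paper's machinery, which exploits Marcus–Tardos-style structure, just the definition made executable:

```python
from itertools import combinations

def contains_pattern(perm, pattern):
    """Return True if `perm` contains `pattern`, i.e., some subsequence
    of `perm` is order-isomorphic to `pattern`. Brute force over all
    length-k subsequences; pattern avoidance is the negation."""
    k = len(pattern)
    # Two sequences of distinct values are order-isomorphic iff the
    # index lists sorted by value (argsorts) coincide.
    argsort = lambda seq: sorted(range(len(seq)), key=seq.__getitem__)
    target = argsort(pattern)
    return any(argsort([perm[i] for i in idx]) == target
               for idx in combinations(range(len(perm)), k))

# 2413 is a classic pattern: permutations avoiding both 2413 and 3142
# are exactly the separable permutations.
avoids_2413 = not contains_pattern([3, 1, 4, 2], [2, 4, 1, 3])
```

An input sequence avoids a pattern π precisely when `contains_pattern` returns False for every length-|π| subsequence, which is what makes the structural results above applicable.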

Beretta, Lorenzo 
STOC '24: "Approximate Earth Mover’s ..."
Approximate Earth Mover’s Distance in Truly Subquadratic Time
Lorenzo Beretta and Aviad Rubinstein (University of Copenhagen, Copenhagen, Denmark; Stanford University, USA) We design an additive approximation scheme for estimating the cost of the min-weight bipartite matching problem: given a bipartite graph with nonnegative edge costs and ε > 0, our algorithm estimates the cost of matching all but an O(ε)-fraction of the vertices in truly subquadratic time O(n^{2−δ(ε)}). Our algorithm has a natural interpretation for computing the Earth Mover’s Distance (EMD), up to an ε-additive approximation. Notably, we make no assumptions about the underlying metric (more generally, the costs do not have to satisfy the triangle inequality). Note that compared to the size of the instance (an arbitrary n × n cost matrix), our algorithm runs in sublinear time. Our algorithm can approximate a slightly more general problem: max-cardinality bipartite matching with a knapsack constraint, where the goal is to maximize the number of vertices that can be matched up to a total cost B. @InProceedings{STOC24p47, author = {Lorenzo Beretta and Aviad Rubinstein}, title = {Approximate Earth Mover’s Distance in Truly Subquadratic Time}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {47--58}, doi = {10.1145/3618260.3649629}, year = {2024}, } Publisher's Version
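For intuition, the quantity being approximated is the ordinary min-weight perfect matching cost, which can be computed exactly by brute force on tiny instances. An illustrative O(n!) baseline — nothing like the paper's subquadratic scheme, just the objective made concrete:

```python
from itertools import permutations

def exact_min_cost_matching(cost):
    """Exact min-weight perfect bipartite matching cost on an n x n
    cost matrix, by enumerating all assignments of rows to columns.
    No metric or triangle-inequality assumption is needed."""
    n = len(cost)
    return min(sum(cost[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

# Arbitrary nonnegative costs (an arbitrary cost matrix is the input model).
cost = [[4, 1, 3],
        [2, 0, 5],
        [3, 2, 2]]
best = exact_min_cost_matching(cost)  # rows -> cols (1, 0, 2): 1 + 2 + 2 = 5
```

Reading the full matrix already takes Θ(n²) time, which is why an estimator running in O(n^{2−δ(ε)}) time is sublinear in the input size.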

Bergamaschi, Thiago 
STOC '24: "Approaching the Quantum Singleton ..."
Approaching the Quantum Singleton Bound with Approximate Error Correction
Thiago Bergamaschi, Louis Golowich, and Sam Gunn (University of California at Berkeley, USA) It is well known that no quantum error correcting code of rate R can correct adversarial errors on more than a (1−R)/4 fraction of symbols. But what if we only require our codes to approximately recover the message? In this work, we construct efficiently-decodable approximate quantum codes against adversarial error rates approaching the quantum Singleton bound of (1−R)/2, for any constant rate R. Specifically, for every R ∈ (0,1) and γ > 0, we construct codes of rate R, message length k, and alphabet size 2^{O(1/γ^{5})} that are efficiently decodable against a (1−R−γ)/2 fraction of adversarial errors and recover the message up to inverse-exponential error 2^{−Ω(k)}. At a technical level, we use classical robust secret sharing and quantum purity testing to reduce approximate quantum error correction to a suitable notion of quantum list decoding. We then instantiate our notion of quantum list decoding by (i) introducing folded quantum Reed-Solomon codes, and (ii) applying a new, quantum version of distance amplification. @InProceedings{STOC24p1507, author = {Thiago Bergamaschi and Louis Golowich and Sam Gunn}, title = {Approaching the Quantum Singleton Bound with Approximate Error Correction}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1507--1516}, doi = {10.1145/3618260.3649680}, year = {2024}, } Publisher's Version

Bernasconi, Martino 
STOC '24: "No-Regret Learning in Bilateral ..."
No-Regret Learning in Bilateral Trade via Global Budget Balance
Martino Bernasconi, Matteo Castiglioni, Andrea Celli, and Federico Fusco (Bocconi University, Italy; Politecnico di Milano, Italy; Sapienza University of Rome, Italy) Bilateral trade models the problem of intermediating between two rational agents — a seller and a buyer — both characterized by a private valuation for an item they want to trade. We study the online learning version of the problem, in which at each time step a new seller and buyer arrive and the learner has to set prices for them without any knowledge about their (adversarially generated) valuations. In this setting, known impossibility results rule out the existence of no-regret algorithms when budget balance has to be enforced at each time step. In this paper, we introduce the notion of global budget balance, which only requires the learner to fulfill budget balance over the entire time horizon. Under this natural relaxation, we provide the first no-regret algorithms for adversarial bilateral trade under various feedback models. First, we show that in the full-feedback model, the learner can guarantee Õ(√T) regret against the best fixed prices in hindsight, and that this bound is optimal up to polylogarithmic terms. Second, we provide a learning algorithm guaranteeing an Õ(T^{3/4}) regret upper bound with one-bit feedback, which we complement with an Ω(T^{5/7}) lower bound that holds even in the two-bit feedback model. Finally, we introduce and analyze an alternative benchmark that is provably stronger than the best fixed prices in hindsight and is inspired by the literature on bandits with knapsacks. @InProceedings{STOC24p247, author = {Martino Bernasconi and Matteo Castiglioni and Andrea Celli and Federico Fusco}, title = {No-Regret Learning in Bilateral Trade via Global Budget Balance}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {247--258}, doi = {10.1145/3618260.3649653}, year = {2024}, } Publisher's Version
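The global budget balance relaxation is simple to state in code. A toy check — the trade representation (one seller price and one buyer price per executed trade) is my own simplification of the model:

```python
def globally_budget_balanced(trades):
    """Global budget balance: the intermediary's cumulative revenue over
    the whole horizon must be nonnegative. Each trade is a pair
    (seller_price, buyer_price): the learner pays the seller and collects
    from the buyer. Individual steps may run a deficit -- the relaxation
    that per-step budget balance forbids."""
    return sum(buyer_price - seller_price
               for seller_price, buyer_price in trades) >= 0

# Step 2 runs a deficit of 2, but the horizon total is (5-3)+(4-6)+(3-2) = 1.
trades = [(3, 5), (6, 4), (2, 3)]
ok = globally_budget_balanced(trades)
```

Per-step budget balance would reject the second trade outright; globally, the surplus from other steps absorbs it, which is what enables the no-regret guarantees above.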

Besselman, Tyler 
STOC '24: "Tight Time-Space Tradeoffs ..."
Tight Time-Space Tradeoffs for the Decisional Diffie-Hellman Problem
Akshima, Tyler Besselman, Siyao Guo, Zhiye Xie, and Yuping Ye (NYU Shanghai, China; East China Normal University, China) In the (preprocessing) Decisional Diffie-Hellman (DDH) problem, we are given a cyclic group G of prime order N with a generator g, and want to prepare some advice of size S, such that we can efficiently distinguish (g^{x},g^{y},g^{xy}) from (g^{x},g^{y},g^{z}) in time T for uniformly and independently chosen x, y, z from [N]. This is a central cryptographic problem whose computational hardness underpins many widely deployed schemes, such as the Diffie–Hellman key exchange protocol. We prove that any generic preprocessing DDH algorithm (operating in any cyclic group) achieves advantage at most O(ST^{2}/N). This bound matches the best known attack up to polylog factors, and confirms that DDH is as secure as the (seemingly harder) discrete logarithm problem against preprocessing attacks. Our result resolves an open question by Corrigan-Gibbs and Kogan (EUROCRYPT 2018), which proved optimal bounds for many variants of discrete logarithm problems except DDH (with an O(√(ST^{2}/N)) bound). We obtain our results by adopting and refining the approach of Gravin, Guo, Kwok, and Lu (SODA 2021) and of Yun (EUROCRYPT 2015). Along the way, we significantly simplify and extend the above techniques, which may be of independent interest. The highlights of our techniques are the following: 1. We obtain a simpler reduction from decisional problems against S-bit advice to their S-wise XOR lemmas against zero advice, recovering the reduction by Gravin, Guo, Kwok, and Lu (SODA 2021). 2. We show how to reduce the generic hardness of decisional problems to that of their variants in the simpler hyperplane model proposed by Yun (EUROCRYPT 2015). This is the first work analyzing a decisional problem in Yun’s model, answering an open problem proposed by Auerbach, Hoffman, and Pascual-Perez (TCC 2023). 3. We prove an S-wise XOR lemma of DDH in Yun’s model.
As a corollary, we obtain the generic hardness of the S-XOR DDH problem. @InProceedings{STOC24p1739, author = { Akshima and Tyler Besselman and Siyao Guo and Zhiye Xie and Yuping Ye}, title = {Tight Time-Space Tradeoffs for the Decisional Diffie-Hellman Problem}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1739--1749}, doi = {10.1145/3618260.3649752}, year = {2024}, } Publisher's Version
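The DDH distinguishing task itself can be illustrated in a toy group. The sketch below samples real and random tuples; the parameters are illustrative only and far too small to be cryptographically meaningful:

```python
import random

# Toy DDH instance in the order-11 subgroup of Z_23^*: 23 = 2*11 + 1 is a
# safe prime, so the subgroup generated by g = 4 is cyclic of prime order 11.
P, N, g = 23, 11, 4

def ddh_sample(real, rng):
    """Return (g^x, g^y, g^xy) mod P if `real`, else (g^x, g^y, g^z),
    for independent uniform exponents x, y, z in [N]."""
    x, y, z = (rng.randrange(N) for _ in range(3))
    gx, gy = pow(g, x, P), pow(g, y, P)
    third = pow(g, (x * y) % N, P) if real else pow(g, z, P)
    return gx, gy, third

rng = random.Random(7)
real_tuple = ddh_sample(True, rng)
random_tuple = ddh_sample(False, rng)
```

A preprocessing adversary additionally gets an S-bit advice string computed offline from the group; the theorem bounds its distinguishing advantage by O(ST²/N).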

Bhangale, Amey 
STOC '24: "On Approximability of Satisfiable ..."
On Approximability of Satisfiable k-CSPs: IV
Amey Bhangale, Subhash Khot, and Dor Minzer (University of California at Riverside, USA; New York University, USA; Massachusetts Institute of Technology, USA) We prove a stability result for general 3-wise correlations over distributions satisfying mild connectivity properties. More concretely, we show that if Σ, Γ and Φ are alphabets of constant size, and µ is a distribution over Σ×Γ×Φ satisfying: (1) the probability of each atom is at least Ω(1), (2) µ is pairwise connected, and (3) µ has no Abelian embeddings into (ℤ,+), then the following holds. Any triple of 1-bounded functions f: Σ^{n}→ℂ, g: Γ^{n}→ℂ, h: Φ^{n}→ℂ satisfying |𝔼_{(x,y,z)∼µ^{⊗n}}[f(x)g(y)h(z)]| ≥ ε must arise from an Abelian group associated with the distribution µ. More specifically, we show that there is an Abelian group (H,+) of constant size such that for any such f, g and h, the function f (and similarly g and h) is correlated with a function of the form χ(σ(x_{1}),…,σ(x_{n})) · L(x), where σ: Σ → H is some map, χ ∈ Ĥ^{⊗n} is a character, and L: Σ^{n}→ℂ is a low-degree function with bounded 2-norm. En route we prove a few additional results that may be of independent interest, such as an improved direct product theorem, as well as a result we refer to as a “restriction inverse theorem” about the structure of functions that, under random restrictions, with noticeable probability have significant correlation with a product function. In companion papers, we show applications of our results to the field of Probabilistically Checkable Proofs, as well as various areas in discrete mathematics such as extremal combinatorics and additive combinatorics. @InProceedings{STOC24p1423, author = {Amey Bhangale and Subhash Khot and Dor Minzer}, title = {On Approximability of Satisfiable k-CSPs: IV}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1423--1434}, doi = {10.1145/3618260.3649610}, year = {2024}, } Publisher's Version

Bhargav, C. S. 
STOC '24: "Learning the Coefficients: ..."
Learning the Coefficients: A Presentable Version of Border Complexity and Applications to Circuit Factoring
C. S. Bhargav , Prateek Dwivedi , and Nitin Saxena (IIT Kanpur, India) The border, or the approximative, model of algebraic computation (VP) is quite popular due to the Geometric Complexity Theory (GCT) approach to the P≠NP conjecture, and its complex analytic origins. On the flip side, the definition of the border is inherently existential in the field constants that the model employs. In particular, a poly-size border circuit C(ε, x) cannot be compactly presented in reality, as the limit parameter ε may require exponential precision. In this work we resolve this issue by giving a constructive, or presentable, version of border circuits and state its applications. We make the border presentable by restricting the circuit C to use only those constants, in the function field F_{q}(ε), that it can generate by the ring operations on {ε}∪F_{q}, and their division, within a poly-size circuit. This model is more expressive than VP as it affords exponential degree in ε; and analogous to the usual border, we define new border classes called VP_{ε} and VNP_{ε}. We prove that both these (now called presentable border) classes lie in VNP. Such a 'debordering' result is not known for the classical border classes VP and VNP, respectively. We pose VP_{ε}=VP as a new conjecture to study the border. The heart of our technique is a newly formulated exponential interpolation over a finite field, to bound the Boolean complexity of the coefficients before deducing the algebraic complexity. It attacks two factorization problems which were open before. We make progress on (Conj. 8.3 in Bürgisser 2000, FOCS 2001) and solve (Conj. 2.1 in Bürgisser 2000; Chou, Kumar, Solomon CCC 2018) over all finite fields: 1. Each poly-degree irreducible factor, with multiplicity coprime to the field characteristic, of a poly-size circuit (of possibly exponential degree), is in VNP. 2. For all finite fields, and all factors, VNP is closed under factoring. Consequently, factors of VP are always in VNP. 
The prime characteristic cases were open before due to the inseparability obstruction (i.e., when the multiplicity is not coprime to q). @InProceedings{STOC24p130, author = {C. S. Bhargav and Prateek Dwivedi and Nitin Saxena}, title = {Learning the Coefficients: A Presentable Version of Border Complexity and Applications to Circuit Factoring}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {130--140}, doi = {10.1145/3618260.3649743}, year = {2024}, } Publisher's Version 

Bhaskara, Aditya 
STOC '24: "New Tools for Smoothed Analysis: ..."
New Tools for Smoothed Analysis: Least Singular Value Bounds for Random Matrices with Dependent Entries
Aditya Bhaskara , Eric Evert , Vaidehi Srinivas , and Aravindan Vijayaraghavan (University of Utah, USA; Northwestern University, USA) We develop new techniques for proving lower bounds on the least singular value of random matrices with limited randomness. The matrices we consider have entries that are given by polynomials of a few underlying base random variables. This setting captures a core technical challenge for obtaining smoothed analysis guarantees in many algorithmic settings. Least singular value bounds often involve showing strong anti-concentration inequalities that are intricate and much less understood compared to concentration (or large deviation) bounds. First, we introduce a general technique for proving anti-concentration that uses well-conditionedness properties of the Jacobian of a polynomial map, and show how to combine this with a hierarchical є-net argument to prove least singular value bounds. Our second tool is a new statement about least singular values to reason about higher-order lifts of smoothed matrices and the action of linear operators on them. Apart from getting simpler proofs of existing smoothed analysis results, we use these tools to handle more general families of random matrices. This allows us to produce smoothed analysis guarantees in several previously open settings. These new settings include smoothed analysis guarantees for power sum decompositions and certifying robust entanglement of subspaces, where prior work could only establish least singular value bounds for fully random instances or only show non-robust genericity guarantees. @InProceedings{STOC24p375, author = {Aditya Bhaskara and Eric Evert and Vaidehi Srinivas and Aravindan Vijayaraghavan}, title = {New Tools for Smoothed Analysis: Least Singular Value Bounds for Random Matrices with Dependent Entries}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {375--386}, doi = {10.1145/3618260.3649765}, year = {2024}, } Publisher's Version Info 

Bhattacharya, Sayan 
STOC '24: "Near-Optimal Dynamic Rounding ..."
Near-Optimal Dynamic Rounding of Fractional Matchings in Bipartite Graphs
Sayan Bhattacharya , Peter Kiss , Aaron Sidford , and David Wajc (University of Warwick, United Kingdom; Stanford University, USA; Technion, Israel) We study dynamic (1−є)-approximate rounding of fractional matchings—a key ingredient in numerous breakthroughs in the dynamic graph algorithms literature. Our first contribution is a surprisingly simple deterministic rounding algorithm in bipartite graphs with amortized update time O(є^{−1} log^{2} (є^{−1} · n)), matching an (unconditional) recourse lower bound of Ω(є^{−1}) up to logarithmic factors. Moreover, this algorithm’s update time improves provided the minimum (nonzero) weight in the fractional matching is lower bounded throughout. Combining this algorithm with novel dynamic partial rounding algorithms to increase this minimum weight, we obtain a number of algorithms that improve this dependence on n. For example, we give a high-probability randomized algorithm with Õ(є^{−1} · (log log n)^{2}) update time against adaptive adversaries. Using our rounding algorithms, we also round known (1−є)-decremental fractional bipartite matching algorithms with no asymptotic overhead, thus improving on state-of-the-art algorithms for the decremental bipartite matching problem. Further, we provide extensions of our results to general graphs and to maintaining almost-maximal matchings. @InProceedings{STOC24p59, author = {Sayan Bhattacharya and Peter Kiss and Aaron Sidford and David Wajc}, title = {Near-Optimal Dynamic Rounding of Fractional Matchings in Bipartite Graphs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {59--70}, doi = {10.1145/3618260.3649648}, year = {2024}, } Publisher's Version 

Bitansky, Nir 
STOC '24: "Batch Proofs Are Statistically ..."
Batch Proofs Are Statistically Hiding
Nir Bitansky , Chethan Kamath , Omer Paneth , Ron D. Rothblum , and Prashant Nalini Vasudevan (Tel Aviv University, Israel; IIT Bombay, India; Technion, Israel; National University of Singapore, Singapore) Batch proofs are proof systems that convince a verifier that x_{1},…,x_{t} ∈ L, for some NP language L, with communication that is much shorter than sending the t witnesses. In the case of statistical soundness (where the cheating prover is unbounded but the honest prover is efficient given the witnesses), interactive batch proofs are known for UP, the class of unique-witness NP languages. In the case of computational soundness (where both honest and dishonest provers are efficient), non-interactive solutions are now known for all of NP, assuming standard lattice or group assumptions. We exhibit the first negative results regarding the existence of batch proofs and arguments: Statistically sound batch proofs for L imply that L has a statistically witness indistinguishable (SWI) proof, with inverse polynomial SWI error, and a non-uniform honest prover. The implication is unconditional for obtaining honest-verifier SWI or for obtaining full-fledged SWI from public-coin protocols, whereas for private-coin protocols full-fledged SWI is obtained assuming one-way functions. This poses a barrier for achieving batch proofs beyond UP (where witness indistinguishability is trivial). In particular, assuming that NP does not have SWI proofs, batch proofs for all of NP do not exist. Computationally sound batch proofs (a.k.a. batch arguments or BARGs) for NP, together with one-way functions, imply statistical zero-knowledge (SZK) arguments for NP with roughly the same number of rounds, an inverse polynomial zero-knowledge error, and a non-uniform honest prover. Thus, constant-round interactive BARGs from one-way functions would yield constant-round SZK arguments from one-way functions. 
This would be surprising as SZK arguments are currently only known assuming constant-round statistically-hiding commitments. We further prove new positive implications of non-interactive batch arguments to non-interactive zero-knowledge arguments (with explicit uniform prover and verifier): Non-interactive BARGs for NP, together with one-way functions, imply non-interactive computational zero-knowledge arguments for NP. Assuming also dual-mode commitments, the zero knowledge can be made statistical. Both our negative and positive results stem from a new framework showing how to transform a batch protocol for a language L into an SWI protocol for L. @InProceedings{STOC24p435, author = {Nir Bitansky and Chethan Kamath and Omer Paneth and Ron D. Rothblum and Prashant Nalini Vasudevan}, title = {Batch Proofs Are Statistically Hiding}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {435--443}, doi = {10.1145/3618260.3649775}, year = {2024}, } Publisher's Version 

Björklund, Andreas 
STOC '24: "The Asymptotic Rank Conjecture ..."
The Asymptotic Rank Conjecture and the Set Cover Conjecture Are Not Both True
Andreas Björklund and Petteri Kaski (IT University of Copenhagen, Copenhagen, Denmark; Aalto University, Finland) Strassen’s asymptotic rank conjecture [Progr. Math. 120 (1994)] claims a strong submultiplicative upper bound on the rank of a three-tensor obtained as an iterated Kronecker product of a constant-size base tensor. The conjecture, if true, most notably would put square matrix multiplication in quadratic time. We note here that some more-or-less unexpected algorithmic results in the area of exponential-time algorithms would also follow. Specifically, we study the so-called set cover conjecture, which states that for any є>0 there exists a positive integer constant k such that no algorithm solves the k-Set Cover problem in worst-case time O((2−є)^{n} |F| poly(n)). The k-Set Cover problem asks, given as input an n-element universe U, a family F of size-at-most-k subsets of U, and a positive integer t, whether there is a subfamily of at most t sets in F whose union is U. The conjecture was formulated by Cygan, Fomin, Kowalik, Lokshtanov, Marx, Pilipczuk, Pilipczuk, and Saurabh in the monograph Parameterized Algorithms [Springer, 2015], but was implicit as a hypothesis already in Cygan, Dell, Lokshtanov, Marx, Nederlof, Okamoto, Paturi, Saurabh, and Wahlström [CCC 2012, ACM Trans. Algorithms 2016], there conjectured to follow from the Strong Exponential Time Hypothesis. We prove that if the asymptotic rank conjecture is true, then the set cover conjecture is false. Using a reduction by Krauthgamer and Trabelsi [STACS 2019], in this scenario we would also get an O((2−δ)^{n})-time randomized algorithm for some constant δ>0 for another well-studied problem for which no such algorithm is known, namely that of deciding whether a given n-vertex directed graph has a Hamiltonian cycle. At a fine-grained level, our results do not need the full strength of the asymptotic rank conjecture; it suffices that the conclusion of the conjecture holds approximately for a single 7 × 7 × 7 tensor. 
@InProceedings{STOC24p859, author = {Andreas Björklund and Petteri Kaski}, title = {The Asymptotic Rank Conjecture and the Set Cover Conjecture Are Not Both True}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {859--870}, doi = {10.1145/3618260.3649656}, year = {2024}, } Publisher's Version 
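For context on the bound the set cover conjecture targets, here is a minimal sketch (illustrative, not from the paper) of the textbook subset dynamic program that solves Set Cover in O(2^{n} · |F|) time — the baseline whose base of the exponent the conjecture asserts cannot be improved to (2−є)^{n}; the function name is ours:

```python
def min_set_cover(universe, family):
    """Textbook O(2^n * |F|) dynamic program for Set Cover.

    best[S] = minimum number of sets from `family` whose union,
    viewed as a bitmask over `universe`, is exactly S. The answer
    is best[full mask]. Returns float('inf') if no cover exists."""
    n = len(universe)
    idx = {u: i for i, u in enumerate(universe)}
    # Encode each set of the family as a bitmask over the universe.
    masks = [sum(1 << idx[u] for u in s if u in idx) for s in family]
    INF = float("inf")
    best = [INF] * (1 << n)
    best[0] = 0
    # Process states in increasing order; every improving transition
    # S -> S | m goes from a strictly smaller mask, so each state is
    # final by the time it is expanded.
    for S in range(1 << n):
        if best[S] == INF:
            continue
        for m in masks:
            T = S | m
            if best[S] + 1 < best[T]:
                best[T] = best[S] + 1
    return best[(1 << n) - 1]
```

Each state is a bitmask of covered elements, so the table has 2^{n} entries; the conjecture concerns shaving the base of this exponent, not the polynomial factors.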

Blais, Eric 
STOC '24: "New Graph and Hypergraph Container ..."
New Graph and Hypergraph Container Lemmas with Applications in Property Testing
Eric Blais and Cameron Seth (University of Waterloo, Canada) The graph and hypergraph container methods are powerful tools with a wide range of applications across combinatorics. Recently, Blais and Seth (FOCS 2023) showed that the graph container method is particularly well-suited for the analysis of the natural canonical tester for two fundamental graph properties: having a large independent set and k-colorability. In this work, we show that the connection between the container method and property testing extends further along two different directions. First, we show that the container method can be used to analyze the canonical tester for many other properties of graphs and hypergraphs. We introduce a new hypergraph container lemma and use it to give an upper bound of O(kq^{3}/є) on the sample complexity of є-testing satisfiability, where q is the number of variables per constraint and k is the size of the alphabet. This is the first upper bound for the problem that is polynomial in all of k, q, and 1/є. As a corollary, we get new upper bounds on the sample complexity of the canonical testers for hypergraph colorability and for every semi-homogeneous graph partition property. Second, we show that the container method can also be used to study the query complexity of (non-canonical) graph property testers. This result is obtained by introducing a new container lemma for the class of all independent set stars, a strict superset of the class of all independent sets. We use this container lemma to give a new upper bound of O(ρ^{5}/є^{7/2}) on the query complexity of є-testing the ρ-independent set property. This establishes for the first time the non-optimality of the canonical tester for a non-homogeneous graph partition property. 
@InProceedings{STOC24p1793, author = {Eric Blais and Cameron Seth}, title = {New Graph and Hypergraph Container Lemmas with Applications in Property Testing}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1793--1804}, doi = {10.1145/3618260.3649708}, year = {2024}, } Publisher's Version 

Blikstad, Joakim 
STOC '24: "Minimum Star Partitions of ..."
Minimum Star Partitions of Simple Polygons in Polynomial Time
Mikkel Abrahamsen , Joakim Blikstad , André Nusser , and Hanwen Zhang (University of Copenhagen, Copenhagen, Denmark; KTH Royal Institute of Technology, Stockholm, Sweden; MPI-INF, Germany) We devise a polynomial-time algorithm for partitioning a simple polygon P into a minimum number of star-shaped polygons. The question of whether such an algorithm exists has been open for more than four decades [Avis and Toussaint, Pattern Recognit., 1981] and it has been repeated frequently, for example in O’Rourke’s famous book [Art Gallery Theorems and Algorithms, 1987]. In addition to its strong theoretical motivation, the problem is also motivated by practical domains such as CNC pocket milling, motion planning, and shape parameterization. The only previously known algorithm for a non-trivial special case is for P being both monotone and rectilinear [Liu and Ntafos, Algorithmica, 1991]. For general polygons, an algorithm was only known for the restricted version in which Steiner points are disallowed [Keil, SIAM J. Comput., 1985], meaning that each corner of a piece in the partition must also be a corner of P. Interestingly, the solution size for the restricted version may be linear for instances where the unrestricted solution has constant size. The covering variant in which the pieces are star-shaped but allowed to overlap—known as the Art Gallery Problem—was recently shown to be ∃ℝ-complete and is thus likely not in NP [Abrahamsen, Adamaszek and Miltzow, STOC 2018 & J. ACM 2022]; this is in stark contrast to our result. Arguably the most related work to ours is the polynomial-time algorithm to partition a simple polygon into a minimum number of convex pieces by Chazelle and Dobkin [STOC, 1979 & Comp. Geom., 1985]. 
@InProceedings{STOC24p904, author = {Mikkel Abrahamsen and Joakim Blikstad and André Nusser and Hanwen Zhang}, title = {Minimum Star Partitions of Simple Polygons in Polynomial Time}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {904--910}, doi = {10.1145/3618260.3649756}, year = {2024}, } Publisher's Version STOC '24: "Online Edge Coloring Is (Nearly) ..." Online Edge Coloring Is (Nearly) as Easy as Offline Joakim Blikstad , Ola Svensson , Radu Vintan , and David Wajc (KTH Royal Institute of Technology, Stockholm, Sweden; MPI-INF, Germany; EPFL, Lausanne, Switzerland; Technion, Israel) The classic theorem of Vizing (Diskret. Analiz.’64) asserts that any graph of maximum degree Δ can be edge colored (offline) using no more than Δ+1 colors (with Δ being a trivial lower bound). In the online setting, Bar-Noy, Motwani and Naor (IPL’92) conjectured that a (1+o(1))Δ-edge-coloring can be computed online in n-vertex graphs of maximum degree Δ=ω(logn). Numerous algorithms made progress on this question, using a higher number of colors or assuming restricted arrival models, such as random-order edge arrivals or vertex arrivals (e.g., AGKM FOCS’03, BMM SODA’10, CPW FOCS’19, BGW SODA’21, KLSST STOC’22). In this work, we resolve this long-standing conjecture in the affirmative in the most general setting of adversarial edge arrivals. We further generalize this result to obtain online counterparts of the list edge coloring result of Kahn (J. Comb. Theory. A’96) and of the recent “local” edge coloring result of Christiansen (STOC’23). @InProceedings{STOC24p36, author = {Joakim Blikstad and Ola Svensson and Radu Vintan and David Wajc}, title = {Online Edge Coloring Is (Nearly) as Easy as Offline}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {36--46}, doi = {10.1145/3618260.3649741}, year = {2024}, } Publisher's Version 
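For context on the gap the online edge coloring result closes: the folklore online algorithm simply gives each arriving edge the smallest color unused at either endpoint, which needs up to 2Δ−1 colors against the (1+o(1))Δ target achieved above. A minimal sketch (illustrative; the function name is ours, and this is not the paper's algorithm):

```python
def greedy_online_edge_coloring(edges, max_degree):
    """Folklore greedy online edge coloring with colors 1..2*max_degree-1.

    When an edge (u, v) arrives, each endpoint already has at most
    max_degree - 1 colored incident edges, so the two endpoints block
    at most 2*max_degree - 2 colors and some color in the palette is
    always free."""
    used = {}       # vertex -> set of colors on its incident edges
    coloring = {}   # edge -> assigned color
    for (u, v) in edges:
        blocked = used.setdefault(u, set()) | used.setdefault(v, set())
        color = next(c for c in range(1, 2 * max_degree) if c not in blocked)
        used[u].add(color)
        used[v].add(color)
        coloring[(u, v)] = color
    return coloring
```

The interesting regime in the abstract is exactly how much of the gap between this trivial 2Δ−1 bound and Vizing's offline Δ+1 bound survives under adversarial online arrivals.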

Bostanci, John 
STOC '24: "An Efficient Quantum Parallel ..."
An Efficient Quantum Parallel Repetition Theorem and Applications
John Bostanci , Luowen Qian , Nicholas Spooner , and Henry Yuen (Columbia University, USA; Boston University, USA; University of Warwick, United Kingdom; New York University, USA) We prove a tight parallel repetition theorem for 3-message computationally secure quantum interactive protocols between an efficient challenger and an efficient adversary. We also prove under plausible assumptions that the security of 4-message computationally secure protocols does not generally decrease under parallel repetition. These mirror the classical results of Bellare, Impagliazzo, and Naor. Finally, we prove that all quantum argument systems can be generically compiled to an equivalent 3-message argument system, mirroring the transformation for quantum proof systems. As immediate applications, we show how to derive hardness amplification theorems for quantum bit commitment schemes (answering a question of Yan), EFI pairs (answering a question of Brakerski, Canetti, and Qian), public-key quantum money schemes (answering a question of Aaronson and Christiano), and quantum zero-knowledge argument systems. We also derive an XOR lemma for quantum predicates as a corollary. @InProceedings{STOC24p1478, author = {John Bostanci and Luowen Qian and Nicholas Spooner and Henry Yuen}, title = {An Efficient Quantum Parallel Repetition Theorem and Applications}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1478--1487}, doi = {10.1145/3618260.3649603}, year = {2024}, } Publisher's Version 

Boyle, Elette 
STOC '24: "Memory Checking Requires Logarithmic ..."
Memory Checking Requires Logarithmic Overhead
Elette Boyle , Ilan Komargodski , and Neekon Vafa (Reichman University, Israel; NTT Research, USA; Hebrew University of Jerusalem, Israel; Massachusetts Institute of Technology, USA) We study the complexity of memory checkers with computational security and prove the first general tight lower bound. Memory checkers, first introduced over 30 years ago by Blum, Evans, Gemmel, Kannan, and Naor (FOCS ’91, Algorithmica ’94), allow a user to store and maintain a large memory on a remote and unreliable server by using small trusted local storage. The user can issue instructions to the server and after every instruction, obtain either the correct value or a failure (but not an incorrect answer) with high probability. The main complexity measure of interest is the size of the local storage and the number of queries the memory checker makes upon every logical instruction. The most efficient known construction has query complexity O(logn/loglogn) and local space proportional to a computational security parameter, assuming one-way functions, where n is the logical memory size. Dwork, Naor, Rothblum, and Vaikuntanathan (TCC ’09) showed that for a restricted class of “deterministic and non-adaptive” memory checkers, this construction is optimal, up to constant factors. However, going beyond the small class of deterministic and non-adaptive constructions has remained a major open problem. In this work, we fully resolve the complexity of memory checkers by showing that any construction with local space p and query complexity q must satisfy p ≥ n/(logn)^{O(q)}. This implies, as a special case, that q ≥ Ω(logn/loglogn) in any scheme, assuming that p ≤ n^{1−ε} for ε>0. The bound applies to any scheme with computational security, completeness 2/3, and inverse polynomial in n soundness (all of which make our lower bound only stronger). We further extend the lower bound to schemes where the read complexity q_{r} and write complexity q_{w} differ. 
For instance, we show the tight bound that if q_{r}=O(1) and p ≤ n^{1−ε} for ε>0, then q_{w} ≥ n^{Ω(1)}. This is the first lower bound, for any non-trivial class of constructions, showing a read-write query complexity tradeoff. Our proof is via a delicate compression argument showing that a “too good to be true” memory checker can be used to compress random bits of information. We draw inspiration from tools recently developed for lower bounds for relaxed locally decodable codes. However, our proof itself significantly departs from these works, necessitated by the differences between settings. @InProceedings{STOC24p1712, author = {Elette Boyle and Ilan Komargodski and Neekon Vafa}, title = {Memory Checking Requires Logarithmic Overhead}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1712--1723}, doi = {10.1145/3618260.3649686}, year = {2024}, } Publisher's Version 
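The near-optimal O(logn/loglogn)-query construction is beyond a short sketch, but the classical Merkle-tree memory checker already illustrates the logarithmic-overhead regime that the lower bound above shows is essentially necessary: the trusted client keeps only a root hash, and every read or write authenticates an O(logn)-length path through an untrusted hash tree. A minimal sketch, with the untrusted server simulated in-process and all class and method names our own:

```python
import hashlib

def H(*parts):
    """Hash a sequence of byte strings with SHA-256."""
    h = hashlib.sha256()
    for p in parts:
        h.update(p)
    return h.digest()

class MerkleMemory:
    """Illustrative Merkle-tree memory checker over n cells.

    `data` and `tree` model untrusted server storage; the trusted
    client state is only `root`. Each read or write touches one
    leaf-to-root path of length log2(n)."""
    def __init__(self, n):
        assert n > 0 and n & (n - 1) == 0, "power-of-two size for simplicity"
        self.n = n
        self.data = [b""] * n               # untrusted: raw cells
        self.tree = [b""] * (2 * n)         # untrusted: heap-layout hash tree
        for i in range(n):
            self.tree[n + i] = H(self.data[i])
        for i in range(n - 1, 0, -1):
            self.tree[i] = H(self.tree[2 * i], self.tree[2 * i + 1])
        self.root = self.tree[1]            # trusted local storage

    def read(self, i):
        # Recompute the leaf-to-root path from the served data and
        # siblings, and compare against the trusted root.
        node, acc = self.n + i, H(self.data[i])
        while node > 1:
            sib = self.tree[node ^ 1]
            acc = H(acc, sib) if node % 2 == 0 else H(sib, acc)
            node //= 2
        if acc != self.root:
            raise ValueError("server response failed verification")
        return self.data[i]

    def write(self, i, value):
        self.read(i)                        # authenticate the old path first
        self.data[i] = value
        node = self.n + i
        self.tree[node] = H(value)
        while node > 1:                     # refresh ancestors up to the root
            node //= 2
            self.tree[node] = H(self.tree[2 * node], self.tree[2 * node + 1])
        self.root = self.tree[1]
```

Here q = O(logn) and p is a single hash, which sits comfortably inside the p ≥ n/(logn)^{O(q)} trade-off curve proved in the paper.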

Brakensiek, Joshua 
STOC '24: "Generalized GM-MDS: Polynomial ..."
Generalized GM-MDS: Polynomial Codes Are Higher Order MDS
Joshua Brakensiek , Manik Dhar , and Sivakanth Gopi (Independent, USA; Massachusetts Institute of Technology, USA; Microsoft Research, USA) The GM-MDS theorem, conjectured by Dau-Song-Dong-Yuen and proved by Lovett and Yildiz-Hassibi, shows that the generator matrices of Reed-Solomon codes can attain every possible configuration of zeros for an MDS code. The recently emerging theory of higher order MDS codes has connected the GM-MDS theorem to other important properties of Reed-Solomon codes, including showing that Reed-Solomon codes can achieve list decoding capacity, even over fields of size linear in the message length. A few works have extended the GM-MDS theorem to other families of codes, including Gabidulin and skew polynomial codes. In this paper, we generalize all these previous results by showing that the GM-MDS theorem applies to any polynomial code, i.e., a code where the columns of the generator matrix are obtained by evaluating linearly independent polynomials at different points. We also show that the GM-MDS theorem applies to dual codes of such polynomial codes, which is non-trivial since the dual of a polynomial code may not be a polynomial code. More generally, we show that the GM-MDS theorem also holds for algebraic codes (and their duals) where columns of the generator matrix are chosen to be points on some irreducible variety which is not contained in a hyperplane through the origin. Our generalization has applications to constructing capacity-achieving list-decodable codes as shown in a follow-up work [Brakensiek, Dhar, Gopi, Zhang; 2024], where it is proved that randomly punctured algebraic-geometric (AG) codes achieve list-decoding capacity over constant-sized fields. 
@InProceedings{STOC24p728, author = {Joshua Brakensiek and Manik Dhar and Sivakanth Gopi}, title = {Generalized GM-MDS: Polynomial Codes Are Higher Order MDS}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {728--739}, doi = {10.1145/3618260.3649637}, year = {2024}, } Publisher's Version STOC '24: "AG Codes Achieve List Decoding ..." AG Codes Achieve List Decoding Capacity over Constant-Sized Fields Joshua Brakensiek , Manik Dhar , Sivakanth Gopi , and Zihan Zhang (Independent, USA; Massachusetts Institute of Technology, USA; Microsoft Research, USA; Ohio State University, USA) The recently-emerging field of higher order MDS codes has sought to unify a number of concepts in coding theory. Such areas captured by higher order MDS codes include maximally recoverable (MR) tensor codes, codes with optimal list-decoding guarantees, and codes with constrained generator matrices (as in the GM-MDS theorem). By proving these equivalences, Brakensiek-Gopi-Makam showed the existence of optimally list-decodable Reed-Solomon codes over exponential sized fields. Building on this, recent breakthroughs by Guo-Zhang and Alrabiah-Guruswami-Li have shown that randomly punctured Reed-Solomon codes achieve list-decoding capacity (which is a relaxation of optimal list-decodability) over linear size fields. We extend these works by developing a formal theory of relaxed higher order MDS codes. In particular, we show that there are two inequivalent relaxations which we call lower and upper relaxations. The lower relaxation is equivalent to relaxed optimal list-decodable codes and the upper relaxation is equivalent to relaxed MR tensor codes with a single parity check per column. We then generalize the techniques of Guo-Zhang and Alrabiah-Guruswami-Li to show that both these relaxations can be constructed over constant size fields by randomly puncturing suitable algebraic-geometric codes. For this, we crucially use the generalized GM-MDS theorem for polynomial codes recently proved by Brakensiek-Dhar-Gopi. 
We obtain the following corollaries from our main result: Randomly punctured algebraic-geometric codes of rate R are list-decodable up to radius (L/(L+1))(1−R−є) with list size L over fields of size exp(O(L/є)). In particular, they achieve list-decoding capacity with list size O(1/є) and field size exp(O(1/є^{2})). Prior to this work, AG codes were not even known to achieve list-decoding capacity. By randomly puncturing algebraic-geometric codes, we can construct relaxed MR tensor codes with a single parity check per column over constant-sized fields, whereas (non-relaxed) MR tensor codes require exponential field size. @InProceedings{STOC24p740, author = {Joshua Brakensiek and Manik Dhar and Sivakanth Gopi and Zihan Zhang}, title = {AG Codes Achieve List Decoding Capacity over Constant-Sized Fields}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {740--751}, doi = {10.1145/3618260.3649651}, year = {2024}, } Publisher's Version 

Brakerski, Zvika 
STOC '24: "Quantum State Obfuscation ..."
Quantum State Obfuscation from Classical Oracles
James Bartusek , Zvika Brakerski , and Vinod Vaikuntanathan (University of California at Berkeley, USA; Weizmann Institute of Science, Israel; Massachusetts Institute of Technology, USA) A major unresolved question in quantum cryptography is whether it is possible to obfuscate arbitrary quantum computation. Indeed, there is much yet to understand about the feasibility of quantum obfuscation even in the classical oracle model, where one is given for free the ability to obfuscate any classical circuit. In this work, we develop a new array of techniques that we use to construct a quantum state obfuscator, a powerful notion formalized recently by Coladangelo and Gunn (arXiv:2311.07794) in their pursuit of better software copy-protection schemes. Quantum state obfuscation refers to the task of compiling a quantum program, consisting of a quantum circuit C with a classical description and an auxiliary quantum state ψ, into a functionally-equivalent obfuscated quantum program that hides as much as possible about C and ψ. We prove the security of our obfuscator when applied to any pseudo-deterministic quantum program, i.e., one that computes a (nearly) deterministic classical input / classical output functionality. Our security proof is with respect to an efficient classical oracle, which may be heuristically instantiated using quantum-secure indistinguishability obfuscation for classical circuits. Our result improves upon the recent work of Bartusek, Kitagawa, Nishimaki and Yamakawa (STOC 2023) who also showed how to obfuscate pseudo-deterministic quantum circuits in the classical oracle model, but only ones with a completely classical description. Furthermore, our result answers a question of Coladangelo and Gunn, who provide a construction of quantum state indistinguishability obfuscation with respect to a quantum oracle, but leave the existence of a concrete real-world candidate as an open problem. 
Indeed, our quantum state obfuscator together with Coladangelo-Gunn gives the first candidate realization of a “best-possible” copy-protection scheme for all polynomial-time functionalities. Our techniques deviate significantly from previous works on quantum obfuscation. We develop several novel technical tools which we expect to be broadly useful in quantum cryptography. These tools include a publicly-verifiable, linearly-homomorphic quantum authentication scheme with classically-decodable ZX measurements (which we build from coset states), and a method for compiling any quantum circuit into a “linear + measurement” quantum program: an alternating sequence of CNOT operations and partial ZX measurements. @InProceedings{STOC24p1009, author = {James Bartusek and Zvika Brakerski and Vinod Vaikuntanathan}, title = {Quantum State Obfuscation from Classical Oracles}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1009--1017}, doi = {10.1145/3618260.3649673}, year = {2024}, } Publisher's Version 

Braverman, Mark 
STOC '24: "A New Information Complexity ..."
A New Information Complexity Measure for Multi-pass Streaming with Applications
Mark Braverman , Sumegha Garg , Qian Li , Shuo Wang , David P. Woodruff , and Jiapeng Zhang (Princeton University, USA; Rutgers University, USA; Shenzhen Research Institute of Big Data, China; Shanghai Jiao Tong University, China; Carnegie Mellon University, USA; University of Southern California, USA) We introduce a new notion of information complexity for multi-pass streaming problems and use it to resolve several important questions in data streams. In the coin problem, one sees a stream of n i.i.d. uniformly random bits and one would like to compute the majority with constant advantage. We show that any constant-pass algorithm must use Ω(logn) bits of memory, significantly extending an earlier Ω(logn) bit lower bound for single-pass algorithms of Braverman-Garg-Woodruff (FOCS, 2020). This also gives the first Ω(logn) bit lower bound for the problem of approximating a counter up to a constant factor in worst-case turnstile streams for more than one pass. In the needle problem, one either sees a stream of n i.i.d. uniform samples from a domain [t], or there is a randomly chosen needle α ∈ [t] for which each item independently is chosen to equal α with probability p, and is otherwise uniformly random in [t]. The problem of distinguishing these two cases is central to understanding the space complexity of the frequency moment estimation problem in random order streams. We show tight multi-pass space bounds for this problem for every p < 1/(√n log^{3} n), resolving an open question of Lovett and Zhang (FOCS, 2023); even for 1-pass our bounds are new. To show optimality, we improve both lower and upper bounds from existing results. Our information complexity framework significantly extends the toolkit for proving multi-pass streaming lower bounds, and we give a wide number of additional streaming applications of our lower bound techniques, including multi-pass lower bounds for ℓ_{p}-norm estimation, ℓ_{p}-point query and heavy hitters, and compressed sensing problems. 
@InProceedings{STOC24p1781, author = {Mark Braverman and Sumegha Garg and Qian Li and Shuo Wang and David P. Woodruff and Jiapeng Zhang}, title = {A New Information Complexity Measure for Multi-pass Streaming with Applications}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1781--1792}, doi = {10.1145/3618260.3649672}, year = {2024}, } Publisher's Version 
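The coin-problem lower bound above matches a trivial upper bound: a single counter in [−n, n] fits in O(log n) bits. A minimal sketch of that baseline (the function name is ours, not from the paper):

```python
import random

def majority_stream(bits):
    """Track the running difference (#ones - #zeros) of a bit stream.

    The counter stays in [-n, n], so it fits in O(log n) bits of memory;
    the paper's contribution is the matching Omega(log n) lower bound,
    which holds even for constant-pass algorithms.
    """
    diff = 0
    for b in bits:
        diff += 1 if b else -1
    return 1 if diff > 0 else 0

# A biased stream: each bit is 1 with probability 0.6.
stream = (1 if random.random() < 0.6 else 0 for _ in range(10_001))
guess = majority_stream(stream)
```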

Bravyi, Sergey 
STOC '24: "Classical Simulation of Peaked ..."
Classical Simulation of Peaked Shallow Quantum Circuits
Sergey Bravyi, David Gosset, and Yinchen Liu (IBM Research, USA; University of Waterloo, Canada; Institute for Quantum Computing, Canada; Perimeter Institute for Theoretical Physics, Canada) An n-qubit quantum circuit is said to be peaked if it has an output probability that is at least inverse-polynomially large as a function of n. We describe a classical algorithm with quasi-polynomial runtime n^{O(log n)} that approximately samples from the output distribution of a peaked constant-depth circuit. We give even faster algorithms for circuits composed of nearest-neighbor gates on a D-dimensional grid of qubits, with polynomial runtime n^{O(1)} if D=2 and almost-polynomial runtime n^{O(log log n)} for D>2. Our sampling algorithms can be used to estimate output probabilities of shallow circuits to within a given inverse-polynomial additive error, improving previously known methods. As a simple application, we obtain a quasi-polynomial algorithm to estimate the magnitude of the expected value of any Pauli observable in the output state of a shallow circuit (which may or may not be peaked). This is a dramatic improvement over the prior state-of-the-art algorithm, which had an exponential scaling in √n. @InProceedings{STOC24p561, author = {Sergey Bravyi and David Gosset and Yinchen Liu}, title = {Classical Simulation of Peaked Shallow Quantum Circuits}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {561--572}, doi = {10.1145/3618260.3649638}, year = {2024}, } Publisher's Version 

Bresler, Guy 
STOC '24: "On the Fourier Coefficients ..."
On the Fourier Coefficients of High-Dimensional Random Geometric Graphs
Kiril Bangachev and Guy Bresler (Massachusetts Institute of Technology, USA) The random geometric graph RGG(n,S^{d−1},p) is formed by sampling n i.i.d. vectors {V_{i}}_{i = 1}^{n} uniformly on S^{d−1} and placing an edge between pairs of vertices i and j for which ⟨V_{i},V_{j}⟩ ≥ τ_{d}^{p}, where τ_{d}^{p} is such that the expected density is p. We study the low-degree Fourier coefficients of the distribution RGG(n,S^{d−1},p) and its Gaussian analogue. Our main conceptual contribution is a novel two-step strategy for bounding Fourier coefficients which we believe is more widely applicable to studying latent space distributions. First, we localize the dependence among edges to few fragile edges. Second, we partition the space of latent vector configurations (S^{d−1})^{⊗ n} based on the set of fragile edges and, on each subset of configurations, we define a noise operator acting independently on edges not incident (in an appropriate sense) to fragile edges. We apply the resulting bounds to: 1) Settle the low-degree polynomial complexity of distinguishing spherical and Gaussian random geometric graphs from Erdős–Rényi, both in the case of observing a complete set of edges and in the non-adaptively chosen mask M model recently introduced by Mardia, Verchand, and Wein; 2) Exhibit a statistical-computational gap for distinguishing RGG and a planted coloring model in a regime when RGG is distinguishable from Erdős–Rényi; 3) Reprove known bounds on the second eigenvalue of random geometric graphs. @InProceedings{STOC24p549, author = {Kiril Bangachev and Guy Bresler}, title = {On the Fourier Coefficients of High-Dimensional Random Geometric Graphs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {549--560}, doi = {10.1145/3618260.3649676}, year = {2024}, } Publisher's Version 
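The model RGG(n,S^{d−1},p) described above is easy to sample directly. A sketch (choosing τ_{d}^{p} so the expected density is exactly p is done analytically in the paper, so this sketch simply takes the threshold as an input; function names are ours):

```python
import random, math

def sample_sphere(d):
    """Uniform point on S^{d-1}: normalize a standard Gaussian vector."""
    v = [random.gauss(0.0, 1.0) for _ in range(d)]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def rgg(n, d, tau):
    """Sample a random geometric graph: edge {i, j} iff <V_i, V_j> >= tau.

    tau plays the role of tau_d^p from the abstract; larger tau means
    sparser graphs.  Returns the edge set as pairs (i, j) with i < j.
    """
    vs = [sample_sphere(d) for _ in range(n)]
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            if sum(a * b for a, b in zip(vs[i], vs[j])) >= tau:
                edges.add((i, j))
    return edges
```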

Breuckmann, Nikolas P. 
STOC '24: "Circuit-to-Hamiltonian from ..."
Circuit-to-Hamiltonian from Tensor Networks and Fault Tolerance
Anurag Anshu, Nikolas P. Breuckmann, and Quynh T. Nguyen (Harvard University, USA; University of Bristol, United Kingdom) We define a map from an arbitrary quantum circuit to a local Hamiltonian whose ground state encodes the quantum computation. All previous maps relied on the Feynman-Kitaev construction, which introduces an ancillary "clock register" to track the computational steps. Our construction, on the other hand, relies on injective tensor networks with associated parent Hamiltonians, avoiding the introduction of a clock register. This comes at the cost of the ground state containing only a noisy version of the quantum computation, with independent stochastic noise. We can remedy this, making our construction robust, by using quantum fault tolerance. In addition to the stochastic noise, we show that any state with energy density exponentially small in the circuit depth encodes a noisy version of the quantum computation with adversarial noise. We also show that any "combinatorial state" with energy density polynomially small in depth encodes the quantum computation with adversarial noise. This serves as evidence that any state with energy density polynomially small in depth has a similar property. As an application, we show that contracting injective tensor networks to additive error is BQP-hard. We also discuss the implications of our construction for the quantum PCP conjecture, combined with an observation that QMA verification can be done in logarithmic depth. @InProceedings{STOC24p585, author = {Anurag Anshu and Nikolas P. Breuckmann and Quynh T. Nguyen}, title = {Circuit-to-Hamiltonian from Tensor Networks and Fault Tolerance}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {585--595}, doi = {10.1145/3618260.3649690}, year = {2024}, } Publisher's Version 

Bringmann, Karl 
STOC '24: "Knapsack with Small Items ..."
Knapsack with Small Items in Near-Quadratic Time
Karl Bringmann (Saarland University, Saarbrücken, Germany; MPI-INF, Germany) The Knapsack problem is one of the most fundamental NP-complete problems at the intersection of computer science, optimization, and operations research. A recent line of research worked towards understanding the complexity of pseudo-polynomial-time algorithms for Knapsack parameterized by the maximum item weight w_{max} and the number of items n. A conditional lower bound rules out that Knapsack can be solved in time O((n+w_{max})^{2−δ}) for any δ > 0 [Cygan, Mucha, Wegrzycki, Wlodarczyk’17; Künnemann, Paturi, Schneider’17]. This raised the question whether Knapsack can be solved in time Õ((n+w_{max})^{2}). This was open both for 0-1 Knapsack (where each item can be picked at most once) and Bounded Knapsack (where each item comes with a multiplicity). The quest of resolving this question led to algorithms that solve Bounded Knapsack in time Õ(n^{3} w_{max}^{2}) [Tamir’09], Õ(n^{2} w_{max}^{2}) and Õ(n w_{max}^{3}) [Bateni, Hajiaghayi, Seddighin, Stein’18], O(n^{2} w_{max}^{2}) and Õ(n w_{max}^{2}) [Eisenbrand and Weismantel’18], O(n + w_{max}^{3}) [Polak, Rohwedder, Wegrzycki’21], and very recently Õ(n + w_{max}^{12/5}) [Chen, Lian, Mao, Zhang’23]. In this paper we resolve this question by designing an algorithm for Bounded Knapsack with running time Õ(n + w_{max}^{2}), which is conditionally near-optimal. This resolves the question both for the classic 0-1 Knapsack problem and for the Bounded Knapsack problem. @InProceedings{STOC24p259, author = {Karl Bringmann}, title = {Knapsack with Small Items in Near-Quadratic Time}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {259--270}, doi = {10.1145/3618260.3649719}, year = {2024}, } Publisher's Version 
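For orientation, the classic pseudo-polynomial baseline that the line of work above improves on is the textbook dynamic program; this sketch is the standard O(n · capacity) 0-1 Knapsack DP, not the paper's Õ(n + w_max^2) algorithm:

```python
def knapsack_01(items, capacity):
    """Textbook O(n * capacity) dynamic program for 0-1 Knapsack.

    dp[c] = best achievable profit using total weight at most c.
    items is a list of (weight, profit) pairs.
    """
    dp = [0] * (capacity + 1)
    for weight, profit in items:
        # Iterate capacities downward so each item is used at most once.
        for c in range(capacity, weight - 1, -1):
            dp[c] = max(dp[c], dp[c - weight] + profit)
    return dp[capacity]
```

For Bounded Knapsack, the same recurrence applies after duplicating each item according to its multiplicity (or binary-splitting the multiplicities to stay efficient).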

Broughton, Michael 
STOC '24: "Learning Shallow Quantum Circuits ..."
Learning Shallow Quantum Circuits
Hsin-Yuan Huang, Yunchao Liu, Michael Broughton, Isaac Kim, Anurag Anshu, Zeph Landau, and Jarrod R. McClean (California Institute of Technology, USA; Google Quantum AI, USA; University of California at Berkeley, USA; University of California at Davis, USA; Harvard University, USA) Despite fundamental interest in learning quantum circuits, the existence of a computationally efficient algorithm for learning shallow quantum circuits remains an open question. Because shallow quantum circuits can generate distributions that are classically hard to sample from, existing learning algorithms do not apply. In this work, we present a polynomial-time classical algorithm for learning the description of any unknown n-qubit shallow quantum circuit U (with arbitrary unknown architecture) within a small diamond distance using single-qubit measurement data on the output states of U. We also provide a polynomial-time classical algorithm for learning the description of any unknown n-qubit state |ψ⟩ = U|0^{n}⟩ prepared by a shallow quantum circuit U (on a 2D lattice) within a small trace distance using single-qubit measurements on copies of |ψ⟩. Our approach uses a quantum circuit representation based on local inversions and a technique to combine these inversions. This circuit representation yields an optimization landscape that can be efficiently navigated and enables efficient learning of quantum circuits that are classically hard to simulate. @InProceedings{STOC24p1343, author = {Hsin-Yuan Huang and Yunchao Liu and Michael Broughton and Isaac Kim and Anurag Anshu and Zeph Landau and Jarrod R. McClean}, title = {Learning Shallow Quantum Circuits}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1343--1351}, doi = {10.1145/3618260.3649722}, year = {2024}, } Publisher's Version 

Buchbinder, Niv 
STOC '24: "Constrained Submodular Maximization ..."
Constrained Submodular Maximization via New Bounds for DR-Submodular Functions
Niv Buchbinder and Moran Feldman (Tel Aviv University, Israel; University of Haifa, Israel) Submodular maximization under various constraints is a fundamental problem studied continuously, in both computer science and operations research, since the late 1970’s. A central technique in this field is to approximately optimize the multilinear extension of the submodular objective, and then round the solution. The use of this technique requires a solver able to approximately maximize multilinear extensions. Following a long line of work, Buchbinder and Feldman (2019) described such a solver guaranteeing 0.385-approximation for down-closed constraints, while Oveis Gharan and Vondrák (2011) showed that no solver can guarantee better than 0.478-approximation. In this paper, we present a solver guaranteeing 0.401-approximation, which significantly reduces the gap between the best known solver and the inapproximability result. The design and analysis of our solver are based on a novel bound that we prove for DR-submodular functions. This bound improves over a previous bound due to Feldman et al. (2011) that is used by essentially all state-of-the-art results for constrained maximization of general submodular/DR-submodular functions. Hence, we believe that our new bound is likely to find many additional applications in related problems, and to be a key component for further improvement. @InProceedings{STOC24p1820, author = {Niv Buchbinder and Moran Feldman}, title = {Constrained Submodular Maximization via New Bounds for DR-Submodular Functions}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1820--1831}, doi = {10.1145/3618260.3649630}, year = {2024}, } Publisher's Version 

Cai, Yang 
STOC '24: "The Power of Two-Sided Recruitment ..."
The Power of Two-Sided Recruitment in Two-Sided Markets
Yang Cai, Christopher Liaw, Aranyak Mehta, and Mingfei Zhao (Yale University, USA; Google Research, USA) We consider the problem of maximizing the gains from trade (GFT) in two-sided markets. The seminal impossibility result by Myerson and Satterthwaite (1983) shows that even for bilateral trade, there is no individually rational (IR), Bayesian incentive compatible (BIC), and budget balanced (BB) mechanism that can achieve the full GFT. Moreover, the optimal BIC, IR, and BB mechanism that maximizes the GFT is known to be complex and heavily depends on the prior. In this paper, we pursue a Bulow-Klemperer-style question, i.e., does augmentation allow for prior-independent mechanisms to compete against the optimal mechanism? Our first main result shows that in the double auction setting with m i.i.d. buyers and n i.i.d. sellers, by augmenting O(1) buyers and sellers to the market, the GFT of a simple, dominant strategy incentive compatible (DSIC), and prior-independent mechanism in the augmented market is at least the optimal GFT in the original market, when the buyers’ distribution first-order stochastically dominates the sellers’ distribution. The mechanism we consider is a slight variant of the standard Trade Reduction mechanism due to McAfee (1992). For comparison, Babaioff, Goldner, and Gonczarowski (2020) showed that if one is restricted to augmenting only one side of the market, then n(m + 4√m) additional agents are sufficient for their mechanism to beat the original optimal and ⌊ log_{2} m ⌋ additional agents are necessary for any prior-independent mechanism. Next, we go beyond the i.i.d. setting and study the power of two-sided recruitment in more general markets. 
Our second main result is that for any ε > 0 and any set of O(1/ε) buyers and sellers where the buyers’ value exceeds the sellers’ value with constant probability, if we add these additional agents into any market with arbitrary correlations, the Trade Reduction mechanism obtains a (1−ε)-approximation of the GFT of the augmented market. Importantly, the newly recruited agents are agnostic to the original market. @InProceedings{STOC24p201, author = {Yang Cai and Christopher Liaw and Aranyak Mehta and Mingfei Zhao}, title = {The Power of Two-Sided Recruitment in Two-Sided Markets}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {201--212}, doi = {10.1145/3618260.3649669}, year = {2024}, } Publisher's Version 
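The abstract builds on McAfee's Trade Reduction mechanism; the paper uses a slight variant, but the standard version is simple enough to sketch (function name is ours):

```python
def trade_reduction(buyer_bids, seller_asks):
    """Sketch of McAfee's (1992) Trade Reduction mechanism.

    Sort buyers descending and sellers ascending, let k be the largest
    index with b_k >= s_k (the efficient trade size), and execute only
    the first k-1 trades: trading buyers pay b_k and trading sellers
    receive s_k.  Since b_k >= s_k the mechanism never runs a deficit,
    and sacrificing the k-th trade is what makes truthful bidding a
    dominant strategy.  Returns (trades executed, buyer price, seller price).
    """
    b = sorted(buyer_bids, reverse=True)
    s = sorted(seller_asks)
    k = 0
    while k < min(len(b), len(s)) and b[k] >= s[k]:
        k += 1
    if k == 0:
        return 0, None, None
    return k - 1, b[k - 1], s[k - 1]
```

For example, with bids [10, 8, 3] and asks [2, 5, 9], two trades are efficient (10 ≥ 2 and 8 ≥ 5), so the mechanism executes one trade at buyer price 8 and seller price 5.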

Cannon, Sarah 
STOC '24: "Sampling Balanced Forests ..."
Sampling Balanced Forests of Grids in Polynomial Time
Sarah Cannon, Wesley Pegden, and Jamie Tucker-Foltz (Claremont McKenna College, USA; Carnegie Mellon University, USA; Harvard University, USA) We prove that a polynomial fraction of the set of k-component forests in the m × n grid graph have equal numbers of vertices in each component, for any constant k. This resolves a conjecture of Charikar, Liu, Liu, and Vuong, and establishes the first provably polynomial-time algorithm for (exactly or approximately) sampling balanced grid graph partitions according to the spanning tree distribution, which weights each k-partition according to the product, across its k pieces, of the number of spanning trees of each piece. Our result follows from a careful analysis of the probability that a uniformly random spanning tree of the grid can be cut into balanced pieces. Beyond grids, we show that for a broad family of lattice-like graphs, we achieve balance up to any multiplicative (1 ± ε) constant with constant probability. More generally, we show that, with constant probability, components derived from uniform spanning trees can approximate any given partition of a planar region specified by Jordan curves. This implies polynomial-time algorithms for sampling approximately balanced tree-weighted partitions for lattice-like graphs. Our results have applications to understanding political districtings, where there is an underlying graph of indivisible geographic units that must be partitioned into k population-balanced connected subgraphs. In this setting, tree-weighted partitions have interesting geometric properties, and this has stimulated significant effort to develop methods to sample them. @InProceedings{STOC24p1676, author = {Sarah Cannon and Wesley Pegden and Jamie Tucker-Foltz}, title = {Sampling Balanced Forests of Grids in Polynomial Time}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1676--1687}, doi = {10.1145/3618260.3649699}, year = {2024}, } Publisher's Version 
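Sampling a uniformly random spanning tree, the first step of the tree-based partitioning pipeline discussed above, is classically done with Wilson's algorithm via loop-erased random walks; cutting the tree into balanced pieces, which the paper analyzes, comes after. A sketch (assuming an adjacency-list dict; function name is ours):

```python
import random

def wilsons_ust(adj, rng=random):
    """Wilson's algorithm: a uniformly random spanning tree of a
    connected graph, returned as parent pointers toward a root.

    adj maps each vertex to a list of its neighbors.
    """
    vertices = list(adj)
    in_tree = {vertices[0]}          # first vertex is the root
    parent = {}
    for v in vertices[1:]:
        if v in in_tree:
            continue
        # Random walk from v until hitting the tree, recording the
        # LAST exit taken from each vertex (this erases loops).
        u, last_exit = v, {}
        while u not in in_tree:
            nxt = rng.choice(adj[u])
            last_exit[u] = nxt
            u = nxt
        # Retrace the loop-erased path and attach it to the tree.
        u = v
        while u not in in_tree:
            parent[u] = last_exit[u]
            in_tree.add(u)
            u = last_exit[u]
    return parent
```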

Cao, Nairen 
STOC '24: "Understanding the Cluster ..."
Understanding the Cluster Linear Program for Correlation Clustering
Nairen Cao, Vincent Cohen-Addad, Euiwoong Lee, Shi Li, Alantha Newman, and Lukas Vogl (Boston College, USA; Google Research, France; University of Michigan, USA; Nanjing University, China; CNRS - Université Grenoble Alpes, France; EPFL, Lausanne, Switzerland) In the classic Correlation Clustering problem introduced by Bansal, Blum, and Chawla (FOCS 2002), the input is a complete graph where edges are labeled either + or −, and the goal is to find a partition of the vertices that minimizes the sum of the + edges across parts plus the sum of the − edges within parts. In recent years, Chawla, Makarychev, Schramm and Yaroslavtsev (STOC 2015) gave a 2.06-approximation by providing a near-optimal rounding of the standard LP, and Cohen-Addad, Lee, Li, and Newman (FOCS 2022, 2023) finally bypassed the integrality gap of 2 for this LP, giving a 1.73-approximation for the problem. While introducing new ideas for Correlation Clustering, their algorithm is more complicated than typical approximation algorithms in the following two aspects: (1) It is based on two different relaxations with separate rounding algorithms connected by the round-or-cut procedure. (2) Each of the rounding algorithms has to separately handle seemingly inevitable correlated rounding errors, coming from correlated rounding of Sherali-Adams and other strong LP relaxations. In order to create a simple and unified framework for Correlation Clustering similar to those for typical approximate optimization tasks, we propose the cluster LP as a strong linear program that might tightly capture the approximability of Correlation Clustering. It unifies all the previous relaxations for the problem. It is exponential-sized, but we show that it can be (1+ε)-approximately solved in polynomial time for any ε > 0, providing the framework for designing rounding algorithms without worrying about correlated rounding errors; these errors are handled uniformly in solving the relaxation. 
We demonstrate the power of the cluster LP by presenting a simple rounding algorithm, and providing two analyses, one analytically proving a 1.49-approximation and the other solving a factor-revealing SDP to show a 1.437-approximation. Both proofs introduce principled methods by which to analyze the performance of the algorithm, resulting in a significantly improved approximation guarantee. Finally, we prove an integrality gap of 4/3 for the cluster LP, showing that our 1.437 upper bound cannot be drastically improved. Our gap instance directly inspires an improved NP-hardness of approximation with a ratio 24/23 ≈ 1.042; no explicit hardness ratio was known before. @InProceedings{STOC24p1605, author = {Nairen Cao and Vincent Cohen-Addad and Euiwoong Lee and Shi Li and Alantha Newman and Lukas Vogl}, title = {Understanding the Cluster Linear Program for Correlation Clustering}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1605--1616}, doi = {10.1145/3618260.3649749}, year = {2024}, } Publisher's Version Info 
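The Correlation Clustering objective minimized above (disagreements: + edges cut between clusters plus − edges kept within a cluster) is straightforward to evaluate for a candidate partition; a minimal sketch, with function and parameter names of our choosing:

```python
def disagreements(plus_edges, minus_edges, labels):
    """Correlation Clustering objective on a complete signed graph.

    plus_edges / minus_edges are lists of vertex pairs; labels maps each
    vertex to its cluster.  Counts + edges whose endpoints land in
    different clusters, plus - edges whose endpoints share a cluster.
    """
    cost = 0
    for u, v in plus_edges:
        if labels[u] != labels[v]:
            cost += 1
    for u, v in minus_edges:
        if labels[u] == labels[v]:
            cost += 1
    return cost
```

An approximation algorithm for the problem outputs a labeling whose disagreement count is within the stated factor (e.g. 1.437 via the cluster LP) of the minimum over all partitions.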

Caputo, Pietro 
STOC '24: "Nonlinear Dynamics for the ..."
Nonlinear Dynamics for the Ising Model
Pietro Caputo and Alistair Sinclair (University of Rome III, Italy; University of California at Berkeley, USA) We introduce and analyze a natural class of nonlinear dynamics for spin systems such as the Ising model. This class of dynamics is based on the framework of mass action kinetics, which models the evolution of systems of entities under pairwise interactions, and captures a number of important nonlinear models from various fields, including chemical reaction networks, Boltzmann’s model of an ideal gas, recombination in population genetics, and genetic algorithms. In the context of spin systems, it is a natural generalization of linear dynamics based on Markov chains, such as Glauber dynamics and block dynamics, which are by now well understood. However, the inherent nonlinearity makes the dynamics much harder to analyze, and rigorous quantitative results so far are limited to processes which converge to essentially trivial stationary distributions that are product measures. In this paper we provide the first quantitative convergence analysis for natural nonlinear dynamics in a combinatorial setting where the stationary distribution contains nontrivial correlations, namely spin systems at high temperatures. We prove that nonlinear versions of both the Glauber dynamics and the block dynamics converge to the Gibbs distribution of the Ising model (with given external fields) in times O(n log n) and O(log n) respectively, where n is the size of the underlying graph (number of spins). Given the lack of general analytical methods for such nonlinear systems, our analysis is unconventional, and combines tools such as information percolation (due in the linear setting to Lubetzky and Sly), a novel coupling of the Ising model with Erdős–Rényi random graphs, and non-traditional branching processes augmented by a “fragmentation” process. Our results extend immediately to any spin system with a finite number of spins and bounded interactions. 
@InProceedings{STOC24p515, author = {Pietro Caputo and Alistair Sinclair}, title = {Nonlinear Dynamics for the Ising Model}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {515--526}, doi = {10.1145/3618260.3649759}, year = {2024}, } Publisher's Version 

Casacuberta, Sílvia 
STOC '24: "Complexity-Theoretic Implications ..."
Complexity-Theoretic Implications of Multicalibration
Sílvia Casacuberta, Cynthia Dwork, and Salil Vadhan (University of Oxford, United Kingdom; Harvard University, USA) We present connections between the recent literature on multi-group fairness for prediction algorithms and classical results in computational complexity. Multi-accurate predictors are correct in expectation on each member of an arbitrary collection of pre-specified sets. Multi-calibrated predictors satisfy a stronger condition: they are calibrated on each set in the collection. Multi-accuracy is equivalent to a regularity notion for functions defined by Trevisan, Tulsiani, and Vadhan (2009). They showed that, given a class F of (possibly simple) functions, an arbitrarily complex function g can be approximated by a low-complexity function h that makes a small number of oracle calls to members of F, where the notion of approximation requires that h cannot be distinguished from g by members of F. This complexity-theoretic Regularity Lemma is known to have implications in different areas, including in complexity theory, additive number theory, information theory, graph theory, and cryptography. Starting from the stronger notion of multi-calibration, we obtain stronger and more general versions of a number of applications of the Regularity Lemma, including the Hardcore Lemma, the Dense Model Theorem, and the equivalence of conditional pseudo-min-entropy and unpredictability. For example, we show that every Boolean function (regardless of its hardness) has a small collection of disjoint hardcore sets, where the sizes of those hardcore sets are related to how balanced the function is on corresponding pieces of an efficient partition of the domain. @InProceedings{STOC24p1071, author = {Sílvia Casacuberta and Cynthia Dwork and Salil Vadhan}, title = {Complexity-Theoretic Implications of Multicalibration}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1071--1082}, doi = {10.1145/3618260.3649748}, year = {2024}, } Publisher's Version 

Castiglioni, Matteo 
STOC '24: "No-Regret Learning in Bilateral ..."
No-Regret Learning in Bilateral Trade via Global Budget Balance
Martino Bernasconi, Matteo Castiglioni, Andrea Celli, and Federico Fusco (Bocconi University, Italy; Politecnico di Milano, Italy; Sapienza University of Rome, Italy) Bilateral trade models the problem of intermediating between two rational agents — a seller and a buyer — both characterized by a private valuation for an item they want to trade. We study the online learning version of the problem, in which at each time step a new seller and buyer arrive and the learner has to set prices for them without any knowledge about their (adversarially generated) valuations. In this setting, known impossibility results rule out the existence of no-regret algorithms when budget balance has to be enforced at each time step. In this paper, we introduce the notion of global budget balance, which only requires the learner to fulfill budget balance over the entire time horizon. Under this natural relaxation, we provide the first no-regret algorithms for adversarial bilateral trade under various feedback models. First, we show that in the full-feedback model, the learner can guarantee Õ(√T) regret against the best fixed prices in hindsight, and that this bound is optimal up to polylogarithmic terms. Second, we provide a learning algorithm guaranteeing an Õ(T^{3/4}) regret upper bound with one-bit feedback, which we complement with an Ω(T^{5/7}) lower bound that holds even in the two-bit feedback model. Finally, we introduce and analyze an alternative benchmark that is provably stronger than the best fixed prices in hindsight and is inspired by the literature on bandits with knapsacks. @InProceedings{STOC24p247, author = {Martino Bernasconi and Matteo Castiglioni and Andrea Celli and Federico Fusco}, title = {No-Regret Learning in Bilateral Trade via Global Budget Balance}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {247--258}, doi = {10.1145/3618260.3649653}, year = {2024}, } Publisher's Version 

Cavallaro, Dario Giuliano 
STOC '24: "Edge-Disjoint Paths in Eulerian ..."
Edge-Disjoint Paths in Eulerian Digraphs
Dario Giuliano Cavallaro, Ken-ichi Kawarabayashi, and Stephan Kreutzer (TU Berlin, Berlin, Germany; National Institute of Informatics, Tokyo, Japan; University of Tokyo, Tokyo, Japan) Disjoint paths problems are among the most prominent problems in combinatorial optimisation. The Edge- as well as the Vertex-Disjoint Paths problem are NP-complete, both on directed and undirected graphs. But on undirected graphs, Robertson and Seymour developed an algorithm for both problems that runs in cubic time for every fixed number p of terminal pairs, i.e. they proved that the problem is fixed-parameter tractable on undirected graphs. This is in sharp contrast to the situation on directed graphs, where Fortune, Hopcroft, and Wyllie proved that both problems are NP-complete already for p=2 terminal pairs. In this paper, we study the Edge-Disjoint Paths problem (EDPP) on Eulerian digraphs, a problem that has received significant attention in the literature. Marx proved that the Eulerian EDPP is NP-complete even on structurally very simple Eulerian digraphs. On the positive side, polynomial time algorithms are known only for very restricted cases, such as p ≤ 3 or where the demand graph is a union of two stars. The question for which values of p the Edge-Disjoint Paths problem can be solved in polynomial time on Eulerian digraphs was already raised by Frank, Ibaraki, and Nagamochi almost 30 years ago. But despite considerable effort, the complexity of the problem is still wide open and is considered to be the main open problem in this area. In this paper, we solve this long-open problem by showing that the Edge-Disjoint Paths problem is fixed-parameter tractable on Eulerian digraphs in general (parameterized by the number of terminal pairs). The algorithm itself is reasonably simple, but the proof of its correctness requires a deep structural analysis of Eulerian digraphs. 
@InProceedings{STOC24p704, author = {Dario Giuliano Cavallaro and Ken-ichi Kawarabayashi and Stephan Kreutzer}, title = {Edge-Disjoint Paths in Eulerian Digraphs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {704--715}, doi = {10.1145/3618260.3649758}, year = {2024}, } Publisher's Version 

Celli, Andrea 
STOC '24: "No-Regret Learning in Bilateral ..."
No-Regret Learning in Bilateral Trade via Global Budget Balance
Martino Bernasconi, Matteo Castiglioni, Andrea Celli, and Federico Fusco (Bocconi University, Italy; Politecnico di Milano, Italy; Sapienza University of Rome, Italy) Bilateral trade models the problem of intermediating between two rational agents — a seller and a buyer — both characterized by a private valuation for an item they want to trade. We study the online learning version of the problem, in which at each time step a new seller and buyer arrive and the learner has to set prices for them without any knowledge about their (adversarially generated) valuations. In this setting, known impossibility results rule out the existence of no-regret algorithms when budget balance has to be enforced at each time step. In this paper, we introduce the notion of global budget balance, which only requires the learner to fulfill budget balance over the entire time horizon. Under this natural relaxation, we provide the first no-regret algorithms for adversarial bilateral trade under various feedback models. First, we show that in the full-feedback model, the learner can guarantee Õ(√T) regret against the best fixed prices in hindsight, and that this bound is optimal up to polylogarithmic terms. Second, we provide a learning algorithm guaranteeing an Õ(T^{3/4}) regret upper bound with one-bit feedback, which we complement with an Ω(T^{5/7}) lower bound that holds even in the two-bit feedback model. Finally, we introduce and analyze an alternative benchmark that is provably stronger than the best fixed prices in hindsight and is inspired by the literature on bandits with knapsacks. @InProceedings{STOC24p247, author = {Martino Bernasconi and Matteo Castiglioni and Andrea Celli and Federico Fusco}, title = {No-Regret Learning in Bilateral Trade via Global Budget Balance}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {247--258}, doi = {10.1145/3618260.3649653}, year = {2024}, } Publisher's Version 

Cen, Ruoxu 
STOC '24: "Hypergraph Unreliability in ..."
Hypergraph Unreliability in Quasi-Polynomial Time
Ruoxu Cen, Jason Li, and Debmalya Panigrahi (Duke University, USA; Carnegie Mellon University, USA) The hypergraph unreliability problem asks for the probability that a hypergraph gets disconnected when every hyperedge fails independently with a given probability. For graphs, the unreliability problem has been studied over many decades, and multiple fully polynomial-time approximation schemes are known, starting with the work of Karger (STOC 1995). In contrast, prior to this work, no nontrivial result was known for hypergraphs (of arbitrary rank). In this paper, we give quasi-polynomial time approximation schemes for the hypergraph unreliability problem. For any fixed ε ∈ (0, 1), we first give a (1+ε)-approximation algorithm that runs in m^{O(log n)} time on an m-hyperedge, n-vertex hypergraph. Then, we improve the running time to m · n^{O(log^{2} n)} with an additional exponentially small additive term in the approximation. @InProceedings{STOC24p1700, author = {Ruoxu Cen and Jason Li and Debmalya Panigrahi}, title = {Hypergraph Unreliability in Quasi-Polynomial Time}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1700--1711}, doi = {10.1145/3618260.3649753}, year = {2024}, } Publisher's Version 
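The quantity being approximated above is easy to state operationally: delete each hyperedge independently and check connectivity. A naive Monte Carlo baseline makes this concrete (function names are ours; this is only accurate when the unreliability is not too small, which is exactly the regime the paper's schemes do not need to assume):

```python
import random

def unreliability_mc(n, hyperedges, q, trials=2000, rng=random):
    """Naive Monte Carlo estimate of hypergraph unreliability:
    each hyperedge fails independently with probability q; estimate the
    probability that the surviving hyperedges leave vertices 0..n-1
    disconnected.
    """
    def connected(edges):
        if n <= 1:
            return True
        parent = list(range(n))          # union-find over vertices
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        comps = n
        for e in edges:
            it = iter(e)
            r = find(next(it))
            for v in it:
                rv = find(v)
                if rv != r:
                    parent[rv] = r
                    comps -= 1
        return comps == 1

    hits = 0
    for _ in range(trials):
        surviving = [e for e in hyperedges if rng.random() >= q]
        if not connected(surviving):
            hits += 1
    return hits / trials
```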

Cesa-Bianchi, Nicolò 
STOC '24: "The Role of Transparency in ..."
The Role of Transparency in Repeated First-Price Auctions with Unknown Valuations
Nicolò Cesa-Bianchi, Tommaso Cesari, Roberto Colomboni, Federico Fusco, and Stefano Leonardi (University of Milan, Italy; Politecnico di Milano, Italy; University of Ottawa, Canada; Italian Institute of Technology, Italy; Sapienza University of Rome, Italy) We study the problem of regret minimization for a single bidder in a sequence of first-price auctions where the bidder discovers the item’s value only if the auction is won. Our main contribution is a complete characterization, up to logarithmic factors, of the minimax regret in terms of the auction’s transparency, which controls the amount of information on competing bids disclosed by the auctioneer at the end of each auction. Our results hold under different assumptions (stochastic, adversarial, and their smoothed variants) on the environment generating the bidder’s valuations and competing bids. These minimax rates reveal how the interplay between transparency and the nature of the environment affects how fast one can learn to bid optimally in first-price auctions. @InProceedings{STOC24p225, author = {Nicolò Cesa-Bianchi and Tommaso Cesari and Roberto Colomboni and Federico Fusco and Stefano Leonardi}, title = {The Role of Transparency in Repeated First-Price Auctions with Unknown Valuations}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {225--236}, doi = {10.1145/3618260.3649658}, year = {2024}, } Publisher's Version 

Cesari, Tommaso 
STOC '24: "The Role of Transparency in ..."
The Role of Transparency in Repeated First-Price Auctions with Unknown Valuations
Nicolò Cesa-Bianchi, Tommaso Cesari, Roberto Colomboni, Federico Fusco, and Stefano Leonardi (University of Milan, Italy; Politecnico di Milano, Italy; University of Ottawa, Canada; Italian Institute of Technology, Italy; Sapienza University of Rome, Italy) We study the problem of regret minimization for a single bidder in a sequence of first-price auctions where the bidder discovers the item’s value only if the auction is won. Our main contribution is a complete characterization, up to logarithmic factors, of the minimax regret in terms of the auction’s transparency, which controls the amount of information on competing bids disclosed by the auctioneer at the end of each auction. Our results hold under different assumptions (stochastic, adversarial, and their smoothed variants) on the environment generating the bidder’s valuations and competing bids. These minimax rates reveal how the interplay between transparency and the nature of the environment affects how fast one can learn to bid optimally in first-price auctions. @InProceedings{STOC24p225, author = {Nicolò Cesa-Bianchi and Tommaso Cesari and Roberto Colomboni and Federico Fusco and Stefano Leonardi}, title = {The Role of Transparency in Repeated First-Price Auctions with Unknown Valuations}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {225--236}, doi = {10.1145/3618260.3649658}, year = {2024}, } Publisher's Version 

Chan, Siu On 
STOC '24: "How Random CSPs Fool Hierarchies ..."
How Random CSPs Fool Hierarchies
Siu On Chan, Hiu Tsun Ng, and Sijin Peng (Unaffiliated, Hong Kong, China; Chinese University of Hong Kong, China; Tsinghua University, China) Relaxations for the constraint satisfaction problem (CSP) include bounded width, linear program (LP), semidefinite program (SDP), affine integer program (AIP), and the combined LP+AIP of Brakensiek, Guruswami, Wrochna, and Živný (SICOMP 2020). Tightening relaxations systematically leads to hierarchies and stronger algorithms. For the LP+AIP hierarchy, a constant-level lower bound for approximate graph coloring was given by Ciardo and Živný (STOC 2023). We prove the first linear (and hence optimal) level lower bound for LP+AIP and its stronger variant, SDP+AIP. For each hierarchy, our bound holds for random instances of a broad class of CSPs that we call 𝜏-wise neutral. We extend to other hierarchies the LP lower bound techniques in Benabbas, Georgiou, Magen, and Tulsiani (ToC 2012) and Kothari, Mori, O’Donnell, and Witmer (STOC 2017), and simplify the SDP solution construction in the latter. @InProceedings{STOC24p1944, author = {Siu On Chan and Hiu Tsun Ng and Sijin Peng}, title = {How Random CSPs Fool Hierarchies}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1944--1955}, doi = {10.1145/3618260.3649613}, year = {2024}, } Publisher's Version 

Chan, Swee Hong 
STOC '24: "Equality Cases of the Alexandrov–Fenchel ..."
Equality Cases of the Alexandrov–Fenchel Inequality Are Not in the Polynomial Hierarchy
Swee Hong Chan and Igor Pak (Rutgers University, USA; University of California at Los Angeles, USA) Describing the equality conditions of the Alexandrov–Fenchel inequality has been a major open problem for decades. We prove that for a natural class of convex polytopes, the equality cases of the AF inequality are not in PH unless the polynomial hierarchy collapses to a finite level. This is the first hardness result for the problem. The proof involves Stanley’s order polytopes and a delicate analysis of linear extensions of finite posets, with some number-theoretic results added to the mix. We also give applications to combinatorial interpretations of the defect of Stanley’s log-concave inequality for the number of linear extensions. @InProceedings{STOC24p875, author = {Swee Hong Chan and Igor Pak}, title = {Equality Cases of the Alexandrov–Fenchel Inequality Are Not in the Polynomial Hierarchy}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {875--883}, doi = {10.1145/3618260.3649646}, year = {2024}, } Publisher's Version 

Chase, Zachary 
STOC '24: "Local Borsuk-Ulam, Stability, ..."
Local Borsuk-Ulam, Stability, and Replicability
Zachary Chase, Bogdan Chornomaz, Shay Moran, and Amir Yehudayoff (Technion, Israel; Google Research, Israel; University of Copenhagen, Copenhagen, Denmark) We use and adapt the Borsuk-Ulam Theorem from topology to derive limitations on list-replicable and globally stable learning algorithms. We further demonstrate the applicability of our methods in combinatorics and topology. We show that, besides trivial cases, both list-replicable and globally stable learning are impossible in the agnostic PAC setting. This is in contrast with the realizable case, where it is known that any class with a finite Littlestone dimension can be learned by such algorithms. In the realizable PAC setting, we sharpen previous impossibility results and broaden their scope. Specifically, we establish optimal bounds for list replicability and global stability numbers in finite classes. This provides an exponential improvement over previous works and implies an exponential separation from the Littlestone dimension. We further introduce lower bounds for weak learners, i.e., learners that are only marginally better than random guessing. Lower bounds from previous works apply only to stronger learners. To offer a broader and more comprehensive view of our topological approach, we prove a local variant of the Borsuk-Ulam theorem in topology and a result in combinatorics concerning Kneser colorings. In combinatorics, we prove that if c is a coloring of all nonempty subsets of [n] such that disjoint sets have different colors, then there is a chain of subsets that receives at least 1 + ⌊n/2⌋ colors (this bound is sharp). In topology, we prove, e.g., that for any open antipodal-free cover of the d-dimensional sphere, there is a point x that belongs to at least t = ⌈(d+3)/2⌉ sets. 
@InProceedings{STOC24p1769, author = {Zachary Chase and Bogdan Chornomaz and Shay Moran and Amir Yehudayoff}, title = {Local Borsuk-Ulam, Stability, and Replicability}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1769--1780}, doi = {10.1145/3618260.3649632}, year = {2024}, } Publisher's Version 

Chatterjee, Abhranil 
STOC '24: "Black-Box Identity Testing ..."
Black-Box Identity Testing of Noncommutative Rational Formulas in Deterministic Quasipolynomial Time
V. Arvind, Abhranil Chatterjee, and Partha Mukhopadhyay (Institute of Mathematical Sciences, India; Chennai Mathematical Institute, India; Indian Statistical Institute, Kolkata, India) Rational Identity Testing (RIT) is the decision problem of determining whether or not a noncommutative rational formula computes zero in the free skew field. It admits a deterministic polynomial-time white-box algorithm [Garg, Gurvits, Oliveira, and Wigderson (2016); Ivanyos, Qiao, Subrahmanyam (2018); Hamada and Hirai (2021)], and a randomized polynomial-time algorithm [Derksen and Makam (2017)] in the black-box setting, via singularity testing of linear matrices over the free skew field. Indeed, a randomized NC algorithm for RIT in the white-box setting follows from the result of Derksen and Makam (2017). Designing an efficient deterministic black-box algorithm for RIT and understanding the parallel complexity of RIT are major open problems in this area. Despite being open since the work of Garg, Gurvits, Oliveira, and Wigderson (2016), these questions have seen limited progress. In fact, the only known result in this direction is the construction of a quasipolynomial-size hitting set for rational formulas of only inversion height two [Arvind, Chatterjee, and Mukhopadhyay (2022)]. In this paper, we significantly improve the black-box complexity of this problem and obtain the first quasipolynomial-size hitting set for all rational formulas of polynomial size. Our construction also yields the first deterministic quasi-NC upper bound for RIT in the white-box setting. @InProceedings{STOC24p106, author = {V. Arvind and Abhranil Chatterjee and Partha Mukhopadhyay}, title = {Black-Box Identity Testing of Noncommutative Rational Formulas in Deterministic Quasipolynomial Time}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {106--117}, doi = {10.1145/3618260.3649693}, year = {2024}, } Publisher's Version 

Chen, Chi-Fang 
STOC '24: "Local Minima in Quantum Systems ..."
Local Minima in Quantum Systems
Chi-Fang Chen, Hsin-Yuan Huang, John Preskill, and Leo Zhou (California Institute of Technology, USA; AWS Center for Quantum Computing, USA; Google Quantum AI, USA; Massachusetts Institute of Technology, USA) Finding ground states of quantum many-body systems is known to be hard for both classical and quantum computers. As a result, when Nature cools a quantum system in a low-temperature thermal bath, the ground state cannot always be found efficiently. Instead, Nature finds a local minimum of the energy. In this work, we study the problem of finding local minima in quantum systems under thermal perturbations. While local minima are much easier to find than ground states, we show that finding a local minimum is computationally hard for classical computers, even when the task is to output a single-qubit observable at any local minimum. In contrast, we prove that a quantum computer can always find a local minimum efficiently using a thermal gradient descent algorithm that mimics the cooling process in Nature. To establish the classical hardness of finding local minima, we consider a family of two-dimensional Hamiltonians such that any problem solvable by polynomial-time quantum algorithms can be reduced to finding local minima of these Hamiltonians. Therefore, cooling systems to local minima is universal for quantum computation, and, assuming quantum computation is more powerful than classical computation, finding local minima is classically hard and quantumly easy. @InProceedings{STOC24p1323, author = {Chi-Fang Chen and Hsin-Yuan Huang and John Preskill and Leo Zhou}, title = {Local Minima in Quantum Systems}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1323--1330}, doi = {10.1145/3618260.3649675}, year = {2024}, } Publisher's Version 

Chen, Hongjie 
STOC '24: "Private Graphon Estimation ..."
Private Graphon Estimation via Sum-of-Squares
Hongjie Chen, Jingqiu Ding, Tommaso D'Orsi, Yiding Hua, Chih-Hung Liu, and David Steurer (ETH Zurich, Switzerland; Bocconi University, Italy; National Taiwan University, Taiwan) We develop the first pure node-differentially-private algorithms for learning stochastic block models and for graphon estimation with polynomial running time for any constant number of blocks. The statistical utility guarantees match those of the previous best information-theoretic (exponential-time) node-private mechanisms for these problems. The algorithm is based on an exponential mechanism for a score function defined in terms of a sum-of-squares relaxation whose level depends on the number of blocks. The key ingredients of our results are (1) a characterization of the distance between the block graphons in terms of a quadratic optimization over the polytope of doubly stochastic matrices, (2) a general sum-of-squares convergence result for polynomial optimization over arbitrary polytopes, and (3) a general approach to perform Lipschitz extensions of score functions as part of the sum-of-squares algorithmic paradigm. @InProceedings{STOC24p172, author = {Hongjie Chen and Jingqiu Ding and Tommaso D'Orsi and Yiding Hua and Chih-Hung Liu and David Steurer}, title = {Private Graphon Estimation via Sum-of-Squares}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {172--182}, doi = {10.1145/3618260.3649643}, year = {2024}, } Publisher's Version 

Chen, Li 
STOC '24: "Almost-Linear Time Algorithms ..."
Almost-Linear Time Algorithms for Incremental Graphs: Cycle Detection, SCCs, s-t Shortest Path, and Minimum-Cost Flow
Li Chen, Rasmus Kyng, Yang P. Liu, Simon Meierhans, and Maximilian Probst Gutenberg (Carnegie Mellon University, USA; ETH Zurich, Switzerland; Institute for Advanced Study, Princeton, USA) We give the first almost-linear time algorithms for several problems in incremental graphs, including cycle detection, strongly connected component maintenance, s-t shortest path, maximum flow, and minimum-cost flow. To solve these problems, we give a deterministic data structure that returns a m^{o(1)}-approximate minimum-ratio cycle in fully dynamic graphs in amortized m^{o(1)} time per update. Combining this with the interior point method framework of Brand-Liu-Sidford (STOC 2023) gives the first almost-linear time algorithm for deciding the first update in an incremental graph after which the cost of the minimum-cost flow attains value at most some given threshold F. By rather direct reductions to minimum-cost flow, we are then able to solve the problems in incremental graphs mentioned above. Our new data structure also leads to a modular and deterministic almost-linear time algorithm for minimum-cost flow by removing the need for complicated modeling of a restricted adversary, in contrast to the recent randomized and deterministic algorithms for minimum-cost flow in Chen-Kyng-Liu-Peng-Probst Gutenberg-Sachdeva (FOCS 2022) and Brand-Chen-Kyng-Liu-Peng-Probst Gutenberg-Sachdeva-Sidford (FOCS 2023). At a high level, our algorithm dynamizes the ℓ_{1} oblivious routing of Rozhoň-Grunau-Haeupler-Zuzic-Li (STOC 2022), and develops a method to extract an approximate minimum-ratio cycle from the structure of the oblivious routing. To maintain the oblivious routing, we use tools from concurrent work of Kyng-Meierhans-Probst Gutenberg (STOC 2024), which designed vertex sparsifiers for shortest paths, in order to maintain a sparse neighborhood cover in fully dynamic graphs. 
To find a cycle, we first show that an approximate minimum-ratio cycle can be represented as a fundamental cycle on a small set of trees resulting from the oblivious routing. Then, we find a cycle whose quality is comparable to the best tree cycle. This final cycle query step involves vertex and edge sparsification procedures reminiscent of the techniques introduced in Chen-Kyng-Liu-Peng-Probst Gutenberg-Sachdeva (FOCS 2022), but crucially requires a more powerful dynamic spanner, which can handle far more edge insertions than prior work. We build such a spanner via a construction that hearkens back to the classic greedy spanner algorithm of Althöfer-Das-Dobkin-Joseph-Soares (Discrete & Computational Geometry 1993). @InProceedings{STOC24p1165, author = {Li Chen and Rasmus Kyng and Yang P. Liu and Simon Meierhans and Maximilian Probst Gutenberg}, title = {Almost-Linear Time Algorithms for Incremental Graphs: Cycle Detection, SCCs, s-t Shortest Path, and Minimum-Cost Flow}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1165--1173}, doi = {10.1145/3618260.3649745}, year = {2024}, } Publisher's Version 
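For scale: the incremental cycle-detection task above has a trivial O(m)-per-update baseline, which the paper's amortized m^{o(1)} update time dramatically improves. A minimal sketch of that baseline (the factory name `make_incremental_cycle_detector` is our own illustrative choice, not the paper's data structure):

```python
def make_incremental_cycle_detector():
    """Naive baseline for incremental cycle detection: on each insertion of
    edge u->v, DFS from v to see whether u is reachable, i.e. whether the
    insertion would close a directed cycle. O(m) per update, in contrast
    with the amortized m^{o(1)} update time achieved in the paper."""
    adj = {}
    def insert(u, v):
        # Returns True iff adding u->v would create a directed cycle
        # (in which case the edge is reported and not inserted).
        stack, seen = [v], {v}
        while stack:
            x = stack.pop()
            if x == u:
                return True
            for y in adj.get(x, ()):
                if y not in seen:
                    seen.add(y)
                    stack.append(y)
        adj.setdefault(u, []).append(v)
        return False
    return insert
```

In the incremental setting the interesting output is exactly the first insertion for which this returns True; the paper obtains it (and much more) without paying a full graph search per update.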

Chen, Lijie 
STOC '24: "Symmetric Exponential Time ..."
Symmetric Exponential Time Requires Near-Maximum Circuit Size
Lijie Chen, Shuichi Hirahara, and Hanlin Ren (University of California at Berkeley, USA; National Institute of Informatics, Tokyo, Japan; University of Oxford, United Kingdom) We show that there is a language in S_{2}E/_{1} (symmetric exponential time with one bit of advice) with circuit complexity at least 2^{n}/n. In particular, the above also implies the same near-maximum circuit lower bounds for the classes Σ_{2}E, (Σ_{2}E∩Π_{2}E)/_{1}, and ZPE^{NP}/_{1}. Previously, only “half-exponential” circuit lower bounds for these complexity classes were known, and the smallest complexity class known to require exponential circuit complexity was Δ_{3}E = E^{Σ_{2}P} (Miltersen, Vinodchandran, and Watanabe, COCOON’99). Our circuit lower bounds are corollaries of an unconditional zero-error pseudodeterministic algorithm with an NP oracle and one bit of advice (FZPP^{NP}/_{1}) that solves the range avoidance problem infinitely often. This algorithm also implies unconditional infinitely-often pseudodeterministic FZPP^{NP}/_{1} constructions for Ramsey graphs, rigid matrices, two-source extractors, linear codes, and K^{poly}-random strings with nearly optimal parameters. Our proofs relativize. The two main technical ingredients are (1) Korten’s P^{NP} reduction from the range avoidance problem to constructing hard truth tables (FOCS’21), which was in turn inspired by a result of Jeřábek on provability in Bounded Arithmetic (Ann. Pure Appl. Log. 2004); and (2) the recent iterative win-win paradigm of Chen, Lu, Oliveira, Ren, and Santhanam (FOCS’23). @InProceedings{STOC24p1990, author = {Lijie Chen and Shuichi Hirahara and Hanlin Ren}, title = {Symmetric Exponential Time Requires Near-Maximum Circuit Size}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1990--1999}, doi = {10.1145/3618260.3649624}, year = {2024}, } Publisher's Version 

Chen, Lin 
STOC '24: "A Nearly Quadratic-Time FPTAS ..."
A Nearly Quadratic-Time FPTAS for Knapsack
Lin Chen, Jiayi Lian, Yuchen Mao, and Guochuan Zhang (Zhejiang University, China) We investigate the classic Knapsack problem and propose a fully polynomial-time approximation scheme (FPTAS) that runs in O(n + (1/ε)^{2}) time. Prior to our work, the best running time is O(n + (1/ε)^{11/5}) [Deng, Jin, and Mao ’23]. Our algorithm is the best possible (up to a polylogarithmic factor), as Knapsack has no O((n + 1/ε)^{2−δ})-time FPTAS for any constant δ > 0, conditioned on the conjecture that (min, +)-convolution has no truly subquadratic-time algorithm. @InProceedings{STOC24p283, author = {Lin Chen and Jiayi Lian and Yuchen Mao and Guochuan Zhang}, title = {A Nearly Quadratic-Time FPTAS for Knapsack}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {283--294}, doi = {10.1145/3618260.3649730}, year = {2024}, } Publisher's Version STOC '24: "Approximating Partition in ..." Approximating Partition in Near-Linear Time Lin Chen, Jiayi Lian, Yuchen Mao, and Guochuan Zhang (Zhejiang University, China) We propose an O(n + 1/ε)-time FPTAS (fully polynomial-time approximation scheme) for the classical Partition problem. This is the best possible (up to a polylogarithmic factor) assuming SETH (Strong Exponential Time Hypothesis) [Abboud, Bringmann, Hermelin, and Shabtay ’22]. Prior to our work, the best known FPTAS for Partition runs in O(n + 1/ε^{5/4}) time [Deng, Jin, and Mao ’23; Wu and Chen ’22]. Our result is obtained by solving a more general problem of weakly approximating Subset Sum. @InProceedings{STOC24p307, author = {Lin Chen and Jiayi Lian and Yuchen Mao and Guochuan Zhang}, title = {Approximating Partition in Near-Linear Time}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {307--318}, doi = {10.1145/3618260.3649727}, year = {2024}, } Publisher's Version 
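To see what an FPTAS for Knapsack even looks like, here is the classic textbook profit-scaling scheme (roughly O(n²·(n/ε)) time) — emphatically not the paper's O(n + (1/ε)²) algorithm, just the scale-then-DP idea it refines. The function name `knapsack_fptas` and its interface are our own:

```python
def knapsack_fptas(items, capacity, eps):
    """Textbook profit-scaling FPTAS for 0/1 Knapsack, for illustration only.
    items: list of (weight, profit) pairs, each weight <= capacity.
    Returns a profit value >= (1 - eps) * OPT."""
    n = len(items)
    pmax = max(p for _, p in items)
    K = eps * pmax / n                      # scaling factor
    scaled = [int(p // K) for _, p in items]
    total = sum(scaled)
    INF = float('inf')
    # dp[q] = minimum weight achieving scaled profit exactly q
    dp = [0] + [INF] * total
    for (w, _), q in zip(items, scaled):
        for t in range(total, q - 1, -1):   # 0/1 DP: iterate profits downward
            if dp[t - q] + w < dp[t]:
                dp[t] = dp[t - q] + w
    best = max(t for t in range(total + 1) if dp[t] <= capacity)
    return best * K                          # loses at most n*K <= eps*pmax <= eps*OPT
```

The accuracy argument is the standard one: flooring each profit loses at most K per item, so at most nK = ε·pmax ≤ ε·OPT overall. The paper's contribution is getting the ε-dependence down to (1/ε)², matching the (min, +)-convolution lower bound.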

Chen, Sitan 
STOC '24: "An Optimal Tradeoff between ..."
An Optimal Tradeoff between Entanglement and Copy Complexity for State Tomography
Sitan Chen, Jerry Li, and Allen Liu (Harvard University, USA; Microsoft Research, USA; Massachusetts Institute of Technology, USA) There has been significant interest in understanding how practical constraints on contemporary quantum devices impact the complexity of quantum learning. For the classic question of tomography, recent work tightly characterized the copy complexity for any protocol that can only measure one copy of the unknown state at a time, showing it is polynomially worse than if one can make fully entangled measurements. While we now have a fairly complete picture of the rates for such tasks in the near-term and fault-tolerant regimes, it remains poorly understood what the landscape in between these extremes looks like, and in particular how to gracefully scale up our protocols as we transition away from NISQ. In this work, we study tomography in the natural setting where one can make measurements of t copies at a time. For sufficiently small ε, we show that for any t ≤ d^{2}, Θ(d^{3}/(√t ε^{2})) copies are necessary and sufficient to learn an unknown d-dimensional state ρ to trace distance ε. This gives a smooth and optimal interpolation between the known rates for single-copy measurements and fully entangled measurements. To our knowledge, this is the first smooth tradeoff between entanglement and copy complexity known for any quantum learning task, and for tomography, no intermediate point on this curve was known, even at t = 2. An important obstacle is that unlike the optimal single-copy protocol, the optimal fully entangled protocol is inherently a biased estimator. This bias precludes naive batching approaches for interpolating between the two protocols. Instead, we devise a novel two-stage procedure that uses Keyl’s algorithm to refine a crude estimate for ρ based on single-copy measurements. A key insight is to use Schur-Weyl sampling not to estimate the spectrum of ρ, but to estimate the deviation of ρ from the maximally mixed state. 
When ρ is far from the maximally mixed state, we devise a novel quantum splitting procedure that reduces to the case where ρ is close to maximally mixed. @InProceedings{STOC24p1331, author = {Sitan Chen and Jerry Li and Allen Liu}, title = {An Optimal Tradeoff between Entanglement and Copy Complexity for State Tomography}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1331--1342}, doi = {10.1145/3618260.3649704}, year = {2024}, } Publisher's Version 

Chen, Xi 
STOC '24: "Computing a Fixed Point of ..."
Computing a Fixed Point of Contraction Maps in Polynomial Queries
Xi Chen, Yuhao Li, and Mihalis Yannakakis (Columbia University, USA) We give an algorithm for finding an ε-fixed point of a contraction map f: [0,1]^{k} → [0,1]^{k} under the ℓ_{∞}-norm with query complexity O(k^{2} log(1/ε)). @InProceedings{STOC24p1364, author = {Xi Chen and Yuhao Li and Mihalis Yannakakis}, title = {Computing a Fixed Point of Contraction Maps in Polynomial Queries}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1364--1373}, doi = {10.1145/3618260.3649623}, year = {2024}, } Publisher's Version STOC '24: "Distribution-Free Testing ..." Distribution-Free Testing of Decision Lists with a Sublinear Number of Queries Xi Chen, Yumou Fei, and Shyamal Patel (Columbia University, USA; Peking University, China) We give a distribution-free testing algorithm for decision lists with Õ(n^{11/12}/ε^{3}) queries. This is the first sublinear algorithm for this problem, which shows that, unlike halfspaces, testing is strictly easier than learning for decision lists. Complementing the algorithm, we show that any distribution-free tester for decision lists must make Ω(√n) queries, or draw Ω(n) samples when the algorithm is sample-based. @InProceedings{STOC24p1051, author = {Xi Chen and Yumou Fei and Shyamal Patel}, title = {Distribution-Free Testing of Decision Lists with a Sublinear Number of Queries}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1051--1062}, doi = {10.1145/3618260.3649717}, year = {2024}, } Publisher's Version 
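For contrast with the contraction fixed-point result above: when the contraction factor λ < 1 is bounded away from 1, plain Banach iteration already works, using O(log(1/ε)/(1−λ)) queries. The point of the paper's O(k² log(1/ε)) bound is that it is independent of λ. A sketch of the easy baseline (our own helper name `eps_fixed_point`):

```python
def eps_fixed_point(f, x0, eps):
    """Banach fixed-point iteration: repeatedly apply a contraction f
    (under the l_inf norm) until ||f(x) - x||_inf <= eps.
    If f has contraction factor lam, this takes O(log(1/eps)/(1-lam))
    evaluations, so it degrades as lam -> 1; the paper's algorithm makes
    O(k^2 log(1/eps)) queries regardless of the contraction factor."""
    x = x0
    while True:
        fx = f(x)
        if max(abs(a - b) for a, b in zip(fx, x)) <= eps:
            return x
        x = fx
```

Note that an x with ‖f(x) − x‖_∞ ≤ ε is within ε/(1−λ) of the true fixed point, which is another place the dependence on λ sneaks in for the naive method.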

Chen, Yilei 
STOC '24: "Hardness of Range Avoidance ..."
Hardness of Range Avoidance and Remote Point for Restricted Circuits via Cryptography
Yilei Chen and Jiatu Li (Tsinghua University, China; Shanghai Qi Zhi Institute, Shanghai, China; Massachusetts Institute of Technology, USA) A recent line of research has introduced a systematic approach to exploring the complexity of explicit construction problems through the use of meta-problems, namely, the range avoidance problem (abbrev. Avoid) and the remote point problem (abbrev. RPP). The upper and lower bounds for these meta-problems provide a unified perspective on the complexity of specific explicit construction problems that were previously studied independently. An interesting question largely unaddressed by previous works is whether we can show hardness of Avoid and RPP for simple circuits, such as low-depth circuits. In this paper, we demonstrate, under plausible cryptographic assumptions, that both the range avoidance problem and the remote point problem cannot be efficiently solved by nondeterministic search algorithms, even when the input circuits are as simple as constant-depth circuits. This extends a hardness result established by Ilango, Li, and Williams (STOC’23) against deterministic algorithms employing witness encryption for NP, where the inputs to Avoid are general Boolean circuits. Our primary technical contribution is a novel construction of witness encryption inspired by public-key encryption for a certain promise language in NP that is unlikely to be NP-complete. We introduce a generic approach to transform a public-key encryption scheme with particular properties into a witness encryption scheme for a promise language related to the initial public-key encryption scheme. Based on this translation and variants of standard lattice-based or coding-based PKE schemes, we obtain, under plausible assumptions, a provably secure witness encryption scheme for some promise language in NP ∩ coNP/poly. 
Additionally, we show that our constructions of witness encryption are plausibly secure against nondeterministic adversaries under a generalized notion of security in the spirit of Rudich’s super-bits (RANDOM’97), which is crucial for demonstrating the hardness of Avoid and RPP against nondeterministic algorithms. @InProceedings{STOC24p620, author = {Yilei Chen and Jiatu Li}, title = {Hardness of Range Avoidance and Remote Point for Restricted Circuits via Cryptography}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {620--629}, doi = {10.1145/3618260.3649602}, year = {2024}, } Publisher's Version 

Chenakkod, Shabarish 
STOC '24: "Optimal Embedding Dimension ..."
Optimal Embedding Dimension for Sparse Subspace Embeddings
Shabarish Chenakkod, Michał Dereziński, Xiaoyu Dong, and Mark Rudelson (University of Michigan, USA) A random m×n matrix S is an oblivious subspace embedding (OSE) with parameters ε>0, δ∈(0,1/3), and d≤m≤n, if for any d-dimensional subspace W⊆R^{n}, P(∀_{x∈W} (1+ε)^{−1}‖x‖ ≤ ‖Sx‖ ≤ (1+ε)‖x‖) ≥ 1−δ. It is known that the embedding dimension of an OSE must satisfy m≥d, and for any θ>0, a Gaussian embedding matrix with m≥(1+θ)d is an OSE with ε = O_{θ}(1). However, such an optimal embedding dimension is not known for other embeddings. Of particular interest are sparse OSEs, having s≪m nonzeros per column (Clarkson and Woodruff, STOC 2013), with applications to problems such as least squares regression and low-rank approximation. We show that, given any θ>0, an m×n random matrix S with m≥(1+θ)d consisting of randomly sparsified ±1/√s entries and having s = O(log^{4}(d)) nonzeros per column, is an oblivious subspace embedding with ε = O_{θ}(1). Our result addresses the main open question posed by Nelson and Nguyen (FOCS 2013), who conjectured that sparse OSEs can achieve m = O(d) embedding dimension, and it improves on m = O(d log(d)) shown by Cohen (SODA 2016). We use this to construct the first oblivious subspace embedding with O(d) embedding dimension that can be applied faster than current matrix multiplication time, and to obtain an optimal single-pass algorithm for least squares regression. We further extend our results to Leverage Score Sparsification (LESS), which is a recently introduced non-oblivious embedding technique. We use LESS to construct the first subspace embedding with low distortion ε = o(1) and optimal embedding dimension m = O(d/ε^{2}) that can be applied in current matrix multiplication time, addressing a question posed by Cherapanamjeri, Silwal, Woodruff, and Zhou (SODA 2023). 
@InProceedings{STOC24p1106, author = {Shabarish Chenakkod and Michał Dereziński and Xiaoyu Dong and Mark Rudelson}, title = {Optimal Embedding Dimension for Sparse Subspace Embeddings}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1106--1117}, doi = {10.1145/3618260.3649762}, year = {2024}, } Publisher's Version 
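The embeddings discussed above are easy to sample and cheap to apply; a small pure-Python sketch follows (helper names `sparse_embedding` and `apply_embedding` are ours; this is a standard s-nonzeros-per-column sign construction in the spirit of the sparse OSEs above, with no claim of matching the paper's analysis or parameters):

```python
import math, random

def sparse_embedding(m, n, s, rng):
    """Sample a sparse sign embedding: each of the n columns gets s nonzeros,
    each equal to +/- 1/sqrt(s), placed in s distinct random rows.
    Stored column-wise as lists of (row, value) pairs."""
    cols = []
    for _ in range(n):
        rows = rng.sample(range(m), s)
        cols.append([(r, rng.choice((-1, 1)) / math.sqrt(s)) for r in rows])
    return cols

def apply_embedding(cols, m, x):
    """Compute y = S x in O(n*s) time using the sparse columns."""
    y = [0.0] * m
    for xi, col in zip(x, cols):
        for r, v in col:
            y[r] += v * xi
    return y
```

Two quick sanity checks: columns have unit norm by construction (so basis vectors embed isometrically), and for a random dense vector the squared norm is preserved in expectation; the paper's theorem is the much stronger statement that this holds uniformly over an entire d-dimensional subspace with m only (1+θ)d.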

Chornomaz, Bogdan 
STOC '24: "Local Borsuk-Ulam, Stability, ..."
Local Borsuk-Ulam, Stability, and Replicability
Zachary Chase, Bogdan Chornomaz, Shay Moran, and Amir Yehudayoff (Technion, Israel; Google Research, Israel; University of Copenhagen, Copenhagen, Denmark) We use and adapt the Borsuk-Ulam Theorem from topology to derive limitations on list-replicable and globally stable learning algorithms. We further demonstrate the applicability of our methods in combinatorics and topology. We show that, besides trivial cases, both list-replicable and globally stable learning are impossible in the agnostic PAC setting. This is in contrast with the realizable case, where it is known that any class with a finite Littlestone dimension can be learned by such algorithms. In the realizable PAC setting, we sharpen previous impossibility results and broaden their scope. Specifically, we establish optimal bounds for list replicability and global stability numbers in finite classes. This provides an exponential improvement over previous works and implies an exponential separation from the Littlestone dimension. We further introduce lower bounds for weak learners, i.e., learners that are only marginally better than random guessing. Lower bounds from previous works apply only to stronger learners. To offer a broader and more comprehensive view of our topological approach, we prove a local variant of the Borsuk-Ulam theorem in topology and a result in combinatorics concerning Kneser colorings. In combinatorics, we prove that if c is a coloring of all nonempty subsets of [n] such that disjoint sets have different colors, then there is a chain of subsets that receives at least 1 + ⌊n/2⌋ colors (this bound is sharp). In topology, we prove, e.g., that for any open antipodal-free cover of the d-dimensional sphere, there is a point x that belongs to at least t = ⌈(d+3)/2⌉ sets. 
@InProceedings{STOC24p1769, author = {Zachary Chase and Bogdan Chornomaz and Shay Moran and Amir Yehudayoff}, title = {Local Borsuk-Ulam, Stability, and Replicability}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1769--1780}, doi = {10.1145/3618260.3649632}, year = {2024}, } Publisher's Version 

Chuzhoy, Julia 
STOC '24: "Maximum Bipartite Matching ..."
Maximum Bipartite Matching in 𝑛^{2+𝑜(1)} Time via a Combinatorial Algorithm
Julia Chuzhoy and Sanjeev Khanna (Toyota Technological Institute, Chicago, USA; University of Pennsylvania, USA) Maximum bipartite matching (MBM) is a fundamental problem in combinatorial optimization with a long and rich history. A classic result of Hopcroft and Karp (1973) provides an O(m√n)-time algorithm for the problem, where n and m are the number of vertices and edges in the input graph, respectively. For dense graphs, an approach based on fast matrix multiplication achieves a running time of O(n^{2.371}). For several decades, these results represented the state of the art, until, in 2013, Madry introduced a powerful new approach for solving MBM using continuous optimization techniques. This line of research, which builds on continuous techniques based on interior-point methods, led to several spectacular results, culminating in a breakthrough m^{1+o(1)}-time algorithm for min-cost flow, which implies an m^{1+o(1)}-time algorithm for MBM as well. These striking advances naturally raise the question of whether combinatorial algorithms can match the performance of the algorithms that are based on continuous techniques for MBM. One reason to explore combinatorial algorithms is that they are often more transparent than their continuous counterparts, and that the tools and techniques developed for such algorithms may be useful in other settings, including, for example, developing faster algorithms for maximum matching in general graphs. A recent work of Chuzhoy and Khanna (2024) made progress on this question by giving a combinatorial Õ(m^{1/3}n^{5/3})-time algorithm for MBM, thus outperforming both the Hopcroft-Karp algorithm and matrix multiplication based approaches on sufficiently dense graphs. Still, a large gap remains between the running time of their algorithm and the almost-linear time achievable by algorithms based on continuous techniques. 
In this work, we take another step towards narrowing this gap, and present a randomized n^{2+o(1)}-time combinatorial algorithm for MBM. Thus, in dense graphs, our algorithm essentially matches the performance of algorithms that are based on continuous methods. Similar to the classical algorithms for MBM and the approach used in the work of Chuzhoy and Khanna (2024), our algorithm is based on iterative augmentation of a current matching using augmenting paths in the corresponding (directed) residual flow network. Our main contribution is a recursive algorithm that exploits the special structure of the resulting flow problem to recover an Ω(1/log^{2} n)-fraction of the remaining augmentations in n^{2+o(1)} time. Finally, we obtain a randomized n^{2+o(1)}-time algorithm for maximum vertex-capacitated s-t flow in directed graphs when all vertex capacities are identical, using a standard reduction from this problem to MBM. @InProceedings{STOC24p83, author = {Julia Chuzhoy and Sanjeev Khanna}, title = {Maximum Bipartite Matching in 𝑛<sup>2+𝑜(1)</sup> Time via a Combinatorial Algorithm}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {83--94}, doi = {10.1145/3618260.3649725}, year = {2024}, } Publisher's Version 
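The iterative-augmentation template that both the classical algorithms and the paper build on is easy to state in code. Below is the classic single-path augmenting algorithm (often attributed to Kuhn), running in O(V·E) time; it is the baseline that Hopcroft-Karp and, ultimately, the n^{2+o(1)} algorithm above accelerate by finding many augmentations at once. The function name and interface are our own:

```python
def max_bipartite_matching(adj, n_right):
    """Classic augmenting-path algorithm for MBM: for each free left vertex,
    search the residual graph for an augmenting path and flip it.
    adj[u] = list of right-neighbors of left vertex u (0-indexed).
    Returns (matching size, match_r) where match_r[v] is the left partner
    of right vertex v, or -1 if v is unmatched."""
    match_r = [-1] * n_right
    def try_augment(u, visited):
        for v in adj[u]:
            if v in visited:
                continue
            visited.add(v)
            # v is free, or v's current partner can be rematched elsewhere
            if match_r[v] == -1 or try_augment(match_r[v], visited):
                match_r[v] = u
                return True
        return False
    size = 0
    for u in range(len(adj)):
        if try_augment(u, set()):
            size += 1
    return size, match_r
```

Each successful call to `try_augment` flips one augmenting path, growing the matching by one; the paper's recursive algorithm instead recovers an Ω(1/log² n)-fraction of all remaining augmentations per phase.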

Ciardo, Lorenzo 
STOC '24: "Semidefinite Programming and ..."
Semidefinite Programming and Linear Equations vs. Homomorphism Problems
Lorenzo Ciardo and Stanislav Živný (University of Oxford, United Kingdom) We introduce a relaxation for homomorphism problems that combines semidefinite programming with linear Diophantine equations, and propose a framework for the analysis of its power based on the spectral theory of association schemes. We use this framework to establish an unconditional lower bound against the semidefinite programming + linear equations model, by showing that the relaxation does not solve the approximate graph homomorphism problem and thus, in particular, the approximate graph colouring problem. @InProceedings{STOC24p1935, author = {Lorenzo Ciardo and Stanislav Živný}, title = {Semidefinite Programming and Linear Equations vs. Homomorphism Problems}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1935--1943}, doi = {10.1145/3618260.3649635}, year = {2024}, } Publisher's Version 

Cohen-Addad, Vincent 
STOC '24: "Understanding the Cluster ..."
Understanding the Cluster Linear Program for Correlation Clustering
Nairen Cao , Vincent Cohen-Addad , Euiwoong Lee , Shi Li , Alantha Newman , and Lukas Vogl (Boston College, USA; Google Research, France; University of Michigan, USA; Nanjing University, China; CNRS - Université Grenoble Alpes, France; EPFL, Lausanne, Switzerland) In the classic Correlation Clustering problem introduced by Bansal, Blum, and Chawla (FOCS 2002), the input is a complete graph where edges are labeled either + or −, and the goal is to find a partition of the vertices that minimizes the sum of the + edges across parts plus the sum of the − edges within parts. In recent years, Chawla, Makarychev, Schramm and Yaroslavtsev (STOC 2015) gave a 2.06-approximation by providing a near-optimal rounding of the standard LP, and Cohen-Addad, Lee, Li, and Newman (FOCS 2022, 2023) finally bypassed the integrality gap of 2 for this LP, giving a 1.73-approximation for the problem. While introducing new ideas for Correlation Clustering, their algorithm is more complicated than typical approximation algorithms in the following two aspects: (1) It is based on two different relaxations with separate rounding algorithms, connected by the round-or-cut procedure. (2) Each of the rounding algorithms has to separately handle seemingly inevitable correlated rounding errors, coming from correlated rounding of Sherali-Adams and other strong LP relaxations. In order to create a simple and unified framework for Correlation Clustering similar to those for typical approximate optimization tasks, we propose the cluster LP as a strong linear program that might tightly capture the approximability of Correlation Clustering. It unifies all the previous relaxations for the problem. It is exponential-sized, but we show that it can be (1+ε)-approximately solved in polynomial time for any ε > 0, providing the framework for designing rounding algorithms without worrying about correlated rounding errors; these errors are handled uniformly in solving the relaxation. 
We demonstrate the power of the cluster LP by presenting a simple rounding algorithm, and providing two analyses, one analytically proving a 1.49-approximation and the other solving a factor-revealing SDP to show a 1.437-approximation. Both proofs introduce principled methods by which to analyze the performance of the algorithm, resulting in a significantly improved approximation guarantee. Finally, we prove an integrality gap of 4/3 for the cluster LP, showing that our 1.437 upper bound cannot be drastically improved. Our gap instance directly inspires an improved NP-hardness of approximation with a ratio 24/23 ≈ 1.042; no explicit hardness ratio was known before. @InProceedings{STOC24p1605, author = {Nairen Cao and Vincent Cohen-Addad and Euiwoong Lee and Shi Li and Alantha Newman and Lukas Vogl}, title = {Understanding the Cluster Linear Program for Correlation Clustering}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1605--1616}, doi = {10.1145/3618260.3649749}, year = {2024}, } Publisher's Version Info STOC '24: "Combinatorial Correlation ..." Combinatorial Correlation Clustering Vincent Cohen-Addad , David Rasmussen Lolck , Marcin Pilipczuk , Mikkel Thorup , Shuyi Yan , and Hanwen Zhang (Google Research, France; University of Copenhagen, Copenhagen, Denmark; University of Warsaw, Poland) Correlation Clustering is a classic clustering objective arising in numerous machine learning and data mining applications. Given a graph G=(V,E), the goal is to partition the vertex set into clusters so as to minimize the number of edges between clusters plus the number of edges missing within clusters. The problem is APX-hard and the best known polynomial-time approximation factor is 1.73 by Cohen-Addad, Lee, Li, and Newman [FOCS’23]. They use an LP with |V|^{1/ε^{Θ(1)}} variables for some small ε. However, due to the practical relevance of correlation clustering, there has also been great interest in getting more efficient sequential and parallel algorithms. 
The classic combinatorial Pivot algorithm of Ailon, Charikar and Newman [JACM’08] provides a 3-approximation in linear time. Like most other algorithms discussed here, it uses randomization. Recently, Behnezhad, Charikar, Ma and Tan [FOCS’22] presented a (3+ε)-approximate solution in a constant number of rounds in the Massively Parallel Computation (MPC) setting. Very recently, Cao, Huang, and Su [SODA’24] provided a 2.4-approximation in a polylogarithmic number of rounds in the MPC model and in Õ(|E|^{1.5}) time in the classic sequential setting. They asked whether it is possible to get a better-than-3 approximation in near-linear time. We resolve this problem with an efficient combinatorial algorithm providing a drastically better approximation factor. It achieves a ∼(2−2/13) < 1.847-approximation in sublinear (Õ(|V|)) sequential time or in sublinear (Õ(|V|)) space in the streaming setting, and it uses only a constant number of rounds in the MPC model. @InProceedings{STOC24p1617, author = {Vincent Cohen-Addad and David Rasmussen Lolck and Marcin Pilipczuk and Mikkel Thorup and Shuyi Yan and Hanwen Zhang}, title = {Combinatorial Correlation Clustering}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1617--1628}, doi = {10.1145/3618260.3649712}, year = {2024}, } Publisher's Version Info 
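Both abstracts above optimize the same objective. As a concrete illustration (not taken from either paper), here is that objective together with the classic randomized Pivot 3-approximation of Ailon, Charikar, and Newman mentioned above; the edge encoding is ours:

```python
import random

# Correlation clustering on vertices 0..n-1: `plus` holds the + edges as
# ordered pairs (u, v) with u < v; every other pair is a - edge.

def cc_cost(n, plus, cluster):
    """+ edges cut across clusters, plus - edges kept inside clusters."""
    cost = 0
    for u in range(n):
        for v in range(u + 1, n):
            same = cluster[u] == cluster[v]
            if (u, v) in plus:
                cost += 0 if same else 1  # + edge separated
            else:
                cost += 1 if same else 0  # - edge not separated
    return cost

def pivot(n, plus, rng):
    """Pivot [JACM'08]: pick a random pivot, cluster it with its
    + neighbors among the still-unclustered vertices, repeat."""
    cluster, alive, cid = [None] * n, set(range(n)), 0
    while alive:
        p = rng.choice(sorted(alive))
        members = {p} | {v for v in alive
                         if (min(p, v), max(p, v)) in plus}
        for v in members:
            cluster[v] = cid
        alive -= members
        cid += 1
    return cluster
```

On the triangle with + edges {0,1} and {1,2} and a − edge {0,2}, every outcome of Pivot pays cost 1, which here is also optimal.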

Coiteux-Roy, Xavier 
STOC '24: "No Distributed Quantum Advantage ..."
No Distributed Quantum Advantage for Approximate Graph Coloring
Xavier Coiteux-Roy , Francesco d'Amore , Rishikesh Gajjala , Fabian Kuhn , François Le Gall , Henrik Lievonen , Augusto Modanese , Marc-Olivier Renou , Gustav Schmid , and Jukka Suomela (TU Munich, Germany; Munich Center for Quantum Science and Technology, Germany; Aalto University, Finland; Bocconi University, Italy; Indian Institute of Science, India; University of Freiburg, Freiburg, Germany; Nagoya University, Nagoya, Japan; Inria, France; Université Paris-Saclay, France; Institut Polytechnique de Paris, France) We give an almost complete characterization of the hardness of c-coloring χ-chromatic graphs with distributed algorithms, for a wide range of models of distributed computing. In particular, we show that these problems do not admit any distributed quantum advantage. To do that: We give a new distributed algorithm that finds a c-coloring in χ-chromatic graphs in Õ(n^{1/α}) rounds, with α = ⌊(c−1)/(χ−1)⌋. We prove that any distributed algorithm for this problem requires Ω(n^{1/α}) rounds. Our upper bound holds in the classical, deterministic LOCAL model, while the near-matching lower bound holds in the non-signaling model. This model, introduced by Arfaoui and Fraigniaud in 2014, captures all models of distributed graph algorithms that obey physical causality; this includes not only classical deterministic LOCAL and randomized LOCAL but also quantum-LOCAL, even with a pre-shared quantum state. We also show that similar arguments can be used to prove that, e.g., 3-coloring 2-dimensional grids or c-coloring trees remain hard problems even for the non-signaling model, and in particular do not admit any quantum advantage. Our lower-bound arguments are purely graph-theoretic at heart; no background on quantum information theory is needed to establish the proofs. 
@InProceedings{STOC24p1901, author = {Xavier Coiteux-Roy and Francesco d'Amore and Rishikesh Gajjala and Fabian Kuhn and François Le Gall and Henrik Lievonen and Augusto Modanese and Marc-Olivier Renou and Gustav Schmid and Jukka Suomela}, title = {No Distributed Quantum Advantage for Approximate Graph Coloring}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1901--1910}, doi = {10.1145/3618260.3649679}, year = {2024}, } Publisher's Version 
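The round-complexity exponent in the result above is simple arithmetic; a small sketch (the function name is ours) evaluating α = ⌊(c−1)/(χ−1)⌋:

```python
# Exponent from the paper's bound: c-coloring a χ-chromatic graph takes
# Θ~(n^{1/α}) rounds, where α = ⌊(c−1)/(χ−1)⌋.
def alpha(c, chi):
    assert c >= chi >= 2, "need at least χ colors, and χ >= 2"
    return (c - 1) // (chi - 1)

# 3-coloring bipartite graphs (χ = 2): α = 2, i.e. Θ~(n^{1/2}) rounds.
print(alpha(3, 2))    # 2
# More colors help polynomially: 100 colors on bipartite graphs give α = 99.
print(alpha(100, 2))  # 99
```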

Coladangelo, Andrea 
STOC '24: "How to Use Quantum Indistinguishability ..."
How to Use Quantum Indistinguishability Obfuscation
Andrea Coladangelo and Sam Gunn (University of Washington, USA; University of California at Berkeley, USA) Quantum copy protection, introduced by Aaronson, enables giving out a quantum program description that cannot be meaningfully duplicated. Despite over a decade of study, copy protection is only known to be possible for a very limited class of programs. As our first contribution, we show how to achieve "best-possible" copy protection for all programs. We do this by introducing quantum state indistinguishability obfuscation (qsiO), a notion of obfuscation for quantum descriptions of classical programs. We show that applying qsiO to a program immediately achieves best-possible copy protection. Our second contribution is to show that, assuming injective one-way functions exist, qsiO is concrete copy protection for a large family of puncturable programs, significantly expanding the class of copy-protectable programs. A key tool in our proof is a new variant of unclonable encryption (UE) that we call coupled unclonable encryption (cUE). While constructing UE in the standard model remains an important open problem, we are able to build cUE from one-way functions. If we additionally assume the existence of UE, then we can further expand the class of puncturable programs for which qsiO is copy protection. Finally, we construct qsiO relative to an efficient quantum oracle. @InProceedings{STOC24p1003, author = {Andrea Coladangelo and Sam Gunn}, title = {How to Use Quantum Indistinguishability Obfuscation}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1003--1008}, doi = {10.1145/3618260.3649779}, year = {2024}, } Publisher's Version 

Colomboni, Roberto 
STOC '24: "The Role of Transparency in ..."
The Role of Transparency in Repeated First-Price Auctions with Unknown Valuations
Nicolò Cesa-Bianchi , Tommaso Cesari , Roberto Colomboni , Federico Fusco , and Stefano Leonardi (University of Milan, Italy; Politecnico di Milano, Italy; University of Ottawa, Canada; Italian Institute of Technology, Italy; Sapienza University of Rome, Italy) We study the problem of regret minimization for a single bidder in a sequence of first-price auctions where the bidder discovers the item’s value only if the auction is won. Our main contribution is a complete characterization, up to logarithmic factors, of the minimax regret in terms of the auction’s transparency, which controls the amount of information on competing bids disclosed by the auctioneer at the end of each auction. Our results hold under different assumptions (stochastic, adversarial, and their smoothed variants) on the environment generating the bidder’s valuations and competing bids. These minimax rates reveal how the interplay between transparency and the nature of the environment affects how fast one can learn to bid optimally in first-price auctions. @InProceedings{STOC24p225, author = {Nicolò Cesa-Bianchi and Tommaso Cesari and Roberto Colomboni and Federico Fusco and Stefano Leonardi}, title = {The Role of Transparency in Repeated First-Price Auctions with Unknown Valuations}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {225--236}, doi = {10.1145/3618260.3649658}, year = {2024}, } Publisher's Version 

Compton, Spencer 
STOC '24: "Near-Optimal Mean Estimation ..."
Near-Optimal Mean Estimation with Unknown, Heteroskedastic Variances
Spencer Compton and Gregory Valiant (Stanford University, USA) Given data drawn from a collection of Gaussian variables with a common mean but different and unknown variances, what is the best algorithm for estimating their common mean? We present an intuitive and efficient algorithm for this task. As different closed-form guarantees can be hard to compare, the Subset-of-Signals model serves as a benchmark for “heteroskedastic” mean estimation: given n Gaussian variables with an unknown subset of m variables having variance bounded by 1, what is the optimal estimation error as a function of n and m? Our algorithm resolves this open question up to logarithmic factors, improving upon the previous best known estimation error by polynomial factors when m = n^{c} for all 0 < c < 1. Of particular note, we obtain error o(1) with m = Õ(n^{1/4}) variance-bounded samples, whereas previous work required m = Ω(n^{1/2}). Finally, we show that in the multi-dimensional setting, even for d=2, our techniques enable rates comparable to knowing the variance of each sample. @InProceedings{STOC24p194, author = {Spencer Compton and Gregory Valiant}, title = {Near-Optimal Mean Estimation with Unknown, Heteroskedastic Variances}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {194--200}, doi = {10.1145/3618260.3649754}, year = {2024}, } Publisher's Version 
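To make the Subset-of-Signals benchmark concrete, a toy simulation (ours, not the paper's estimator): an oracle that knows which m samples are variance-bounded averages only those, typically beating the naive mean of all n samples by a wide margin.

```python
import random
import statistics

# Subset-of-Signals toy instance: n Gaussians share mean mu, but only m of
# them have variance <= 1; the rest are far noisier. This illustrates the
# benchmark only; the paper's algorithm does not know which samples are quiet.
rng = random.Random(42)
mu, n, m = 3.0, 1000, 100
quiet = [rng.gauss(mu, 1.0) for _ in range(m)]        # variance-bounded
loud = [rng.gauss(mu, 1000.0) for _ in range(n - m)]  # high variance

err_naive = abs(statistics.mean(quiet + loud) - mu)   # dominated by the noise
err_oracle = abs(statistics.mean(quiet) - mu)         # ~ 1/sqrt(m) on average
print(err_naive, err_oracle)
```

The paper's question is how close one can get to the oracle's ~1/√m error without being told which samples are quiet.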

Cook, James 
STOC '24: "Tree Evaluation Is in Space ..."
Tree Evaluation Is in Space O(log n · log log n)
James Cook and Ian Mertz (Unaffiliated, Canada; University of Warwick, United Kingdom) The Tree Evaluation Problem (TreeEval) (Cook et al. 2009) is a central candidate for separating polynomial time (P) from logarithmic space (L) via composition. While space lower bounds of Ω(log^{2} n) are known for multiple restricted models, it was recently shown by Cook and Mertz (2020) that TreeEval can be solved in space O(log^{2} n / log log n). Thus its status as a candidate hard problem for L remains a mystery. Our main result is to improve the space complexity of TreeEval to O(log n · log log n), thus greatly strengthening the case that Tree Evaluation is in fact in L. We show two consequences of these results. First, we show that the KRW conjecture (Karchmer, Raz, and Wigderson 1995) implies L ⊈ NC^{1}; this itself would have many implications, such as branching programs not being efficiently simulable by formulas. Our second consequence is to increase our understanding of amortized branching programs, also known as catalytic branching programs; we show that every function f on n bits can be computed by such a program of length poly(n) and width 2^{O(n)}. @InProceedings{STOC24p1268, author = {James Cook and Ian Mertz}, title = {Tree Evaluation Is in Space O(log n · log log n)}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1268--1278}, doi = {10.1145/3618260.3649664}, year = {2024}, } Publisher's Version 
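For readers unfamiliar with the problem itself: a TreeEval instance is a tree whose leaves hold values in [k] and whose internal nodes hold function tables; the task is to compute the value at the root. A direct recursive evaluator (our encoding; it uses far more space than the paper's bound):

```python
# The Tree Evaluation problem, evaluated the obvious way. A node is either a
# leaf value in range(k), or a tuple (table, left, right) where
# table[a][b] is in range(k). The recursion stack alone costs
# Theta(height * log k) space; the point of the paper is doing much better.
def tree_eval(node):
    if isinstance(node, int):
        return node
    table, left, right = node
    return table[tree_eval(left)][tree_eval(right)]

# Height-2 example over k = 2: root applies XOR, children apply AND and OR.
xor = [[0, 1], [1, 0]]
and_ = [[0, 0], [0, 1]]
or_ = [[0, 1], [1, 1]]
print(tree_eval((xor, (and_, 1, 1), (or_, 0, 0))))  # 1
```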

Cristi, Andrés 
STOC '24: "Prophet Inequalities Require ..."
Prophet Inequalities Require Only a Constant Number of Samples
Andrés Cristi and Bruno Ziliotto (University of Chile, Chile; Center for Mathematical Modeling, Chile; CNRS, France; Paris Dauphine University, France) In a prophet inequality problem, n independent random variables are presented to a gambler one by one. The gambler decides when to stop the sequence and obtains the most recent value as reward. We evaluate a stopping rule by the worst-case ratio between its expected reward and the expectation of the maximum variable. In the classic setting, the order is fixed, and the optimal ratio is known to be 1/2. Three variants of this problem have been extensively studied: the prophet-secretary model, where variables arrive in uniformly random order; the free-order model, where the gambler chooses the arrival order; and the i.i.d. model, where the distributions are all the same, rendering the arrival order irrelevant. Most of the literature assumes that distributions are known to the gambler. Recent work has considered the question of what is achievable when the gambler has access only to a few samples per distribution. Surprisingly, in the fixed-order case, a single sample from each distribution is enough to approximate the optimal ratio, but this is not the case in any of the three variants. We provide a unified proof that for all three variants of the problem, a constant number of samples (independent of n) for each distribution is good enough to approximate the optimal ratios. Prior to our work, this was known to be the case only in the i.i.d. variant. Previous works relied on explicitly constructing sample-based algorithms that match the best possible ratio. Remarkably, the optimal ratios for the prophet-secretary and the free-order variants with full information are still unknown. Consequently, our result requires a significantly different approach than for the classic problem and the i.i.d. variant, where the optimal ratios and the algorithms that achieve them are known. 
We complement our result by showing that our algorithms can be implemented in polynomial time. A key ingredient in our proof is an existential result based on a minimax argument, which states that there must exist an algorithm that attains the optimal ratio and does not rely on the knowledge of the upper tail of the distributions. A second key ingredient is a refined sample-based version of a decomposition of the instance into “small” and “large” variables, first introduced by Liu et al. [EC’21]. The universality of our approach opens avenues for generalization to other sample-based models. Furthermore, we uncover structural properties that might help pinpoint the optimal ratios in the full-information cases. @InProceedings{STOC24p491, author = {Andrés Cristi and Bruno Ziliotto}, title = {Prophet Inequalities Require Only a Constant Number of Samples}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {491--502}, doi = {10.1145/3618260.3649773}, year = {2024}, } Publisher's Version 
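For contrast with the sample-based setting studied above, the classic full-information fixed-order guarantee is easy to simulate: stopping at the first value above T = E[max]/2 earns at least half the prophet's value in expectation. A seeded Python sketch (ours; the uniform distributions are purely illustrative):

```python
import random

# Classic fixed-order prophet inequality with full information (NOT the
# paper's few-samples setting): accept the first value >= E[max]/2.
def threshold_rule(values, threshold):
    for x in values:
        if x >= threshold:
            return x
    return values[-1]  # no value cleared the bar; forced to take the last

rng = random.Random(1)
trials = [[rng.random() for _ in range(5)] for _ in range(20000)]
e_max = sum(max(t) for t in trials) / len(trials)  # ~ 5/6 for five U[0,1]
reward = sum(threshold_rule(t, e_max / 2) for t in trials) / len(trials)
print(reward / e_max)  # comfortably above the worst-case 1/2 on this input
```

The 1/2 ratio is tight in the worst case; on benign inputs like this one, the threshold rule does much better.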

Dadush, Daniel 
STOC '24: "A Strongly Polynomial Algorithm ..."
A Strongly Polynomial Algorithm for Linear Programs with At Most Two Nonzero Entries per Row or Column
Daniel Dadush , Zhuan Khye Koh , Bento Natura , Neil Olver , and László A. Végh (CWI, Amsterdam, Netherlands; Georgia Institute of Technology, USA; London School of Economics, United Kingdom) We give a strongly polynomial algorithm for minimum-cost generalized flow, and hence for optimizing any linear program with at most two nonzero entries per row, or at most two nonzero entries per column. Primal and dual feasibility were shown by Végh (MOR ’17) and Megiddo (SICOMP ’83), respectively. Our result can be viewed as progress towards understanding whether all linear programs can be solved in strongly polynomial time, also referred to as Smale’s 9th problem. Our approach is based on the recent primal-dual interior-point method (IPM) by Allamigeon, Dadush, Loho, Natura, and Végh (FOCS ’22). The number of iterations needed by the IPM is bounded, up to a polynomial factor in the number of inequalities, by the straight-line complexity of the central path. Roughly speaking, this is the minimum number of pieces of any piecewise linear curve that multiplicatively approximates the central path. As our main contribution, we show that the straight-line complexity of any minimum-cost generalized flow instance is polynomial in the number of arcs and vertices. By applying a reduction of Hochbaum (ORL ’04), the same bound applies to any linear program with at most two nonzeros per column or per row. To be able to run the IPM, one requires a suitable initial point. For this purpose, we develop a novel multi-stage approach, where each stage can be solved in strongly polynomial time given the result of the previous stage. Beyond this, substantial work is needed to ensure that the bit complexity of each iterate remains bounded during the execution of the algorithm. For this purpose, we show that one can maintain a representation of the iterates as a low-complexity convex combination of vertices and extreme rays. 
Our approach is black-box and can be applied to any log-barrier path-following method. @InProceedings{STOC24p1561, author = {Daniel Dadush and Zhuan Khye Koh and Bento Natura and Neil Olver and László A. Végh}, title = {A Strongly Polynomial Algorithm for Linear Programs with At Most Two Nonzero Entries per Row or Column}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1561--1572}, doi = {10.1145/3618260.3649764}, year = {2024}, } Publisher's Version Info 

Dagan, Yuval 
STOC '24: "From External to Swap Regret ..."
From External to Swap Regret 2.0: An Efficient Reduction for Large Action Spaces
Yuval Dagan , Constantinos Daskalakis , Maxwell Fishelson , and Noah Golowich (University of California at Berkeley, USA; Massachusetts Institute of Technology, USA) We provide a novel reduction from swap-regret minimization to external-regret minimization, which improves upon the classical reductions of Blum-Mansour and Stoltz-Lugosi in that it does not require finiteness of the space of actions. We show that, whenever there exists a no-external-regret algorithm for some hypothesis class, there must also exist a no-swap-regret algorithm for that same class. For the problem of learning with expert advice, our result implies that it is possible to guarantee that the swap regret is bounded by ε after (log N)^{Õ(1/ε)} rounds and with O(N) per-iteration complexity, where N is the number of experts, while the classical reductions of Blum-Mansour and Stoltz-Lugosi require at least Ω(N/ε^{2}) rounds and at least Ω(N^{3}) total computational cost. Our result comes with an associated lower bound, which, in contrast to that of Blum-Mansour, holds for oblivious and ℓ_{1}-constrained adversaries and learners that can employ distributions over experts, showing that the number of rounds must be Ω(N/ε^{2}) or exponential in 1/ε. Our reduction implies that, if no-regret learning is possible in some game, then this game must have approximate correlated equilibria, of arbitrarily good approximation. This strengthens the folklore implication of no-regret learning that approximate coarse correlated equilibria exist. 
Importantly, it provides a sufficient condition for the existence of approximate correlated equilibria which vastly extends the requirement that the action set is finite, or that the action set is compact and the utility functions are continuous, allowing for games with finite Littlestone or finite sequential fat-shattering dimension, thus answering a question left open in “Fast rates for nonparametric online learning: from realizability to learning in games” and “Online learning and solving infinite games with an ERM oracle”. Moreover, it answers several outstanding questions about equilibrium computation and/or learning in games. In particular, for constant values of ε: (a) we show that ε-approximate correlated equilibria in extensive-form games can be computed efficiently, advancing a long-standing open problem for extensive-form games; see e.g. “Extensive-form correlated equilibrium: Definition and computational complexity” and “Polynomial-Time Linear-Swap Regret Minimization in Imperfect-Information Sequential Games”; (b) we show that the query and communication complexities of computing ε-approximate correlated equilibria in N-action normal-form games are N · poly log(N) and poly log N respectively, advancing an open problem of “Informational Bounds on Equilibria”; (c) we show that ε-approximate correlated equilibria of sparsity poly log N can be computed efficiently, advancing an open problem of “Simple Approximate Equilibria in Large Games”; (d) finally, we show that in the adversarial bandit setting, sublinear swap regret can be achieved in only Õ(N) rounds, advancing an open problem of “From External to Internal Regret” and “Tight Lower Bound and Efficient Reduction for Swap Regret”. 
@InProceedings{STOC24p1216, author = {Yuval Dagan and Constantinos Daskalakis and Maxwell Fishelson and Noah Golowich}, title = {From External to Swap Regret 2.0: An Efficient Reduction for Large Action Spaces}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1216--1222}, doi = {10.1145/3618260.3649681}, year = {2024}, } Publisher's Version 

Dalirrooyfard, Mina 
STOC '24: "Towards Optimal Output-Sensitive ..."
Towards Optimal Output-Sensitive Clique Listing or: Listing Cliques from Smaller Cliques
Mina Dalirrooyfard , Surya Mathialagan , Virginia Vassilevska Williams , and Yinzhan Xu (Morgan Stanley Research, Canada; Massachusetts Institute of Technology, USA) We study the problem of finding and listing k-cliques in an m-edge, n-vertex graph, for constant k ≥ 3. This is a fundamental problem of both theoretical and practical importance. Our first contribution is an algorithmic framework for finding k-cliques that gives the first improvement in 19 years over the old runtimes for 4-clique and 5-clique finding, as a function of m [Eisenbrand and Grandoni, TCS’04]. With the current bounds on matrix multiplication, our algorithms run in O(m^{1.66}) and O(m^{2.06}) time, respectively, for 4-clique and 5-clique finding. Our main contribution is an output-sensitive algorithm for listing k-cliques, for any constant k ≥ 3. We complement the algorithm with tight lower bounds based on standard fine-grained assumptions. Previously, the only known conditionally optimal output-sensitive algorithms were for the case of 3-cliques, given by Björklund, Pagh, Vassilevska W. and Zwick [ICALP’14]. If the matrix multiplication exponent ω is 2, and if the number of k-cliques t is large enough, the running time of our algorithms is Õ(min{m^{1/(k−2)} t^{1 − 2/(k(k−2))}, n^{2/(k−1)} t^{1 − 2/(k(k−1))}}), and this is tight under the Exact-k-Clique Hypothesis. This running time naturally extends the running time obtained by Björklund, Pagh, Vassilevska W. and Zwick for k=3. Our framework is very general in that it gives k-clique listing algorithms whose running times can be measured in terms of the number of ℓ-cliques Δ_{ℓ} in the graph for any 1 ≤ ℓ < k. This generalizes the typical parameterization in terms of n (the number of 1-cliques) and m (the number of 2-cliques). If ω is 2, and if the size of the output, Δ_{k}, is sufficiently large, then for every ℓ < k, the running time of our algorithm for listing k-cliques is Õ(Δ_{ℓ}^{2/(ℓ(k−ℓ))} Δ_{k}^{1 − 2/(k(k−ℓ))}). 
We also show that this runtime is optimal for all 1 ≤ ℓ < k under the Exact-k-Clique Hypothesis. @InProceedings{STOC24p923, author = {Mina Dalirrooyfard and Surya Mathialagan and Virginia Vassilevska Williams and Yinzhan Xu}, title = {Towards Optimal Output-Sensitive Clique Listing or: Listing Cliques from Smaller Cliques}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {923--934}, doi = {10.1145/3618260.3649663}, year = {2024}, } Publisher's Version 
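As a baseline for what the output-sensitive algorithms above improve on, a straightforward k-clique lister that recurses on common neighborhoods (our sketch; its runtime is nowhere near the stated bounds):

```python
# Brute-force k-clique listing: grow a clique one vertex at a time, keeping
# as candidates only later vertices adjacent to every clique member.
def list_cliques(adj, k):
    """adj: dict vertex -> set of neighbors. Yields k-cliques as tuples."""
    def extend(clique, candidates):
        if len(clique) == k:
            yield tuple(clique)
            return
        for v in sorted(candidates):
            # keep candidates larger than v (to avoid duplicates) that are
            # adjacent to v, hence adjacent to the whole grown clique
            yield from extend(clique + [v],
                              {u for u in candidates if u > v} & adj[v])
    yield from extend([], set(adj))

# K4 contains exactly four triangles
k4 = {v: {u for u in range(4) if u != v} for v in range(4)}
print(len(list(list_cliques(k4, 3))))  # 4
```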

D'Amore, Francesco 
STOC '24: "No Distributed Quantum Advantage ..."
No Distributed Quantum Advantage for Approximate Graph Coloring
Xavier Coiteux-Roy , Francesco d'Amore , Rishikesh Gajjala , Fabian Kuhn , François Le Gall , Henrik Lievonen , Augusto Modanese , Marc-Olivier Renou , Gustav Schmid , and Jukka Suomela (TU Munich, Germany; Munich Center for Quantum Science and Technology, Germany; Aalto University, Finland; Bocconi University, Italy; Indian Institute of Science, India; University of Freiburg, Freiburg, Germany; Nagoya University, Nagoya, Japan; Inria, France; Université Paris-Saclay, France; Institut Polytechnique de Paris, France) We give an almost complete characterization of the hardness of c-coloring χ-chromatic graphs with distributed algorithms, for a wide range of models of distributed computing. In particular, we show that these problems do not admit any distributed quantum advantage. To do that: We give a new distributed algorithm that finds a c-coloring in χ-chromatic graphs in Õ(n^{1/α}) rounds, with α = ⌊(c−1)/(χ−1)⌋. We prove that any distributed algorithm for this problem requires Ω(n^{1/α}) rounds. Our upper bound holds in the classical, deterministic LOCAL model, while the near-matching lower bound holds in the non-signaling model. This model, introduced by Arfaoui and Fraigniaud in 2014, captures all models of distributed graph algorithms that obey physical causality; this includes not only classical deterministic LOCAL and randomized LOCAL but also quantum-LOCAL, even with a pre-shared quantum state. We also show that similar arguments can be used to prove that, e.g., 3-coloring 2-dimensional grids or c-coloring trees remain hard problems even for the non-signaling model, and in particular do not admit any quantum advantage. Our lower-bound arguments are purely graph-theoretic at heart; no background on quantum information theory is needed to establish the proofs. 
@InProceedings{STOC24p1901, author = {Xavier Coiteux-Roy and Francesco d'Amore and Rishikesh Gajjala and Fabian Kuhn and François Le Gall and Henrik Lievonen and Augusto Modanese and Marc-Olivier Renou and Gustav Schmid and Jukka Suomela}, title = {No Distributed Quantum Advantage for Approximate Graph Coloring}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1901--1910}, doi = {10.1145/3618260.3649679}, year = {2024}, } Publisher's Version 

Das, Bireswar 
STOC '24: "The Minimal Faithful Permutation ..."
The Minimal Faithful Permutation Degree of Groups without Abelian Normal Subgroups
Bireswar Das and Dhara Thakkar (IIT Gandhinagar, India) Cayley’s theorem says that every finite group G can be viewed as a subgroup of a symmetric group S_{m} for some integer m. The minimal faithful permutation degree µ(G) of a finite group G is the smallest integer m such that there is an injective homomorphism φ from G to S_{m}. The main result of this paper is a randomized polynomial-time algorithm for computing the minimal faithful permutation degree of semisimple permutation groups. Semisimple groups are groups without any abelian normal subgroups. Apart from this, we show that: 1. For any primitive permutation group G, µ(G) can be computed in quasi-polynomial time. 2. Given a permutation group G and an integer k, the problem of deciding if µ(G) ≤ k is in NP. 3. For a group G given by its Cayley table, µ(G) can be computed in DSPACE(log^{3} |G|). @InProceedings{STOC24p118, author = {Bireswar Das and Dhara Thakkar}, title = {The Minimal Faithful Permutation Degree of Groups without Abelian Normal Subgroups}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {118--129}, doi = {10.1145/3618260.3649641}, year = {2024}, } Publisher's Version 
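The gap between Cayley's bound µ(G) ≤ |G| and the minimal degree shows up already for Z_6: it acts faithfully on 2 + 3 = 5 points as a disjoint 2-cycle plus 3-cycle, so µ(Z_6) ≤ 5 < 6. A sketch (helper names ours) verifying that a single permutation of 5 points has order 6, hence generates a faithful copy of Z_6:

```python
# A permutation of {0..4} is a tuple p with p[i] = image of i.
def compose(p, q):
    """(p o q)(i) = p[q[i]]"""
    return tuple(p[q[i]] for i in range(len(p)))

def order(p):
    """Smallest k >= 1 with p^k = identity."""
    identity, q, k = tuple(range(len(p))), p, 1
    while q != identity:
        q, k = compose(p, q), k + 1
    return k

# (0 1)(2 3 4): a 2-cycle and a 3-cycle, so the order is lcm(2, 3) = 6
g = (1, 0, 3, 4, 2)
print(order(g))  # 6
```

The paper's algorithms compute µ(G) itself; this only witnesses the upper bound for one small group.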

Daskalakis, Constantinos 
STOC '24: "From External to Swap Regret ..."
From External to Swap Regret 2.0: An Efficient Reduction for Large Action Spaces
Yuval Dagan , Constantinos Daskalakis , Maxwell Fishelson , and Noah Golowich (University of California at Berkeley, USA; Massachusetts Institute of Technology, USA) We provide a novel reduction from swap-regret minimization to external-regret minimization, which improves upon the classical reductions of Blum-Mansour and Stoltz-Lugosi in that it does not require finiteness of the space of actions. We show that, whenever there exists a no-external-regret algorithm for some hypothesis class, there must also exist a no-swap-regret algorithm for that same class. For the problem of learning with expert advice, our result implies that it is possible to guarantee that the swap regret is bounded by ε after (log N)^{Õ(1/ε)} rounds and with O(N) per-iteration complexity, where N is the number of experts, while the classical reductions of Blum-Mansour and Stoltz-Lugosi require at least Ω(N/ε^{2}) rounds and at least Ω(N^{3}) total computational cost. Our result comes with an associated lower bound, which, in contrast to that of Blum-Mansour, holds for oblivious and ℓ_{1}-constrained adversaries and learners that can employ distributions over experts, showing that the number of rounds must be Ω(N/ε^{2}) or exponential in 1/ε. Our reduction implies that, if no-regret learning is possible in some game, then this game must have approximate correlated equilibria, of arbitrarily good approximation. This strengthens the folklore implication of no-regret learning that approximate coarse correlated equilibria exist. 
Importantly, it provides a sufficient condition for the existence of approximate correlated equilibria which vastly extends the requirement that the action set is finite or the requirement that the action set is compact and the utility functions are continuous, allowing for games with finite Littlestone or finite sequential fat-shattering dimension, thus answering a question left open in “Fast Rates for Nonparametric Online Learning: From Realizability to Learning in Games” and “Online Learning and Solving Infinite Games with an ERM Oracle”. Moreover, it answers several outstanding questions about equilibrium computation and/or learning in games. In particular, for constant values of ε: (a) we show that ε-approximate correlated equilibria in extensive-form games can be computed efficiently, advancing a long-standing open problem for extensive-form games; see e.g. “Extensive-Form Correlated Equilibrium: Definition and Computational Complexity” and “Polynomial-Time Linear-Swap Regret Minimization in Imperfect-Information Sequential Games”; (b) we show that the query and communication complexities of computing ε-approximate correlated equilibria in N-action normal-form games are N · poly log(N) and poly log N respectively, advancing an open problem of “Informational Bounds on Equilibria”; (c) we show that ε-approximate correlated equilibria of sparsity poly log N can be computed efficiently, advancing an open problem of “Simple Approximate Equilibria in Large Games”; (d) finally, we show that in the adversarial bandit setting, sublinear swap regret can be achieved in only Õ(N) rounds, advancing an open problem of “From External to Internal Regret” and “Tight Lower Bound and Efficient Reduction for Swap Regret”.
@InProceedings{STOC24p1216, author = {Yuval Dagan and Constantinos Daskalakis and Maxwell Fishelson and Noah Golowich}, title = {From External to Swap Regret 2.0: An Efficient Reduction for Large Action Spaces}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1216--1222}, doi = {10.1145/3618260.3649681}, year = {2024}, } Publisher's Version
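For contrast with the new reduction, the classical Blum-Mansour construction it improves upon is easy to sketch for N experts: run one external-regret (multiplicative-weights) copy per action, and each round play a stationary distribution of the row-stochastic matrix assembled from the copies' distributions. This is a sketch under my own parameter choices; class and variable names are mine.

```python
import numpy as np

def stationary(Q, iters=300):
    """Approximate a stationary distribution p = pQ by power iteration."""
    p = np.full(Q.shape[0], 1.0 / Q.shape[0])
    for _ in range(iters):
        p = p @ Q
    return p

class BlumMansour:
    """Classical swap-to-external-regret reduction: one multiplicative-weights
    copy per action; play the stationary distribution of the matrix whose
    i-th row is the distribution proposed by copy i."""
    def __init__(self, n, eta):
        self.n, self.eta = n, eta
        self.w = np.ones((n, n))  # row i = weight vector of copy i

    def play(self):
        Q = self.w / self.w.sum(axis=1, keepdims=True)
        self.p = stationary(Q)
        return self.p

    def update(self, loss):
        # copy i is charged the loss scaled by the probability p_i it governs
        self.w *= np.exp(-self.eta * np.outer(self.p, loss))

rng = np.random.default_rng(0)
n, T = 3, 2000
learner = BlumMansour(n, eta=0.05)
plays, losses = [], []
for _ in range(T):
    plays.append(learner.play())
    losses.append(rng.random(n))
    learner.update(losses[-1])
P, L = np.array(plays), np.array(losses)
alg_loss = (P * L).sum()
# best fixed swap function: independently reroute each action i to its best j
opt_swap = sum(min((P[:, i] * L[:, j]).sum() for j in range(n)) for i in range(n))
swap_regret = alg_loss - opt_swap
```

Note the per-round cost already scales with N here (an N × N weight matrix plus a stationary-distribution computation), which is exactly the dependence the paper's reduction removes.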

De, Anindya 
STOC '24: "Detecting Low-Degree Truncation ..."
Detecting Low-Degree Truncation
Anindya De, Huan Li, Shivam Nadimpalli, and Rocco A. Servedio (University of Pennsylvania, USA; Columbia University, USA) We consider the following basic, and very broad, statistical problem: Given a known high-dimensional distribution D over ℝ^{n} and a collection of data points in ℝ^{n}, distinguish between the two possibilities that (i) the data was drawn from D, versus (ii) the data was drawn from D_{S}, i.e., from D subject to truncation by an unknown truncation set S ⊆ ℝ^{n}. We study this problem in the setting where D is a high-dimensional i.i.d. product distribution and S is an unknown degree-d polynomial threshold function (one of the most well-studied types of Boolean-valued function over ℝ^{n}). Our main results are an efficient algorithm when D is a hypercontractive distribution, and a matching lower bound: 1. For any constant d, we give a polynomial-time algorithm which successfully distinguishes D from D_{S} using O(n^{d/2}) samples (subject to mild technical conditions on D and S); 2. Even for the simplest case of D being the uniform distribution over {±1}^{n}, we show that for any constant d, any distinguishing algorithm for degree-d polynomial threshold functions must use Ω(n^{d/2}) samples. @InProceedings{STOC24p1027, author = {Anindya De and Huan Li and Shivam Nadimpalli and Rocco A. Servedio}, title = {Detecting Low-Degree Truncation}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1027--1038}, doi = {10.1145/3618260.3649633}, year = {2024}, } Publisher's Version
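The simplest setting in item 2, D uniform over {±1}^n with a degree-d PTF truncation, can be illustrated with a toy moment-based distinguisher: under the uniform distribution every low-degree moment is 0, while truncation by a low-degree threshold shifts some of them. This is my simplification for intuition, not the paper's O(n^{d/2})-sample algorithm; the halfspace x_1 = 1 and all parameters are illustrative.

```python
import numpy as np
from itertools import combinations

def max_low_degree_moment(samples, d):
    """Largest |empirical E[prod_{i in S} x_i]| over nonempty S with |S| <= d.
    Under the uniform distribution on {-1,1}^n every such moment is 0."""
    n = samples.shape[1]
    best = 0.0
    for k in range(1, d + 1):
        for S in combinations(range(n), k):
            best = max(best, abs(samples[:, list(S)].prod(axis=1).mean()))
    return best

rng = np.random.default_rng(0)
n, m = 5, 4000
null = rng.choice([-1, 1], size=(m, n))      # samples from D itself
raw = rng.choice([-1, 1], size=(4 * m, n))
trunc = raw[raw[:, 0] == 1][:m]              # D truncated by the degree-1 PTF x_1 = 1
```

A degree-1 truncation already shifts a first moment (here E[x_1] jumps from 0 to 1), which is what thresholding this statistic detects.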

Debris-Alazard, Thomas
STOC '24: "Quantum Oblivious LWE Sampling ..."
Quantum Oblivious LWE Sampling and Insecurity of Standard Model Lattice-Based SNARKs
Thomas Debris-Alazard, Pouria Fallahpour, and Damien Stehlé (Inria, Laboratoire LIX, École Polytechnique, France; ENS Lyon, LIP, France; CryptoLab, France) The Learning With Errors (LWE) problem asks to find s from an input of the form (A, b = As+e) ∈ (ℤ/qℤ)^{m × n} × (ℤ/qℤ)^{m}, for a vector e that has small-magnitude entries. In this work, we do not focus on solving LWE but on the task of sampling instances. As these are extremely sparse in their range, it may seem plausible that the only way to proceed is to first create s and e and then set b = As+e. In particular, such an instance sampler knows the solution. This raises the question of whether it is possible to obliviously sample (A, As+e), namely, without knowing the underlying s. A variant of the assumption that oblivious LWE sampling is hard has been used in a series of works to analyze the security of candidate constructions of Succinct Non-interactive Arguments of Knowledge (SNARKs). As the assumption is related to LWE, these SNARKs have been conjectured to be secure in the presence of quantum adversaries. Our main result is a quantum polynomial-time algorithm that samples well-distributed LWE instances while provably not knowing the solution, under the assumption that LWE is hard. Moreover, the approach works for a vast range of LWE parametrizations, including those used in the above-mentioned SNARKs. This invalidates the assumptions used in their security analyses, although it does not yield attacks against the constructions themselves. @InProceedings{STOC24p423, author = {Thomas Debris-Alazard and Pouria Fallahpour and Damien Stehlé}, title = {Quantum Oblivious LWE Sampling and Insecurity of Standard Model Lattice-Based SNARKs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {423--434}, doi = {10.1145/3618260.3649766}, year = {2024}, } Publisher's Version
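The solution-aware sampler the abstract contrasts with (first create s and e, then set b = As + e) is straightforward to write down; the point of the paper is that a quantum sampler can provably avoid ever knowing s. A numpy sketch with illustrative parameters of my choosing (not those of the SNARK constructions):

```python
import numpy as np

def sample_lwe(n=8, m=16, q=3329, sigma=2.0, seed=0):
    """Standard (non-oblivious) LWE instance sampler: it knows the secret s.
    Returns (A, b, s, e) with b = A s + e mod q and e of small magnitude."""
    rng = np.random.default_rng(seed)
    A = rng.integers(0, q, size=(m, n))
    s = rng.integers(0, q, size=n)
    e = np.rint(rng.normal(0, sigma, size=m)).astype(int)  # rounded Gaussian error
    b = (A @ s + e) % q
    return A, b, s, e
```

Given s, the error is trivially recoverable as the centered residue of b − As mod q, which is exactly the "knowledge of the solution" that an oblivious sampler must not have.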

Dereziński, Michał 
STOC '24: "Optimal Embedding Dimension ..."
Optimal Embedding Dimension for Sparse Subspace Embeddings
Shabarish Chenakkod, Michał Dereziński, Xiaoyu Dong, and Mark Rudelson (University of Michigan, USA) A random m×n matrix S is an oblivious subspace embedding (OSE) with parameters ε>0, δ∈(0,1/3) and d≤m≤n, if for any d-dimensional subspace W⊆ℝ^{n}, P( ∀_{x∈W} (1+ε)^{−1}‖x‖ ≤ ‖Sx‖ ≤ (1+ε)‖x‖ ) ≥ 1−δ. It is known that the embedding dimension of an OSE must satisfy m≥d, and for any θ>0, a Gaussian embedding matrix with m≥(1+θ)d is an OSE with ε = O_{θ}(1). However, such an optimal embedding dimension is not known for other embeddings. Of particular interest are sparse OSEs, having s≪m nonzeros per column (Clarkson and Woodruff, STOC 2013), with applications to problems such as least squares regression and low-rank approximation. We show that, given any θ>0, an m×n random matrix S with m≥(1+θ)d consisting of randomly sparsified ±1/√s entries and having s=O(log^{4}(d)) nonzeros per column, is an oblivious subspace embedding with ε = O_{θ}(1). Our result addresses the main open question posed by Nelson and Nguyen (FOCS 2013), who conjectured that sparse OSEs can achieve m=O(d) embedding dimension, and it improves on m=O(d log(d)) shown by Cohen (SODA 2016). We use this to construct the first oblivious subspace embedding with O(d) embedding dimension that can be applied faster than current matrix multiplication time, and to obtain an optimal single-pass algorithm for least squares regression. We further extend our results to Leverage Score Sparsification (LESS), which is a recently introduced non-oblivious embedding technique. We use LESS to construct the first subspace embedding with low distortion ε=o(1) and optimal embedding dimension m=O(d/ε^{2}) that can be applied in current matrix multiplication time, addressing a question posed by Cherapanamjeri, Silwal, Woodruff and Zhou (SODA 2023).
@InProceedings{STOC24p1106, author = {Shabarish Chenakkod and Michał Dereziński and Xiaoyu Dong and Mark Rudelson}, title = {Optimal Embedding Dimension for Sparse Subspace Embeddings}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1106--1117}, doi = {10.1145/3618260.3649762}, year = {2024}, } Publisher's Version STOC '24: "Solving Dense Linear Systems ..." Solving Dense Linear Systems Faster Than via Preconditioning Michał Dereziński and Jiaming Yang (University of Michigan, USA) We give a stochastic optimization algorithm that solves a dense n×n real-valued linear system Ax=b, returning x such that ‖Ax−b‖ ≤ ε‖b‖, in time Õ((n^{2}+nk^{ω−1}) log(1/ε)), where k is the number of singular values of A larger than O(1) times its smallest positive singular value, ω < 2.372 is the matrix multiplication exponent, and Õ hides a factor polylogarithmic in n. When k=O(n^{1−θ}) (namely, A has a flat-tailed spectrum, e.g., due to noisy data or regularization), this improves on both the cost of solving the system directly, as well as on the cost of preconditioning an iterative method such as conjugate gradient. In particular, our algorithm has an Õ(n^{2}) runtime when k=O(n^{0.729}). We further adapt this result to sparse positive semidefinite matrices and least squares regression. Our main algorithm can be viewed as a randomized block coordinate descent method, where the key challenge is simultaneously ensuring good convergence and fast per-iteration time. In our analysis, we use the theory of majorization for elementary symmetric polynomials to establish a sharp convergence guarantee when coordinate blocks are sampled using a determinantal point process. We then use a Markov chain coupling argument to show that similar convergence can be attained with a cheaper sampling scheme, and accelerate the block coordinate descent update via matrix sketching.
@InProceedings{STOC24p1118, author = {Michał Dereziński and Jiaming Yang}, title = {Solving Dense Linear Systems Faster Than via Preconditioning}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1118--1129}, doi = {10.1145/3618260.3649694}, year = {2024}, } Publisher's Version
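The object analyzed in the first Chenakkod-Dereziński-Dong-Rudelson paper above, an m × n matrix of randomly sparsified ±1/√s entries with exactly s nonzeros per column and m proportional to d, is easy to instantiate. The sketch below probes the distortion on one vector of a random d-dimensional subspace; the toy parameters (and the constant 4 in m = 4d) are mine, not the paper's s = O(log^4 d) regime.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, s = 500, 10, 8
m = 4 * d  # embedding dimension proportional to d, the regime of the paper

# Sparse embedding: every column has exactly s nonzeros, each equal to +-1/sqrt(s)
S = np.zeros((m, n))
for j in range(n):
    rows = rng.choice(m, size=s, replace=False)
    S[rows, j] = rng.choice([-1.0, 1.0], size=s) / np.sqrt(s)

# Probe the distortion on one vector from a random d-dimensional subspace
W, _ = np.linalg.qr(rng.standard_normal((n, d)))  # orthonormal basis of W
x = W @ rng.standard_normal(d)
ratio = np.linalg.norm(S @ x) / np.linalg.norm(x)
```

Each column has unit norm by construction, so ‖Sx‖ is an unbiased estimate of ‖x‖ in expectation of its square; the OSE theorem is the much stronger statement that the distortion is uniformly small over the whole subspace.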

Dey, Dipan 
STOC '24: "Nearly Optimal Fault Tolerant ..."
Nearly Optimal Fault Tolerant Distance Oracle
Dipan Dey and Manoj Gupta (IIT Gandhinagar, India) We present an f-fault-tolerant distance oracle for an undirected weighted graph where each edge has an integral weight from [1 … W]. Given a set F of f edges, as well as a source node s and a destination node t, our oracle returns the shortest path from s to t avoiding F in O((cf log(nW))^{O(f^{2})}) time, where c > 1 is a constant. The space complexity of our oracle is O(f^{4}n^{2}log^{2}(nW)). For a constant f, our oracle is nearly optimal both in terms of space and time (barring some logarithmic factors). @InProceedings{STOC24p944, author = {Dipan Dey and Manoj Gupta}, title = {Nearly Optimal Fault Tolerant Distance Oracle}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {944--955}, doi = {10.1145/3618260.3649697}, year = {2024}, } Publisher's Version
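For intuition on the query the oracle answers: the naive baseline simply reruns a shortest-path computation with the edges of F removed, at a per-query cost that grows with the graph size, whereas the oracle answers in time polylogarithmic in n for constant f. A minimal Dijkstra-based sketch of that baseline (the graph encoding and names are mine):

```python
import heapq

def shortest_path_avoiding(adj, s, t, F):
    """Dijkstra from s to t that skips every (undirected) edge in F.
    adj: {u: [(v, w), ...]} adjacency lists; F: set of frozenset({u, v})."""
    dist = {s: 0}
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == t:
            return d
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in adj.get(u, []):
            if frozenset((u, v)) in F:
                continue  # faulted edge: pretend it is deleted
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")
```

For example, on a triangle a-b (1), b-c (1), a-c (5), the distance from a to c is 2, but 5 once the edge {a, b} is faulted.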

Dhar, Manik 
STOC '24: "Generalized GM-MDS: Polynomial ..."
Generalized GM-MDS: Polynomial Codes Are Higher Order MDS
Joshua Brakensiek, Manik Dhar, and Sivakanth Gopi (Independent, USA; Massachusetts Institute of Technology, USA; Microsoft Research, USA) The GM-MDS theorem, conjectured by Dau-Song-Dong-Yuen and proved by Lovett and Yildiz-Hassibi, shows that the generator matrices of Reed-Solomon codes can attain every possible configuration of zeros for an MDS code. The recently emerging theory of higher order MDS codes has connected the GM-MDS theorem to other important properties of Reed-Solomon codes, including showing that Reed-Solomon codes can achieve list decoding capacity, even over fields of size linear in the message length. A few works have extended the GM-MDS theorem to other families of codes, including Gabidulin and skew polynomial codes. In this paper, we generalize all these previous results by showing that the GM-MDS theorem applies to any polynomial code, i.e., a code where the columns of the generator matrix are obtained by evaluating linearly independent polynomials at different points. We also show that the GM-MDS theorem applies to dual codes of such polynomial codes, which is non-trivial since the dual of a polynomial code may not be a polynomial code. More generally, we show that the GM-MDS theorem also holds for algebraic codes (and their duals) where the columns of the generator matrix are chosen to be points on some irreducible variety which is not contained in a hyperplane through the origin. Our generalization has applications to constructing capacity-achieving list-decodable codes, as shown in a follow-up work [Brakensiek, Dhar, Gopi, Zhang; 2024], where it is proved that randomly punctured algebraic-geometric (AG) codes achieve list-decoding capacity over constant-sized fields.
@InProceedings{STOC24p728, author = {Joshua Brakensiek and Manik Dhar and Sivakanth Gopi}, title = {Generalized GM-MDS: Polynomial Codes Are Higher Order MDS}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {728--739}, doi = {10.1145/3618260.3649637}, year = {2024}, } Publisher's Version STOC '24: "AG Codes Achieve List Decoding ..." AG Codes Achieve List Decoding Capacity over Constant-Sized Fields Joshua Brakensiek, Manik Dhar, Sivakanth Gopi, and Zihan Zhang (Independent, USA; Massachusetts Institute of Technology, USA; Microsoft Research, USA; Ohio State University, USA) The recently emerging field of higher order MDS codes has sought to unify a number of concepts in coding theory. Such areas captured by higher order MDS codes include maximally recoverable (MR) tensor codes, codes with optimal list-decoding guarantees, and codes with constrained generator matrices (as in the GM-MDS theorem). By proving these equivalences, Brakensiek-Gopi-Makam showed the existence of optimally list-decodable Reed-Solomon codes over exponential-sized fields. Building on this, recent breakthroughs by Guo-Zhang and Alrabiah-Guruswami-Li have shown that randomly punctured Reed-Solomon codes achieve list-decoding capacity (which is a relaxation of optimal list-decodability) over linear-size fields. We extend these works by developing a formal theory of relaxed higher order MDS codes. In particular, we show that there are two inequivalent relaxations, which we call lower and upper relaxations. The lower relaxation is equivalent to relaxed optimal list-decodable codes and the upper relaxation is equivalent to relaxed MR tensor codes with a single parity check per column. We then generalize the techniques of Guo-Zhang and Alrabiah-Guruswami-Li to show that both these relaxations can be constructed over constant-size fields by randomly puncturing suitable algebraic-geometric codes. For this, we crucially use the generalized GM-MDS theorem for polynomial codes recently proved by Brakensiek-Dhar-Gopi.
We obtain the following corollaries from our main result: Randomly punctured algebraic-geometric codes of rate R are list-decodable up to radius (L/(L+1))(1−R−ε) with list size L over fields of size exp(O(L/ε)). In particular, they achieve list-decoding capacity with list size O(1/ε) and field size exp(O(1/ε^{2})). Prior to this work, AG codes were not even known to achieve list-decoding capacity. By randomly puncturing algebraic-geometric codes, we can construct relaxed MR tensor codes with a single parity check per column over constant-sized fields, whereas (non-relaxed) MR tensor codes require exponential field size. @InProceedings{STOC24p740, author = {Joshua Brakensiek and Manik Dhar and Sivakanth Gopi and Zihan Zhang}, title = {AG Codes Achieve List Decoding Capacity over Constant-Sized Fields}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {740--751}, doi = {10.1145/3618260.3649651}, year = {2024}, } Publisher's Version
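The notion of a polynomial code above (columns of the generator matrix obtained by evaluating linearly independent polynomials at distinct points) can be made concrete with a tiny Reed-Solomon example over F_7, where checking that every k columns are linearly independent is exactly the MDS property. This sketch only verifies MDS for one toy instance, not the GM-MDS zero-pattern statement; parameters and helper names are mine.

```python
from itertools import combinations

def det_mod(M, p):
    """Determinant of a square integer matrix over the prime field F_p."""
    M = [row[:] for row in M]
    n, det = len(M), 1
    for i in range(n):
        piv = next((r for r in range(i, n) if M[r][i] % p), None)
        if piv is None:
            return 0  # singular column: no pivot
        if piv != i:
            M[i], M[piv] = M[piv], M[i]
            det = -det  # row swap flips the sign
        det = det * M[i][i] % p
        inv = pow(M[i][i], p - 2, p)  # Fermat inverse of the pivot
        for r in range(i + 1, n):
            f = M[r][i] * inv % p
            for c in range(i, n):
                M[r][c] = (M[r][c] - f * M[i][c]) % p
    return det % p

p, k, points = 7, 2, [1, 2, 3, 4]
# Reed-Solomon generator: column j evaluates the monomials 1, x, ..., x^{k-1} at points[j]
G = [[pow(x, i, p) for x in points] for i in range(k)]
mds = all(det_mod([[G[i][j] for j in cols] for i in range(k)], p) != 0
          for cols in combinations(range(len(points)), k))
```

Here every k × k minor is a Vandermonde determinant on distinct points, hence nonzero, so `mds` holds; GM-MDS concerns which zero patterns such generator matrices can additionally be forced to carry.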

Diakonikolas, Ilias 
STOC '24: "Super Nonsingular Decompositions ..."
Super Nonsingular Decompositions of Polynomials and Their Application to Robustly Learning Low-Degree PTFs
Ilias Diakonikolas, Daniel M. Kane, Vasilis Kontonis, Sihan Liu, and Nikos Zarifis (University of Wisconsin-Madison, USA; University of California at San Diego, USA; University of Texas at Austin, USA) We study the efficient learnability of low-degree polynomial threshold functions (PTFs) in the presence of a constant fraction of adversarial corruptions. Our main algorithmic result is a polynomial-time PAC learning algorithm for this concept class in the strong contamination model under the Gaussian distribution with error guarantee O_{d, c}(opt^{1−c}), for any desired constant c>0, where opt is the fraction of corruptions. In the strong contamination model, an omniscient adversary can arbitrarily corrupt an opt-fraction of the data points and their labels. This model generalizes the malicious noise model and the adversarial label noise model. Prior to our work, known polynomial-time algorithms in this corruption model (or even in the weaker adversarial label noise model) achieved error Õ_{d}(opt^{1/(d+1)}), which deteriorates significantly as a function of the degree d. Our algorithm employs an iterative approach inspired by localization techniques previously used in the context of learning linear threshold functions. Specifically, we use a robust perceptron algorithm to compute a good partial classifier and then iterate on the unclassified points. In order to achieve this, we need to take a set defined by a number of polynomial inequalities and partition it into several well-behaved subsets. To this end, we develop new polynomial decomposition techniques that may be of independent interest. @InProceedings{STOC24p152, author = {Ilias Diakonikolas and Daniel M. Kane and Vasilis Kontonis and Sihan Liu and Nikos Zarifis}, title = {Super Nonsingular Decompositions of Polynomials and Their Application to Robustly Learning Low-Degree PTFs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {152--159}, doi = {10.1145/3618260.3649776}, year = {2024}, } Publisher's Version STOC '24: "Testing Closeness of Multivariate ..." Testing Closeness of Multivariate Distributions via Ramsey Theory Ilias Diakonikolas, Daniel M. Kane, and Sihan Liu (University of Wisconsin-Madison, USA; University of California at San Diego, USA) We investigate the statistical task of closeness (or equivalence) testing for multidimensional distributions. Specifically, given sample access to two unknown distributions p, q on ℝ^{d}, we want to distinguish between the case that p=q versus ‖p−q‖_{A_k} > ε, where ‖p−q‖_{A_k} denotes the generalized A_{k} distance between p and q, measuring the maximum discrepancy between the distributions over any collection of k disjoint, axis-aligned rectangles. Our main result is the first closeness tester for this problem with sub-learning sample complexity in any fixed dimension and a nearly-matching sample complexity lower bound. In more detail, we provide a computationally efficient closeness tester with sample complexity O((k^{6/7}/poly_{d}(ε)) log^{d}(k)). On the lower bound side, we establish a qualitatively matching sample complexity lower bound of Ω(k^{6/7}/poly(ε)), even for d=2. These sample complexity bounds are surprising because the sample complexity of the problem in the univariate setting is Θ(k^{4/5}/poly(ε)). This has the interesting consequence that the jump from one to two dimensions leads to a substantial increase in sample complexity, while increases beyond that do not.
As a corollary of our general A_{k} tester, we obtain d_{TV}-closeness testers for pairs of k-histograms on ℝ^{d} over a common unknown partition, and pairs of uniform distributions supported on the union of k unknown disjoint axis-aligned rectangles. Both our algorithm and our lower bound make essential use of tools from Ramsey theory. @InProceedings{STOC24p340, author = {Ilias Diakonikolas and Daniel M. Kane and Sihan Liu}, title = {Testing Closeness of Multivariate Distributions via Ramsey Theory}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {340--347}, doi = {10.1145/3618260.3649657}, year = {2024}, } Publisher's Version
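In one dimension, the generalized A_k distance in the second abstract amounts to maximizing the total discrepancy over k disjoint intervals. A brute-force sketch over a small discrete support, illustrating the distance itself rather than the paper's sample-based tester (the encoding and names are mine):

```python
from itertools import combinations

def ak_distance(p, q, k):
    """||p - q||_{A_k} for distributions on a common discrete 1-D support:
    the maximum over k pairwise-disjoint intervals I of sum_I |p(I) - q(I)|.
    Brute force over all interval families; only for tiny supports."""
    m = len(p)
    intervals = [(a, b) for a in range(m) for b in range(a, m)]  # [a, b] inclusive
    def mass(w, iv):
        a, b = iv
        return sum(w[a:b + 1])
    best = 0.0
    for chosen in combinations(intervals, k):
        # keep only pairwise-disjoint interval families
        if all(x[1] < y[0] or y[1] < x[0] for x, y in combinations(chosen, 2)):
            best = max(best, sum(abs(mass(p, iv) - mass(q, iv)) for iv in chosen))
    return best
```

For p = (1/2, 1/2, 0, 0) and q = (0, 0, 1/2, 1/2), one interval already sees discrepancy 1, and two disjoint intervals see the full 2 = 2·d_TV(p, q), which is why A_k discrepancy grows with k.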

Dikstein, Yotam 
STOC '24: "Agreement Theorems for High ..."
Agreement Theorems for High Dimensional Expanders in the Low Acceptance Regime: The Role of Covers
Yotam Dikstein and Irit Dinur (Institute for Advanced Study, Princeton, USA; Weizmann Institute of Science, Israel) Let X be a family of k-element subsets of [n] and let {f_{s}:s→Σ : s∈ X} be an ensemble of local functions, each defined over a subset s⊂ [n]. Is there a global function G:[n]→Σ such that f_{s} = G|_{s} for all s∈ X? An agreement test is a randomized property tester for this question. One such test is the V-test, which chooses a random pair of sets s_{1},s_{2}∈ X with prescribed intersection size and accepts if f_{s_{1}},f_{s_{2}} agree on the elements in s_{1}∩ s_{2}. The low acceptance (or 1%) regime is concerned with the situation that the test succeeds with low but non-negligible probability Agree({f_{s}}) ≥ ε>0. A “classical” low acceptance agreement theorem says Agree({f_{s}}) > ε ⟹ ∃ G:[n]→Σ, Pr_{s}[f_{s} ≈_{0.99} G|_{s}] ≥ poly(ε). (*) Such statements are motivated by PCP questions. The case X={s ⊆ [n] : |s|=k} is well-studied and known as “direct product testing”, which is related to the parallel repetition theorem. Finding sparser families X that satisfy (*) is known as derandomized direct product testing. Prior to this work, the sparsest family satisfying (*) had |X|≈ n^{25}, and we show X with |X|≈ n^{2}. We study the general behavior of high dimensional expanders with respect to agreement tests in the low acceptance regime. High dimensional expanders, even very sparse ones with |X|=O(n), are known to satisfy the high acceptance variant (where ε =1−o(1)). It has been an open challenge to analyze the low acceptance regime. Surprisingly, topological covers of X play an important role. We show that: If X has no connected covers, then (*) holds, provided that X satisfies an additional expansion property, called swap cosystolic expansion. If X has a connected cover, then (*) fails.
If X has a connected cover (and swap cosystolic expansion), we replace (*) by a statement that takes covers into account: Agree({f_{s}}) > ε ⟹ ∃ cover ρ:Y↠X and G:Y(0)→Σ such that Pr_{s̃↠s}[f_{s} ≈_{0.99} G|_{s̃}] ≥ poly(ε), (**) where s̃↠s means that ρ(s̃)=s. Our main result is a proof of (LFD) for complexes X whose links are spherical buildings. The property of swap cosystolic expansion holds for quotients of the Bruhat-Tits buildings. As a corollary we derive (*) for X being a spherical building, yielding a derandomized family with |X| ≈ n^{2}. We also derive (**) for LSV complexes X, for which |X|=O(n). @InProceedings{STOC24p1967, author = {Yotam Dikstein and Irit Dinur}, title = {Agreement Theorems for High Dimensional Expanders in the Low Acceptance Regime: The Role of Covers}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1967--1977}, doi = {10.1145/3618260.3649685}, year = {2024}, } Publisher's Version STOC '24: "Swap Cosystolic Expansion ..." Swap Cosystolic Expansion Yotam Dikstein and Irit Dinur (Institute for Advanced Study, Princeton, USA; Weizmann Institute of Science, Israel) We introduce and study swap cosystolic expansion, a new expansion property of simplicial complexes. We prove lower bounds for the swap coboundary expansion of spherical buildings and use them to lower bound the swap cosystolic expansion of the LSV Ramanujan complexes. Our motivation is the recent work (in a companion paper) showing that swap cosystolic expansion implies agreement theorems. Together the two works show that these complexes support agreement tests in the low acceptance regime. We also study the closely related swap coboundary expansion. Swap cosystolic expansion is defined by considering, for a given complex X, its faces complex, whose vertices are the r-faces of X and where two vertices are connected if their disjoint union is also a face in X. The faces complex is a derandomization of the product of X with itself r times.
The graph underlying the faces complex is the swap walk of X, known to have excellent spectral expansion. The swap cosystolic expansion of X is defined to be the cosystolic expansion of its faces complex. Our main result is an exp(−O(√r)) lower bound on the swap coboundary expansion of the spherical building and the swap cosystolic expansion of the LSV complexes. For more general coboundary expanders we show a weaker lower bound of exp(−O(r)). @InProceedings{STOC24p1956, author = {Yotam Dikstein and Irit Dinur}, title = {Swap Cosystolic Expansion}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1956--1966}, doi = {10.1145/3618260.3649780}, year = {2024}, } Publisher's Version
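The V-test from the first abstract can be sketched for the dense case X = all k-subsets of [n]: sample pairs of sets with a prescribed intersection size and check that the two local functions agree on the overlap. An ensemble obtained by restricting a global function passes with probability 1; the parameters and names below are mine.

```python
import random
from itertools import combinations

def v_test_agreement(X, f, t, trials, seed=0):
    """Estimate Agree({f_s}): sample pairs s1, s2 in X with |s1 ∩ s2| = t
    and check that f[s1] and f[s2] agree on every element of the intersection."""
    rng = random.Random(seed)
    pairs = [(s1, s2) for s1 in X for s2 in X
             if s1 != s2 and len(s1 & s2) == t]
    hits = 0
    for _ in range(trials):
        s1, s2 = rng.choice(pairs)
        hits += all(f[s1][x] == f[s2][x] for x in s1 & s2)
    return hits / trials

n, k, t = 6, 3, 2
X = [frozenset(c) for c in combinations(range(n), k)]
G = {x: x % 2 for x in range(n)}                 # a global function [n] -> {0, 1}
f = {s: {x: G[x] for x in s} for s in X}         # consistent local ensemble f_s = G|_s
```

The agreement theorems quoted above run this implication in reverse: even when the estimated agreement is only a small ε, a global function (or, on complexes with connected covers, a function on a cover) must explain a poly(ε) fraction of the ensemble.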

Ding, Jingqiu 
STOC '24: "Private Graphon Estimation ..."
Private Graphon Estimation via Sum-of-Squares
Hongjie Chen, Jingqiu Ding, Tommaso D'Orsi, Yiding Hua, Chih-Hung Liu, and David Steurer (ETH Zurich, Switzerland; Bocconi University, Italy; National Taiwan University, Taiwan) We develop the first pure node-differentially-private algorithms for learning stochastic block models and for graphon estimation with polynomial running time for any constant number of blocks. The statistical utility guarantees match those of the previous best information-theoretic (exponential-time) node-private mechanisms for these problems. The algorithm is based on an exponential mechanism for a score function defined in terms of a sum-of-squares relaxation whose level depends on the number of blocks. The key ingredients of our results are (1) a characterization of the distance between the block graphons in terms of a quadratic optimization over the polytope of doubly stochastic matrices, (2) a general sum-of-squares convergence result for polynomial optimization over arbitrary polytopes, and (3) a general approach to perform Lipschitz extensions of score functions as part of the sum-of-squares algorithmic paradigm. @InProceedings{STOC24p172, author = {Hongjie Chen and Jingqiu Ding and Tommaso D'Orsi and Yiding Hua and Chih-Hung Liu and David Steurer}, title = {Private Graphon Estimation via Sum-of-Squares}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {172--182}, doi = {10.1145/3618260.3649643}, year = {2024}, } Publisher's Version

Dinur, Irit 
STOC '24: "Agreement Theorems for High ..."
Agreement Theorems for High Dimensional Expanders in the Low Acceptance Regime: The Role of Covers
Yotam Dikstein and Irit Dinur (Institute for Advanced Study, Princeton, USA; Weizmann Institute of Science, Israel) Let X be a family of k-element subsets of [n] and let {f_{s}:s→Σ : s∈ X} be an ensemble of local functions, each defined over a subset s⊂ [n]. Is there a global function G:[n]→Σ such that f_{s} = G|_{s} for all s∈ X? An agreement test is a randomized property tester for this question. One such test is the V-test, which chooses a random pair of sets s_{1},s_{2}∈ X with prescribed intersection size and accepts if f_{s_{1}},f_{s_{2}} agree on the elements in s_{1}∩ s_{2}. The low acceptance (or 1%) regime is concerned with the situation that the test succeeds with low but non-negligible probability Agree({f_{s}}) ≥ ε>0. A “classical” low acceptance agreement theorem says Agree({f_{s}}) > ε ⟹ ∃ G:[n]→Σ, Pr_{s}[f_{s} ≈_{0.99} G|_{s}] ≥ poly(ε). (*) Such statements are motivated by PCP questions. The case X={s ⊆ [n] : |s|=k} is well-studied and known as “direct product testing”, which is related to the parallel repetition theorem. Finding sparser families X that satisfy (*) is known as derandomized direct product testing. Prior to this work, the sparsest family satisfying (*) had |X|≈ n^{25}, and we show X with |X|≈ n^{2}. We study the general behavior of high dimensional expanders with respect to agreement tests in the low acceptance regime. High dimensional expanders, even very sparse ones with |X|=O(n), are known to satisfy the high acceptance variant (where ε =1−o(1)). It has been an open challenge to analyze the low acceptance regime. Surprisingly, topological covers of X play an important role. We show that: If X has no connected covers, then (*) holds, provided that X satisfies an additional expansion property, called swap cosystolic expansion. If X has a connected cover, then (*) fails.
If X has a connected cover (and swap cosystolic expansion), we replace (*) by a statement that takes covers into account: Agree({f_{s}}) > ε ⟹ ∃ cover ρ:Y↠X and G:Y(0)→Σ such that Pr_{s̃↠s}[f_{s} ≈_{0.99} G|_{s̃}] ≥ poly(ε), (**) where s̃↠s means that ρ(s̃)=s. Our main result is a proof of (LFD) for complexes X whose links are spherical buildings. The property of swap cosystolic expansion holds for quotients of the Bruhat-Tits buildings. As a corollary we derive (*) for X being a spherical building, yielding a derandomized family with |X| ≈ n^{2}. We also derive (**) for LSV complexes X, for which |X|=O(n). @InProceedings{STOC24p1967, author = {Yotam Dikstein and Irit Dinur}, title = {Agreement Theorems for High Dimensional Expanders in the Low Acceptance Regime: The Role of Covers}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1967--1977}, doi = {10.1145/3618260.3649685}, year = {2024}, } Publisher's Version STOC '24: "Swap Cosystolic Expansion ..." Swap Cosystolic Expansion Yotam Dikstein and Irit Dinur (Institute for Advanced Study, Princeton, USA; Weizmann Institute of Science, Israel) We introduce and study swap cosystolic expansion, a new expansion property of simplicial complexes. We prove lower bounds for the swap coboundary expansion of spherical buildings and use them to lower bound the swap cosystolic expansion of the LSV Ramanujan complexes. Our motivation is the recent work (in a companion paper) showing that swap cosystolic expansion implies agreement theorems. Together the two works show that these complexes support agreement tests in the low acceptance regime. We also study the closely related swap coboundary expansion. Swap cosystolic expansion is defined by considering, for a given complex X, its faces complex, whose vertices are the r-faces of X and where two vertices are connected if their disjoint union is also a face in X. The faces complex is a derandomization of the product of X with itself r times.
The graph underlying the faces complex is the swap walk of X, known to have excellent spectral expansion. The swap cosystolic expansion of X is defined to be the cosystolic expansion of its faces complex. Our main result is an exp(−O(√r)) lower bound on the swap coboundary expansion of the spherical building and the swap cosystolic expansion of the LSV complexes. For more general coboundary expanders we show a weaker lower bound of exp(−O(r)). @InProceedings{STOC24p1956, author = {Yotam Dikstein and Irit Dinur}, title = {Swap Cosystolic Expansion}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1956--1966}, doi = {10.1145/3618260.3649780}, year = {2024}, } Publisher's Version

Dobzinski, Shahar 
STOC '24: "A Constant-Factor Approximation ..."
A Constant-Factor Approximation for Nash Social Welfare with Subadditive Valuations
Shahar Dobzinski, Wenzheng Li, Aviad Rubinstein, and Jan Vondrák (Weizmann Institute of Science, Israel; Stanford University, USA) We present a constant-factor approximation algorithm for the Nash Social Welfare (NSW) maximization problem with subadditive valuations accessible via demand queries. More generally, we propose a framework for NSW optimization which assumes two subroutines that (1) solve a configuration-type LP under certain additional conditions, and (2) round the fractional solution with respect to utilitarian social welfare. In particular, a constant-factor approximation for submodular valuations with value queries can also be derived from our framework. @InProceedings{STOC24p467, author = {Shahar Dobzinski and Wenzheng Li and Aviad Rubinstein and Jan Vondrák}, title = {A Constant-Factor Approximation for Nash Social Welfare with Subadditive Valuations}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {467--478}, doi = {10.1145/3618260.3649740}, year = {2024}, } Publisher's Version STOC '24: "Bilateral Trade with Correlated ..." Bilateral Trade with Correlated Values Shahar Dobzinski and Ariel Shaulker (Weizmann Institute of Science, Israel) We study the bilateral trade problem, where a seller owns a single indivisible item and a potential buyer seeks to purchase it. Previous mechanisms for this problem only considered the case where the values of the buyer and the seller are drawn from independent distributions. In contrast, this paper studies bilateral trade mechanisms when the values are drawn from a joint distribution. We prove that the buyer-offering mechanism guarantees an approximation ratio of e/(e−1) ≈ 1.582 to the social welfare even if the values are drawn from a joint distribution. The buyer-offering mechanism is Bayesian incentive compatible, but the seller has a dominant strategy.
We prove the buyer-offering mechanism is optimal in the sense that no Bayesian mechanism where one of the players has a dominant strategy can obtain an approximation ratio better than e/(e−1). We also show that no mechanism in which both sides have a dominant strategy can provide any constant approximation to the social welfare when the values are drawn from a joint distribution. Finally, we prove some impossibility results on the power of general Bayesian incentive compatible mechanisms. In particular, we show that no deterministic Bayesian incentive-compatible mechanism can provide an approximation ratio better than 1 + (ln 2)/2 ≈ 1.346. @InProceedings{STOC24p237, author = {Shahar Dobzinski and Ariel Shaulker}, title = {Bilateral Trade with Correlated Values}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {237--246}, doi = {10.1145/3618260.3649659}, year = {2024}, } Publisher's Version 
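To make the welfare objective concrete, here is a minimal Monte Carlo sketch of a buyer-offering-style mechanism. The i.i.d. uniform value distribution and the half-value offer `p = b / 2` are illustrative assumptions for this sketch only; the paper studies correlated values and an optimally chosen buyer offer.

```python
import random

random.seed(0)

def buyer_offer_welfare(trials=20000):
    """Buyer posts a price p; the seller accepts iff p >= seller's value.
    Welfare is the buyer's value if trade happens, else the seller's value.
    Returns the estimated ratio (first-best welfare) / (mechanism welfare)."""
    total_mech, total_opt = 0.0, 0.0
    for _ in range(trials):
        s = random.random()        # seller's value (illustrative U[0,1])
        b = random.random()        # buyer's value (illustrative U[0,1])
        p = b / 2                  # a simple hypothetical offer, not the optimal one
        welfare = b if p >= s else s
        total_mech += welfare
        total_opt += max(s, b)     # first-best: item goes to the higher value
    return total_opt / total_mech

print(buyer_offer_welfare())       # approximation-ratio estimate
```

Even this crude offer stays well below the e/(e−1) ≈ 1.582 bound on this toy distribution, since trade only fails when the seller's value is moderately high.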

Dong, Ruiwen 
STOC '24: "Semigroup Algorithmic Problems ..."
Semigroup Algorithmic Problems in Metabelian Groups
Ruiwen Dong (Saarland University, Saarbrücken, Germany) We consider semigroup algorithmic problems in finitely generated metabelian groups. Our paper focuses on three decision problems introduced by Choffrut and Karhumäki (2005): the Identity Problem (does a semigroup contain a neutral element?), the Group Problem (is a semigroup a group?) and the Inverse Problem (does a semigroup contain the inverse of a generator?). We show that all three problems are decidable for finitely generated subsemigroups of finitely generated metabelian groups. In particular, we establish a correspondence between polynomial semirings and subsemigroups of metabelian groups using an interaction of graph theory, convex polytopes, algebraic geometry and number theory. Since the Semigroup Membership problem (does a semigroup contain a given element?) is known to be undecidable in finitely generated metabelian groups, our result completes the decidability characterization of semigroup algorithmic problems in metabelian groups. @InProceedings{STOC24p884, author = {Ruiwen Dong}, title = {Semigroup Algorithmic Problems in Metabelian Groups}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {884--891}, doi = {10.1145/3618260.3649609}, year = {2024}, } Publisher's Version 

Dong, Xiaoyu 
STOC '24: "Optimal Embedding Dimension ..."
Optimal Embedding Dimension for Sparse Subspace Embeddings
Shabarish Chenakkod , Michał Dereziński , Xiaoyu Dong , and Mark Rudelson (University of Michigan, USA) A random m × n matrix S is an oblivious subspace embedding (OSE) with parameters є > 0, δ ∈ (0,1/3) and d ≤ m ≤ n, if for any d-dimensional subspace W ⊆ R^{n}, P( ∀_{x ∈ W} (1+є)^{−1}‖x‖ ≤ ‖Sx‖ ≤ (1+є)‖x‖ ) ≥ 1−δ. It is known that the embedding dimension of an OSE must satisfy m ≥ d, and for any θ > 0, a Gaussian embedding matrix with m ≥ (1+θ)d is an OSE with є = O_{θ}(1). However, such an optimal embedding dimension is not known for other embeddings. Of particular interest are sparse OSEs, having s ≪ m nonzeros per column (Clarkson and Woodruff, STOC 2013), with applications to problems such as least squares regression and low-rank approximation. We show that, given any θ > 0, an m × n random matrix S with m ≥ (1+θ)d consisting of randomly sparsified ±1/√s entries and having s = O(log^{4}(d)) nonzeros per column, is an oblivious subspace embedding with є = O_{θ}(1). Our result addresses the main open question posed by Nelson and Nguyen (FOCS 2013), who conjectured that sparse OSEs can achieve m = O(d) embedding dimension, and it improves on m = O(d log(d)) shown by Cohen (SODA 2016). We use this to construct the first oblivious subspace embedding with O(d) embedding dimension that can be applied faster than current matrix multiplication time, and to obtain an optimal single-pass algorithm for least squares regression. We further extend our results to Leverage Score Sparsification (LESS), which is a recently introduced non-oblivious embedding technique. We use LESS to construct the first subspace embedding with low distortion є = o(1) and optimal embedding dimension m = O(d/є^{2}) that can be applied in current matrix multiplication time, addressing a question posed by Cherapanamjeri, Silwal, Woodruff and Zhou (SODA 2023). 
@InProceedings{STOC24p1106, author = {Shabarish Chenakkod and Michał Dereziński and Xiaoyu Dong and Mark Rudelson}, title = {Optimal Embedding Dimension for Sparse Subspace Embeddings}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1106--1117}, doi = {10.1145/3618260.3649762}, year = {2024}, } Publisher's Version 
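The sparse sign embeddings described above are easy to experiment with numerically. A minimal sketch follows; the dimensions and the sparsity s are illustrative toy choices, far from the paper's asymptotic regime.

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_ose(m, n, s, rng):
    """Sparse sign embedding: each column has s nonzeros equal to +/- 1/sqrt(s),
    placed in s uniformly random rows."""
    S = np.zeros((m, n))
    for j in range(n):
        rows = rng.choice(m, size=s, replace=False)   # nonzero positions
        signs = rng.choice([-1.0, 1.0], size=s)       # random signs
        S[rows, j] = signs / np.sqrt(s)
    return S

# Embed a random 20-dimensional subspace of R^2000 into R^200.
n, d, m, s = 2000, 20, 200, 8
U, _ = np.linalg.qr(rng.standard_normal((n, d)))      # orthonormal basis of W
S = sparse_ose(m, n, s, rng)
# Distortion over the subspace: the singular values of S @ U should be near 1.
sv = np.linalg.svd(S @ U, compute_uv=False)
print(sv.min(), sv.max())
```

Since U has orthonormal columns, the extreme singular values of S @ U bound the distortion (1+є)^{−1}‖x‖ ≤ ‖Sx‖ ≤ (1+є)‖x‖ over the whole subspace at once.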

Döring, Simon 
STOC '24: "Counting Small Induced Subgraphs ..."
Counting Small Induced Subgraphs with Edge-Monotone Properties
Simon Döring , Dániel Marx , and Philip Wellnitz (MPI-INF, Germany; Saarbrücken Graduate School of Computer Science, Saarbrücken, Germany; Saarland Informatics Campus, Saarbrücken, Germany; CISPA Helmholtz Center for Information Security, Germany) We study the parameterized complexity of #IndSub(Φ), where given a graph G and an integer k, the task is to count the number of induced subgraphs on k vertices that satisfy the graph property Φ. Focke and Roth [STOC 2022] completely characterized the complexity for each Φ that is a hereditary property (that is, closed under vertex deletions): #IndSub(Φ) is #W[1]-hard except in the degenerate cases when every graph satisfies Φ or only finitely many graphs satisfy Φ. We complement this result with a classification for each Φ that is edge-monotone (that is, closed under edge deletions): #IndSub(Φ) is #W[1]-hard except in the degenerate case when there are only finitely many integers k such that Φ is nontrivial on k-vertex graphs. Our result generalizes earlier results for specific properties Φ that are related to the connectivity or density of the graph. Further, we extend the #W[1]-hardness result by a lower bound which shows that #IndSub(Φ) cannot be solved in time f(k) · |V(G)|^{o(√(log k) / log log k)} for any function f, unless the Exponential-Time Hypothesis (ETH) fails. For many natural properties, we obtain even a tight bound f(k) · |V(G)|^{o(k)}; for example, this is the case for every property Φ that is nontrivial on k-vertex graphs for each k greater than some k_{0}. @InProceedings{STOC24p1517, author = {Simon Döring and Dániel Marx and Philip Wellnitz}, title = {Counting Small Induced Subgraphs with Edge-Monotone Properties}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1517--1525}, doi = {10.1145/3618260.3649644}, year = {2024}, } Publisher's Version 

Doron, Dean 
STOC '24: "Opening Up the Distinguisher: ..."
Opening Up the Distinguisher: A Hardness to Randomness Approach for BPL=L That Uses Properties of BPL
Dean Doron , Edward Pyne , and Roei Tell (Ben-Gurion University of the Negev, Israel; Massachusetts Institute of Technology, USA; University of Toronto, Canada) We provide compelling evidence for the potential of hardness-vs.-randomness approaches to make progress on the long-standing problem of derandomizing space-bounded computation. Our first contribution is a derandomization of bounded-space machines from hardness assumptions for classes of uniform deterministic algorithms, for which strong (but non-matching) lower bounds can be unconditionally proved. We prove one such result for showing that BPL=L “on average”, and another similar result for showing that BPSPACE[O(n)]=DSPACE[O(n)]. Next, we significantly improve the main results of prior works on hardness-vs.-randomness for logspace. As one of our results, we relax the assumptions needed for derandomization with minimal memory footprint (i.e., showing BPSPACE[S] ⊆ DSPACE[c · S] for a small constant c), by completely eliminating a cryptographic assumption that was needed in prior work. A key contribution underlying all of our results is non-black-box use of the descriptions of space-bounded Turing machines when proving hardness-to-randomness results. That is, the crucial point allowing us to prove our results is that we use properties that are specific to space-bounded machines. @InProceedings{STOC24p2039, author = {Dean Doron and Edward Pyne and Roei Tell}, title = {Opening Up the Distinguisher: A Hardness to Randomness Approach for BPL=L That Uses Properties of BPL}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {2039--2049}, doi = {10.1145/3618260.3649772}, year = {2024}, } Publisher's Version 

D'Orsi, Tommaso 
STOC '24: "Private Graphon Estimation ..."
Private Graphon Estimation via Sum-of-Squares
Hongjie Chen , Jingqiu Ding , Tommaso D'Orsi , Yiding Hua , Chih-Hung Liu , and David Steurer (ETH Zurich, Switzerland; Bocconi University, Italy; National Taiwan University, Taiwan) We develop the first pure node-differentially-private algorithms for learning stochastic block models and for graphon estimation with polynomial running time for any constant number of blocks. The statistical utility guarantees match those of the previous best information-theoretic (exponential-time) node-private mechanisms for these problems. The algorithm is based on an exponential mechanism for a score function defined in terms of a sum-of-squares relaxation whose level depends on the number of blocks. The key ingredients of our results are (1) a characterization of the distance between the block graphons in terms of a quadratic optimization over the polytope of doubly stochastic matrices, (2) a general sum-of-squares convergence result for polynomial optimization over arbitrary polytopes, and (3) a general approach to perform Lipschitz extensions of score functions as part of the sum-of-squares algorithmic paradigm. @InProceedings{STOC24p172, author = {Hongjie Chen and Jingqiu Ding and Tommaso D'Orsi and Yiding Hua and Chih-Hung Liu and David Steurer}, title = {Private Graphon Estimation via Sum-of-Squares}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {172--182}, doi = {10.1145/3618260.3649643}, year = {2024}, } Publisher's Version 

Dreier, Jan 
STOC '24: "Flip-Breakability: A Combinatorial ..."
Flip-Breakability: A Combinatorial Dichotomy for Monadically Dependent Graph Classes
Jan Dreier , Nikolas Mählmann , and Szymon Toruńczyk (TU Wien, Austria; University of Bremen, Bremen, Germany; University of Warsaw, Poland) A conjecture in algorithmic model theory predicts that the model-checking problem for first-order logic is fixed-parameter tractable on a hereditary graph class if and only if the class is monadically dependent. Originating in model theory, this notion is defined in terms of logic, and encompasses nowhere dense classes, monadically stable classes, and classes of bounded twin-width. Working towards this conjecture, we provide the first two combinatorial characterizations of monadically dependent graph classes. This yields the following dichotomy. On the structure side, we characterize monadic dependence by a Ramsey-theoretic property called flip-breakability. This notion generalizes the notions of uniform quasi-wideness, flip-flatness, and bounded grid rank, which characterize nowhere denseness, monadic stability, and bounded twin-width, respectively, and played a key role in their respective model checking algorithms. Natural restrictions of flip-breakability additionally characterize bounded treewidth and clique-width, and bounded tree-depth and shrub-depth. On the non-structure side, we characterize monadic dependence by explicitly listing a few families of forbidden induced subgraphs. This result is analogous to the characterization of nowhere denseness via forbidden subdivided cliques, and allows us to resolve one half of the motivating conjecture: first-order model checking is AW[*]-hard on every hereditary graph class that is monadically independent. The result moreover implies that hereditary graph classes which are small, have almost bounded twin-width, or have almost bounded flip-width, are monadically dependent. Lastly, we lift our result to also obtain a combinatorial dichotomy in the more general setting of monadically dependent classes of binary structures. 
@InProceedings{STOC24p1550, author = {Jan Dreier and Nikolas Mählmann and Szymon Toruńczyk}, title = {Flip-Breakability: A Combinatorial Dichotomy for Monadically Dependent Graph Classes}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1550--1560}, doi = {10.1145/3618260.3649739}, year = {2024}, } Publisher's Version 

Dughmi, Shaddin 
STOC '24: "Limitations of Stochastic ..."
Limitations of Stochastic Selection Problems with Pairwise Independent Priors
Shaddin Dughmi , Yusuf Hakan Kalayci , and Neel Patel (University of Southern California, USA) Motivated by the growing interest in correlation-robust stochastic optimization, we investigate stochastic selection problems beyond independence. Specifically, we consider the instructive case of pairwise-independent priors and matroid constraints. We obtain essentially optimal bounds for contention resolution and prophet inequalities. The impetus for our work comes from the recent work of Caragiannis et al. [WINE 2022], who derived a constant factor approximation for the single-choice prophet inequality with pairwise-independent priors. For general matroids, our results are tight and largely negative. For both contention resolution and prophet inequalities, our impossibility results hold for the full linear matroid over a finite field. We explicitly construct pairwise-independent distributions which rule out an ω(1/)-balanced offline CRS and an ω(1/log)-competitive prophet inequality against the (usual) oblivious adversary. For both results, we employ a generic approach for constructing pairwise-independent random vectors, one which unifies and generalizes existing pairwise-independence constructions from the literature on universal hash functions and pseudorandomness. Specifically, our approach is based on our observation that random linear maps turn linear independence into stochastic independence. We then examine the class of matroids which satisfy the so-called partition property; these include most common matroids encountered in optimization. We obtain positive results for both online contention resolution and prophet inequalities with pairwise-independent priors on such matroids, approximately matching the corresponding guarantees for fully independent priors. These algorithmic results hold against the almighty adversary for both problems. 
@InProceedings{STOC24p479, author = {Shaddin Dughmi and Yusuf Hakan Kalayci and Neel Patel}, title = {Limitations of Stochastic Selection Problems with Pairwise Independent Priors}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {479--490}, doi = {10.1145/3618260.3649718}, year = {2024}, } Publisher's Version 
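The observation that linear maps turn linear independence into stochastic independence can be illustrated with the classic GF(2) construction from the universal-hashing literature (a sketch of that standard construction, not the paper's matroid-specific one): any two distinct nonzero vectors are linearly independent over GF(2), so the bits ⟨v, x⟩ for a uniformly random seed x are pairwise independent.

```python
import itertools

k = 4
# All distinct nonzero vectors in GF(2)^k: any two are linearly independent.
vecs = [v for v in itertools.product([0, 1], repeat=k) if any(v)]

def bits(x):
    # Linear map applied to the seed x: output <v, x> mod 2 for each v.
    return [sum(a * b for a, b in zip(v, x)) % 2 for v in vecs]

# Enumerate the uniform seed x exactly instead of sampling.
samples = [bits(x) for x in itertools.product([0, 1], repeat=k)]
n = len(samples)
# Each output bit is uniform...
for i in range(len(vecs)):
    assert sum(s[i] for s in samples) * 2 == n
# ...and every pair of output bits is independent: each of the 4 joint
# outcomes occurs exactly n/4 times.
for i in range(len(vecs)):
    for j in range(i + 1, len(vecs)):
        for (a, b) in itertools.product([0, 1], repeat=2):
            count = sum(1 for s in samples if s[i] == a and s[j] == b)
            assert count * 4 == n
print(f"{len(vecs)} pairwise-independent bits from {k} truly random bits")
```

The exhaustive check works because for distinct nonzero v_i, v_j the map x ↦ (⟨v_i,x⟩, ⟨v_j,x⟩) is a surjective linear map, so each joint outcome is hit equally often.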

Dwivedi, Prateek 
STOC '24: "Learning the Coefficients: ..."
Learning the Coefficients: A Presentable Version of Border Complexity and Applications to Circuit Factoring
C. S. Bhargav , Prateek Dwivedi , and Nitin Saxena (IIT Kanpur, India) The border, or approximative, model of algebraic computation (VP) is quite popular due to the Geometric Complexity Theory (GCT) approach to the P≠NP conjecture, and its complex analytic origins. On the flip side, the definition of the border is inherently existential in the field constants that the model employs. In particular, a poly-size border circuit C(ε, x) cannot be compactly presented in reality, as the limit parameter ε may require exponential precision. In this work we resolve this issue by giving a constructive, or presentable, version of border circuits and state its applications. We make the border presentable by restricting the circuit C to use only those constants, in the function field F_{q}(ε), that it can generate by the ring operations on {ε} ∪ F_{q}, and their division, within a poly-size circuit. This model is more expressive than VP as it affords exponential degree in ε; and analogous to the usual border, we define new border classes called VP_{ε} and VNP_{ε}. We prove that both these (now called presentable border) classes lie in VNP. Such a 'debordering' result is not known for the classical border classes VP and, respectively, VNP. We pose VP_{ε}=VP as a new conjecture to study the border. The heart of our technique is a newly formulated exponential interpolation over a finite field, to bound the Boolean complexity of the coefficients before deducing the algebraic complexity. It attacks two factorization problems which were open before. We make progress on (Conj. 8.3 in Bürgisser 2000, FOCS 2001) and solve (Conj. 2.1 in Bürgisser 2000; Chou, Kumar, Solomon CCC 2018) over all finite fields: 1. Each poly-degree irreducible factor, with multiplicity coprime to the field characteristic, of a poly-size circuit (of possibly exponential degree), is in VNP. 2. For all finite fields, and all factors, VNP is closed under factoring. Consequently, factors of VP are always in VNP. 
The prime characteristic cases were open before due to the inseparability obstruction (i.e., when the multiplicity is not coprime to q). @InProceedings{STOC24p130, author = {C. S. Bhargav and Prateek Dwivedi and Nitin Saxena}, title = {Learning the Coefficients: A Presentable Version of Border Complexity and Applications to Circuit Factoring}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {130--140}, doi = {10.1145/3618260.3649743}, year = {2024}, } Publisher's Version 

Dwork, Cynthia 
STOC '24: "Complexity-Theoretic Implications ..."
Complexity-Theoretic Implications of Multicalibration
Sílvia Casacuberta , Cynthia Dwork , and Salil Vadhan (University of Oxford, United Kingdom; Harvard University, USA) We present connections between the recent literature on multigroup fairness for prediction algorithms and classical results in computational complexity. Multiaccurate predictors are correct in expectation on each member of an arbitrary collection of pre-specified sets. Multicalibrated predictors satisfy a stronger condition: they are calibrated on each set in the collection. Multiaccuracy is equivalent to a regularity notion for functions defined by Trevisan, Tulsiani, and Vadhan (2009). They showed that, given a class F of (possibly simple) functions, an arbitrarily complex function g can be approximated by a low-complexity function h that makes a small number of oracle calls to members of F, where the notion of approximation requires that h cannot be distinguished from g by members of F. This complexity-theoretic Regularity Lemma is known to have implications in different areas, including in complexity theory, additive number theory, information theory, graph theory, and cryptography. Starting from the stronger notion of multicalibration, we obtain stronger and more general versions of a number of applications of the Regularity Lemma, including the Hardcore Lemma, the Dense Model Theorem, and the equivalence of conditional pseudo-min-entropy and unpredictability. For example, we show that every Boolean function (regardless of its hardness) has a small collection of disjoint hardcore sets, where the sizes of those hardcore sets are related to how balanced the function is on corresponding pieces of an efficient partition of the domain. @InProceedings{STOC24p1071, author = {Sílvia Casacuberta and Cynthia Dwork and Salil Vadhan}, title = {Complexity-Theoretic Implications of Multicalibration}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1071--1082}, doi = {10.1145/3618260.3649748}, year = {2024}, } Publisher's Version 

Efremenko, Klim 
STOC '24: "Lower Bounds for Regular Resolution ..."
Lower Bounds for Regular Resolution over Parities
Klim Efremenko , Michal Garlík , and Dmitry Itsykson (Ben-Gurion University of the Negev, Israel; Imperial College London, United Kingdom) The proof system resolution over parities (Res(⊕)) operates with disjunctions of linear equations (linear clauses) over GF(2); it extends the resolution proof system by incorporating linear algebra over GF(2). Over the years, several exponential lower bounds on the size of tree-like refutations have been established. However, proving a superpolynomial lower bound on the size of dag-like Res(⊕) refutations remains a highly challenging open question. We prove an exponential lower bound for regular Res(⊕). Regular Res(⊕) is a subsystem of dag-like Res(⊕) that naturally extends regular resolution. This is the first known superpolynomial lower bound for a fragment of dag-like Res(⊕) which is exponentially stronger than tree-like Res(⊕). In the regular regime, resolving linear clauses C_{1} and C_{2} on a linear form f is permitted only if, for both i ∈ {1,2}, the linear form f does not lie within the linear span of all linear forms that were used in resolution rules during the derivation of C_{i}. Namely, we show that the size of any regular Res(⊕) refutation of the binary pigeonhole principle BPHP_{n}^{n+1} is at least 2^{Ω(n^{1/3}/log n)}. A corollary of our result is an exponential lower bound on the size of a strongly read-once linear branching program solving a search problem. This resolves an open question raised by Gryaznov, Pudlák, and Talebanfard (CCC 2022). As a byproduct of our technique, we prove that the size of any tree-like Res(⊕) refutation of the weak binary pigeonhole principle BPHP_{n}^{m} is at least 2^{Ω(n)} using Prover-Delayer games. We also give a direct proof of a width lower bound: we show that any dag-like Res(⊕) refutation of BPHP_{n}^{m} contains a linear clause C with Ω(n) linearly independent equations. 
@InProceedings{STOC24p640, author = {Klim Efremenko and Michal Garlík and Dmitry Itsykson}, title = {Lower Bounds for Regular Resolution over Parities}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {640--651}, doi = {10.1145/3618260.3649652}, year = {2024}, } Publisher's Version 

Ekbatani, Farbod 
STOC '24: "Prophet Inequalities with ..."
Prophet Inequalities with Cancellation Costs
Farbod Ekbatani , Rad Niazadeh , Pranav Nuti , and Jan Vondrák (University of Chicago, USA; Stanford University, USA) Most of the literature on online algorithms and sequential decision-making focuses on settings with “irrevocable decisions” where the algorithm’s decision upon arrival of the new input is set in stone and can never change in the future. One canonical example is the classic prophet inequality problem, where realizations of a sequence of independent random variables X_{1}, X_{2}, … with known distributions are drawn one by one and a decision maker decides when to stop and accept the arriving random variable, with the goal of maximizing the expected value of their pick. We consider “prophet inequalities with recourse” in the linear buyback cost setting, where after accepting a variable X_{i}, we can still discard X_{i} later and accept another variable X_{j}, at a buyback cost of f × X_{i}. The goal is to maximize the expected net reward, which is the value of the final accepted variable minus the total buyback cost. Our first main result is an optimal prophet inequality in the regime of f ≥ 1, where we prove that we can achieve an expected reward (1+f)/(1+2f) times the expected offline optimum. The problem is still open for 0 < f < 1 and we give some partial results in this regime. In particular, as our second main result, we characterize the asymptotic behavior of the competitive ratio for small f and provide almost matching upper and lower bounds that show a factor of 1 − Θ(f log(1/f)). Our results are obtained by two fundamentally different approaches: One is inspired by various proofs of the classical prophet inequality, while the second is based on combinatorial optimization techniques involving LP duality, flows, and cuts. 
@InProceedings{STOC24p1247, author = {Farbod Ekbatani and Rad Niazadeh and Pranav Nuti and Jan Vondrák}, title = {Prophet Inequalities with Cancellation Costs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1247--1258}, doi = {10.1145/3618260.3649786}, year = {2024}, } Publisher's Version 
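A naive greedy policy makes the net-reward objective concrete. This sketch is purely illustrative (i.i.d. uniform arrivals and a hypothetical swap-when-profitable rule); it is not the paper's optimal algorithm achieving (1+f)/(1+2f).

```python
import random

def greedy_buyback(values, f):
    """Hold at most one item; swap to a new value x only when the immediate
    net gain x - held - f * held is positive. Returns the net reward:
    final held value minus the total buyback cost paid along the way."""
    held = 0.0
    cost = 0.0
    for x in values:
        if held == 0.0:
            held = x                  # accept the first arrival for free
        elif x > (1.0 + f) * held:
            cost += f * held          # pay f * held to discard the held item
            held = x
    return held - cost

random.seed(1)
f = 2.0
ratios = []
for _ in range(2000):
    xs = [random.random() for _ in range(10)]    # i.i.d. U[0,1] arrivals
    ratios.append(greedy_buyback(xs, f) / max(xs))
avg = sum(ratios) / len(ratios)
print(avg)   # empirical (net reward) / (prophet's value) per instance
```

Each swap strictly increases the running net reward, so the policy's net reward always lies in (0, max of the sequence]; the empirical average gives a feel for how costly cancellations are when f is large.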

Ellis, David 
STOC '24: "Product Mixing in Compact ..."
Product Mixing in Compact Lie Groups
David Ellis , Guy Kindler , Noam Lifshitz , and Dor Minzer (University of Bristol, United Kingdom; Hebrew University of Jerusalem, Israel; Massachusetts Institute of Technology, USA) If G is a group, we say a subset S of G is product-free if the equation xy=z has no solutions with x, y, z ∈ S. In 1985, Babai and Sós asked, for a finite group G, how large a subset S ⊆ G can be if it is product-free. The main tool (hitherto) for studying this problem has been the notion of a quasi-random group. For D ∈ ℕ, a group G is said to be D-quasi-random if the minimal dimension of a nontrivial complex irreducible representation of G is at least D. Gowers showed that in a D-quasi-random finite group G, the maximal size of a product-free set is at most |G|/D^{1/3}. This disproved a longstanding conjecture of Babai and Sós from 1985. For the special unitary group, G = SU(n), Gowers observed that his argument yields an upper bound of n^{−1/3} on the measure of a measurable product-free subset. In this paper, we improve Gowers’ upper bound to exp(−cn^{1/3}), where c > 0 is an absolute constant. In fact, we establish something stronger, namely, product-mixing for measurable subsets of SU(n) with measure at least exp(−cn^{1/3}); for this product-mixing result, the n^{1/3} in the exponent is sharp. Our approach involves introducing novel hypercontractive inequalities, which imply that the non-Abelian Fourier spectrum of the indicator function of a small set concentrates on high-dimensional irreducible representations. Our hypercontractive inequalities are obtained via methods from representation theory, harmonic analysis, random matrix theory and differential geometry. We generalize our hypercontractive inequalities from SU(n) to an arbitrary D-quasi-random compact connected Lie group for D at least an absolute constant, thereby extending our results on product-free sets to such groups. 
We also demonstrate various other applications of our inequalities to geometry (viz., non-Abelian Brunn-Minkowski type inequalities), mixing times, and the theory of growth in compact Lie groups. A subsequent work due to Arunachalam, Girish and Lifshitz uses our inequalities to establish new separation results between classical and quantum communication complexity. @InProceedings{STOC24p1415, author = {David Ellis and Guy Kindler and Noam Lifshitz and Dor Minzer}, title = {Product Mixing in Compact Lie Groups}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1415--1422}, doi = {10.1145/3618260.3649626}, year = {2024}, } Publisher's Version 

Esfandiari, Hossein 
STOC '24: "Optimal Communication Bounds ..."
Optimal Communication Bounds for Classic Functions in the Coordinator Model and Beyond
Hossein Esfandiari , Praneeth Kacham , Vahab Mirrokni , David P. Woodruff , and Peilin Zhong (Google, United Kingdom; Carnegie Mellon University, USA; Google Research, USA) In the coordinator model of communication with s servers, given an arbitrary nonnegative function f, we study the problem of approximating the sum ∑_{i ∈ [n]} f(x_{i}) up to a 1 ± ε factor. Here the vector x ∈ ℝ^{n} is defined to be x = x(1) + ⋯ + x(s), where x(j) ≥ 0 denotes the nonnegative vector held by the j-th server. A special case of the problem is when f(x) = x^{k}, which corresponds to the well-studied problem of F_{k} moment estimation in the distributed communication model. We introduce a new parameter c_{f}[s] which captures the communication complexity of approximating ∑_{i ∈ [n]} f(x_{i}), and for a broad class of functions f, which includes f(x) = x^{k} for k ≥ 2 and other robust functions such as the Huber loss function, we give a two-round protocol that uses total communication c_{f}[s]/ε^{2} bits, up to polylogarithmic factors. For this broad class of functions, our result improves upon the communication bounds achieved by Kannan, Vempala, and Woodruff (COLT 2014) and Woodruff and Zhang (STOC 2012), obtaining the optimal communication up to polylogarithmic factors in the minimum number of rounds. We show that our protocol can also be used for approximating higher-order correlations. Our results are part of a broad framework for optimally sampling from a joint distribution in terms of the marginal distributions held on individual servers. Apart from the coordinator model, algorithms for other graph topologies in which each node is a server have been extensively studied. We argue that directly lifting protocols from the coordinator model to other graph topologies will require some nodes in the graph to send a lot of communication. Hence, a natural question is the type of problems that can be efficiently solved in general graph topologies. 
We address this question by giving communication-efficient protocols in the so-called personalized CONGEST model for solving linear regression and low-rank approximation by designing composable sketches. Our sketch construction may be of independent interest and can implement any importance sampling procedure that has a monotonicity property. @InProceedings{STOC24p1911, author = {Hossein Esfandiari and Praneeth Kacham and Vahab Mirrokni and David P. Woodruff and Peilin Zhong}, title = {Optimal Communication Bounds for Classic Functions in the Coordinator Model and Beyond}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1911--1922}, doi = {10.1145/3618260.3649742}, year = {2024}, } Publisher's Version 

Evert, Eric 
STOC '24: "New Tools for Smoothed Analysis: ..."
New Tools for Smoothed Analysis: Least Singular Value Bounds for Random Matrices with Dependent Entries
Aditya Bhaskara , Eric Evert , Vaidehi Srinivas , and Aravindan Vijayaraghavan (University of Utah, USA; Northwestern University, USA) We develop new techniques for proving lower bounds on the least singular value of random matrices with limited randomness. The matrices we consider have entries that are given by polynomials of a few underlying base random variables. This setting captures a core technical challenge for obtaining smoothed analysis guarantees in many algorithmic settings. Least singular value bounds often involve showing strong anti-concentration inequalities that are intricate and much less understood compared to concentration (or large deviation) bounds. First, we introduce a general technique for proving anti-concentration that uses well-conditionedness properties of the Jacobian of a polynomial map, and show how to combine this with a hierarchical є-net argument to prove least singular value bounds. Our second tool is a new statement about least singular values to reason about higher-order lifts of smoothed matrices and the action of linear operators on them. Apart from getting simpler proofs of existing smoothed analysis results, we use these tools to now handle more general families of random matrices. This allows us to produce smoothed analysis guarantees in several previously open settings. These new settings include smoothed analysis guarantees for power sum decompositions and certifying robust entanglement of subspaces, where prior work could only establish least singular value bounds for fully random instances or only show non-robust genericity guarantees. @InProceedings{STOC24p375, author = {Aditya Bhaskara and Eric Evert and Vaidehi Srinivas and Aravindan Vijayaraghavan}, title = {New Tools for Smoothed Analysis: Least Singular Value Bounds for Random Matrices with Dependent Entries}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {375--386}, doi = {10.1145/3618260.3649765}, year = {2024}, } Publisher's Version Info 

Fallahpour, Pouria 
STOC '24: "Quantum Oblivious LWE Sampling ..."
Quantum Oblivious LWE Sampling and Insecurity of Standard Model Lattice-Based SNARKs
Thomas Debris-Alazard , Pouria Fallahpour , and Damien Stehlé (Inria, Laboratoire LIX, École Polytechnique, France; ENS Lyon, LIP, France; CryptoLab, France) The Learning With Errors (LWE) problem asks to find s from an input of the form (A, b = As+e) ∈ (ℤ/qℤ)^{m × n} × (ℤ/qℤ)^{m}, for a vector e that has small-magnitude entries. In this work, we do not focus on solving LWE but on the task of sampling instances. As these are extremely sparse in their range, it may seem plausible that the only way to proceed is to first create s and e and then set b = As+e. In particular, such an instance sampler knows the solution. This raises the question whether it is possible to obliviously sample (A, As+e), namely, without knowing the underlying s. A variant of the assumption that oblivious LWE sampling is hard has been used in a series of works to analyze the security of candidate constructions of Succinct Non-interactive Arguments of Knowledge (SNARKs). As the assumption is related to LWE, these SNARKs have been conjectured to be secure in the presence of quantum adversaries. Our main result is a quantum polynomial-time algorithm that samples well-distributed LWE instances while provably not knowing the solution, under the assumption that LWE is hard. Moreover, the approach works for a vast range of LWE parametrizations, including those used in the above-mentioned SNARKs. This invalidates the assumptions used in their security analyses, although it does not yield attacks against the constructions themselves. @InProceedings{STOC24p423, author = {Thomas Debris-Alazard and Pouria Fallahpour and Damien Stehlé}, title = {Quantum Oblivious LWE Sampling and Insecurity of Standard Model Lattice-Based SNARKs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {423--434}, doi = {10.1145/3618260.3649766}, year = {2024}, } Publisher's Version 
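For contrast, the standard (non-oblivious) sampler the abstract alludes to is trivial to write down, and it manifestly knows the solution s it embeds. A minimal sketch with toy parameters (the modulus 3329 and the noise bound are illustrative choices, not tied to any particular scheme):

```python
import numpy as np

rng = np.random.default_rng(0)

def lwe_sample(n, m, q, noise_bound, rng):
    """Standard (non-oblivious) LWE sampler: it creates s and e first,
    so it knows the solution of the instance it outputs."""
    A = rng.integers(0, q, size=(m, n))                       # uniform matrix
    s = rng.integers(0, q, size=n)                            # secret
    e = rng.integers(-noise_bound, noise_bound + 1, size=m)   # small-magnitude errors
    b = (A @ s + e) % q
    return (A, b), (s, e)

n, m, q = 8, 16, 3329
(A, b), (s, e) = lwe_sample(n, m, q, noise_bound=3, rng=rng)
# The instance is consistent with the known solution:
print(np.array_equal((b - A @ s) % q, e % q))  # → True
```

The point of the paper is exactly that this coupling of instance and solution is avoidable: a quantum sampler can output (A, b) with the right distribution while provably not knowing s.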

Fang, Yuting 
STOC '24: "No Complete Problem for Constant-Cost ..."
No Complete Problem for Constant-Cost Randomized Communication
Yuting Fang , Lianna Hambardzumyan , Nathaniel Harms , and Pooya Hatami (Ohio State University, USA; Hebrew University of Jerusalem, Israel; EPFL, Lausanne, Switzerland) We prove that the class of communication problems with public-coin randomized constant-cost protocols, called BPP^{0}, does not contain a complete problem. In other words, there is no randomized constant-cost problem Q ∈ BPP^{0}, such that all other problems P ∈ BPP^{0} can be computed by a constant-cost deterministic protocol with access to an oracle for Q. We also show that the k-Hamming Distance problems form an infinite hierarchy within BPP^{0}. Previously, it was known only that Equality is not complete for BPP^{0}. We introduce a new technique, using Ramsey theory, that can prove lower bounds against arbitrary oracles in BPP^{0}, and more generally, we show that k-Hamming Distance matrices cannot be expressed as a Boolean combination of any constant number of matrices which forbid large Greater-Than subproblems. @InProceedings{STOC24p1287, author = {Yuting Fang and Lianna Hambardzumyan and Nathaniel Harms and Pooya Hatami}, title = {No Complete Problem for Constant-Cost Randomized Communication}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1287--1298}, doi = {10.1145/3618260.3649716}, year = {2024}, } Publisher's Version 

Fearnley, John 
STOC '24: "The Complexity of Computing ..."
The Complexity of Computing KKT Solutions of Quadratic Programs
John Fearnley , Paul W. Goldberg , Alexandros Hollender , and Rahul Savani (University of Liverpool, United Kingdom; University of Oxford, United Kingdom; Alan Turing Institute, United Kingdom) It is well known that solving a (nonconvex) quadratic program is NP-hard. We show that the problem remains hard even if we are only looking for a Karush-Kuhn-Tucker (KKT) point, instead of a global optimum. Namely, we prove that computing a KKT point of a quadratic polynomial over the domain [0,1]^{n} is complete for the class CLS = PPAD ∩ PLS. @InProceedings{STOC24p892, author = {John Fearnley and Paul W. Goldberg and Alexandros Hollender and Rahul Savani}, title = {The Complexity of Computing KKT Solutions of Quadratic Programs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {892--903}, doi = {10.1145/3618260.3649647}, year = {2024}, } Publisher's Version 
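For the box domain [0,1]^{n} in the abstract, the KKT conditions reduce to a simple sign condition on the gradient at each coordinate. A minimal checker sketching this (our own toy, not the paper's machinery; `is_kkt_point` is a hypothetical name):

```python
def is_kkt_point(grad, x, tol=1e-9):
    """KKT conditions for minimizing a smooth f over the box [0,1]^n,
    with g = grad f(x): at a KKT point, g_i >= 0 wherever x_i = 0,
    g_i <= 0 wherever x_i = 1, and g_i = 0 at interior coordinates."""
    for g_i, x_i in zip(grad(x), x):
        if x_i <= tol and g_i < -tol:          # could decrease f by raising x_i
            return False
        if x_i >= 1 - tol and g_i > tol:       # could decrease f by lowering x_i
            return False
        if tol < x_i < 1 - tol and abs(g_i) > tol:  # interior: gradient must vanish
            return False
    return True

# Nonconvex quadratic f(x, y) = -x*y over [0,1]^2; grad f = (-y, -x).
grad = lambda v: (-v[1], -v[0])
assert is_kkt_point(grad, (1.0, 1.0))        # corner minimum of -xy
assert not is_kkt_point(grad, (1.0, 0.0))    # gradient pushes out of the box
```

Verifying a candidate KKT point is easy, as above; the hardness result concerns *finding* one.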

Fei, Yumou 
STOC '24: "Distribution-Free Testing ..."
Distribution-Free Testing of Decision Lists with a Sublinear Number of Queries
Xi Chen , Yumou Fei , and Shyamal Patel (Columbia University, USA; Peking University, China) We give a distribution-free testing algorithm for decision lists with Õ(n^{11/12}/ε^{3}) queries. This is the first sublinear algorithm for this problem, which shows that, unlike halfspaces, testing is strictly easier than learning for decision lists. Complementing the algorithm, we show that any distribution-free tester for decision lists must make Ω(√n) queries, or draw Ω(n) samples when the algorithm is sample-based. @InProceedings{STOC24p1051, author = {Xi Chen and Yumou Fei and Shyamal Patel}, title = {Distribution-Free Testing of Decision Lists with a Sublinear Number of Queries}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1051--1062}, doi = {10.1145/3618260.3649717}, year = {2024}, } Publisher's Version 

Feldman, Michal 
STOC '24: "Fair Division via Quantile ..."
Fair Division via Quantile Shares
Yakov Babichenko , Michal Feldman , Ron Holzman , and Vishnu V. Narayan (Technion, Israel; Tel Aviv University, Israel) We consider the problem of fair division, where a set of indivisible goods should be distributed fairly among a set of agents with combinatorial valuations. To capture fairness, we adopt the notion of shares, where each agent is entitled to a fair share, based on some fairness criterion, and an allocation is considered fair if the value of every agent (weakly) exceeds her fair share. A share-based notion is considered universally feasible if it admits a fair allocation for every profile of monotone valuations. A major question arises: is there a non-trivial share-based notion that is universally feasible? The most well-known share-based notions, namely the proportional share and the maximin share, are not universally feasible, nor are any constant approximations of them. We propose a novel share notion, where an agent assesses the fairness of a bundle by comparing it to her valuation in a random allocation. In this framework, a bundle is considered q-quantile fair, for q∈[0,1], if it is at least as good as a bundle obtained in a uniformly random allocation with probability at least q. Our main question is whether there exists a constant value of q for which the q-quantile share is universally feasible. Our main result establishes a strong connection between the feasibility of quantile shares and the classical Erdős Matching Conjecture. Specifically, we show that if a version of this conjecture is true, then the 1/2e-quantile share is universally feasible. Furthermore, we provide unconditional feasibility results for additive, unit-demand and matroid-rank valuations for constant values of q. Finally, we discuss the implications of our results for other share notions. @InProceedings{STOC24p1235, author = {Yakov Babichenko and Michal Feldman and Ron Holzman and Vishnu V. Narayan}, title = {Fair Division via Quantile Shares}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1235--1246}, doi = {10.1145/3618260.3649728}, year = {2024}, } Publisher's Version STOC '24: "Algorithmic Contract Design ..." Algorithmic Contract Design (Keynote) Michal Feldman (Tel Aviv University, Israel) Algorithmic contract design is a new frontier at the interface of economics and computation, studying scenarios where a principal delegates the execution of a costly project to an agent or a team of agents, and incentivizes them through a contract that specifies payments contingent on the project's success. This domain has gained increasing interest from the theoretical computer science community, particularly in the realm of combinatorial contracts. In this talk, I will survey two distinct models of combinatorial contracts, each illustrating unique sources of complexity encountered in contract design. The first model allows an agent to select from a set of possible actions, examining the intricate dependencies among these actions. The second model involves motivating a team of agents, focusing on the interdependencies within various agent combinations. I will present both (approximation) algorithms and hardness results concerning the optimal contract problem in these settings. The talk is based on joint work with Paul Duetting, Tomer Ezra, Yoav Gal-Tzur, Thomas Kesselheim and Maya Schlesinger. @InProceedings{STOC24p2, author = {Michal Feldman}, title = {Algorithmic Contract Design (Keynote)}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {2--2}, doi = {10.1145/3618260.3664273}, year = {2024}, } Publisher's Version 

Feldman, Moran 
STOC '24: "Constrained Submodular Maximization ..."
Constrained Submodular Maximization via New Bounds for DR-Submodular Functions
Niv Buchbinder and Moran Feldman (Tel Aviv University, Israel; University of Haifa, Israel) Submodular maximization under various constraints is a fundamental problem studied continuously, in both computer science and operations research, since the late 1970s. A central technique in this field is to approximately optimize the multilinear extension of the submodular objective, and then round the solution. The use of this technique requires a solver able to approximately maximize multilinear extensions. Following a long line of work, Buchbinder and Feldman (2019) described such a solver guaranteeing 0.385-approximation for down-closed constraints, while Oveis Gharan and Vondrák (2011) showed that no solver can guarantee better than 0.478-approximation. In this paper, we present a solver guaranteeing 0.401-approximation, which significantly reduces the gap between the best known solver and the inapproximability result. The design and analysis of our solver are based on a novel bound that we prove for DR-submodular functions. This bound improves over a previous bound due to Feldman et al. (2011) that is used by essentially all state-of-the-art results for constrained maximization of general submodular/DR-submodular functions. Hence, we believe that our new bound is likely to find many additional applications in related problems, and to be a key component for further improvement. @InProceedings{STOC24p1820, author = {Niv Buchbinder and Moran Feldman}, title = {Constrained Submodular Maximization via New Bounds for DR-Submodular Functions}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1820--1831}, doi = {10.1145/3618260.3649630}, year = {2024}, } Publisher's Version 
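The multilinear extension mentioned in the abstract is the expectation F(x) = E[f(R)], where R contains element i independently with probability x_i. A Monte-Carlo sketch on a toy submodular function (all names are our own; the abstract's solvers optimize F approximately and then round, which this snippet does not attempt):

```python
import random

def multilinear_extension(f, x, samples=20000, rng=None):
    """Estimate F(x) = E[f(R)], where the random set R includes element i
    independently with probability x[i]. Plain Monte-Carlo estimator."""
    rng = rng or random.Random(0)
    n = len(x)
    total = 0.0
    for _ in range(samples):
        R = {i for i in range(n) if rng.random() < x[i]}
        total += f(R)
    return total / samples

# Toy budget-capped coverage function, submodular: f(S) = min(|S|, 2).
f = lambda S: min(len(S), 2)
est = multilinear_extension(f, [0.5, 0.5, 0.5])
# Exact value: E[min(Bin(3, 1/2), 2)] = 3/8 + 2*(3/8 + 1/8) = 11/8.
assert abs(est - 11/8) < 0.05
```

With x the indicator vector of a set S, F(x) = f(S) exactly, which is why F is called an extension.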

Feng, Yiding 
STOC '24: "Strategic Budget Selection ..."
Strategic Budget Selection in a Competitive Autobidding World
Yiding Feng , Brendan Lucier , and Aleksandrs Slivkins (University of Chicago, USA; Microsoft Research, USA) We study a game played between advertisers in an online ad platform. The platform sells ad impressions by first-price auction and provides autobidding algorithms that optimize bids on each advertiser's behalf, subject to advertiser constraints such as budgets. Crucially, these constraints are strategically chosen by the advertisers. The chosen constraints define an "inner" budget-pacing game for the autobidders. Advertiser payoffs in the constraint-choosing "meta-game" are determined by the equilibrium reached by the autobidders. Advertiser preferences can be more general than what is implied by their constraints: we assume only that they have weakly decreasing marginal value for clicks and weakly increasing marginal disutility for spending money. Nevertheless, we show that at any pure Nash equilibrium of the meta-game, the resulting allocation obtains at least half of the liquid welfare of any allocation, and this bound is tight. We also obtain a 4-approximation for any mixed Nash equilibrium or Bayes-Nash equilibrium. These results rely on the power to declare budgets: if advertisers can specify only a (linear) value per click or an ROI target but not a budget constraint, the approximation factor at equilibrium can be as bad as linear in the number of advertisers. @InProceedings{STOC24p213, author = {Yiding Feng and Brendan Lucier and Aleksandrs Slivkins}, title = {Strategic Budget Selection in a Competitive Autobidding World}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {213--224}, doi = {10.1145/3618260.3649688}, year = {2024}, } Publisher's Version 

Filos-Ratsikas, Aris 
STOC '24: "PPAD-Membership for Problems ..."
PPAD-Membership for Problems with Exact Rational Solutions: A General Approach via Convex Optimization
Aris Filos-Ratsikas , Kristoffer Arnsfelt Hansen , Kasper Høgh , and Alexandros Hollender (University of Edinburgh, United Kingdom; Aarhus University, Aarhus, Denmark; University of Oxford, United Kingdom) We introduce a general technique for proving membership of search problems with exact rational solutions in PPAD, one of the most well-known classes containing total search problems with polynomial-time verifiable solutions. In particular, we construct a "pseudo-gate", coined the linear-OPT-gate, which can be used as a "plug-and-play" component in a piecewise-linear (PL) arithmetic circuit, as an integral component of the "Linear-FIXP" equivalent definition of the class. The linear-OPT-gate can solve several convex optimization programs, including quadratic programs, which often appear organically in the simplest existence proofs for these problems. This effectively transforms existence proofs to PPAD-membership proofs, and consequently establishes the existence of solutions described by rational numbers. Using the linear-OPT-gate, we are able to significantly simplify and generalize almost all known PPAD-membership proofs for finding exact solutions in the application domains of game theory, competitive markets, autobidding auctions, and fair division, as well as to obtain new PPAD-membership results for problems in these domains. @InProceedings{STOC24p1204, author = {Aris Filos-Ratsikas and Kristoffer Arnsfelt Hansen and Kasper Høgh and Alexandros Hollender}, title = {PPAD-Membership for Problems with Exact Rational Solutions: A General Approach via Convex Optimization}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1204--1215}, doi = {10.1145/3618260.3649645}, year = {2024}, } Publisher's Version 

Fineman, Jeremy T. 
STOC '24: "Single-Source Shortest Paths ..."
Single-Source Shortest Paths with Negative Real Weights in Õ(𝑚𝑛^{8/9}) Time
Jeremy T. Fineman (Georgetown University, USA) This paper presents a randomized algorithm for single-source shortest paths on directed graphs with real (both positive and negative) edge weights. Given an input graph with n vertices and m edges, the algorithm completes in Õ(mn^{8/9}) time with high probability. For real-weighted graphs, this result constitutes the first asymptotic improvement over the classic O(mn)-time algorithm variously attributed to Shimbel, Bellman, Ford, and Moore. @InProceedings{STOC24p3, author = {Jeremy T. Fineman}, title = {Single-Source Shortest Paths with Negative Real Weights in <i>Õ(𝑚𝑛<sup>8/9</sup>)</i> Time}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {3--14}, doi = {10.1145/3618260.3649614}, year = {2024}, } Publisher's Version 
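The classic O(mn) baseline the abstract improves upon is the Bellman-Ford relaxation scheme. A minimal textbook sketch (this is the baseline only, not the paper's randomized Õ(mn^{8/9}) algorithm):

```python
def bellman_ford(n, edges, source):
    """Classic O(mn) single-source shortest paths for real (possibly
    negative) edge weights on a directed graph given as an edge list
    (u, v, w). Returns None if a negative cycle is reachable from the
    source; otherwise the list of shortest-path distances."""
    INF = float("inf")
    dist = [INF] * n
    dist[source] = 0.0
    for _ in range(n - 1):              # n-1 rounds of relaxing every edge
        changed = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                changed = True
        if not changed:                 # early exit once distances settle
            break
    for u, v, w in edges:               # one extra pass detects negative cycles
        if dist[u] + w < dist[v]:
            return None
    return dist

edges = [(0, 1, 4.0), (0, 2, 2.0), (2, 1, -3.0), (1, 3, 1.0)]
assert bellman_ford(4, edges, 0) == [0.0, -1.0, 2.0, 0.0]
```

Each of the n-1 rounds scans all m edges, giving the O(mn) bound that stood as the fastest for real weights until this paper.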

Firanko, Raz 
STOC '24: "An Area Law for the Maximally-Mixed ..."
An Area Law for the Maximally-Mixed Ground State in Arbitrarily Degenerate Systems with Good AGSP
Itai Arad , Raz Firanko , and Rahul Jain (Centre for Quantum Technologies, Singapore; Technion, Israel; National University of Singapore, Singapore) We show an area law in the mutual information for the maximally-mixed state Ω in the ground space of general Hamiltonians, which is independent of the underlying ground space degeneracy. Our result assumes the existence of a ‘good’ approximation to the ground state projector (a good AGSP), a crucial ingredient in former area-law proofs. Such approximations have been explicitly derived for 1D gapped local Hamiltonians and 2D frustration-free locally-gapped local Hamiltonians. As a corollary, we show that in 1D gapped local Hamiltonians, for any є>0 and any bipartition L∪L^{c} of the system, I^{є}_{max}(L:L^{c})_{Ω} ≤ O(log|L| + log(1/є)), where |L| represents the number of sites in L and I^{є}_{max}(L:L^{c})_{Ω} represents the є-smoothed maximum mutual information with respect to the L:L^{c} partition in Ω. From this bound we then conclude I(L:L^{c})_{Ω} ≤ O(log|L|) – an area law for the mutual information in 1D systems with a logarithmic correction. In addition, we show that Ω can be approximated up to an є in trace norm with a state of Schmidt rank at most poly(|L|/є). Similar corollaries are derived for the mutual information of 2D frustration-free and locally-gapped local Hamiltonians. @InProceedings{STOC24p1311, author = {Itai Arad and Raz Firanko and Rahul Jain}, title = {An Area Law for the Maximally-Mixed Ground State in Arbitrarily Degenerate Systems with Good AGSP}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1311--1322}, doi = {10.1145/3618260.3649612}, year = {2024}, } Publisher's Version 

First, Uriya A. 
STOC '24: "Cosystolic Expansion of Sheaves ..."
Cosystolic Expansion of Sheaves on Posets with Applications to Good 2-Query Locally Testable Codes and Lifted Codes
Uriya A. First and Tali Kaufman (University of Haifa, Israel; Bar-Ilan University, Israel) We show that cosystolic expansion of sheaves on posets can be derived from local expansion conditions of the sheaf and the poset. When the poset at hand is a cell complex — typically a high-dimensional expander — a sheaf may be thought of as generalizing coefficient groups used for defining homology and cohomology, by letting the coefficient group vary along the cell complex. Previous works established local criteria for cosystolic expansion only for simplicial complexes and with respect to constant coefficients. Our main technical contribution is providing a criterion that is more general in two ways: it applies to posets and sheaves, respectively. The importance of working with sheaves on posets (rather than constant coefficients and simplicial complexes) stems from applications to locally testable codes (LTCs). It has been observed by Kaufman–Lubotzky that cosystolic expansion is related to property testing in the context of simplicial complexes and constant coefficients, but unfortunately, this special case does not give rise to interesting LTCs. We observe that this relation also exists in the much more general setting of sheaves on posets. As the language of sheaves is more expressive, it allows us to put this relation to use. Specifically, we apply our criterion for cosystolic expansion in two ways. First, we show the existence of good 2-query LTCs. These codes are actually related to the recent good q-query LTCs of Dinur–Evra–Livne–Lubotzky–Mozes and Panteleev–Kalachev, being the formers’ so-called line codes, but we get them from a new, more illuminating perspective. By realizing these codes as cocycle codes of sheaves on posets, we can derive their good properties directly from our criterion for cosystolic expansion. The local expansion conditions that our criterion requires unfold to the conditions on the “small codes” in Dinur et al. and Panteleev–Kalachev, and hence give a conceptual explanation to why conditions such as agreement testability are required. Second, we show that local testability of a lifted code could be derived solely from local conditions, namely from agreement expansion properties of the local “small” codes which define it. In a work of Dikstein–Dinur–Harsha–Ron-Zewi, it was shown that one can obtain local testability of lifted codes from a mixture of local and global conditions, namely, from local testability of the local codes and global agreement expansion of an auxiliary 3-layer system called a multi-layered agreement sampler. Our result achieves the same, but using genuinely local conditions and a simpler 3-layer structure. It is derived neatly from our local criterion for cosystolic expansion, by interpreting the situation in the language of sheaves on posets. @InProceedings{STOC24p1446, author = {Uriya A. First and Tali Kaufman}, title = {Cosystolic Expansion of Sheaves on Posets with Applications to Good 2-Query Locally Testable Codes and Lifted Codes}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1446--1457}, doi = {10.1145/3618260.3649625}, year = {2024}, } Publisher's Version 

Fischer, Nick 
STOC '24: "New Graph Decompositions and ..."
New Graph Decompositions and Combinatorial Boolean Matrix Multiplication Algorithms
Amir Abboud , Nick Fischer , Zander Kelley , Shachar Lovett , and Raghu Meka (Weizmann Institute of Science, Israel; University of Illinois at Urbana-Champaign, USA; University of California at San Diego, USA; University of California at Los Angeles, USA) We revisit the fundamental Boolean Matrix Multiplication (BMM) problem. With the invention of algebraic fast matrix multiplication over 50 years ago, it also became known that BMM can be solved in truly subcubic O(n^{ω}) time, where ω<3; much work has gone into bringing ω closer to 2. Since then, a parallel line of work has sought comparably fast combinatorial algorithms but with limited success. The naïve O(n^{3})-time algorithm was initially improved by a log^{2} n factor [Arlazarov et al.; RAS’70], then by log^{2.25} n [Bansal and Williams; FOCS’09], then by log^{3} n [Chan; SODA’15], and finally by log^{4} n [Yu; ICALP’15]. We design a combinatorial algorithm for BMM running in time n^{3} / 2^{Ω((log n)^{1/7})} – a speedup over cubic time that is stronger than any polylog factor. This comes tantalizingly close to refuting the conjecture from the 90s that truly subcubic combinatorial algorithms for BMM are impossible. This popular conjecture is the basis for dozens of fine-grained hardness results. Our main technical contribution is a new regularity decomposition theorem for Boolean matrices (or equivalently, bipartite graphs) under a notion of regularity that was recently introduced and analyzed analytically in the context of communication complexity [Kelley, Lovett, Meka; STOC’24], and is related to a similar notion from the recent work on 3-term arithmetic progression free sets [Kelley, Meka; FOCS’23]. 
@InProceedings{STOC24p935, author = {Amir Abboud and Nick Fischer and Zander Kelley and Shachar Lovett and Raghu Meka}, title = {New Graph Decompositions and Combinatorial Boolean Matrix Multiplication Algorithms}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {935--943}, doi = {10.1145/3618260.3649696}, year = {2024}, } Publisher's Version 

Fishelson, Maxwell 
STOC '24: "From External to Swap Regret ..."
From External to Swap Regret 2.0: An Efficient Reduction for Large Action Spaces
Yuval Dagan , Constantinos Daskalakis , Maxwell Fishelson , and Noah Golowich (University of California at Berkeley, USA; Massachusetts Institute of Technology, USA) We provide a novel reduction from swap-regret minimization to external-regret minimization, which improves upon the classical reductions of Blum-Mansour and Stoltz-Lugosi in that it does not require finiteness of the space of actions. We show that, whenever there exists a no-external-regret algorithm for some hypothesis class, there must also exist a no-swap-regret algorithm for that same class. For the problem of learning with expert advice, our result implies that it is possible to guarantee that the swap regret is bounded by є after (log N)^{Õ(1/є)} rounds and with O(N) per-iteration complexity, where N is the number of experts, while the classical reductions of Blum-Mansour and Stoltz-Lugosi require at least Ω(N/є^{2}) rounds and at least Ω(N^{3}) total computational cost. Our result comes with an associated lower bound, which—in contrast to that of Blum-Mansour—holds for oblivious and ℓ_{1}-constrained adversaries and learners that can employ distributions over experts, showing that the number of rounds must be Ω(N/є^{2}) or exponential in 1/є. Our reduction implies that, if no-regret learning is possible in some game, then this game must have approximate correlated equilibria, of arbitrarily good approximation. This strengthens the folklore implication of no-regret learning that approximate coarse correlated equilibria exist. 
Importantly, it provides a sufficient condition for the existence of approximate correlated equilibrium which vastly extends the requirement that the action set is finite or the requirement that the action set is compact and the utility functions are continuous, allowing for games with finite Littlestone or finite sequential fat-shattering dimension, thus answering a question left open in “Fast rates for nonparametric online learning: from realizability to learning in games” and “Online learning and solving infinite games with an ERM oracle”. Moreover, it answers several outstanding questions about equilibrium computation and/or learning in games. In particular, for constant values of є: (a) we show that є-approximate correlated equilibria in extensive-form games can be computed efficiently, advancing a long-standing open problem for extensive-form games; see e.g. “Extensive-form correlated equilibrium: Definition and computational complexity” and “Polynomial-Time Linear-Swap Regret Minimization in Imperfect-Information Sequential Games”; (b) we show that the query and communication complexities of computing є-approximate correlated equilibria in N-action normal-form games are N · poly log(N) and poly log N respectively, advancing an open problem of “Informational Bounds on Equilibria”; (c) we show that є-approximate correlated equilibria of sparsity poly log N can be computed efficiently, advancing an open problem of “Simple Approximate Equilibria in Large Games”; (d) finally, we show that in the adversarial bandit setting, sublinear swap regret can be achieved in only Õ(N) rounds, advancing an open problem of “From External to Internal Regret” and “Tight Lower Bound and Efficient Reduction for Swap Regret”. 
@InProceedings{STOC24p1216, author = {Yuval Dagan and Constantinos Daskalakis and Maxwell Fishelson and Noah Golowich}, title = {From External to Swap Regret 2.0: An Efficient Reduction for Large Action Spaces}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1216--1222}, doi = {10.1145/3618260.3649681}, year = {2024}, } Publisher's Version 

Fleming, Noah 
STOC '24: "Black-Box PPP Is Not Turing-Closed ..."
Black-Box PPP Is Not Turing-Closed
Noah Fleming , Stefan Grosser , Toniann Pitassi , and Robert Robere (Memorial University of Newfoundland, Canada; McGill University, Canada; Columbia University, USA) The complexity class PPP contains all total search problems many-one reducible to the Pigeon problem, where we are given a succinct encoding of a function mapping n+1 pigeons to n holes, and must output two pigeons that collide in a hole. PPP is one of the “original five” syntactically-defined subclasses of TFNP, and has been extensively studied due to the strong connections between its defining problem — the pigeonhole principle — and problems in cryptography, extremal combinatorics, proof complexity, and other fields. However, despite its importance, PPP appears to be less robust than the other important TFNP subclasses. In particular, unlike all other major TFNP subclasses, it was conjectured by Buss and Johnson that PPP is not closed under Turing reductions, and they called for a black-box separation in order to provide evidence for this conjecture. The question of whether PPP contains its Turing closure was further highlighted by Daskalakis in his recent IMU Abacus Medal Lecture. In this work we prove that PPP is indeed not Turing-closed in the black-box setting, affirmatively resolving the above conjecture and providing strong evidence that PPP is not Turing-closed. In fact, we are able to separate PPP from its non-adaptive Turing closure, in which all calls to the Pigeon oracle must be made in parallel. This differentiates PPP from all other important TFNP subclasses, and especially from its closely-related subclass PWPP — defined by reducibility to the weak pigeonhole principle — which is known to be non-adaptively Turing-closed. Our proof requires developing new tools for PPP lower bounds, and creates new connections between PPP and the theory of pseudo-expectation operators used for Sherali-Adams and Sum-of-Squares lower bounds. 
In particular, we introduce a new type of pseudo-expectation operator that is precisely tailored for lower bounds against black-box PPP, which may be of independent interest. @InProceedings{STOC24p1405, author = {Noah Fleming and Stefan Grosser and Toniann Pitassi and Robert Robere}, title = {Black-Box PPP Is Not Turing-Closed}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1405--1414}, doi = {10.1145/3618260.3649769}, year = {2024}, } Publisher's Version 

Fournier, Hervé 
STOC '24: "On the Power of Homogeneous ..."
On the Power of Homogeneous Algebraic Formulas
Hervé Fournier , Nutan Limaye , Srikanth Srinivasan , and Sébastien Tavenas (Université Paris Cité - IMJ-PRG, France; IT University of Copenhagen, Copenhagen, Denmark; University of Copenhagen, Copenhagen, Denmark; Université Savoie Mont Blanc - CNRS - LAMA, France) Proving explicit lower bounds on the size of algebraic formulas is a long-standing open problem in the area of algebraic complexity theory. Recent results in the area (e.g. a lower bound against constant-depth algebraic formulas due to Limaye, Srinivasan, and Tavenas (FOCS 2021)) have indicated a way forward for attacking this question: show that we can convert a general algebraic formula to a homogeneous algebraic formula with moderate blowup in size, and prove strong lower bounds against the latter model. Here, a homogeneous algebraic formula F for a polynomial P is a formula in which all subformulas compute homogeneous polynomials. In particular, if P is homogeneous of degree d, F does not contain subformulas that compute polynomials of degree greater than d. We investigate the feasibility of the above strategy and prove a number of positive and negative results in this direction. Lower bounds against weighted homogeneous formulas: We show the first lower bounds against homogeneous formulas of any depth in the weighted setting. Here, each variable has a given weight and the weight of a monomial is the sum of weights of the variables in it. This result builds on a lower bound of Hrubeš and Yehudayoff (Computational Complexity 2011) against homogeneous multilinear formulas. This result is strong indication that lower bounds against homogeneous formulas are within reach. Improved (quasi-)homogenization for formulas: A simple folklore argument shows that any formula F of size s for a homogeneous polynomial of degree d can be homogenized with a size blowup of d^{O(log s)}. We show that this can be improved super-polynomially over fields of characteristic 0 as long as d = s^{o(1)}. 
Such a result was previously only known when d = (log s)^{1+o(1)} (Raz (J. ACM 2013)). Further, we show how to get rid of the condition on d at the expense of getting a quasi-homogenization result: this means that subformulas can compute polynomials of degree up to poly(d). Lower bounds for non-commutative homogenization: A recent result of Dutta, Gesmundo, Ikenmeyer, Jindal and Lysikov (2022) implies that to homogenize algebraic formulas of any depth, it suffices to homogenize non-commutative algebraic formulas of depth just 3. We are able to show strong lower bounds for such homogenization, suggesting barriers for this approach. No Girard-Newton identities for positive characteristic: In characteristic 0, it is known how to homogenize constant-depth algebraic formulas with a size blowup of exp(O(√d)) using the Girard-Newton identities. Finding analogues of these identities in positive characteristic would allow us, paradoxically, to show lower bounds for constant-depth formulas over such fields. We rule out a strong generalization of Girard-Newton identities in the setting of positive characteristic, suggesting that a different approach is required. @InProceedings{STOC24p141, author = {Hervé Fournier and Nutan Limaye and Srikanth Srinivasan and Sébastien Tavenas}, title = {On the Power of Homogeneous Algebraic Formulas}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {141--151}, doi = {10.1145/3618260.3649760}, year = {2024}, } Publisher's Version 

Fusco, Federico 
STOC '24: "No-Regret Learning in Bilateral ..."
No-Regret Learning in Bilateral Trade via Global Budget Balance
Martino Bernasconi , Matteo Castiglioni , Andrea Celli , and Federico Fusco (Bocconi University, Italy; Politecnico di Milano, Italy; Sapienza University of Rome, Italy) Bilateral trade models the problem of intermediating between two rational agents — a seller and a buyer — both characterized by a private valuation for an item they want to trade. We study the online learning version of the problem, in which at each time step a new seller and buyer arrive and the learner has to set prices for them without any knowledge about their (adversarially generated) valuations. In this setting, known impossibility results rule out the existence of no-regret algorithms when budget balance has to be enforced at each time step. In this paper, we introduce the notion of global budget balance, which only requires the learner to fulfill budget balance over the entire time horizon. Under this natural relaxation, we provide the first no-regret algorithms for adversarial bilateral trade under various feedback models. First, we show that in the full-feedback model, the learner can guarantee Õ(√T) regret against the best fixed prices in hindsight, and that this bound is optimal up to polylogarithmic terms. Second, we provide a learning algorithm guaranteeing a Õ(T^{3/4}) regret upper bound with one-bit feedback, which we complement with a Ω(T^{5/7}) lower bound that holds even in the two-bit feedback model. Finally, we introduce and analyze an alternative benchmark that is provably stronger than the best fixed prices in hindsight and is inspired by the literature on bandits with knapsacks. @InProceedings{STOC24p247, author = {Martino Bernasconi and Matteo Castiglioni and Andrea Celli and Federico Fusco}, title = {No-Regret Learning in Bilateral Trade via Global Budget Balance}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {247--258}, doi = {10.1145/3618260.3649653}, year = {2024}, } Publisher's Version STOC '24: "The Role of Transparency in ..." 
The Role of Transparency in Repeated First-Price Auctions with Unknown Valuations Nicolò Cesa-Bianchi, Tommaso Cesari, Roberto Colomboni, Federico Fusco, and Stefano Leonardi (University of Milan, Italy; Politecnico di Milano, Italy; University of Ottawa, Canada; Italian Institute of Technology, Italy; Sapienza University of Rome, Italy) We study the problem of regret minimization for a single bidder in a sequence of first-price auctions where the bidder discovers the item’s value only if the auction is won. Our main contribution is a complete characterization, up to logarithmic factors, of the minimax regret in terms of the auction’s transparency, which controls the amount of information on competing bids disclosed by the auctioneer at the end of each auction. Our results hold under different assumptions (stochastic, adversarial, and their smoothed variants) on the environment generating the bidder’s valuations and competing bids. These minimax rates reveal how the interplay between transparency and the nature of the environment affects how fast one can learn to bid optimally in first-price auctions. @InProceedings{STOC24p225, author = {Nicolò Cesa-Bianchi and Tommaso Cesari and Roberto Colomboni and Federico Fusco and Stefano Leonardi}, title = {The Role of Transparency in Repeated First-Price Auctions with Unknown Valuations}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {225--236}, doi = {10.1145/3618260.3649658}, year = {2024}, } Publisher's Version 

Gaitonde, Jason 
STOC '24: "A Unified Approach to Learning ..."
A Unified Approach to Learning Ising Models: Beyond Independence and Bounded Width
Jason Gaitonde and Elchanan Mossel (Massachusetts Institute of Technology, USA) We revisit the well-studied problem of efficiently learning the underlying structure and parameters of an Ising model from data. Current algorithmic approaches achieve essentially optimal sample complexity when samples are generated i.i.d. from the stationary measure and the underlying model satisfies “width” constraints that bound the total ℓ_{1} interaction involving each node. However, these assumptions are not satisfied in some important settings of interest, like temporally correlated data or more complicated models (like spin glasses) that do not satisfy width bounds. We analyze a simple existing approach based on node-wise logistic regression, and show it provably succeeds at efficiently recovering the underlying Ising model in several new settings: Given dynamically generated data from a wide variety of Markov chains, including Glauber, block, and round-robin dynamics, logistic regression recovers the parameters with sample complexity that is optimal up to log log n factors. This generalizes the specialized algorithm of Bresler, Gamarnik, and Shah (IEEE Trans. Inf. Theory ’18) for structure recovery in bounded-degree graphs from Glauber dynamics. For the Sherrington-Kirkpatrick model of spin glasses, given poly(n) independent samples, logistic regression recovers the parameters in most of the proven high-temperature regime via a simple reduction to weaker structural properties of the measure. This improves on recent work of Anari, Jain, Koehler, Pham, and Vuong (SODA ’24) which gives distribution learning at higher temperature. As a simple byproduct of our techniques, logistic regression achieves an exponential improvement in learning from samples in the M-regime of data considered by Dutt, Lokhov, Vuffray, and Misra (ICML ’21) as well as novel guarantees for learning from the adversarial Glauber dynamics of Chin, Moitra, Mossel, and Sandon. 
Our approach thus provides a significant generalization of the elegant analysis of logistic regression by Wu, Sanghavi, and Dimakis (NeurIPS ’19) without any algorithmic modification in each setting. @InProceedings{STOC24p503, author = {Jason Gaitonde and Elchanan Mossel}, title = {A Unified Approach to Learning Ising Models: Beyond Independence and Bounded Width}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {503--514}, doi = {10.1145/3618260.3649674}, year = {2024}, } Publisher's Version 
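The node-wise logistic regression at the center of the abstract above can be sketched concretely. The toy demo below is not the paper's construction: the 4-node chain, the coupling value 0.8, and the gradient-descent hyperparameters are all illustrative assumptions. It fits each node's conditional law against the exact Gibbs distribution of a small Ising chain (so there is no sampling noise) and recovers the couplings:

```python
import math
from itertools import product

def ising_chain_distribution(n=4, beta=0.8):
    """Exact Gibbs distribution of an Ising chain with coupling beta (toy instance)."""
    edges = [(i, i + 1) for i in range(n - 1)]
    states = list(product([-1, 1], repeat=n))
    weights = [math.exp(beta * sum(x[u] * x[v] for u, v in edges)) for x in states]
    Z = sum(weights)
    return states, [w / Z for w in weights], edges

def nodewise_logistic_regression(states, probs, n, lr=0.2, iters=3000):
    """Fit P(x_i = 1 | x_rest) = sigmoid(2 * sum_j W[i][j] * x_j) for each node i
    by gradient descent on the population logistic loss (illustrative hyperparameters)."""
    sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))
    W = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for _ in range(iters):
            grad = [0.0] * n
            for x, p in zip(states, probs):
                h = sum(W[i][j] * x[j] for j in range(n) if j != i)
                err = sigmoid(2 * h) - (x[i] + 1) / 2  # model prob minus 0/1 label
                for j in range(n):
                    if j != i:
                        grad[j] += p * err * 2 * x[j]
            for j in range(n):
                W[i][j] -= lr * grad[j]
    return W

states, probs, edges = ising_chain_distribution()
W = nodewise_logistic_regression(states, probs, n=4)
# Chain-edge weights should approach 0.8; non-edge weights should approach 0.
```

Since the conditional law of each spin given the rest is exactly logistic with weights 2J, the population optimum of each regression is the true coupling row, which is what the demo recovers.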

Gajjala, Rishikesh 
STOC '24: "No Distributed Quantum Advantage ..."
No Distributed Quantum Advantage for Approximate Graph Coloring
Xavier Coiteux-Roy, Francesco d'Amore, Rishikesh Gajjala, Fabian Kuhn, François Le Gall, Henrik Lievonen, Augusto Modanese, Marc-Olivier Renou, Gustav Schmid, and Jukka Suomela (TU Munich, Germany; Munich Center for Quantum Science and Technology, Germany; Aalto University, Finland; Bocconi University, Italy; Indian Institute of Science, India; University of Freiburg, Germany; Nagoya University, Japan; Inria, France; Université Paris-Saclay, France; Institut Polytechnique de Paris, France) We give an almost complete characterization of the hardness of c-coloring χ-chromatic graphs with distributed algorithms, for a wide range of models of distributed computing. In particular, we show that these problems do not admit any distributed quantum advantage. To do that: We give a new distributed algorithm that finds a c-coloring in χ-chromatic graphs in Õ(n^{1/α}) rounds, with α = ⌊(c−1)/(χ−1)⌋. We prove that any distributed algorithm for this problem requires Ω(n^{1/α}) rounds. Our upper bound holds in the classical, deterministic LOCAL model, while the near-matching lower bound holds in the non-signaling model. This model, introduced by Arfaoui and Fraigniaud in 2014, captures all models of distributed graph algorithms that obey physical causality; this includes not only classical deterministic LOCAL and randomized LOCAL but also quantum-LOCAL, even with a pre-shared quantum state. We also show that similar arguments can be used to prove that, e.g., 3-coloring 2-dimensional grids or c-coloring trees remain hard problems even for the non-signaling model, and in particular do not admit any quantum advantage. Our lower-bound arguments are purely graph-theoretic at heart; no background on quantum information theory is needed to establish the proofs. 
@InProceedings{STOC24p1901, author = {Xavier Coiteux-Roy and Francesco d'Amore and Rishikesh Gajjala and Fabian Kuhn and François Le Gall and Henrik Lievonen and Augusto Modanese and Marc-Olivier Renou and Gustav Schmid and Jukka Suomela}, title = {No Distributed Quantum Advantage for Approximate Graph Coloring}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1901--1910}, doi = {10.1145/3618260.3649679}, year = {2024}, } Publisher's Version 

Gao, Ruiquan 
STOC '24: "Parallel Sampling via Counting ..."
Parallel Sampling via Counting
Nima Anari, Ruiquan Gao, and Aviad Rubinstein (Stanford University, USA) We show how to use parallelization to speed up sampling from an arbitrary distribution µ on a product space [q]^{n}, given oracle access to counting queries: ℙ_{X∼µ}[X_{S}=σ_{S}] for any S⊆[n] and σ_{S} ∈ [q]^{S}. Our algorithm takes O(n^{2/3} · polylog(n,q)) parallel time, which is, to the best of our knowledge, the first runtime sublinear in n for arbitrary distributions. Our results have implications for sampling in autoregressive models. Our algorithm directly works with an equivalent oracle that answers conditional marginal queries ℙ_{X∼µ}[X_{i}=σ_{i} | X_{S}=σ_{S}], whose role is played by a trained neural network in autoregressive models. This suggests a roughly n^{1/3}-factor speedup is possible for sampling in any-order autoregressive models. We complement our positive result by showing a lower bound of Ω(n^{1/3}) for the runtime of any parallel sampling algorithm making at most poly(n) queries to the counting oracle, even for q=2. @InProceedings{STOC24p537, author = {Nima Anari and Ruiquan Gao and Aviad Rubinstein}, title = {Parallel Sampling via Counting}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {537--548}, doi = {10.1145/3618260.3649744}, year = {2024}, } Publisher's Version 
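The conditional-marginal oracle described above has a simple sequential baseline via the chain rule: fix the coordinates one at a time, using one adaptive oracle round per coordinate; the paper's contribution is parallelizing this n-round baseline to sublinear depth. A minimal sketch of the sequential baseline, with the oracle implemented by brute-force enumeration over a small hand-made distribution (the toy distribution and all function names are illustrative assumptions):

```python
import random
from itertools import product

q, n = 2, 3
# A hypothetical unnormalized distribution mu on [q]^n (here: favor agreeing coordinates).
weight = {x: 1 + 2 * (x[0] == x[1]) + 2 * (x[1] == x[2])
          for x in product(range(q), repeat=n)}
Z = sum(weight.values())

def conditional_marginal(i, sigma_i, fixed):
    """Oracle: P[X_i = sigma_i | X_j = fixed[j] for all j in fixed], by enumeration."""
    num = den = 0.0
    for x in product(range(q), repeat=n):
        if all(x[j] == v for j, v in fixed.items()):
            den += weight[x]
            if x[i] == sigma_i:
                num += weight[x]
    return num / den

def sequential_sample(rng):
    """Chain-rule sampling: n adaptive rounds, one oracle-driven coordinate per round."""
    fixed = {}
    for i in range(n):
        r, acc = rng.random(), 0.0
        for v in range(q):
            acc += conditional_marginal(i, v, fixed)
            if r < acc:
                fixed[i] = v
                break
        else:
            fixed[i] = q - 1  # guard against floating-point rounding
    return tuple(fixed[i] for i in range(n))

sample = sequential_sample(random.Random(0))
```

By the chain rule, the product of the accepted conditionals equals µ(x) exactly, so this sampler is exact; it just spends n adaptive rounds where the paper's parallel algorithm spends Õ(n^{2/3}).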

Garg, Sumegha 
STOC '24: "A New Information Complexity ..."
A New Information Complexity Measure for Multi-pass Streaming with Applications
Mark Braverman, Sumegha Garg, Qian Li, Shuo Wang, David P. Woodruff, and Jiapeng Zhang (Princeton University, USA; Rutgers University, USA; Shenzhen Research Institute of Big Data, China; Shanghai Jiao Tong University, China; Carnegie Mellon University, USA; University of Southern California, USA) We introduce a new notion of information complexity for multi-pass streaming problems and use it to resolve several important questions in data streams. In the coin problem, one sees a stream of n i.i.d. uniformly random bits and one would like to compute the majority with constant advantage. We show that any constant-pass algorithm must use Ω(log n) bits of memory, significantly extending an earlier Ω(log n) bit lower bound for single-pass algorithms of Braverman, Garg, and Woodruff (FOCS, 2020). This also gives the first Ω(log n) bit lower bound for the problem of approximating a counter up to a constant factor in worst-case turnstile streams for more than one pass. In the needle problem, one either sees a stream of n i.i.d. uniform samples from a domain [t], or there is a randomly chosen needle α ∈ [t] for which each item independently is chosen to equal α with probability p, and is otherwise uniformly random in [t]. The problem of distinguishing these two cases is central to understanding the space complexity of the frequency moment estimation problem in random-order streams. We show tight multi-pass space bounds for this problem for every p < 1/√n log^{3} n, resolving an open question of Lovett and Zhang (FOCS, 2023); even for one pass our bounds are new. To show optimality, we improve both lower and upper bounds from existing results. Our information complexity framework significantly extends the toolkit for proving multi-pass streaming lower bounds, and we give a wide range of additional streaming applications of our lower bound techniques, including multi-pass lower bounds for ℓ_{p}-norm estimation, ℓ_{p}-point query and heavy hitters, and compressed sensing problems. 
@InProceedings{STOC24p1781, author = {Mark Braverman and Sumegha Garg and Qian Li and Shuo Wang and David P. Woodruff and Jiapeng Zhang}, title = {A New Information Complexity Measure for Multi-pass Streaming with Applications}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1781--1792}, doi = {10.1145/3618260.3649672}, year = {2024}, } Publisher's Version 
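The Ω(log n) memory lower bound for the coin problem above is matched by the obvious algorithm: keep an exact counter of the bits seen, which takes only ⌈log₂(n+1)⌉ bits of state, and output the majority at the end. A minimal sketch of that matching upper bound (the streaming interface here is an assumption for illustration, not from the paper):

```python
import math

def majority_of_stream(bits):
    """One pass over a bit stream with an exact counter: O(log n) bits of state."""
    count = n = 0
    for b in bits:
        count += b  # the only state kept is (count, n), both O(log n)-bit integers
        n += 1
    return 1 if 2 * count > n else 0

def memory_bits(n):
    """Bits of state needed for an exact counter over a length-n bit stream."""
    return math.ceil(math.log2(n + 1))

# A length-7 stream with four 1s has majority 1, using only 3 bits of counter state.
```

The paper shows this is optimal not just for one pass but for any constant number of passes.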

Garlík, Michal 
STOC '24: "Lower Bounds for Regular Resolution ..."
Lower Bounds for Regular Resolution over Parities
Klim Efremenko, Michal Garlík, and Dmitry Itsykson (Ben-Gurion University of the Negev, Israel; Imperial College London, United Kingdom) The proof system resolution over parities (Res(⊕)) operates with disjunctions of linear equations (linear clauses) over GF(2); it extends the resolution proof system by incorporating linear algebra over GF(2). Over the years, several exponential lower bounds on the size of tree-like refutations have been established. However, proving a superpolynomial lower bound on the size of dag-like Res(⊕) refutations remains a highly challenging open question. We prove an exponential lower bound for regular Res(⊕). Regular Res(⊕) is a subsystem of dag-like Res(⊕) that naturally extends regular resolution. This is the first known superpolynomial lower bound for a fragment of dag-like Res(⊕) which is exponentially stronger than tree-like Res(⊕). In the regular regime, resolving linear clauses C_{1} and C_{2} on a linear form f is permitted only if, for both i ∈ {1,2}, the linear form f does not lie within the linear span of all linear forms that were used in resolution rules during the derivation of C_{i}. Namely, we show that the size of any regular Res(⊕) refutation of the binary pigeonhole principle BPHP_{n}^{n+1} is at least 2^{Ω(∛n/log n)}. A corollary of our result is an exponential lower bound on the size of a strongly read-once linear branching program solving a search problem. This resolves an open question raised by Gryaznov, Pudlák, and Talebanfard (CCC 2022). As a byproduct of our technique, we prove that the size of any tree-like Res(⊕) refutation of the weak binary pigeonhole principle BPHP_{n}^{m} is at least 2^{Ω(n)} using Prover-Delayer games. We also give a direct proof of a width lower bound: we show that any dag-like Res(⊕) refutation of BPHP_{n}^{m} contains a linear clause C with Ω(n) linearly independent equations. 
@InProceedings{STOC24p640, author = {Klim Efremenko and Michal Garlík and Dmitry Itsykson}, title = {Lower Bounds for Regular Resolution over Parities}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {640--651}, doi = {10.1145/3618260.3649652}, year = {2024}, } Publisher's Version 

Gartland, Peter 
STOC '24: "Maximum Weight Independent ..."
Maximum Weight Independent Set in Graphs with no Long Claws in Quasi-Polynomial Time
Peter Gartland, Daniel Lokshtanov, Tomáš Masařík, Marcin Pilipczuk, Michał Pilipczuk, and Paweł Rzążewski (University of California at Santa Barbara, USA; University of Warsaw, Poland; IT University of Copenhagen, Denmark; Warsaw University of Technology, Poland) We show that the Maximum Weight Independent Set problem (MWIS) can be solved in quasi-polynomial time on H-free graphs (graphs excluding a fixed graph H as an induced subgraph) for every H whose every connected component is a path or a subdivided claw (i.e., a tree with at most three leaves). This completes the dichotomy of the complexity of MWIS in F-free graphs for any finite set F of graphs into NP-hard cases and cases solvable in quasi-polynomial time, and corroborates the conjecture that the cases not known to be NP-hard are actually polynomial-time solvable. The key graph-theoretic ingredient in our result is as follows. Fix an integer t ≥ 1. Let S_{t,t,t} be the graph created from three paths on t edges by identifying one endpoint of each path into a single vertex. We show that, given a graph G, one can in polynomial time find either an induced S_{t,t,t} in G, or a balanced separator consisting of O(log |V(G)|) vertex neighborhoods in G, or an extended strip decomposition of G (a decomposition almost as useful for recursion for MWIS as a partition into connected components) with each particle of weight multiplicatively smaller than the weight of G. This is a strengthening of a result of Majewski, Masařík, Novotná, Okrasa, Pilipczuk, Rzążewski, and Sokołowski [Transactions on Computation Theory 2024] which provided such an extended strip decomposition only after the deletion of O(log |V(G)|) vertex neighborhoods. To reach the final result, we employ an involved branching strategy that relies on the structural lemma presented above. 
@InProceedings{STOC24p683, author = {Peter Gartland and Daniel Lokshtanov and Tomáš Masařík and Marcin Pilipczuk and Michał Pilipczuk and Paweł Rzążewski}, title = {Maximum Weight Independent Set in Graphs with no Long Claws in Quasi-Polynomial Time}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {683--691}, doi = {10.1145/3618260.3649791}, year = {2024}, } Publisher's Version 

Gelles, Yuval 
STOC '24: "Optimal LoadBalanced Scalable ..."
Optimal Load-Balanced Scalable Distributed Agreement
Yuval Gelles and Ilan Komargodski (Hebrew University of Jerusalem, Israel; NTT Research, USA) We consider the fundamental problem of designing classical consensus-related distributed abstractions for large-scale networks, where the number of parties can be huge. Specifically, we consider tasks such as Byzantine Agreement, Broadcast, and Committee Election, and our goal is to design scalable protocols in the sense that each honest party processes and sends a number of bits which is sublinear in n, the total number of parties. In this work, we construct the first such scalable protocols for all of the above tasks. In our protocols, each party processes and sends Õ(√n) bits throughout Õ(1) rounds of communication, and correctness is guaranteed for at most a 1/3−ε fraction of static Byzantine corruptions for every constant ε>0 (in the full information model). All previous protocols for the considered agreement tasks were non-scalable, either because the communication complexity was linear or because the computational complexity was super-polynomial. We complement our result with a matching lower bound showing that any Byzantine Agreement protocol must have Ω(√n) complexity in our model. Previously, the state of the art was the well-known Ω(∛n) lower bound of Holtby, Kapron, and King (Distributed Computing, 2008). @InProceedings{STOC24p411, author = {Yuval Gelles and Ilan Komargodski}, title = {Optimal Load-Balanced Scalable Distributed Agreement}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {411--422}, doi = {10.1145/3618260.3649736}, year = {2024}, } Publisher's Version 

Ghadiri, Mehrdad 
STOC '24: "Improving the Bit Complexity ..."
Improving the Bit Complexity of Communication for Distributed Convex Optimization
Mehrdad Ghadiri, Yin Tat Lee, Swati Padmanabhan, William Swartworth, David P. Woodruff, and Guanghao Ye (Massachusetts Institute of Technology, USA; University of Washington, USA; Microsoft Research, USA; Carnegie Mellon University, USA) We consider the communication complexity of some fundamental convex optimization problems in the point-to-point (coordinator) and blackboard communication models. We strengthen known bounds for approximately solving linear regression, p-norm regression (for 1≤ p≤ 2), linear programming, minimizing the sum of finitely many convex non-smooth functions with varying supports, and low-rank approximation; for a number of these fundamental problems our bounds are nearly optimal, as proven by our lower bounds. Among our techniques, we use the notion of block leverage scores, which have been relatively unexplored in this context, as well as dropping all but the “middle” bits in Richardson-style algorithms. We also introduce a new communication problem for accurately approximating inner products and establish a lower bound using the spherical Radon transform. Our lower bound can be used to show the first separation of linear programming and linear systems in the distributed model when the number of constraints is polynomial, addressing an open question in prior work. @InProceedings{STOC24p1130, author = {Mehrdad Ghadiri and Yin Tat Lee and Swati Padmanabhan and William Swartworth and David P. Woodruff and Guanghao Ye}, title = {Improving the Bit Complexity of Communication for Distributed Convex Optimization}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1130--1140}, doi = {10.1145/3618260.3649787}, year = {2024}, } Publisher's Version 

Ghaffari, Mohsen 
STOC '24: "Lenzen’s Distributed Routing ..."
Lenzen’s Distributed Routing Generalized: A Full Characterization of Constant-Time Routability
Mohsen Ghaffari and Brandon Wang (Massachusetts Institute of Technology, USA) A celebrated and widely used result of Lenzen and Wattenhofer [STOC’11, PODC’13] shows a constant-round (deterministic) distributed routing algorithm for the complete-graph network: if each node is the source or destination of at most Θ(n) packets, there is a constant-round deterministic distributed algorithm that routes all packets to their destinations in a constant number of rounds, on the complete-graph network. We study generalizations of this result to arbitrary network graphs and show a necessary and sufficient condition for the network so that it can route any such demand in constant rounds distributedly. One can easily see that just for the existence of a constant-round routing for all such demands, it is necessary that any cut’s size, when normalized by the number of possible edges in that cut, should be lower bounded by a positive constant. That is, for any partition of nodes with exactly k ∈ [1, n/2] nodes on one side, the cut should have at least Θ(kn) edges. We call this a graph with a positive minimum normalized cut, or a positive graph for short. We show that this necessary condition is also sufficient. In particular, by tightening the Leighton-Rao multicommodity max-flow min-cut theorem for positive graphs, we show the existence of a constant-round routing in positive graphs (assuming the network graph is known globally). Then, as the main technical contribution of this paper, we also show that there is a (deterministic) distributed algorithm that computes such a constant-round routing in constant rounds in these graphs. This result allows us to vastly relax the conditions of the well-studied congested clique model of distributed computing: Any distributed algorithm for the congested clique model can be run in any positive graph network, without any asymptotic slowdown. 
Our results are in fact more general and they give a distributed routing bound for any network, as a function of its minimum normalized cut size (and without assuming it is a constant), within a polynomial of the relevant lower bound. @InProceedings{STOC24p1877, author = {Mohsen Ghaffari and Brandon Wang}, title = {Lenzen’s Distributed Routing Generalized: A Full Characterization of Constant-Time Routability}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1877--1888}, doi = {10.1145/3618260.3649627}, year = {2024}, } Publisher's Version STOC '24: "Work-Efficient Parallel Derandomization ..." Work-Efficient Parallel Derandomization II: Optimal Concentrations via Bootstrapping Mohsen Ghaffari and Christoph Grunau (Massachusetts Institute of Technology, USA; ETH Zurich, Switzerland) In this paper, we present an efficient parallel derandomization method for randomized algorithms that rely on concentrations such as the Chernoff bound. This settles a classic problem in parallel derandomization, which dates back to the 1980s. Concretely, consider the set balancing problem where m sets of size at most s are given in a ground set of size n, and we should partition the ground set into two parts such that each set is split evenly up to a small additive (discrepancy) bound. A random partition achieves a discrepancy of O(√(s log m)) in each set, by the Chernoff bound. We give a deterministic parallel algorithm that matches this bound, using near-linear work Õ(m+n+∑_{i=1}^{m} |S_{i}|) and polylogarithmic depth poly(log(mn)). The previous results were weaker in discrepancy and/or work bounds: Motwani, Naor, and Naor [FOCS’89] and Berger and Rompel [FOCS’89] achieve discrepancy s^{ε} · O(√(s log m)) with work Õ(m+n+∑_{i=1}^{m} |S_{i}|) · m^{Θ(1/ε)} and polylogarithmic depth; the discrepancy was optimized to O(√(s log m)) in later work, e.g. by Harris [Algorithmica’19], but the work bound remained prohibitively high at Õ(m^{4}n^{3}). 
Notice that these would require a large polynomial number of processors to even match the near-linear runtime of the sequential algorithm. Ghaffari, Grunau, and Rozhoň [FOCS’23] achieve discrepancy s/poly(log(nm)) + O(√(s log m)) with near-linear work and polylogarithmic depth. Notice that this discrepancy is nearly quadratically larger than the desired bound and barely sublinear with respect to the trivial bound of s. Our method is different from prior work. It can be viewed as a novel bootstrapping mechanism that uses crude partitioning algorithms as a subroutine and sharpens their discrepancy to the optimal bound. In particular, we solve the problem recursively, by using the crude partition in each iteration to split the variables into many smaller parts, and then we find a constraint for the variables in each part such that we reduce the overall number of variables in the problem. The scheme relies crucially on an interesting application of the multiplicative weights update method to control the variance losses in each iteration. Our result applies to the much more general lattice approximation problem, thus providing an efficient parallel derandomization of the randomized rounding scheme for linear programs. @InProceedings{STOC24p1889, author = {Mohsen Ghaffari and Christoph Grunau}, title = {Work-Efficient Parallel Derandomization II: Optimal Concentrations via Bootstrapping}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1889--1900}, doi = {10.1145/3618260.3649668}, year = {2024}, } Publisher's Version STOC '24: "Dynamic O(Arboricity) Coloring ..." Dynamic O(Arboricity) Coloring in Polylogarithmic Worst-Case Time Mohsen Ghaffari and Christoph Grunau (Massachusetts Institute of Technology, USA; ETH Zurich, Switzerland) A recent work by Christiansen, Nowicki, and Rotenberg [STOC’23] provides dynamic algorithms for coloring sparse graphs, concretely as a function of the graph’s arboricity α. 
They give two randomized algorithms: O(α log α)-implicit coloring in poly(log n) worst-case update and query times, and O(min{α log α, α log log log n})-implicit coloring in poly(log n) amortized update and query times (against an oblivious adversary). We improve these results in terms of the number of colors and the time guarantee: First, we present an extremely simple algorithm that computes an O(α)-implicit coloring with poly(log n) amortized update and query times. Second, and as the main technical contribution of our work, we show that the time complexity guarantee can be strengthened from amortized to worst-case. That is, we give a dynamic algorithm for implicit O(α)-coloring with poly(log n) worst-case update and query times (against an oblivious adversary). @InProceedings{STOC24p1184, author = {Mohsen Ghaffari and Christoph Grunau}, title = {Dynamic O(Arboricity) Coloring in Polylogarithmic Worst-Case Time}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1184--1191}, doi = {10.1145/3618260.3649782}, year = {2024}, } Publisher's Version 
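The set-balancing abstract above (Work-Efficient Parallel Derandomization II) starts from the fact that a uniformly random partition achieves discrepancy O(√(s log m)) in every set, by the Chernoff bound; that randomized baseline is easy to check empirically. A minimal sketch (the instance sizes, seed, and failure probability are arbitrary choices for this demo; the paper's contribution is matching this bound deterministically):

```python
import math
import random

rng = random.Random(42)
n, m, s = 2000, 500, 200  # ground set size, number of sets, set size (demo values)
DELTA = 1e-4              # allowed failure probability for the union bound

# m random sets of size s, and a uniformly random +/-1 partition of the ground set.
sets = [rng.sample(range(n), s) for _ in range(m)]
signs = [rng.choice([-1, 1]) for _ in range(n)]

# Discrepancy of a set = |imbalance of the partition restricted to that set|.
disc = max(abs(sum(signs[v] for v in S)) for S in sets)

# Chernoff + union bound: with probability >= 1 - DELTA, every set has
# discrepancy at most sqrt(2 * s * ln(2m / DELTA)) -- i.e., O(sqrt(s log m)).
chernoff_bound = math.sqrt(2 * s * math.log(2 * m / DELTA))
```

Note the bound is far below the trivial discrepancy s, which is what makes the random partition the benchmark that the deterministic parallel algorithm has to match.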

Gheorghiu, Alexandru 
STOC '24: "Nonlocality under Computational ..."
Nonlocality under Computational Assumptions
Grzegorz Gluch, Khashayar Barooti, Alexandru Gheorghiu, and Marc-Olivier Renou (EPFL, Lausanne, Switzerland; Aztec Labs, United Kingdom; Chalmers University of Technology, Sweden; Inria - Université Paris-Saclay - CPHT - École Polytechnique - Institut Polytechnique de Paris, France) Nonlocality and its connections to entanglement are fundamental features of quantum mechanics that have found numerous applications in quantum information science. A set of correlations is said to be nonlocal if it cannot be reproduced by spacelike-separated parties sharing randomness and performing local operations. An important practical consideration is that the runtime of the parties has to be shorter than the time it takes light to travel between them. One way to model this restriction is to assume that the parties are computationally bounded. We therefore initiate the study of nonlocality under computational assumptions and derive the following results: (a) We define the set NEL (not-efficiently-local) as consisting of all bipartite states whose correlations arising from local measurements cannot be reproduced with shared randomness and polynomial-time local operations. (b) Under the assumption that the Learning With Errors problem cannot be solved in quantum polynomial time, we show that NEL=ENT, where ENT is the set of all bipartite entangled states (pure and mixed). This is in contrast to the standard notion of nonlocality, where it is known that some entangled states, e.g. Werner states, are local. In essence, we show that there exist (efficient) local measurements producing correlations that cannot be reproduced through shared randomness and quantum polynomial-time computation. (c) We prove that if NEL=ENT unconditionally, then BQP≠PP. In other words, the ability to certify all bipartite entangled states against computationally bounded adversaries gives a nontrivial separation of complexity classes. 
(d) Using (c), we show that a certain natural class of 1-round delegated quantum computation protocols that are sound against PP provers cannot exist. @InProceedings{STOC24p1018, author = {Grzegorz Gluch and Khashayar Barooti and Alexandru Gheorghiu and Marc-Olivier Renou}, title = {Nonlocality under Computational Assumptions}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1018--1026}, doi = {10.1145/3618260.3649750}, year = {2024}, } Publisher's Version 

Gholami, Iman 
STOC '24: "PrizeCollecting Steiner Tree: ..."
Prize-Collecting Steiner Tree: A 1.79 Approximation
Ali Ahmadi, Iman Gholami, MohammadTaghi Hajiaghayi, Peyman Jabbarzade, and Mohammad Mahdavi (University of Maryland, USA) Prize-Collecting Steiner Tree (PCST) is a generalization of the Steiner Tree problem, a fundamental problem in computer science. In the classic Steiner Tree problem, we aim to connect a set of vertices known as terminals using the minimum-weight tree in a given weighted graph. In this generalized version, each vertex has a penalty, and there is flexibility to decide whether to connect each vertex or pay its associated penalty, making the problem more realistic and practical. Both the Steiner Tree problem and its Prize-Collecting version had long-standing 2-approximation algorithms, matching the integrality gap of the natural LP formulations for both. This barrier for both problems has been surpassed, with algorithms achieving approximation factors below 2. While research on the Steiner Tree problem has led to a series of reductions in the approximation ratio below 2, culminating in a ln(4)+ε approximation by Byrka, Grandoni, Rothvoß, and Sanità [STOC’10], the Prize-Collecting version has not seen improvements in the past 15 years since the work of Archer, Bateni, Hajiaghayi, and Karloff [FOCS’09, SIAM J. Comput.’11], which reduced the approximation factor for this problem from 2 to 1.9672. Interestingly, even the Prize-Collecting TSP approximation, which was first improved below 2 in the same paper, has seen several advancements since then (see, e.g., Blauth and Nägele [STOC’23]). In this paper, we reduce the approximation factor for the PCST problem substantially to 1.7994 via a novel iterative approach. @InProceedings{STOC24p1641, author = {Ali Ahmadi and Iman Gholami and MohammadTaghi Hajiaghayi and Peyman Jabbarzade and Mohammad Mahdavi}, title = {Prize-Collecting Steiner Tree: A 1.79 Approximation}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1641--1652}, doi = {10.1145/3618260.3649789}, year = {2024}, } Publisher's Version 

Giannopoulou, Archontia C. 
STOC '24: "A Flat Wall Theorem for Matching ..."
A Flat Wall Theorem for Matching Minors in Bipartite Graphs
Archontia C. Giannopoulou and Sebastian Wiederrecht (National and Kapodistrian University of Athens, Greece; Institute for Basic Science, South Korea) In 1913, Pólya asked for which (0,1)-matrices A it is possible to create a new matrix A′ by changing some of the signs such that the permanent of A equals the determinant of A′. A combinatorial solution to this problem was found by Little in 1975; he found these matrices to be exactly the biadjacency matrices of bipartite graphs excluding K_{3,3} as a matching minor. Utilising ideas from graph minors theory, this characterisation was later shown to yield a polynomial-time algorithm to compute the permanent of matrices which satisfy Little’s condition. By a seminal result of Valiant, computing the permanent of (0,1)-matrices in general is #P-hard; however, it can be observed that the tractability of the permanent is closely related to the exclusion of matching minors in bipartite graphs. Building on the results of Robertson and Seymour’s graph minors theory, it was shown that the permanent remains tractable under the exclusion of a planar or a single-crossing matching minor. In this paper, we provide an essential next step in the form of a matching-theoretic analogue of the Flat Wall Theorem for bipartite graphs, describing the local structure of bipartite graphs excluding K_{t,t} as a matching minor. Our result builds on a tight relationship between structural digraph theory and matching theory and allows us to deduce a Flat Wall Theorem for digraphs which substantially differs from a previously established directed variant of this theorem. @InProceedings{STOC24p716, author = {Archontia C. Giannopoulou and Sebastian Wiederrecht}, title = {A Flat Wall Theorem for Matching Minors in Bipartite Graphs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {716--727}, doi = {10.1145/3618260.3649774}, year = {2024}, } Publisher's Version 

Girish, Uma 
STOC '24: "The Power of Adaptivity in ..."
The Power of Adaptivity in Quantum Query Algorithms
Uma Girish, Makrand Sinha, Avishay Tal, and Kewen Wu (Princeton University, USA; University of Illinois at Urbana-Champaign, USA; University of California at Berkeley, USA) Motivated by limitations on the depth of near-term quantum devices, we study the depth-computation trade-off in the query model, where depth corresponds to the number of adaptive query rounds and the computation per layer corresponds to the number of parallel queries per round. We achieve the strongest known separation between quantum algorithms with r versus r−1 rounds of adaptivity. We do so by using the k-fold Forrelation problem introduced by Aaronson and Ambainis (SICOMP’18). For k=2r, this problem can be solved using an r-round quantum algorithm with only one query per round, yet we show that any (r−1)-round quantum algorithm needs an exponential (in the number of qubits) number of parallel queries per round. Our results are proven following the Fourier-analytic machinery developed in recent works on quantum-classical separations. The key new components in our result are bounds on the Fourier weights of quantum query algorithms with a bounded number of rounds of adaptivity. These may be of independent interest as they distinguish the polynomials that arise from such algorithms from arbitrary bounded polynomials of the same degree. @InProceedings{STOC24p1488, author = {Uma Girish and Makrand Sinha and Avishay Tal and Kewen Wu}, title = {The Power of Adaptivity in Quantum Query Algorithms}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1488--1497}, doi = {10.1145/3618260.3649621}, year = {2024}, } Publisher's Version

Gluch, Grzegorz 
STOC '24: "Nonlocality under Computational ..."
Nonlocality under Computational Assumptions
Grzegorz Gluch, Khashayar Barooti, Alexandru Gheorghiu, and Marc-Olivier Renou (EPFL, Lausanne, Switzerland; Aztec Labs, United Kingdom; Chalmers University of Technology, Sweden; Inria, Université Paris-Saclay, CPHT, École Polytechnique, Institut Polytechnique de Paris, France) Nonlocality and its connections to entanglement are fundamental features of quantum mechanics that have found numerous applications in quantum information science. A set of correlations is said to be nonlocal if it cannot be reproduced by spacelike-separated parties sharing randomness and performing local operations. An important practical consideration is that the runtime of the parties has to be shorter than the time it takes light to travel between them. One way to model this restriction is to assume that the parties are computationally bounded. We therefore initiate the study of nonlocality under computational assumptions and derive the following results: (a) We define the set NEL (not-efficiently-local) as consisting of all bipartite states whose correlations arising from local measurements cannot be reproduced with shared randomness and polynomial-time local operations. (b) Under the assumption that the Learning With Errors problem cannot be solved in quantum polynomial time, we show that NEL=ENT, where ENT is the set of all bipartite entangled states (pure and mixed). This is in contrast to the standard notion of nonlocality, where it is known that some entangled states, e.g. Werner states, are local. In essence, we show that there exist (efficient) local measurements producing correlations that cannot be reproduced through shared randomness and quantum polynomial-time computation. (c) We prove that if NEL=ENT unconditionally, then BQP≠PP. In other words, the ability to certify all bipartite entangled states against computationally bounded adversaries gives a nontrivial separation of complexity classes.
(d) Using (c), we show that a certain natural class of one-round delegated quantum computation protocols that are sound against PP provers cannot exist. @InProceedings{STOC24p1018, author = {Grzegorz Gluch and Khashayar Barooti and Alexandru Gheorghiu and Marc-Olivier Renou}, title = {Nonlocality under Computational Assumptions}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1018--1026}, doi = {10.1145/3618260.3649750}, year = {2024}, } Publisher's Version

Goldberg, Paul W. 
STOC '24: "The Complexity of Computing ..."
The Complexity of Computing KKT Solutions of Quadratic Programs
John Fearnley, Paul W. Goldberg, Alexandros Hollender, and Rahul Savani (University of Liverpool, United Kingdom; University of Oxford, United Kingdom; Alan Turing Institute, United Kingdom) It is well known that solving a (nonconvex) quadratic program is NP-hard. We show that the problem remains hard even if we are only looking for a Karush-Kuhn-Tucker (KKT) point, instead of a global optimum. Namely, we prove that computing a KKT point of a quadratic polynomial over the domain [0,1]^{n} is complete for the class CLS = PPAD∩PLS. @InProceedings{STOC24p892, author = {John Fearnley and Paul W. Goldberg and Alexandros Hollender and Rahul Savani}, title = {The Complexity of Computing KKT Solutions of Quadratic Programs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {892--903}, doi = {10.1145/3618260.3649647}, year = {2024}, } Publisher's Version
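To make concrete the object whose computation this abstract shows to be CLS-complete, here is a minimal Python sketch (not from the paper; the function name and tolerance are our own choices) of what it means for a point of the box [0,1]^n to be a KKT point of a quadratic f(x) = (1/2)xᵀQx + bᵀx: each gradient coordinate must be nonnegative at an active lower bound, nonpositive at an active upper bound, and zero in the interior.

```python
import numpy as np

def is_kkt_point(Q, b, x, tol=1e-9):
    """Check the KKT conditions for min (1/2) x^T Q x + b^T x over the box [0,1]^n.

    With gradient g = Qx + b, a KKT point requires, for each coordinate i:
      - x_i = 0 (lower bound active): g_i >= 0  (no feasible descent direction),
      - x_i = 1 (upper bound active): g_i <= 0,
      - 0 < x_i < 1 (interior):       g_i = 0   (stationarity).
    """
    x = np.asarray(x, dtype=float)
    g = Q @ x + b  # gradient of the quadratic at x
    for xi, gi in zip(x, g):
        if xi <= tol:            # at the lower bound
            if gi < -tol:
                return False
        elif xi >= 1.0 - tol:    # at the upper bound
            if gi > tol:
                return False
        elif abs(gi) > tol:      # interior: stationarity must hold
            return False
    return True

# Example: f(x) = x_1^2 - x_1 + x_2^2 - x_2 has its unconstrained minimizer
# at (1/2, 1/2), inside the box, so that point is KKT; the corner (0,0) is not.
Q = np.diag([2.0, 2.0])
b = np.array([-1.0, -1.0])
print(is_kkt_point(Q, b, [0.5, 0.5]))  # True
print(is_kkt_point(Q, b, [0.0, 0.0]))  # False
```

Note that verifying the KKT conditions at a given point is easy; the paper's hardness result concerns finding such a point.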

Golowich, Louis 
STOC '24: "Approaching the Quantum Singleton ..."
Approaching the Quantum Singleton Bound with Approximate Error Correction
Thiago Bergamaschi, Louis Golowich, and Sam Gunn (University of California at Berkeley, USA) It is well known that no quantum error-correcting code of rate R can correct adversarial errors on more than a (1−R)/4 fraction of symbols. But what if we only require our codes to approximately recover the message? In this work, we construct efficiently decodable approximate quantum codes against adversarial error rates approaching the quantum Singleton bound of (1−R)/2, for any constant rate R. Specifically, for every R ∈ (0,1) and γ>0, we construct codes of rate R, message length k, and alphabet size 2^{O(1/γ^{5})}, that are efficiently decodable against a (1−R−γ)/2 fraction of adversarial errors and recover the message up to inverse-exponential error 2^{−Ω(k)}. At a technical level, we use classical robust secret sharing and quantum purity testing to reduce approximate quantum error correction to a suitable notion of quantum list decoding. We then instantiate our notion of quantum list decoding by (i) introducing folded quantum Reed-Solomon codes, and (ii) applying a new, quantum version of distance amplification. @InProceedings{STOC24p1507, author = {Thiago Bergamaschi and Louis Golowich and Sam Gunn}, title = {Approaching the Quantum Singleton Bound with Approximate Error Correction}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1507--1516}, doi = {10.1145/3618260.3649680}, year = {2024}, } Publisher's Version

Golowich, Noah 
STOC '24: "Exploring and Learning in ..."
Exploring and Learning in Sparse Linear MDPs without Computationally Intractable Oracles
Noah Golowich, Ankur Moitra, and Dhruv Rohatgi (Massachusetts Institute of Technology, USA) The key assumption underlying linear Markov Decision Processes (MDPs) is that the learner has access to a known feature map φ(x, a) that maps state-action pairs to d-dimensional vectors, and that the rewards and transition probabilities are linear functions in this representation. But where do these features come from? In the absence of expert domain knowledge, a tempting strategy is to use the “kitchen sink” approach and hope that the true features are included in a much larger set of potential features. In this paper we revisit linear MDPs from the perspective of feature selection. In a k-sparse linear MDP, there is an unknown subset S ⊂ [d] of size k containing all the relevant features, and the goal is to learn a near-optimal policy in only poly(k, log d) interactions with the environment. Our main result is the first polynomial-time algorithm for this problem. In contrast, earlier works either made prohibitively strong assumptions that obviated the need for exploration, or required solving computationally intractable optimization problems. Along the way we introduce the notion of an emulator: a succinct approximate representation of the transitions that still suffices for computing certain Bellman backups. Since linear MDPs are a nonparametric model, it is not even obvious whether polynomial-sized emulators exist. We show that they do exist, and moreover can be computed efficiently via convex programming. As a corollary of our main result, we give an algorithm for learning a near-optimal policy in block MDPs whose decoding function is a low-depth decision tree; the algorithm runs in quasi-polynomial time and takes a polynomial number of samples (in the size of the decision tree). This can be seen as a reinforcement learning analogue of classic results in computational learning theory.
Furthermore, it gives a natural model where improving the sample complexity via representation learning is computationally feasible. @InProceedings{STOC24p183, author = {Noah Golowich and Ankur Moitra and Dhruv Rohatgi}, title = {Exploring and Learning in Sparse Linear MDPs without Computationally Intractable Oracles}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {183--193}, doi = {10.1145/3618260.3649710}, year = {2024}, } Publisher's Version STOC '24: "From External to Swap Regret ..." From External to Swap Regret 2.0: An Efficient Reduction for Large Action Spaces Yuval Dagan, Constantinos Daskalakis, Maxwell Fishelson, and Noah Golowich (University of California at Berkeley, USA; Massachusetts Institute of Technology, USA) We provide a novel reduction from swap-regret minimization to external-regret minimization, which improves upon the classical reductions of Blum-Mansour and Stoltz-Lugosi in that it does not require finiteness of the space of actions. We show that, whenever there exists a no-external-regret algorithm for some hypothesis class, there must also exist a no-swap-regret algorithm for that same class. For the problem of learning with expert advice, our result implies that it is possible to guarantee that the swap regret is bounded by є after (log N)^{Õ(1/є)} rounds and with O(N) per-iteration complexity, where N is the number of experts, while the classical reductions of Blum-Mansour and Stoltz-Lugosi require at least Ω(N/є^{2}) rounds and at least Ω(N^{3}) total computational cost. Our result comes with an associated lower bound, which, in contrast to that of Blum-Mansour, holds for oblivious and ℓ_{1}-constrained adversaries and learners that can employ distributions over experts, showing that the number of rounds must be Ω(N/є^{2}) or exponential in 1/є. Our reduction implies that, if no-regret learning is possible in some game, then this game must have approximate correlated equilibria, of arbitrarily good approximation.
This strengthens the folklore implication of no-regret learning that approximate coarse correlated equilibria exist. Importantly, it provides a sufficient condition for the existence of approximate correlated equilibria which vastly extends the requirement that the action set is finite or the requirement that the action set is compact and the utility functions are continuous, allowing for games with finite Littlestone or finite sequential fat-shattering dimension, thus answering a question left open in “Fast rates for nonparametric online learning: from realizability to learning in games” and “Online learning and solving infinite games with an ERM oracle”. Moreover, it answers several outstanding questions about equilibrium computation and/or learning in games. In particular, for constant values of є: (a) we show that є-approximate correlated equilibria in extensive-form games can be computed efficiently, advancing a long-standing open problem for extensive-form games; see e.g. “Extensive-form correlated equilibrium: Definition and computational complexity” and “Polynomial-Time Linear-Swap Regret Minimization in Imperfect-Information Sequential Games”; (b) we show that the query and communication complexities of computing є-approximate correlated equilibria in N-action normal-form games are N · poly log(N) and poly log N respectively, advancing an open problem of “Informational Bounds on Equilibria”; (c) we show that є-approximate correlated equilibria of sparsity poly log N can be computed efficiently, advancing an open problem of “Simple Approximate Equilibria in Large Games”; (d) finally, we show that in the adversarial bandit setting, sublinear swap regret can be achieved in only Õ(N) rounds, advancing an open problem of “From External to Internal Regret” and “Tight Lower Bound and Efficient Reduction for Swap Regret”.
@InProceedings{STOC24p1216, author = {Yuval Dagan and Constantinos Daskalakis and Maxwell Fishelson and Noah Golowich}, title = {From External to Swap Regret 2.0: An Efficient Reduction for Large Action Spaces}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1216--1222}, doi = {10.1145/3618260.3649681}, year = {2024}, } Publisher's Version

Gonczarowski, Yannai A. 
STOC '24: "Structural Complexities of ..."
Structural Complexities of Matching Mechanisms
Yannai A. Gonczarowski and Clayton Thomas (Harvard University, USA; Microsoft Research, USA) We study various novel complexity measures for two-sided matching mechanisms, applied to the two canonical strategyproof matching mechanisms, Deferred Acceptance (DA) and Top Trading Cycles (TTC). Our metrics are designed to capture the complexity of various structural (rather than computational) concerns, in particular ones of recent interest within economics. We consider a unified, flexible approach to formalizing our questions: define a protocol or data structure performing some task, and bound the number of bits that it requires. Our main results apply this approach to four questions of general interest; for mechanisms matching applicants to institutions, our questions are: (1) How can one applicant affect the outcome matching? (2) How can one applicant affect another applicant's set of options? (3) How can the outcome matching be represented / communicated? (4) How can the outcome matching be verified? Holistically, our results show that TTC is more complex than DA, formalizing previous intuitions that DA has a simpler structure than TTC. For question (2), our result gives a new combinatorial characterization of which institutions are removed from each applicant's set of options when a new applicant is added in DA; this characterization may be of independent interest. For question (3), our result gives new tight lower bounds proving that the relationship between the matching and the priorities is more complex in TTC than in DA. We nonetheless showcase that this higher complexity of TTC is nuanced: by constructing new tight lower-bound instances and new verification protocols, we prove that DA and TTC are comparable in complexity under questions (1) and (4). This more precisely delineates the ways in which TTC is more complex than DA, and emphasizes that diverse considerations must factor into gauging the complexity of matching mechanisms.
@InProceedings{STOC24p455, author = {Yannai A. Gonczarowski and Clayton Thomas}, title = {Structural Complexities of Matching Mechanisms}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {455--466}, doi = {10.1145/3618260.3649737}, year = {2024}, } Publisher's Version

Göös, Mika 
STOC '24: "Hardness Condensation by Restriction ..."
Hardness Condensation by Restriction
Mika Göös, Ilan Newman, Artur Riazanov, and Dmitry Sokolov (EPFL, Lausanne, Switzerland; University of Haifa, Israel) Can every n-bit boolean function with deterministic query complexity k≪n be restricted to O(k) variables such that the query complexity remains Ω(k)? That is, can query complexity be condensed via restriction? We study such hardness condensation questions in both query and communication complexity, proving two main results. Negative: Query complexity cannot be condensed in general: there is a function f with query complexity k such that any restriction of f to O(k) variables has query complexity O(k^{3/4}). Positive: Randomised communication complexity can be condensed for the sink-of-xor function. This yields a quantitatively improved counterexample to the log-approximate-rank conjecture, achieving parameters conjectured by Chattopadhyay, Garg, and Sherif (2021). Along the way we show the existence of Shearer extractors, a new type of seeded extractor whose output bits satisfy prescribed dependencies across distinct seeds. @InProceedings{STOC24p2016, author = {Mika Göös and Ilan Newman and Artur Riazanov and Dmitry Sokolov}, title = {Hardness Condensation by Restriction}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {2016--2027}, doi = {10.1145/3618260.3649711}, year = {2024}, } Publisher's Version

Gopi, Sivakanth 
STOC '24: "Generalized GM-MDS: Polynomial ..."
Generalized GM-MDS: Polynomial Codes Are Higher Order MDS
Joshua Brakensiek, Manik Dhar, and Sivakanth Gopi (Independent, USA; Massachusetts Institute of Technology, USA; Microsoft Research, USA) The GM-MDS theorem, conjectured by Dau-Song-Dong-Yuen and proved by Lovett and Yildiz-Hassibi, shows that the generator matrices of Reed-Solomon codes can attain every possible configuration of zeros for an MDS code. The recently emerging theory of higher order MDS codes has connected the GM-MDS theorem to other important properties of Reed-Solomon codes, including showing that Reed-Solomon codes can achieve list-decoding capacity, even over fields of size linear in the message length. A few works have extended the GM-MDS theorem to other families of codes, including Gabidulin and skew polynomial codes. In this paper, we generalize all these previous results by showing that the GM-MDS theorem applies to any polynomial code, i.e., a code where the columns of the generator matrix are obtained by evaluating linearly independent polynomials at different points. We also show that the GM-MDS theorem applies to dual codes of such polynomial codes, which is nontrivial since the dual of a polynomial code may not be a polynomial code. More generally, we show that the GM-MDS theorem also holds for algebraic codes (and their duals) where columns of the generator matrix are chosen to be points on some irreducible variety which is not contained in a hyperplane through the origin. Our generalization has applications to constructing capacity-achieving list-decodable codes, as shown in a follow-up work [Brakensiek, Dhar, Gopi, Zhang; 2024], where it is proved that randomly punctured algebraic-geometric (AG) codes achieve list-decoding capacity over constant-sized fields.
@InProceedings{STOC24p728, author = {Joshua Brakensiek and Manik Dhar and Sivakanth Gopi}, title = {Generalized GM-MDS: Polynomial Codes Are Higher Order MDS}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {728--739}, doi = {10.1145/3618260.3649637}, year = {2024}, } Publisher's Version STOC '24: "AG Codes Achieve List Decoding ..." AG Codes Achieve List Decoding Capacity over Constant-Sized Fields Joshua Brakensiek, Manik Dhar, Sivakanth Gopi, and Zihan Zhang (Independent, USA; Massachusetts Institute of Technology, USA; Microsoft Research, USA; Ohio State University, USA) The recently emerging field of higher order MDS codes has sought to unify a number of concepts in coding theory. Such areas captured by higher order MDS codes include maximally recoverable (MR) tensor codes, codes with optimal list-decoding guarantees, and codes with constrained generator matrices (as in the GM-MDS theorem). By proving these equivalences, Brakensiek-Gopi-Makam showed the existence of optimally list-decodable Reed-Solomon codes over exponential-sized fields. Building on this, recent breakthroughs by Guo-Zhang and Alrabiah-Guruswami-Li have shown that randomly punctured Reed-Solomon codes achieve list-decoding capacity (which is a relaxation of optimal list-decodability) over linear-size fields. We extend these works by developing a formal theory of relaxed higher order MDS codes. In particular, we show that there are two inequivalent relaxations, which we call lower and upper relaxations. The lower relaxation is equivalent to relaxed optimal list-decodable codes and the upper relaxation is equivalent to relaxed MR tensor codes with a single parity check per column. We then generalize the techniques of Guo-Zhang and Alrabiah-Guruswami-Li to show that both these relaxations can be constructed over constant-size fields by randomly puncturing suitable algebraic-geometric codes. For this, we crucially use the generalized GM-MDS theorem for polynomial codes recently proved by Brakensiek-Dhar-Gopi.
We obtain the following corollaries from our main result: Randomly punctured algebraic-geometric codes of rate R are list-decodable up to radius (L/(L+1))(1−R−є) with list size L over fields of size exp(O(L/є)). In particular, they achieve list-decoding capacity with list size O(1/є) and field size exp(O(1/є^{2})). Prior to this work, AG codes were not even known to achieve list-decoding capacity. By randomly puncturing algebraic-geometric codes, we can construct relaxed MR tensor codes with a single parity check per column over constant-sized fields, whereas (non-relaxed) MR tensor codes require exponential field size. @InProceedings{STOC24p740, author = {Joshua Brakensiek and Manik Dhar and Sivakanth Gopi and Zihan Zhang}, title = {AG Codes Achieve List Decoding Capacity over Constant-Sized Fields}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {740--751}, doi = {10.1145/3618260.3649651}, year = {2024}, } Publisher's Version

Gorsky, Maximilian 
STOC '24: "Packing Even Directed Circuits ..."
Packing Even Directed Circuits Quarter-Integrally
Maximilian Gorsky, Ken-ichi Kawarabayashi, Stephan Kreutzer, and Sebastian Wiederrecht (TU Berlin, Berlin, Germany; National Institute of Informatics, Tokyo, Japan; University of Tokyo, Tokyo, Japan; Institute for Basic Science, Daejeon, South Korea) We prove the existence of a computable function f∶ℕ→ℕ such that for every integer k and every digraph D, either D contains a collection C of k directed cycles of even length such that no vertex of D belongs to more than four cycles in C, or there exists a set S⊆V(D) of size at most f(k) such that D−S has no directed cycle of even length. Moreover, we provide an algorithm that finds one of the two outcomes of this statement in time g(k)n^{O(1)} for some computable function g∶ℕ→ℕ. Our result unites two deep fields of research from the algorithmic theory of digraphs: the study of the Erdős-Pósa property of digraphs and the study of the Even Dicycle Problem. The latter is the decision problem which asks if a given digraph contains an even dicycle and can be traced back to a question of Pólya from 1913. It remained open until a polynomial-time algorithm was finally found by Robertson, Seymour, and Thomas (Ann. of Math. (2) 1999) and, independently, McCuaig (Electron. J. Combin. 2004; announced jointly at STOC 1997). The Even Dicycle Problem is equivalent to the recognition problem of Pfaffian bipartite graphs and has applications even beyond discrete mathematics and theoretical computer science. On the other hand, Younger’s Conjecture (1973) states that dicycles have the Erdős-Pósa property. The conjecture was proven more than two decades later by Reed, Robertson, Seymour, and Thomas (Combinatorica 1996) and opened the path for structural digraph theory as well as the algorithmic study of the directed feedback vertex set problem.
Our approach builds upon the techniques used to resolve both problems and combines them into a powerful structural theorem that yields further algorithmic applications for other prominent problems. @InProceedings{STOC24p692, author = {Maximilian Gorsky and Ken-ichi Kawarabayashi and Stephan Kreutzer and Sebastian Wiederrecht}, title = {Packing Even Directed Circuits Quarter-Integrally}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {692--703}, doi = {10.1145/3618260.3649682}, year = {2024}, } Publisher's Version

Gosset, David 
STOC '24: "Classical Simulation of Peaked ..."
Classical Simulation of Peaked Shallow Quantum Circuits
Sergey Bravyi, David Gosset, and Yinchen Liu (IBM Research, USA; University of Waterloo, Canada; Institute for Quantum Computing, Canada; Perimeter Institute for Theoretical Physics, Canada) An n-qubit quantum circuit is said to be peaked if it has an output probability that is at least inverse-polynomially large as a function of n. We describe a classical algorithm with quasi-polynomial runtime n^{O(log n)} that approximately samples from the output distribution of a peaked constant-depth circuit. We give even faster algorithms for circuits composed of nearest-neighbor gates on a D-dimensional grid of qubits, with polynomial runtime n^{O(1)} if D=2 and almost-polynomial runtime n^{O(log log n)} for D>2. Our sampling algorithms can be used to estimate output probabilities of shallow circuits to within a given inverse-polynomial additive error, improving on previously known methods. As a simple application, we obtain a quasi-polynomial algorithm to estimate the magnitude of the expected value of any Pauli observable in the output state of a shallow circuit (which may or may not be peaked). This is a dramatic improvement over the prior state-of-the-art algorithm, which had an exponential scaling in √n. @InProceedings{STOC24p561, author = {Sergey Bravyi and David Gosset and Yinchen Liu}, title = {Classical Simulation of Peaked Shallow Quantum Circuits}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {561--572}, doi = {10.1145/3618260.3649638}, year = {2024}, } Publisher's Version

Grewal, Sabee 
STOC '24: "Improved Stabilizer Estimation ..."
Improved Stabilizer Estimation via Bell Difference Sampling
Sabee Grewal, Vishnu Iyer, William Kretschmer, and Daniel Liang (University of Texas at Austin, USA; Simons Institute for the Theory of Computing, Berkeley, USA; Rice University, USA) We study the complexity of learning quantum states in various models with respect to the stabilizer formalism and obtain the following results: We prove that Ω(n) T-gates are necessary for any Clifford+T circuit to prepare computationally pseudorandom quantum states, an exponential improvement over the previously known bound. This bound is asymptotically tight if linear-time quantum-secure pseudorandom functions exist. Given an n-qubit pure quantum state |ψ⟩ that has fidelity at least τ with some stabilizer state, we give an algorithm that outputs a succinct description of a stabilizer state that witnesses fidelity at least τ − ε. The algorithm uses O(n/(ε^{2}τ^{4})) samples and exp(O(n/τ^{4}))/ε^{2} time. In the regime of constant τ, this algorithm estimates stabilizer fidelity substantially faster than the naive exp(O(n^{2}))-time brute-force algorithm over all stabilizer states. In the special case of τ > cos^{2}(π/8), we show that a modification of the above algorithm runs in polynomial time. We exhibit a tolerant property-testing algorithm for stabilizer states. The underlying algorithmic primitive in all of our results is Bell difference sampling. To prove our results, we establish and/or strengthen connections between Bell difference sampling, symplectic Fourier analysis, and graph theory. @InProceedings{STOC24p1352, author = {Sabee Grewal and Vishnu Iyer and William Kretschmer and Daniel Liang}, title = {Improved Stabilizer Estimation via Bell Difference Sampling}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1352--1363}, doi = {10.1145/3618260.3649738}, year = {2024}, } Publisher's Version

Grosser, Stefan 
STOC '24: "Black-Box PPP Is Not Turing-Closed ..."
Black-Box PPP Is Not Turing-Closed
Noah Fleming, Stefan Grosser, Toniann Pitassi, and Robert Robere (Memorial University of Newfoundland, Canada; McGill University, Canada; Columbia University, USA) The complexity class PPP contains all total search problems many-one reducible to the Pigeon problem, where we are given a succinct encoding of a function mapping n+1 pigeons to n holes, and must output two pigeons that collide in a hole. PPP is one of the “original five” syntactically defined subclasses of TFNP, and has been extensively studied due to the strong connections between its defining problem (the pigeonhole principle) and problems in cryptography, extremal combinatorics, proof complexity, and other fields. However, despite its importance, PPP appears to be less robust than the other important TFNP subclasses. In particular, unlike all other major TFNP subclasses, it was conjectured by Buss and Johnson that PPP is not closed under Turing reductions, and they called for a black-box separation in order to provide evidence for this conjecture. The question of whether PPP contains its Turing closure was further highlighted by Daskalakis in his recent IMU Abacus Medal Lecture. In this work we prove that PPP is indeed not Turing-closed in the black-box setting, affirmatively resolving the above conjecture and providing strong evidence that PPP is not Turing-closed. In fact, we are able to separate PPP from its non-adaptive Turing closure, in which all calls to the Pigeon oracle must be made in parallel. This differentiates PPP from all other important TFNP subclasses, and especially from its closely related subclass PWPP, defined by reducibility to the weak pigeonhole principle, which is known to be non-adaptively Turing-closed. Our proof requires developing new tools for PPP lower bounds, and creates new connections between PPP and the theory of pseudo-expectation operators used for Sherali-Adams and Sum-of-Squares lower bounds.
In particular, we introduce a new type of pseudo-expectation operator that is precisely tailored for lower bounds against black-box PPP, which may be of independent interest. @InProceedings{STOC24p1405, author = {Noah Fleming and Stefan Grosser and Toniann Pitassi and Robert Robere}, title = {Black-Box PPP Is Not Turing-Closed}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1405--1414}, doi = {10.1145/3618260.3649769}, year = {2024}, } Publisher's Version

Grunau, Christoph 
STOC '24: "Work-Efficient Parallel Derandomization ..."
Work-Efficient Parallel Derandomization II: Optimal Concentrations via Bootstrapping
Mohsen Ghaffari and Christoph Grunau (Massachusetts Institute of Technology, USA; ETH Zurich, Switzerland) In this paper, we present an efficient parallel derandomization method for randomized algorithms that rely on concentrations such as the Chernoff bound. This settles a classic problem in parallel derandomization, which dates back to the 1980s. Concretely, consider the set balancing problem, where m sets of size at most s are given in a ground set of size n, and we should partition the ground set into two parts such that each set is split evenly up to a small additive (discrepancy) bound. A random partition achieves a discrepancy of O(√(s log m)) in each set, by the Chernoff bound. We give a deterministic parallel algorithm that matches this bound, using near-linear work Õ(m+n+∑_{i=1}^{m} S_{i}) and polylogarithmic depth poly(log(mn)). The previous results were weaker in the discrepancy and/or work bounds: Motwani, Naor, and Naor [FOCS’89] and Berger and Rompel [FOCS’89] achieve discrepancy s^{є} · O(√(s log m)) with work Õ(m+n+∑_{i=1}^{m} S_{i}) · m^{Θ(1/є)} and polylogarithmic depth; the discrepancy was optimized to O(√(s log m)) in later work, e.g. by Harris [Algorithmica’19], but the work bound remained prohibitively high at Õ(m^{4}n^{3}). Notice that these would require a large polynomial number of processors to even match the near-linear runtime of the sequential algorithm. Ghaffari, Grunau, and Rozhon [FOCS’23] achieve discrepancy s/poly(log(nm)) + O(√(s log m)) with near-linear work and polylogarithmic depth. Notice that this discrepancy is nearly quadratically larger than the desired bound and barely sublinear with respect to the trivial bound of s. Our method is different from prior work. It can be viewed as a novel bootstrapping mechanism that uses crude partitioning algorithms as a subroutine and sharpens their discrepancy to the optimal bound.
In particular, we solve the problem recursively, by using the crude partition in each iteration to split the variables into many smaller parts, and then we find a constraint for the variables in each part such that we reduce the overall number of variables in the problem. The scheme relies crucially on an interesting application of the multiplicative weights update method to control the variance losses in each iteration. Our result applies to the much more general lattice approximation problem, thus providing an efficient parallel derandomization of the randomized rounding scheme for linear programs. @InProceedings{STOC24p1889, author = {Mohsen Ghaffari and Christoph Grunau}, title = {Work-Efficient Parallel Derandomization II: Optimal Concentrations via Bootstrapping}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1889--1900}, doi = {10.1145/3618260.3649668}, year = {2024}, } Publisher's Version STOC '24: "Dynamic O(Arboricity) Coloring ..." Dynamic O(Arboricity) Coloring in Polylogarithmic Worst-Case Time Mohsen Ghaffari and Christoph Grunau (Massachusetts Institute of Technology, USA; ETH Zurich, Switzerland) A recent work by Christiansen, Nowicki, and Rotenberg [STOC’23] provides dynamic algorithms for coloring sparse graphs, concretely as a function of the graph’s arboricity α. They give two randomized algorithms: O(α log α) implicit coloring in poly(log n) worst-case update and query times, and O(min{α log α, α log log log n}) implicit coloring in poly(log n) amortized update and query times (against an oblivious adversary). We improve these results in terms of the number of colors and the time guarantee: First, we present an extremely simple algorithm that computes an O(α)-implicit coloring with poly(log n) amortized update and query times. Second, and as the main technical contribution of our work, we show that the time complexity guarantee can be strengthened from amortized to worst-case.
That is, we give a dynamic algorithm for implicit O(α)-coloring with poly(log n) worst-case update and query times (against an oblivious adversary). @InProceedings{STOC24p1184, author = {Mohsen Ghaffari and Christoph Grunau}, title = {Dynamic O(Arboricity) Coloring in Polylogarithmic Worst-Case Time}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1184--1191}, doi = {10.1145/3618260.3649782}, year = {2024}, } Publisher's Version

Gunn, Sam 
STOC '24: "How to Use Quantum Indistinguishability ..."
How to Use Quantum Indistinguishability Obfuscation
Andrea Coladangelo and Sam Gunn (University of Washington, USA; University of California at Berkeley, USA) Quantum copy protection, introduced by Aaronson, enables giving out a quantum program description that cannot be meaningfully duplicated. Despite over a decade of study, copy protection is only known to be possible for a very limited class of programs. As our first contribution, we show how to achieve "best-possible" copy protection for all programs. We do this by introducing quantum state indistinguishability obfuscation (qsiO), a notion of obfuscation for quantum descriptions of classical programs. We show that applying qsiO to a program immediately achieves best-possible copy protection. Our second contribution is to show that, assuming injective one-way functions exist, qsiO is concrete copy protection for a large family of puncturable programs, significantly expanding the class of copy-protectable programs. A key tool in our proof is a new variant of unclonable encryption (UE) that we call coupled unclonable encryption (cUE). While constructing UE in the standard model remains an important open problem, we are able to build cUE from one-way functions. If we additionally assume the existence of UE, then we can further expand the class of puncturable programs for which qsiO is copy protection. Finally, we construct qsiO relative to an efficient quantum oracle. @InProceedings{STOC24p1003, author = {Andrea Coladangelo and Sam Gunn}, title = {How to Use Quantum Indistinguishability Obfuscation}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1003--1008}, doi = {10.1145/3618260.3649779}, year = {2024}, } Publisher's Version STOC '24: "Approaching the Quantum Singleton ..."
Approaching the Quantum Singleton Bound with Approximate Error Correction Thiago Bergamaschi , Louis Golowich , and Sam Gunn (University of California at Berkeley, USA) It is well known that no quantum error-correcting code of rate R can correct adversarial errors on more than a (1−R)/4 fraction of symbols. But what if we only require our codes to approximately recover the message? In this work, we construct efficiently decodable approximate quantum codes against adversarial error rates approaching the quantum Singleton bound of (1−R)/2, for any constant rate R. Specifically, for every R ∈ (0,1) and γ>0, we construct codes of rate R, message length k, and alphabet size 2^{O(1/γ^{5})}, that are efficiently decodable against a (1−R−γ)/2 fraction of adversarial errors and recover the message up to inverse-exponential error 2^{−Ω(k)}. At a technical level, we use classical robust secret sharing and quantum purity testing to reduce approximate quantum error correction to a suitable notion of quantum list decoding. We then instantiate our notion of quantum list decoding by (i) introducing folded quantum Reed–Solomon codes, and (ii) applying a new, quantum version of distance amplification. @InProceedings{STOC24p1507, author = {Thiago Bergamaschi and Louis Golowich and Sam Gunn}, title = {Approaching the Quantum Singleton Bound with Approximate Error Correction}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1507--1516}, doi = {10.1145/3618260.3649680}, year = {2024}, } Publisher's Version

Guo, Siyao 
STOC '24: "Tight Time-Space Tradeoffs ..."
Tight Time-Space Tradeoffs for the Decisional Diffie-Hellman Problem
Akshima , Tyler Besselman , Siyao Guo , Zhiye Xie , and Yuping Ye (NYU Shanghai, China; East China Normal University, China) In the (preprocessing) Decisional Diffie-Hellman (DDH) problem, we are given a cyclic group G with a generator g and a prime order N, and want to prepare some advice of size S, such that we can efficiently distinguish (g^{x},g^{y},g^{xy}) from (g^{x},g^{y},g^{z}) in time T for uniformly and independently chosen x,y,z from [N]. This is a central cryptographic problem whose computational hardness underpins many widely deployed schemes such as the Diffie–Hellman key exchange protocol. We prove that any generic preprocessing DDH algorithm (operating in any cyclic group) achieves advantage at most O(ST^{2}/N). This bound matches the best known attack up to polylog factors, and confirms that DDH is as secure as the (seemingly harder) discrete logarithm problem against preprocessing attacks. Our result resolves an open question by Corrigan-Gibbs and Kogan (EUROCRYPT 2018), which proved optimal bounds for many variants of discrete logarithm problems except DDH (with an O(√(ST^{2}/N)) bound). We obtain our results by adopting and refining the approach of Gravin, Guo, Kwok, and Lu (SODA 2021) and of Yun (EUROCRYPT 2015). Along the way, we significantly simplify and extend the above techniques, which may be of independent interest. The highlights of our techniques are the following: 1. We obtain a simpler reduction from decisional problems against S-bit advice to their S-wise XOR lemmas against zero advice, recovering the reduction by Gravin, Guo, Kwok, and Lu (SODA 2021). 2. We show how to reduce generic hardness of decisional problems to their variants in the simpler hyperplane model proposed by Yun (EUROCRYPT 2015). This is the first work analyzing a decisional problem in Yun’s model, answering an open problem proposed by Auerbach, Hoffman, and Pascual-Perez (TCC 2023). 3. We prove an S-wise XOR lemma for DDH in Yun’s model.
As a corollary, we obtain the generic hardness of the S-XOR DDH problem. @InProceedings{STOC24p1739, author = { Akshima and Tyler Besselman and Siyao Guo and Zhiye Xie and Yuping Ye}, title = {Tight Time-Space Tradeoffs for the Decisional Diffie-Hellman Problem}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1739--1749}, doi = {10.1145/3618260.3649752}, year = {2024}, } Publisher's Version
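As a toy illustration of the two distributions in the DDH game above, the following sketch samples real tuples (g^x, g^y, g^xy) and random tuples (g^x, g^y, g^z) in a small prime-order subgroup, alongside the brute-force discrete-log baseline that preprocessing attacks aim to beat. The parameters are deliberately tiny and insecure, chosen for this illustration and not taken from the paper.

```python
import random

# Toy parameters (NOT secure): 467 is prime, 467 - 1 = 2 * 233, and
# G = 4 generates the subgroup of Z_467* of prime order N = 233.
P = 467
N = 233
G = 4

def ddh_tuple(real, rng=random):
    """Sample (g^x, g^y, g^xy) if `real`, else (g^x, g^y, g^z) for random z."""
    x, y = rng.randrange(N), rng.randrange(N)
    z = (x * y) % N if real else rng.randrange(N)
    return pow(G, x, P), pow(G, y, P), pow(G, z, P)

def brute_force_dlog(h):
    """Generic discrete log by exhaustive search: time ~N, with no advice.
    The paper's S/T trade-off concerns algorithms that beat this baseline
    using S bits of preprocessed advice and T group operations."""
    t = 1
    for e in range(N):
        if t == h:
            return e
        t = (t * G) % P
    raise ValueError("not in the subgroup generated by G")
```

A distinguisher given (A, B, C) can, at toy scale, recover x and y by brute force and test whether C = g^{xy}; the point of the preprocessing model is ruling out much faster advice-aided distinguishers.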

Gupta, Manoj 
STOC '24: "Nearly Optimal Fault Tolerant ..."
Nearly Optimal Fault Tolerant Distance Oracle
Dipan Dey and Manoj Gupta (IIT Gandhinagar, India) We present an f-fault tolerant distance oracle for an undirected weighted graph where each edge has an integral weight from [1 … W]. Given a set F of f edges, as well as a source node s and a destination node t, our oracle returns the shortest path from s to t avoiding F in O((cf log(nW))^{O(f^{2})}) time, where c > 1 is a constant. The space complexity of our oracle is O(f^{4}n^{2}log^{2} (nW)). For a constant f, our oracle is nearly optimal both in terms of space and time (barring some logarithmic factor). @InProceedings{STOC24p944, author = {Dipan Dey and Manoj Gupta}, title = {Nearly Optimal Fault Tolerant Distance Oracle}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {944--955}, doi = {10.1145/3618260.3649697}, year = {2024}, } Publisher's Version
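For contrast with the oracle's query time, the naive baseline for this problem recomputes a shortest path from scratch for every query, skipping the faulty edges. The sketch below is this baseline only (not the authors' data structure): a Dijkstra run that ignores edges in F, costing roughly O(m log n) per query instead of answering from precomputed advice.

```python
import heapq

def shortest_path_avoiding(adj, s, t, F):
    """Distance from s to t in an undirected weighted graph, avoiding edges in F.

    adj: {u: [(v, w), ...]} adjacency lists (each edge listed in both directions).
    F:   set of frozenset({u, v}) edges that have failed.
    Returns the shortest distance, or float('inf') if t is unreachable.
    """
    dist = {s: 0}
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue  # stale queue entry
        if u == t:
            return d
        for v, w in adj.get(u, []):
            if frozenset((u, v)) in F:
                continue  # skip faulty edges
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float('inf')
```

For example, on a triangle with edges 1-2 (weight 1), 2-3 (weight 1), 1-3 (weight 5), the distance from 1 to 3 is 2 normally, and 5 once the edge {2,3} fails.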

Gupta, Meghal 
STOC '24: "Constant Query Local Decoding ..."
Constant Query Local Decoding against Deletions Is Impossible
Meghal Gupta (University of California at Berkeley, USA) Locally decodable codes (LDCs) are error-correcting codes that allow recovery of individual message indices by accessing only a constant number of codeword indices. For substitution errors, it is evident that LDCs exist – Hadamard codes are examples of 2-query LDCs. Research on this front has focused on finding the optimal encoding length for LDCs, for which there is a nearly exponential gap between the best lower bounds and constructions. Ostrovsky and Paskin-Cherniavsky (ICITS 2015) introduced the notion of local decoding to the insertion and deletion setting. In this context, it is not clear whether constant query LDCs exist at all. Indeed, in contrast to the classical setting, Block et al. conjecture that they do not exist. Blocki et al. (FOCS 2021) make progress towards this conjecture, proving that any potential code must have at least exponential encoding length. Our work definitively resolves the conjecture and shows that constant query LDCs do not exist in the insertion/deletion (or even deletion-only) setting. Using a reduction shown by Blocki et al., this also implies that constant query locally correctable codes do not exist in this setting. @InProceedings{STOC24p752, author = {Meghal Gupta}, title = {Constant Query Local Decoding against Deletions Is Impossible}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {752--763}, doi = {10.1145/3618260.3649655}, year = {2024}, } Publisher's Version
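The abstract's opening example can be made concrete. The Hadamard code stores every parity <msg, x> of the message, and the identity <msg, r> XOR <msg, r ⊕ e_i> = msg_i means two codeword reads at random correlated positions recover any message bit, which is exactly the 2-query local decoding property for substitution errors. A minimal sketch (illustration, not from the paper):

```python
import random

def hadamard_encode(msg):
    """Hadamard code: msg is a list of k bits; the codeword has length 2^k,
    and position x holds the inner product <msg, x> mod 2."""
    k = len(msg)
    code = []
    for x in range(2 ** k):
        bit = 0
        for i in range(k):
            bit ^= msg[i] & ((x >> i) & 1)
        code.append(bit)
    return code

def local_decode(code, i, k, rng=random):
    """2-query local decoder for message bit i: pick a random r and read only
    positions r and r ^ e_i; their XOR equals msg[i] on an uncorrupted word,
    and survives a small constant fraction of substitutions with probability
    bounded away from 1/2."""
    r = rng.randrange(2 ** k)
    return code[r] ^ code[r ^ (1 << i)]
```

Note the exponential blowup (message length k, codeword length 2^k) that the lower-bound line of work concerns, and that this guarantee is for substitutions; the paper shows nothing analogous can exist under deletions.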

Gur, Tom 
STOC '24: "Perfect Zero-Knowledge PCPs ..."
Perfect Zero-Knowledge PCPs for #P
Tom Gur , Jack O'Connor , and Nicholas Spooner (University of Cambridge, United Kingdom; University of Warwick, United Kingdom; New York University, USA) We construct perfect zero-knowledge probabilistically checkable proofs (PZK-PCPs) for every language in #P. This is the first construction of a PZK-PCP for any language outside BPP. Furthermore, unlike previous constructions of (statistical) zero-knowledge PCPs, our construction simultaneously achieves non-adaptivity and zero knowledge against arbitrary (adaptive) polynomial-time malicious verifiers. Our construction consists of a novel masked sumcheck PCP, which uses the Combinatorial Nullstellensatz to obtain antisymmetric structure within the hypercube and randomness outside of it. To prove zero knowledge, we introduce the notion of locally simulatable encodings: randomised encodings in which every local view of the encoding can be efficiently sampled given a local view of the message. We show that the code arising from the sumcheck protocol (the Reed–Muller code augmented with subcube sums) admits a locally simulatable encoding. This reduces the algebraic problem of simulating our masked sumcheck to a combinatorial property of antisymmetric functions. @InProceedings{STOC24p1724, author = {Tom Gur and Jack O'Connor and Nicholas Spooner}, title = {Perfect Zero-Knowledge PCPs for #P}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1724--1730}, doi = {10.1145/3618260.3649698}, year = {2024}, } Publisher's Version STOC '24: "On the Power of Interactive ..." On the Power of Interactive Proofs for Learning Tom Gur , Mohammad Mahdi Jahanara , Mohammad Mahdi Khodabandeh , Ninad Rajgopal , Bahar Salamatian , and Igor Shinkar (University of Cambridge, United Kingdom; Simon Fraser University, Canada; Qualcomm, Canada) We continue the study of doubly-efficient proof systems for verifying agnostic PAC learning, for which we obtain the following results.
We construct an interactive protocol for learning the t largest Fourier characters of a given function f ∶ {0,1}^{n} → {0,1} up to an arbitrarily small error, wherein the verifier uses poly(t) random examples. This improves upon the Interactive Goldreich-Levin protocol of Goldwasser, Rothblum, Shafer, and Yehudayoff (ITCS 2021), whose sample complexity is poly(t,n). For agnostically learning the class AC^{0}[2] under the uniform distribution, we build on the work of Carmosino, Impagliazzo, Kabanets, and Kolokolova (APPROX/RANDOM 2017) and design an interactive protocol, where given a function f ∶ {0,1}^{n} → {0,1}, the verifier learns the closest hypothesis up to a polylog(n) multiplicative factor, using quasi-polynomially many random examples. In contrast, this class has been notoriously resistant even to constructing realisable learners (without a prover) using random examples. For agnostically learning k-juntas under the uniform distribution, we obtain an interactive protocol, where the verifier uses O(2^{k}) random examples for a given function f ∶ {0,1}^{n} → {0,1}. Crucially, the sample complexity of the verifier is independent of n. We also show that if we do not insist on doubly-efficient proof systems, then the model becomes trivial. Specifically, we show a protocol for an arbitrary class C of Boolean functions in the distribution-free setting, where the verifier uses O(1) labeled examples to learn f. @InProceedings{STOC24p1063, author = {Tom Gur and Mohammad Mahdi Jahanara and Mohammad Mahdi Khodabandeh and Ninad Rajgopal and Bahar Salamatian and Igor Shinkar}, title = {On the Power of Interactive Proofs for Learning}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1063--1070}, doi = {10.1145/3618260.3649784}, year = {2024}, } Publisher's Version
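The basic primitive behind the first result, estimating a single Fourier coefficient of f from random examples alone, can be sketched with the standard empirical estimator (an illustration, not the paper's protocol): for the ±1 version of f, f̂(S) = E_x[(-1)^{f(x)} · (-1)^{<S,x>}], which a sample average approximates.

```python
import random

def estimate_fourier(f, n, S, samples, rng=random):
    """Empirically estimate the Fourier coefficient of the +-1 version of f at S.

    f: function on n-bit integers -> {0, 1}.
    S: character, encoded as an n-bit mask (the subset of coordinates).
    Returns the sample average of (-1)^{f(x)} * (-1)^{<S, x>} over random x,
    which converges to f_hat(S) as `samples` grows (Chernoff concentration).
    """
    total = 0
    for _ in range(samples):
        x = rng.randrange(2 ** n)
        parity = bin(S & x).count("1") & 1   # <S, x> mod 2
        total += (-1) ** (f(x) ^ parity)     # (-1)^{f(x)} * chi_S(x)
    return total / samples
```

For example, if f itself is the parity of a mask T, the estimate at S = T is exactly 1, while coefficients at other characters concentrate around 0. Finding the t *largest* coefficients without a prover requires query access (Goldreich-Levin); the protocol's point is that a prover lets the verifier manage with poly(t) random examples.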

Guruswami, Venkatesan 
STOC '24: "Randomly Punctured Reed–Solomon ..."
Randomly Punctured Reed–Solomon Codes Achieve List-Decoding Capacity over Linear-Sized Fields
Omar Alrabiah , Venkatesan Guruswami , and Ray Li (University of California at Berkeley, USA; Santa Clara University, USA) Reed–Solomon codes are a classic family of error-correcting codes consisting of evaluations of low-degree polynomials over a finite field on some sequence of distinct field elements. They are widely known for their optimal unique-decoding capabilities, but their list-decoding capabilities are not fully understood. Given the prevalence of Reed–Solomon codes, a fundamental question in coding theory is determining if Reed–Solomon codes can optimally achieve list-decoding capacity. A recent breakthrough by Brakensiek, Gopi, and Makam established that Reed–Solomon codes are combinatorially list-decodable all the way to capacity. However, their results hold for randomly punctured Reed–Solomon codes over an exponentially large field size 2^{O(n)}, where n is the block length of the code. A natural question is whether Reed–Solomon codes can still achieve capacity over smaller fields. Recently, Guo and Zhang showed that Reed–Solomon codes are list-decodable to capacity with field size O(n^{2}). We show that Reed–Solomon codes are list-decodable to capacity with linear field size O(n), which is optimal up to the constant factor. We also give evidence that the ratio between the alphabet size q and code length n cannot be bounded by an absolute constant. Our techniques also show that random linear codes are list-decodable up to (the alphabet-independent) capacity with optimal list size O(1/ε) and near-optimal alphabet size 2^{O(1/ε^{2})}, where ε is the gap to capacity. As far as we are aware, list-decoding up to capacity with optimal list size O(1/ε) was not known to be achievable with any linear code over a constant alphabet size (even non-constructively), and it was also not known to be achievable for random linear codes over any alphabet size. Our proofs are based on the ideas of Guo and Zhang, and we additionally exploit symmetries of reduced intersection matrices.
With our proof, which maintains a hypergraph perspective of the list-decoding problem, we include an alternate presentation of ideas from Brakensiek, Gopi, and Makam that more directly connects the list-decoding problem to the GM-MDS theorem via a hypergraph orientation theorem. @InProceedings{STOC24p1458, author = {Omar Alrabiah and Venkatesan Guruswami and Ray Li}, title = {Randomly Punctured Reed–Solomon Codes Achieve List-Decoding Capacity over Linear-Sized Fields}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1458--1469}, doi = {10.1145/3618260.3649634}, year = {2024}, } Publisher's Version STOC '24: "Parameterized Inapproximability ..." Parameterized Inapproximability Hypothesis under Exponential Time Hypothesis Venkatesan Guruswami , Bingkai Lin , Xuandi Ren , Yican Sun , and Kewen Wu (University of California at Berkeley, USA; Nanjing University, China; Peking University, China) The Parameterized Inapproximability Hypothesis (PIH) asserts that no fixed-parameter tractable (FPT) algorithm can distinguish a satisfiable CSP instance, parameterized by the number of variables, from one where every assignment fails to satisfy an ε fraction of constraints for some absolute constant ε > 0. PIH plays the role of the PCP theorem in parameterized complexity. However, PIH has only been established under Gap-ETH, a very strong assumption with an inherent gap. In this work, we prove PIH under the Exponential Time Hypothesis (ETH). This is the first proof of PIH from a gap-free assumption. Our proof is self-contained and elementary. We identify an ETH-hard CSP whose variables take vector values, and whose constraints are either linear or of a special parallel structure. Both kinds of constraints can be checked with constant soundness via a “parallel PCP of proximity” based on the Walsh-Hadamard code.
@InProceedings{STOC24p24, author = {Venkatesan Guruswami and Bingkai Lin and Xuandi Ren and Yican Sun and Kewen Wu}, title = {Parameterized Inapproximability Hypothesis under Exponential Time Hypothesis}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {24--35}, doi = {10.1145/3618260.3649771}, year = {2024}, } Publisher's Version

Haeupler, Bernhard 
STOC '24: "Polylog-Competitive Deterministic ..."
Polylog-Competitive Deterministic Local Routing and Scheduling
Bernhard Haeupler , Shyamal Patel , Antti Roeyskoe , Cliff Stein , and Goran Zuzic (ETH Zurich, Switzerland; Carnegie Mellon University, USA; Columbia University, USA; Google Research, Switzerland) This paper addresses point-to-point packet routing in undirected networks, which is the most important communication primitive in most networks. The main result proves the existence of routing tables that deterministically guarantee a polylog-competitive completion time: in any undirected network, it is possible to give each node simple stateless deterministic local forwarding rules, such that any adversarially chosen set of packets is delivered as fast as possible, up to polylog factors. All previous routing strategies crucially required randomization for both route selection and packet scheduling. The core technical contribution of this paper is a new local packet scheduling result of independent interest. This scheduling strategy integrates well with recent sparse semi-oblivious path selection strategies. Such strategies deterministically select not one but several candidate paths for each packet and require a global coordinator that knows all packets to adaptively select a single good path from those candidates for each packet. Of course, global knowledge of all packets is exactly what local routing tables cannot have. Another challenge is that, even if a single path is selected for each packet, no strategy for scheduling packets along low-congestion paths that is both local and deterministic is known. Our novel scheduling strategy utilizes the fact that every semi-oblivious routing strategy uses only a small (polynomial) subset of candidate routes. It overcomes the issue of global coordination by furthermore being provably robust to adversarial noise. This avoids the issue of having to choose a single path per packet by treating congestion caused by ineffective candidate paths as noise.
Beyond more efficient routing tables, our results can be seen as making progress on fundamental questions regarding the importance and power of randomization in network communications and distributed computing. For example, our results imply the first deterministic universally-optimal algorithms in the distributed supported-CONGEST model for many important global distributed tasks, including computing minimum spanning trees, approximate shortest paths, and part-wise aggregates. @InProceedings{STOC24p812, author = {Bernhard Haeupler and Shyamal Patel and Antti Roeyskoe and Cliff Stein and Goran Zuzic}, title = {Polylog-Competitive Deterministic Local Routing and Scheduling}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {812--822}, doi = {10.1145/3618260.3649678}, year = {2024}, } Publisher's Version STOC '24: "Low-Step Multicommodity Flow ..." Low-Step Multicommodity Flow Emulators Bernhard Haeupler , D. Ellis Hershkowitz , Jason Li , Antti Roeyskoe , and Thatchaphol Saranurak (ETH Zurich, Switzerland; Carnegie Mellon University, USA; Brown University, USA; University of Michigan, USA) We introduce the concept of low-step multicommodity flow emulators for any undirected, capacitated graph. At a high level, these emulators contain approximate multicommodity flows whose paths contain a small number of edges, shattering the infamous flow decomposition barrier for multicommodity flow. We prove the existence of low-step multicommodity flow emulators and develop efficient algorithms to compute them. We then apply them to solve constant-approximate k-commodity flow in O((m+k)^{1+є}) time. To bypass the O(mk) flow decomposition barrier, we represent our output multicommodity flow implicitly; prior to our work, even the existence of implicit constant-approximate multicommodity flows of size o(mk) was unknown. Our results generalize to the minimum cost setting, where each edge has an associated cost and the multicommodity flow must satisfy a cost budget.
Our algorithms are also parallel. @InProceedings{STOC24p71, author = {Bernhard Haeupler and D. Ellis Hershkowitz and Jason Li and Antti Roeyskoe and Thatchaphol Saranurak}, title = {Low-Step Multicommodity Flow Emulators}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {71--82}, doi = {10.1145/3618260.3649689}, year = {2024}, } Publisher's Version

Hajiaghayi, MohammadTaghi 
STOC '24: "Prize-Collecting Steiner Tree: ..."
Prize-Collecting Steiner Tree: A 1.79 Approximation
Ali Ahmadi , Iman Gholami , MohammadTaghi Hajiaghayi , Peyman Jabbarzade , and Mohammad Mahdavi (University of Maryland, USA) Prize-Collecting Steiner Tree (PCST) is a generalization of the Steiner Tree problem, a fundamental problem in computer science. In the classic Steiner Tree problem, we aim to connect a set of vertices known as terminals using the minimum-weight tree in a given weighted graph. In this generalized version, each vertex has a penalty, and there is flexibility to decide whether to connect each vertex or pay its associated penalty, making the problem more realistic and practical. Both the Steiner Tree problem and its Prize-Collecting version had longstanding 2-approximation algorithms, matching the integrality gap of the natural LP formulations for both. This barrier for both problems has been surpassed, with algorithms achieving approximation factors below 2. While research on the Steiner Tree problem has led to a series of reductions in the approximation ratio below 2, culminating in a ln(4)+є approximation by Byrka, Grandoni, Rothvoß, and Sanità [STOC’10], the Prize-Collecting version has not seen improvements in the past 15 years since the work of Archer, Bateni, Hajiaghayi, and Karloff [FOCS’09, SIAM J. Comput.’11], which reduced the approximation factor for this problem from 2 to 1.9672. Interestingly, even the Prize-Collecting TSP approximation, which was first improved below 2 in the same paper, has seen several advancements since then (see, e.g., Blauth and Nägele [STOC’23]). In this paper, we reduce the approximation factor for the PCST problem substantially to 1.7994 via a novel iterative approach. @InProceedings{STOC24p1641, author = {Ali Ahmadi and Iman Gholami and MohammadTaghi Hajiaghayi and Peyman Jabbarzade and Mohammad Mahdavi}, title = {Prize-Collecting Steiner Tree: A 1.79 Approximation}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1641--1652}, doi = {10.1145/3618260.3649789}, year = {2024}, } Publisher's Version

Hakoniemi, Tuomas 
STOC '24: "Functional Lower Bounds in ..."
Functional Lower Bounds in Algebraic Proofs: Symmetry, Lifting, and Barriers
Tuomas Hakoniemi , Nutan Limaye , and Iddo Tzameret (University of Helsinki, Finland; IT University of Copenhagen, Denmark; Imperial College London, United Kingdom) Strong algebraic proof systems such as IPS (Ideal Proof System; Grochow-Pitassi [J. ACM, 65(6):37:1–55, 2018]) offer a general model for deriving polynomials in an ideal and refuting unsatisfiable propositional formulas, subsuming most standard propositional proof systems. A major approach for lower bounding the size of IPS refutations is the Functional Lower Bound Method (Forbes, Shpilka, Tzameret and Wigderson [Theory Comput., 17: 1–88, 2021]), which reduces the hardness of refuting a polynomial equation f(x)=0 with no Boolean solutions to the hardness of computing the function 1/f(x) over the Boolean cube with an algebraic circuit. Using symmetry we provide a general way to obtain many new hard instances against fragments of IPS via the functional lower bound method. This includes hardness over finite fields and hard instances different from Subset Sum variants, both of which were unknown before, and stronger constant-depth lower bounds. Conversely, we expose the limitation of this method by showing it cannot lead to proof complexity lower bounds for any hard Boolean instance (e.g., CNFs) for any sufficiently strong proof systems. Specifically, we show the following: Nullstellensatz degree lower bounds using symmetry: Extending [Forbes et al. Theory Comput., 17: 1–88, 2021], we show that every unsatisfiable symmetric polynomial with n variables requires degree > n refutations (over sufficiently large characteristic). Using symmetry again, by characterising the n/2-homogeneous slice appearing in refutations, we show that unsatisfiable invariant polynomials of degree n/2 require degree ≥ n refutations.
Lifting to size lower bounds: Lifting our Nullstellensatz degree bounds to IPS size lower bounds, we obtain exponential lower bounds for any polylogarithmic degree symmetric instance against IPS refutations written as oblivious read-once algebraic branching programs (roABP-IPS). For invariant polynomials, we show lower bounds against roABP-IPS and refutations written as multilinear formulas in the placeholder IPS regime (studied by Andrews and Forbes [54th Ann. Symp. Theory Comput., STOC 2022]), where the hard instances do not necessarily have small roABPs themselves, including over positive characteristic fields. This provides the first IPS-fragment lower bounds over finite fields. By an adaptation of the work of Amireddy, Garg, Kayal, Saha and Thankey [50th Intl. Colloq. Aut. Lang. Prog., ICALP 2023], we strengthen the constant-depth IPS lower bounds obtained recently in Govindasamy, Hakoniemi and Tzameret [63rd IEEE Ann. Symp. Found. Comput. Sci., FOCS 2022]. Barriers for Boolean instances: While lower bounds against strong propositional proof systems were the original motivation for studying algebraic proof systems in the 1990s [Beame et al. Proc. London Math. Soc. (3) 73, 1 (1996), 1–26; Buss et al. Computational Complexity 6, 3 (1996), 256–298], we show that the functional lower bound method alone cannot establish any size lower bound for Boolean instances for any sufficiently strong proof systems, and in particular, cannot lead to lower bounds against AC^{0}[p]-Frege and TC^{0}-Frege. @InProceedings{STOC24p1396, author = {Tuomas Hakoniemi and Nutan Limaye and Iddo Tzameret}, title = {Functional Lower Bounds in Algebraic Proofs: Symmetry, Lifting, and Barriers}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1396--1404}, doi = {10.1145/3618260.3649616}, year = {2024}, } Publisher's Version

Hambardzumyan, Lianna 
STOC '24: "No Complete Problem for Constant-Cost ..."
No Complete Problem for Constant-Cost Randomized Communication
Yuting Fang , Lianna Hambardzumyan , Nathaniel Harms , and Pooya Hatami (Ohio State University, USA; Hebrew University of Jerusalem, Israel; EPFL, Lausanne, Switzerland) We prove that the class of communication problems with public-coin randomized constant-cost protocols, called BPP^{0}, does not contain a complete problem. In other words, there is no randomized constant-cost problem Q ∈ BPP^{0}, such that all other problems P ∈ BPP^{0} can be computed by a constant-cost deterministic protocol with access to an oracle for Q. We also show that the k-Hamming Distance problems form an infinite hierarchy within BPP^{0}. Previously, it was known only that Equality is not complete for BPP^{0}. We introduce a new technique, using Ramsey theory, that can prove lower bounds against arbitrary oracles in BPP^{0}, and more generally, we show that k-Hamming Distance matrices cannot be expressed as a Boolean combination of any constant number of matrices which forbid large Greater-Than subproblems. @InProceedings{STOC24p1287, author = {Yuting Fang and Lianna Hambardzumyan and Nathaniel Harms and Pooya Hatami}, title = {No Complete Problem for Constant-Cost Randomized Communication}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1287--1298}, doi = {10.1145/3618260.3649716}, year = {2024}, } Publisher's Version
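For context, the canonical member of BPP^{0} is Equality (the problem previously known not to be complete): with shared public coins, Alice and Bob can test x = y with communication independent of the input length n. A sketch of the textbook random-parity protocol (an illustration, not from the paper):

```python
import random

def equality_protocol(x, y, n, k=20, rng=random):
    """Public-coin randomized protocol for Equality with k bits of communication.

    Alice holds the n-bit string x, Bob holds y, and they share random n-bit
    strings r (the public coins). In each of k rounds Alice sends the single
    bit <x, r> mod 2; Bob compares it with <y, r> mod 2. Equal inputs always
    agree; unequal inputs disagree in each round with probability 1/2, so the
    error probability is at most 2^{-k} -- a cost independent of n.
    """
    for _ in range(k):
        r = rng.randrange(2 ** n)              # shared public coin
        alice_bit = bin(x & r).count("1") & 1  # Alice's one-bit message
        bob_bit = bin(y & r).count("1") & 1
        if alice_bit != bob_bit:
            return False  # caught a difference: output "not equal"
    return True
```

The constant-cost class BPP^{0} consists of problems admitting such O(1)-bit protocols; the paper's result is that no single such problem can serve as an oracle for all of them.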

Hansen, Kristoffer Arnsfelt 
STOC '24: "PPAD-Membership for Problems ..."
PPAD-Membership for Problems with Exact Rational Solutions: A General Approach via Convex Optimization
Aris Filos-Ratsikas , Kristoffer Arnsfelt Hansen , Kasper Høgh , and Alexandros Hollender (University of Edinburgh, United Kingdom; Aarhus University, Denmark; University of Oxford, United Kingdom) We introduce a general technique for proving membership of search problems with exact rational solutions in PPAD, one of the most well-known classes containing total search problems with polynomial-time verifiable solutions. In particular, we construct a "pseudogate", coined the linear-OPT-gate, which can be used as a "plug-and-play" component in a piecewise-linear (PL) arithmetic circuit, as an integral component of the "Linear-FIXP" equivalent definition of the class. The linear-OPT-gate can solve several convex optimization programs, including quadratic programs, which often appear organically in the simplest existence proofs for these problems. This effectively transforms existence proofs into PPAD-membership proofs, and consequently establishes the existence of solutions described by rational numbers. Using the linear-OPT-gate, we are able to significantly simplify and generalize almost all known PPAD-membership proofs for finding exact solutions in the application domains of game theory, competitive markets, auto-bidding auctions, and fair division, as well as to obtain new PPAD-membership results for problems in these domains. @InProceedings{STOC24p1204, author = {Aris Filos-Ratsikas and Kristoffer Arnsfelt Hansen and Kasper Høgh and Alexandros Hollender}, title = {PPAD-Membership for Problems with Exact Rational Solutions: A General Approach via Convex Optimization}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1204--1215}, doi = {10.1145/3618260.3649645}, year = {2024}, } Publisher's Version

Harms, Nathaniel 
STOC '24: "No Complete Problem for Constant-Cost ..."
No Complete Problem for Constant-Cost Randomized Communication
Yuting Fang, Lianna Hambardzumyan, Nathaniel Harms, and Pooya Hatami (Ohio State University, USA; Hebrew University of Jerusalem, Israel; EPFL, Lausanne, Switzerland) We prove that the class of communication problems with public-coin randomized constant-cost protocols, called BPP^{0}, does not contain a complete problem. In other words, there is no randomized constant-cost problem Q ∈ BPP^{0}, such that all other problems P ∈ BPP^{0} can be computed by a constant-cost deterministic protocol with access to an oracle for Q. We also show that the k-Hamming Distance problems form an infinite hierarchy within BPP^{0}. Previously, it was known only that Equality is not complete for BPP^{0}. We introduce a new technique, using Ramsey theory, that can prove lower bounds against arbitrary oracles in BPP^{0}, and more generally, we show that k-Hamming Distance matrices cannot be expressed as a Boolean combination of any constant number of matrices which forbid large Greater-Than subproblems. @InProceedings{STOC24p1287, author = {Yuting Fang and Lianna Hambardzumyan and Nathaniel Harms and Pooya Hatami}, title = {No Complete Problem for Constant-Cost Randomized Communication}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1287--1298}, doi = {10.1145/3618260.3649716}, year = {2024}, } Publisher's Version

Harvey, Nicholas 
STOC '24: "Explicit Orthogonal Arrays ..."
Explicit Orthogonal Arrays and Universal Hashing with Arbitrary Parameters
Nicholas Harvey and Arvin Sahami (University of British Columbia, Canada) Orthogonal arrays are a type of combinatorial design that emerged in the 1940s in the design of statistical experiments. In 1947, Rao proved a lower bound on the size of any orthogonal array, and raised the problem of constructing arrays of minimum size. Kuperberg, Lovett and Peled (2017) gave a non-constructive existence proof of orthogonal arrays whose size is near-optimal (i.e., within a polynomial of Rao’s lower bound), leaving open the question of an algorithmic construction. We give the first explicit, deterministic, algorithmic construction of orthogonal arrays achieving near-optimal size for all parameters. Our construction uses algebraic geometry codes. In pseudorandomness, the notions of t-independent generators or t-independent hash functions are equivalent to orthogonal arrays. Classical constructions of t-independent hash functions are known when the size of the codomain is a prime power, but very few constructions are known for an arbitrary codomain. Our construction yields algorithmically efficient t-independent hash functions for arbitrary domain and codomain. @InProceedings{STOC24p1259, author = {Nicholas Harvey and Arvin Sahami}, title = {Explicit Orthogonal Arrays and Universal Hashing with Arbitrary Parameters}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1259--1267}, doi = {10.1145/3618260.3649642}, year = {2024}, } Publisher's Version
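As background for the hashing connection in the Harvey–Sahami entry above: the classical prime-power construction of a t-independent family, which the entry contrasts with its arbitrary-codomain result, is a uniformly random polynomial of degree less than t over a prime field, evaluated at the key. The sketch below is illustrative only (our names and parameters), not the paper's algebraic-geometry construction.

```python
import random

# Classical t-independent hash family over GF(p): sample a uniformly random
# polynomial of degree < t and evaluate it at the key. Any t distinct keys
# receive uniform, mutually independent hash values, because a degree-<t
# polynomial is determined by its values at t points (interpolation).

P = 2_147_483_647  # the Mersenne prime 2^31 - 1; codomain is {0, ..., P-1}

def sample_t_independent_hash(t, rng):
    coeffs = [rng.randrange(P) for _ in range(t)]
    def h(x):
        acc = 0
        for c in reversed(coeffs):  # Horner's rule, reduced mod P
            acc = (acc * x + c) % P
        return acc
    return h
```

Note that the codomain size here must be the prime P; supporting an arbitrary codomain without losing independence is precisely what the entry's orthogonal-array construction provides.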

Hatami, Pooya 
STOC '24: "No Complete Problem for Constant-Cost ..."
No Complete Problem for Constant-Cost Randomized Communication
Yuting Fang, Lianna Hambardzumyan, Nathaniel Harms, and Pooya Hatami (Ohio State University, USA; Hebrew University of Jerusalem, Israel; EPFL, Lausanne, Switzerland) We prove that the class of communication problems with public-coin randomized constant-cost protocols, called BPP^{0}, does not contain a complete problem. In other words, there is no randomized constant-cost problem Q ∈ BPP^{0}, such that all other problems P ∈ BPP^{0} can be computed by a constant-cost deterministic protocol with access to an oracle for Q. We also show that the k-Hamming Distance problems form an infinite hierarchy within BPP^{0}. Previously, it was known only that Equality is not complete for BPP^{0}. We introduce a new technique, using Ramsey theory, that can prove lower bounds against arbitrary oracles in BPP^{0}, and more generally, we show that k-Hamming Distance matrices cannot be expressed as a Boolean combination of any constant number of matrices which forbid large Greater-Than subproblems. @InProceedings{STOC24p1287, author = {Yuting Fang and Lianna Hambardzumyan and Nathaniel Harms and Pooya Hatami}, title = {No Complete Problem for Constant-Cost Randomized Communication}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1287--1298}, doi = {10.1145/3618260.3649716}, year = {2024}, } Publisher's Version

Hershkowitz, D. Ellis 
STOC '24: "Ghost Value Augmentation for ..."
Ghost Value Augmentation for k-Edge-Connectivity
D. Ellis Hershkowitz, Nathan Klein, and Rico Zenklusen (Brown University, USA; Institute for Advanced Study, Princeton, USA; ETH Zurich, Switzerland) We give a poly-time algorithm for the k-edge-connected spanning subgraph (k-ECSS) problem that returns a solution of cost no greater than the cheapest (k+10)-ECSS on the same graph. Our approach enhances the iterative relaxation framework with a new ingredient, which we call ghost values, that allows for high sparsity in intermediate problems. Our guarantees improve upon the best-known approximation factor of 2 for k-ECSS whenever the optimal value of (k+10)-ECSS is close to that of k-ECSS. This is a property that holds for the closely related problem k-edge-connected spanning multi-subgraph (k-ECSM), which is identical to k-ECSS except edges can be selected multiple times at the same cost. As a consequence, we obtain a (1+O(1/k))-approximation algorithm for k-ECSM, which resolves a conjecture of Pritchard and improves upon a recent (1+O(1/√k))-approximation algorithm of Karlin, Klein, Oveis Gharan, and Zhang. Moreover, we present a matching lower bound for k-ECSM, showing that our approximation ratio is tight up to the constant factor in O(1/k), unless P=NP. @InProceedings{STOC24p1853, author = {D. Ellis Hershkowitz and Nathan Klein and Rico Zenklusen}, title = {Ghost Value Augmentation for k-Edge-Connectivity}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1853--1864}, doi = {10.1145/3618260.3649715}, year = {2024}, } Publisher's Version STOC '24: "Low-Step Multicommodity Flow ..." Low-Step Multicommodity Flow Emulators Bernhard Haeupler, D. Ellis Hershkowitz, Jason Li, Antti Roeyskoe, and Thatchaphol Saranurak (ETH Zurich, Switzerland; Carnegie Mellon University, USA; Brown University, USA; University of Michigan, USA) We introduce the concept of low-step multicommodity flow emulators for any undirected, capacitated graph.
At a high level, these emulators contain approximate multicommodity flows whose paths contain a small number of edges, shattering the infamous flow decomposition barrier for multicommodity flow. We prove the existence of low-step multicommodity flow emulators and develop efficient algorithms to compute them. We then apply them to solve constant-approximate k-commodity flow in O((m+k)^{1+ε}) time. To bypass the O(mk) flow decomposition barrier, we represent our output multicommodity flow implicitly; prior to our work, even the existence of implicit constant-approximate multicommodity flows of size o(mk) was unknown. Our results generalize to the minimum cost setting, where each edge has an associated cost and the multicommodity flow must satisfy a cost budget. Our algorithms are also parallel. @InProceedings{STOC24p71, author = {Bernhard Haeupler and D. Ellis Hershkowitz and Jason Li and Antti Roeyskoe and Thatchaphol Saranurak}, title = {Low-Step Multicommodity Flow Emulators}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {71--82}, doi = {10.1145/3618260.3649689}, year = {2024}, } Publisher's Version

Hirahara, Shuichi 
STOC '24: "One-Way Functions and Zero ..."
One-Way Functions and Zero Knowledge
Shuichi Hirahara and Mikito Nanashima (National Institute of Informatics, Tokyo, Japan; Tokyo Institute of Technology, Tokyo, Japan) The fundamental theorem of Goldreich, Micali, and Wigderson (J. ACM 1991) shows that the existence of a one-way function is sufficient for constructing computational zero knowledge (CZK) proofs for all languages in NP. We prove its converse, thereby establishing characterizations of one-way functions based on the worst-case complexities of zero knowledge. Specifically, we prove that the following are equivalent: (1) a one-way function exists; (2) NP ⊆ CZK and NP is hard in the worst case; (3) CZK is hard in the worst case and the problem GapMCSP of approximating circuit complexity is in CZK. The characterization above also holds for statistical and computational zero-knowledge argument systems. We further extend this characterization to a proof system with knowledge complexity O(log n). In particular, we show that the existence of a one-way function is characterized by the worst-case hardness of CZK if GapMCSP has a proof system with knowledge complexity O(log n). We complement this result by showing that NP admits an interactive proof system with knowledge complexity ω(log n) under the existence of an exponentially hard auxiliary-input one-way function (which is a weaker primitive than an exponentially hard one-way function). We also characterize the existence of a robustly-often nonuniformly computable one-way function by the nondeterministic hardness of CZK under the weak assumption that PSPACE ⊈ AM. We present two applications of our results. First, we simplify the proof of the recent characterization of a one-way function by NP-hardness of a meta-computational problem and the worst-case hardness of NP given by Hirahara (STOC’23). Second, we show that if NP has a laconic zero-knowledge argument system, then there exists a public-key encryption scheme whose security can be based on the worst-case hardness of NP.
This improves previous results, which assume the existence of indistinguishability obfuscation. @InProceedings{STOC24p1731, author = {Shuichi Hirahara and Mikito Nanashima}, title = {One-Way Functions and Zero Knowledge}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1731--1738}, doi = {10.1145/3618260.3649701}, year = {2024}, } Publisher's Version STOC '24: "Probabilistically Checkable ..." Probabilistically Checkable Reconfiguration Proofs and Inapproximability of Reconfiguration Problems Shuichi Hirahara and Naoto Ohsaka (National Institute of Informatics, Tokyo, Japan; CyberAgent, Japan) Motivated by the inapproximability of reconfiguration problems, we present a new PCP-type characterization of PSPACE, which we call a probabilistically checkable reconfiguration proof (PCRP): any PSPACE computation can be encoded into an exponentially long sequence of polynomially long proofs such that every adjacent pair of the proofs differs in at most one bit, and every proof can be probabilistically checked by reading a constant number of bits. Using the new characterization, we prove PSPACE-completeness of approximate versions of many reconfiguration problems, such as the Maxmin 3-SAT Reconfiguration problem. This resolves the open problem posed by Ito, Demaine, Harvey, Papadimitriou, Sideri, Uehara, and Uno (ISAAC 2008; Theor. Comput. Sci. 2011) as well as the Reconfiguration Inapproximability Hypothesis by Ohsaka (STACS 2023) affirmatively. We also present PSPACE-completeness of approximating the Maxmin Clique Reconfiguration problem to within a factor of n^{ε} for some constant ε > 0. @InProceedings{STOC24p1435, author = {Shuichi Hirahara and Naoto Ohsaka}, title = {Probabilistically Checkable Reconfiguration Proofs and Inapproximability of Reconfiguration Problems}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1435--1445}, doi = {10.1145/3618260.3649667}, year = {2024}, } Publisher's Version STOC '24: "Planted Clique Conjectures ..."
Planted Clique Conjectures Are Equivalent Shuichi Hirahara and Nobutaka Shimizu (National Institute of Informatics, Tokyo, Japan; Tokyo Institute of Technology, Tokyo, Japan) The planted clique conjecture states that no polynomial-time algorithm can find a hidden clique of size k ≪ √n in an n-vertex Erdős–Rényi random graph with a k-clique planted. In this paper, we prove the equivalence among many (in fact, most) variants of planted clique conjectures, such as search versions with a success probability exponentially close to 1 and with a non-negligible success probability, a worst-case version (the k-clique problem on incompressible graphs), decision versions with small and large success probabilities, and decision versions with adversarially chosen k and binomially distributed k. In particular, we establish the equivalence between the planted clique problem introduced by Jerrum and Kučera and its decision version suggested by Saks in the 1990s. Moreover, the equivalence among decision versions identifies the optimality of a simple edge counting algorithm: by counting the number of edges, one can efficiently distinguish an n-vertex random graph from a random graph with a k-clique planted with probability Θ(k^{2}/n) for any k ≤ √n. We show that for any k, no polynomial-time algorithm can distinguish these two random graphs with probability ≫ k^{2}/n if and only if the planted clique conjecture holds. The equivalence among search versions identifies the first one-way function that admits a polynomial-time security-preserving self-reduction from exponentially weak to strong one-way functions. These results reveal a detection-recovery gap in success probabilities for the planted clique problem. We also present another equivalence between the existence of a refutation algorithm for the planted clique problem and an average-case polynomial-time algorithm for the k-clique problem with respect to the Erdős–Rényi random graph.
@InProceedings{STOC24p358, author = {Shuichi Hirahara and Nobutaka Shimizu}, title = {Planted Clique Conjectures Are Equivalent}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {358--366}, doi = {10.1145/3618260.3649751}, year = {2024}, } Publisher's Version STOC '24: "Beating Brute Force for Compression ..." Beating Brute Force for Compression Problems Shuichi Hirahara, Rahul Ilango, and R. Ryan Williams (National Institute of Informatics, Tokyo, Japan; Massachusetts Institute of Technology, USA) A compression problem is defined with respect to an efficient encoding function f; given a string x, our task is to find the shortest y such that f(y) = x. The obvious brute-force algorithm for solving this compression task on n-bit strings runs in time O(2^{ℓ} · t(n)), where ℓ is the length of the shortest description y and t(n) is the time complexity of f when it prints n-bit output. We prove that every compression problem has a Boolean circuit family which finds short descriptions more efficiently than brute force. In particular, our circuits have size 2^{4ℓ/5} · poly(t(n)), which is significantly more efficient for all ℓ ≫ log(t(n)). Our construction builds on Fiat–Naor’s data structure for function inversion [SICOMP 1999]: we show how to carefully modify their data structure so that it can be nontrivially implemented using Boolean circuits, and we show how to utilize hashing so that the circuit size is only exponential in the description length. As a consequence, the Minimum Circuit Size Problem for generic fan-in two circuits of size s(n) on truth tables of size 2^{n} can be solved by circuits of size 2^{4/5 · w + o(w)} · poly(2^{n}), where w = s(n) log_{2}(s(n) + n). This improves over the brute-force approach of trying all possible size-s(n) circuits for all s(n) ≥ n. Similarly, the task of computing a short description of a string x when its K^{t} complexity is at most ℓ has circuits of size 2^{4ℓ/5} · poly(t).
We also give nontrivial circuits for computing K^{t} complexity on average, and for solving NP relations with “compressible” instance-witness pairs. @InProceedings{STOC24p659, author = {Shuichi Hirahara and Rahul Ilango and R. Ryan Williams}, title = {Beating Brute Force for Compression Problems}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {659--670}, doi = {10.1145/3618260.3649778}, year = {2024}, } Publisher's Version STOC '24: "Symmetric Exponential Time ..." Symmetric Exponential Time Requires Near-Maximum Circuit Size Lijie Chen, Shuichi Hirahara, and Hanlin Ren (University of California at Berkeley, USA; National Institute of Informatics, Tokyo, Japan; University of Oxford, United Kingdom) We show that there is a language in S_{2}E/_{1} (symmetric exponential time with one bit of advice) with circuit complexity at least 2^{n}/n. In particular, the above also implies the same near-maximum circuit lower bounds for the classes Σ_{2}E, (Σ_{2}E∩Π_{2}E)/_{1}, and ZPE^{NP}/_{1}. Previously, only “half-exponential” circuit lower bounds for these complexity classes were known, and the smallest complexity class known to require exponential circuit complexity was Δ_{3}E = E^{Σ_{2}P} (Miltersen, Vinodchandran, and Watanabe, COCOON’99). Our circuit lower bounds are corollaries of an unconditional zero-error pseudodeterministic algorithm with an NP oracle and one bit of advice (FZPP^{NP}/_{1}) that solves the range avoidance problem infinitely often. This algorithm also implies unconditional infinitely-often pseudodeterministic FZPP^{NP}/_{1} constructions for Ramsey graphs, rigid matrices, two-source extractors, linear codes, and K^{poly}-random strings with nearly optimal parameters. Our proofs relativize. The two main technical ingredients are (1) Korten’s P^{NP} reduction from the range avoidance problem to constructing hard truth tables (FOCS’21), which was in turn inspired by a result of Jeřábek on provability in Bounded Arithmetic (Ann. Pure Appl. Log.
2004); and (2) the recent iterative win-win paradigm of Chen, Lu, Oliveira, Ren, and Santhanam (FOCS’23). @InProceedings{STOC24p1990, author = {Lijie Chen and Shuichi Hirahara and Hanlin Ren}, title = {Symmetric Exponential Time Requires Near-Maximum Circuit Size}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1990--1999}, doi = {10.1145/3618260.3649624}, year = {2024}, } Publisher's Version
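The edge-counting distinguisher described in the planted-clique entry above is simple enough to state in code. The following is an illustrative sketch under a G(n, 1/2) null model; the function names are ours, and the demo parameters are deliberately outside the hard regime k ≪ √n so that the statistical gap is large.

```python
import random

def count_edges_gnp(n, rng):
    # Edge count of G(n, 1/2): each of the C(n,2) vertex pairs is a fair coin.
    pairs = n * (n - 1) // 2
    return sum(1 for _ in range(pairs) if rng.random() < 0.5)

def count_edges_planted(n, k, rng):
    # Plant a k-clique: its C(k,2) pairs are forced edges, the rest are coins.
    forced = k * (k - 1) // 2
    rest = n * (n - 1) // 2 - forced
    return forced + sum(1 for _ in range(rest) if rng.random() < 0.5)

def looks_planted(n, k, num_edges):
    # The planted model has C(k,2)/2 extra edges in expectation; threshold
    # halfway between the two means.
    null_mean = n * (n - 1) // 4
    return num_edges > null_mean + k * (k - 1) // 8
```

With n = 400 and k = 100, the expected gap C(k,2)/2 ≈ 2475 dwarfs the roughly 141 standard deviation of the edge count, so the test succeeds with overwhelming probability. The entry shows that for k ≤ √n the achievable distinguishing probability degrades to Θ(k²/n), and that doing asymptotically better would refute the planted clique conjecture.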

Høgh, Kasper 
STOC '24: "PPAD-Membership for Problems ..."
PPAD-Membership for Problems with Exact Rational Solutions: A General Approach via Convex Optimization
Aris Filos-Ratsikas, Kristoffer Arnsfelt Hansen, Kasper Høgh, and Alexandros Hollender (University of Edinburgh, United Kingdom; Aarhus University, Aarhus, Denmark; University of Oxford, United Kingdom) We introduce a general technique for proving membership of search problems with exact rational solutions in PPAD, one of the most well-known classes containing total search problems with polynomial-time verifiable solutions. In particular, we construct a "pseudo-gate", coined the linear-OPT-gate, which can be used as a "plug-and-play" component in a piecewise-linear (PL) arithmetic circuit, as an integral component of the "Linear-FIXP" equivalent definition of the class. The linear-OPT-gate can solve several convex optimization programs, including quadratic programs, which often appear organically in the simplest existence proofs for these problems. This effectively transforms existence proofs to PPAD-membership proofs, and consequently establishes the existence of solutions described by rational numbers. Using the linear-OPT-gate, we are able to significantly simplify and generalize almost all known PPAD-membership proofs for finding exact solutions in the application domains of game theory, competitive markets, auto-bidding auctions, and fair division, as well as to obtain new PPAD-membership results for problems in these domains. @InProceedings{STOC24p1204, author = {Aris Filos-Ratsikas and Kristoffer Arnsfelt Hansen and Kasper Høgh and Alexandros Hollender}, title = {PPAD-Membership for Problems with Exact Rational Solutions: A General Approach via Convex Optimization}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1204--1215}, doi = {10.1145/3618260.3649645}, year = {2024}, } Publisher's Version

Hollender, Alexandros 
STOC '24: "The Complexity of Computing ..."
The Complexity of Computing KKT Solutions of Quadratic Programs
John Fearnley, Paul W. Goldberg, Alexandros Hollender, and Rahul Savani (University of Liverpool, United Kingdom; University of Oxford, United Kingdom; Alan Turing Institute, United Kingdom) It is well known that solving a (nonconvex) quadratic program is NP-hard. We show that the problem remains hard even if we are only looking for a Karush-Kuhn-Tucker (KKT) point, instead of a global optimum. Namely, we prove that computing a KKT point of a quadratic polynomial over the domain [0,1]^{n} is complete for the class CLS = PPAD ∩ PLS. @InProceedings{STOC24p892, author = {John Fearnley and Paul W. Goldberg and Alexandros Hollender and Rahul Savani}, title = {The Complexity of Computing KKT Solutions of Quadratic Programs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {892--903}, doi = {10.1145/3618260.3649647}, year = {2024}, } Publisher's Version STOC '24: "PPAD-Membership for Problems ..." PPAD-Membership for Problems with Exact Rational Solutions: A General Approach via Convex Optimization Aris Filos-Ratsikas, Kristoffer Arnsfelt Hansen, Kasper Høgh, and Alexandros Hollender (University of Edinburgh, United Kingdom; Aarhus University, Aarhus, Denmark; University of Oxford, United Kingdom) We introduce a general technique for proving membership of search problems with exact rational solutions in PPAD, one of the most well-known classes containing total search problems with polynomial-time verifiable solutions. In particular, we construct a "pseudo-gate", coined the linear-OPT-gate, which can be used as a "plug-and-play" component in a piecewise-linear (PL) arithmetic circuit, as an integral component of the "Linear-FIXP" equivalent definition of the class. The linear-OPT-gate can solve several convex optimization programs, including quadratic programs, which often appear organically in the simplest existence proofs for these problems.
This effectively transforms existence proofs to PPAD-membership proofs, and consequently establishes the existence of solutions described by rational numbers. Using the linear-OPT-gate, we are able to significantly simplify and generalize almost all known PPAD-membership proofs for finding exact solutions in the application domains of game theory, competitive markets, auto-bidding auctions, and fair division, as well as to obtain new PPAD-membership results for problems in these domains. @InProceedings{STOC24p1204, author = {Aris Filos-Ratsikas and Kristoffer Arnsfelt Hansen and Kasper Høgh and Alexandros Hollender}, title = {PPAD-Membership for Problems with Exact Rational Solutions: A General Approach via Convex Optimization}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1204--1215}, doi = {10.1145/3618260.3649645}, year = {2024}, } Publisher's Version
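For the box-constrained setting shared by the two Hollender entries above, the KKT conditions take an especially simple form. As a reference sketch in our own notation (not taken from either paper): for f differentiable on [0,1]^n, a point x is a KKT point exactly when each coordinate's gradient sign is consistent with the active bound:

```latex
% KKT conditions for minimizing a differentiable f over the box [0,1]^n:
% x \in [0,1]^n is a KKT point iff, for every coordinate i,
\nabla_i f(x) \ge 0 \quad \text{if } x_i = 0, \qquad
\nabla_i f(x) \le 0 \quad \text{if } x_i = 1, \qquad
\nabla_i f(x) = 0 \quad \text{if } 0 < x_i < 1.
```

For a quadratic f these conditions are checkable coordinate-by-coordinate in polynomial time, which is why the hardness in the Fearnley, Goldberg, Hollender, and Savani entry lies in finding such a point rather than verifying one.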

Holzman, Ron 
STOC '24: "Fair Division via Quantile ..."
Fair Division via Quantile Shares
Yakov Babichenko, Michal Feldman, Ron Holzman, and Vishnu V. Narayan (Technion, Israel; Tel Aviv University, Israel) We consider the problem of fair division, where a set of indivisible goods should be distributed fairly among a set of agents with combinatorial valuations. To capture fairness, we adopt the notion of shares, where each agent is entitled to a fair share, based on some fairness criterion, and an allocation is considered fair if the value of every agent (weakly) exceeds her fair share. A share-based notion is considered universally feasible if it admits a fair allocation for every profile of monotone valuations. A major question arises: is there a nontrivial share-based notion that is universally feasible? The most well-known share-based notions, namely the proportional share and the maximin share, are not universally feasible, nor are any constant approximations of them. We propose a novel share notion, where an agent assesses the fairness of a bundle by comparing it to her valuation in a random allocation. In this framework, a bundle is considered q-quantile fair, for q ∈ [0,1], if it is at least as good as a bundle obtained in a uniformly random allocation with probability at least q. Our main question is whether there exists a constant value of q for which the q-quantile share is universally feasible. Our main result establishes a strong connection between the feasibility of quantile shares and the classical Erdős Matching Conjecture. Specifically, we show that if a version of this conjecture is true, then the 1/2e-quantile share is universally feasible. Furthermore, we provide unconditional feasibility results for additive, unit-demand and matroid-rank valuations for constant values of q. Finally, we discuss the implications of our results for other share notions. @InProceedings{STOC24p1235, author = {Yakov Babichenko and Michal Feldman and Ron Holzman and Vishnu V.
Narayan}, title = {Fair Division via Quantile Shares}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1235--1246}, doi = {10.1145/3618260.3649728}, year = {2024}, } Publisher's Version
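To make the q-quantile definition in the entry above concrete, here is a small illustrative sketch (our own code, restricted to additive valuations and brute-force enumeration, not anything from the paper): it computes the exact largest q for which a given bundle is q-quantile fair, by enumerating all allocations that send each good to an independently and uniformly random agent.

```python
from itertools import product

def quantile_of_bundle(values, agent, bundle, num_agents):
    """Largest q such that `bundle` is q-quantile fair for `agent`: the
    probability, over a uniformly random allocation (each good assigned to a
    uniformly random agent), that the bundle is at least as good as what the
    agent would receive. Additive valuations; exponential-time enumeration."""
    target = sum(values[g] for g in bundle)
    goods = range(len(values))
    hits = total = 0
    for alloc in product(range(num_agents), repeat=len(values)):
        random_value = sum(values[g] for g in goods if alloc[g] == agent)
        hits += target >= random_value
        total += 1
    return hits / total
```

For two agents and two goods each worth 1 to agent 0, the single-good bundle {0} has quantile 3/4: it matches or beats the empty and one-good outcomes of a random allocation but not the full set. The entry's question is whether some constant q (conditionally 1/2e, via the Erdős Matching Conjecture) guarantees a fair allocation for all monotone valuations.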

Hsieh, Jun-Ting
STOC '24: "Explicit Two-Sided Unique-Neighbor ..."
Explicit Two-Sided Unique-Neighbor Expanders
Jun-Ting Hsieh, Theo McKenzie, Sidhanth Mohanty, and Pedro Paredes (Carnegie Mellon University, USA; Stanford University, USA; Massachusetts Institute of Technology, USA; Princeton University, USA) We study the problem of constructing explicit sparse graphs that exhibit strong vertex expansion. Our main result is the first two-sided construction of imbalanced unique-neighbor expanders, meaning bipartite graphs where small sets contained in both the left and right bipartitions exhibit unique-neighbor expansion, along with algebraic properties relevant to constructing quantum codes. Our constructions are obtained from instantiations of the tripartite line product of a large tripartite spectral expander and a sufficiently good constant-sized unique-neighbor expander, a new graph product we defined that generalizes the line product and the routed product of previous well-known works. To analyze the vertex expansion of graphs arising from the tripartite line product, we develop a sharp characterization of subgraphs that can arise in bipartite spectral expanders, generalizing previously known results, which may be of independent interest. By picking appropriate graphs to apply our product to, we give a strongly explicit construction of an infinite family of (d_{1},d_{2})-biregular graphs (G_{n})_{n≥1} (for large enough d_{1} and d_{2}) where all sets S with fewer than a small constant fraction of vertices have Ω(d_{1} · |S|) unique neighbors (assuming d_{1} ≤ d_{2}). Additionally, we can also guarantee that subsets of vertices of size up to exp(Ω(√(log |V(G_{n})|))) expand losslessly. @InProceedings{STOC24p788, author = {Jun-Ting Hsieh and Theo McKenzie and Sidhanth Mohanty and Pedro Paredes}, title = {Explicit Two-Sided Unique-Neighbor Expanders}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {788--799}, doi = {10.1145/3618260.3649705}, year = {2024}, } Publisher's Version

Hua, Yiding 
STOC '24: "Private Graphon Estimation ..."
Private Graphon Estimation via Sum-of-Squares
Hongjie Chen, Jingqiu Ding, Tommaso D'Orsi, Yiding Hua, Chih-Hung Liu, and David Steurer (ETH Zurich, Switzerland; Bocconi University, Italy; National Taiwan University, Taiwan) We develop the first pure node-differentially-private algorithms for learning stochastic block models and for graphon estimation with polynomial running time for any constant number of blocks. The statistical utility guarantees match those of the previous best information-theoretic (exponential-time) node-private mechanisms for these problems. The algorithm is based on an exponential mechanism for a score function defined in terms of a sum-of-squares relaxation whose level depends on the number of blocks. The key ingredients of our results are (1) a characterization of the distance between the block graphons in terms of a quadratic optimization over the polytope of doubly stochastic matrices, (2) a general sum-of-squares convergence result for polynomial optimization over arbitrary polytopes, and (3) a general approach to perform Lipschitz extensions of score functions as part of the sum-of-squares algorithmic paradigm. @InProceedings{STOC24p172, author = {Hongjie Chen and Jingqiu Ding and Tommaso D'Orsi and Yiding Hua and Chih-Hung Liu and David Steurer}, title = {Private Graphon Estimation via Sum-of-Squares}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {172--182}, doi = {10.1145/3618260.3649643}, year = {2024}, } Publisher's Version

Huang, Hsin-Yuan
STOC '24: "Local Minima in Quantum Systems ..."
Local Minima in Quantum Systems
Chi-Fang Chen, Hsin-Yuan Huang, John Preskill, and Leo Zhou (California Institute of Technology, USA; AWS Center for Quantum Computing, USA; Google Quantum AI, USA; Massachusetts Institute of Technology, USA) Finding ground states of quantum many-body systems is known to be hard for both classical and quantum computers. As a result, when Nature cools a quantum system in a low-temperature thermal bath, the ground state cannot always be found efficiently. Instead, Nature finds a local minimum of the energy. In this work, we study the problem of finding local minima in quantum systems under thermal perturbations. While local minima are much easier to find than ground states, we show that finding a local minimum is computationally hard for classical computers, even when the task is to output a single-qubit observable at any local minimum. In contrast, we prove that a quantum computer can always find a local minimum efficiently using a thermal gradient descent algorithm that mimics the cooling process in Nature. To establish the classical hardness of finding local minima, we consider a family of two-dimensional Hamiltonians such that any problem solvable by polynomial-time quantum algorithms can be reduced to finding local minima of these Hamiltonians. Therefore, cooling systems to local minima is universal for quantum computation, and, assuming quantum computation is more powerful than classical computation, finding local minima is classically hard and quantumly easy. @InProceedings{STOC24p1323, author = {Chi-Fang Chen and Hsin-Yuan Huang and John Preskill and Leo Zhou}, title = {Local Minima in Quantum Systems}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1323--1330}, doi = {10.1145/3618260.3649675}, year = {2024}, } Publisher's Version STOC '24: "Learning Shallow Quantum Circuits ..." Learning Shallow Quantum Circuits Hsin-Yuan Huang, Yunchao Liu, Michael Broughton, Isaac Kim, Anurag Anshu, Zeph Landau, and Jarrod R.
McClean (California Institute of Technology, USA; Google Quantum AI, USA; University of California at Berkeley, USA; University of California at Davis, USA; Harvard University, USA) Despite fundamental interests in learning quantum circuits, the existence of a computationally efficient algorithm for learning shallow quantum circuits remains an open question. Because shallow quantum circuits can generate distributions that are classically hard to sample from, existing learning algorithms do not apply. In this work, we present a polynomial-time classical algorithm for learning the description of any unknown n-qubit shallow quantum circuit U (with arbitrary unknown architecture) within a small diamond distance using single-qubit measurement data on the output states of U. We also provide a polynomial-time classical algorithm for learning the description of any unknown n-qubit state |ψ⟩ = U|0^{n}⟩ prepared by a shallow quantum circuit U (on a 2D lattice) within a small trace distance using single-qubit measurements on copies of |ψ⟩. Our approach uses a quantum circuit representation based on local inversions and a technique to combine these inversions. This circuit representation yields an optimization landscape that can be efficiently navigated and enables efficient learning of quantum circuits that are classically hard to simulate. @InProceedings{STOC24p1343, author = {Hsin-Yuan Huang and Yunchao Liu and Michael Broughton and Isaac Kim and Anurag Anshu and Zeph Landau and Jarrod R. McClean}, title = {Learning Shallow Quantum Circuits}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1343--1351}, doi = {10.1145/3618260.3649722}, year = {2024}, } Publisher's Version

Huang, Lingxiao 
STOC '24: "On Optimal Coreset Construction ..."
On Optimal Coreset Construction for Euclidean (k,z)-Clustering
Lingxiao Huang , Jian Li , and Xuan Wu (Nanjing University, China; Tsinghua University, China) Constructing small-sized coresets for various clustering problems in different metric spaces has attracted significant attention for the past decade. A central problem in the coreset literature is to understand what is the best possible coreset size for (k,z)-clustering in Euclidean space. While there has been significant progress on the problem, there is still a gap between the state-of-the-art upper and lower bounds. For instance, the best known upper bound for k-means (z=2) is min{O(k^{3/2} ε^{−2}), O(k ε^{−4})} [Cohen-Addad, Larsen, Saulpic, Schwiegelshohn, Sheikh-Omar, NeurIPS’22], while the best known lower bound is Ω(k ε^{−2}) [Cohen-Addad, Larsen, Saulpic, Schwiegelshohn, STOC’22]. In this paper, we make significant progress on both upper and lower bounds. For a large range of parameters (i.e., ε, k), we have a complete understanding of the optimal coreset size. In particular, we obtain the following results: (1) We present a new coreset lower bound Ω(k ε^{−z−2}) for Euclidean (k,z)-clustering when ε ≥ Ω(k^{−1/(z+2)}). In view of the prior upper bound Õ_{z}(k ε^{−z−2}) [Cohen-Addad, Larsen, Saulpic, Schwiegelshohn, STOC’22], the bound is optimal. The new lower bound is surprising since Ω(k ε^{−2}) [Cohen-Addad, Larsen, Saulpic, Schwiegelshohn, STOC’22] is “conjectured” to be the correct bound in some recent works (see, e.g., [Cohen-Addad, Larsen, Saulpic, Schwiegelshohn, STOC’22; Cohen-Addad, Larsen, Saulpic, Schwiegelshohn, Sheikh-Omar, NeurIPS’22]). Our new lower bound instance is a delicate construction with multiple clusters of points, which is a significant departure from the previous construction in [Cohen-Addad, Larsen, Saulpic, Schwiegelshohn, STOC’22] that contains a single cluster of points. The new lower bound also implies improved lower bounds for (k,z)-clustering in doubling metrics.
(2) For the upper bound, we provide efficient coreset construction algorithms for (k,z)-clustering with improved or optimal coreset sizes in several metric spaces. In particular, we provide an Õ_{z}(k^{(2z+2)/(z+2)} ε^{−2})-sized coreset, with a unified analysis, for (k,z)-clustering for all z ≥ 1 in Euclidean space. This upper bound improves upon the Õ_{z}(k^{2} ε^{−2}) upper bound of [Cohen-Addad, Larsen, Saulpic, Schwiegelshohn, STOC’22] (when k ≤ ε^{−1}), and matches the recent independent results [Cohen-Addad, Larsen, Saulpic, Schwiegelshohn, Sheikh-Omar, NeurIPS’22] for k-median and k-means (z=1,2) and extends them to all z ≥ 1. @InProceedings{STOC24p1594, author = {Lingxiao Huang and Jian Li and Xuan Wu}, title = {On Optimal Coreset Construction for Euclidean (k,z)-Clustering}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1594--1604}, doi = {10.1145/3618260.3649707}, year = {2024}, } Publisher's Version

Ilango, Rahul 
STOC '24: "Beating Brute Force for Compression ..."
Beating Brute Force for Compression Problems
Shuichi Hirahara , Rahul Ilango , and R. Ryan Williams (National Institute of Informatics, Tokyo, Japan; Massachusetts Institute of Technology, USA) A compression problem is defined with respect to an efficient encoding function f; given a string x, our task is to find the shortest y such that f(y) = x. The obvious brute-force algorithm for solving this compression task on n-bit strings runs in time O(2^{ℓ} · t(n)), where ℓ is the length of the shortest description y and t(n) is the time complexity of f when it prints n-bit output. We prove that every compression problem has a Boolean circuit family which finds short descriptions more efficiently than brute force. In particular, our circuits have size 2^{4ℓ/5} · poly(t(n)), which is significantly more efficient for all ℓ ≫ log(t(n)). Our construction builds on Fiat-Naor’s data structure for function inversion [SICOMP 1999]: we show how to carefully modify their data structure so that it can be nontrivially implemented using Boolean circuits, and we show how to utilize hashing so that the circuit size is only exponential in the description length. As a consequence, the Minimum Circuit Size Problem for generic fan-in-two circuits of size s(n) on truth tables of size 2^{n} can be solved by circuits of size 2^{4/5 · w + o(w)} · poly(2^{n}), where w = s(n) log_{2}(s(n) + n). This improves over the brute-force approach of trying all possible size-s(n) circuits for all s(n) ≥ n. Similarly, the task of computing a short description of a string x when its K^{t} complexity is at most ℓ has circuits of size 2^{4ℓ/5} · poly(t). We also give nontrivial circuits for computing Kt complexity on average, and for solving NP relations with “compressible” instance-witness pairs. @InProceedings{STOC24p659, author = {Shuichi Hirahara and Rahul Ilango and R. Ryan Williams}, title = {Beating Brute Force for Compression Problems}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {659--670}, doi = {10.1145/3618260.3649778}, year = {2024}, } Publisher's Version
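The brute-force baseline that the Hirahara–Ilango–Williams abstract above improves on can be sketched directly: enumerate candidate descriptions in order of increasing length until the encoder reproduces x, for O(2^{ℓ} · t(n)) total work. The encoder `f` below is a toy stand-in for illustration only, not the paper's construction.

```python
# Brute-force search for the shortest description y with f(y) = x.
# Runtime is O(2^ell * t(n)), the bound the paper's circuits beat.
from itertools import product

def brute_force_compress(f, x, max_len=20):
    """Return the shortest bitstring y such that f(y) == x, or None."""
    for ell in range(max_len + 1):
        for bits in product("01", repeat=ell):
            y = "".join(bits)
            if f(y) == x:
                return y
    return None

# Toy encoder (hypothetical): tile the description to length 8.
def f(y):
    if not y:
        return None
    return (y * 8)[:8]

print(brute_force_compress(f, "01010101"))  # "01" is the shortest description
```

Note the doubly exponential flavor of the outer loop: each extra description bit doubles the search space, which is exactly why shaving the exponent from ℓ to 4ℓ/5 is significant.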

Itsykson, Dmitry 
STOC '24: "Lower Bounds for Regular Resolution ..."
Lower Bounds for Regular Resolution over Parities
Klim Efremenko , Michal Garlík , and Dmitry Itsykson (Ben-Gurion University of the Negev, Israel; Imperial College London, United Kingdom) The proof system resolution over parities (Res(⊕)) operates with disjunctions of linear equations (linear clauses) over GF(2); it extends the resolution proof system by incorporating linear algebra over GF(2). Over the years, several exponential lower bounds on the size of tree-like refutations have been established. However, proving a superpolynomial lower bound on the size of dag-like Res(⊕) refutations remains a highly challenging open question. We prove an exponential lower bound for regular Res(⊕). Regular Res(⊕) is a subsystem of dag-like Res(⊕) that naturally extends regular resolution. This is the first known superpolynomial lower bound for a fragment of dag-like Res(⊕) which is exponentially stronger than tree-like Res(⊕). In the regular regime, resolving linear clauses C_{1} and C_{2} on a linear form f is permitted only if, for both i ∈ {1,2}, the linear form f does not lie within the linear span of all linear forms that were used in resolution rules during the derivation of C_{i}. Namely, we show that the size of any regular Res(⊕) refutation of the binary pigeonhole principle BPHP_{n}^{n+1} is at least 2^{Ω(∛n/log n)}. A corollary of our result is an exponential lower bound on the size of a strongly read-once linear branching program solving a search problem. This resolves an open question raised by Gryaznov, Pudlák, and Talebanfard (CCC 2022). As a byproduct of our technique, we prove that the size of any tree-like Res(⊕) refutation of the weak binary pigeonhole principle BPHP_{n}^{m} is at least 2^{Ω(n)} using Prover-Delayer games. We also give a direct proof of a width lower bound: we show that any dag-like Res(⊕) refutation of BPHP_{n}^{m} contains a linear clause C with Ω(n) linearly independent equations.
@InProceedings{STOC24p640, author = {Klim Efremenko and Michal Garlík and Dmitry Itsykson}, title = {Lower Bounds for Regular Resolution over Parities}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {640--651}, doi = {10.1145/3618260.3649652}, year = {2024}, } Publisher's Version

Ivkov, Misha 
STOC '24: "Semidefinite Programs Simulate ..."
Semidefinite Programs Simulate Approximate Message Passing Robustly
Misha Ivkov and Tselil Schramm (Stanford University, USA) Approximate message passing (AMP) is a family of iterative algorithms that generalize matrix power iteration. AMP algorithms are known to optimally solve many average-case optimization problems. In this paper, we show that a large class of AMP algorithms can be simulated in polynomial time by local statistics hierarchy semidefinite programs (SDPs), even when an unknown principal minor of measure 1/polylog(dimension) is adversarially corrupted. Ours are the first robust guarantees for many of these problems. Further, our results offer an interesting counterpoint to strong lower bounds against less constrained SDP relaxations for average-case max-cut-gain (a.k.a. “optimizing the Sherrington-Kirkpatrick Hamiltonian”) and other problems. @InProceedings{STOC24p348, author = {Misha Ivkov and Tselil Schramm}, title = {Semidefinite Programs Simulate Approximate Message Passing Robustly}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {348--357}, doi = {10.1145/3618260.3649713}, year = {2024}, } Publisher's Version

Iyer, Siddharth 
STOC '24: "XOR Lemmas for Communication ..."
XOR Lemmas for Communication via Marginal Information
Siddharth Iyer and Anup Rao (University of Washington, USA) We define the marginal information of a communication protocol, and use it to prove XOR lemmas for communication complexity. We show that if every C-bit protocol has bounded advantage for computing a Boolean function f, then every Ω(C√n)-bit protocol has advantage exp(−Ω(n)) for computing the n-fold XOR f^{⊕ n}. We prove exponentially small bounds in the average-case setting, and near-optimal bounds for product distributions and for bounded-round protocols. @InProceedings{STOC24p652, author = {Siddharth Iyer and Anup Rao}, title = {XOR Lemmas for Communication via Marginal Information}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {652--658}, doi = {10.1145/3618260.3649726}, year = {2024}, } Publisher's Version
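The n-fold XOR f^{⊕ n} in the Iyer–Rao abstract above has a simple operational meaning: n independent input pairs are each fed to f and the n output bits are XORed. A minimal sketch, using inner product mod 2 as a stand-in two-party function (an illustrative choice, not one fixed by the paper):

```python
# f^{xor n}: apply a two-party Boolean function f to each of n input
# pairs and XOR the n resulting bits together.
def ip2(x, y):
    """Toy two-party function: inner product of bit vectors mod 2."""
    return sum(a & b for a, b in zip(x, y)) % 2

def xor_fold(f, xs, ys):
    """Evaluate f on each coordinate pair and fold the outputs with XOR."""
    out = 0
    for x, y in zip(xs, ys):
        out ^= f(x, y)
    return out

xs = [(1, 0, 1), (1, 1, 0)]
ys = [(1, 1, 1), (0, 1, 0)]
print(xor_fold(ip2, xs, ys))
```

The XOR lemma says that if each individual f-evaluation is hard to predict with noticeable advantage, the folded bit becomes exponentially harder, even for protocols with roughly √n times more communication.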

Iyer, Vishnu 
STOC '24: "Improved Stabilizer Estimation ..."
Improved Stabilizer Estimation via Bell Difference Sampling
Sabee Grewal , Vishnu Iyer , William Kretschmer , and Daniel Liang (University of Texas at Austin, USA; Simons Institute for the Theory of Computing, Berkeley, USA; Rice University, USA) We study the complexity of learning quantum states in various models with respect to the stabilizer formalism and obtain the following results: We prove that Ω(n) T-gates are necessary for any Clifford+T circuit to prepare computationally pseudorandom quantum states, an exponential improvement over the previously known bound. This bound is asymptotically tight if linear-time quantum-secure pseudorandom functions exist. Given an n-qubit pure quantum state |ψ⟩ that has fidelity at least τ with some stabilizer state, we give an algorithm that outputs a succinct description of a stabilizer state that witnesses fidelity at least τ − ε. The algorithm uses O(n/(ε^{2}τ^{4})) samples and exp(O(n/τ^{4})) / ε^{2} time. In the regime of constant τ, this algorithm estimates stabilizer fidelity substantially faster than the naive exp(O(n^{2}))-time brute-force algorithm over all stabilizer states. In the special case of τ > cos^{2}(π/8), we show that a modification of the above algorithm runs in polynomial time. We exhibit a tolerant property testing algorithm for stabilizer states. The underlying algorithmic primitive in all of our results is Bell difference sampling. To prove our results, we establish and/or strengthen connections between Bell difference sampling, symplectic Fourier analysis, and graph theory. @InProceedings{STOC24p1352, author = {Sabee Grewal and Vishnu Iyer and William Kretschmer and Daniel Liang}, title = {Improved Stabilizer Estimation via Bell Difference Sampling}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1352--1363}, doi = {10.1145/3618260.3649738}, year = {2024}, } Publisher's Version

Jabbarzade, Peyman 
STOC '24: "Prize-Collecting Steiner Tree: ..."
Prize-Collecting Steiner Tree: A 1.79 Approximation
Ali Ahmadi , Iman Gholami , MohammadTaghi Hajiaghayi , Peyman Jabbarzade , and Mohammad Mahdavi (University of Maryland, USA) Prize-Collecting Steiner Tree (PCST) is a generalization of the Steiner Tree problem, a fundamental problem in computer science. In the classic Steiner Tree problem, we aim to connect a set of vertices known as terminals using the minimum-weight tree in a given weighted graph. In this generalized version, each vertex has a penalty, and there is flexibility to decide whether to connect each vertex or pay its associated penalty, making the problem more realistic and practical. Both the Steiner Tree problem and its Prize-Collecting version had longstanding 2-approximation algorithms, matching the integrality gap of the natural LP formulations for both. This barrier for both problems has been surpassed, with algorithms achieving approximation factors below 2. While research on the Steiner Tree problem has led to a series of reductions in the approximation ratio below 2, culminating in a ln(4)+ε approximation by Byrka, Grandoni, Rothvoß, and Sanità [STOC’10], the Prize-Collecting version has not seen improvements in the past 15 years since the work of Archer, Bateni, Hajiaghayi, and Karloff [FOCS’09, SIAM J. Comput.’11], which reduced the approximation factor for this problem from 2 to 1.9672. Interestingly, even the Prize-Collecting TSP approximation, which was first improved below 2 in the same paper, has seen several advancements since then (see, e.g., Blauth and Nägele [STOC’23]). In this paper, we reduce the approximation factor for the PCST problem substantially to 1.7994 via a novel iterative approach. @InProceedings{STOC24p1641, author = {Ali Ahmadi and Iman Gholami and MohammadTaghi Hajiaghayi and Peyman Jabbarzade and Mohammad Mahdavi}, title = {Prize-Collecting Steiner Tree: A 1.79 Approximation}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1641--1652}, doi = {10.1145/3618260.3649789}, year = {2024}, } Publisher's Version
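The PCST objective described above (connect a vertex or pay its penalty) can be pinned down with an exponential-time brute force: for every vertex set S containing the root, the cheapest tree spanning exactly S is the MST of the induced subgraph, and the cost adds the penalties of the excluded vertices. This is only a sketch of the objective on a toy instance, not the paper's 1.7994-approximation.

```python
# Exhaustive PCST solver for tiny instances: minimize, over vertex sets S
# containing the root, (MST weight of G[S]) + (penalties of vertices not in S).
from itertools import combinations

def mst_weight(nodes, edges):
    """Kruskal on the subgraph induced by `nodes`; None if disconnected."""
    parent = {v: v for v in nodes}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    total, used = 0, 0
    for w, u, v in sorted(edges):
        if u in parent and v in parent:
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
                total += w
                used += 1
    return total if used == len(nodes) - 1 else None

def pcst_brute_force(vertices, edges, penalty, root):
    best = None
    others = [v for v in vertices if v != root]
    for r in range(len(others) + 1):
        for extra in combinations(others, r):
            S = {root, *extra}
            w = mst_weight(S, edges)
            if w is None:
                continue
            cost = w + sum(penalty[v] for v in vertices if v not in S)
            best = cost if best is None else min(best, cost)
    return best

# Toy instance: connecting "a" is cheap (edge cost 1 < penalty 5),
# while "b" is better paid off (penalty 2 < edge cost 10).
vertices = ["r", "a", "b"]
edges = [(1, "r", "a"), (10, "r", "b"), (10, "a", "b")]
penalty = {"r": 0, "a": 5, "b": 2}
print(pcst_brute_force(vertices, edges, penalty, "r"))  # 1 + 2 = 3
```

The approximation algorithms in the literature replace this 2^n enumeration with LP-based and primal-dual techniques; the brute force only fixes what "optimal" means.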

Jahanara, Mohammad Mahdi 
STOC '24: "On the Power of Interactive ..."
On the Power of Interactive Proofs for Learning
Tom Gur , Mohammad Mahdi Jahanara , Mohammad Mahdi Khodabandeh , Ninad Rajgopal , Bahar Salamatian , and Igor Shinkar (University of Cambridge, United Kingdom; Simon Fraser University, Canada; Qualcomm, Canada) We continue the study of doubly-efficient proof systems for verifying agnostic PAC learning, for which we obtain the following results. We construct an interactive protocol for learning the t largest Fourier characters of a given function f ∶ {0,1}^{n} → {0,1} up to an arbitrarily small error, wherein the verifier uses poly(t) random examples. This improves upon the Interactive Goldreich-Levin protocol of Goldwasser, Rothblum, Shafer, and Yehudayoff (ITCS 2021), whose sample complexity is poly(t,n). For agnostically learning the class AC^{0}[2] under the uniform distribution, we build on the work of Carmosino, Impagliazzo, Kabanets, and Kolokolova (APPROX/RANDOM 2017) and design an interactive protocol, where given a function f ∶ {0,1}^{n} → {0,1}, the verifier learns the closest hypothesis up to a polylog(n) multiplicative factor, using quasi-polynomially many random examples. In contrast, this class has been notoriously resistant even to constructing realisable learners (without a prover) using random examples. For agnostically learning k-juntas under the uniform distribution, we obtain an interactive protocol, where the verifier uses O(2^{k}) random examples of a given function f ∶ {0,1}^{n} → {0,1}. Crucially, the sample complexity of the verifier is independent of n. We also show that if we do not insist on doubly-efficient proof systems, then the model becomes trivial. Specifically, we show a protocol for an arbitrary class C of Boolean functions in the distribution-free setting, where the verifier uses O(1) labeled examples to learn f.
@InProceedings{STOC24p1063, author = {Tom Gur and Mohammad Mahdi Jahanara and Mohammad Mahdi Khodabandeh and Ninad Rajgopal and Bahar Salamatian and Igor Shinkar}, title = {On the Power of Interactive Proofs for Learning}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1063--1070}, doi = {10.1145/3618260.3649784}, year = {2024}, } Publisher's Version

Jain, Rahul 
STOC '24: "An Area Law for the Maximally-Mixed ..."
An Area Law for the Maximally-Mixed Ground State in Arbitrarily Degenerate Systems with Good AGSP
Itai Arad , Raz Firanko , and Rahul Jain (Centre for Quantum Technologies, Singapore; Technion, Israel; National University of Singapore, Singapore) We show an area law in the mutual information for the maximally-mixed state Ω in the ground space of general Hamiltonians, which is independent of the underlying ground space degeneracy. Our result assumes the existence of a ‘good’ approximation to the ground state projector (a good AGSP), a crucial ingredient in former area-law proofs. Such approximations have been explicitly derived for 1D gapped local Hamiltonians and 2D frustration-free locally-gapped local Hamiltonians. As a corollary, we show that in 1D gapped local Hamiltonians, for any ε>0 and any bipartition L ∪ L^{c} of the system, I_{max}^{ε}(L:L^{c})_{Ω} ≤ O(log(|L|) + log(1/ε)), where |L| represents the number of sites in L and I_{max}^{ε}(L:L^{c})_{Ω} represents the ε-smoothed maximum mutual information with respect to the L:L^{c} partition in Ω. From this bound we then conclude I(L:L^{c})_{Ω} ≤ O(log(|L|)) – an area law for the mutual information in 1D systems with a logarithmic correction. In addition, we show that Ω can be approximated up to an ε in trace norm with a state of Schmidt rank of at most poly(|L|/ε). Similar corollaries are derived for the mutual information of 2D frustration-free and locally-gapped local Hamiltonians. @InProceedings{STOC24p1311, author = {Itai Arad and Raz Firanko and Rahul Jain}, title = {An Area Law for the Maximally-Mixed Ground State in Arbitrarily Degenerate Systems with Good AGSP}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1311--1322}, doi = {10.1145/3618260.3649612}, year = {2024}, } Publisher's Version

Jambulapati, Arun 
STOC '24: "Sparsifying Generalized Linear ..."
Sparsifying Generalized Linear Models
Arun Jambulapati , James R. Lee , Yang P. Liu , and Aaron Sidford (Simons Institute for the Theory of Computing, Berkeley, USA; University of Washington, USA; Institute for Advanced Study, Princeton, USA; Stanford University, USA) We consider the sparsification of sums F : ℝ^{n} → ℝ^{+} where F(x) = f_{1}(⟨a_{1},x⟩) + ⋯ + f_{m}(⟨a_{m},x⟩) for vectors a_{1},…,a_{m} ∈ ℝ^{n} and functions f_{1},…,f_{m} : ℝ → ℝ^{+}. We show that (1+ε)-approximate sparsifiers of F with support size (n/ε^{2}) (log n/ε)^{O(1)} exist whenever the functions f_{1},…,f_{m} are symmetric, monotone, and satisfy natural growth bounds. Additionally, we give efficient algorithms to compute such a sparsifier assuming each f_{i} can be evaluated efficiently. Our results generalize the classical case of ℓ_{p} sparsification, where f_{i}(z) = |z|^{p}, for p ∈ (0, 2], and give the first near-linear size sparsifiers in the well-studied setting of the Huber loss function and its generalizations, e.g., f_{i}(z) = min{|z|^{p}, |z|^{2}} for 0 < p ≤ 2. Our sparsification algorithm can be applied to give near-optimal reductions for optimizing a variety of generalized linear models including ℓ_{p} regression for p ∈ (1, 2] to high accuracy, via solving (log n)^{O(1)} sparse regression instances with m ≤ n (log n)^{O(1)}, plus runtime proportional to the number of nonzero entries in the vectors a_{1}, …, a_{m}. @InProceedings{STOC24p1665, author = {Arun Jambulapati and James R. Lee and Yang P. Liu and Aaron Sidford}, title = {Sparsifying Generalized Linear Models}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1665--1675}, doi = {10.1145/3618260.3649684}, year = {2024}, } Publisher's Version Video

Jayaram, Rajesh 
STOC '24: "Data-Dependent LSH for the ..."
Data-Dependent LSH for the Earth Mover’s Distance
Rajesh Jayaram , Erik Waingarten , and Tian Zhang (Google Research, USA; University of Pennsylvania, USA) We give new data-dependent locality-sensitive hashing schemes (LSH) for the Earth Mover’s Distance (EMD), and as a result, improve the best approximation for nearest neighbor search under EMD by a quadratic factor. Here, the metric EMD_{s}(ℝ^{d},ℓ_{p}) consists of sets of s vectors in ℝ^{d}, and for any two sets x,y of s vectors the distance EMD(x,y) is the minimum cost of a perfect matching between x and y, where the cost of matching two vectors is their ℓ_{p} distance. Previously, Andoni, Indyk, and Krauthgamer gave a (data-independent) locality-sensitive hashing scheme for EMD_{s}(ℝ^{d},ℓ_{p}) when p ∈ [1,2] with approximation O(log^{2} s). By being data-dependent, we improve the approximation to Õ(log s). Our main technical contribution is to show that for any distribution µ supported on the metric EMD_{s}(ℝ^{d}, ℓ_{p}), there exists a data-dependent LSH for dense regions of µ which achieves approximation Õ(log s), and that the data-independent LSH actually achieves an Õ(log s)-approximation outside of those dense regions. Finally, we show how to “glue” together these two hashing schemes without any additional loss in the approximation. Beyond nearest neighbor search, our data-dependent LSH also gives optimal (distributional) sketches for the Earth Mover’s Distance. By known sketching lower bounds, this implies that our LSH is optimal (up to poly(log log s) factors) among those that collide close points with constant probability. @InProceedings{STOC24p800, author = {Rajesh Jayaram and Erik Waingarten and Tian Zhang}, title = {Data-Dependent LSH for the Earth Mover’s Distance}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {800--811}, doi = {10.1145/3618260.3649666}, year = {2024}, } Publisher's Version
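The metric in the abstract above is easy to make concrete: for two equal-size sets of vectors, EMD is the minimum cost of a perfect matching, with pairwise cost the ℓ_{p} distance. The brute-force sketch below (exhaustive over permutations, O(s!) time) only pins down the definition; the paper is about sublinear-time search structures for this metric, not about computing it.

```python
# EMD between two equal-size sets of vectors: minimum-cost perfect
# matching under the pairwise l_p distance.
from itertools import permutations

def lp_dist(u, v, p=2):
    return sum(abs(a - b) ** p for a, b in zip(u, v)) ** (1 / p)

def emd(x, y, p=2):
    """Brute-force min-cost perfect matching between sets x and y."""
    assert len(x) == len(y)
    return min(
        sum(lp_dist(u, y[j], p) for u, j in zip(x, perm))
        for perm in permutations(range(len(y)))
    )

x = [(0.0, 0.0), (1.0, 0.0)]
y = [(1.0, 0.0), (0.0, 1.0)]
print(emd(x, y))  # swap matching: cost 1 + 0 = 1.0
```

In practice the matching would be computed with the Hungarian algorithm in O(s^{3}) time; the permutation search above is chosen purely for brevity.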

Jin, Ce 
STOC '24: "Shaving Logs via Large Sieve ..."
Shaving Logs via Large Sieve Inequality: Faster Algorithms for Sparse Convolution and More
Ce Jin and Yinzhan Xu (Massachusetts Institute of Technology, USA) In sparse convolution-type problems, a common technique is to hash the input integers modulo a random prime p ∈ [Q/2,Q] for some parameter Q, which reduces the range of the input integers while preserving their additive structure. However, this hash family suffers from two drawbacks, which led to bottlenecks in many state-of-the-art algorithms: (1) The collision probability of two elements from [N] is O(log N/Q) rather than O(1/Q); (2) It is difficult to derandomize the choice of p; known derandomization techniques lead to superlogarithmic overhead [Chan, Lewenstein STOC’15]. In this paper, we partially overcome these drawbacks in certain scenarios, via novel applications of the large sieve inequality from analytic number theory. Consequently, we obtain the following improved algorithms for various problems (in the standard word RAM model): Sparse Nonnegative Convolution: We obtain an O(t log t)-time Las Vegas algorithm that computes the convolution A ⋆ B of two nonnegative integer vectors A,B, where t is the output sparsity ‖A ⋆ B‖_{0}. Moreover, our algorithm terminates in O(t log t) time with 1−1/poly(t) probability. This simultaneously improves the O(t log t log log t)-time Las Vegas algorithm [Bringmann, Fischer, Nakos SODA’22] and the Monte Carlo O(t log t)-time algorithm with failure probability 2^{−√(log t)} [Bringmann, Fischer, Nakos STOC’21]. Text-to-Pattern Hamming Distances: Given a length-m pattern P and a length-n text T, we obtain an O(n√(m log log m))-time deterministic algorithm that exactly computes the Hamming distance between P and every length-m substring of T. This improves the previous O(n√m (log m log log m)^{1/4})-time deterministic algorithm [Chan, Jin, Vassilevska Williams, Xu FOCS’23] and nearly matches their O(n√m)-time Las Vegas algorithm.
Sparse General Convolution: For sparse convolution with possibly negative input, all previous approaches required Ω(t log^{2} t) time, where t is the maximum of input and output sparsity, and an important question left open by [Bringmann, Fischer, Nakos STOC’21] is whether this can be improved. We make partial progress towards solving this question by giving a Monte Carlo O(t log t) time algorithm in the restricted case where the length N of the input vectors satisfies N ≤ t^{1.99}. @InProceedings{STOC24p1573, author = {Ce Jin and Yinzhan Xu}, title = {Shaving Logs via Large Sieve Inequality: Faster Algorithms for Sparse Convolution and More}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1573--1584}, doi = {10.1145/3618260.3649605}, year = {2024}, } Publisher's Version STOC '24: "0-1 Knapsack in Nearly Quadratic ..." 0-1 Knapsack in Nearly Quadratic Time Ce Jin (Massachusetts Institute of Technology, USA) We study pseudopolynomial time algorithms for the fundamental 0-1 Knapsack problem. Recent research interest has focused on its fine-grained complexity with respect to the number of items n and the maximum item weight w_{max}. Under the (min,+)-convolution hypothesis, 0-1 Knapsack does not have O((n+w_{max})^{2−δ}) time algorithms (Cygan-Mucha-Węgrzycki-Włodarczyk 2017 and Künnemann-Paturi-Schneider 2017). On the upper bound side, currently the fastest algorithm runs in Õ(n + w_{max}^{12/5}) time (Chen, Lian, Mao, and Zhang 2023), improving the earlier O(n + w_{max}^{3})-time algorithm by Polak, Rohwedder, and Węgrzycki (2021). In this paper, we close this gap between the upper bound and the conditional lower bound (up to subpolynomial factors): The 0-1 Knapsack problem has a deterministic algorithm in O(n + w_{max}^{2} log^{4} w_{max}) time.
Our algorithm combines and extends several recent structural results and algorithmic techniques from the literature on knapsack-type problems: (1) We generalize the “fine-grained proximity” technique of Chen, Lian, Mao, and Zhang (2023) derived from the additive-combinatorial results of Bringmann and Wellnitz (2021) on dense subset sums. This allows us to bound the support size of the useful partial solutions in the dynamic program. (2) To exploit the small support size, our main technical component is a vast extension of the “witness propagation” method, originally designed by Deng, Mao, and Zhong (2023) for speeding up dynamic programming in the easier unbounded knapsack settings. To extend this approach to our 0-1 setting, we use a novel pruning method, as well as the two-level color-coding of Bringmann (2017) and the SMAWK algorithm on tall matrices. @InProceedings{STOC24p271, author = {Ce Jin}, title = {0-1 Knapsack in Nearly Quadratic Time}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {271--282}, doi = {10.1145/3618260.3649618}, year = {2024}, } Publisher's Version
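The range-reduction trick described in the sparse-convolution abstract above is worth seeing concretely: hashing an integer a to a mod p, for a random prime p ∈ [Q/2, Q], shrinks the universe while respecting addition, h(a+b) = (h(a)+h(b)) mod p. The cost is the O(log N/Q) per-pair collision probability that the paper attacks via the large sieve inequality. A minimal illustration (the trial-division prime sampler is just for the demo):

```python
# Hash integers mod a random prime to reduce their range while
# preserving additive structure.
import random

def random_prime(lo, hi):
    """Rejection-sample a prime from [lo, hi] via trial division."""
    def is_prime(m):
        if m < 2:
            return False
        d = 2
        while d * d <= m:
            if m % d == 0:
                return False
            d += 1
        return True
    while True:
        c = random.randrange(lo, hi + 1)
        if is_prime(c):
            return c

Q = 1000
p = random_prime(Q // 2, Q)
a, b = 123_456_789, 987_654_321
# Additive structure survives the hash...
assert (a + b) % p == ((a % p) + (b % p)) % p
# ...while the range shrinks from ~10^9 down to [0, p).
print(p, a % p, b % p)
```

Two distinct integers collide exactly when p divides their difference; a number below N has O(log N) prime factors in [Q/2, Q], which is where the O(log N/Q) collision bound comes from.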

Jin, Zhengzhong 
STOC '24: "SNARGs under LWE via Propositional ..."
SNARGs under LWE via Propositional Proofs
Zhengzhong Jin , Yael Kalai , Alex Lombardi , and Vinod Vaikuntanathan (Northeastern University, USA; Microsoft Research, USA; Massachusetts Institute of Technology, USA; Princeton University, USA) We construct a succinct non-interactive argument (SNARG) system for every NP language L that has a propositional proof of non-membership, i.e. of x ∉ L. The soundness of our SNARG system relies on the hardness of the learning with errors (LWE) problem. The common reference string (CRS) in our construction grows with the space required to verify the propositional proof, and the size of the proof grows polylogarithmically in the length of the propositional proof. Unlike most of the literature on SNARGs, our result implies SNARGs for languages L with proof length shorter than logarithmic in the deterministic time complexity of L. Our SNARG improves over prior SNARGs for such “hard” NP languages (Sahai and Waters, STOC 2014; Jain and Jin, FOCS 2022) in several ways: 1) For languages with polynomial-length propositional proofs of non-membership, our SNARGs are based on a single, polynomial-time falsifiable assumption, namely LWE. 2) Our construction handles superpolynomial-length propositional proofs, as long as they have bounded space, under the subexponential LWE assumption. 3) Our SNARGs have a transparent setup, meaning that no private randomness is required to generate the CRS. Moreover, our approach departs dramatically from these prior works: we show how to design SNARGs for hard languages without publishing a program (in the CRS) that has the power to verify NP witnesses. The key new idea in our construction is what we call a “locally unsatisfiable extension” of the NP verification circuit {C_{x}}_{x}. We say that an NP verifier has a locally unsatisfiable extension if for every x ∉ L, there exists an extension E_{x} of C_{x} that is not even locally satisfiable in the sense of a local assignment generator [Paneth-Rothblum, TCC 2017].
Crucially, we allow E_{x} to depend arbitrarily on x rather than being efficiently constructible. In this work, we show – via a “hash-and-BARG” for a hidden, encrypted computation – how to build SNARGs for all languages with locally unsatisfiable extensions. We additionally show that propositional proofs of unsatisfiability generically imply the existence of locally unsatisfiable extensions, which allows us to deduce our main results. As an illustrative example, our results imply a SNARG for the decisional Diffie-Hellman (DDH) language under the LWE assumption. @InProceedings{STOC24p1750, author = {Zhengzhong Jin and Yael Kalai and Alex Lombardi and Vinod Vaikuntanathan}, title = {SNARGs under LWE via Propositional Proofs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1750--1757}, doi = {10.1145/3618260.3649770}, year = {2024}, } Publisher's Version

Kacham, Praneeth 
STOC '24: "Optimal Communication Bounds ..."
Optimal Communication Bounds for Classic Functions in the Coordinator Model and Beyond
Hossein Esfandiari , Praneeth Kacham , Vahab Mirrokni , David P. Woodruff , and Peilin Zhong (Google, United Kingdom; Carnegie Mellon University, USA; Google Research, USA) In the coordinator model of communication with s servers, given an arbitrary nonnegative function f, we study the problem of approximating the sum ∑_{i ∈ [n]} f(x_{i}) up to a 1 ± ε factor. Here the vector x ∈ ℝ^{n} is defined to be x = x(1) + ⋯ + x(s), where x(j) ≥ 0 denotes the nonnegative vector held by the j-th server. A special case of the problem is when f(x) = x^{k}, which corresponds to the well-studied problem of F_{k} moment estimation in the distributed communication model. We introduce a new parameter c_{f}[s] which captures the communication complexity of approximating ∑_{i ∈ [n]} f(x_{i}), and for a broad class of functions f which includes f(x) = x^{k} for k ≥ 2 and other robust functions such as the Huber loss function, we give a two-round protocol that uses total communication c_{f}[s]/ε^{2} bits, up to polylogarithmic factors. For this broad class of functions, our result improves upon the communication bounds achieved by Kannan, Vempala, and Woodruff (COLT 2014) and Woodruff and Zhang (STOC 2012), obtaining the optimal communication up to polylogarithmic factors in the minimum number of rounds. We show that our protocol can also be used for approximating higher-order correlations. Our results are part of a broad framework for optimally sampling from a joint distribution in terms of the marginal distributions held on individual servers. Apart from the coordinator model, algorithms for other graph topologies in which each node is a server have been extensively studied. We argue that directly lifting protocols from the coordinator model to other graph topologies will require some nodes in the graph to send a lot of communication. Hence, a natural question is the type of problems that can be efficiently solved in general graph topologies.
We address this question by giving communication-efficient protocols in the so-called personalized CONGEST model for solving linear regression and low-rank approximation by designing composable sketches. Our sketch construction may be of independent interest and can implement any importance sampling procedure that has a monotonicity property. @InProceedings{STOC24p1911, author = {Hossein Esfandiari and Praneeth Kacham and Vahab Mirrokni and David P. Woodruff and Peilin Zhong}, title = {Optimal Communication Bounds for Classic Functions in the Coordinator Model and Beyond}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1911--1922}, doi = {10.1145/3618260.3649742}, year = {2024}, } Publisher's Version

Kalai, Adam Tauman 
STOC '24: "Calibrated Language Models ..."
Calibrated Language Models Must Hallucinate
Adam Tauman Kalai and Santosh S. Vempala (OpenAI, USA; Georgia Institute of Technology, USA) Recent language models generate false but plausible-sounding text with surprising frequency. Such “hallucinations” are an obstacle to the usability of language-based AI systems and can harm people who rely upon their outputs. This work shows that there is an inherent statistical lower bound on the rate at which pretrained language models hallucinate certain types of facts, having nothing to do with the transformer LM architecture or data quality. For “arbitrary” facts whose veracity cannot be determined from the training data, we show that hallucinations must occur at a certain rate for language models that satisfy a statistical calibration condition appropriate for generative language models. Specifically, if the maximum probability of any fact is bounded, we show that the probability of generating a hallucination is close to the fraction of facts that occur exactly once in the training data (a “Good–Turing” estimate), even assuming ideal training data without errors. One conclusion is that models pretrained to be sufficiently good predictors (i.e., calibrated) may require post-training to mitigate hallucinations on the type of arbitrary facts that tend to appear once in the training set. However, our analysis also suggests that there is no statistical reason that pretraining will lead to hallucination on facts that tend to appear more than once in the training data (like references to publications such as articles and books, whose hallucinations have been particularly notable and problematic) or on systematic facts (like arithmetic calculations). Therefore, different architectures and learning algorithms may mitigate these latter types of hallucinations. @InProceedings{STOC24p160, author = {Adam Tauman Kalai and Santosh S.
Vempala}, title = {Calibrated Language Models Must Hallucinate}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {160--171}, doi = {10.1145/3618260.3649777}, year = {2024}, } Publisher's Version
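The Good–Turing quantity the abstract invokes — the fraction of facts occurring exactly once in the training data — is simple to compute; a minimal sketch (illustrative only, with a made-up toy corpus):

```python
from collections import Counter

def singleton_fraction(observations):
    """Fraction of observations whose value occurs exactly once in the data
    (the Good-Turing estimate of the unseen mass that the abstract ties to
    the hallucination rate)."""
    counts = Counter(observations)
    singletons = sum(1 for c in counts.values() if c == 1)
    return singletons / len(observations)

# Toy "facts": "b", "c", and "e" each appear exactly once.
corpus = ["a", "b", "a", "c", "d", "d", "e"]
rate = singleton_fraction(corpus)  # 3/7
```

On this toy corpus the bound says a calibrated model of this data must hallucinate at a rate close to 3/7 on such arbitrary facts.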

Kalai, Yael 
STOC '24: "SNARGs under LWE via Propositional ..."
SNARGs under LWE via Propositional Proofs
Zhengzhong Jin, Yael Kalai, Alex Lombardi, and Vinod Vaikuntanathan (Northeastern University, USA; Microsoft Research, USA; Massachusetts Institute of Technology, USA; Princeton University, USA) We construct a succinct non-interactive argument (SNARG) system for every NP language L that has a propositional proof of non-membership, i.e., of x ∉ L. The soundness of our SNARG system relies on the hardness of the learning with errors (LWE) problem. The common reference string (CRS) in our construction grows with the space required to verify the propositional proof, and the size of the proof grows polylogarithmically in the length of the propositional proof. Unlike most of the literature on SNARGs, our result implies SNARGs for languages L with proof length shorter than logarithmic in the deterministic time complexity of L. Our SNARG improves over prior SNARGs for such “hard” NP languages (Sahai and Waters, STOC 2014; Jain and Jin, FOCS 2022) in several ways: 1) For languages with polynomial-length propositional proofs of non-membership, our SNARGs are based on a single, polynomial-time falsifiable assumption, namely LWE. 2) Our construction handles superpolynomial-length propositional proofs, as long as they have bounded space, under the subexponential LWE assumption. 3) Our SNARGs have a transparent setup, meaning that no private randomness is required to generate the CRS. Moreover, our approach departs dramatically from these prior works: we show how to design SNARGs for hard languages without publishing a program (in the CRS) that has the power to verify NP witnesses. The key new idea in our construction is what we call a “locally unsatisfiable extension” of the NP verification circuit {C_{x}}_{x}. We say that an NP verifier has a locally unsatisfiable extension if for every x ∉ L, there exists an extension E_{x} of C_{x} that is not even locally satisfiable in the sense of a local assignment generator [Paneth–Rothblum, TCC 2017].
Crucially, we allow E_{x} to depend arbitrarily on x rather than being efficiently constructible. In this work, we show – via a “hash-and-BARG” for a hidden, encrypted computation – how to build SNARGs for all languages with locally unsatisfiable extensions. We additionally show that propositional proofs of unsatisfiability generically imply the existence of locally unsatisfiable extensions, which allows us to deduce our main results. As an illustrative example, our results imply a SNARG for the decisional Diffie–Hellman (DDH) language under the LWE assumption. @InProceedings{STOC24p1750, author = {Zhengzhong Jin and Yael Kalai and Alex Lombardi and Vinod Vaikuntanathan}, title = {SNARGs under LWE via Propositional Proofs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1750--1757}, doi = {10.1145/3618260.3649770}, year = {2024}, } Publisher's Version

Kalayci, Yusuf Hakan 
STOC '24: "Limitations of Stochastic ..."
Limitations of Stochastic Selection Problems with Pairwise Independent Priors
Shaddin Dughmi, Yusuf Hakan Kalayci, and Neel Patel (University of Southern California, USA) Motivated by the growing interest in correlation-robust stochastic optimization, we investigate stochastic selection problems beyond independence. Specifically, we consider the instructive case of pairwise-independent priors and matroid constraints. We obtain essentially-optimal bounds for contention resolution and prophet inequalities. The impetus for our work comes from the recent work of Caragiannis et al. [WINE 2022], who derived a constant-factor approximation for the single-choice prophet inequality with pairwise-independent priors. For general matroids, our results are tight and largely negative. For both contention resolution and prophet inequalities, our impossibility results hold for the full linear matroid over a finite field. We explicitly construct pairwise-independent distributions which rule out an ω(1/)-balanced offline CRS and an ω(1/log)-competitive prophet inequality against the (usual) oblivious adversary. For both results, we employ a generic approach for constructing pairwise-independent random vectors — one which unifies and generalizes existing pairwise-independence constructions from the literature on universal hash functions and pseudorandomness. Specifically, our approach is based on our observation that random linear maps turn linear independence into stochastic independence. We then examine the class of matroids which satisfy the so-called partition property — these include most common matroids encountered in optimization. We obtain positive results for both online contention resolution and prophet inequalities with pairwise-independent priors on such matroids, approximately matching the corresponding guarantees for fully independent priors. These algorithmic results hold against the almighty adversary for both problems.
@InProceedings{STOC24p479, author = {Shaddin Dughmi and Yusuf Hakan Kalayci and Neel Patel}, title = {Limitations of Stochastic Selection Problems with Pairwise Independent Priors}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {479--490}, doi = {10.1145/3618260.3649718}, year = {2024}, } Publisher's Version
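The observation that "random linear maps turn linear independence into stochastic independence" has a classical minimal instance (a toy sketch under my reading, not the paper's full-linear-matroid construction): over F_p, the map i ↦ a·i + b (mod p) with uniformly random a, b produces pairwise-independent uniform values, because the vectors (i, 1) and (j, 1) are linearly independent whenever i ≠ j.

```python
# Enumerate all seeds (a, b) in F_p x F_p and tabulate the joint
# distribution of (h(i), h(j)) for two distinct points i != j.
from collections import Counter

p = 5
i, j = 1, 3  # any two distinct points of F_p

joint = Counter()
for a in range(p):
    for b in range(p):
        joint[((a * i + b) % p, (a * j + b) % p)] += 1

# Every value pair occurs exactly once among the p*p seeds, i.e. the joint
# distribution of (h(i), h(j)) is uniform on F_p x F_p: the two hash values
# are independent and uniform, as pairwise independence demands.
assert len(joint) == p * p and all(c == 1 for c in joint.values())
```

The underlying reason: for i ≠ j, the linear map (a, b) ↦ (a·i + b, a·j + b) has determinant i − j ≠ 0 in F_p, so it is a bijection on seed space.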

Kallaugher, John 
STOC '24: "Exponential Quantum Space ..."
Exponential Quantum Space Advantage for Approximating Maximum Directed Cut in the Streaming Model
John Kallaugher, Ojas Parekh, and Nadezhda Voronova (Sandia National Laboratories, USA; Boston University, USA) While the search for quantum advantage typically focuses on speedups in execution time, quantum algorithms also offer the potential for advantage in space complexity. Previous work has shown such advantages for data stream problems, in which elements arrive and must be processed sequentially without random access, but these have been restricted to specially-constructed problems [Le Gall, SPAA ’06] or polynomial advantage [Kallaugher, FOCS ’21]. We show an exponential quantum space advantage for the maximum directed cut problem. This is the first known exponential quantum space advantage for any natural streaming problem. This also constitutes the first unconditional exponential quantum resource advantage for approximating a discrete optimization problem in any setting. Our quantum streaming algorithm 0.4844-approximates the value of the largest directed cut in a graph stream with n vertices using polylog(n) space, while previous work by Chou, Golovnev, and Velusamy [FOCS ’20] implies that obtaining an approximation ratio better than 4/9 ≈ 0.4444 requires Ω(√n) space for any classical streaming algorithm. Our result is based on a recent O(√n)-space classical streaming approach by Saxena, Singer, Sudan, and Velusamy [FOCS ’23], with an additional improvement in the approximation ratio due to recent work by Singer [APPROX ’23]. @InProceedings{STOC24p1805, author = {John Kallaugher and Ojas Parekh and Nadezhda Voronova}, title = {Exponential Quantum Space Advantage for Approximating Maximum Directed Cut in the Streaming Model}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1805--1815}, doi = {10.1145/3618260.3649709}, year = {2024}, } Publisher's Version
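The value being approximated is easy to state exactly on small graphs; a brute-force sketch (exponential-time toy, nothing like the polylog(n)-space streaming algorithm):

```python
from itertools import product

def max_dicut(n, edges):
    """Exhaustive maximum directed cut: choose a side S of the vertex set;
    an arc (u, v) is cut iff u is in S and v is not. Tries all 2^n sides,
    so this is an illustration of the objective only."""
    best = 0
    for side in product((0, 1), repeat=n):
        cut = sum(1 for u, v in edges if side[u] == 1 and side[v] == 0)
        best = max(best, cut)
    return best

# Directed 3-cycle: at most one arc can ever be cut.
triangle = [(0, 1), (1, 2), (2, 0)]
# Directed 4-cycle: alternating sides cut two arcs.
square = [(0, 1), (1, 2), (2, 3), (3, 0)]
```

A 0.4844-approximation guarantees, e.g., a value of at least 0.4844 · 2 on the 4-cycle above.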

Kamath, Chethan 
STOC '24: "Batch Proofs Are Statistically ..."
Batch Proofs Are Statistically Hiding
Nir Bitansky, Chethan Kamath, Omer Paneth, Ron D. Rothblum, and Prashant Nalini Vasudevan (Tel Aviv University, Israel; IIT Bombay, India; Technion, Israel; National University of Singapore, Singapore) Batch proofs are proof systems that convince a verifier that x_{1},…,x_{t} ∈ L, for some NP language L, with communication that is much shorter than sending the t witnesses. In the case of statistical soundness (where the cheating prover is unbounded but the honest prover is efficient given the witnesses), interactive batch proofs are known for UP, the class of unique-witness NP languages. In the case of computational soundness (where both honest and dishonest provers are efficient), non-interactive solutions are now known for all of NP, assuming standard lattice or group assumptions. We exhibit the first negative results regarding the existence of batch proofs and arguments: (1) Statistically sound batch proofs for L imply that L has a statistically witness-indistinguishable (SWI) proof, with inverse polynomial SWI error, and a non-uniform honest prover. The implication is unconditional for obtaining honest-verifier SWI or for obtaining full-fledged SWI from public-coin protocols, whereas for private-coin protocols full-fledged SWI is obtained assuming one-way functions. This poses a barrier for achieving batch proofs beyond UP (where witness indistinguishability is trivial). In particular, assuming that NP does not have SWI proofs, batch proofs for all of NP do not exist. (2) Computationally sound batch proofs (a.k.a. batch arguments or BARGs) for NP, together with one-way functions, imply statistical zero-knowledge (SZK) arguments for NP with roughly the same number of rounds, an inverse polynomial zero-knowledge error, and a non-uniform honest prover. Thus, constant-round interactive BARGs from one-way functions would yield constant-round SZK arguments from one-way functions.
This would be surprising, as SZK arguments are currently only known assuming constant-round statistically-hiding commitments. We further prove new positive implications of non-interactive batch arguments to non-interactive zero-knowledge arguments (with explicit uniform prover and verifier): Non-interactive BARGs for NP, together with one-way functions, imply non-interactive computational zero-knowledge arguments for NP. Assuming also dual-mode commitments, the zero knowledge can be made statistical. Both our negative and positive results stem from a new framework showing how to transform a batch protocol for a language L into an SWI protocol for L. @InProceedings{STOC24p435, author = {Nir Bitansky and Chethan Kamath and Omer Paneth and Ron D. Rothblum and Prashant Nalini Vasudevan}, title = {Batch Proofs Are Statistically Hiding}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {435--443}, doi = {10.1145/3618260.3649775}, year = {2024}, } Publisher's Version

Kane, Daniel M. 
STOC '24: "Super Nonsingular Decompositions ..."
Super Nonsingular Decompositions of Polynomials and Their Application to Robustly Learning Low-Degree PTFs
Ilias Diakonikolas, Daniel M. Kane, Vasilis Kontonis, Sihan Liu, and Nikos Zarifis (University of Wisconsin–Madison, USA; University of California at San Diego, USA; University of Texas at Austin, USA) We study the efficient learnability of low-degree polynomial threshold functions (PTFs) in the presence of a constant fraction of adversarial corruptions. Our main algorithmic result is a polynomial-time PAC learning algorithm for this concept class in the strong contamination model under the Gaussian distribution with error guarantee O_{d, c}(opt^{1−c}), for any desired constant c>0, where opt is the fraction of corruptions. In the strong contamination model, an omniscient adversary can arbitrarily corrupt an opt-fraction of the data points and their labels. This model generalizes the malicious noise model and the adversarial label noise model. Prior to our work, known polynomial-time algorithms in this corruption model (or even in the weaker adversarial label noise model) achieved error Õ_{d}(opt^{1/(d+1)}), which deteriorates significantly as a function of the degree d. Our algorithm employs an iterative approach inspired by localization techniques previously used in the context of learning linear threshold functions. Specifically, we use a robust perceptron algorithm to compute a good partial classifier and then iterate on the unclassified points. In order to achieve this, we need to take a set defined by a number of polynomial inequalities and partition it into several well-behaved subsets. To this end, we develop new polynomial decomposition techniques that may be of independent interest. @InProceedings{STOC24p152, author = {Ilias Diakonikolas and Daniel M.
Kane and Vasilis Kontonis and Sihan Liu and Nikos Zarifis}, title = {Super Nonsingular Decompositions of Polynomials and Their Application to Robustly Learning Low-Degree PTFs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {152--159}, doi = {10.1145/3618260.3649776}, year = {2024}, } Publisher's Version STOC '24: "Testing Closeness of Multivariate ..." Testing Closeness of Multivariate Distributions via Ramsey Theory Ilias Diakonikolas, Daniel M. Kane, and Sihan Liu (University of Wisconsin–Madison, USA; University of California at San Diego, USA) We investigate the statistical task of closeness (or equivalence) testing for multidimensional distributions. Specifically, given sample access to two unknown distributions p, q on ℝ^{d}, we want to distinguish between the case that p=q versus ‖p−q‖_{A_k} > ε, where ‖p−q‖_{A_k} denotes the generalized A_{k} distance between p and q — measuring the maximum discrepancy between the distributions over any collection of k disjoint, axis-aligned rectangles. Our main result is the first closeness tester for this problem with sub-learning sample complexity in any fixed dimension and a nearly-matching sample complexity lower bound. In more detail, we provide a computationally efficient closeness tester with sample complexity O((k^{6/7}/poly_{d}(ε)) log^{d}(k)). On the lower bound side, we establish a qualitatively matching sample complexity lower bound of Ω(k^{6/7}/poly(ε)), even for d=2. These sample complexity bounds are surprising because the sample complexity of the problem in the univariate setting is Θ(k^{4/5}/poly(ε)). This has the interesting consequence that the jump from one to two dimensions leads to a substantial increase in sample complexity, while increases beyond that do not.
As a corollary of our general A_{k} tester, we obtain d_{TV}-closeness testers for pairs of k-histograms on ℝ^{d} over a common unknown partition, and pairs of uniform distributions supported on the union of k unknown disjoint axis-aligned rectangles. Both our algorithm and our lower bound make essential use of tools from Ramsey theory. @InProceedings{STOC24p340, author = {Ilias Diakonikolas and Daniel M. Kane and Sihan Liu}, title = {Testing Closeness of Multivariate Distributions via Ramsey Theory}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {340--347}, doi = {10.1145/3618260.3649657}, year = {2024}, } Publisher's Version STOC '24: "Locality Bounds for Sampling ..." Locality Bounds for Sampling Hamming Slices Daniel M. Kane, Anthony Ostuni, and Kewen Wu (University of California at San Diego, USA; University of California at Berkeley, USA) Spurred by the influential work of Viola (SIAM Journal on Computing 2012), the past decade has witnessed an active line of research into the complexity of (approximately) sampling distributions, in contrast to the traditional focus on the complexity of computing functions. We build upon and make explicit earlier implicit results of Viola to provide super-constant lower bounds on the locality of Boolean functions approximately sampling the uniform distribution over binary strings of particular Hamming weights, both exactly and modulo an integer, answering questions of Viola (SIAM Journal on Computing 2012) and Filmus, Leigh, Riazanov, and Sokolov (RANDOM 2023). Applications to data structure lower bounds and quantum-classical separations are discussed. This is an extended abstract. The full paper can be found at https://arxiv.org/abs/2402.14278. @InProceedings{STOC24p1279, author = {Daniel M. Kane and Anthony Ostuni and Kewen Wu}, title = {Locality Bounds for Sampling Hamming Slices}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1279--1286}, doi = {10.1145/3618260.3649670}, year = {2024}, } Publisher's Version

Kaski, Petteri 
STOC '24: "The Asymptotic Rank Conjecture ..."
The Asymptotic Rank Conjecture and the Set Cover Conjecture Are Not Both True
Andreas Björklund and Petteri Kaski (IT University of Copenhagen, Copenhagen, Denmark; Aalto University, Finland) Strassen’s asymptotic rank conjecture [Progr. Math. 120 (1994)] claims a strong submultiplicative upper bound on the rank of a three-tensor obtained as an iterated Kronecker product of a constant-size base tensor. The conjecture, if true, most notably would put square matrix multiplication in quadratic time. We note here that some more-or-less unexpected algorithmic results in the area of exponential-time algorithms would also follow. Specifically, we study the so-called set cover conjecture, which states that for any ε>0 there exists a positive integer constant k such that no algorithm solves the k-Set Cover problem in worst-case time O((2−ε)^{n}·poly(n)). The k-Set Cover problem asks, given as input an n-element universe U, a family F of size-at-most-k subsets of U, and a positive integer t, whether there is a subfamily of at most t sets in F whose union is U. The conjecture was formulated by Cygan, Fomin, Kowalik, Lokshtanov, Marx, Pilipczuk, Pilipczuk, and Saurabh in the monograph Parameterized Algorithms [Springer, 2015], but was implicit as a hypothesis already in Cygan, Dell, Lokshtanov, Marx, Nederlof, Okamoto, Paturi, Saurabh, and Wahlström [CCC 2012, ACM Trans. Algorithms 2016], there conjectured to follow from the Strong Exponential Time Hypothesis. We prove that if the asymptotic rank conjecture is true, then the set cover conjecture is false. Using a reduction by Krauthgamer and Trabelsi [STACS 2019], in this scenario we would also get an O((2−δ)^{n})-time randomized algorithm for some constant δ>0 for another well-studied problem for which no such algorithm is known, namely that of deciding whether a given n-vertex directed graph has a Hamiltonian cycle. At a fine-grained level, our results do not need the full strength of the asymptotic rank conjecture; it suffices that the conclusion of the conjecture holds approximately for a single 7×7×7 tensor.
@InProceedings{STOC24p859, author = {Andreas Björklund and Petteri Kaski}, title = {The Asymptotic Rank Conjecture and the Set Cover Conjecture Are Not Both True}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {859--870}, doi = {10.1145/3618260.3649656}, year = {2024}, } Publisher's Version
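The k-Set Cover decision problem at the heart of the conjecture can be stated executably; a brute-force sketch (exponential-time toy, purely to pin down the problem statement, with a made-up instance):

```python
from itertools import combinations

def k_set_cover(universe, family, t, k):
    """Decide the k-Set Cover instance from the abstract by brute force:
    given an n-element universe U, a family of subsets each of size at
    most k, and a budget t, is there a subfamily of at most t sets whose
    union is U? Tries every subfamily, so exponential time."""
    universe = set(universe)
    assert all(len(s) <= k for s in family)
    if not universe:
        return True
    for r in range(1, t + 1):
        for sub in combinations(family, r):
            if set().union(*sub) >= universe:
                return True
    return False

U = {1, 2, 3, 4}
F = [{1, 2}, {3, 4}, {1, 3}, {2}]
# {1, 2} and {3, 4} cover U, so a budget of two sets suffices; one does not.
```

The conjecture asserts that, for some fixed k, nothing fundamentally faster than 2^n-type enumeration over subsets of U is possible; the paper shows this contradicts the asymptotic rank conjecture.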

Kaufman, Tali 
STOC '24: "Cosystolic Expansion of Sheaves ..."
Cosystolic Expansion of Sheaves on Posets with Applications to Good 2-Query Locally Testable Codes and Lifted Codes
Uriya A. First and Tali Kaufman (University of Haifa, Israel; Bar-Ilan University, Israel) We show that cosystolic expansion of sheaves on posets can be derived from local expansion conditions of the sheaf and the poset. When the poset at hand is a cell complex — typically a high-dimensional expander — a sheaf may be thought of as generalizing coefficient groups used for defining homology and cohomology, by letting the coefficient group vary along the cell complex. Previous works established local criteria for cosystolic expansion only for simplicial complexes and with respect to constant coefficients. Our main technical contribution is providing a criterion that is more general in two ways: it applies to posets and sheaves, respectively. The importance of working with sheaves on posets (rather than constant coefficients and simplicial complexes) stems from applications to locally testable codes (LTCs). It has been observed by Kaufman–Lubotzky that cosystolic expansion is related to property testing in the context of simplicial complexes and constant coefficients, but unfortunately, this special case does not give rise to interesting LTCs. We observe that this relation also exists in the much more general setting of sheaves on posets. As the language of sheaves is more expressive, it allows us to put this relation to use. Specifically, we apply our criterion for cosystolic expansion in two ways. First, we show the existence of good 2-query LTCs. These codes are actually related to the recent good q-query LTCs of Dinur–Evra–Livne–Lubotzky–Mozes and Panteleev–Kalachev, being the formers’ so-called line codes, but we get them from a new, more illuminating perspective. By realizing these codes as cocycle codes of sheaves on posets, we can derive their good properties directly from our criterion for cosystolic expansion. The local expansion conditions that our criterion requires unfold to the conditions on the “small codes” in Dinur et al. and Panteleev–Kalachev, and hence give a conceptual explanation to why conditions such as agreement testability are required. Second, we show that local testability of a lifted code could be derived solely from local conditions, namely from agreement expansion properties of the local “small” codes which define it. In a work of Dikstein–Dinur–Harsha–Ron-Zewi, it was shown that one can obtain local testability of lifted codes from a mixture of local and global conditions, namely, from local testability of the local codes and global agreement expansion of an auxiliary 3-layer system called a multilayered agreement sampler. Our result achieves the same, but using genuinely local conditions and a simpler 3-layer structure. It is derived neatly from our local criterion for cosystolic expansion, by interpreting the situation in the language of sheaves on posets. @InProceedings{STOC24p1446, author = {Uriya A. First and Tali Kaufman}, title = {Cosystolic Expansion of Sheaves on Posets with Applications to Good 2-Query Locally Testable Codes and Lifted Codes}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1446--1457}, doi = {10.1145/3618260.3649625}, year = {2024}, } Publisher's Version

Kawamura, Akitoshi 
STOC '24: "Proof of the Density Threshold ..."
Proof of the Density Threshold Conjecture for Pinwheel Scheduling
Akitoshi Kawamura (Kyoto University, Kyoto, Japan) In the pinwheel scheduling problem, each task i is associated with a positive integer a_{i} called its period, and we want to (perpetually) schedule one task per day so that each task i is performed at least once every a_{i} days. An obvious necessary condition for schedulability is that the density, i.e., the sum of the reciprocals 1/a_{i}, not exceed 1. We prove that all instances with density not exceeding 5/6 are schedulable, as was conjectured by Chan and Chin in 1993. Like some of the known partial progress towards the conjecture, our proof involves computer search for schedules for a large but finite set of instances. A key idea in our reduction to these finite cases is to generalize the problem to fractional (noninteger) periods in an appropriate way. As byproducts of our ideas, we obtain a simple proof that every instance with two distinct periods and density at most 1 is schedulable, as well as a fast algorithm for the bamboo garden trimming problem with approximation ratio 4/3. @InProceedings{STOC24p1816, author = {Akitoshi Kawamura}, title = {Proof of the Density Threshold Conjecture for Pinwheel Scheduling}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1816--1819}, doi = {10.1145/3618260.3649757}, year = {2024}, } Publisher's Version
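The density condition and schedulability can both be checked directly on small instances (a brute-force sketch over the finite state space, in the spirit of — but far cruder than — the paper's computer search; the state encoding is my own toy choice):

```python
from fractions import Fraction

def density(periods):
    """Sum of reciprocals 1/a_i; the theorem says density <= 5/6 implies
    schedulability."""
    return sum(Fraction(1, a) for a in periods)

def schedulable(periods):
    """Brute-force pinwheel schedulability. A state records, for each task,
    how many days remain before its deadline; running task j resets its
    counter to a_j while every other counter drops by one. The state space
    is finite, so the instance is schedulable iff some reachable cycle
    never drives a counter to 0."""
    periods = tuple(periods)
    n = len(periods)
    memo = {}
    path = set()

    def win(state):
        if state in memo:
            return memo[state]
        if state in path:
            return True  # found a cycle: repeat it forever
        path.add(state)
        ok = False
        for j in range(n):
            nxt = tuple(periods[j] if i == j else state[i] - 1 for i in range(n))
            if min(nxt) >= 1 and win(nxt):
                ok = True
                break
        path.remove(state)
        memo[state] = ok
        return ok

    return win(periods)
```

For example, {2, 3} has density exactly 5/6 and is schedulable, while the classic family {2, 3, M} is unschedulable for every M despite density below 1, which is why 5/6 is the right threshold.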

Kawarabayashi, Ken-ichi
STOC '24: "EdgeDisjoint Paths in Eulerian ..."
Edge-Disjoint Paths in Eulerian Digraphs
Dario Giuliano Cavallaro, Ken-ichi Kawarabayashi, and Stephan Kreutzer (TU Berlin, Berlin, Germany; National Institute of Informatics, Tokyo, Japan; University of Tokyo, Tokyo, Japan) Disjoint paths problems are among the most prominent problems in combinatorial optimisation. The Edge- as well as the Vertex-Disjoint Paths problems are NP-complete, both on directed and undirected graphs. But on undirected graphs, Robertson and Seymour developed an algorithm for both problems that runs in cubic time for every fixed number p of terminal pairs, i.e., they proved that the problem is fixed-parameter tractable on undirected graphs. This is in sharp contrast to the situation on directed graphs, where Fortune, Hopcroft, and Wyllie proved that both problems are NP-complete already for p=2 terminal pairs. In this paper, we study the Edge-Disjoint Paths problem (EDPP) on Eulerian digraphs, a problem that has received significant attention in the literature. Marx proved that the Eulerian EDPP is NP-complete even on structurally very simple Eulerian digraphs. On the positive side, polynomial-time algorithms are known only for very restricted cases, such as p ≤ 3 or where the demand graph is a union of two stars. The question for which values of p the Edge-Disjoint Paths problem can be solved in polynomial time on Eulerian digraphs was already raised by Frank, Ibaraki, and Nagamochi almost 30 years ago. But despite considerable effort, the complexity of the problem is still wide open and is considered to be the main open problem in this area. In this paper, we solve this long-open problem by showing that the Edge-Disjoint Paths problem is fixed-parameter tractable on Eulerian digraphs in general (parameterized by the number of terminal pairs). The algorithm itself is reasonably simple but the proof of its correctness requires a deep structural analysis of Eulerian digraphs.
@InProceedings{STOC24p704, author = {Dario Giuliano Cavallaro and Ken-ichi Kawarabayashi and Stephan Kreutzer}, title = {Edge-Disjoint Paths in Eulerian Digraphs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {704--715}, doi = {10.1145/3618260.3649758}, year = {2024}, } Publisher's Version STOC '24: "Better Coloring of 3-Colorable ..." Better Coloring of 3-Colorable Graphs Ken-ichi Kawarabayashi, Mikkel Thorup, and Hirotaka Yoneda (National Institute of Informatics, Tokyo, Japan; University of Tokyo, Tokyo, Japan; University of Copenhagen, Copenhagen, Denmark) We consider the problem of coloring a 3-colorable graph in polynomial time using as few colors as possible. This is one of the most challenging problems in graph algorithms. In this paper, using Blum’s notion of “progress”, we develop a new combinatorial algorithm for the following: Given any 3-colorable graph with minimum degree >√n, we can, in polynomial time, make progress towards a k-coloring for some k=√n/· n^{o(1)}. We balance our main result with the best-known semidefinite programming (SDP) approach, which we use for degrees below n^{0.605073}. As a result, we show that Õ(n^{0.19747}) colors suffice for coloring 3-colorable graphs. This improves on the previous best bound of Õ(n^{0.19996}) by Kawarabayashi and Thorup from 2017. @InProceedings{STOC24p331, author = {Ken-ichi Kawarabayashi and Mikkel Thorup and Hirotaka Yoneda}, title = {Better Coloring of 3-Colorable Graphs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {331--339}, doi = {10.1145/3618260.3649768}, year = {2024}, } Publisher's Version STOC '24: "Packing Even Directed Circuits ..."
Packing Even Directed Circuits Quarter-Integrally Maximilian Gorsky, Ken-ichi Kawarabayashi, Stephan Kreutzer, and Sebastian Wiederrecht (TU Berlin, Berlin, Germany; National Institute of Informatics, Tokyo, Japan; University of Tokyo, Tokyo, Japan; Institute for Basic Science, Daejeon, South Korea) We prove the existence of a computable function f: ℕ→ℕ such that for every integer k and every digraph D, either D contains a collection C of k directed cycles of even length such that no vertex of D belongs to more than four cycles in C, or there exists a set S ⊆ V(D) of size at most f(k) such that D−S has no directed cycle of even length. Moreover, we provide an algorithm that finds one of the two outcomes of this statement in time g(k)·n^{O(1)} for some computable function g: ℕ→ℕ. Our result unites two deep fields of research from the algorithmic theory for digraphs: the study of the Erdős–Pósa property of digraphs and the study of the Even Dicycle Problem. The latter is the decision problem which asks if a given digraph contains an even dicycle and can be traced back to a question of Pólya from 1913. It remained open until a polynomial-time algorithm was finally found by Robertson, Seymour, and Thomas (Ann. of Math. (2) 1999) and, independently, McCuaig (Electron. J. Combin. 2004; announced jointly at STOC 1997). The Even Dicycle Problem is equivalent to the recognition problem of Pfaffian bipartite graphs and has applications even beyond discrete mathematics and theoretical computer science. On the other hand, Younger’s Conjecture (1973) states that dicycles have the Erdős–Pósa property. The conjecture was proven more than two decades later by Reed, Robertson, Seymour, and Thomas (Combinatorica 1996) and opened the path for structural digraph theory as well as the algorithmic study of the directed feedback vertex set problem.
Our approach builds upon the techniques used to resolve both problems and combines them into a powerful structural theorem that yields further algorithmic applications for other prominent problems. @InProceedings{STOC24p692, author = {Maximilian Gorsky and Ken-ichi Kawarabayashi and Stephan Kreutzer and Sebastian Wiederrecht}, title = {Packing Even Directed Circuits Quarter-Integrally}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {692--703}, doi = {10.1145/3618260.3649682}, year = {2024}, } Publisher's Version

Kelley, Zander 
STOC '24: "Explicit Separations between ..."
Explicit Separations between Randomized and Deterministic Number-on-Forehead Communication
Zander Kelley, Shachar Lovett, and Raghu Meka (University of Illinois at Urbana-Champaign, USA; University of California at San Diego, USA; University of California at Los Angeles, USA) We study the power of randomness in the Number-on-Forehead (NOF) model in communication complexity. We construct an explicit 3-player function f:[N]^{3} → {0,1}, such that: (i) there exists a randomized NOF protocol computing it that sends a constant number of bits; but (ii) any deterministic or nondeterministic NOF protocol computing it requires sending about (log N)^{1/3} many bits. This exponentially improves upon the previously best-known such separation. At the core of our proof is an extension of a recent result on sets of integers without 3-term arithmetic progressions into a non-arithmetic setting. @InProceedings{STOC24p1299, author = {Zander Kelley and Shachar Lovett and Raghu Meka}, title = {Explicit Separations between Randomized and Deterministic Number-on-Forehead Communication}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1299--1310}, doi = {10.1145/3618260.3649721}, year = {2024}, } Publisher's Version STOC '24: "New Graph Decompositions and ..." New Graph Decompositions and Combinatorial Boolean Matrix Multiplication Algorithms Amir Abboud, Nick Fischer, Zander Kelley, Shachar Lovett, and Raghu Meka (Weizmann Institute of Science, Israel; University of Illinois at Urbana-Champaign, USA; University of California at San Diego, USA; University of California at Los Angeles, USA) We revisit the fundamental Boolean Matrix Multiplication (BMM) problem. With the invention of algebraic fast matrix multiplication over 50 years ago, it also became known that BMM can be solved in truly subcubic O(n^{ω}) time, where ω<3; much work has gone into bringing ω closer to 2. Since then, a parallel line of work has sought comparably fast combinatorial algorithms, but with limited success.
The naive O(n^{3})-time algorithm was initially improved by a log^{2}n factor [Arlazarov et al.; RAS’70], then by log^{2.25}n [Bansal and Williams; FOCS’09], then by log^{3}n [Chan; SODA’15], and finally by log^{4}n [Yu; ICALP’15]. We design a combinatorial algorithm for BMM running in time n^{3} / 2^{Ω((log n)^{1/7})}, a speedup over cubic time that is stronger than any polylog factor. This comes tantalizingly close to refuting the conjecture from the 90s that truly subcubic combinatorial algorithms for BMM are impossible. This popular conjecture is the basis for dozens of fine-grained hardness results. Our main technical contribution is a new regularity decomposition theorem for Boolean matrices (or equivalently, bipartite graphs) under a notion of regularity that was recently introduced and analyzed analytically in the context of communication complexity [Kelley, Lovett, Meka; STOC’24], and is related to a similar notion from the recent work on 3-term-arithmetic-progression-free sets [Kelley, Meka; FOCS’23]. @InProceedings{STOC24p935, author = {Amir Abboud and Nick Fischer and Zander Kelley and Shachar Lovett and Raghu Meka}, title = {New Graph Decompositions and Combinatorial Boolean Matrix Multiplication Algorithms}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {935--943}, doi = {10.1145/3618260.3649696}, year = {2024}, } Publisher's Version 
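As a concrete baseline for the combinatorial speedups the abstract surveys, the simplest trick is word-level parallelism: packing each row of B into a machine word so that OR-ing a row costs one operation per word rather than per bit. This is a minimal illustrative sketch (not the paper's algorithm; the function name is ours):

```python
def bmm_bitset(A, B):
    """Boolean matrix product C = A*B for 0/1 matrices, using Python
    integers as bitsets: row i of C is the OR of the rows of B selected
    by the 1-entries of row i of A, computed word-parallel."""
    n = len(A)
    # Pack row j of B into an integer whose k-th bit is B[j][k].
    b_rows = [sum(bit << k for k, bit in enumerate(row)) for row in B]
    C = []
    for i in range(n):
        acc = 0
        for j in range(n):
            if A[i][j]:  # select row j of B into the OR-accumulator
                acc |= b_rows[j]
        C.append([(acc >> k) & 1 for k in range(n)])
    return C
```

This recovers only a factor-of-word-size saving; the log-factor improvements cited above, let alone the paper's super-polylogarithmic speedup, require genuinely new ideas such as the regularity decomposition.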

Kesselheim, Thomas 
STOC '24: "Supermodular Approximation ..."
Supermodular Approximation of Norms and Applications
Thomas Kesselheim, Marco Molinaro, and Sahil Singla (University of Bonn, Germany; PUC-Rio, Brazil; Georgia Institute of Technology, USA) Many classical problems in theoretical computer science involve norms, even if implicitly; for example, both XOS functions and downward-closed sets are equivalent to some norms. The last decade has seen a lot of interest in designing algorithms beyond the standard ℓ_{p} norms ‖·‖_{p}. Despite notable advancements, many existing methods remain tailored to specific problems, leaving a broader applicability to general norms less understood. This paper investigates the intrinsic properties of ℓ_{p} norms that facilitate their widespread use and seeks to abstract these qualities to a more general setting. We identify supermodularity (often reserved for combinatorial set functions and characterized by monotone gradients) as a defining feature beneficial for ‖·‖_{p}^{p}. We introduce the notion of p-supermodularity for norms, asserting that a norm is p-supermodular if its p^{th} power function exhibits supermodularity. The association of supermodularity with norms offers a new lens through which to view and construct algorithms. Our work demonstrates that, for a large class of problems, p-supermodularity is a sufficient criterion for developing good algorithms. This is either by reframing existing algorithms for problems like Online Load-Balancing and Bandits with Knapsacks through a supermodular lens, or by introducing novel analyses for problems such as Online Covering, Online Packing, and Stochastic Probing. Moreover, we prove that every symmetric norm can be approximated by a p-supermodular norm. Together, these recover and extend several existing results, and support p-supermodularity as a unified theoretical framework for optimization challenges centered around norm-related problems. 
@InProceedings{STOC24p1841, author = {Thomas Kesselheim and Marco Molinaro and Sahil Singla}, title = {Supermodular Approximation of Norms and Applications}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1841--1852}, doi = {10.1145/3618260.3649734}, year = {2024}, } Publisher's Version 
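The lattice supermodularity underlying p-supermodularity can be probed numerically: a function f on ℝ^{d}_{≥0} is supermodular if f(x∨y) + f(x∧y) ≥ f(x) + f(y), where ∨/∧ are componentwise max/min. A small sketch of such a check (our own illustration, not from the paper):

```python
def is_supermodular_on_samples(f, pairs, tol=1e-9):
    """Test the supermodularity inequality f(x v y) + f(x ^ y) >= f(x) + f(y)
    on a list of sample pairs (x, y), with v/^ taken componentwise.
    A True result is only evidence on these samples, not a proof."""
    for x, y in pairs:
        join = [max(a, b) for a, b in zip(x, y)]  # componentwise max
        meet = [min(a, b) for a, b in zip(x, y)]  # componentwise min
        if f(join) + f(meet) < f(x) + f(y) - tol:
            return False
    return True
```

For instance, the p-th power of an ℓ_{p} norm is a sum of coordinate-wise terms and satisfies the inequality with equality, while a concave function of the coordinate sum typically violates it.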

Khanna, Sanjeev 
STOC '24: "Maximum Bipartite Matching ..."
Maximum Bipartite Matching in 𝑛^{2+𝑜(1)} Time via a Combinatorial Algorithm
Julia Chuzhoy and Sanjeev Khanna (Toyota Technological Institute, Chicago, USA; University of Pennsylvania, USA) Maximum bipartite matching (MBM) is a fundamental problem in combinatorial optimization with a long and rich history. A classic result of Hopcroft and Karp (1973) provides an O(m√n)-time algorithm for the problem, where n and m are the number of vertices and edges in the input graph, respectively. For dense graphs, an approach based on fast matrix multiplication achieves a running time of O(n^{2.371}). For several decades, these results represented the state-of-the-art algorithms, until, in 2013, Madry introduced a powerful new approach for solving MBM using continuous optimization techniques. This line of research, which builds on continuous techniques based on interior-point methods, led to several spectacular results, culminating in a breakthrough m^{1+o(1)}-time algorithm for min-cost flow, which implies an m^{1+o(1)}-time algorithm for MBM as well. These striking advances naturally raise the question of whether combinatorial algorithms can match the performance of the algorithms based on continuous techniques for MBM. One reason to explore combinatorial algorithms is that they are often more transparent than their continuous counterparts, and that the tools and techniques developed for such algorithms may be useful in other settings, including, for example, developing faster algorithms for maximum matching in general graphs. A recent work of Chuzhoy and Khanna (2024) made progress on this question by giving a combinatorial Õ(m^{1/3}n^{5/3})-time algorithm for MBM, thus outperforming both the Hopcroft-Karp algorithm and matrix-multiplication-based approaches on sufficiently dense graphs. Still, a large gap remains between the running time of their algorithm and the almost-linear time achievable by algorithms based on continuous techniques. 
In this work, we take another step towards narrowing this gap, and present a randomized n^{2+o(1)}-time combinatorial algorithm for MBM. Thus, in dense graphs, our algorithm essentially matches the performance of algorithms based on continuous methods. Similar to the classical algorithms for MBM and the approach used in the work of Chuzhoy and Khanna (2024), our algorithm is based on iterative augmentation of a current matching using augmenting paths in the corresponding (directed) residual flow network. Our main contribution is a recursive algorithm that exploits the special structure of the resulting flow problem to recover an Ω(1/log^{2} n)-fraction of the remaining augmentations in n^{2+o(1)} time. Finally, we obtain a randomized n^{2+o(1)}-time algorithm for maximum vertex-capacitated s-t flow in directed graphs when all vertex capacities are identical, using a standard reduction from this problem to MBM. @InProceedings{STOC24p83, author = {Julia Chuzhoy and Sanjeev Khanna}, title = {Maximum Bipartite Matching in 𝑛<sup>2+𝑜(1)</sup> Time via a Combinatorial Algorithm}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {83--94}, doi = {10.1145/3618260.3649725}, year = {2024}, } Publisher's Version 
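The iterative-augmentation scheme the abstract describes is easiest to see in its classical O(mn) form, which repeatedly searches the residual graph for a single augmenting path (a sketch for intuition only; the paper's n^{2+o(1)} algorithm recovers many augmentations at once):

```python
def max_bipartite_matching(adj, n_left, n_right):
    """Classical augmenting-path algorithm for maximum bipartite matching.
    adj[u] lists the right-side neighbours of left vertex u. Returns the
    matching size and match_r (right vertex -> matched left vertex, or -1)."""
    match_r = [-1] * n_right

    def try_augment(u, seen):
        # DFS for an augmenting path starting at the free left vertex u.
        for v in adj[u]:
            if not seen[v]:
                seen[v] = True
                # v is free, or its current partner can be re-matched elsewhere.
                if match_r[v] == -1 or try_augment(match_r[v], seen):
                    match_r[v] = u
                    return True
        return False

    size = sum(try_augment(u, [False] * n_right) for u in range(n_left))
    return size, match_r
```

Each successful call flips one augmenting path in the residual network, exactly the primitive whose batched, recursive acceleration is the paper's main contribution.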

Khodabandeh, Mohammad Mahdi 
STOC '24: "On the Power of Interactive ..."
On the Power of Interactive Proofs for Learning
Tom Gur, Mohammad Mahdi Jahanara, Mohammad Mahdi Khodabandeh, Ninad Rajgopal, Bahar Salamatian, and Igor Shinkar (University of Cambridge, United Kingdom; Simon Fraser University, Canada; Qualcomm, Canada) We continue the study of doubly-efficient proof systems for verifying agnostic PAC learning, for which we obtain the following results. We construct an interactive protocol for learning the t largest Fourier characters of a given function f ∶ {0,1}^{n} → {0,1} up to an arbitrarily small error, wherein the verifier uses poly(t) random examples. This improves upon the Interactive Goldreich-Levin protocol of Goldwasser, Rothblum, Shafer, and Yehudayoff (ITCS 2021), whose sample complexity is poly(t,n). For agnostically learning the class AC^{0}[2] under the uniform distribution, we build on the work of Carmosino, Impagliazzo, Kabanets, and Kolokolova (APPROX/RANDOM 2017) and design an interactive protocol, where given a function f ∶ {0,1}^{n} → {0,1}, the verifier learns the closest hypothesis up to a polylog(n) multiplicative factor, using quasi-polynomially many random examples. In contrast, this class has been notoriously resistant even to constructing realisable learners (without a prover) using random examples. For agnostically learning k-juntas under the uniform distribution, we obtain an interactive protocol, where the verifier uses O(2^{k}) random examples of a given function f ∶ {0,1}^{n} → {0,1}. Crucially, the sample complexity of the verifier is independent of n. We also show that if we do not insist on doubly-efficient proof systems, then the model becomes trivial. Specifically, we show a protocol for an arbitrary class C of Boolean functions in the distribution-free setting, where the verifier uses O(1) labeled examples to learn f. 
@InProceedings{STOC24p1063, author = {Tom Gur and Mohammad Mahdi Jahanara and Mohammad Mahdi Khodabandeh and Ninad Rajgopal and Bahar Salamatian and Igor Shinkar}, title = {On the Power of Interactive Proofs for Learning}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1063--1070}, doi = {10.1145/3618260.3649784}, year = {2024}, } Publisher's Version 
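The verifier's basic primitive in the first protocol, estimating a single Fourier coefficient from random labelled examples, can be sketched with the standard empirical estimator (our illustration only; it omits the prover's role entirely):

```python
import random

def estimate_fourier_coefficient(f, S, n, samples=2000):
    """Estimate the Fourier coefficient of a Boolean function
    f: {0,1}^n -> {0,1}, namely E_x[(-1)^(f(x) + sum_{i in S} x_i)],
    by averaging over uniformly random examples."""
    total = 0.0
    for _ in range(samples):
        x = [random.randrange(2) for _ in range(n)]
        chi = (-1) ** sum(x[i] for i in S)  # the character chi_S(x)
        total += (-1) ** f(x) * chi
    return total / samples
```

Finding which of the exponentially many coefficients are large is the hard part; that is where interaction with the prover brings the verifier's sample complexity down to poly(t).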

Khot, Subhash 
STOC '24: "On Approximability of Satisfiable ..."
On Approximability of Satisfiable k-CSPs: IV
Amey Bhangale, Subhash Khot, and Dor Minzer (University of California at Riverside, USA; New York University, USA; Massachusetts Institute of Technology, USA) We prove a stability result for general 3-wise correlations over distributions satisfying mild connectivity properties. More concretely, we show that if Σ, Γ, and Φ are alphabets of constant size, and µ is a distribution over Σ×Γ×Φ satisfying: (1) the probability of each atom is at least Ω(1), (2) µ is pairwise connected, and (3) µ has no Abelian embeddings into (ℤ,+), then the following holds. Any triple of 1-bounded functions f∶ Σ^{n}→ℂ, g∶ Γ^{n}→ℂ, h∶ Φ^{n}→ℂ satisfying |𝔼_{(x,y,z)∼ µ^{⊗ n}}[f(x)g(y)h(z)]| ≥ ε must arise from an Abelian group associated with the distribution µ. More specifically, we show that there is an Abelian group (H,+) of constant size such that for any such f, g, and h, the function f (and similarly g and h) is correlated with a function of the form f(x) = χ(σ(x_{1}),…,σ(x_{n})) · L(x), where σ∶ Σ → H is some map, χ∈ Ĥ^{⊗ n} is a character, and L∶ Σ^{n}→ℂ is a low-degree function with bounded 2-norm. En route we prove a few additional results that may be of independent interest, such as an improved direct product theorem, as well as a result we refer to as a “restriction inverse theorem” about the structure of functions that, under random restrictions, with noticeable probability have significant correlation with a product function. In companion papers, we show applications of our results to the field of Probabilistically Checkable Proofs, as well as to various areas of discrete mathematics such as extremal combinatorics and additive combinatorics. @InProceedings{STOC24p1423, author = {Amey Bhangale and Subhash Khot and Dor Minzer}, title = {On Approximability of Satisfiable k-CSPs: IV}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1423--1434}, doi = {10.1145/3618260.3649610}, year = {2024}, } Publisher's Version 

Khurana, Dakshita 
STOC '24: "Commitments from Quantum One-Wayness ..."
Commitments from Quantum One-Wayness
Dakshita Khurana and Kabir Tomer (University of Illinois at Urbana-Champaign, USA) One-way functions are central to classical cryptography. They are necessary for the existence of nontrivial classical cryptosystems, and also sufficient to realize meaningful primitives including commitments, pseudorandom generators, and digital signatures. At the same time, a mounting body of evidence suggests that assumptions even weaker than one-way functions may suffice for many cryptographic tasks of interest in a quantum world, including bit commitments and secure multiparty computation. This work studies one-way state generators [Morimae-Yamakawa, CRYPTO 2022], a natural quantum relaxation of one-way functions. Given a secret key, a one-way state generator outputs a hard-to-invert quantum state. A fundamental question is whether this type of quantum one-wayness suffices to realize quantum cryptography. We obtain an affirmative answer to this question, by proving that one-way state generators with pure state outputs imply quantum bit commitments and secure multiparty computation. Along the way, we use efficient shadow tomography [Huang et al., Nature Physics 2020] to build an intermediate primitive with classical outputs, which we call a (quantum) one-way puzzle. Our main technical contribution is a proof that one-way puzzles imply quantum bit commitments. This proof develops new techniques for pseudoentropy generation [Hastad et al., SICOMP 1999] from arbitrary distributions, which may be of independent interest. @InProceedings{STOC24p968, author = {Dakshita Khurana and Kabir Tomer}, title = {Commitments from Quantum One-Wayness}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {968--978}, doi = {10.1145/3618260.3649654}, year = {2024}, } Publisher's Version 

Kim, Isaac 
STOC '24: "Learning Shallow Quantum Circuits ..."
Learning Shallow Quantum Circuits
Hsin-Yuan Huang, Yunchao Liu, Michael Broughton, Isaac Kim, Anurag Anshu, Zeph Landau, and Jarrod R. McClean (California Institute of Technology, USA; Google Quantum AI, USA; University of California at Berkeley, USA; University of California at Davis, USA; Harvard University, USA) Despite fundamental interest in learning quantum circuits, the existence of a computationally efficient algorithm for learning shallow quantum circuits remains an open question. Because shallow quantum circuits can generate distributions that are classically hard to sample from, existing learning algorithms do not apply. In this work, we present a polynomial-time classical algorithm for learning the description of any unknown n-qubit shallow quantum circuit U (with arbitrary unknown architecture) within a small diamond distance using single-qubit measurement data on the output states of U. We also provide a polynomial-time classical algorithm for learning the description of any unknown n-qubit state |ψ⟩ = U|0^{n}⟩ prepared by a shallow quantum circuit U (on a 2D lattice) within a small trace distance using single-qubit measurements on copies of |ψ⟩. Our approach uses a quantum circuit representation based on local inversions and a technique to combine these inversions. This circuit representation yields an optimization landscape that can be efficiently navigated and enables efficient learning of quantum circuits that are classically hard to simulate. @InProceedings{STOC24p1343, author = {Hsin-Yuan Huang and Yunchao Liu and Michael Broughton and Isaac Kim and Anurag Anshu and Zeph Landau and Jarrod R. McClean}, title = {Learning Shallow Quantum Circuits}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1343--1351}, doi = {10.1145/3618260.3649722}, year = {2024}, } Publisher's Version 

Kindler, Guy 
STOC '24: "Product Mixing in Compact ..."
Product Mixing in Compact Lie Groups
David Ellis, Guy Kindler, Noam Lifshitz, and Dor Minzer (University of Bristol, United Kingdom; Hebrew University of Jerusalem, Israel; Massachusetts Institute of Technology, USA) If G is a group, we say a subset S of G is product-free if the equation xy=z has no solutions with x,y,z ∈ S. In 1985, Babai and Sós asked, for a finite group G, how large a subset S⊆ G can be if it is product-free. The main tool (hitherto) for studying this problem has been the notion of a quasirandom group. For D ∈ ℕ, a group G is said to be D-quasirandom if the minimal dimension of a nontrivial complex irreducible representation of G is at least D. Gowers showed that in a D-quasirandom finite group G, the maximal size of a product-free set is at most |G|/D^{1/3}. This disproved a longstanding conjecture of Babai and Sós from 1985. For the special unitary group G=SU(n), Gowers observed that his argument yields an upper bound of n^{−1/3} on the measure of a measurable product-free subset. In this paper, we improve Gowers’ upper bound to exp(−cn^{1/3}), where c>0 is an absolute constant. In fact, we establish something stronger, namely, product-mixing for measurable subsets of SU(n) with measure at least exp(−cn^{1/3}); for this product-mixing result, the n^{1/3} in the exponent is sharp. Our approach involves introducing novel hypercontractive inequalities, which imply that the non-Abelian Fourier spectrum of the indicator function of a small set concentrates on high-dimensional irreducible representations. Our hypercontractive inequalities are obtained via methods from representation theory, harmonic analysis, random matrix theory, and differential geometry. We generalize our hypercontractive inequalities from SU(n) to an arbitrary D-quasirandom compact connected Lie group for D at least an absolute constant, thereby extending our results on product-free sets to such groups. 
We also demonstrate various other applications of our inequalities to geometry (viz., non-Abelian Brunn-Minkowski-type inequalities), mixing times, and the theory of growth in compact Lie groups. A subsequent work due to Arunachalam, Girish, and Lifshitz uses our inequalities to establish new separation results between classical and quantum communication complexity. @InProceedings{STOC24p1415, author = {David Ellis and Guy Kindler and Noam Lifshitz and Dor Minzer}, title = {Product Mixing in Compact Lie Groups}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1415--1422}, doi = {10.1145/3618260.3649626}, year = {2024}, } Publisher's Version 
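The product-free condition itself is elementary and can be checked by brute force in a small finite (here Abelian) toy case, in contrast to the measurable subsets of compact Lie groups such as SU(n) studied in the paper (an illustration of ours):

```python
def is_product_free(elements, op):
    """Return True if the set S contains no solution to op(x, y) = z
    with x, y, z all in S (for additive groups: a sum-free set)."""
    s = set(elements)
    return all(op(x, y) not in s for x in s for y in s)
```

For example, the "middle third" {3, 4, 5} of ℤ_9 is sum-free under addition mod 9, the classical Abelian construction; quasirandomness is precisely what forbids comparably large product-free sets in groups like SU(n).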

Kiss, Peter 
STOC '24: "Near-Optimal Dynamic Rounding ..."
Near-Optimal Dynamic Rounding of Fractional Matchings in Bipartite Graphs
Sayan Bhattacharya, Peter Kiss, Aaron Sidford, and David Wajc (University of Warwick, United Kingdom; Stanford University, USA; Technion, Israel) We study dynamic (1−є)-approximate rounding of fractional matchings, a key ingredient in numerous breakthroughs in the dynamic graph algorithms literature. Our first contribution is a surprisingly simple deterministic rounding algorithm in bipartite graphs with amortized update time O(є^{−1} log^{2} (є^{−1} · n)), matching an (unconditional) recourse lower bound of Ω(є^{−1}) up to logarithmic factors. Moreover, this algorithm’s update time improves provided the minimum (nonzero) weight in the fractional matching is lower bounded throughout. Combining this algorithm with novel dynamic partial rounding algorithms to increase this minimum weight, we obtain a number of algorithms that improve this dependence on n. For example, we give a high-probability randomized algorithm with Õ(є^{−1} · (log log n)^{2}) update time against adaptive adversaries. Using our rounding algorithms, we also round known (1−є)-decremental fractional bipartite matching algorithms with no asymptotic overhead, thus improving on state-of-the-art algorithms for the decremental bipartite matching problem. Further, we provide extensions of our results to general graphs and to maintaining almost-maximal matchings. @InProceedings{STOC24p59, author = {Sayan Bhattacharya and Peter Kiss and Aaron Sidford and David Wajc}, title = {Near-Optimal Dynamic Rounding of Fractional Matchings in Bipartite Graphs}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {59--70}, doi = {10.1145/3618260.3649648}, year = {2024}, } Publisher's Version 

Klein, Nathan 
STOC '24: "Ghost Value Augmentation for ..."
Ghost Value Augmentation for k-Edge-Connectivity
D. Ellis Hershkowitz, Nathan Klein, and Rico Zenklusen (Brown University, USA; Institute for Advanced Study, Princeton, USA; ETH Zurich, Switzerland) We give a poly-time algorithm for the k-edge-connected spanning subgraph (k-ECSS) problem that returns a solution of cost no greater than the cheapest (k+10)-ECSS on the same graph. Our approach enhances the iterative relaxation framework with a new ingredient, which we call ghost values, that allows for high sparsity in intermediate problems. Our guarantees improve upon the best-known approximation factor of 2 for k-ECSS whenever the optimal value of (k+10)-ECSS is close to that of k-ECSS. This is a property that holds for the closely related problem k-edge-connected spanning multi-subgraph (k-ECSM), which is identical to k-ECSS except that edges can be selected multiple times at the same cost. As a consequence, we obtain a (1+O(1/k))-approximation algorithm for k-ECSM, which resolves a conjecture of Pritchard and improves upon a recent (1+O(1/√k))-approximation algorithm of Karlin, Klein, Oveis Gharan, and Zhang. Moreover, we present a matching lower bound for k-ECSM, showing that our approximation ratio is tight up to the constant factor in O(1/k), unless P=NP. @InProceedings{STOC24p1853, author = {D. Ellis Hershkowitz and Nathan Klein and Rico Zenklusen}, title = {Ghost Value Augmentation for k-Edge-Connectivity}, booktitle = {Proc.\ STOC}, publisher = {ACM}, pages = {1853--1864}, doi = {10.1145/3618260.3649715}, year = {2024}, } 