
56th Annual ACM Symposium on Theory of Computing (STOC 2024), June 24–28, 2024, Vancouver, BC, Canada

STOC 2024 – Proceedings


Frontmatter

Title Page


Welcome from the Program Chair
The papers in this volume were presented at the 56th Annual ACM Symposium on Theory of Computing (STOC 2024), sponsored by the ACM Special Interest Group on Algorithms and Computation Theory (SIGACT). The conference was held in Vancouver, Canada, June 24–28, 2024, with the papers being presented as live talks.

STOC 2024 Conference Organization
General Chairs: Bojan Mohar and Igor Shinkar (Simon Fraser University)
Program Chair: Ryan O'Donnell (Carnegie Mellon University)

Sponsors

Keynotes

The Computer in the Sky (Keynote)
Tim Roughgarden
(Columbia University, USA; a16z crypto, USA)
Turing-complete blockchain protocols approximate the idealized abstraction of a "computer in the sky" that is open access, runs in plain view, and, in effect, has no owner or operator. This technology can, among other things, enable stronger notions of ownership of digital possessions than we have ever had before. Building the computer in the sky is hard (and scientifically fascinating). In this talk I'll highlight some of my recent research on this challenge, emphasizing the diversity of mathematical tools required, the immediate practical impact that mathematical work on this topic has had, and some open problems of interest to theoretical computer scientists.

Algorithmic Contract Design (Keynote)
Michal Feldman
(Tel Aviv University, Israel)
Algorithmic contract design is a new frontier at the interface of economics and computation, studying scenarios where a principal delegates the execution of a costly project to an agent or a team of agents, and incentivizes them through a contract that specifies payments contingent on the project's success. This domain has gained increasing interest from the theoretical computer science community, particularly in the realm of combinatorial contracts. In this talk, I will survey two distinct models of combinatorial contracts, each illustrating unique sources of complexity encountered in contract design. The first model allows an agent to select from a set of possible actions, examining the intricate dependencies among these actions. The second model involves motivating a team of agents, focusing on the interdependencies within various agent combinations. I will present both (approximation) algorithms and hardness results concerning the optimal contract problem in these settings. The talk is based on joint work with Paul Duetting, Tomer Ezra, Yoav Gal-Tzur, Thomas Kesselheim and Maya Schlesinger.


1A (Best Papers)

Single-Source Shortest Paths with Negative Real Weights in Õ(mn^(8/9)) Time
Jeremy T. Fineman
(Georgetown University, USA)
This paper presents a randomized algorithm for single-source shortest paths on directed graphs with real (both positive and negative) edge weights. Given an input graph with n vertices and m edges, the algorithm completes in Õ(mn^(8/9)) time with high probability. For real-weighted graphs, this result constitutes the first asymptotic improvement over the classic O(mn)-time algorithm variously attributed to Shimbel, Bellman, Ford, and Moore.
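For context, the O(mn) baseline referenced above is the textbook Bellman-Ford algorithm. A minimal sketch in Python (our illustration, not the paper's algorithm; names are ours):

def bellman_ford(n, edges, source):
    # Classic O(mn) single-source shortest paths with real (possibly
    # negative) edge weights; the baseline this paper improves upon.
    INF = float("inf")
    dist = [INF] * n
    dist[source] = 0.0
    for _ in range(n - 1):  # n-1 relaxation rounds suffice
        changed = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                changed = True
        if not changed:
            break
    for u, v, w in edges:  # one extra pass detects a negative cycle
        if dist[u] + w < dist[v]:
            raise ValueError("negative cycle reachable from source")
    return dist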

Near Optimal Alphabet-Soundness Tradeoff PCPs
Dor Minzer and Kai Zhe Zheng
(Massachusetts Institute of Technology, USA)
We show that for all ε>0, for sufficiently large prime power q, and for all δ>0, it is NP-hard to distinguish whether a 2-Prover-1-Round projection game with alphabet size q has value at least 1−δ, or value at most 1/q^(1−ε). This establishes a nearly optimal alphabet-to-soundness tradeoff for 2-query PCPs with alphabet size q, improving upon a result of [Chan 2016]. Our result has the following implications:
1) Near optimal hardness for Quadratic Programming: it is NP-hard to approximate the value of a given Boolean Quadratic Program within factor (log n)^(1−o(1)) under quasi-polynomial time reductions. This result improves a result of [Khot-Safra 2013] and nearly matches the performance of the best known approximation algorithms [Megretski 2001, Nemirovski-Roos-Terlaky 1999, Charikar-Wirth 2004], which achieve a factor of O(log n).
2) Bounded degree 2-CSPs: under randomized reductions, for sufficiently large d>0, it is NP-hard to approximate the value of 2-CSPs in which each variable appears in at most d constraints within factor (1−o(1))·d/2, improving upon a recent result of [Lee-Manurangsi 2023].
3) Improved hardness results for connectivity problems: using results of [Laekhanukit 2014] and [Manurangsi 2019], we deduce improved hardness results for the Rooted k-Connectivity Problem, the Vertex-Connectivity Survivable Network Design Problem and the Vertex-Connectivity k-Route Cut Problem.

Parameterized Inapproximability Hypothesis under Exponential Time Hypothesis
Venkatesan Guruswami, Bingkai Lin, Xuandi Ren, Yican Sun, and Kewen Wu
(University of California at Berkeley, USA; Nanjing University, China; Peking University, China)
The Parameterized Inapproximability Hypothesis (PIH) asserts that no fixed parameter tractable (FPT) algorithm can distinguish a satisfiable CSP instance, parameterized by the number of variables, from one where every assignment fails to satisfy an ε fraction of constraints for some absolute constant ε > 0. PIH plays the role of the PCP theorem in parameterized complexity. However, PIH has only been established under Gap-ETH, a very strong assumption with an inherent gap. In this work, we prove PIH under the Exponential Time Hypothesis (ETH). This is the first proof of PIH from a gap-free assumption. Our proof is self-contained and elementary. We identify an ETH-hard CSP whose variables take vector values, and constraints are either linear or of a special parallel structure. Both kinds of constraints can be checked with constant soundness via a “parallel PCP of proximity” based on the Walsh-Hadamard code.


2A

Online Edge Coloring Is (Nearly) as Easy as Offline
Joakim Blikstad, Ola Svensson, Radu Vintan, and David Wajc
(KTH Royal Institute of Technology, Stockholm, Sweden; MPI-INF, Germany; EPFL, Lausanne, Switzerland; Technion, Israel)
The classic theorem of Vizing (Diskret. Analiz.’64) asserts that any graph of maximum degree Δ can be edge colored (offline) using no more than Δ+1 colors (with Δ being a trivial lower bound). In the online setting, Bar-Noy, Motwani and Naor (IPL’92) conjectured that a (1+o(1))Δ-edge-coloring can be computed online in n-vertex graphs of maximum degree Δ = ω(log n). Numerous algorithms made progress on this question, using a higher number of colors or assuming restricted arrival models, such as random-order edge arrivals or vertex arrivals (e.g., AGKM FOCS’03, BMM SODA’10, CPW FOCS’19, BGW SODA’21, KLSST STOC’22). In this work, we resolve this longstanding conjecture in the affirmative in the most general setting of adversarial edge arrivals. We further generalize this result to obtain online counterparts of the list edge coloring result of Kahn (J. Comb. Theory. A’96) and of the recent “local” edge coloring result of Christiansen (STOC’23).
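For contrast, the trivial online algorithm greedily assigns each arriving edge the smallest color free at both endpoints, using up to 2Δ − 1 colors; the conjecture resolved above asks for (1+o(1))Δ. A minimal sketch in Python (illustrative only, not the paper's algorithm):

def greedy_online_edge_coloring(edge_stream):
    # Color each arriving edge with the smallest color unused at both
    # endpoints; at most (Δ-1) + (Δ-1) colors can be blocked, so the
    # chosen color index always stays below 2Δ - 1.
    used = {}  # vertex -> set of colors on its incident edges
    coloring = {}
    for u, v in edge_stream:
        busy = used.setdefault(u, set()) | used.setdefault(v, set())
        c = next(c for c in range(len(busy) + 1) if c not in busy)
        coloring[(u, v)] = c
        used[u].add(c)
        used[v].add(c)
    return coloring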

Approximate Earth Mover’s Distance in Truly-Subquadratic Time
Lorenzo Beretta and Aviad Rubinstein
(University of Copenhagen, Copenhagen, Denmark; Stanford University, USA)
We design an additive approximation scheme for estimating the cost of the min-weight bipartite matching problem: given a bipartite graph with non-negative edge costs and ε > 0, our algorithm estimates the cost of matching all but an O(ε)-fraction of the vertices in truly subquadratic time O(n^(2−δ(ε))). Our algorithm has a natural interpretation for computing the Earth Mover’s Distance (EMD), up to an ε-additive approximation. Notably, we make no assumptions about the underlying metric (more generally, the costs do not have to satisfy the triangle inequality). Note that compared to the size of the instance (an arbitrary n × n cost matrix), our algorithm runs in sublinear time. Our algorithm can approximate a slightly more general problem: max-cardinality bipartite matching with a knapsack constraint, where the goal is to maximize the number of vertices that can be matched up to a total cost B.

Near-Optimal Dynamic Rounding of Fractional Matchings in Bipartite Graphs
Sayan Bhattacharya, Peter Kiss, Aaron Sidford, and David Wajc
(University of Warwick, United Kingdom; Stanford University, USA; Technion, Israel)
We study dynamic (1−ε)-approximate rounding of fractional matchings—a key ingredient in numerous breakthroughs in the dynamic graph algorithms literature. Our first contribution is a surprisingly simple deterministic rounding algorithm in bipartite graphs with amortized update time O(ε^(−1) log^2(ε^(−1) · n)), matching an (unconditional) recourse lower bound of Ω(ε^(−1)) up to logarithmic factors. Moreover, this algorithm’s update time improves provided the minimum (non-zero) weight in the fractional matching is lower bounded throughout. Combining this algorithm with novel dynamic partial rounding algorithms to increase this minimum weight, we obtain a number of algorithms that improve this dependence on n. For example, we give a high-probability randomized algorithm with Õ(ε^(−1) · (log log n)^2) update time against adaptive adversaries. Using our rounding algorithms, we also round known (1−ε)-decremental fractional bipartite matching algorithms with no asymptotic overhead, thus improving on state-of-the-art algorithms for the decremental bipartite matching problem. Further, we provide extensions of our results to general graphs and to maintaining almost-maximal matchings.

Low-Step Multi-commodity Flow Emulators
Bernhard Haeupler, D. Ellis Hershkowitz, Jason Li, Antti Roeyskoe, and Thatchaphol Saranurak
(ETH Zurich, Switzerland; Carnegie Mellon University, USA; Brown University, USA; University of Michigan, USA)
We introduce the concept of low-step multi-commodity flow emulators for any undirected, capacitated graph. At a high level, these emulators contain approximate multi-commodity flows whose paths contain a small number of edges, shattering the infamous flow decomposition barrier for multi-commodity flow.
We prove the existence of low-step multi-commodity flow emulators and develop efficient algorithms to compute them. We then apply them to solve constant-approximate k-commodity flow in O((m+k)^(1+ε)) time. To bypass the O(mk) flow decomposition barrier, we represent our output multi-commodity flow implicitly; prior to our work, even the existence of implicit constant-approximate multi-commodity flows of size o(mk) was unknown.
Our results generalize to the minimum cost setting, where each edge has an associated cost and the multi-commodity flow must satisfy a cost budget. Our algorithms are also parallel.

Maximum Bipartite Matching in n^(2+o(1)) Time via a Combinatorial Algorithm
Julia Chuzhoy and Sanjeev Khanna
(Toyota Technological Institute, Chicago, USA; University of Pennsylvania, USA)
Maximum bipartite matching (MBM) is a fundamental problem in combinatorial optimization with a long and rich history. A classic result of Hopcroft and Karp (1973) provides an O(m√n)-time algorithm for the problem, where n and m are the number of vertices and edges in the input graph, respectively. For dense graphs, an approach based on fast matrix multiplication achieves a running time of O(n^2.371). For several decades, these results represented the state of the art, until, in 2013, Madry introduced a powerful new approach for solving MBM using continuous optimization techniques. This line of research, which builds on continuous techniques based on interior-point methods, led to several spectacular results, culminating in a breakthrough m^(1+o(1))-time algorithm for min-cost flow, which implies an m^(1+o(1))-time algorithm for MBM as well. These striking advances naturally raise the question of whether combinatorial algorithms can match the performance of the algorithms that are based on continuous techniques for MBM. One reason to explore combinatorial algorithms is that they are often more transparent than their continuous counterparts, and that the tools and techniques developed for such algorithms may be useful in other settings, including, for example, developing faster algorithms for maximum matching in general graphs. A recent work of Chuzhoy and Khanna (2024) made progress on this question by giving a combinatorial Õ(m^(1/3) n^(5/3))-time algorithm for MBM, thus outperforming both the Hopcroft-Karp algorithm and matrix multiplication based approaches on sufficiently dense graphs. Still, a large gap remains between the running time of their algorithm and the almost-linear time achievable by algorithms based on continuous techniques. In this work, we take another step towards narrowing this gap, and present a randomized n^(2+o(1))-time combinatorial algorithm for MBM. Thus in dense graphs, our algorithm essentially matches the performance of algorithms that are based on continuous methods. Similar to the classical algorithms for MBM and the approach used in the work of Chuzhoy and Khanna (2024), our algorithm is based on iterative augmentation of a current matching using augmenting paths in the corresponding (directed) residual flow network. Our main contribution is a recursive algorithm that exploits the special structure of the resulting flow problem to recover an Ω(1/log^2 n)-fraction of the remaining augmentations in n^(2+o(1)) time. Finally, we obtain a randomized n^(2+o(1))-time algorithm for maximum vertex-capacitated s-t flow in directed graphs when all vertex capacities are identical, using a standard reduction from this problem to MBM.
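For intuition, the classical augmenting-path scheme that the paper's recursive algorithm accelerates can be sketched in a few lines of Python (an illustrative baseline, not the paper's algorithm):

def max_bipartite_matching(adj, n_left, n_right):
    # adj[u] lists right-side neighbors of left vertex u. Repeatedly
    # search the residual graph for an augmenting path and flip it.
    match_r = [-1] * n_right  # right vertex -> matched left vertex

    def augment(u, seen):
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                if match_r[v] == -1 or augment(match_r[v], seen):
                    match_r[v] = u
                    return True
        return False

    return sum(augment(u, set()) for u in range(n_left))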


2B

Strong Algebras and Radical Sylvester-Gallai Configurations
Rafael Oliveira and Akash Kumar Sengupta
(University of Waterloo, Canada)
In this paper, we study the following non-linear generalization of the classical Sylvester-Gallai configuration. Let K be an algebraically closed field of characteristic 0 and F = {F1, …, Fm} ⊂ K[x1, …, xN] be a set of irreducible homogeneous polynomials of degree at most d such that Fi is not a scalar multiple of Fj for i ≠ j. We say that F is a radical Sylvester-Gallai configuration if for any two distinct Fi, Fj ∈ F, there is k ∉ {i, j} such that Fk ∈ rad(Fi, Fj). We prove that such radical Sylvester-Gallai configurations must be low dimensional. More precisely, we show that there exists a function λ : ℕ → ℕ, independent of K, N, and m, such that any such configuration F must satisfy
dim(span_K(F)) ≤ λ(d).
This takes us one step closer to the first deterministic polynomial-time algorithm for the Polynomial Identity Testing (PIT) problem for depth-4 circuits of bounded top and bottom fan-ins.

Black-Box Identity Testing of Noncommutative Rational Formulas in Deterministic Quasipolynomial Time
V. Arvind, Abhranil Chatterjee, and Partha Mukhopadhyay
(Institute of Mathematical Sciences, India; Chennai Mathematical Institute, India; Indian Statistical Institute, Kolkata, India)
Rational Identity Testing (RIT) is the decision problem of determining whether or not a noncommutative rational formula computes zero in the free skew field. It admits a deterministic polynomial-time white-box algorithm [Garg, Gurvits, Oliveira, and Wigderson (2016); Ivanyos, Qiao, Subrahmanyam (2018); Hamada and Hirai (2021)], and a randomized polynomial-time algorithm [Derksen and Makam (2017)] in the black-box setting, via singularity testing of linear matrices over the free skew field. Indeed, a randomized NC algorithm for RIT in the white-box setting follows from the result of Derksen and Makam (2017). Designing an efficient deterministic black-box algorithm for RIT and understanding the parallel complexity of RIT are major open problems in this area. Despite being open since the work of Garg, Gurvits, Oliveira, and Wigderson (2016), these questions have seen limited progress. In fact, the only known result in this direction is the construction of a quasipolynomial-size hitting set for rational formulas of only inversion height two [Arvind, Chatterjee, and Mukhopadhyay (2022)]. In this paper, we significantly improve the black-box complexity of this problem and obtain the first quasipolynomial-size hitting set for all rational formulas of polynomial size. Our construction also yields the first deterministic quasi-NC upper bound for RIT in the white-box setting.

The Minimal Faithful Permutation Degree of Groups without Abelian Normal Subgroups
Bireswar Das and Dhara Thakkar
(IIT Gandhinagar, India)
Cayley’s theorem says that every finite group G can be viewed as a subgroup of a symmetric group S_m for some integer m. The minimal faithful permutation degree µ(G) of a finite group G is the smallest integer m such that there is an injective homomorphism φ from G to S_m. The main result of this paper is a randomized polynomial time algorithm for computing the minimal faithful permutation degree of semisimple permutation groups. Semisimple groups are groups without any abelian normal subgroups. Apart from this, we show that: 1. For any primitive permutation group G, µ(G) can be computed in quasi-polynomial time. 2. Given a permutation group G and an integer k, the problem of deciding if µ(G) ≤ k is in NP. 3. For a group G given by its Cayley table, µ(G) can be computed in DSPACE(log^3 |G|).

Learning the Coefficients: A Presentable Version of Border Complexity and Applications to Circuit Factoring
C. S. Bhargav, Prateek Dwivedi, and Nitin Saxena
(IIT Kanpur, India)
The border, or the approximative, model of algebraic computation (VP̄) is quite popular due to the Geometric Complexity Theory (GCT) approach to the P ≠ NP conjecture, and its complex analytic origins. On the flip side, the definition of the border is inherently existential in the field constants that the model employs. In particular, a poly-size border circuit C(ε, x) cannot be compactly presented in reality, as the limit parameter ε may require exponential precision. In this work we resolve this issue by giving a constructive, or presentable, version of border circuits and state its applications. We make the border presentable by restricting the circuit C to use only those constants, in the function field Fq(ε), that it can generate by the ring operations on {ε} ∪ Fq, and their division, within a poly-size circuit. This model is more expressive than VP̄ as it affords exponential degree in ε; and analogous to the usual border, we define new border classes called VP̄ε and VNP̄ε. We prove that both these (now called presentable border) classes lie in VNP. Such a ’debordering’ result is not known for the classical border classes VP̄ and VNP̄. We pose VP̄ε = VP̄ as a new conjecture to study the border. The heart of our technique is a newly formulated exponential interpolation over a finite field, to bound the Boolean complexity of the coefficients before deducing the algebraic complexity. It attacks two factorization problems which were open before. We make progress on (Conj. 8.3 in Bürgisser 2000, FOCS 2001) and solve (Conj. 2.1 in Bürgisser 2000; Chou, Kumar, Solomon CCC 2018) over all finite fields: 1. Each poly-degree irreducible factor, with multiplicity coprime to the field characteristic, of a poly-size circuit (of possibly exponential degree), is in VNP. 2. For all finite fields, and all factors, VNP is closed under factoring. Consequently, factors of VP are always in VNP. The prime characteristic cases were open before due to the inseparability obstruction (i.e., when the multiplicity is not coprime to q).

On the Power of Homogeneous Algebraic Formulas
Hervé Fournier, Nutan Limaye, Srikanth Srinivasan, and Sébastien Tavenas
(Université Paris Cité - IMJ-PRG, France; IT University of Copenhagen, Copenhagen, Denmark; University of Copenhagen, Copenhagen, Denmark; Université Savoie Mont Blanc - CNRS - LAMA, France)
Proving explicit lower bounds on the size of algebraic formulas is a long-standing open problem in the area of algebraic complexity theory. Recent results in the area (e.g. a lower bound against constant-depth algebraic formulas due to Limaye, Srinivasan, and Tavenas (FOCS 2021)) have indicated a way forward for attacking this question: show that we can convert a general algebraic formula to a homogeneous algebraic formula with moderate blow-up in size, and prove strong lower bounds against the latter model. Here, a homogeneous algebraic formula F for a polynomial P is a formula in which all subformulas compute homogeneous polynomials. In particular, if P is homogeneous of degree d, F does not contain subformulas that compute polynomials of degree greater than d. We investigate the feasibility of the above strategy and prove a number of positive and negative results in this direction.
- Lower bounds against weighted homogeneous formulas: We show the first lower bounds against homogeneous formulas of any depth in the weighted setting. Here, each variable has a given weight and the weight of a monomial is the sum of weights of the variables in it. This result builds on a lower bound of Hrubeš and Yehudayoff (Computational Complexity 2011) against homogeneous multilinear formulas. This result is strong indication that lower bounds against homogeneous formulas are within reach.
- Improved (quasi-)homogenization for formulas: A simple folklore argument shows that any formula F for a homogeneous polynomial of degree d can be homogenized with a size blow-up of d^(O(log s)). We show that this can be improved superpolynomially over fields of characteristic 0 as long as d = s^(o(1)). Such a result was previously only known when d = (log s)^(1+o(1)) (Raz (J. ACM 2013)). Further, we show how to get rid of the condition on d at the expense of getting a quasi-homogenization result: this means that subformulas can compute polynomials of degree up to poly(d).
- Lower bounds for non-commutative homogenization: A recent result of Dutta, Gesmundo, Ikenmeyer, Jindal and Lysikov (2022) implies that to homogenize algebraic formulas of any depth, it suffices to homogenize non-commutative algebraic formulas of depth just 3. We are able to show strong lower bounds for such homogenization, suggesting barriers for this approach.
- No Girard-Newton identities for positive characteristic: In characteristic 0, it is known how to homogenize constant-depth algebraic formulas with a size blow-up of exp(O(√d)) using the Girard-Newton identities. Finding analogues of these identities in positive characteristic would allow us, paradoxically, to show lower bounds for constant-depth formulas over such fields. We rule out a strong generalization of Girard-Newton identities in the setting of positive characteristic, suggesting that a different approach is required.


2C

Super Non-singular Decompositions of Polynomials and Their Application to Robustly Learning Low-Degree PTFs
Ilias Diakonikolas, Daniel M. Kane, Vasilis Kontonis, Sihan Liu, and Nikos Zarifis
(University of Wisconsin-Madison, USA; University of California at San Diego, USA; University of Texas at Austin, USA)
We study the efficient learnability of low-degree polynomial threshold functions (PTFs) in the presence of a constant fraction of adversarial corruptions. Our main algorithmic result is a polynomial-time PAC learning algorithm for this concept class in the strong contamination model under the Gaussian distribution with error guarantee O_{d,c}(opt^(1−c)), for any desired constant c>0, where opt is the fraction of corruptions. In the strong contamination model, an omniscient adversary can arbitrarily corrupt an opt-fraction of the data points and their labels. This model generalizes the malicious noise model and the adversarial label noise model. Prior to our work, known polynomial-time algorithms in this corruption model (or even in the weaker adversarial label noise model) achieved error Õ_d(opt^(1/(d+1))), which deteriorates significantly as a function of the degree d. Our algorithm employs an iterative approach inspired by localization techniques previously used in the context of learning linear threshold functions. Specifically, we use a robust perceptron algorithm to compute a good partial classifier and then iterate on the unclassified points. In order to achieve this, we need to take a set defined by a number of polynomial inequalities and partition it into several well-behaved subsets. To this end, we develop new polynomial decomposition techniques that may be of independent interest.

Calibrated Language Models Must Hallucinate
Adam Tauman Kalai and Santosh S. Vempala
(OpenAI, USA; Georgia Institute of Technology, USA)
Recent language models generate false but plausible-sounding text with surprising frequency. Such “hallucinations” are an obstacle to the usability of language-based AI systems and can harm people who rely upon their outputs. This work shows that there is an inherent statistical lower-bound on the rate that pretrained language models hallucinate certain types of facts, having nothing to do with the transformer LM architecture or data quality. For “arbitrary” facts whose veracity cannot be determined from the training data, we show that hallucinations must occur at a certain rate for language models that satisfy a statistical calibration condition appropriate for generative language models. Specifically, if the maximum probability of any fact is bounded, we show that the probability of generating a hallucination is close to the fraction of facts that occur exactly once in the training data (a “Good-Turing” estimate), even assuming ideal training data without errors. One conclusion is that models pretrained to be sufficiently good predictors (i.e., calibrated) may require post-training to mitigate hallucinations on the type of arbitrary facts that tend to appear once in the training set. However, our analysis also suggests that there is no statistical reason that pretraining will lead to hallucination on facts that tend to appear more than once in the training data (like references to publications such as articles and books, whose hallucinations have been particularly notable and problematic) or on systematic facts (like arithmetic calculations). Therefore, different architectures and learning algorithms may mitigate these latter types of hallucinations.
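The Good-Turing quantity that governs the bound is straightforward to compute. A sketch in Python (our illustration of the estimate, not the paper's analysis):

from collections import Counter

def good_turing_singleton_fraction(observed_facts):
    # Good-Turing estimate of the mass of unseen facts: the fraction
    # of observations that occur exactly once. The abstract relates
    # the hallucination rate of calibrated models to this quantity.
    counts = Counter(observed_facts)
    singletons = sum(1 for c in counts.values() if c == 1)
    return singletons / len(observed_facts)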

Private Graphon Estimation via Sum-of-Squares
Hongjie Chen, Jingqiu Ding, Tommaso D'Orsi, Yiding Hua, Chih-Hung Liu, and David Steurer
(ETH Zurich, Switzerland; Bocconi University, Italy; National Taiwan University, Taiwan)
We develop the first pure node-differentially-private algorithms for learning stochastic block models and for graphon estimation with polynomial running time for any constant number of blocks. The statistical utility guarantees match those of the previous best information-theoretic (exponential-time) node-private mechanisms for these problems. The algorithm is based on an exponential mechanism for a score function defined in terms of a sum-of-squares relaxation whose level depends on the number of blocks. The key ingredients of our results are (1) a characterization of the distance between the block graphons in terms of a quadratic optimization over the polytope of doubly stochastic matrices, (2) a general sum-of-squares convergence result for polynomial optimization over arbitrary polytopes, and (3) a general approach to perform Lipschitz extensions of score functions as part of the sum-of-squares algorithmic paradigm.

Exploring and Learning in Sparse Linear MDPs without Computationally Intractable Oracles
Noah Golowich, Ankur Moitra, and Dhruv Rohatgi
(Massachusetts Institute of Technology, USA)
The key assumption underlying linear Markov Decision Processes (MDPs) is that the learner has access to a known feature map φ(x, a) that maps state-action pairs to d-dimensional vectors, and that the rewards and transition probabilities are linear functions in this representation. But where do these features come from? In the absence of expert domain knowledge, a tempting strategy is to use the “kitchen sink” approach and hope that the true features are included in a much larger set of potential features. In this paper we revisit linear MDPs from the perspective of feature selection. In a k-sparse linear MDP, there is an unknown subset S ⊂ [d] of size k containing all the relevant features, and the goal is to learn a near-optimal policy in only poly(k, log d) interactions with the environment. Our main result is the first polynomial-time algorithm for this problem. In contrast, earlier works either made prohibitively strong assumptions that obviated the need for exploration, or required solving computationally intractable optimization problems. Along the way we introduce the notion of an emulator: a succinct approximate representation of the transitions, that still suffices for computing certain Bellman backups. Since linear MDPs are a non-parametric model, it is not even obvious whether polynomial-sized emulators exist. We show that they do exist, and moreover can be computed efficiently via convex programming. As a corollary of our main result, we give an algorithm for learning a near-optimal policy in block MDPs whose decoding function is a low-depth decision tree; the algorithm runs in quasi-polynomial time and takes a polynomial number of samples (in the size of the decision tree). This can be seen as a reinforcement learning analogue of classic results in computational learning theory. Furthermore, it gives a natural model where improving the sample complexity via representation learning is computationally feasible.

Near-Optimal Mean Estimation with Unknown, Heteroskedastic Variances
Spencer Compton and Gregory Valiant
(Stanford University, USA)
Given data drawn from a collection of Gaussian variables with a common mean but different and unknown variances, what is the best algorithm for estimating their common mean? We present an intuitive and efficient algorithm for this task. As different closed-form guarantees can be hard to compare, the Subset-of-Signals model serves as a benchmark for “heteroskedastic” mean estimation: given n Gaussian variables with an unknown subset of m variables having variance bounded by 1, what is the optimal estimation error as a function of n and m? Our algorithm resolves this open question up to logarithmic factors, improving upon the previous best known estimation error by polynomial factors when m = n^c for all 0<c<1. Of particular note, we obtain error o(1) with m = Õ(n^(1/4)) variance-bounded samples, whereas previous work required m = Ω(n^(1/2)). Finally, we show that in the multi-dimensional setting, even for d=2, our techniques enable rates comparable to knowing the variance of each sample.
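For intuition, if the variances were known, the classical optimum is the inverse-variance weighted mean; the difficulty addressed above is that the variances are unknown. A sketch (Python with NumPy, illustrative only):

import numpy as np

def inverse_variance_mean(samples, variances):
    # Optimal combination under *known* variances: weight each sample
    # by 1/variance. The paper's algorithm must succeed without them.
    w = 1.0 / np.asarray(variances)
    return float(np.sum(w * np.asarray(samples)) / np.sum(w))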


2D

The Power of Two-Sided Recruitment in Two-Sided Markets
Yang Cai, Christopher Liaw, Aranyak Mehta, and Mingfei Zhao
(Yale University, USA; Google Research, USA)
We consider the problem of maximizing the gains from trade (GFT) in two-sided markets. The seminal impossibility result by Myerson and Satterthwaite (1983) shows that even for bilateral trade, there is no individually rational (IR), Bayesian incentive compatible (BIC), and budget balanced (BB) mechanism that can achieve the full GFT. Moreover, the optimal BIC, IR, and BB mechanism that maximizes the GFT is known to be complex and heavily dependent on the prior. In this paper, we pursue a Bulow-Klemperer-style question, i.e., does augmentation allow for prior-independent mechanisms to compete against the optimal mechanism? Our first main result shows that in the double auction setting with m i.i.d. buyers and n i.i.d. sellers, by augmenting O(1) buyers and sellers to the market, the GFT of a simple, dominant strategy incentive compatible (DSIC), prior-independent mechanism in the augmented market is at least the optimal GFT in the original market, when the buyers’ distribution first-order stochastically dominates the sellers’ distribution. The mechanism we consider is a slight variant of the standard Trade Reduction mechanism due to McAfee (1992), sketched below. For comparison, Babaioff, Goldner, and Gonczarowski (2020) showed that if one is restricted to augmenting only one side of the market, then n(m + 4√m) additional agents are sufficient for their mechanism to beat the original optimal, and ⌊log_2 m⌋ additional agents are necessary for any prior-independent mechanism. Next, we go beyond the i.i.d. setting and study the power of two-sided recruitment in more general markets. Our second main result is that for any ε > 0 and any set of O(1/ε) buyers and sellers where the buyers’ value exceeds the sellers’ value with constant probability, if we add these additional agents into any market with arbitrary correlations, the Trade Reduction mechanism obtains a (1−ε)-approximation of the GFT of the augmented market. Importantly, the newly recruited agents are agnostic to the original market.
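For reference, a minimal sketch of the standard Trade Reduction mechanism of McAfee (1992), of which the paper analyzes a slight variant (Python, illustrative; the variant's details are omitted):

def trade_reduction(buyer_bids, seller_asks):
    # Sort buyers descending and sellers ascending; k is the efficient
    # number of trades. Dropping the k-th (least efficient) trade and
    # pricing by its values yields a DSIC, IR, budget-balanced auction.
    b = sorted(buyer_bids, reverse=True)
    s = sorted(seller_asks)
    k = 0
    while k < min(len(b), len(s)) and b[k] >= s[k]:
        k += 1
    if k == 0:
        return 0, None, None  # no trade possible
    # First k-1 pairs trade: buyers pay b[k-1], sellers receive s[k-1],
    # and b[k-1] >= s[k-1] guarantees (weak) budget balance.
    return k - 1, b[k - 1], s[k - 1]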

Strategic Budget Selection in a Competitive Autobidding World
Yiding Feng, Brendan Lucier, and Aleksandrs Slivkins
(University of Chicago, USA; Microsoft Research, USA)
We study a game played between advertisers in an online ad platform. The platform sells ad impressions by first-price auction and provides autobidding algorithms that optimize bids on each advertiser's behalf, subject to advertiser constraints such as budgets. Crucially, these constraints are strategically chosen by the advertisers. The chosen constraints define an "inner" budget-pacing game for the autobidders. Advertiser payoffs in the constraint-choosing "metagame" are determined by the equilibrium reached by the autobidders. Advertiser preferences can be more general than what is implied by their constraints: we assume only that they have weakly decreasing marginal value for clicks and weakly increasing marginal disutility for spending money. Nevertheless, we show that at any pure Nash equilibrium of the metagame, the resulting allocation obtains at least half of the liquid welfare of any allocation, and this bound is tight. We also obtain a 4-approximation for any mixed Nash equilibrium or Bayes-Nash equilibrium. These results rely on the power to declare budgets: if advertisers can specify only a (linear) value per click or an ROI target but not a budget constraint, the approximation factor at equilibrium can be as bad as linear in the number of advertisers.

The Role of Transparency in Repeated First-Price Auctions with Unknown Valuations
Nicolo Cesa-Bianchi, Tommaso Cesari, Roberto Colomboni, Federico Fusco, and Stefano Leonardi
(University of Milan, Italy; Politecnico di Milano, Italy; University of Ottawa, Canada; Italian Institute of Technology, Italy; Sapienza University of Rome, Italy)
We study the problem of regret minimization for a single bidder in a sequence of first-price auctions where the bidder discovers the item’s value only if the auction is won. Our main contribution is a complete characterization, up to logarithmic factors, of the minimax regret in terms of the auction’s transparency, which controls the amount of information on competing bids disclosed by the auctioneer at the end of each auction. Our results hold under different assumptions (stochastic, adversarial, and their smoothed variants) on the environment generating the bidder’s valuations and competing bids. These minimax rates reveal how the interplay between transparency and the nature of the environment affects how fast one can learn to bid optimally in first-price auctions.

Bilateral Trade with Correlated Values
Shahar Dobzinski and Ariel Shaulker
(Weizmann Institute of Science, Israel)
We study the bilateral trade problem where a seller owns a single indivisible item, and a potential buyer seeks to purchase it. Previous mechanisms for this problem only considered the case where the values of the buyer and the seller are drawn from independent distributions. In contrast, this paper studies bilateral trade mechanisms when the values are drawn from a joint distribution. We prove that the buyer-offering mechanism guarantees an approximation ratio of e/(e−1) ≈ 1.582 to the social welfare even if the values are drawn from a joint distribution. The buyer-offering mechanism is Bayesian incentive compatible, but the seller has a dominant strategy. We prove the buyer-offering mechanism is optimal in the sense that no Bayesian mechanism where one of the players has a dominant strategy can obtain an approximation ratio better than e/(e−1). We also show that no mechanism in which both sides have a dominant strategy can provide any constant approximation to the social welfare when the values are drawn from a joint distribution. Finally, we prove some impossibility results on the power of general Bayesian incentive compatible mechanisms. In particular, we show that no deterministic Bayesian incentive-compatible mechanism can provide an approximation ratio better than 1 + ln(2)/2 ≈ 1.346.
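For concreteness, one round of the buyer-offering mechanism is easy to state: the buyer quotes a price, and accepting any price at or above her value is a dominant strategy for the seller. A sketch (Python, illustrative; computing the buyer's optimal offer from the joint distribution is omitted):

def buyer_offering_outcome(buyer_value, seller_value, price):
    # Seller's dominant strategy: sell iff price >= her value.
    # Realized welfare is the value of whoever ends up with the item.
    trades = price >= seller_value
    welfare = buyer_value if trades else seller_value
    return trades, welfare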

No-Regret Learning in Bilateral Trade via Global Budget Balance
Martino Bernasconi, Matteo Castiglioni, Andrea Celli, and Federico Fusco
(Bocconi University, Italy; Politecnico di Milano, Italy; Sapienza University of Rome, Italy)
Bilateral trade models the problem of intermediating between two rational agents — a seller and a buyer — both characterized by a private valuation for an item they want to trade. We study the online learning version of the problem, in which at each time step a new seller and buyer arrive and the learner has to set prices for them without any knowledge about their (adversarially generated) valuations.
In this setting, known impossibility results rule out the existence of no-regret algorithms when budget balance has to be enforced at each time step. In this paper, we introduce the notion of global budget balance, which only requires the learner to fulfill budget balance over the entire time horizon. Under this natural relaxation, we provide the first no-regret algorithms for adversarial bilateral trade under various feedback models. First, we show that in the full-feedback model, the learner can guarantee Õ(√T) regret against the best fixed prices in hindsight, and that this bound is optimal up to poly-logarithmic terms. Second, we provide a learning algorithm guaranteeing a Õ(T^(3/4)) regret upper bound with one-bit feedback, which we complement with a Ω(T^(5/7)) lower bound that holds even in the two-bit feedback model. Finally, we introduce and analyze an alternative benchmark that is provably stronger than the best fixed prices in hindsight and is inspired by the literature on bandits with knapsacks.


3A

Knapsack with Small Items in Near-Quadratic Time
Karl Bringmann
(Saarland University, Saarbrücken, Germany; MPI-INF, Germany)
The Knapsack problem is one of the most fundamental NP-complete problems at the intersection of computer science, optimization, and operations research. A recent line of research worked towards understanding the complexity of pseudopolynomial-time algorithms for Knapsack parameterized by the maximum item weight w_max and the number of items n. A conditional lower bound rules out that Knapsack can be solved in time O((n + w_max)^(2−δ)) for any δ > 0 [Cygan, Mucha, Wegrzycki, Wlodarczyk’17, Künnemann, Paturi, Schneider’17]. This raised the question whether Knapsack can be solved in time Õ((n + w_max)^2). This was open both for 0-1-Knapsack (where each item can be picked at most once) and Bounded Knapsack (where each item comes with a multiplicity). The quest to resolve this question led to algorithms that solve Bounded Knapsack in time Õ(n^3 w_max^2) [Tamir’09], Õ(n^2 w_max^2) and Õ(n w_max^3) [Bateni, Hajiaghayi, Seddighin, Stein’18], O(n^2 w_max^2) and Õ(n w_max^2) [Eisenbrand and Weismantel’18], O(n + w_max^3) [Polak, Rohwedder, Wegrzycki’21], and very recently Õ(n + w_max^(12/5)) [Chen, Lian, Mao, Zhang’23]. In this paper we resolve this question by designing an algorithm for Bounded Knapsack with running time Õ(n + w_max^2), which is conditionally near-optimal. This resolves the question both for the classic 0-1-Knapsack problem and for the Bounded Knapsack problem.
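For background, the textbook pseudopolynomial dynamic program, which the w_max-parameterized algorithms above refine, is sketched below (Python, illustrative; indexed by capacity rather than by w_max):

def knapsack_01(weights, profits, capacity):
    # dp[c] = best profit achievable with total weight <= c.
    dp = [0] * (capacity + 1)
    for w, p in zip(weights, profits):
        # iterate capacities downwards so each item is used at most once
        for c in range(capacity, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + p)
    return dp[capacity]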

0-1 Knapsack in Nearly Quadratic Time
Ce Jin
(Massachusetts Institute of Technology, USA)
We study pseudo-polynomial time algorithms for the fundamental 0-1 Knapsack problem. Recent research interest has focused on its fine-grained complexity with respect to the number of items n and the maximum item weight w_max. Under the (min,+)-convolution hypothesis, 0-1 Knapsack does not have O((n + w_max)^(2−δ))-time algorithms (Cygan-Mucha-Węgrzycki-Włodarczyk 2017 and Künnemann-Paturi-Schneider 2017). On the upper bound side, currently the fastest algorithm runs in Õ(n + w_max^(12/5)) time (Chen, Lian, Mao, and Zhang 2023), improving the earlier O(n + w_max^3)-time algorithm by Polak, Rohwedder, and Węgrzycki (2021).
In this paper, we close this gap between the upper bound and the conditional lower bound (up to subpolynomial factors): the 0-1 Knapsack problem has a deterministic algorithm running in O(n + w_max^2 log^4 w_max) time.
Our algorithm combines and extends several recent structural results and algorithmic techniques from the literature on knapsack-type problems: (1) We generalize the “fine-grained proximity” technique of Chen, Lian, Mao, and Zhang (2023) derived from the additive-combinatorial results of Bringmann and Wellnitz (2021) on dense subset sums. This allows us to bound the support size of the useful partial solutions in the dynamic program.
(2) To exploit the small support size, our main technical component is a vast extension of the “witness propagation” method, originally designed by Deng, Mao, and Zhong (2023) for speeding up dynamic programming in the easier unbounded knapsack settings. To extend this approach to our 0-1 setting, we use a novel pruning method, as well as the two-level color-coding of Bringmann (2017) and the SMAWK algorithm on tall matrices.

A Nearly Quadratic-Time FPTAS for Knapsack
Lin Chen, Jiayi Lian, Yuchen Mao, and Guochuan Zhang
(Zhejiang University, China)
We investigate the classic Knapsack problem and propose a fully polynomial-time approximation scheme (FPTAS) that runs in O(n + (1/ε)^2) time. Prior to our work, the best running time is O(n + (1/ε)^(11/5)) [Deng, Jin, and Mao’23]. Our algorithm is the best possible (up to a polylogarithmic factor), as Knapsack has no O((n + 1/ε)^(2−δ))-time FPTAS for any constant δ > 0, conditioned on the conjecture that (min, +)-convolution has no truly subquadratic-time algorithm.

(1 − ε)-Approximation of Knapsack in Nearly Quadratic Time
Xiao Mao
(Stanford University, USA)
Knapsack is one of the most fundamental problems in theoretical computer science. In the (1 − ε)-approximation setting, although there is a fine-grained lower bound of (n + 1/ε)^(2−o(1)) based on the (min, +)-convolution hypothesis ([Künnemann, Paturi and Stefan Schneider, ICALP 2017] and [Cygan, Mucha, Wegrzycki and Wlodarczyk, 2017]), the best algorithm is randomized and runs in Õ(n + (1/ε)^(11/5)/2^(Ω(√(log(1/ε))))) time [Deng, Jin and Mao, SODA 2023], and it remains an important open problem whether an algorithm with a running time that matches the lower bound (up to a sub-polynomial factor) exists. We answer the question positively by showing a deterministic (1 − ε)-approximation scheme for knapsack that runs in Õ(n + (1/ε)^2) time. We first extend a known lemma in a recursive way to reduce the problem to nε-additive approximation for n items with profits in [1, 2). Then we give a simple efficient geometry-based algorithm for the reduced problem.

Approximating Partition in Near-Linear Time
Lin Chen, Jiayi Lian, Yuchen Mao, and Guochuan Zhang
(Zhejiang University, China)
We propose an O(n + 1/ε)-time FPTAS (Fully Polynomial-Time Approximation Scheme) for the classical Partition problem. This is the best possible (up to a polylogarithmic factor) assuming SETH (Strong Exponential Time Hypothesis) [Abboud, Bringmann, Hermelin, and Shabtay’22]. Prior to our work, the best known FPTAS for Partition runs in O(n + 1/ε^(5/4)) time [Deng, Jin and Mao’23, Wu and Chen’22]. Our result is obtained by solving a more general problem of weakly approximating Subset Sum.

Approximating Small Sparse Cuts
Aditya Anand, Euiwoong Lee, Jason Li, and Thatchaphol Saranurak
(University of Michigan, USA; Carnegie Mellon University, USA)
We study polynomial-time approximation algorithms for edge and vertex Sparsest Cut and Small Set Expansion in terms of k, the number of edges or vertices cut in the optimal solution. Our main results are O(polylog k)-approximation algorithms for various versions in this setting. Our techniques involve an extension of the notion of sample sets (Feige and Mahdian STOC’06), originally developed for small balanced cuts, to sparse cuts in general. We then show how to combine this notion of sample sets with two algorithms, one based on an existing framework of LP rounding and another new algorithm based on the cut-matching game, to get such approximation algorithms. Our cut-matching game algorithm can be viewed as a local version of the cut-matching game by Khandekar, Khot, Orecchia and Vishnoi and certifies an expansion of every vertex set of size s in O(log s) rounds. These techniques may be of independent interest. As corollaries of our results, we also obtain an O(log opt) approximation for min-max graph partitioning, where opt is the min-max value of the optimal cut, and improve the bound on the size of multicut mimicking networks computable in polynomial time.

Better Coloring of 3-Colorable Graphs
Ken-ichi Kawarabayashi, Mikkel Thorup, and Hirotaka Yoneda
(National Institute of Informatics, Tokyo, Japan; University of Tokyo, Tokyo, Japan; University of Copenhagen, Copenhagen, Denmark)
We consider the problem of coloring a 3-colorable graph in polynomial time using as few colors as possible. This is one of the most challenging problems in graph algorithms. In this paper, using Blum’s notion of “progress”, we develop a new combinatorial algorithm for the following: given any 3-colorable graph with minimum degree > √n, we can, in polynomial time, make progress towards a k-coloring for some k = √n · n^(o(1)). We balance our main result with the best-known semi-definite programming (SDP) approach, which we use for degrees below n^0.605073. As a result, we show that Õ(n^0.19747) colors suffice for coloring 3-colorable graphs. This improves on the previous best bound of Õ(n^0.19996) by Kawarabayashi and Thorup from 2017.


3B

Testing Closeness of Multivariate Distributions via Ramsey Theory
Ilias Diakonikolas, Daniel M. Kane, and Sihan Liu
(University of Wisconsin-Madison, USA; University of California at San Diego, USA)
We investigate the statistical task of closeness (or equivalence) testing for multidimensional distributions. Specifically, given sample access to two unknown distributions p, q on ℝ^d, we want to distinguish between the case that p = q versus ||p − q||_{A_k} > ε, where ||p − q||_{A_k} denotes the generalized A_k distance between p and q — measuring the maximum discrepancy between the distributions over any collection of k disjoint, axis-aligned rectangles. Our main result is the first closeness tester for this problem with sub-learning sample complexity in any fixed dimension and a nearly-matching sample complexity lower bound. In more detail, we provide a computationally efficient closeness tester with sample complexity O((k^(6/7)/poly_d(ε)) log^d(k)). On the lower bound side, we establish a qualitatively matching sample complexity lower bound of Ω(k^(6/7)/poly(ε)), even for d=2. These sample complexity bounds are surprising because the sample complexity of the problem in the univariate setting is Θ(k^(4/5)/poly(ε)). This has the interesting consequence that the jump from one to two dimensions leads to a substantial increase in sample complexity, while increases beyond that do not. As a corollary of our general A_k tester, we obtain dTV-closeness testers for pairs of k-histograms on ℝ^d over a common unknown partition, and pairs of uniform distributions supported on the union of k unknown disjoint axis-aligned rectangles. Both our algorithm and our lower bound make essential use of tools from Ramsey theory.

Semidefinite Programs Simulate Approximate Message Passing Robustly
Misha Ivkov and Tselil Schramm
(Stanford University, USA)
Approximate message passing (AMP) is a family of iterative algorithms that generalize matrix power iteration. AMP algorithms are known to optimally solve many average-case optimization problems. In this paper, we show that a large class of AMP algorithms can be simulated in polynomial time by local statistics hierarchy semidefinite programs (SDPs), even when an unknown principal minor of measure 1/polylog(dimension) is adversarially corrupted. Ours are the first robust guarantees for many of these problems. Further, our results offer an interesting counterpoint to strong lower bounds against less constrained SDP relaxations for average-case max-cut-gain (a.k.a. “optimizing the Sherrington-Kirkpatrick Hamiltonian”) and other problems.

Planted Clique Conjectures Are Equivalent
Shuichi Hirahara and Nobutaka Shimizu
(National Institute of Informatics, Tokyo, Japan; Tokyo Institute of Technology, Tokyo, Japan)
The planted clique conjecture states that no polynomial-time algorithm can find a hidden clique of size k ≪ √n in an n-vertex Erdős–Rényi random graph with a k-clique planted. In this paper, we prove the equivalence among many (in fact, most) variants of planted clique conjectures, such as search versions with a success probability exponentially close to 1 and with a non-negligible success probability, a worst-case version (the k-clique problem on incompressible graphs), decision versions with small and large success probabilities, and decision versions with adversarially chosen k and binomially distributed k. In particular, we establish the equivalence between the planted clique problem introduced by Jerrum and Kučera and its decision version suggested by Saks in the 1990s. Moreover, the equivalence among decision versions identifies the optimality of a simple edge counting algorithm: By counting the number of edges, one can efficiently distinguish an n-vertex random graph from a random graph with a k-clique planted with probability Θ(k^2/n) for any k ≤ √n. We show that for any k, no polynomial-time algorithm can distinguish these two random graphs with probability ≫ k^2/n if and only if the planted clique conjecture holds. The equivalence among search versions identifies the first one-way function that admits a polynomial-time security-preserving self-reduction from exponentially weak to strong one-way functions. These results reveal a detection-recovery gap in success probabilities for the planted clique problem. We also present another equivalence between the existence of a refutation algorithm for the planted clique problem and an average-case polynomial-time algorithm for the k-clique problem with respect to the Erdős–Rényi random graph.
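The edge-counting test from the abstract is simple to state. A sketch (Python; the threshold choice is our illustration):

import math

def edge_count_distinguisher(n, k, num_edges):
    # Planting a k-clique in G(n, 1/2) adds about C(k,2)/2 edges in
    # expectation, while the null count C(n,2)/2 fluctuates by roughly
    # n/2, giving distinguishing advantage on the order of k^2/n.
    mean_null = math.comb(n, 2) / 2
    extra = math.comb(k, 2) / 2
    return num_edges > mean_null + extra / 2  # guess "planted" if large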

Robust Recovery for Stochastic Block Models, Simplified and Generalized
Sidhanth Mohanty, Prasad Raghavendra, and David X. Wu
(Massachusetts Institute of Technology, USA; University of California at Berkeley, USA)
We study the problem of robust community recovery: efficiently recovering communities in sparse stochastic block models in the presence of adversarial corruptions. In the absence of adversarial corruptions, there are efficient algorithms when the signal-to-noise ratio exceeds the Kesten–Stigum (KS) threshold, widely believed to be the computational threshold for this problem. The question we study is: does the computational threshold for robust community recovery also lie at the KS threshold? We answer this question affirmatively, providing an algorithm for robust community recovery for arbitrary stochastic block models on any constant number of communities, generalizing the work of Ding, d’Orsi, Nasser & Steurer on an efficient algorithm above the KS threshold in the case of 2-community block models. There are three main ingredients to our work: (1) The Bethe Hessian of the graph is defined as H_G(t) ≜ (D_G − I)t^2 − A_G t + I, where D_G is the diagonal matrix of degrees and A_G is the adjacency matrix. Empirical work suggested that the Bethe Hessian for the stochastic block model has outlier eigenvectors corresponding to the communities right above the Kesten-Stigum threshold. We formally confirm the existence of outlier eigenvalues for the Bethe Hessian, by explicitly constructing outlier eigenvectors from the community vectors. (2) We develop an algorithm for a variant of robust PCA on sparse matrices. Specifically, an algorithm to partially recover top eigenspaces from adversarially corrupted sparse matrices under mild delocalization constraints. (3) A rounding algorithm to turn vector assignments of vertices into a community assignment, inspired by the algorithm of Charikar & Wirth for 2XOR.
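For concreteness, ingredient (1) can be written down directly. A sketch of the Bethe Hessian from the abstract (Python with NumPy, illustrative):

import numpy as np

def bethe_hessian(A, t):
    # H_G(t) = (D_G - I) t^2 - A_G t + I, with D_G the degree matrix
    # and A_G the adjacency matrix; its outlier eigenvectors correlate
    # with the communities right above the Kesten-Stigum threshold.
    n = A.shape[0]
    D = np.diag(A.sum(axis=1))
    I = np.eye(n)
    return (D - I) * t**2 - A * t + I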

New Tools for Smoothed Analysis: Least Singular Value Bounds for Random Matrices with Dependent Entries
Aditya Bhaskara, Eric Evert, Vaidehi Srinivas, and Aravindan Vijayaraghavan
(University of Utah, USA; Northwestern University, USA)
We develop new techniques for proving lower bounds on the least singular value of random matrices with limited randomness. The matrices we consider have entries that are given by polynomials of a few underlying base random variables. This setting captures a core technical challenge for obtaining smoothed analysis guarantees in many algorithmic settings. Least singular value bounds often involve showing strong anti-concentration inequalities that are intricate and much less understood compared to concentration (or large deviation) bounds. First, we introduce a general technique for proving anti-concentration that uses well-conditionedness properties of the Jacobian of a polynomial map, and show how to combine this with a hierarchical ε-net argument to prove least singular value bounds. Our second tool is a new statement about least singular values to reason about higher-order lifts of smoothed matrices and the action of linear operators on them. Apart from getting simpler proofs of existing smoothed analysis results, we use these tools to now handle more general families of random matrices. This allows us to produce smoothed analysis guarantees in several previously open settings. These new settings include smoothed analysis guarantees for power sum decompositions and certifying robust entanglement of subspaces, where prior work could only establish least singular value bounds for fully random instances or only show non-robust genericity guarantees.


3C

Adaptively-Sound Succinct Arguments for NP from Indistinguishability Obfuscation
Brent Waters and David J. Wu
(University of Texas at Austin, USA; NTT Research, USA)
A succinct non-interactive argument (SNARG) for NP allows a prover to convince a verifier that an NP statement x is true with a proof of size o(|x| + |w|), where w is the associated NP witness. A SNARG satisfies adaptive soundness if the malicious prover can choose the statement to prove after seeing the scheme parameters. In this work, we provide the first adaptively-sound SNARG for NP in the plain model assuming sub-exponentially-hard indistinguishability obfuscation, sub-exponentially-hard one-way functions, and either the (polynomial) hardness of the discrete log assumption or the (polynomial) hardness of factoring. This gives the first adaptively-sound SNARG for NP from falsifiable assumptions. All previous SNARGs for NP in the plain model either relied on non-falsifiable cryptographic assumptions or satisfied a weak notion of non-adaptive soundness (where the adversary has to choose the statement it proves before seeing the scheme parameters).

A New Approach for Non-Interactive Zero-Knowledge from Learning with Errors
Brent Waters
(University of Texas at Austin, USA; NTT Research, USA)
We put forward a new approach for achieving non-interactive zero-knowledge proofs (NIZKs) from the learning with errors (LWE) assumption (with subexponential modulus to noise ratio). We provide a LWE-based construction of a hidden bits generator that gives rise to a NIZK via the celebrated hidden bits paradigm. A notable feature of our construction is its simplicity. Our construction employs lattice trapdoors, but beyond that uses only simple operations. Unlike prior solutions, we do not rely on a correlation intractability argument nor do we utilize fully homomorphic encryption techniques. Our solution provides a new methodology that adds to the diversity of techniques for solving this fundamental problem.

Publisher's Version
Optimal Load-Balanced Scalable Distributed Agreement
Yuval Gelles and Ilan Komargodski
(Hebrew University of Jerusalem, Israel; NTT Research, USA)
We consider the fundamental problem of designing classical consensus-related distributed abstractions for large-scale networks, where the number of parties can be huge. Specifically, we consider tasks such as Byzantine Agreement, Broadcast, and Committee Election, and our goal is to design scalable protocols, in the sense that each honest party processes and sends a number of bits which is sub-linear in n, the total number of parties. In this work, we construct the first such scalable protocols for all of the above tasks. In our protocols, each party processes and sends Õ(√n) bits over Õ(1) rounds of communication, and correctness is guaranteed against up to a 1/3−є fraction of static Byzantine corruptions, for every constant є>0 (in the full information model). All previous protocols for the considered agreement tasks were non-scalable, either because the communication complexity was linear or because the computational complexity was super-polynomial. We complement our result with a matching lower bound showing that any Byzantine Agreement protocol must have Ω(√n) complexity in our model. Previously, the state of the art was the well-known Ω(∛n) lower bound of Holtby, Kapron, and King (Distributed Computing, 2008).
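To see where the √n budget comes from, here is a toy sketch (our illustration, not the authors' protocol) of the committee-sampling idea that underlies many sub-linear protocols: a party that communicates only with a committee of size about √n stays within the Õ(√n) bound. The hard part, which the paper addresses and this sketch ignores, is making the election robust to Byzantine parties in the full information model.

```python
import math, random

def elect_committee(n, seed=None):
    """Toy committee election: each of n parties independently joins
    with probability 1/sqrt(n), so the committee has expected size
    sqrt(n).  A party that talks only to committee members sends
    ~sqrt(n) messages rather than n."""
    rng = random.Random(seed)
    p = 1.0 / math.sqrt(n)
    return [i for i in range(n) if rng.random() < p]

committee = elect_committee(10_000, seed=0)
print(len(committee))  # concentrated around sqrt(10_000) = 100
```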

Publisher's Version
Quantum Oblivious LWE Sampling and Insecurity of Standard Model Lattice-Based SNARKs
Thomas Debris-Alazard, Pouria Fallahpour, and Damien Stehlé
(Inria - Laboratoire LIX - École Polytechnique, France; ENS Lyon - LIP, France; CryptoLab, France)
The Learning With Errors (LWE) problem asks to find s from an input of the form (A, b = As+e) ∈ (ℤ/qℤ)^{m×n} × (ℤ/qℤ)^m, for a vector e that has small-magnitude entries. In this work, we do not focus on solving LWE but on the task of sampling instances. As these are extremely sparse in their range, it may seem plausible that the only way to proceed is to first create s and e and then set b = As+e. In particular, such an instance sampler knows the solution. This raises the question of whether it is possible to obliviously sample (A, As+e), namely, without knowing the underlying s. A variant of the assumption that oblivious LWE sampling is hard has been used in a series of works to analyze the security of candidate constructions of Succinct Non-interactive Arguments of Knowledge (SNARKs). As the assumption is related to LWE, these SNARKs have been conjectured to be secure in the presence of quantum adversaries.
Our main result is a quantum polynomial-time algorithm that samples well-distributed LWE instances while provably not knowing the solution, under the assumption that LWE is hard. Moreover, the approach works for a vast range of LWE parametrizations, including those used in the above-mentioned SNARKs. This invalidates the assumptions used in their security analyses, although it does not yield attacks against the constructions themselves.
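To make the non-obliviousness point concrete, here is the standard sampler described in the first paragraph, written out in code (parameters are illustrative). By construction it knows s, which is exactly the knowledge an oblivious sampler must provably lack.

```python
import numpy as np

def sample_lwe(n=32, m=64, q=3329, noise_bound=3, seed=None):
    """Sample an LWE instance (A, b = A s + e mod q) the obvious way:
    pick s and e first, then compute b.  The sampler ends up holding
    the solution s."""
    rng = np.random.default_rng(seed)
    A = rng.integers(0, q, size=(m, n))
    s = rng.integers(0, q, size=n)
    e = rng.integers(-noise_bound, noise_bound + 1, size=m)  # small-magnitude noise
    b = (A @ s + e) % q
    return (A, b), s

(A, b), secret = sample_lwe(seed=0)
```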

Publisher's Version
Batch Proofs Are Statistically Hiding
Nir Bitansky, Chethan Kamath, Omer Paneth, Ron D. Rothblum, and Prashant Nalini Vasudevan
(Tel Aviv University, Israel; IIT Bombay, India; Technion, Israel; National University of Singapore, Singapore)
Batch proofs are proof systems that convince a verifier that x_1,…,x_t ∈ L, for some NP language L, with communication that is much shorter than sending the t witnesses. In the case of statistical soundness (where the cheating prover is unbounded but the honest prover is efficient given the witnesses), interactive batch proofs are known for UP, the class of unique-witness NP languages. In the case of computational soundness (where both honest and dishonest provers are efficient), non-interactive solutions are now known for all of NP, assuming standard lattice or group assumptions. We exhibit the first negative results regarding the existence of batch proofs and arguments:
- Statistically sound batch proofs for L imply that L has a statistically witness indistinguishable (SWI) proof, with inverse polynomial SWI error, and a non-uniform honest prover. The implication is unconditional for obtaining honest-verifier SWI or for obtaining full-fledged SWI from public-coin protocols, whereas for private-coin protocols full-fledged SWI is obtained assuming one-way functions. This poses a barrier to achieving batch proofs beyond UP (where witness indistinguishability is trivial). In particular, assuming that NP does not have SWI proofs, batch proofs for all of NP do not exist.
- Computationally sound batch proofs (a.k.a. batch arguments or BARGs) for NP, together with one-way functions, imply statistical zero-knowledge (SZK) arguments for NP with roughly the same number of rounds, an inverse polynomial zero-knowledge error, and a non-uniform honest prover. Thus, constant-round interactive BARGs from one-way functions would yield constant-round SZK arguments from one-way functions. This would be surprising, as SZK arguments are currently only known assuming constant-round statistically-hiding commitments.
We further prove new positive implications of non-interactive batch arguments to non-interactive zero-knowledge arguments (with explicit uniform prover and verifier):
- Non-interactive BARGs for NP, together with one-way functions, imply non-interactive computational zero-knowledge arguments for NP. Assuming also dual-mode commitments, the zero knowledge can be made statistical.
Both our negative and positive results stem from a new framework showing how to transform a batch protocol for a language L into an SWI protocol for L.

Publisher's Version

3D

Approximating Maximum Matching Requires Almost Quadratic Time
Soheil Behnezhad, Mohammad Roghani, and Aviad Rubinstein
(Northeastern University, USA; Stanford University, USA)
We study algorithms for estimating the size of a maximum matching, a problem that has been subject to extensive research. For n-vertex graphs, Bhattacharya, Kiss, and Saranurak [FOCS’23] (BKS) showed that an estimate within є n of the optimal solution can be achieved in n^{2−Ω_є(1)} time. While this is subquadratic in n for any fixed є > 0, it gets closer and closer to the trivial Θ(n^2)-time algorithm that reads the entire input as є is made smaller and smaller. In this work, we close this gap and show that the algorithm of BKS is close to optimal. In particular, we prove that for any fixed δ > 0, there is another fixed є = є(δ) > 0 such that estimating the size of a maximum matching within an additive error of є n requires Ω(n^{2−δ}) time in the adjacency list model.

Publisher's Version
Structural Complexities of Matching Mechanisms
Yannai A. Gonczarowski and Clayton Thomas
(Harvard University, USA; Microsoft Research, USA)
We study various novel complexity measures for two-sided matching mechanisms, applied to the two canonical strategyproof matching mechanisms, Deferred Acceptance (DA) and Top Trading Cycles (TTC). Our metrics are designed to capture the complexity of various structural (rather than computational) concerns, in particular ones of recent interest within economics. We consider a unified, flexible approach to formalizing our questions: Define a protocol or data structure performing some task, and bound the number of bits that it requires. Our main results apply this approach to four questions of general interest; for mechanisms matching applicants to institutions, our questions are: (1) How can one applicant affect the outcome matching? (2) How can one applicant affect another applicant's set of options? (3) How can the outcome matching be represented / communicated? (4) How can the outcome matching be verified? Holistically, our results show that TTC is more complex than DA, formalizing previous intuitions that DA has a simpler structure than TTC. For question (2), our result gives a new combinatorial characterization of which institutions are removed from each applicant's set of options when a new applicant is added in DA; this characterization may be of independent interest. For question (3), our result gives new tight lower bounds proving that the relationship between the matching and the priorities is more complex in TTC than in DA. We nonetheless showcase that this higher complexity of TTC is nuanced: By constructing new tight lower-bound instances and new verification protocols, we prove that DA and TTC are comparable in complexity under questions (1) and (4). This more precisely delineates the ways in which TTC is more complex than DA, and emphasizes that diverse considerations must factor into gauging the complexity of matching mechanisms.

Publisher's Version
A Constant-Factor Approximation for Nash Social Welfare with Subadditive Valuations
Shahar Dobzinski, Wenzheng Li, Aviad Rubinstein, and Jan Vondrák
(Weizmann Institute of Science, Israel; Stanford University, USA)
We present a constant-factor approximation algorithm for the Nash Social Welfare (NSW) maximization problem with subadditive valuations accessible via demand queries. More generally, we propose a framework for NSW optimization that assumes access to two subroutines: one that (1) solves a configuration-type LP under certain additional conditions, and one that (2) rounds the fractional solution with respect to utilitarian social welfare. In particular, a constant-factor approximation for submodular valuations with value queries can also be derived from our framework.

Publisher's Version
Limitations of Stochastic Selection Problems with Pairwise Independent Priors
Shaddin Dughmi, Yusuf Hakan Kalayci, and Neel Patel
(University of Southern California, USA)
Motivated by the growing interest in correlation-robust stochastic optimization, we investigate stochastic selection problems beyond independence. Specifically, we consider the instructive case of pairwise-independent priors and matroid constraints. We obtain essentially-optimal bounds for contention resolution and prophet inequalities. The impetus for our work comes from the recent work of Caragiannis et al. [WINE 2022], who derived a constant-factor approximation for the single-choice prophet inequality with pairwise-independent priors.
For general matroids, our results are tight and largely negative. For both contention resolution and prophet inequalities, our impossibility results hold for the full linear matroid over a finite field. We explicitly construct pairwise-independent distributions which rule out an ω(1/)-balanced offline CRS and an ω(1/log)-competitive prophet inequality against the (usual) oblivious adversary. For both results, we employ a generic approach for constructing pairwise-independent random vectors — one which unifies and generalizes existing pairwise-independence constructions from the literature on universal hash functions and pseudorandomness. Specifically, our approach is based on our observation that random linear maps turn linear independence into stochastic independence.
We then examine the class of matroids which satisfy the so-called partition property — these include most common matroids encountered in optimization. We obtain positive results for both online contention resolution and prophet inequalities with pairwise-independent priors on such matroids, approximately matching the corresponding guarantees for fully independent priors. These algorithmic results hold against the almighty adversary for both problems.

Publisher's Version
Prophet Inequalities Require Only a Constant Number of Samples
Andrés Cristi and Bruno Ziliotto
(University of Chile, Chile; Center for Mathematical Modeling, Chile; CNRS, France; Paris Dauphine University, France)
In a prophet inequality problem, n independent random variables are presented to a gambler one by one. The gambler decides when to stop the sequence and obtains the most recent value as reward. We evaluate a stopping rule by the worst-case ratio between its expected reward and the expectation of the maximum variable. In the classic setting, the order is fixed, and the optimal ratio is known to be 1/2. Three variants of this problem have been extensively studied: the prophet-secretary model, where variables arrive in uniformly random order; the free-order model, where the gambler chooses the arrival order; and the i.i.d. model, where the distributions are all the same, rendering the arrival order irrelevant. Most of the literature assumes that the distributions are known to the gambler. Recent work has considered the question of what is achievable when the gambler has access only to a few samples per distribution. Surprisingly, in the fixed-order case, a single sample from each distribution is enough to approximate the optimal ratio, but this is not the case in any of the three variants. We provide a unified proof that for all three variants of the problem, a constant number of samples (independent of n) per distribution is good enough to approximate the optimal ratios. Prior to our work, this was known to be the case only in the i.i.d. variant. Previous works relied on explicitly constructing sample-based algorithms that match the best possible ratio. Remarkably, the optimal ratios for the prophet-secretary and the free-order variants with full information are still unknown. Consequently, our result requires a significantly different approach than for the classic problem and the i.i.d. variant, where the optimal ratios and the algorithms that achieve them are known. We complement our result by showing that our algorithms can be implemented in polynomial time. A key ingredient in our proof is an existential result based on a minimax argument, which states that there must exist an algorithm that attains the optimal ratio and does not rely on knowledge of the upper tail of the distributions. A second key ingredient is a refined sample-based version of a decomposition of the instance into “small” and “large” variables, first introduced by Liu et al. [EC’21]. The universality of our approach opens avenues for generalization to other sample-based models. Furthermore, we uncover structural properties that might help pinpoint the optimal ratios in the full-information cases.
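For intuition about the classic fixed-order setting mentioned above, the optimal ratio of 1/2 is already achieved by a single fixed threshold set at a median of the maximum (a classical fact due to Samuel-Cahn, not this paper's algorithm). A quick simulation, with uniform distributions chosen purely for illustration:

```python
import random

def threshold_rule(values, tau):
    """Accept the first value that reaches tau; reward 0 if none does."""
    for v in values:
        if v >= tau:
            return v
    return 0.0

# n i.i.d. U[0,1] values: P[max <= t] = t^n, so the median of the
# maximum is 0.5 ** (1/n), and E[max] = n / (n + 1).
n, trials = 20, 100_000
tau = 0.5 ** (1 / n)
avg = sum(threshold_rule([random.random() for _ in range(n)], tau)
          for _ in range(trials)) / trials
print(avg / (n / (n + 1)))  # empirical ratio, above the 1/2 guarantee
```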

Publisher's Version

4A

A Unified Approach to Learning Ising Models: Beyond Independence and Bounded Width
Jason Gaitonde and Elchanan Mossel
(Massachusetts Institute of Technology, USA)
We revisit the well-studied problem of efficiently learning the underlying structure and parameters of an Ising model from data. Current algorithmic approaches achieve essentially optimal sample complexity when samples are generated i.i.d. from the stationary measure and the underlying model satisfies “width” constraints that bound the total ℓ_1 interaction involving each node. However, these assumptions are not satisfied in some important settings of interest, like temporally correlated data or more complicated models (like spin glasses) that do not satisfy width bounds.
We analyze a simple existing approach based on node-wise logistic regression, and show it provably succeeds at efficiently recovering the underlying Ising model in several new settings:
- Given dynamically generated data from a wide variety of Markov chains, including Glauber, block, and round-robin dynamics, logistic regression recovers the parameters with sample complexity that is optimal up to log log n factors. This generalizes the specialized algorithm of Bresler, Gamarnik, and Shah (IEEE Trans. Inf. Theory ’18) for structure recovery in bounded degree graphs from Glauber dynamics.
- For the Sherrington-Kirkpatrick model of spin glasses, given poly(n) independent samples, logistic regression recovers the parameters in most of the proven high-temperature regime via a simple reduction to weaker structural properties of the measure. This improves on recent work of Anari, Jain, Koehler, Pham, and Vuong (SODA ’24), which gives distribution learning at higher temperature.
- As a simple byproduct of our techniques, logistic regression achieves an exponential improvement in learning from samples in the M-regime of data considered by Dutt, Lokhov, Vuffray, and Misra (ICML ’21), as well as novel guarantees for learning from the adversarial Glauber dynamics of Chin, Moitra, Mossel, and Sandon.
Our approach thus provides a significant generalization of the elegant analysis of logistic regression by Wu, Sanghavi, and Dimakis (Neurips ’19) without any algorithmic modification in each setting.
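The estimator in question is simple enough to state in a few lines. Below is a minimal sketch (ours) of plain node-wise logistic regression on i.i.d. samples; the paper's contribution is the analysis showing this estimator succeeds in the new settings, not the code itself. We use ℓ_2-regularized regression for simplicity, whereas structure-recovery results typically use ℓ_1.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_ising(samples):
    """Node-wise logistic regression for an Ising model on spins in {-1,+1}.
    Under P(x_i = +1 | x_{-i}) = sigmoid(2 * (sum_j J_ij x_j + h_i)),
    regressing spin i on the remaining spins recovers 2*J_ij in the
    learned weights."""
    m, n = samples.shape
    J = np.zeros((n, n))
    for i in range(n):
        X = np.delete(samples, i, axis=1)      # all spins except i
        y = samples[:, i]                      # sign of spin i
        w = LogisticRegression().fit(X, y).coef_.ravel() / 2.0
        J[i, np.arange(n) != i] = w
    return (J + J.T) / 2.0  # average the two estimates of each edge
```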

Publisher's Version
Nonlinear Dynamics for the Ising Model
Pietro Caputo and Alistair Sinclair
(University Rome III, Italy; University of California at Berkeley, USA)
We introduce and analyze a natural class of nonlinear dynamics for spin systems such as the Ising model. This class of dynamics is based on the framework of mass action kinetics, which models the evolution of systems of entities under pairwise interactions, and captures a number of important nonlinear models from various fields, including chemical reaction networks, Boltzmann’s model of an ideal gas, recombination in population genetics, and genetic algorithms. In the context of spin systems, it is a natural generalization of linear dynamics based on Markov chains, such as Glauber dynamics and block dynamics, which are by now well understood. However, the inherent nonlinearity makes the dynamics much harder to analyze, and rigorous quantitative results so far are limited to processes which converge to essentially trivial stationary distributions that are product measures. In this paper we provide the first quantitative convergence analysis for natural nonlinear dynamics in a combinatorial setting where the stationary distribution contains non-trivial correlations, namely spin systems at high temperatures. We prove that nonlinear versions of both the Glauber dynamics and the block dynamics converge to the Gibbs distribution of the Ising model (with given external fields) in times O(n log n) and O(log n) respectively, where n is the size of the underlying graph (number of spins). Given the lack of general analytical methods for such nonlinear systems, our analysis is unconventional, and combines tools such as information percolation (due in the linear setting to Lubetzky and Sly), a novel coupling of the Ising model with Erdős-Rényi random graphs, and non-traditional branching processes augmented by a “fragmentation” process. Our results extend immediately to any spin system with a finite number of spins and bounded interactions.

Publisher's Version
Influences in Mixing Measures
Frederic Koehler, Noam Lifshitz, Dor Minzer, and Elchanan Mossel
(University of Chicago, USA; Hebrew University of Jerusalem, Israel; Massachusetts Institute of Technology, USA)
The theory of influences in product measures has profound applications in theoretical computer science, combinatorics, and discrete probability. This deep theory is intimately connected to functional inequalities and to the Fourier analysis of discrete groups. Originally, influences of functions were motivated by the study of social choice theory, wherein a Boolean function represents a voting scheme, its inputs represent the votes, and its output represents the outcome of the elections. Thus, product measures represent a scenario in which the votes of the parties are randomly and independently distributed, which is often far from the truth in real-life scenarios. We begin to develop the theory of influences for more general measures under mixing or spectral independence conditions. More specifically, we prove analogues of the KKL and Talagrand influence theorems for Markov Random Fields on bounded degree graphs when the Glauber dynamics mix rapidly. We thus resolve a long-standing challenge, stated for example by Kalai and Safra (2005). We show how some of the original applications of the theory of influences, in terms of voting and coalitions, extend to these general dependent measures. Our results thus shed light both on voting with correlated voters and on the behavior of general functions of Markov Random Fields (also called "spin-systems") where the Glauber dynamics mixes rapidly.
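Recall the definitions being generalized (standard, for the uniform product measure on {0,1}^n): the influence of coordinate i on f : {0,1}^n → {0,1} is

\[ I_i(f) \;=\; \Pr_{x}\big[\, f(x) \neq f(x^{\oplus i}) \,\big], \]

where x^{⊕i} flips the i-th bit, and the KKL theorem guarantees some coordinate with I_i(f) ≥ Ω(Var(f) · log n / n). The paper proves analogues of such statements when the product measure is replaced by a rapidly mixing Markov Random Field.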

Publisher's Version
Parallel Sampling via Counting
Nima Anari, Ruiquan Gao, and Aviad Rubinstein
(Stanford University, USA)
We show how to use parallelization to speed up sampling from an arbitrary distribution µ on a product space [q]^n, given oracle access to counting queries: ℙ_{X∼µ}[X_S = σ_S] for any S ⊆ [n] and σ_S ∈ [q]^S. Our algorithm takes O(n^{2/3} · polylog(n,q)) parallel time, which is, to the best of our knowledge, the first runtime sublinear in n for arbitrary distributions. Our results have implications for sampling in autoregressive models. Our algorithm directly works with an equivalent oracle that answers conditional marginal queries ℙ_{X∼µ}[X_i = σ_i | X_S = σ_S], whose role is played by a trained neural network in autoregressive models. This suggests a roughly n^{1/3}-factor speedup is possible for sampling in any-order autoregressive models. We complement our positive result by showing a lower bound of Ω(n^{1/3}) for the runtime of any parallel sampling algorithm making at most poly(n) queries to the counting oracle, even for q=2.
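The baseline being improved is the standard sequential sampler, which uses the chain rule and makes n adaptive conditional-marginal queries, one per coordinate. A minimal sketch (ours), where cond_marginal stands in for the oracle, e.g. a trained any-order autoregressive model:

```python
import random

def sequential_sample(n, q, cond_marginal):
    """Draw X ~ mu coordinate by coordinate via the chain rule:
    coordinate i is drawn from P[X_i = . | X_0..X_{i-1}].  This needs
    n sequential oracle rounds; the paper reduces the parallel depth
    to roughly n^(2/3)."""
    sigma = []
    for i in range(n):
        probs = cond_marginal(i, sigma)  # length-q conditional distribution
        sigma.append(random.choices(range(q), weights=probs)[0])
    return sigma

# Toy oracle: n i.i.d. fair bits (any product or non-product oracle works).
print(sequential_sample(8, 2, lambda i, prefix: [0.5, 0.5]))
```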

Publisher's Version
On the Fourier Coefficients of High-Dimensional Random Geometric Graphs
Kiril Bangachev and Guy Bresler
(Massachusetts Institute of Technology, USA)
The random geometric graph RGG(n, S^{d−1}, p) is formed by sampling n i.i.d. vectors {V_i}_{i=1}^n uniformly on S^{d−1} and placing an edge between pairs of vertices i and j for which ⟨V_i, V_j⟩ ≥ τ_{d,p}, where τ_{d,p} is such that the expected density is p. We study the low-degree Fourier coefficients of the distribution RGG(n, S^{d−1}, p) and its Gaussian analogue. Our main conceptual contribution is a novel two-step strategy for bounding Fourier coefficients, which we believe is more widely applicable to studying latent space distributions. First, we localize the dependence among edges to few fragile edges. Second, we partition the space of latent vector configurations (S^{d−1})^n based on the set of fragile edges, and on each subset of configurations we define a noise operator acting independently on edges not incident (in an appropriate sense) to fragile edges. We apply the resulting bounds to: 1) settle the low-degree polynomial complexity of distinguishing spherical and Gaussian random geometric graphs from Erdős-Rényi, both in the case of observing a complete set of edges and in the non-adaptively chosen mask M model recently introduced by Mardia, Verchand, and Wein; 2) exhibit a statistical-computational gap for distinguishing RGG and a planted coloring model in a regime where RGG is distinguishable from Erdős-Rényi; 3) reprove known bounds on the second eigenvalue of random geometric graphs.
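The model is easy to instantiate by following the definition above; a minimal sampler (ours), using an empirical quantile in place of the exact threshold τ_{d,p}:

```python
import numpy as np

def sample_rgg(n, d, p, seed=None):
    """Sample an adjacency matrix of RGG(n, S^{d-1}, p): n uniform unit
    vectors, with an edge when the inner product clears the threshold
    putting ~p mass in the upper tail."""
    rng = np.random.default_rng(seed)
    V = rng.standard_normal((n, d))
    V /= np.linalg.norm(V, axis=1, keepdims=True)  # uniform on the sphere
    G = V @ V.T
    tau = np.quantile(G[np.triu_indices(n, k=1)], 1 - p)  # stand-in for tau_{d,p}
    return (G >= tau) & ~np.eye(n, dtype=bool)

A = sample_rgg(200, 50, 0.05, seed=0)
print(A.sum() / (200 * 199))  # edge density, close to p = 0.05
```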

Publisher's Version

4B

Classical Simulation of Peaked Shallow Quantum Circuits
Sergey Bravyi, David Gosset, and Yinchen Liu
(IBM Research, USA; University of Waterloo, Canada; Institute for Quantum Computing, Canada; Perimeter Institute for Theoretical Physics, Canada)
An n-qubit quantum circuit is said to be peaked if it has an output probability that is at least inverse-polynomially large as a function of n. We describe a classical algorithm with quasipolynomial runtime n^{O(log n)} that approximately samples from the output distribution of a peaked constant-depth circuit. We give even faster algorithms for circuits composed of nearest-neighbor gates on a D-dimensional grid of qubits, with polynomial runtime n^{O(1)} if D=2 and almost-polynomial runtime n^{O(log log n)} for D>2. Our sampling algorithms can be used to estimate output probabilities of shallow circuits to within a given inverse-polynomial additive error, improving previously known methods. As a simple application, we obtain a quasipolynomial algorithm to estimate the magnitude of the expected value of any Pauli observable in the output state of a shallow circuit (which may or may not be peaked). This is a dramatic improvement over the prior state-of-the-art algorithm, which had an exponential scaling in √n.
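In symbols (standard notation, restating the first sentence): a circuit C on n qubits is peaked if

\[ \max_{x \in \{0,1\}^n} \big|\langle x | C | 0^n \rangle\big|^2 \;\ge\; \frac{1}{\mathrm{poly}(n)}, \]

and the main algorithm approximately samples from p(x) = |⟨x|C|0^n⟩|^2 in time n^{O(log n)} when C has constant depth.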

Publisher's Version
Quantum and Classical Query Complexities of Functions of Matrices
Ashley Montanaro and Changpeng Shao
(University of Bristol, United Kingdom; Academy of Mathematics and Systems Science at Chinese Academy of Sciences, China)
Let A be an s-sparse Hermitian matrix, f(x) be a univariate function, and i, j be two indices. In this work, we investigate the query complexity of approximating ⟨i|f(A)|j⟩. We show that for any continuous function f(x):[−1,1]→[−1,1], the quantum query complexity of computing ⟨i|f(A)|j⟩ up to additive error ε/4 is lower bounded by Ω(deg_ε(f)). The upper bound is at most quadratic in deg_ε(f) and is linear in deg_ε(f) under certain mild assumptions on A. Here the approximate degree deg_ε(f) is the minimum degree such that there is a polynomial of that degree approximating f up to additive error ε in the interval [−1,1]. We also show that the classical query complexity is lower bounded by Ω((s/2)^{(deg(f)−1)/6}) for any s ≥ 4. Our results show that the quantum and classical separation is exponential for any continuous function of sparse Hermitian matrices, and also imply the optimality of implementing smooth functions of sparse Hermitian matrices by quantum singular value transformation. The main techniques we used are the dual polynomial method for functions over the reals, linear semi-infinite programming, and tridiagonal matrices.

Publisher's Version
Circuit-to-Hamiltonian from Tensor Networks and Fault Tolerance
Anurag Anshu, Nikolas P. Breuckmann, and Quynh T. Nguyen
(Harvard University, USA; University of Bristol, United Kingdom)
We define a map from an arbitrary quantum circuit to a local Hamiltonian whose ground state encodes the quantum computation. All previous maps relied on the Feynman-Kitaev construction, which introduces an ancillary "clock register" to track the computational steps. Our construction, on the other hand, relies on injective tensor networks with associated parent Hamiltonians, avoiding the introduction of a clock register. This comes at the cost of the ground state containing only a noisy version of the quantum computation, with independent stochastic noise. We can remedy this - making our construction robust - by using quantum fault tolerance. In addition to the stochastic noise, we show that any state with energy density exponentially small in the circuit depth encodes a noisy version of the quantum computation with adversarial noise. We also show that any "combinatorial state" with energy density polynomially small in depth encodes the quantum computation with adversarial noise. This serves as evidence that any state with energy density polynomially small in depth has a similar property. As an application, we show that contracting injective tensor networks to additive error is BQP-hard. We also discuss the implication of our construction to the quantum PCP conjecture, combining with an observation that QMA verification can be done in logarithmic depth.

Publisher's Version
Quantum Time-Space Tradeoffs for Matrix Problems
Paul Beame, Niels Kornerup, and Michael Whitmeyer
(University of Washington, USA; University of Texas at Austin, USA)
We prove lower bounds on the time and space required for quantum computers to solve a wide variety of problems involving matrices, many of which have only been analyzed classically in prior work. Using a novel way of applying recording query methods, we show that for many linear algebra problems—including matrix-vector product, matrix inversion, matrix multiplication and powering—existing classical time-space tradeoffs also apply to quantum algorithms with at most a constant factor loss. For example, for almost all fixed matrices A, including the discrete Fourier transform (DFT) matrix, we prove that quantum circuits with at most T input queries and S qubits of memory require T = Ω(n^2/S) to compute the matrix-vector product Ax for x ∈ {0,1}^n. We similarly prove that matrix multiplication for n×n binary matrices requires T = Ω(n^3/√S). Because many of our lower bounds are matched by deterministic algorithms with the same time and space complexity, our results show that quantum computers cannot provide any asymptotic advantage for these problems at any space bound. We also improve the previous quantum time-space tradeoff lower bound for n×n Boolean (i.e. AND-OR) matrix multiplication from T = Ω(n^{2.5}/S^{1/2}) to T = Ω(n^{2.5}/S^{1/4}), which has optimal exponents for the powerful query algorithms to which it applies. Our method also yields improved lower bounds for classical algorithms.

Publisher's Version
Quadratic Lower Bounds on the Approximate Stabilizer Rank: A Probabilistic Approach
Saeed Mehraban and Mehrdad Tahmasbi
(Tufts University, USA; University of Illinois at Urbana-Champaign, USA)
The approximate stabilizer rank of a quantum state is the minimum number of terms in any approximate decomposition of that state into stabilizer states. Bravyi and Gosset showed that the approximate stabilizer rank of a so-called “magic” state like |T⟩^{⊗n} is, up to polynomial factors, an upper bound on the number of classical operations required to simulate an arbitrary quantum circuit with Clifford gates and n T gates. As a result, an exponential lower bound on this quantity seems inevitable. Despite this intuition, several attempts using various techniques could not lead to better than a linear lower bound on the “exact” rank of |T⟩^{⊗n}, meaning the minimal size of a decomposition that exactly produces the state. For the “approximate” rank, which is more realistically related to the cost of simulating quantum circuits, no lower bound better than Ω(√n) has been known. In this paper, we improve the lower bound on the approximate rank to Ω(n^2) for a wide range of the approximation parameters. An immediate corollary of our result is the existence of polynomial-time computable functions which require a super-linear number of terms in any decomposition into exponentials of quadratic forms over F_2, resolving a question by Williams. Our approach is based on a strong lower bound on the approximate rank of a quantum state sampled from the Haar measure, a step-by-step analysis of the approximate rank of a magic-state teleportation protocol used to sample from the Haar measure, and a result about trading Clifford operations with T gates.

Publisher's Version

4C

Hardness of Range Avoidance and Remote Point for Restricted Circuits via Cryptography
Yilei Chen and Jiatu Li
(Tsinghua University, China; Shanghai Qi Zhi Institute, Shanghai, China; Massachusetts Institute of Technology, USA)
A recent line of research has introduced a systematic approach to exploring the complexity of explicit construction problems through the use of meta-problems, namely, the range avoidance problem (abbrev. Avoid) and the remote point problem (abbrev. RPP). The upper and lower bounds for these meta-problems provide a unified perspective on the complexity of specific explicit construction problems that were previously studied independently. An interesting question largely unaddressed by previous works is whether we can show hardness of Avoid and RPP for simple circuits, such as low-depth circuits. In this paper, we demonstrate, under plausible cryptographic assumptions, that both the range avoidance problem and the remote point problem cannot be efficiently solved by nondeterministic search algorithms, even when the input circuits are as simple as constant-depth circuits. This extends a hardness result established by Ilango, Li, and Williams (STOC’23) against deterministic algorithms employing witness encryption for NP, where the inputs to Avoid are general Boolean circuits. Our primary technical contribution is a novel construction of witness encryption, inspired by public-key encryption, for a certain promise language in NP that is unlikely to be NP-complete. We introduce a generic approach to transform a public-key encryption scheme with particular properties into a witness encryption scheme for a promise language related to the initial public-key encryption scheme. Based on this translation and variants of standard lattice-based or coding-based PKE schemes, we obtain, under plausible assumptions, a provably secure witness encryption scheme for some promise language in NP ∩ coNP/poly. Additionally, we show that our constructions of witness encryption are plausibly secure against nondeterministic adversaries under a generalized notion of security in the spirit of Rudich’s super-bits (RANDOM’97), which is crucial for demonstrating the hardness of Avoid and RPP against nondeterministic algorithms.

Publisher's Version
Communication Lower Bounds for Collision Problems via Density Increment Arguments
Guangxu Yang and Jiapeng Zhang
(University of Southern California, USA)
Collision problems are important in complexity theory and cryptography, with diverse applications. Previous fruitful works have mainly focused on query models. Driven by various applications, several works by Bauer, Farshim, and Mazaheri (CRYPTO 2018), Itsykson and Riazanov (CCC 2021), and Göös and Jain (RANDOM 2022) independently proposed the communication version of collision problems. In the communication setting, Alice and Bob each receive k random subsets of [N]: S_1,…,S_k and T_1,…,T_k, each of size roughly √N, where a typical choice of k is on the order of √N for applications. Alice and Bob then aim to find a pair (x,x′) such that x,x′ ∈ S_i ∩ T_j for some S_i and T_j. A simple protocol solves this problem with O(N^{1/4}) communication bits: Alice sends Bob a random subset of S_1 of size N^{1/4}, and Bob checks whether some set T_j shares at least two elements with it. All the papers mentioned above conjectured this bound to be tight up to log factors. In this paper, we prove an Ω(N^{1/4}) randomized communication lower bound, affirming the conjecture above. Previously, only an Ω(N^{1/12}) bound was known, by a work of Göös and Jain (RANDOM 2022). Our lower bound provides direct applications to cryptography and proof complexity via connections by Bauer, Farshim, and Mazaheri (CRYPTO 2018) and Itsykson and Riazanov (CCC 2021). Our proof technique could be of independent interest, as it is an extension of simulation methods to non-lifted functions. Previously, simulations have been widely applied to lifted functions (a.k.a. composed functions), which has led to beautiful query-to-communication lifting theorems. However, many important communication problems are not lifted functions. We believe our methods could give more applications; in particular, they may apply to the communication complexity of search problems with many solutions, a setting where many existing methods do not.
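The N^{1/4} upper bound is the toy protocol quoted above; simulating one interaction (our code, illustrative parameters) makes it concrete. At this scale the protocol finds a pair with constant probability per run, which matches the intended guarantee.

```python
import random

def collision_protocol(N, seed=None):
    """Alice and Bob each hold k ~ sqrt(N) random subsets of [N] of size
    ~sqrt(N).  Alice sends a random N^(1/4)-subset of S_1; Bob searches
    for a T_j sharing two elements with it, i.e. a pair in S_1 ∩ T_j.
    Communication: O(N^(1/4)) elements."""
    rng = random.Random(seed)
    size = int(N ** 0.5)
    S1 = set(rng.sample(range(N), size))
    Ts = [set(rng.sample(range(N), size)) for _ in range(size)]
    msg = rng.sample(sorted(S1), int(N ** 0.25))   # Alice's one message
    for T in Ts:                                   # Bob's local search
        hits = [x for x in msg if x in T]
        if len(hits) >= 2:
            return hits[0], hits[1]
    return None  # retry with fresh randomness

print(collision_protocol(10_000, seed=1))
```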

Publisher's Version
Lower Bounds for Regular Resolution over Parities
Klim Efremenko, Michal Garlík, and Dmitry Itsykson
(Ben-Gurion University of the Negev, Israel; Imperial College London, United Kingdom)
The proof system resolution over parities (Res(⊕)) operates with disjunctions of linear equations (linear clauses) over GF(2); it extends the resolution proof system by incorporating linear algebra over GF(2). Over the years, several exponential lower bounds on the size of tree-like refutations have been established. However, proving a superpolynomial lower bound on the size of dag-like Res(⊕) refutations remains a highly challenging open question. We prove an exponential lower bound for regular Res(⊕). Regular Res(⊕) is a subsystem of dag-like Res(⊕) that naturally extends regular resolution. This is the first known superpolynomial lower bound for a fragment of dag-like Res(⊕) which is exponentially stronger than tree-like Res(⊕). In the regular regime, resolving linear clauses C_1 and C_2 on a linear form f is permitted only if, for both i ∈ {1,2}, the linear form f does not lie within the linear span of all linear forms that were used in resolution rules during the derivation of C_i. Namely, we show that the size of any regular Res(⊕) refutation of the binary pigeonhole principle BPHP^{n+1}_n is at least 2^{Ω(∛n/log n)}. A corollary of our result is an exponential lower bound on the size of a strongly read-once linear branching program solving a search problem. This resolves an open question raised by Gryaznov, Pudlák, and Talebanfard (CCC 2022). As a byproduct of our technique, we prove that the size of any tree-like Res(⊕) refutation of the weak binary pigeonhole principle BPHP^m_n is at least 2^{Ω(n)}, using Prover-Delayer games. We also give a direct proof of a width lower bound: we show that any dag-like Res(⊕) refutation of BPHP^m_n contains a linear clause C with Ω(n) linearly independent equations.

Publisher's Version
XOR Lemmas for Communication via Marginal Information
Siddharth Iyer and Anup Rao
(University of Washington, USA)
We define the marginal information of a communication protocol, and use it to prove XOR lemmas for communication complexity. We show that if every C-bit protocol has bounded advantage for computing a Boolean function f, then every Ω(Cn)-bit protocol has advantage exp(−Ω(n)) for computing the n-fold XOR f^{⊕n}. We prove exponentially small bounds in the average case setting, and near optimal bounds for product distributions and for bounded-round protocols.

Publisher's Version
Beating Brute Force for Compression Problems
Shuichi Hirahara, Rahul Ilango, and R. Ryan Williams
(National Institute of Informatics, Tokyo, Japan; Massachusetts Institute of Technology, USA)
A compression problem is defined with respect to an efficient encoding function f; given a string x, our task is to find the shortest y such that f(y) = x. The obvious brute-force algorithm for solving this compression task on n-bit strings runs in time O(2^ℓ · t(n)), where ℓ is the length of the shortest description y and t(n) is the time complexity of f when it prints n-bit output. We prove that every compression problem has a Boolean circuit family which finds short descriptions more efficiently than brute force. In particular, our circuits have size 2^{4ℓ/5} · poly(t(n)), which is significantly more efficient for all ℓ ≫ log(t(n)). Our construction builds on Fiat-Naor’s data structure for function inversion [SICOMP 1999]: we show how to carefully modify their data structure so that it can be nontrivially implemented using Boolean circuits, and we show how to utilize hashing so that the circuit size is only exponential in the description length. As a consequence, the Minimum Circuit Size Problem for generic fan-in-two circuits of size s(n) on truth tables of size 2^n can be solved by circuits of size 2^{4w/5 + o(w)} · poly(2^n), where w = s(n)·log_2(s(n) + n). This improves over the brute-force approach of trying all possible size-s(n) circuits for all s(n) ≥ n. Similarly, the task of computing a short description of a string x whose K^t complexity is at most ℓ has circuits of size 2^{4ℓ/5} · poly(t). We also give nontrivial circuits for computing K^t complexity on average, and for solving NP relations with “compressible” instance-witness pairs.
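The brute-force baseline is worth writing down, since the circuits' 2^{4ℓ/5} size is measured against it (a direct transcription of the problem statement; f is any efficient encoder mapping bit strings to bit strings):

```python
from itertools import product

def shortest_description(f, x, max_len=24):
    """Brute force: try every candidate description y in order of
    increasing length and return the first with f(y) == x.  With t(n)
    the cost of one evaluation of f, this runs in O(2^ell * t(n)) time,
    where ell is the length of the shortest description."""
    for ell in range(max_len + 1):
        for bits in product("01", repeat=ell):
            y = "".join(bits)
            if f(y) == x:
                return y
    return None

# Toy encoder (hypothetical): "decompress" by doubling the string.
print(shortest_description(lambda y: y + y, "010010"))  # -> "010"
```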

Publisher's Version

4D

Optimization with Pattern-Avoiding Input
Benjamin Aram Berendsohn, László Kozma, and Michal Opler
(Freie Universität Berlin, Berlin, Germany; Czech Technical University, Prague, Czechia)
Permutation pattern-avoidance is a central concept of both enumerative and extremal combinatorics. In this paper we study the effect of permutation pattern-avoidance on the complexity of optimization problems. In the context of the dynamic optimality conjecture (Sleator, Tarjan, STOC 1983), Chalermsook, Goswami, Kozma, Mehlhorn, and Saranurak (FOCS 2015) conjectured that the amortized search cost of an optimal binary search tree (BST) is constant whenever the search sequence is pattern-avoiding. The best known bound to date is 2^{α(n)(1+o(1))}, recently obtained by Chalermsook, Pettie, and Yingchareonthawornchai (SODA 2024); here n is the BST size and α(·) the inverse-Ackermann function. In this paper we resolve the conjecture, showing a tight O(1) bound. This indicates a barrier to dynamic optimality: any candidate online BST (e.g., splay trees or greedy trees) must match this optimum, but current analysis techniques only give superconstant bounds. More broadly, we argue that the easiness of pattern-avoiding input is a general phenomenon, not limited to BSTs or even to data structures. To illustrate this, we show that when the input avoids an arbitrary, fixed, a priori unknown pattern, one can efficiently compute: (1) a k-server solution of n requests from a unit interval, with total cost n^{O(1/log k)}, in contrast to the worst-case Θ(n/k) bound; and (2) a traveling salesman tour of n points from a unit box, of length O(log n), in contrast to the worst-case Θ(√n) bound; similar results hold for the Euclidean minimum spanning tree, Steiner tree, and nearest-neighbor graphs. We show both results to be tight. Our techniques build on the Marcus-Tardos proof of the Stanley-Wilf conjecture and on the recently emerging concept of twin-width.

Publisher's Version
Maximum Weight Independent Set in Graphs with no Long Claws in Quasi-Polynomial Time
Peter Gartland, Daniel Lokshtanov, Tomáš Masařík, Marcin Pilipczuk, Michał Pilipczuk, and Paweł Rzążewski
(University of California at Santa Barbara, USA; University of Warsaw, Poland; IT University of Copenhagen, Copenhagen, Denmark; Warsaw University of Technology, Poland)
We show that the Maximum Weight Independent Set problem (MWIS) can be solved in quasi-polynomial time on H-free graphs (graphs excluding a fixed graph H as an induced subgraph) for every H whose every connected component is a path or a subdivided claw (i.e., a tree with at most three leaves). This completes the dichotomy of the complexity of MWIS in F-free graphs for any finite set F of graphs into NP-hard cases and cases solvable in quasi-polynomial time, and corroborates the conjecture that the cases not known to be NP-hard are actually polynomial-time solvable. The key graph-theoretic ingredient in our result is as follows. Fix an integer t ≥ 1. Let S_{t,t,t} be the graph created from three paths on t edges by identifying one endpoint of each path into a single vertex. We show that, given a graph G, one can in polynomial time find either an induced S_{t,t,t} in G, or a balanced separator consisting of O(log |V(G)|) vertex neighborhoods in G, or an extended strip decomposition of G (a decomposition almost as useful for recursion for MWIS as a partition into connected components) with each particle of weight multiplicatively smaller than the weight of G. This is a strengthening of a result of Majewski, Masařík, Novotná, Okrasa, Pilipczuk, Rzążewski, and Sokołowski [Transactions on Computation Theory 2024], which provided such an extended strip decomposition only after the deletion of O(log |V(G)|) vertex neighborhoods. To reach the final result, we employ an involved branching strategy that relies on the structural lemma presented above.

Publisher's Version
Packing Even Directed Circuits Quarter-Integrally
Maximilian Gorsky, Ken-ichi Kawarabayashi, Stephan Kreutzer, and Sebastian Wiederrecht
(TU Berlin, Berlin, Germany; National Institute of Informatics, Tokyo, Japan; University of Tokyo, Tokyo, Japan; Institute for Basic Science, Daejeon, South Korea)
We prove the existence of a computable function f∶ℕ→ℕ such that for every integer k and every digraph D, either D contains a collection C of k directed cycles of even length such that no vertex of D belongs to more than four cycles in C, or there exists a set S ⊆ V(D) of size at most f(k) such that D−S has no directed cycle of even length. Moreover, we provide an algorithm that finds one of the two outcomes of this statement in time g(k)·n^{O(1)} for some computable function g∶ℕ→ℕ.
Our result unites two deep fields of research from the algorithmic theory for digraphs: The study of the Erdős-Pósa property of digraphs and the study of the Even Dicycle Problem. The latter is the decision problem which asks if a given digraph contains an even dicycle and can be traced back to a question of Pólya from 1913. It remained open until a polynomial time algorithm was finally found by Robertson, Seymour, and Thomas (Ann. of Math. (2) 1999) and, independently, McCuaig (Electron. J. Combin. 2004; announced jointly at STOC 1997). The Even Dicycle Problem is equivalent to the recognition problem of Pfaffian bipartite graphs and has applications even beyond discrete mathematics and theoretical computer science. On the other hand, Younger’s Conjecture (1973), states that dicycles have the Erdős-Pósa property. The conjecture was proven more than two decades later by Reed, Robertson, Seymour, and Thomas (Combinatorica 1996) and opened the path for structural digraph theory as well as the algorithmic study of the directed feedback vertex set problem. Our approach builds upon the techniques used to resolve both problems and combines them into a powerful structural theorem that yields further algorithmic applications for other prominent problems.

Publisher's Version
Edge-Disjoint Paths in Eulerian Digraphs
Dario Giuliano Cavallaro, Ken-ichi Kawarabayashi, and Stephan Kreutzer
(TU Berlin, Berlin, Germany; National Institute of Informatics, Tokyo, Japan; University of Tokyo, Tokyo, Japan)
Disjoint paths problems are among the most prominent problems in combinatorial optimisation. Both the Edge- and the Vertex-Disjoint Paths problems are NP-complete, on directed as well as undirected graphs. But on undirected graphs, Robertson and Seymour developed an algorithm for both problems that runs in cubic time for every fixed number p of terminal pairs, i.e., they proved that the problem is fixed-parameter tractable on undirected graphs. This is in sharp contrast to the situation on directed graphs, where Fortune, Hopcroft, and Wyllie proved that both problems are NP-complete already for p=2 terminal pairs. In this paper, we study the Edge-Disjoint Paths problem (EDPP) on Eulerian digraphs, a problem that has received significant attention in the literature. Marx proved that the Eulerian EDPP is NP-complete even on structurally very simple Eulerian digraphs. On the positive side, polynomial time algorithms are known only for very restricted cases, such as p ≤ 3 or where the demand graph is a union of two stars. The question for which values of p the Edge-Disjoint Paths problem can be solved in polynomial time on Eulerian digraphs was raised by Frank, Ibaraki, and Nagamochi almost 30 years ago. Despite considerable effort, the complexity of the problem has remained wide open and is considered to be the main open problem in this area. In this paper, we solve this long-open problem by showing that the Edge-Disjoint Paths problem is fixed-parameter tractable on Eulerian digraphs in general (parameterized by the number of terminal pairs). The algorithm itself is reasonably simple, but the proof of its correctness requires a deep structural analysis of Eulerian digraphs.

Publisher's Version
A Flat Wall Theorem for Matching Minors in Bipartite Graphs
Archontia C. Giannopoulou and Sebastian Wiederrecht
(National and Kapodistrian University of Athens, Athens, Greece; Institute for Basic Science, Daejeon, South Korea)
In 1913, Pólya asked for which (0,1)-matrices A it is possible to create a new matrix A′ by changing some of the signs such that the permanent of A equals the determinant of A′. A combinatorial solution to this problem was found by Little in 1975; he showed these matrices to be exactly the biadjacency matrices of bipartite graphs excluding K_{3,3} as a matching minor. Utilising ideas from graph minors theory, this characterisation was later shown to yield a polynomial time algorithm to compute the permanent of matrices which satisfy Little’s condition. By a seminal result of Valiant, computing the permanent of (0,1)-matrices in general is #P-hard; however, it can be observed that the tractability of the permanent is closely related to the exclusion of matching minors in bipartite graphs.
Building on Robertson and Seymour’s graph minors theory, it was shown that the permanent remains tractable under the exclusion of a planar or a single-crossing matching minor. In this paper, we provide an essential next step in the form of a matching theoretic analogue of the Flat Wall Theorem for bipartite graphs, describing the local structure of bipartite graphs excluding K_{t,t} as a matching minor. Our result builds on a tight relationship between structural digraph theory and matching theory and allows us to deduce a Flat Wall Theorem for digraphs which substantially differs from a previously established directed variant of this theorem.

Publisher's Version

5A

Generalized GM-MDS: Polynomial Codes Are Higher Order MDS
Joshua Brakensiek, Manik Dhar, and Sivakanth Gopi
(Independent, USA; Massachusetts Institute of Technology, USA; Microsoft Research, USA)
The GM-MDS theorem, conjectured by Dau-Song-Dong-Yuen and proved by Lovett and Yildiz-Hassibi, shows that the generator matrices of Reed-Solomon codes can attain every possible configuration of zeros for an MDS code. The recently emerging theory of higher order MDS codes has connected the GM-MDS theorem to other important properties of Reed-Solomon codes, including showing that Reed-Solomon codes can achieve list decoding capacity, even over fields of size linear in the message length. A few works have extended the GM-MDS theorem to other families of codes, including Gabidulin and skew polynomial codes. In this paper, we generalize all these previous results by showing that the GM-MDS theorem applies to any polynomial code, i.e., a code where the columns of the generator matrix are obtained by evaluating linearly independent polynomials at different points. We also show that the GM-MDS theorem applies to dual codes of such polynomial codes, which is non-trivial since the dual of a polynomial code may not be a polynomial code. More generally, we show that GM-MDS theorem also holds for algebraic codes (and their duals) where columns of the generator matrix are chosen to be points on some irreducible variety which is not contained in a hyperplane through the origin. Our generalization has applications to constructing capacity-achieving list-decodable codes as shown in a follow-up work [Brakensiek, Dhar, Gopi, Zhang; 2024], where it is proved that randomly punctured algebraic-geometric (AG) codes achieve list-decoding capacity over constant-sized fields.

Publisher's Version
AG Codes Achieve List Decoding Capacity over Constant-Sized Fields
Joshua Brakensiek, Manik Dhar, Sivakanth Gopi, and Zihan Zhang
(Independent, USA; Massachusetts Institute of Technology, USA; Microsoft Research, USA; Ohio State University, USA)
The recently-emerging field of higher order MDS codes has sought to unify a number of concepts in coding theory. Such areas captured by higher order MDS codes include maximally recoverable (MR) tensor codes, codes with optimal list-decoding guarantees, and codes with constrained generator matrices (as in the GM-MDS theorem). By proving these equivalences, Brakensiek-Gopi-Makam showed the existence of optimally list-decodable Reed-Solomon codes over exponential sized fields. Building on this, recent breakthroughs by Guo-Zhang and Alrabiah-Guruswami-Li have shown that randomly punctured Reed-Solomon codes achieve list-decoding capacity (which is a relaxation of optimal list-decodability) over linear size fields. We extend these works by developing a formal theory of relaxed higher order MDS codes. In particular, we show that there are two inequivalent relaxations, which we call lower and upper relaxations. The lower relaxation is equivalent to relaxed optimal list-decodable codes and the upper relaxation is equivalent to relaxed MR tensor codes with a single parity check per column. We then generalize the techniques of Guo-Zhang and Alrabiah-Guruswami-Li to show that both these relaxations can be constructed over constant size fields by randomly puncturing suitable algebraic-geometric codes. For this, we crucially use the generalized GM-MDS theorem for polynomial codes recently proved by Brakensiek-Dhar-Gopi. We obtain the following corollaries from our main result:
- Randomly punctured algebraic-geometric codes of rate R are list-decodable up to radius (L/(L+1))·(1−R−є) with list size L over fields of size exp(O(L/є)). In particular, they achieve list-decoding capacity with list size O(1/є) and field size exp(O(1/є^2)). Prior to this work, AG codes were not even known to achieve list-decoding capacity.
- By randomly puncturing algebraic-geometric codes, we can construct relaxed MR tensor codes with a single parity check per column over constant-sized fields, whereas (non-relaxed) MR tensor codes require exponential field size.

Publisher's Version
Constant Query Local Decoding against Deletions Is Impossible
Meghal Gupta
(University of California at Berkeley, USA)
Locally decodable codes (LDCs) are error-correcting codes that allow recovery of individual message indices by accessing only a constant number of codeword indices. For substitution errors, it is evident that LDCs exist: Hadamard codes are examples of 2-query LDCs. Research on this front has focused on finding the optimal encoding length for LDCs, for which there is a nearly exponential gap between the best lower bounds and constructions. Ostrovsky and Paskin-Cherniavsky (ICITS 2015) introduced the notion of local decoding in the insertion and deletion setting. In this context, it is not clear whether constant query LDCs exist at all. Indeed, in contrast to the classical setting, Blocki et al. conjecture that they do not exist. Blocki et al. (FOCS 2021) make progress towards this conjecture, proving that any potential code must have at least exponential encoding length. Our work definitively resolves the conjecture and shows that constant query LDCs do not exist in the insertion/deletion (or even deletion-only) setting. Using a reduction shown by Blocki et al., this also implies that constant query locally correctable codes do not exist in this setting.

Publisher's Version
Local Correction of Linear Functions over the Boolean Cube
Prashanth Amireddy, Amik Raj Behera, Manaswi Paraashar, Srikanth Srinivasan, and Madhu Sudan
(Harvard University, USA; Aarhus University, Aarhus, Denmark; University of Copenhagen, Copenhagen, Denmark)
We consider the task of locally correcting, and locally list-correcting, multivariate linear functions over the domain {0,1}^n, over arbitrary fields and, more generally, Abelian groups. Such functions form error-correcting codes of relative distance 1/2, and we give local-correction algorithms correcting up to nearly a 1/4-fraction of errors using O(log n) queries. This query complexity is optimal up to poly(log log n) factors. We also give local list-correcting algorithms correcting a (1/2 − ε)-fraction of errors with O_ε(log n) queries. These results may be viewed as natural generalizations of the classical work of Goldreich and Levin, which addresses the special case where the underlying group is ℤ_2. By extending to the case where the underlying group is, say, the reals, we give the first non-trivial locally correctable codes (LCCs) over the reals (with query complexity sublinear in the dimension, also known as the message length). Previous works in the area mostly focused on the case where the domain is a vector space or a group, which lends itself to tools that exploit symmetry. Since our domains lack such symmetries, we encounter new challenges whose resolution may be of independent interest. The central challenge in constructing the local corrector is constructing “nearly balanced vectors” over {−1,1}^n whose span contains the all-ones vector 1^n; we show how to construct O(log n) such vectors, with the entries of each vector summing to ±1. The challenge for the local list-correction algorithms, given the local corrector, is principally combinatorial, i.e., in proving that the number of linear functions within any Hamming ball of radius (1/2−ε) is O_ε(1). Obtaining this general result covering every Abelian group requires integrating a variety of known methods with new combinatorial ingredients that analyze the structural properties of codewords lying within small Hamming balls.

Publisher's Version
An Exponential Lower Bound for Linear 3-Query Locally Correctable Codes
Pravesh K. Kothari and Peter Manohar
(Institute for Advanced Study, Princeton, USA; Princeton University, USA; Carnegie Mellon University, USA)
We prove that the blocklength n of a linear 3-query locally correctable code (LCC) L : F^k → F^n with distance δ must be at least n ≥ 2^{Ω((δ^2 k/(|F|−1)^2)^{1/8})}. In particular, the blocklength of a linear 3-query LCC with constant distance over any small field grows exponentially with k. This improves on the best prior lower bound of n ≥ Ω(k^3), which holds even for the weaker setting of 3-query locally decodable codes (LDCs), and comes close to matching the best-known construction of 3-query LCCs based on binary Reed–Muller codes, which achieve n ≤ 2^{O(k^{1/2})}. Because there is a 3-query LDC with a strictly subexponential blocklength, as a corollary we obtain the first strong separation between q-query LCCs and LDCs for any constant q ≥ 3. Our proof is based on a new upgrade of the method of spectral refutations via Kikuchi matrices, developed in recent works, that reduces establishing (non-)existence of combinatorial objects to proving unsatisfiability of associated XOR instances. Our key conceptual idea is to apply this method with XOR instances obtained via long-chain derivations, a structured variant of low-width resolution for XOR formulas from proof complexity.

Publisher's Version
Explicit Two-Sided Unique-Neighbor Expanders
Jun-Ting Hsieh, Theo McKenzie, Sidhanth Mohanty, and Pedro Paredes
(Carnegie Mellon University, USA; Stanford University, USA; Massachusetts Institute of Technology, USA; Princeton University, USA)
We study the problem of constructing explicit sparse graphs that exhibit strong vertex expansion. Our main result is the first two-sided construction of imbalanced unique-neighbor expanders: bipartite graphs in which small sets on either side of the bipartition exhibit unique-neighbor expansion, along with algebraic properties relevant to constructing quantum codes.
Our constructions are obtained by instantiating the tripartite line product of a large tripartite spectral expander and a sufficiently good constant-sized unique-neighbor expander, a new graph product we define that generalizes the line product and the routed product of prior well-known works. To analyze the vertex expansion of graphs arising from the tripartite line product, we develop a sharp characterization of the subgraphs that can arise in bipartite spectral expanders, generalizing previously known results, which may be of independent interest.
By applying our product to appropriate graphs, we give a strongly explicit construction of an infinite family of (d_1, d_2)-biregular graphs (G_n)_{n≥1} (for large enough d_1 and d_2) in which every set S with at most a small constant fraction of the vertices has Ω(d_1 · |S|) unique neighbors (assuming d_1 ≤ d_2). Additionally, we can also guarantee that subsets of vertices of size up to exp(Ω(√(log |V(G_n)|))) expand losslessly.
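For intuition, unique-neighbor expansion of a set S is easy to state operationally; the following small checker (a hypothetical helper, not from the paper) counts the opposite-side vertices seen exactly once, which the construction guarantees number Ω(d_1 · |S|) for all small S.

    from collections import defaultdict

    def unique_neighbors(adj, S):
        # adj maps each left vertex to its set of right neighbors.
        # Returns the right vertices with exactly one neighbor in S.
        count = defaultdict(int)
        for v in S:
            for u in adj[v]:
                count[u] += 1
        return {u for u, c in count.items() if c == 1}

    # S exhibits unique-neighbor expansion when |unique_neighbors(adj, S)|
    # = Omega(d1 * |S|); the paper achieves this for small sets on both
    # sides of the bipartition simultaneously.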

Publisher's Version

5B

Data-Dependent LSH for the Earth Mover’s Distance
Rajesh Jayaram, Erik Waingarten, and Tian Zhang
(Google Research, USA; University of Pennsylvania, USA)
We give new data-dependent locality-sensitive hashing (LSH) schemes for the Earth Mover's Distance (EMD), and as a result improve the best approximation for nearest neighbor search under EMD by a quadratic factor. Here, the metric EMD_s(ℝ^d, ℓ_p) consists of sets of s vectors in ℝ^d, and for any two sets x, y of s vectors the distance EMD(x, y) is the minimum cost of a perfect matching between x and y, where the cost of matching two vectors is their ℓ_p distance. Previously, Andoni, Indyk, and Krauthgamer gave a (data-independent) locality-sensitive hashing scheme for EMD_s(ℝ^d, ℓ_p) when p ∈ [1,2] with approximation O(log^2 s). By being data-dependent, we improve the approximation to Õ(log s). Our main technical contribution is to show that for any distribution µ supported on the metric EMD_s(ℝ^d, ℓ_p), there exists a data-dependent LSH for dense regions of µ which achieves approximation Õ(log s), and that the data-independent LSH actually achieves an Õ(log s)-approximation outside of those dense regions. Finally, we show how to "glue" together these two hashing schemes without any additional loss in the approximation. Beyond nearest neighbor search, our data-dependent LSH also gives optimal (distributional) sketches for the Earth Mover's Distance. By known sketching lower bounds, this implies that our LSH is optimal (up to poly(log log s) factors) among those that collide close points with constant probability.
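For reference, the ground-truth distance that these LSH schemes approximate can be computed exactly by a minimum-cost perfect matching; a minimal sketch (ours, using SciPy's Hungarian-algorithm solver) follows.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def emd(x, y, p=2):
        # Exact EMD between two s-point sets in R^d under l_p, via a
        # minimum-cost perfect matching; O(s^3) time, so feasible only
        # for small s. The LSH schemes trade exactness for locality.
        cost = np.linalg.norm(x[:, None, :] - y[None, :, :], ord=p, axis=2)
        rows, cols = linear_sum_assignment(cost)
        return cost[rows, cols].sum()

    x, y = np.random.randn(10, 3), np.random.randn(10, 3)
    print(emd(x, y))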

Publisher's Version
Polylog-Competitive Deterministic Local Routing and Scheduling
Bernhard Haeupler, Shyamal Patel, Antti Roeyskoe, Cliff Stein, and Goran Zuzic
(ETH Zurich, Switzerland; Carnegie Mellon University, USA; Columbia University, USA; Google Research, Switzerland)
This paper addresses point-to-point packet routing in undirected networks, which is the most important communication primitive in most networks. The main result proves the existence of routing tables that deterministically guarantee a polylog-competitive completion time:
In any undirected network, it is possible to give each node simple stateless deterministic local forwarding rules, such that any adversarially chosen set of packets is delivered as fast as possible, up to polylog factors.
All previous routing strategies crucially required randomization for both route selection and packet scheduling. The core technical contribution of this paper is a new local packet-scheduling result of independent interest. This scheduling strategy integrates well with recent sparse semi-oblivious path-selection strategies. Such strategies deterministically select not one but several candidate paths for each packet and require a global coordinator, with knowledge of all packets, to adaptively select a single good path from those candidates for each packet. Of course, global knowledge of all packets is exactly what local routing tables cannot have. Another challenge is that, even if a single path is selected for each packet, no strategy for scheduling packets along low-congestion paths is known that is both local and deterministic. Our novel scheduling strategy exploits the fact that every semi-oblivious routing strategy uses only a small (polynomial) subset of candidate routes. It overcomes the need for global coordination by being provably robust to adversarial noise, which also sidesteps the need to choose a single path per packet: congestion caused by ineffective candidate paths is simply treated as noise. Beyond more efficient routing tables, our results can be seen as progress on fundamental questions regarding the importance and power of randomization in network communication and distributed computing. For example, our results imply the first deterministic universally-optimal algorithms in the distributed supported-CONGEST model for many important global distributed tasks, including computing minimum spanning trees, approximate shortest paths, and part-wise aggregates.

Publisher's Version
Connectivity Labeling and Routing with Multiple Vertex Failures
Merav Parter, Asaf Petruschka, and Seth Pettie
(Weizmann Institute of Science, Israel; University of Michigan, USA)
We present succinct labeling schemes for answering connectivity queries in graphs subject to a specified number of vertex failures. An f-vertex/edge fault-tolerant (f-V/EFT) connectivity labeling is a scheme that produces succinct labels for the vertices (and possibly the edges) of an n-vertex graph G, such that given only the labels of two vertices s, t and of at most f faulty vertices/edges F, one can infer whether s and t are connected in G − F. The primary complexity measure is the maximum label length (in bits). The f-EFT setting is relatively well understood: [Dory and Parter, PODC 2021] gave a randomized scheme with succinct labels of O(log^3 n) bits, which was subsequently derandomized by [Izumi et al., PODC 2023] with Õ(f^2)-bit labels. As both noted, handling vertex faults is more challenging. The known bounds for the f-VFT setting are far weaker: [Parter and Petruschka, DISC 2022] gave Õ(n^{1−1/2^{Θ(f)}})-bit labels, which is linear in n already for f = Ω(log log n). In this work we present an efficient f-VFT connectivity labeling scheme using poly(f, log n) bits. Specifically, we present a randomized scheme with O(f^3 log^5 n)-bit labels, and a derandomized version with O(f^7 log^{13} n)-bit labels, compared to an Ω(f)-bit lower bound on the required label length. Our schemes are based on a new low-degree graph decomposition that improves on [Duan and Pettie, SODA 2017] and facilitates its distributed representation into labels. This is accompanied by specialized linear graph sketches that extend the techniques of Dory and Parter to the vertex-fault setting, which are derandomized by adapting the approach of Izumi et al. and combining it with the hit-miss hash families of [Karthik and Parter, SODA 2021]. Finally, we show that our labels naturally yield routing schemes avoiding a given set of at most f vertex failures with table and header sizes of only poly(f, log n) bits. This improves significantly over the linear size bounds implied by the EFT routing scheme of Dory and Parter.
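The query these labels answer has simple reference semantics; the sketch below (ours) recomputes the answer from the whole graph with a BFS, whereas the paper's schemes must produce the same bit from the poly(f, log n)-bit labels of s, t, and F alone.

    from collections import deque

    def connected_avoiding(adj, s, t, F):
        # Is s connected to t in G - F, for a set F of faulty vertices?
        F = set(F)
        if s in F or t in F:
            return False
        seen, queue = {s}, deque([s])
        while queue:
            v = queue.popleft()
            if v == t:
                return True
            for u in adj[v]:
                if u not in F and u not in seen:
                    seen.add(u)
                    queue.append(u)
        return False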

Publisher's Version
Optimal Multi-pass Lower Bounds for MST in Dynamic Streams
Sepehr Assadi, Gillat Kol, and Zhijun Zhang
(University of Waterloo, Canada; Rutgers University, USA; Princeton University, USA)
The seminal work of Ahn, Guha, and McGregor in 2012 introduced the graph sketching technique and used it to present the first streaming algorithms for various graph problems over dynamic streams with both insertions and deletions of edges. This includes algorithms for cut sparsification, spanners, matchings, and minimum spanning trees (MSTs). These results have since been improved or generalized in various directions, leading to a vastly rich host of efficient algorithms for processing dynamic graph streams.
A curious omission from the list of improvements has been the MST problem. The best algorithm for this problem remains the original AGM algorithm, which for every integer p ≥ 1 uses n^{1+O(1/p)} space in p passes on n-vertex graphs, and thus achieves the desired semi-streaming space of Õ(n) at a relatively high cost of O(log n / log log n) passes. On the other hand, no lower bound beyond a folklore one-pass lower bound is known for this problem.
We provide a simple explanation for this lack of improvements: the AGM algorithm for MSTs is optimal for the entire range of its number of passes! We prove that even for the simplest decision version of the problem (deciding whether the weight of the MST is at least a given threshold), any p-pass dynamic streaming algorithm requires n^{1+Ω(1/p)} space. This implies that semi-streaming algorithms do need Ω(log n / log log n) passes.
Our result relies on proving new multi-round communication complexity lower bounds for a variant of the universal relation problem that has been instrumental in proving prior lower bounds for single-pass dynamic streaming algorithms. The proof also involves proving new composition theorems in communication complexity, including majority lemmas and multi-party XOR lemmas, via information complexity approaches.

Publisher's Version
O(log log n) Passes Is Optimal for Semi-streaming Maximal Independent Set
Sepehr Assadi, Christian Konrad, Kheeran K. Naidu, and Janani Sundaresan
(University of Waterloo, Canada; Rutgers University, USA; University of Bristol, United Kingdom)
In the semi-streaming model for processing massive graphs, an algorithm makes multiple passes over the edges of a given n-vertex graph and is tasked with computing the solution to a problem using O(n · log n) space. Semi-streaming algorithms for Maximal Independent Set (MIS) that run in O(log log n) passes have been known for almost a decade; however, the best lower bounds can only rule out single-pass algorithms. We close this large gap by proving that the current algorithms are optimal: any semi-streaming algorithm for finding an MIS with constant probability of success requires Ω(log log n) passes. This settles the complexity of this fundamental problem in the semi-streaming model and constitutes one of the first optimal multi-pass lower bounds in this model. We establish our result by proving an optimal round vs. communication tradeoff for the (multi-party) communication complexity of MIS. The key ingredient of this result is a new technique, called hierarchical embedding, for performing round elimination: we show how to pack many small hard (r−1)-round instances of the problem into a single r-round instance, in a way that forces any r-round protocol to effectively solve all these (r−1)-round instances as well. These embeddings are obtained via a novel application of results from extremal graph theory (in particular, dense graphs with many disjoint unique shortest paths) together with a newly designed graph product, and are analyzed via information-theoretic tools such as direct-sum and message-compression arguments.

Publisher's Version

5C

The Asymptotic Rank Conjecture and the Set Cover Conjecture Are Not Both True
Andreas Björklund and Petteri Kaski
(IT University of Copenhagen, Copenhagen, Denmark; Aalto University, Finland)
Strassen’s asymptotic rank conjecture [Progr. Math. 120 (1994)] claims a strong submultiplicative upper bound on the rank of a three-tensor obtained as an iterated Kronecker product of a constant-size base tensor. The conjecture, if true, most notably would put square matrix multiplication in quadratic time. We note here that some more-or-less unexpected algorithmic results in the area of exponential-time algorithms would also follow. Specifically, we study the so-called set cover conjecture, which states that for any ε > 0 there exists a positive integer constant k such that no algorithm solves the k-Set Cover problem in worst-case time O((2−ε)^n |F| poly(n)). The k-Set Cover problem asks, given as input an n-element universe U, a family F of size-at-most-k subsets of U, and a positive integer t, whether there is a subfamily of at most t sets in F whose union is U. The conjecture was formulated by Cygan, Fomin, Kowalik, Lokshtanov, Marx, Pilipczuk, Pilipczuk, and Saurabh in the monograph Parameterized Algorithms [Springer, 2015], but was implicit as a hypothesis already in Cygan, Dell, Lokshtanov, Marx, Nederlof, Okamoto, Paturi, Saurabh, and Wahlström [CCC 2012, ACM Trans. Algorithms 2016], where it was conjectured to follow from the Strong Exponential Time Hypothesis. We prove that if the asymptotic rank conjecture is true, then the set cover conjecture is false. Using a reduction by Krauthgamer and Trabelsi [STACS 2019], in this scenario we would also get an O((2−δ)^n)-time randomized algorithm for some constant δ > 0 for another well-studied problem for which no such algorithm is known, namely deciding whether a given n-vertex directed graph has a Hamiltonian cycle. At a fine-grained level, our results do not need the full strength of the asymptotic rank conjecture; it suffices that the conclusion of the conjecture holds approximately for a single 7 × 7 × 7 tensor.
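For concreteness, the 2^n barrier that the set cover conjecture posits is the one achieved by the textbook subset dynamic program, sketched below (ours); the conjecture asserts that the 2^n factor cannot be improved to (2−ε)^n once the set size k is large enough.

    def min_set_cover(n, sets):
        # Classic O(2^n * |F|)-time DP over subsets of an n-element
        # universe {0,...,n-1}: dp[mask] = fewest sets covering mask.
        INF = float('inf')
        masks = [sum(1 << e for e in s) for s in sets]
        dp = [INF] * (1 << n)
        dp[0] = 0
        for mask in range(1 << n):
            if dp[mask] == INF:
                continue
            for sm in masks:
                if dp[mask | sm] > dp[mask] + 1:
                    dp[mask | sm] = dp[mask] + 1
        return dp[(1 << n) - 1]

    print(min_set_cover(4, [{0, 1}, {1, 2}, {2, 3}, {0, 3}]))  # 2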

Publisher's Version
A Stronger Connection between the Asymptotic Rank Conjecture and the Set Cover Conjecture
Kevin Pratt
(New York University, USA)
We give a short proof that Strassen’s asymptotic rank conjecture implies that for every ε > 0 there exists a (3/2^{2/3} + ε)^n-time algorithm for set cover on a universe of size n with sets of bounded size. This strengthens and simplifies a recent result of Björklund and Kaski that Strassen’s asymptotic rank conjecture implies that the set cover conjecture is false. From another perspective, we show that the set cover conjecture implies that a particular family of tensors T_n ∈ ℂ^N ⊗ ℂ^N ⊗ ℂ^N has asymptotic rank greater than N^{1.08}. Furthermore, if one could improve a known upper bound of 8^n on the tensor rank of T_n to 2/9 · n8^n for any n, then the set cover conjecture is false.

Publisher's Version
Equality Cases of the Alexandrov–Fenchel Inequality Are Not in the Polynomial Hierarchy
Swee Hong Chan and Igor Pak
(Rutgers University, USA; University of California at Los Angeles, USA)
Describing the equality conditions of the Alexandrov–Fenchel inequality has been a major open problem for decades. We prove that for a natural class of convex polytopes, the equality cases of the AF inequality are not in the polynomial hierarchy unless the hierarchy collapses to a finite level. This is the first hardness result for the problem. The proof involves Stanley’s order polytopes and a delicate analysis of linear extensions of finite posets, with some number-theoretic results added to the mix. We also give applications to combinatorial interpretations of the defect of Stanley’s log-concave inequality for the number of linear extensions.
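The combinatorial quantity at the heart of the proof is the number of linear extensions of a poset; a brute-force counter (ours, feasible only for tiny posets, since exact counting is #P-complete) makes the object concrete.

    from itertools import permutations

    def count_linear_extensions(n, relations):
        # Count the orderings of {0,...,n-1} consistent with all
        # relations (a, b), each meaning a < b in the poset.
        total = 0
        for perm in permutations(range(n)):
            pos = {v: i for i, v in enumerate(perm)}
            if all(pos[a] < pos[b] for a, b in relations):
                total += 1
        return total

    # A 2-chain plus an isolated element has 3 linear extensions.
    print(count_linear_extensions(3, [(0, 1)]))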

Publisher's Version
Semigroup Algorithmic Problems in Metabelian Groups
Ruiwen Dong
(Saarland University, Saarbrücken, Germany)
We consider semigroup algorithmic problems in finitely generated metabelian groups. Our paper focuses on three decision problems introduced by Choffrut and Karhumäki (2005): the Identity Problem (does a semigroup contain a neutral element?), the Group Problem (is a semigroup a group?), and the Inverse Problem (does a semigroup contain the inverse of a generator?). We show that all three problems are decidable for finitely generated subsemigroups of finitely generated metabelian groups. In particular, we establish a correspondence between polynomial semirings and subsemigroups of metabelian groups using an interplay of graph theory, convex polytopes, algebraic geometry, and number theory. Since the Semigroup Membership problem (does a semigroup contain a given element?) is known to be undecidable in finitely generated metabelian groups, our result completes the decidability characterization of semigroup algorithmic problems in metabelian groups.

Publisher's Version
The Complexity of Computing KKT Solutions of Quadratic Programs
John Fearnley, Paul W. Goldberg, Alexandros Hollender, and Rahul Savani
(University of Liverpool, United Kingdom; University of Oxford, United Kingdom; Alan Turing Institute, United Kingdom)
It is well known that solving a (non-convex) quadratic program is NP-hard. We show that the problem remains hard even if we are only looking for a Karush-Kuhn-Tucker (KKT) point, instead of a global optimum. Namely, we prove that computing a KKT point of a quadratic polynomial over the domain [0,1]n is complete for the class CLS = PPAD∩PLS.
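While finding a KKT point is CLS-complete by this result, verifying one is easy; the following checker (ours) spells out what the KKT conditions reduce to for a quadratic over the box [0,1]^n.

    import numpy as np

    def is_kkt_point(Q, c, x, tol=1e-9):
        # KKT conditions for min x^T Q x + c^T x over [0,1]^n. With box
        # constraints they reduce to sign conditions on the gradient:
        # g_i >= 0 at x_i = 0, g_i <= 0 at x_i = 1, g_i = 0 in between.
        g = (Q + Q.T) @ x + c
        for xi, gi in zip(x, g):
            if xi < tol and gi < -tol:
                return False
            if xi > 1 - tol and gi > tol:
                return False
            if tol <= xi <= 1 - tol and abs(gi) > tol:
                return False
        return True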

Publisher's Version
Minimum Star Partitions of Simple Polygons in Polynomial Time
Mikkel Abrahamsen, Joakim Blikstad, André Nusser, and Hanwen Zhang
(University of Copenhagen, Copenhagen, Denmark; KTH Royal Institute of Technology, Stockholm, Sweden; MPI-INF, Germany)
We devise a polynomial-time algorithm for partitioning a simple polygon P into a minimum number of star-shaped polygons. The question of whether such an algorithm exists has been open for more than four decades [Avis and Toussaint, Pattern Recognit., 1981] and it has been repeated frequently, for example in O’Rourke’s famous book [Art Gallery Theorems and Algorithms, 1987]. In addition to its strong theoretical motivation, the problem is also motivated by practical domains such as CNC pocket milling, motion planning, and shape parameterization.
The only previously known algorithm for a non-trivial special case is for P being both monotone and rectilinear [Liu and Ntafos, Algorithmica, 1991]. For general polygons, an algorithm was only known for the restricted version in which Steiner points are disallowed [Keil, SIAM J. Comput., 1985], meaning that each corner of a piece in the partition must also be a corner of P. Interestingly, the solution size for the restricted version may be linear for instances where the unrestricted solution has constant size. The covering variant in which the pieces are star-shaped but allowed to overlap—known as the Art Gallery Problem—was recently shown to be ∃ℝ-complete and is thus likely not in NP [Abrahamsen, Adamaszek and Miltzow, STOC 2018 & J. ACM 2022]; this is in stark contrast to our result. Arguably the most related work to ours is the polynomial-time algorithm to partition a simple polygon into a minimum number of convex pieces by Chazelle and Dobkin [STOC, 1979 & Comp. Geom., 1985].

Publisher's Version

6A

Revisiting Local Computation of PageRank: Simple and Optimal
Hanzhi Wang, Zhewei Wei, Ji-Rong Wen, and Mingji Yang
(Renmin University of China, China)
We revisit ApproxContributions, the classic local graph exploration algorithm proposed by Andersen, Borgs, Chayes, Hopcroft, Mirrokni, and Teng (WAW '07, Internet Math. '08) for computing an ε-approximation of the PageRank contribution vector for a target node t on a graph with n nodes and m edges. We give a worst-case complexity bound for it of O(n π(t)/ε · min(Δ_in, Δ_out, √m)), where π(t) is the PageRank score of t, and Δ_in and Δ_out are the maximum in-degree and out-degree of the graph, respectively. We also give a lower bound of Ω(min(Δ_in/δ, Δ_out/δ, √m/δ, m)) for detecting t's δ-contributing set, showing that the simple ApproxContributions algorithm is already optimal.
As ApproxContributions has become a cornerstone for computing random-walk probabilities, our results and techniques can be applied to derive better bounds for various related problems. In particular, we investigate the computational complexity of locally estimating a node's PageRank centrality. We improve the best-known upper bound of O(n^{2/3} · min(Δ_out^{1/3}, m^{1/6})) given by Bressan, Peserico, and Pretto (SICOMP '23) to O(n^{1/2} · min(Δ_in^{1/2}, Δ_out^{1/2}, m^{1/4})) by combining ApproxContributions with Monte Carlo sampling. We also improve their lower bound of Ω(min(n^{1/2} Δ_out^{1/2}, n^{1/3} m^{1/3})) to Ω(n^{1/2} · min(Δ_in^{1/2}, Δ_out^{1/2}, m^{1/4})) if min(Δ_in, Δ_out) = Ω(n^{1/3}), and to Ω(n^{1/2−γ} (min(Δ_in, Δ_out))^{1/2+γ}) otherwise, where γ > 0 is an arbitrarily small constant. Our matching upper and lower bounds resolve the open problem of whether one can tighten the bounds given by Bressan, Peserico, and Pretto (FOCS '18, SICOMP '23). Remarkably, the techniques and analyses for proving all our results are surprisingly simple.
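The Monte Carlo half of the upper bound is standard and easy to sketch (ours; it ignores dangling-node conventions): π(t) equals the probability that a walk from a uniformly random node, stopping with probability α at each step, ends at t.

    import random

    def estimate_pagerank(adj, nodes, t, alpha=0.15, walks=100_000):
        # Estimate pi(t) by simulating alpha-terminating random walks
        # from uniform starts; combining such samples with the
        # ApproxContributions local push yields the improved bound.
        hits = 0
        for _ in range(walks):
            v = random.choice(nodes)
            while random.random() > alpha and adj[v]:
                v = random.choice(adj[v])
            hits += (v == t)
        return hits / walks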

Publisher's Version
Towards Optimal Output-Sensitive Clique Listing or: Listing Cliques from Smaller Cliques
Mina Dalirrooyfard, Surya Mathialagan, Virginia Vassilevska Williams, and Yinzhan Xu
(Morgan Stanley Research, Canada; Massachusetts Institute of Technology, USA)
We study the problem of finding and listing k-cliques in an m-edge, n-vertex graph, for constant k≥ 3. This is a fundamental problem of both theoretical and practical importance.
Our first contribution is an algorithmic framework for finding k-cliques that gives the first improvement in 19 years over the classic runtimes for 4-clique and 5-clique finding as a function of m [Eisenbrand and Grandoni, TCS'04]. With the current bounds on matrix multiplication, our algorithms run in O(m^{1.66}) and O(m^{2.06}) time for 4-clique and 5-clique finding, respectively.
Our main contribution is an output-sensitive algorithm for listing k-cliques, for any constant k ≥ 3. We complement the algorithm with tight lower bounds based on standard fine-grained assumptions. Previously, the only known conditionally optimal output-sensitive algorithms were for the case of 3-cliques, given by Björklund, Pagh, Vassilevska W., and Zwick [ICALP'14]. If the matrix multiplication exponent ω is 2, and if the number of k-cliques t is large enough, the running time of our algorithms is Õ(min{m^{1/(k−2)} t^{1−2/(k(k−2))}, n^{2/(k−1)} t^{1−2/(k(k−1))}}), and this is tight under the Exact-k-Clique Hypothesis. This running time naturally extends the running time obtained by Björklund, Pagh, Vassilevska W., and Zwick for k = 3.
Our framework is very general in that it gives k-clique listing algorithms whose running times can be measured in terms of the number of ℓ-cliques Δ_ℓ in the graph for any 1 ≤ ℓ < k. This generalizes the typical parameterization in terms of n (the number of 1-cliques) and m (the number of 2-cliques).
If ω is 2, and if the size of the output Δ_k is sufficiently large, then for every ℓ < k, the running time of our algorithm for listing k-cliques is Õ(Δ_ℓ^{2/(ℓ(k−ℓ))} Δ_k^{1−2/(k(k−ℓ))}). We also show that this runtime is optimal for all 1 ≤ ℓ < k under the Exact-k-Clique Hypothesis.
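The phrase "listing cliques from smaller cliques" has a direct naive analogue: extend ℓ-cliques by common neighbors. The baseline below (ours) fixes the semantics, though its runtime is far from the output-sensitive bounds above.

    def list_k_cliques(adj, k):
        # adj: dict mapping each vertex to a set of neighbors.
        # Grow cliques one vertex at a time, keeping vertices in
        # increasing order so each k-clique is listed exactly once.
        cliques = [(v,) for v in sorted(adj)]
        for _ in range(k - 1):
            nxt = []
            for c in cliques:
                common = set.intersection(*(adj[v] for v in c))
                nxt.extend(c + (u,) for u in sorted(common) if u > c[-1])
            cliques = nxt
        return cliques

    # Triangle (3-clique) listing on a 4-cycle plus one chord:
    adj = {0: {1, 3}, 1: {0, 2, 3}, 2: {1, 3}, 3: {0, 1, 2}}
    print(list_k_cliques(adj, 3))  # [(0, 1, 3), (1, 2, 3)]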

Publisher's Version
New Graph Decompositions and Combinatorial Boolean Matrix Multiplication Algorithms
Amir Abboud, Nick Fischer, Zander Kelley, Shachar Lovett, and Raghu Meka
(Weizmann Institute of Science, Israel; University of Illinois at Urbana-Champaign, USA; University of California at San Diego, USA; University of California at Los Angeles, USA)
We revisit the fundamental Boolean Matrix Multiplication (BMM) problem. With the invention of algebraic fast matrix multiplication over 50 years ago, it also became known that BMM can be solved in truly subcubic O(n^ω) time, where ω < 3; much work has gone into bringing ω closer to 2. Since then, a parallel line of work has sought comparably fast combinatorial algorithms, but with limited success. The naïve O(n^3)-time algorithm was initially improved by a log^2 n factor [Arlazarov et al.; RAS'70], then by log^{2.25} n [Bansal and Williams; FOCS'09], then by log^3 n [Chan; SODA'15], and finally by log^4 n [Yu; ICALP'15].
We design a combinatorial algorithm for BMM running in time n^3 / 2^{Ω((log n)^{1/7})}, a speed-up over cubic time that is stronger than any poly-log factor. This comes tantalizingly close to refuting the conjecture from the 90s that truly subcubic combinatorial algorithms for BMM are impossible. This popular conjecture is the basis for dozens of fine-grained hardness results.
Our main technical contribution is a new regularity decomposition theorem for Boolean matrices (or equivalently, bipartite graphs) under a notion of regularity that was recently introduced and analyzed analytically in the context of communication complexity [Kelley, Lovett, Meka; STOC'24], and is related to a similar notion from the recent work on 3-term-arithmetic-progression-free sets [Kelley, Meka; FOCS'23].
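The classical poly-log speedups all stem from word-level parallelism of the kind sketched below (ours, using Python integers as bitsets); the paper's regularity-based algorithm is combinatorial in the same sense but beats every such poly-log saving.

    def boolean_matmul(A, B):
        # C[i][j] = OR_k (A[i][k] AND B[k][j]), computed a word at a
        # time: each row of B is packed into one integer bitset, and
        # row i of C is the OR of the rows selected by row i of A.
        w = len(B[0])
        brows = [sum(bit << j for j, bit in enumerate(row)) for row in B]
        out = []
        for arow in A:
            acc = 0
            for k, bit in enumerate(arow):
                if bit:
                    acc |= brows[k]
            out.append([(acc >> j) & 1 for j in range(w)])
        return out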

Publisher's Version
Nearly Optimal Fault Tolerant Distance Oracle
Dipan Dey and Manoj Gupta
(IIT Gandhinagar, India)
We present an f-fault-tolerant distance oracle for an undirected weighted graph where each edge has an integral weight from [1 … W]. Given a set F of f edges, as well as a source node s and a destination node t, our oracle returns the shortest path from s to t avoiding F in O((cf log(nW))^{O(f^2)}) time, where c > 1 is a constant. The space complexity of our oracle is O(f^4 n^2 log^2(nW)). For constant f, our oracle is nearly optimal both in terms of space and time (barring some logarithmic factors).

Publisher's Version
Almost Linear Size Edit Distance Sketch
Michal Koucký and Michael E. Saks
(Charles University, Prague, Czechia; Rutgers University, USA)
We design an almost linear-size sketching scheme for computing edit distance up to a given threshold k. The scheme consists of two algorithms, a sketching algorithm and a recovery algorithm. The sketching algorithm depends on the parameter k, takes as input a string x and a public random string ρ, and computes a sketch sk_ρ(x; k), which is a compressed version of x. The recovery algorithm is given two sketches sk_ρ(x; k) and sk_ρ(y; k), as well as the public random string ρ used to create them; with high probability, if the edit distance ED(x, y) between x and y is at most k, it outputs ED(x, y) together with an optimal sequence of edit operations that transforms x into y, and if ED(x, y) > k it outputs "large". The size of the sketch output by the sketching algorithm on input x is k · 2^{O(√(log n · log log n))} (where n is an upper bound on the length of x). The sketching and recovery algorithms both run in time polynomial in n. The dependence of the sketch size on k is information-theoretically optimal and improves over the quadratic dependence on k in the schemes of Kociumaka, Porat, and Starikovskaya (FOCS 2021) and Bhattacharya and Koucký (STOC 2023).
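The computation that the recovery algorithm performs on sketches has a classical full-string counterpart: the O(nk) banded dynamic program (Ukkonen's trick), sketched below (ours), which likewise outputs the exact distance when ED(x, y) ≤ k and "large" otherwise.

    def edit_distance_at_most_k(x, y, k):
        # Banded edit-distance DP: only cells with |i - j| <= k can hold
        # values <= k, so each row keeps a window of 2k+1 cells; returns
        # None in place of 'large' when the distance exceeds k.
        n, m = len(x), len(y)
        if abs(n - m) > k:
            return None
        INF = k + 1
        prev = {j: j for j in range(min(m, k) + 1)}
        for i in range(1, n + 1):
            cur = {}
            for j in range(max(0, i - k), min(m, i + k) + 1):
                if j == 0:
                    cur[j] = i
                    continue
                cur[j] = min(cur.get(j - 1, INF) + 1,
                             prev.get(j, INF) + 1,
                             prev.get(j - 1, INF) + (x[i-1] != y[j-1]))
            prev = cur
        d = prev.get(m, INF)
        return d if d <= k else None

    print(edit_distance_at_most_k("kitten", "sitting", 5))  # 3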

Publisher's Version

6B

Commitments from Quantum One-Wayness
Dakshita Khurana and Kabir Tomer
(University of Illinois at Urbana-Champaign, USA)
One-way functions are central to classical cryptography. They are necessary for the existence of non-trivial classical cryptosystems, and also sufficient to realize meaningful primitives including commitments, pseudorandom generators, and digital signatures. At the same time, a mounting body of evidence suggests that assumptions even weaker than one-way functions may suffice for many cryptographic tasks of interest in a quantum world, including bit commitments and secure multi-party computation. This work studies one-way state generators [Morimae-Yamakawa, CRYPTO 2022], a natural quantum relaxation of one-way functions. Given a secret key, a one-way state generator outputs a hard-to-invert quantum state. A fundamental question is whether this type of quantum one-wayness suffices to realize quantum cryptography. We obtain an affirmative answer to this question, by proving that one-way state generators with pure state outputs imply quantum bit commitments and secure multiparty computation. Along the way, we use efficient shadow tomography [Huang et al., Nature Physics 2020] to build an intermediate primitive with classical outputs, which we call a (quantum) one-way puzzle. Our main technical contribution is a proof that one-way puzzles imply quantum bit commitments. This proof develops new techniques for pseudoentropy generation [Håstad et al., SICOMP 1999] from arbitrary distributions, which may be of independent interest.

Publisher's Version
A One-Query Lower Bound for Unitary Synthesis and Breaking Quantum Cryptography
Alex Lombardi, Fermi Ma, and John Wright
(Princeton University, USA; Simons Institute for the Theory of Computing, Berkeley, USA; University of California at Berkeley, USA)
The Unitary Synthesis Problem (Aaronson-Kuperberg 2007) asks whether any n-qubit unitary U can be implemented by an efficient quantum algorithm A augmented with an oracle that computes an arbitrary Boolean function f. In other words, can the task of implementing any unitary be efficiently reduced to the task of implementing any Boolean function? In this work, we prove a one-query lower bound for unitary synthesis. We show that there exist unitaries U such that no quantum polynomial-time oracle algorithm A^f can implement U, even approximately, if it only makes one (quantum) query to f. Our approach also has implications for quantum cryptography: we prove (relative to a random oracle) the existence of quantum cryptographic primitives that remain secure against all one-query adversaries A^f. Since such one-query algorithms can decide any language, solve any classical search problem, and even prepare any quantum state, our result suggests that implementing random unitaries and breaking quantum cryptography may be harder than all of these tasks. To prove this result, we formulate unitary synthesis as an efficient challenger-adversary game, which enables proving lower bounds by analyzing the maximum success probability of an adversary A^f. Our main technical insight is to identify a natural spectral relaxation of the one-query optimization problem, which we bound using tools from random matrix theory. We view our framework as a potential avenue to rule out polynomial-query unitary synthesis, and we state conjectures in this direction.

Publisher's Version
Two Prover Perfect Zero Knowledge for MIP*
Kieran Mastel and William Slofstra
(University of Waterloo, Canada)
The recent MIP*=RE theorem of Ji, Natarajan, Vidick, Wright, and Yuen shows that the complexity class MIP* of multiprover proof systems with entangled provers contains all recursively enumerable languages. Prior work of Grilo, Slofstra, and Yuen [FOCS '19] further shows (via a technique called simulatable codes) that every language in MIP* has a perfect zero knowledge (PZK) MIP* protocol. The MIP*=RE theorem uses two-prover one-round proof systems, and hence such systems are complete for MIP*. However, the construction in Grilo, Slofstra, and Yuen uses six provers, and there is no obvious way to get perfect zero knowledge with two provers via simulatable codes. This leads to a natural question: are there two-prover PZK-MIP* protocols for all of MIP*? In this paper, we show that every language in MIP* has a two-prover one-round PZK-MIP* protocol, answering the question in the affirmative. For the proof, we use a new method based on a key consequence of the MIP*=RE theorem, which is that every MIP* protocol can be turned into a family of boolean constraint system (BCS) nonlocal games. This makes it possible to work with MIP* protocols as boolean constraint systems, and in particular allows us to use a variant of a construction due to Dwork, Feige, Kilian, Naor, and Safra [Crypto '92] which gives a classical MIP protocol for 3SAT with perfect zero knowledge. To show quantum soundness of this classical construction, we develop a toolkit for analyzing quantum soundness of reductions between BCS games, which we expect to be useful more broadly. This toolkit also applies to commuting operator strategies, and our argument shows that every language with a commuting operator BCS protocol has a two prover PZK commuting operator protocol.

Publisher's Version
How to Use Quantum Indistinguishability Obfuscation
Andrea Coladangelo and Sam Gunn
(University of Washington, USA; University of California at Berkeley, USA)
Quantum copy protection, introduced by Aaronson, enables giving out a quantum program-description that cannot be meaningfully duplicated. Despite over a decade of study, copy protection is only known to be possible for a very limited class of programs. As our first contribution, we show how to achieve "best-possible" copy protection for all programs. We do this by introducing quantum state indistinguishability obfuscation (qsiO), a notion of obfuscation for quantum descriptions of classical programs. We show that applying qsiO to a program immediately achieves best-possible copy protection. Our second contribution is to show that, assuming injective one-way functions exist, qsiO is concrete copy protection for a large family of puncturable programs --- significantly expanding the class of copy-protectable programs. A key tool in our proof is a new variant of unclonable encryption (UE) that we call coupled unclonable encryption (cUE). While constructing UE in the standard model remains an important open problem, we are able to build cUE from one-way functions. If we additionally assume the existence of UE, then we can further expand the class of puncturable programs for which qsiO is copy protection. Finally, we construct qsiO relative to an efficient quantum oracle.

Publisher's Version
Quantum State Obfuscation from Classical Oracles
James Bartusek, Zvika Brakerski, and Vinod Vaikuntanathan
(University of California at Berkeley, USA; Weizmann Institute of Science, Israel; Massachusetts Institute of Technology, USA)
A major unresolved question in quantum cryptography is whether it is possible to obfuscate arbitrary quantum computation. Indeed, there is much yet to understand about the feasibility of quantum obfuscation even in the classical oracle model, where one is given for free the ability to obfuscate any classical circuit. In this work, we develop a new array of techniques that we use to construct a quantum state obfuscator, a powerful notion formalized recently by Coladangelo and Gunn (arXiv:2311.07794) in their pursuit of better software copy-protection schemes. Quantum state obfuscation refers to the task of compiling a quantum program, consisting of a quantum circuit C with a classical description and an auxiliary quantum state ψ, into a functionally-equivalent obfuscated quantum program that hides as much as possible about C and ψ. We prove the security of our obfuscator when applied to any pseudo-deterministic quantum program, i.e., one that computes a (nearly) deterministic classical-input/classical-output functionality. Our security proof is with respect to an efficient classical oracle, which may be heuristically instantiated using quantum-secure indistinguishability obfuscation for classical circuits. Our result improves upon the recent work of Bartusek, Kitagawa, Nishimaki, and Yamakawa (STOC 2023), who also showed how to obfuscate pseudo-deterministic quantum circuits in the classical oracle model, but only ones with a completely classical description. Furthermore, our result answers a question of Coladangelo and Gunn, who provide a construction of quantum state indistinguishability obfuscation with respect to a quantum oracle but leave the existence of a concrete real-world candidate as an open problem. Indeed, our quantum state obfuscator together with Coladangelo-Gunn gives the first candidate realization of a "best-possible" copy-protection scheme for all polynomial-time functionalities. Our techniques deviate significantly from previous works on quantum obfuscation. We develop several novel technical tools which we expect to be broadly useful in quantum cryptography. These tools include a publicly-verifiable, linearly-homomorphic quantum authentication scheme with classically-decodable ZX measurements (which we build from coset states), and a method for compiling any quantum circuit into a "linear + measurement" (LM) quantum program: an alternating sequence of CNOT operations and partial ZX measurements.

Publisher's Version
Nonlocality under Computational Assumptions
Grzegorz Gluch, Khashayar Barooti, Alexandru Gheorghiu, and Marc-Olivier Renou
(EPFL, Lausanne, Switzerland; Aztec Labs, United Kingdom; Chalmers University of Technology, Sweden; Inria - Université Paris-Saclay - CPHT - École Polytechnique - Institut Polytechnique de Paris, France)
Nonlocality and its connections to entanglement are fundamental features of quantum mechanics that have found numerous applications in quantum information science. A set of correlations is said to be nonlocal if it cannot be reproduced by spacelike-separated parties sharing randomness and performing local operations. An important practical consideration is that the runtime of the parties has to be shorter than the time it takes light to travel between them. One way to model this restriction is to assume that the parties are computationally bounded. We therefore initiate the study of nonlocality under computational assumptions and derive the following results:
(a) We define the set NEL (not-efficiently-local) as consisting of all bipartite states whose correlations arising from local measurements cannot be reproduced with shared randomness and polynomial-time local operations. (b) Under the assumption that the Learning With Errors problem cannot be solved in quantum polynomial-time, we show that NEL=ENT, where ENT is the set of all bipartite entangled states (pure and mixed). This is in contrast to the standard notion of nonlocality where it is known that some entangled states, e.g. Werner states, are local. In essence, we show that there exist (efficient) local measurements producing correlations that cannot be reproduced through shared randomness and quantum polynomial-time computation. (c) We prove that if NEL=ENT unconditionally, then BQP≠PP. In other words, the ability to certify all bipartite entangled states against computationally bounded adversaries gives a non-trivial separation of complexity classes. (d) Using (c), we show that a certain natural class of 1-round delegated quantum computation protocols that are sound against PP provers cannot exist.

Publisher's Version

6C

Detecting Low-Degree Truncation
Anindya De, Huan Li, Shivam Nadimpalli, and Rocco A. Servedio
(University of Pennsylvania, USA; Columbia University, USA)
We consider the following basic, and very broad, statistical problem: given a known high-dimensional distribution D over ℝ^n and a collection of data points in ℝ^n, distinguish between the two possibilities that (i) the data was drawn from D, versus (ii) the data was drawn from D|S, i.e., from D subject to truncation by an unknown truncation set S ⊆ ℝ^n. We study this problem in the setting where D is a high-dimensional i.i.d. product distribution and S is an unknown degree-d polynomial threshold function (one of the most well-studied types of Boolean-valued functions over ℝ^n). Our main results are an efficient algorithm when D is a hypercontractive distribution, and a matching lower bound: 1. For any constant d, we give a polynomial-time algorithm which successfully distinguishes D from D|S using O(n^{d/2}) samples (subject to mild technical conditions on D and S); 2. Even for the simplest case of D being the uniform distribution over {±1}^n, we show that for any constant d, any distinguishing algorithm for degree-d polynomial threshold functions must use Ω(n^{d/2}) samples.

Publisher's Version
Optimal Non-adaptive Tolerant Junta Testing via Local Estimators
Shivam Nadimpalli and Shyamal Patel
(Columbia University, USA)
We give a non-adaptive algorithm that makes 2^{O(√(k log(1/(ε_2 − ε_1))))} queries to a Boolean function f : {±1}^n → {±1} and distinguishes between f being ε_1-close to some k-junta versus ε_2-far from every k-junta. At the heart of our algorithm is a local mean estimation procedure for Boolean functions that may be of independent interest. We complement our upper bound with a matching lower bound, improving a recent lower bound obtained by Chen et al. We thus obtain the first tight bounds for a natural property of Boolean functions in the tolerant testing model.

Publisher's Version
Distribution-Free Testing of Decision Lists with a Sublinear Number of Queries
Xi Chen, Yumou Fei, and Shyamal Patel
(Columbia University, USA; Peking University, China)
We give a distribution-free testing algorithm for decision lists with Õ(n^{11/12}) queries. This is the first sublinear algorithm for this problem, which shows that, unlike for halfspaces, testing is strictly easier than learning for decision lists. Complementing the algorithm, we show that any distribution-free tester for decision lists must make Ω(√n) queries, or draw Ω(n) samples when the algorithm is sample-based.

Publisher's Version
On the Power of Interactive Proofs for Learning
Tom Gur, Mohammad Mahdi Jahanara, Mohammad Mahdi Khodabandeh, Ninad Rajgopal, Bahar Salamatian, and Igor Shinkar
(University of Cambridge, United Kingdom; Simon Fraser University, Canada; Qualcomm, Canada)
We continue the study of doubly-efficient proof systems for verifying agnostic PAC learning, for which we obtain the following results. We construct an interactive protocol for learning the t largest Fourier characters of a given function f : {0,1}^n → {0,1} up to an arbitrarily small error, wherein the verifier uses poly(t) random examples. This improves upon the Interactive Goldreich-Levin protocol of Goldwasser, Rothblum, Shafer, and Yehudayoff (ITCS 2021), whose sample complexity is poly(t, n). For agnostically learning the class AC^0[2] under the uniform distribution, we build on the work of Carmosino, Impagliazzo, Kabanets, and Kolokolova (APPROX/RANDOM 2017) and design an interactive protocol where, given a function f : {0,1}^n → {0,1}, the verifier learns the closest hypothesis up to a polylog(n) multiplicative factor, using quasi-polynomially many random examples. In contrast, this class has been notoriously resistant even to constructing realisable learners (without a prover) from random examples. For agnostically learning k-juntas under the uniform distribution, we obtain an interactive protocol where the verifier uses O(2^k) random examples to a given function f : {0,1}^n → {0,1}. Crucially, the sample complexity of the verifier is independent of n. We also show that if we do not insist on doubly-efficient proof systems, then the model becomes trivial: specifically, we show a protocol for an arbitrary class C of Boolean functions in the distribution-free setting, where the verifier uses O(1) labeled examples to learn f.

Publisher's Version
Complexity-Theoretic Implications of Multicalibration
Sílvia Casacuberta, Cynthia Dwork, and Salil Vadhan
(University of Oxford, United Kingdom; Harvard University, USA)
We present connections between the recent literature on multigroup fairness for prediction algorithms and classical results in computational complexity. Multiaccurate predictors are correct in expectation on each member of an arbitrary collection of pre-specified sets. Multicalibrated predictors satisfy a stronger condition: they are calibrated on each set in the collection. Multiaccuracy is equivalent to a regularity notion for functions defined by Trevisan, Tulsiani, and Vadhan (2009). They showed that, given a class F of (possibly simple) functions, an arbitrarily complex function g can be approximated by a low-complexity function h that makes a small number of oracle calls to members of F, where the notion of approximation requires that h cannot be distinguished from g by members of F. This complexity-theoretic Regularity Lemma is known to have implications in different areas, including in complexity theory, additive number theory, information theory, graph theory, and cryptography. Starting from the stronger notion of multicalibration, we obtain stronger and more general versions of a number of applications of the Regularity Lemma, including the Hardcore Lemma, the Dense Model Theorem, and the equivalence of conditional pseudo-min-entropy and unpredictability. For example, we show that every boolean function (regardless of its hardness) has a small collection of disjoint hardcore sets, where the sizes of those hardcore sets are related to how balanced the function is on corresponding pieces of an efficient partition of the domain.

Publisher's Version

6D

Local Geometry of NAE-SAT Solutions in the Condensation Regime
Allan Sly and Youngtak Sohn
(Princeton University, USA; Massachusetts Institute of Technology, USA)
The local behavior of typical solutions of random constraint satisfaction problems (CSPs) describes many important phenomena, including clustering thresholds, decay of correlations, and the behavior of message-passing algorithms. When the constraint density is low, studying the planted model is a powerful technique for determining this local behavior, which in many examples has a simple Markovian structure. Work of Coja-Oghlan, Kapetanopoulos, and Müller (2020) showed that for a wide class of models, this description applies up to the so-called condensation threshold. Understanding the local behavior after the condensation threshold is more complex, due to long-range correlations. In this work, we revisit the random regular NAE-SAT model in the condensation regime and determine the local weak limit which describes a random solution around a typical variable. This limit exhibits a complicated non-Markovian structure arising from the space of solutions being dominated by a small number of large clusters. This is the first description of the local weak limit in the condensation regime for any sparse random CSP in the one-step replica symmetry breaking (1RSB) class. Our result is non-asymptotic, and characterizes the tight fluctuation O(n^{−1/2}) around the limit. Our proof is based on coupling the local neighborhoods of an infinite spin system, which encodes the structure of the clusters, to a broadcast model on trees whose channel is given by the 1RSB belief-propagation fixed point. We believe that our proof technique has broad applicability to random CSPs in the 1RSB class.

Publisher's Version
Trickle-Down in Localization Schemes and Applications
Nima Anari, Frederic Koehler, and Thuy-Duong Vuong
(Stanford University, USA; University of Chicago, USA)
Trickle-down is a phenomenon in high-dimensional expanders with many important applications — for example, it is a key ingredient in various constructions of high-dimensional expanders or the proof of rapid mixing for the basis exchange walk on matroids and in the analysis of log-concave polynomials. We formulate a generalized trickle-down equation in the abstract context of linear-tilt localization schemes. Building on this generalization, we improve the best-known results for several Markov chain mixing or sampling problems — for example, we improve the threshold up to which Glauber dynamics is known to mix rapidly in the Sherrington-Kirkpatrick spin glass model. Other applications of our framework include near-linear time sampling algorithms from the antiferromagnetic Ising model and the fixed magnetization (antiferromagnetic or ferromagnetic) Ising model on expanders. For this application, we use a new dynamics inspired by polarization, a technique from the theory of stable polynomials.
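One concrete beneficiary is Glauber dynamics for the Ising model, sketched below (ours, on an arbitrary graph with a uniform coupling β; the paper's improvement concerns the Sherrington-Kirkpatrick model with Gaussian couplings): each step resamples one spin from its conditional distribution.

    import math
    import random

    def glauber_ising(adj, beta, steps):
        # Heat-bath Glauber dynamics: pick a uniform vertex and resample
        # its spin given its neighbors; P(s_v = +1 | rest) is the
        # logistic function of twice beta times the local field.
        spins = {v: random.choice([-1, 1]) for v in adj}
        vs = list(adj)
        for _ in range(steps):
            v = random.choice(vs)
            field = sum(spins[u] for u in adj[v])
            p_plus = 1.0 / (1.0 + math.exp(-2.0 * beta * field))
            spins[v] = 1 if random.random() < p_plus else -1
        return spins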

Publisher's Version
Optimal Embedding Dimension for Sparse Subspace Embeddings
Shabarish Chenakkod, Michał Dereziński, Xiaoyu Dong, and Mark Rudelson
(University of Michigan, USA)
A random m × n matrix S is an oblivious subspace embedding (OSE) with parameters ε > 0, δ ∈ (0,1/3), and d ≤ m ≤ n, if for any d-dimensional subspace W ⊆ ℝ^n, P(∀x ∈ W: (1+ε)^{−1} ‖x‖ ≤ ‖Sx‖ ≤ (1+ε) ‖x‖) ≥ 1 − δ. It is known that the embedding dimension of an OSE must satisfy m ≥ d, and for any θ > 0, a Gaussian embedding matrix with m ≥ (1+θ)d is an OSE with ε = O_θ(1). However, such an optimal embedding dimension is not known for other embeddings. Of particular interest are sparse OSEs, having s ≪ m non-zeros per column (Clarkson and Woodruff, STOC 2013), with applications to problems such as least squares regression and low-rank approximation.
We show that, given any θ > 0, an m × n random matrix S with m ≥ (1+θ)d consisting of randomly sparsified ±1/√s entries and having s = O(log^4 d) non-zeros per column is an oblivious subspace embedding with ε = O_θ(1). Our result addresses the main open question posed by Nelson and Nguyen (FOCS 2013), who conjectured that sparse OSEs can achieve m = O(d) embedding dimension, and it improves on the m = O(d log d) bound shown by Cohen (SODA 2016). We use this to construct the first oblivious subspace embedding with O(d) embedding dimension that can be applied faster than current matrix multiplication time, and to obtain an optimal single-pass algorithm for least squares regression.
We further extend our results to Leverage Score Sparsification (LESS), which is a recently introduced non-oblivious embedding technique. We use LESS to construct the first subspace embedding with low distortion ε = o(1) and optimal embedding dimension m = O(d/ε^2) that can be applied in current matrix multiplication time, addressing a question posed by Cherapanamjeri, Silwal, Woodruff, and Zhou (SODA 2023).
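A quick empirical sanity check of the main construction is easy to run (our sketch; s is set to a small constant here rather than the O(log^4 d) of the theorem): draw the sparsified ±1/√s matrix and measure the distortion on a vector from a random d-dimensional subspace.

    import numpy as np

    rng = np.random.default_rng(0)
    n, d, theta = 2000, 50, 0.5
    m = int((1 + theta) * d)   # embedding dimension m = (1 + theta) d
    s = 8                      # nonzeros per column; theorem: O(log^4 d)

    # Each column gets s entries of +-1/sqrt(s) in s distinct random rows.
    S = np.zeros((m, n))
    for col in range(n):
        rows = rng.choice(m, size=s, replace=False)
        S[rows, col] = rng.choice([-1.0, 1.0], size=s) / np.sqrt(s)

    # Distortion on one vector from a random d-dimensional subspace
    # (the OSE guarantee is uniform over the whole subspace).
    W = np.linalg.qr(rng.standard_normal((n, d)))[0]
    x = W @ rng.standard_normal(d)
    print(np.linalg.norm(S @ x) / np.linalg.norm(x))  # should be Theta(1)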

Publisher's Version
Solving Dense Linear Systems Faster Than via Preconditioning
Michał Dereziński and Jiaming Yang
(University of Michigan, USA)
We give a stochastic optimization algorithm that solves a dense n × n real-valued linear system Ax = b, returning x such that ‖Ax − b‖ ≤ ε‖b‖, in time Õ((n^2 + n k^{ω−1}) log(1/ε)), where k is the number of singular values of A larger than O(1) times its smallest positive singular value, ω < 2.372 is the matrix multiplication exponent, and Õ hides a factor poly-logarithmic in n. When k = O(n^{1−θ}) (namely, A has a flat-tailed spectrum, e.g., due to noisy data or regularization), this improves on both the cost of solving the system directly, as well as on the cost of preconditioning an iterative method such as conjugate gradient. In particular, our algorithm has an Õ(n^2) runtime when k = O(n^{0.729}). We further adapt this result to sparse positive semidefinite matrices and least squares regression. Our main algorithm can be viewed as a randomized block coordinate descent method, where the key challenge is simultaneously ensuring good convergence and fast per-iteration time. In our analysis, we use the theory of majorization for elementary symmetric polynomials to establish a sharp convergence guarantee when coordinate blocks are sampled using a determinantal point process. We then use a Markov chain coupling argument to show that similar convergence can be attained with a cheaper sampling scheme, and accelerate the block coordinate descent update via matrix sketching.
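A stripped-down member of the algorithm family analyzed here is randomized block Kaczmarz, sketched below (ours; it samples blocks uniformly, whereas the paper's analysis uses determinantal point process sampling plus sketching to accelerate the update).

    import numpy as np

    def block_kaczmarz(A, b, block_size=50, iters=2000, seed=0):
        # Randomized block update: project the iterate onto the solution
        # space of a random block of equations. np.linalg.lstsq returns
        # the minimum-norm correction, i.e. the pseudoinverse step.
        rng = np.random.default_rng(seed)
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            I = rng.choice(A.shape[0], size=block_size, replace=False)
            r = b[I] - A[I] @ x
            x += np.linalg.lstsq(A[I], r, rcond=None)[0]
        return x

On a consistent system the iterates converge to a solution at a rate governed by the spectrum of A, which is where the flat-tail condition on the top k singular values enters.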

Publisher's Version
Improving the Bit Complexity of Communication for Distributed Convex Optimization
Mehrdad Ghadiri, Yin Tat Lee, Swati Padmanabhan, William Swartworth, David P. Woodruff, and Guanghao Ye
(Massachusetts Institute of Technology, USA; University of Washington, USA; Microsoft Research, USA; Carnegie Mellon University, USA)
We consider the communication complexity of some fundamental convex optimization problems in the point-to-point (coordinator) and blackboard communication models. We strengthen known bounds for approximately solving linear regression, p-norm regression (for 1≤ p≤ 2), linear programming, minimizing the sum of finitely many convex nonsmooth functions with varying supports, and low rank approximation; for a number of these fundamental problems our bounds are nearly optimal, as proven by our lower bounds. Among our techniques, we use the notion of block leverage scores, which have been relatively unexplored in this context, as well as dropping all but the “middle” bits in Richardson-style algorithms. We also introduce a new communication problem for accurately approximating inner products and establish a lower bound using the spherical Radon transform. Our lower bound can be used to show the first separation of linear programming and linear systems in the distributed model when the number of constraints is polynomial, addressing an open question in prior work.

Publisher's Version

7A

Fully Dynamic All-Pairs Shortest Paths: Likely Optimal Worst-Case Update Time
Xiao Mao
(Stanford University, USA)
The All-Pairs Shortest Paths (APSP) problem is one of the fundamental problems in theoretical computer science. It asks to compute the distance matrix of a given n-vertex graph. We revisit the classical problem of maintaining the distance matrix under a fully dynamic setting undergoing vertex insertions and deletions, with fast worst-case running time and efficient space usage. Although an algorithm with amortized update time Õ(n^2) has been known for nearly two decades [Demetrescu and Italiano, STOC 2003], the current best algorithm for worst-case running time with efficient space usage is due to [Gutenberg and Wulff-Nilsen, SODA 2020], which improves the space usage of the previous algorithm of [Abraham, Chechik, and Krinninger, SODA 2017] to Õ(n^2) but fails to improve on their running time of Õ(n^{2+2/3}). It has been conjectured that no algorithm with O(n^{2.5−ε}) worst-case update time exists. For graphs without negative cycles, we meet this conjectured lower bound by introducing a Monte Carlo algorithm running in randomized Õ(n^{2.5}) time while keeping the Õ(n^2) space bound of the previous algorithm. Our breakthrough is made possible by the idea of "hop-dominant shortest paths," which are shortest paths with a constraint on the number of hops (vertices) that remain shortest after we relax the constraint by a constant factor.

Publisher's Version
Space Lower Bounds for Dynamic Filters and Value-Dynamic Retrieval
William Kuszmaul and Stefan Walzer
(Harvard University, USA; KIT, Karlsruhe, Germany)
A filter is a data structure that answers approximate membership queries on a set S of n elements, with a false-positive rate of ε. A filter is said to be dynamic if it supports insertions/deletions to the set S, subject to a capacity constraint of n. This paper considers the space requirement of filters, regardless of running time. It has been known for decades that static filters have optimal space n log ε^{−1} + O(1) expected bits, and that dynamic filters can be implemented in space n log ε^{−1} + Θ(n) bits. We prove that this Θ(n)-bit gap is fundamental: any dynamic filter must use n log ε^{−1} + Ω(n) bits, no matter the choice of ε. Extending our techniques, we also obtain a lower bound for the value-dynamic retrieval problem. Here again, we show that there is a Θ(n)-bit gap between the optimal static and (value-)dynamic solutions.
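For scale, the classic Bloom filter, sketched below (ours), is an insertion-only filter using about 1.44 n log_2(1/ε) bits, already a constant factor above the static optimum; the paper shows that supporting deletions as well forces an additive Ω(n) bits over n log_2(1/ε) no matter the design.

    import math

    class BloomFilter:
        # Insertion-only filter with ~1.44 * n * log2(1/eps) bits and
        # log2(1/eps) hash functions (deletions would need e.g. counters).
        def __init__(self, n, eps):
            self.m = math.ceil(1.44 * n * math.log2(1 / eps))
            self.k = max(1, round(math.log2(1 / eps)))
            self.bits = 0
        def _positions(self, item):
            return [hash((item, i)) % self.m for i in range(self.k)]
        def add(self, item):
            for p in self._positions(item):
                self.bits |= 1 << p
        def __contains__(self, item):
            return all((self.bits >> p) & 1 for p in self._positions(item))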

Publisher's Version
Almost-Linear Time Algorithms for Incremental Graphs: Cycle Detection, SCCs, s-t Shortest Path, and Minimum-Cost Flow
Li Chen, Rasmus Kyng, Yang P. Liu, Simon Meierhans, and Maximilian Probst Gutenberg
(Carnegie Mellon University, USA; ETH Zurich, Switzerland; Institute for Advanced Study, Princeton, USA)
We give the first almost-linear time algorithms for several problems on incremental graphs, including cycle detection, strongly connected component maintenance, s-t shortest path, maximum flow, and minimum-cost flow. To solve these problems, we give a deterministic data structure that returns an m^{o(1)}-approximate minimum-ratio cycle in fully dynamic graphs, in amortized m^{o(1)} time per update. Combining this with the interior point method framework of Brand-Liu-Sidford (STOC 2023) gives the first almost-linear time algorithm for deciding the first update in an incremental graph after which the cost of the minimum-cost flow attains value at most some given threshold F. By rather direct reductions to minimum-cost flow, we are then able to solve the problems on incremental graphs mentioned above.
Our new data structure also leads to a modular and deterministic almost-linear time algorithm for minimum-cost flow, by removing the need for complicated modeling of a restricted adversary, in contrast to the recent randomized and deterministic algorithms for minimum-cost flow in Chen-Kyng-Liu-Peng-Probst Gutenberg-Sachdeva (FOCS 2022) and Brand-Chen-Kyng-Liu-Peng-Probst Gutenberg-Sachdeva-Sidford (FOCS 2023).
At a high level, our algorithm dynamizes the ℓ_1 oblivious routing of Rozhoň-Grunau-Haeupler-Zuzic-Li (STOC 2022), and develops a method to extract an approximate minimum-ratio cycle from the structure of the oblivious routing. To maintain the oblivious routing, we use tools from the concurrent work of Kyng-Meierhans-Probst Gutenberg (STOC 2024), which designed vertex sparsifiers for shortest paths, in order to maintain a sparse neighborhood cover in fully dynamic graphs.
To find a cycle, we first show that an approximate minimum-ratio cycle can be represented as a fundamental cycle on a small set of trees resulting from the oblivious routing. Then, we find a cycle whose quality is comparable to the best tree cycle. This final cycle-query step involves vertex- and edge-sparsification procedures reminiscent of the techniques introduced in Chen-Kyng-Liu-Peng-Probst Gutenberg-Sachdeva (FOCS 2022), but crucially requires a more powerful dynamic spanner which can handle far more edge insertions than prior work. We build such a spanner via a construction that hearkens back to the classic greedy spanner algorithm of Althöfer-Das-Dobkin-Joseph-Soares (Discrete & Computational Geometry 1993).

Publisher's Version
A Dynamic Shortest Paths Toolbox: Low-Congestion Vertex Sparsifiers and Their Applications
Rasmus Kyng, Simon Meierhans, and Maximilian Probst Gutenberg
(ETH Zurich, Switzerland)
We present a general toolbox, based on new vertex sparsifiers, for designing data structures to maintain shortest paths in graphs undergoing edge insertions and/or deletions. In particular, we obtain the following results:
the first data structure to maintain m^{o(1)}-approximate all-pairs shortest paths (APSP) in an m-edge graph undergoing edge insertions and deletions with worst-case update time m^{o(1)} and query time Õ(1); a data structure to maintain a tree T whose diameter is within a subpolynomial factor of that of the underlying graph G undergoing edge insertions and deletions, where each update is handled in amortized subpolynomial time; and a simpler and more efficient data structure to maintain a (1+є)-approximate single-source shortest paths (SSSP) tree T in a graph undergoing edge deletions in amortized time m^{o(1)} per update.
All our data structures are deterministic. For the last two data structures, we further have that while the trees T are not subgraphs of G, they do embed with small edge congestion into G. This is in stark contrast to previous approaches and is particularly useful for algorithms that use these data structures internally to route flow along shortest paths. To illustrate the power of our new toolbox, we show that our SSSP data structure can be used directly to give a deterministic implementation of the classic MWU algorithm for approximate undirected minimum-cost flow running in time m^{1+o(1)}. Previously, Bernstein-Gutenberg-Saranurak [FOCS'21] had built a randomized data structure achieving m^{1+o(1)} time with high probability. By using our SSSP data structure in the recent almost-linear time algorithm for computing Gomory-Hu trees by Abboud-Li-Panigrahi-Saranurak [FOCS'23], we simplify their algorithm significantly and slightly improve their runtime. To obtain our toolbox, we give the first algorithm that, given a graph G undergoing edge insertions and deletions and a dynamic terminal set A, maintains a vertex sparsifier H that approximately preserves distances between terminals in A, consists of at most |A|·m^{o(1)} vertices and edges, and can be updated in worst-case time m^{o(1)}. Crucially, our vertex sparsifier construction allows us to maintain a low edge-congestion embedding of H into G. This low-congestion embedding is needed when using our toolbox in data structures that are in turn used to implement algorithms routing flows along shortest paths. A concurrent work of Chen-Kyng-Liu-Meierhans-Probst Gutenberg [STOC'24] takes our toolbox as the starting point for developing new data structures that solve min-ratio cycle problems on fully dynamic graphs, which in turn leads to the first almost-linear time algorithms for a host of problems on incremental graphs including cycle detection, maintaining SCCs, s-t shortest paths, and minimum-cost flow.
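
For context, the MWU flow pattern referenced above repeatedly routes along a shortest path under edge lengths that grow multiplicatively with congestion, so each iteration is exactly one SSSP query; the paper's data structure is what makes those queries fast under deletions. Below is a rough static sketch in the spirit of the classic Garg-Könemann method for approximate maximum flow (not the paper's minimum-cost flow routine; Dijkstra stands in for the dynamic SSSP oracle, the graph is assumed connected, and all names are ours):

    import heapq
    from math import log

    def dijkstra(n, edges, lengths, s, t):
        # Shortest s-t path under edge `lengths`; returns a list of edge indices.
        adj = [[] for _ in range(n)]
        for i, (u, v, _) in enumerate(edges):
            adj[u].append((v, i))
            adj[v].append((u, i))          # undirected graph
        dist = [float("inf")] * n
        prev = [None] * n
        dist[s] = 0.0
        pq = [(0.0, s)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist[u]:
                continue
            for v, i in adj[u]:
                if d + lengths[i] < dist[v]:
                    dist[v] = d + lengths[i]
                    prev[v] = (u, i)
                    heapq.heappush(pq, (dist[v], v))
        path, x = [], t
        while x != s:                      # assumes t is reachable from s
            u, i = prev[x]
            path.append(i)
            x = u
        return path

    def mwu_max_flow(n, edges, s, t, eps=0.1):
        # Garg-Konemann MWU: route along the current shortest path, then grow
        # the lengths of the used edges multiplicatively. Each iteration is a
        # single SSSP query, the step a dynamic SSSP structure accelerates.
        m = len(edges)
        delta = (1 + eps) / ((1 + eps) * m) ** (1 / eps)
        lengths = [delta / cap for (_, _, cap) in edges]
        flow = [0.0] * m
        while True:
            path = dijkstra(n, edges, lengths, s, t)
            if sum(lengths[i] for i in path) >= 1:
                break
            c = min(edges[i][2] for i in path)      # bottleneck capacity
            for i in path:
                flow[i] += c
                lengths[i] *= 1 + eps * c / edges[i][2]
        scale = log((1 + eps) / delta, 1 + eps)     # makes the flow feasible
        return [f / scale for f in flow]

    edges = [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 1.0)]  # (u, v, capacity)
    print(mwu_max_flow(3, edges, 0, 2))              # approx max flow of value 2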

Publisher's Version
Dynamic O(Arboricity) Coloring in Polylogarithmic Worst-Case Time
Mohsen Ghaffari and Christoph Grunau
(Massachusetts Institute of Technology, USA; ETH Zurich, Switzerland)
A recent work by Christiansen, Nowicki, and Rotenberg [STOC’23] provides dynamic algorithms for coloring sparse graphs, concretely as a function of the graph’s arboricity α. They give two randomized algorithms: O(α log α) implicit coloring in poly(log n) worst-case update and query times, and O(min{α log α, α log log log n}) implicit coloring in poly(log n) amortized update and query times (against an oblivious adversary). We improve these results in terms of the number of colors and the time guarantee: First, we present an extremely simple algorithm that computes an O(α)-implicit coloring with poly(log n) amortized update and query times. Second, and as the main technical contribution of our work, we show that the time complexity guarantee can be strengthened from amortized to worst-case. That is, we give a dynamic algorithm for implicit O(α)-coloring with poly(log n) worst-case update and query times (against an oblivious adversary).
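
For intuition about the O(α) color count: statically, a graph of arboricity α always has a vertex of degree at most 2α, so peeling minimum-degree vertices and greedily coloring in reverse order uses at most 2α + 1 colors. A sketch of this classical static analogue (ours; the paper's difficulty is maintaining such a coloring dynamically in polylogarithmic worst-case time):

    import heapq

    def degeneracy_order(adj):
        # Arboricity alpha implies some vertex has degree <= 2*alpha, so every
        # vertex removed by minimum-degree peeling has degree <= 2*alpha.
        deg = {v: len(ns) for v, ns in adj.items()}
        heap = [(d, v) for v, d in deg.items()]
        heapq.heapify(heap)
        removed, order = set(), []
        while heap:
            d, v = heapq.heappop(heap)
            if v in removed or d != deg[v]:          # stale heap entry
                continue
            removed.add(v)
            order.append(v)
            for u in adj[v]:
                if u not in removed:
                    deg[u] -= 1
                    heapq.heappush(heap, (deg[u], u))
        return order

    def greedy_color(adj):
        # Color in reverse peeling order: each vertex sees <= 2*alpha already-
        # colored neighbors, so at most 2*alpha + 1 = O(alpha) colors are used.
        color = {}
        for v in reversed(degeneracy_order(adj)):
            used = {color[u] for u in adj[v] if u in color}
            color[v] = next(c for c in range(len(used) + 1) if c not in used)
        return color

    adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2]}  # 4-cycle + chord
    print(greedy_color(adj))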

Publisher's Version

7B

Settling the Communication Complexity of VCG-Based Mechanisms for All Approximation Guarantees
Frederick Qiu and S. Matthew Weinberg
(Princeton University, USA)
We consider truthful combinatorial auctions with items M = [m] for sale to n bidders, where each bidder i has a private monotone valuation function v_i: 2^M → ℝ_+. Among truthful mechanisms, maximal-in-range (MIR) mechanisms (sometimes called VCG-based) achieve the best-known approximation guarantees among all poly-communication deterministic truthful mechanisms in all previously-studied settings. Our work settles the communication complexity necessary to achieve any approximation guarantee via an MIR mechanism. Specifically:

Let MIRSubMod(m, k) denote the best approximation guarantee achievable by an MIR mechanism using 2^k communication between bidders with submodular valuations over m items. Then for all k = Ω(log(m)), MIRSubMod(m, k) = Ω(√(m/(k·log(m/k)))). When we set k = Θ(log(m)), this improves the previous best lower bound for polynomial communication maximal-in-range mechanisms from Ω(m^{1/3}/log^{2/3}(m)) to Ω(√(m/log(m))). Additionally, MIRSubMod(m, k) = O(√(m/k)). Moreover, our mechanism can be implemented with 2^k simultaneous value queries and computation, and is optimal with respect to the value query and computational/succinct representation models. The mechanism also works for bidders with subadditive valuations. When k = Θ(log(m)), this improves the previous best approximation guarantee for polynomial communication maximal-in-range mechanisms from O(√m) to O(√(m/log(m))).

Let also MIRGen(m, k) denote the best approximation guarantee achievable by an MIR mechanism using 2^k communication between bidders with general valuations over m items. Then for all k = Ω(log(m)), MIRGen(m, k) = Ω(m/k). When k = Θ(log(m)), this improves the previous best lower bound for polynomial communication maximal-in-range mechanisms from Ω(m/log^2(m)) to Ω(m/log(m)). Additionally, MIRGen(m, k) = O(m/k). Moreover, our mechanism can be implemented with 2^k simultaneous value queries and computation, and is optimal with respect to the value query and computational/succinct representation models. When k = Θ(log(m)), this improves the previous best approximation guarantee for polynomial communication maximal-in-range mechanisms from O(m/√(log(m))) to O(m/log(m)).
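
To recall what maximal-in-range means: the mechanism commits to a restricted range R of allocations, selects the welfare-maximizing allocation within R, and charges VCG payments, which is what makes it truthful. A toy Python sketch with explicitly enumerated valuations and range (all names and numbers are ours, purely for illustration):

    def mir_mechanism(valuations, allocation_range):
        # valuations[i]: dict mapping frozenset bundle -> bidder i's value.
        # allocation_range: the fixed range R, a list of tuples of disjoint
        # bundles, one bundle per bidder.
        def welfare(alloc, skip=None):
            return sum(v[b] for i, (v, b) in enumerate(zip(valuations, alloc))
                       if i != skip)

        best = max(allocation_range, key=welfare)    # maximal-in-range step
        payments = []
        for i in range(len(valuations)):
            # VCG payment: the externality bidder i imposes on the others,
            # measured within the range R (this preserves truthfulness).
            without_i = max(welfare(a, skip=i) for a in allocation_range)
            payments.append(without_i - welfare(best, skip=i))
        return best, payments

    E, A, B, AB = frozenset(), frozenset({0}), frozenset({1}), frozenset({0, 1})
    v0 = {E: 0, A: 3, B: 1, AB: 4}
    v1 = {E: 0, A: 2, B: 2, AB: 5}
    rng = [(AB, E), (E, AB), (A, B), (B, A)]         # a tiny range over 2 items
    print(mir_mechanism([v0, v1], rng))              # allocation (E, AB), pays [0, 4]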

Publisher's Version
PPAD-Membership for Problems with Exact Rational Solutions: A General Approach via Convex Optimization
Aris Filos-Ratsikas, Kristoffer Arnsfelt Hansen, Kasper Høgh, and Alexandros Hollender
(University of Edinburgh, United Kingdom; Aarhus University, Aarhus, Denmark; University of Oxford, United Kingdom)
We introduce a general technique for proving membership of search problems with exact rational solutions in PPAD, one of the most well-known classes containing total search problems with polynomial-time verifiable solutions. In particular, we construct a "pseudogate", coined the linear-OPT-gate, which can be used as a "plug-and-play" component in a piecewise-linear (PL) arithmetic circuit, as an integral component of the "Linear-FIXP" equivalent definition of the class. The linear-OPT-gate can solve several convex optimization programs, including quadratic programs, which often appear organically in the simplest existence proofs for these problems. This effectively transforms existence proofs to PPAD-membership proofs, and consequently establishes the existence of solutions described by rational numbers. Using the linear-OPT-gate, we are able to significantly simplify and generalize almost all known PPAD-membership proofs for finding exact solutions in the application domains of game theory, competitive markets, auto-bidding auctions, and fair division, as well as to obtain new PPAD-membership results for problems in these domains.

Publisher's Version
From External to Swap Regret 2.0: An Efficient Reduction for Large Action Spaces
Yuval Dagan, Constantinos Daskalakis, Maxwell Fishelson, and Noah Golowich
(University of California at Berkeley, USA; Massachusetts Institute of Technology, USA)
We provide a novel reduction from swap-regret minimization to external-regret minimization, which improves upon the classical reductions of Blum-Mansour and Stoltz-Lugosi in that it does not require finiteness of the space of actions. We show that, whenever there exists a no-external-regret algorithm for some hypothesis class, there must also exist a no-swap-regret algorithm for that same class. For the problem of learning with expert advice, our result implies that it is possible to guarantee that the swap regret is bounded by є after (log N)^{Õ(1/є)} rounds and with O(N) per-iteration complexity, where N is the number of experts, while the classical reductions of Blum-Mansour and Stoltz-Lugosi require at least Ω(N^2) rounds and at least Ω(N^3) total computational cost. Our result comes with an associated lower bound, which (in contrast to that of Blum-Mansour) holds for oblivious and ℓ1-constrained adversaries and learners that can employ distributions over experts, showing that the number of rounds must be Ω(N^2) or exponential in 1/є.
Our reduction implies that, if no-regret learning is possible in some game, then this game must have approximate correlated equilibria, of arbitrarily good approximation. This strengthens the folklore implication of no-regret learning that approximate coarse correlated equilibria exist. Importantly, it provides a sufficient condition for the existence of approximate correlated equilibrium which vastly extends the requirement that the action set is finite or the requirement that the action set is compact and the utility functions are continuous, allowing for games with finite Littlestone or finite sequential fat-shattering dimension, thus answering a question left open in “Fast rates for nonparametric online learning: from realizability to learning in games” and “Online learning and solving infinite games with an ERM oracle”. Moreover, it answers several outstanding questions about equilibrium computation and/or learning in games. In particular, for constant values of є: (a) we show that є-approximate correlated equilibria in extensive-form games can be computed efficiently, advancing a long-standing open problem for extensive-form games; see e.g. “Extensive-form correlated equilibrium: Definition and computational complexity” and “Polynomial-Time Linear-Swap Regret Minimization in Imperfect-Information Sequential Games”; (b) we show that the query and communication complexities of computing є-approximate correlated equilibria in N-action normal-form games are N · polylog(N) and polylog(N) respectively, advancing an open problem of “Informational Bounds on Equilibria”; (c) we show that є-approximate correlated equilibria of sparsity polylog(N) can be computed efficiently, advancing an open problem of “Simple Approximate Equilibria in Large Games”; (d) finally, we show that in the adversarial bandit setting, sublinear swap regret can be achieved in only Õ(N) rounds, advancing an open problem of “From External to Internal Regret” and “Tight Lower Bound and Efficient Reduction for Swap Regret”.
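
For reference, the classical Blum-Mansour reduction improved upon here runs one external-regret learner per action, stacks their per-round recommendations into a row-stochastic matrix Q, plays a stationary distribution p = pQ, and feeds learner i its loss scaled by p_i. A numpy sketch under full-information losses (learner and parameter choices are illustrative):

    import numpy as np

    class Hedge:
        # Multiplicative-weights external-regret learner over n actions.
        def __init__(self, n, eta=0.1):
            self.w = np.ones(n)
            self.eta = eta
        def distribution(self):
            return self.w / self.w.sum()
        def update(self, loss):                     # loss vector in [0, 1]^n
            self.w *= np.exp(-self.eta * loss)

    def blum_mansour_round(learners, loss):
        # One round of the swap-to-external reduction: stack the learners'
        # recommendations into a stochastic matrix Q, play a stationary
        # distribution of Q, then split the observed loss among the learners.
        Q = np.array([lrn.distribution() for lrn in learners])
        p = np.ones(len(learners)) / len(learners)
        for _ in range(200):                        # power iteration: p = pQ
            p = p @ Q
        for i, lrn in enumerate(learners):
            lrn.update(p[i] * loss)                 # learner i sees p_i-scaled loss
        return p

    rng = np.random.default_rng(0)
    N = 4
    learners = [Hedge(N) for _ in range(N)]         # one learner per action
    for t in range(1000):
        p = blum_mansour_round(learners, rng.random(N))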

Publisher's Version
Fast Swap Regret Minimization and Applications to Approximate Correlated Equilibria
Binghui Peng and Aviad Rubinstein
(Columbia University, USA; Stanford University, USA)
We give a simple and computationally efficient algorithm that, for any constant ε>0, obtains εT-swap regret within only T = polylog(n) rounds; this is an exponential improvement compared to the super-linear number of rounds required by the state-of-the-art algorithm, and resolves the main open problem of []. Our algorithm has an exponential dependence on ε, but we prove a new, matching lower bound. Our algorithm for swap regret implies faster convergence to ε-Correlated Equilibrium (ε-CE) in several regimes: For normal form two-player games with n actions, it implies the first uncoupled dynamics that converges to the set of ε-CE in polylogarithmic rounds; a polylog(n)-bit communication protocol for ε-CE in two-player games (resolving an open problem mentioned by [, , ]); and an Õ(n)-query algorithm for ε-CE (resolving an open problem of [] and obtaining the first separation between ε-CE and ε-Nash equilibrium in the query complexity model). For extensive-form games, our algorithm implies a PTAS for normal form correlated equilibria, a solution concept often conjectured to be computationally intractable (e.g. [, ]).

Publisher's Version
Fair Division via Quantile Shares
Yakov Babichenko, Michal Feldman, Ron Holzman, and Vishnu V. Narayan
(Technion, Israel; Tel Aviv University, Israel)
We consider the problem of fair division, where a set of indivisible goods should be distributed fairly among a set of agents with combinatorial valuations. To capture fairness, we adopt the notion of shares, where each agent is entitled to a fair share, based on some fairness criterion, and an allocation is considered fair if the value of every agent (weakly) exceeds her fair share. A share-based notion is considered universally feasible if it admits a fair allocation for every profile of monotone valuations. A major question arises: is there a non-trivial share-based notion that is universally feasible? The most well-known share-based notions, namely the proportional share and the maximin share, are not universally feasible, nor are any constant approximations of them.
We propose a novel share notion, where an agent assesses the fairness of a bundle by comparing it to her valuation in a random allocation. In this framework, a bundle is considered q-quantile fair, for q∈[0,1], if it is at least as good as a bundle obtained in a uniformly random allocation with probability at least q. Our main question is whether there exists a constant value of q for which the q-quantile share is universally feasible.
Our main result establishes a strong connection between the feasibility of quantile shares and the classical Erdős Matching Conjecture. Specifically, we show that if a version of this conjecture is true, then the 1/(2e)-quantile share is universally feasible. Furthermore, we provide unconditional feasibility results for additive, unit-demand and matroid-rank valuations for constant values of q. Finally, we discuss the implications of our results for other share notions.
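
The q-quantile criterion is easy to test empirically for additive valuations: sample uniformly random allocations (each good to a uniformly random agent) and check how often the bundle's value beats the random allocation's value. A Monte Carlo sketch (values and parameters are made up for illustration):

    import random

    def random_allocation_value(values, n_agents):
        # Agent 0's additive value when every good independently goes to a
        # uniformly random agent (i.e., a uniformly random allocation).
        return sum(v for v in values if random.randrange(n_agents) == 0)

    def is_q_quantile_fair(bundle_value, values, n_agents, q, trials=100_000):
        # Empirical check: is the bundle at least as good as a uniformly
        # random allocation with probability at least q?
        wins = sum(bundle_value >= random_allocation_value(values, n_agents)
                   for _ in range(trials))
        return wins / trials >= q

    values = [3, 1, 4, 1, 5]        # agent 0's additive values for five goods
    print(is_q_quantile_fair(bundle_value=5, values=values, n_agents=3, q=0.5))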

Publisher's Version
Prophet Inequalities with Cancellation Costs
Farbod Ekbatani, Rad Niazadeh, Pranav Nuti, and Jan Vondrák
(University of Chicago, USA; Stanford University, USA)
Most of the literature on online algorithms and sequential decision-making focuses on settings with “irrevocable decisions” where the algorithm’s decision upon arrival of the new input is set in stone and can never change in the future. One canonical example is the classic prophet inequality problem, where realizations of a sequence of independent random variables X_1, X_2, … with known distributions are drawn one by one and a decision maker decides when to stop and accept the arriving random variable, with the goal of maximizing the expected value of their pick. We consider “prophet inequalities with recourse” in the linear buyback cost setting, where after accepting a variable X_i, we can still discard X_i later and accept another variable X_j, at a buyback cost of f × X_i. The goal is to maximize the expected net reward, which is the value of the final accepted variable minus the total buyback cost. Our first main result is an optimal prophet inequality in the regime of f ≥ 1, where we prove that we can achieve an expected reward (1+f)/(1+2f) times the expected offline optimum. The problem is still open for 0<f<1 and we give some partial results in this regime. In particular, as our second main result, we characterize the asymptotic behavior of the competitive ratio for small f and provide almost matching upper and lower bounds that show a factor of 1−Θ(f·log(1/f)). Our results are obtained by two fundamentally different approaches: One is inspired by various proofs of the classical prophet inequality, while the second is based on combinatorial optimization techniques involving LP duality, flows, and cuts.
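
The buyback model itself is simple to simulate: whenever the algorithm discards a previously accepted value x, it pays f·x, and the net reward is the final kept value minus all buyback costs. A Monte Carlo sketch of a naive swap-when-profitable policy (our illustration, not the paper's optimal policy) on i.i.d. uniform values:

    import random

    def greedy_buyback(xs, f):
        # Keep one value at a time; swap to a new arrival whenever the gain
        # exceeds the buyback cost f * (currently held value). The net reward
        # is the final value kept minus all buyback costs paid.
        held, cost = 0.0, 0.0
        for x in xs:
            if x - held > f * held:
                cost += f * held
                held = x
        return held - cost

    def simulate(n=20, f=2.0, trials=100_000):
        total = 0.0
        for _ in range(trials):
            xs = [random.random() for _ in range(n)]   # i.i.d. U[0,1] values
            total += greedy_buyback(xs, f)
        return total / trials

    print(simulate())     # compare to the offline benchmark E[max] = n/(n+1)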

Publisher's Version

7C

Explicit Orthogonal Arrays and Universal Hashing with Arbitrary Parameters
Nicholas Harvey and Arvin Sahami
(University of British Columbia, Canada)
Orthogonal arrays are a type of combinatorial design that emerged in the 1940s in the design of statistical experiments. In 1947, Rao proved a lower bound on the size of any orthogonal array, and raised the problem of constructing arrays of minimum size. Kuperberg, Lovett and Peled (2017) gave a non-constructive existence proof of orthogonal arrays whose size is near-optimal (i.e., within a polynomial of Rao’s lower bound), leaving open the question of an algorithmic construction. We give the first explicit, deterministic, algorithmic construction of orthogonal arrays achieving near-optimal size for all parameters. Our construction uses algebraic geometry codes. In pseudorandomness, the notions of t-independent generators or t-independent hash functions are equivalent to orthogonal arrays. Classical constructions of t-independent hash functions are known when the size of the codomain is a prime power, but very few constructions are known for an arbitrary codomain. Our construction yields algorithmically efficient t-independent hash functions for arbitrary domain and codomain.
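
For contrast, the classical prime-field construction mentioned above takes a few lines: a uniformly random polynomial of degree less than t over F_p, evaluated at the input, is a t-independent hash family whenever the codomain size p is a prime (power); the paper's contribution is removing this restriction on the codomain. A sketch:

    import random

    def make_t_independent_hash(t, p):
        # A uniformly random polynomial of degree < t over the prime field F_p:
        # its values at any t distinct points are uniform and independent.
        coeffs = [random.randrange(p) for _ in range(t)]
        def h(x):
            acc = 0
            for c in reversed(coeffs):              # Horner evaluation mod p
                acc = (acc * x + c) % p
            return acc
        return h

    h = make_t_independent_hash(t=4, p=101)         # 4-independent, codomain [101]
    print([h(x) for x in range(8)])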

Publisher's Version
Tree Evaluation Is in Space O(log n · log log n)
James Cook and Ian Mertz
(Unaffiliated, Canada; University of Warwick, United Kingdom)
The Tree Evaluation Problem (TreeEval) (Cook et al. 2009) is a central candidate for separating polynomial time (P) from logarithmic space (L) via composition. While space lower bounds of Ω(log^2 n) are known for multiple restricted models, it was recently shown by Cook and Mertz (2020) that TreeEval can be solved in space O(log^2 n / log log n). Thus its status as a candidate hard problem for L remains a mystery. Our main result is to improve the space complexity of TreeEval to O(log n · log log n), thus greatly strengthening the case that Tree Evaluation is in fact in L. We show two consequences of these results. First, we show that the KRW conjecture (Karchmer, Raz, and Wigderson 1995) implies L ⊄ NC^1; this itself would have many implications, such as branching programs not being efficiently simulable by formulas. Our second consequence is to increase our understanding of amortized branching programs, also known as catalytic branching programs; we show that every function f on n bits can be computed by such a program of length poly(n) and width 2^{O(n)}.
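
The problem itself is easy to state in code: the input is a rooted binary tree whose leaves carry values in [k] and whose internal nodes carry functions [k]×[k] → [k], and the output is the root's value. The naive recursive evaluator below uses roughly O(height · log k) bits of stack space; the paper's point is that O(log n · log log n) total space suffices. (A toy sketch; the tree encoding is ours.)

    def tree_eval(node):
        # node is ('leaf', value) or ('op', f, left, right) with
        # f: (int, int) -> int. Plain recursion: the stack holds one partial
        # value per level, i.e. O(height * log k) bits of space.
        if node[0] == 'leaf':
            return node[1]
        _, f, left, right = node
        return f(tree_eval(left), tree_eval(right))

    add = lambda a, b: (a + b) % 3                  # an internal-node function
    leaf = lambda v: ('leaf', v)
    t = ('op', add, ('op', add, leaf(1), leaf(2)), leaf(2))
    print(tree_eval(t))                             # ((1+2)+2) mod 3 = 2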

Publisher's Version
Locality Bounds for Sampling Hamming Slices
Daniel M. Kane, Anthony Ostuni, and Kewen Wu
(University of California at San Diego, USA; University of California at Berkeley, USA)
Spurred by the influential work of Viola (SIAM Journal on Computing 2012), the past decade has witnessed an active line of research into the complexity of (approximately) sampling distributions, in contrast to the traditional focus on the complexity of computing functions.
We build upon and make explicit earlier implicit results of Viola to provide superconstant lower bounds on the locality of Boolean functions approximately sampling the uniform distribution over binary strings of particular Hamming weights, both exactly and modulo an integer, answering questions of Viola (SIAM Journal on Computing 2012) and Filmus, Leigh, Riazanov, and Sokolov (RANDOM 2023). Applications to data structure lower bounds and quantum-classical separations are discussed.
This is an extended abstract. The full paper can be found at https://arxiv.org/abs/2402.14278.

Publisher's Version
No Complete Problem for Constant-Cost Randomized Communication
Yuting Fang, Lianna Hambardzumyan, Nathaniel Harms, and Pooya Hatami
(Ohio State University, USA; Hebrew University of Jerusalem, Israel; EPFL, Lausanne, Switzerland)
We prove that the class of communication problems with public-coin randomized constant-cost protocols, called BPP0, does not contain a complete problem. In other words, there is no randomized constant-cost problem Q ∈ BPP0 such that all other problems P ∈ BPP0 can be computed by a constant-cost deterministic protocol with access to an oracle for Q. We also show that the k-Hamming Distance problems form an infinite hierarchy within BPP0. Previously, it was known only that Equality is not complete for BPP0. We introduce a new technique, using Ramsey theory, that can prove lower bounds against arbitrary oracles in BPP0, and more generally, we show that k-Hamming Distance matrices cannot be expressed as a Boolean combination of any constant number of matrices which forbid large Greater-Than subproblems.
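
The canonical example of a constant-cost problem in BPP0 is Equality: with shared public randomness, Alice sends the parity of her string against a random mask, and repeating a constant number of times drives the error down exponentially, at total cost independent of the input length. A sketch:

    import random

    def equality_protocol(x: str, y: str, reps: int = 10) -> bool:
        # Public-coin protocol for Equality on n-bit strings with O(reps)
        # bits of communication, independent of n. Always accepts x == y;
        # wrongly accepts x != y with probability 2**-reps.
        n = len(x)
        for _ in range(reps):
            r = [random.randrange(2) for _ in range(n)]   # shared randomness
            alice_bit = sum(int(x[i]) & r[i] for i in range(n)) % 2
            bob_bit = sum(int(y[i]) & r[i] for i in range(n)) % 2
            if alice_bit != bob_bit:
                return False
        return True

    print(equality_protocol("10110", "10110"))  # True
    print(equality_protocol("10110", "10111"))  # False with high probability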

Publisher's Version
Explicit Separations between Randomized and Deterministic Number-on-Forehead Communication
Zander Kelley, Shachar Lovett, and Raghu Meka
(University of Illinois at Urbana-Champaign, USA; University of California at San Diego, USA; University of California at Los Angeles, USA)
We study the power of randomness in the Number-on-Forehead (NOF) model in communication complexity. We construct an explicit 3-player function f: [N]^3 → {0,1}, such that: (i) there exists a randomized NOF protocol computing it that sends a constant number of bits; but (ii) any deterministic or nondeterministic NOF protocol computing it requires sending about (log N)^{1/3} many bits. This exponentially improves upon the previously best-known such separation. At the core of our proof is an extension of a recent result on sets of integers without 3-term arithmetic progressions into a non-arithmetic setting.

Publisher's Version

7D

An Area Law for the Maximally-Mixed Ground State in Arbitrarily Degenerate Systems with Good AGSP
Itai Arad, Raz Firanko, and Rahul Jain
(Centre for Quantum Technologies, Singapore; Technion, Israel; National University of Singapore, Singapore)
We show an area law in the mutual information for the maximally-mixed state Ω in the ground space of general Hamiltonians, which is independent of the underlying ground space degeneracy. Our result assumes the existence of a ‘good’ approximation to the ground state projector (a good AGSP), a crucial ingredient in former area-law proofs. Such approximations have been explicitly derived for 1D gapped local Hamiltonians and 2D frustration-free locally-gapped local Hamiltonians. As a corollary, we show that in 1D gapped local Hamiltonians, for any є>0 and any bipartition L : L^c of the system,
I_max^є(L : L^c)_Ω ≤ O(log(|L|) + log(1/є)),
where |L| represents the number of sites in L and I_max^є(L : L^c)_Ω represents the є-smoothed maximum mutual information with respect to the L : L^c partition in Ω. From this bound we then conclude I(L : L^c)_Ω ≤ O(log(|L|)) – an area law for the mutual information in 1D systems with a logarithmic correction. In addition, we show that Ω can be approximated up to an є in trace norm with a state of Schmidt rank of at most poly(|L|/є). Similar corollaries are derived for the mutual information of 2D frustration-free and locally-gapped local Hamiltonians.

Publisher's Version
Local Minima in Quantum Systems
Chi-Fang Chen, Hsin-Yuan Huang, John Preskill, and Leo Zhou
(California Institute of Technology, USA; AWS Center for Quantum Computing, USA; Google Quantum AI, USA; Massachusetts Institute of Technology, USA)
Finding ground states of quantum many-body systems is known to be hard for both classical and quantum computers. As a result, when Nature cools a quantum system in a low-temperature thermal bath, the ground state cannot always be found efficiently. Instead, Nature finds a local minimum of the energy. In this work, we study the problem of finding local minima in quantum systems under thermal perturbations. While local minima are much easier to find than ground states, we show that finding a local minimum is computationally hard for classical computers, even when the task is to output a single-qubit observable at any local minimum. In contrast, we prove that a quantum computer can always find a local minimum efficiently using a thermal gradient descent algorithm that mimics the cooling process in Nature. To establish the classical hardness of finding local minima, we consider a family of two-dimensional Hamiltonians such that any problem solvable by polynomial-time quantum algorithms can be reduced to finding local minima of these Hamiltonians. Therefore, cooling systems to local minima is universal for quantum computation, and, assuming quantum computation is more powerful than classical computation, finding local minima is classically hard and quantumly easy.

Publisher's Version
An Optimal Tradeoff between Entanglement and Copy Complexity for State Tomography
Sitan Chen, Jerry Li, and Allen Liu
(Harvard University, USA; Microsoft Research, USA; Massachusetts Institute of Technology, USA)
There has been significant interest in understanding how practical constraints on contemporary quantum devices impact the complexity of quantum learning. For the classic question of tomography, recent work tightly characterized the copy complexity for any protocol that can only measure one copy of the unknown state at a time, showing it is polynomially worse than if one can make fully-entangled measurements. While we now have a fairly complete picture of the rates for such tasks in the near-term and fault-tolerant regimes, it remains poorly understood what the landscape in between these extremes looks like, and in particular how to gracefully scale up our protocols as we transition away from NISQ. In this work, we study tomography in the natural setting where one can make measurements of t copies at a time. For sufficiently small є, we show that for any t ≤ d^2, Θ(d^3/(√t·є^2)) copies are necessary and sufficient to learn an unknown d-dimensional state ρ to trace distance є. This gives a smooth and optimal interpolation between the known rates for single-copy measurements and fully-entangled measurements. To our knowledge, this is the first smooth entanglement-copy tradeoff known for any quantum learning task, and for tomography, no intermediate point on this curve was known, even at t = 2. An important obstacle is that unlike the optimal single-copy protocol, the optimal fully-entangled protocol is inherently a biased estimator. This bias precludes naive batching approaches for interpolating between the two protocols. Instead, we devise a novel two-stage procedure that uses Keyl’s algorithm to refine a crude estimate for ρ based on single-copy measurements. A key insight is to use Schur-Weyl sampling not to estimate the spectrum of ρ, but to estimate the deviation of ρ from the maximally mixed state. When ρ is far from the maximally mixed state, we devise a novel quantum splitting procedure that reduces to the case where ρ is close to maximally mixed.

Publisher's Version
Learning Shallow Quantum Circuits
Hsin-Yuan Huang, Yunchao Liu, Michael Broughton, Isaac Kim, Anurag Anshu, Zeph Landau, and Jarrod R. McClean
(California Institute of Technology, USA; Google Quantum AI, USA; University of California at Berkeley, USA; University of California at Davis, USA; Harvard University, USA)
Despite fundamental interest in learning quantum circuits, the existence of a computationally efficient algorithm for learning shallow quantum circuits remains an open question. Because shallow quantum circuits can generate distributions that are classically hard to sample from, existing learning algorithms do not apply. In this work, we present a polynomial-time classical algorithm for learning the description of any unknown n-qubit shallow quantum circuit U (with arbitrary unknown architecture) within a small diamond distance using single-qubit measurement data on the output states of U. We also provide a polynomial-time classical algorithm for learning the description of any unknown n-qubit state |ψ⟩ = U|0^n⟩ prepared by a shallow quantum circuit U (on a 2D lattice) within a small trace distance using single-qubit measurements on copies of |ψ⟩. Our approach uses a quantum circuit representation based on local inversions and a technique to combine these inversions. This circuit representation yields an optimization landscape that can be efficiently navigated and enables efficient learning of quantum circuits that are classically hard to simulate.

Publisher's Version
Improved Stabilizer Estimation via Bell Difference Sampling
Sabee Grewal, Vishnu Iyer, William Kretschmer, and Daniel Liang
(University of Texas at Austin, USA; Simons Institute for the Theory of Computing, Berkeley, USA; Rice University, USA)
We study the complexity of learning quantum states in various models with respect to the stabilizer formalism and obtain the following results: We prove that Ω(n) T-gates are necessary for any Clifford+T circuit to prepare computationally pseudorandom quantum states, an exponential improvement over the previously known bound. This bound is asymptotically tight if linear-time quantum-secure pseudorandom functions exist. Given an n-qubit pure quantum state |ψ⟩ that has fidelity at least τ with some stabilizer state, we give an algorithm that outputs a succinct description of a stabilizer state that witnesses fidelity at least τ − ε. The algorithm uses O(n/(ε^2·τ^4)) samples and exp(O(n/τ^4))/ε^2 time. In the regime of τ constant, this algorithm estimates stabilizer fidelity substantially faster than the naive exp(O(n^2))-time brute-force algorithm over all stabilizer states. In the special case of τ > cos^2(π/8), we show that a modification of the above algorithm runs in polynomial time. We exhibit a tolerant property testing algorithm for stabilizer states. The underlying algorithmic primitive in all of our results is Bell difference sampling. To prove our results, we establish and/or strengthen connections between Bell difference sampling, symplectic Fourier analysis, and graph theory.

Publisher's Version

8A

Computing a Fixed Point of Contraction Maps in Polynomial Queries
Xi Chen, Yuhao Li, and Mihalis Yannakakis
(Columbia University, USA)
We give an algorithm for finding an є-fixed point of a contraction map f: [0,1]^k → [0,1]^k under the ℓ∞-norm with query complexity O(k^2·log(1/є)).
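
The one-dimensional case gives the flavor of the query bound: for a contraction f on [0,1], g(x) = f(x) − x is strictly decreasing, so binary search on its sign finds an approximate fixed point with O(log(1/є)) queries; the content of the paper is the k-dimensional ℓ∞ generalization with only a k^2 factor overhead. A sketch of the k = 1 case (ours):

    def fixed_point_1d(f, eps):
        # Binary search for an (approximate) fixed point of a contraction
        # f: [0,1] -> [0,1]. Since f(x) - x is strictly decreasing for a
        # contraction, its sign says which half contains the fixed point.
        # Uses O(log(1/eps)) queries to f.
        lo, hi = 0.0, 1.0
        while hi - lo > eps:
            mid = (lo + hi) / 2
            if f(mid) >= mid:
                lo = mid                 # fixed point lies to the right
            else:
                hi = mid
        return (lo + hi) / 2

    print(fixed_point_1d(lambda t: 0.5 * t + 0.3, eps=1e-6))   # ~0.6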

Publisher's Version
Self-Improvement for Circuit-Analysis Problems
R. Ryan Williams
(Massachusetts Institute of Technology, USA)
Many results in fine-grained complexity reveal intriguing consequences from solving various SAT problems even slightly faster than exhaustive search. We prove a “self-improving” (or “bootstrapping”) theorem for Circuit-SAT, #Circuit-SAT, and its fully-quantified version: solving one of these problems faster for “large” circuit sizes implies a significant speed-up for “smaller” circuit sizes. Our general arguments work for a variety of models solving circuit-analysis problems, including non-uniform circuits and randomized models of computation.
We derive striking consequences for the complexities of these problems, in both the fine-grained and parameterized settings. For example, we show that certain fine-grained improvements on the runtime exponents of polynomial-time versions of Circuit-SAT would imply subexponential-time algorithms for Circuit-SAT on 2^{o(n)}-size circuits, refuting the Exponential Time Hypothesis. We also show that any algorithm for Circuit-SAT with k inputs and n gates running in 1000000^k + n^{1+ε} time (for all ε > 0) would imply algorithms running in (1+ε)^k + n^{1+ε} time (for all ε > 0), also refuting the Exponential Time Hypothesis. Applying our ideas in the #Circuit-SAT setting, we prove new unconditional lower bounds against uniform circuits with symmetric gates for functions in deterministic linear time.

Publisher's Version
Formula Size-Depth Tradeoffs for Iterated Sub-permutation Matrix Multiplication
Benjamin Rossman
(Duke University, USA)
Iterated Sub-Permutation Matrix Multiplication is the problem of computing the product of k n-by-n Boolean matrices with at most a single 1 in each row and column. For all d ≤ log k, this problem is solvable by size n^{O(d·k^{1/d})} monotone AC0 formulas of depth d+1, as well as semi-unbounded fan-in “SAC0” formulas of ∧-depth d and ∧-fan-in O(k^{1/d}). In this paper, we prove matching n^{Ω(d·k^{1/d})} lower bounds for monotone AC0 and SAC0 formulas for all k ≤ log log n, and slightly weaker n^{Ω(d·k^{1/2d})} lower bounds for non-monotone AC0 and SAC0 formulas. These size-depth tradeoffs converge at d = log k to known asymptotically tight n^{Ω(log k)} lower bounds for both unbounded-depth monotone formulas and bounded-depth non-monotone formulas. Our lower bounds for non-monotone formulas extend to the Iterated Permutation Matrix Multiplication problem, improving the previous best known n^{k^{exp(−O(d))}} tradeoff.
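
A Boolean matrix with at most a single 1 in each row and column encodes a partial injection on [n], and the iterated product simply composes these partial maps, killing any index whose chain breaks. A direct sketch of the function being computed (the encoding is ours):

    def multiply(ms):
        # Each matrix is a dict {row: col} with at most one 1 per row and per
        # column, i.e. a partial injection on [n]. The product has a 1 at
        # (i, j) iff following the chain from i survives and ends at j.
        def step(f, g):
            return {i: g[f[i]] for i in f if f[i] in g}
        prod = ms[0]
        for m in ms[1:]:
            prod = step(prod, m)
        return prod

    m1 = {0: 1, 1: 0}           # swaps indices 0 and 1
    m2 = {1: 2}                 # partial: only row 1 carries a 1 (column 2)
    print(multiply([m1, m2]))   # {0: 2}; the chain from row 1 dies at m2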

Publisher's Version
Functional Lower Bounds in Algebraic Proofs: Symmetry, Lifting, and Barriers
Tuomas Hakoniemi, Nutan Limaye, and Iddo Tzameret
(University of Helsinki, Helsinki, Finland; IT University of Copenhagen, Copenhagen, Denmark; Imperial College London, United Kingdom)
Strong algebraic proof systems such as IPS (Ideal Proof System; Grochow-Pitassi [J. ACM, 65(6):37:1–55, 2018]) offer a general model for deriving polynomials in an ideal and refuting unsatisfiable propositional formulas, subsuming most standard propositional proof systems. A major approach for lower bounding the size of IPS refutations is the Functional Lower Bound Method (Forbes, Shpilka, Tzameret and Wigderson [Theory Comput., 17: 1-88, 2021]), which reduces the hardness of refuting a polynomial equation f(x)=0 with no Boolean solutions to the hardness of computing the function 1/f(x) over the Boolean cube with an algebraic circuit. Using symmetry we provide a general way to obtain many new hard instances against fragments of IPS via the functional lower bound method. This includes hardness over finite fields and hard instances different from Subset Sum variants both of which were unknown before, and stronger constant-depth lower bounds. Conversely, we expose the limitation of this method by showing it cannot lead to proof complexity lower bounds for any hard Boolean instance (e.g., CNFs) for any sufficiently strong proof systems. Specifically, we show the following:
Nullstellensatz degree lower bounds using symmetry: Extending [Forbes et al. Theory Comput., 17: 1-88, 2021] we show that every unsatisfiable symmetric polynomial with n variables requires degree > n refutations (over sufficiently large characteristic). Using symmetry again, by characterising the n/2-homogeneous slice appearing in refutations, we show that unsatisfiable invariant polynomials of degree n/2 require degree ≥ n refutations.

Lifting to size lower bounds: Lifting our Nullstellensatz degree bounds to IPS-size lower bounds, we obtain exponential lower bounds for any poly-logarithmic degree symmetric instance against IPS refutations written as oblivious read-once algebraic programs (roABP-IPS). For invariant polynomials, we show lower bounds against roABP-IPS and refutations written as multilinear formulas in the placeholder IPS regime (studied by Andrews and Forbes [54th Ann. Symp. Theory Comput., STOC 2022]), where the hard instances do not necessarily have small roABPs themselves, including over positive characteristic fields. This provides the first IPS-fragment lower bounds over finite fields. By an adaptation of the work of Amireddy, Garg, Kayal, Saha and Thankey [50th Intl. Colloq. Aut. Lang. Prog., ICALP 2023], we strengthen the constant-depth IPS lower bounds obtained recently in Govindasamy, Hakoniemi and Tzameret [63rd IEEE Ann. Symp. Found. Comput. Sci., FOCS 2022].

Barriers for Boolean instances: While lower bounds against strong propositional proof systems were the original motivation for studying algebraic proof systems in the 1990s [Beame et al. Proc. London Math. Soc. (3) 73, 1 (1996), 1–26; Buss et al. Computational Complexity 6, 3 (1996), 256–298], we show that the functional lower bound method alone cannot establish any size lower bound for Boolean instances for any sufficiently strong proof systems, and in particular, cannot lead to lower bounds against AC0[p]-Frege and TC0-Frege.

Publisher's Version
Black-Box PPP Is Not Turing-Closed
Noah Fleming, Stefan Grosser, Toniann Pitassi, and Robert Robere
(Memorial University of Newfoundland, Canada; McGill University, Canada; Columbia University, USA)
The complexity class PPP contains all total search problems many-one reducible to the Pigeon problem, where we are given a succinct encoding of a function mapping n+1 pigeons to n holes, and must output two pigeons that collide in a hole. PPP is one of the “original five” syntactically-defined subclasses of TFNP, and has been extensively studied due to the strong connections between its defining problem — the pigeonhole principle — and problems in cryptography, extremal combinatorics, proof complexity, and other fields. However, despite its importance, PPP appears to be less robust than the other important TFNP subclasses. In particular, unlike all other major TFNP subclasses, it was conjectured by Buss and Johnson that PPP is not closed under Turing reductions, and they called for a black-box separation in order to provide evidence for this conjecture. The question of whether PPP contains its Turing closure was further highlighted by Daskalakis in his recent IMU Abacus Medal Lecture. In this work we prove that PPP is indeed not Turing-closed in the black-box setting, affirmatively resolving the above conjecture and providing strong evidence that PPP is not Turing-closed. In fact, we are able to separate PPP from its non-adaptive Turing closure, in which all calls to the Pigeon oracle must be made in parallel. This differentiates PPP from all other important TFNP subclasses, and especially from its closely-related subclass PWPP — defined by reducibility to the weak pigeonhole principle — which is known to be non-adaptively Turing-closed. Our proof requires developing new tools for PPP lower bounds, and creates new connections between PPP and the theory of pseudoexpectation operators used for Sherali-Adams and Sum-of-Squares lower bounds. In particular, we introduce a new type of pseudoexpectation operator that is precisely tailored for lower bounds against black-box PPP, which may be of independent interest.

Publisher's Version

8B

Product Mixing in Compact Lie Groups
David Ellis, Guy Kindler, Noam Lifshitz, and Dor Minzer
(University of Bristol, United Kingdom; Hebrew University of Jerusalem, Israel; Massachusetts Institute of Technology, USA)
If G is a group, we say a subset S of G is product-free if the equation xy = z has no solutions with x, y, z ∈ S. In 1985, Babai and Sós asked, for a finite group G, how large a subset S ⊆ G can be if it is product-free. The main tool (hitherto) for studying this problem has been the notion of a quasirandom group. For D ∈ ℕ, a group G is said to be D-quasirandom if the minimal dimension of a nontrivial complex irreducible representation of G is at least D. Gowers showed that in a D-quasirandom finite group G, the maximal size of a product-free set is at most |G|/D^{1/3}. This disproved a longstanding conjecture of Babai and Sós from 1985. For the special unitary group G = SU(n), Gowers observed that his argument yields an upper bound of n^{−1/3} on the measure of a measurable product-free subset. In this paper, we improve Gowers’ upper bound to exp(−c·n^{1/3}), where c>0 is an absolute constant. In fact, we establish something stronger, namely, product-mixing for measurable subsets of SU(n) with measure at least exp(−c·n^{1/3}); for this product-mixing result, the n^{1/3} in the exponent is sharp. Our approach involves introducing novel hypercontractive inequalities, which imply that the non-Abelian Fourier spectrum of the indicator function of a small set concentrates on high-dimensional irreducible representations. Our hypercontractive inequalities are obtained via methods from representation theory, harmonic analysis, random matrix theory and differential geometry. We generalize our hypercontractive inequalities from SU(n) to an arbitrary D-quasirandom compact connected Lie group for D at least an absolute constant, thereby extending our results on product-free sets to such groups. We also demonstrate various other applications of our inequalities to geometry (viz., non-Abelian Brunn-Minkowski type inequalities), mixing times, and the theory of growth in compact Lie groups. A subsequent work due to Arunachalam, Girish and Lifshitz uses our inequalities to establish new separation results between classical and quantum communication complexity.

Publisher's Version
On Approximability of Satisfiable k-CSPs: IV
Amey Bhangale, Subhash Khot, and Dor Minzer
(University of California at Riverside, USA; New York University, USA; Massachusetts Institute of Technology, USA)
We prove a stability result for general 3-wise correlations over distributions satisfying mild connectivity properties. More concretely, we show that if Σ, Γ and Φ are alphabets of constant size, and µ is a distribution over Σ×Γ×Φ satisfying: (1) the probability of each atom is at least Ω(1), (2) µ is pairwise connected, and (3) µ has no Abelian embeddings into (ℤ,+), then the following holds. Any triplet of 1-bounded functions f: Σ^n → ℂ, g: Γ^n → ℂ, h: Φ^n → ℂ satisfying
|E_{(x,y,z) ∼ µ^{⊗n}}[f(x)·g(y)·h(z)]| ≥ є
must arise from an Abelian group associated with the distribution µ. More specifically, we show that there is an Abelian group (H,+) of constant size such that for any such f, g and h, the function f (and similarly g and h) is correlated with a function of the form f′(x) = χ(σ(x_1),…,σ(x_n))·L(x), where σ: Σ → H is some map, χ ∈ Ĥ^n is a character, and L: Σ^n → ℂ is a low-degree function with bounded 2-norm.
En route we prove a few additional results that may be of independent interest, such as an improved direct product theorem, as well as a result we refer to as a “restriction inverse theorem” about the structure of functions that, under random restrictions, with noticeable probability have significant correlation with a product function.
In companion papers, we show applications of our results to the fields of Probabilistically Checkable Proofs, as well as various areas in discrete mathematics such as extremal combinatorics and additive combinatorics.

Publisher's Version
Probabilistically Checkable Reconfiguration Proofs and Inapproximability of Reconfiguration Problems
Shuichi Hirahara and Naoto Ohsaka
(National Institute of Informatics, Tokyo, Japan; CyberAgent, Japan)
Motivated by the inapproximability of reconfiguration problems, we present a new PCP-type characterization of PSPACE, which we call a probabilistically checkable reconfiguration proof (PCRP): Any PSPACE computation can be encoded into an exponentially long sequence of polynomially long proofs such that every adjacent pair of the proofs differs in at most one bit, and every proof can be probabilistically checked by reading a constant number of bits.
Using the new characterization, we prove PSPACE-completeness of approximate versions of many reconfiguration problems, such as the Maxmin 3-SAT Reconfiguration problem. This resolves the open problem posed by Ito, Demaine, Harvey, Papadimitriou, Sideri, Uehara, and Uno (ISAAC 2008; Theor. Comput. Sci. 2011) as well as the Reconfiguration Inapproximability Hypothesis by Ohsaka (STACS 2023) affirmatively. We also present PSPACE-completeness of approximating the Maxmin Clique Reconfiguration problem to within a factor of n^ε for some constant ε > 0.

Publisher's Version
Cosystolic Expansion of Sheaves on Posets with Applications to Good 2-Query Locally Testable Codes and Lifted Codes
Uriya A. First and Tali Kaufman
(University of Haifa, Israel; Bar-Ilan University, Israel)
We show that cosystolic expansion of sheaves on posets can be derived from local expansion conditions of the sheaf and the poset. When the poset at hand is a cell complex — typically a high dimensional expander — a sheaf may be thought of as generalizing coefficient groups used for defining homology and cohomology, by letting the coefficient group vary along the cell complex. Previous works established local criteria for cosystolic expansion only for simplicial complexes and with respect to constant coefficients. Our main technical contribution is providing a criterion that is more general in two ways: it applies to posets rather than only simplicial complexes, and to sheaves rather than only constant coefficients.
The importance of working with sheaves on posets (rather than constant coefficients and simplicial complexes) stems from applications to locally testable codes (LTCs). It has been observed by Kaufman–Lubotzky that cosystolic expansion is related to property testing in the context of simplicial complexes and constant coefficients, but unfortunately, this special case does not give rise to interesting LTCs. We observe that this relation also exists in the much more general setting of sheaves on posets. As the language of sheaves is more expressive, it allows us to put this relation to use. Specifically, we apply our criterion for cosystolic expansion in two ways.
First, we show the existence of good 2-query LTCs. These codes are actually related to the recent good q-query LTCs of Dinur–Evra–Livne–Lubotzky–Mozes and Panteleev–Kalachev, being the so-called line codes of those constructions, but we get them from a new, more illuminating perspective. By realizing these codes as cocycle codes of sheaves on posets, we can derive their good properties directly from our criterion for cosystolic expansion. The local expansion conditions that our criterion requires unfold to the conditions on the “small codes” in Dinur et al. and Panteleev–Kalachev, and hence give a conceptual explanation of why conditions such as agreement testability are required.
Second, we show that local testability of a lifted code could be derived solely from local conditions, namely from agreement expansion properties of the local “small” codes which define it. In a work of Dikstein–Dinur–Harsha–Ron-Zewi, it was shown that one can obtain local testability of lifted codes from a mixture of local and global conditions, namely, from local testability of the local codes and global agreement expansion of an auxiliary 3-layer system called a multilayered agreement sampler. Our result achieves the same, but using genuinely local conditions and a simpler 3-layer structure. It is derived neatly from our local criterion for cosystolic expansion, by interpreting the situation in the language of sheaves on posets.

Publisher's Version
Randomly Punctured Reed–Solomon Codes Achieve List-Decoding Capacity over Linear-Sized Fields
Omar Alrabiah, Venkatesan Guruswami, and Ray Li
(University of California at Berkeley, USA; Santa Clara University, USA)
Reed–Solomon codes are a classic family of error-correcting codes consisting of evaluations of low-degree polynomials over a finite field on some sequence of distinct field elements. They are widely known for their optimal unique-decoding capabilities, but their list-decoding capabilities are not fully understood. Given the prevalence of Reed–Solomon codes, a fundamental question in coding theory is determining if Reed–Solomon codes can optimally achieve list-decoding capacity. A recent breakthrough by Brakensiek, Gopi, and Makam established that Reed–Solomon codes are combinatorially list-decodable all the way to capacity. However, their results hold for randomly-punctured Reed–Solomon codes over an exponentially large field size 2^{O(n)}, where n is the block length of the code. A natural question is whether Reed–Solomon codes can still achieve capacity over smaller fields. Recently, Guo and Zhang showed that Reed–Solomon codes are list-decodable to capacity with field size O(n^2). We show that Reed–Solomon codes are list-decodable to capacity with linear field size O(n), which is optimal up to the constant factor. We also give evidence that the ratio between the alphabet size q and code length n cannot be bounded by an absolute constant. Our techniques also show that random linear codes are list-decodable up to (the alphabet-independent) capacity with optimal list-size O(1/ε) and near-optimal alphabet size 2^{O(1/ε^2)}, where ε is the gap to capacity. As far as we are aware, list-decoding up to capacity with optimal list-size O(1/ε) was not known to be achievable with any linear code over a constant alphabet size (even non-constructively), and it was also not known to be achievable for random linear codes over any alphabet size. Our proofs are based on the ideas of Guo and Zhang, and we additionally exploit symmetries of reduced intersection matrices. With our proof, which maintains a hypergraph perspective of the list-decoding problem, we include an alternate presentation of ideas from Brakensiek, Gopi, and Makam that more directly connects the list-decoding problem to the GM-MDS theorem via a hypergraph orientation theorem.

Publisher's Version

8C

Learning Quantum Hamiltonians at Any Temperature in Polynomial Time
Ainesh Bakshi, Allen Liu, Ankur Moitra, and Ewin Tang
(Massachusetts Institute of Technology, USA; University of California at Berkeley, USA)
We study the problem of learning a local quantum Hamiltonian H given copies of its Gibbs state ρ = e^{−βH}/tr(e^{−βH}) at a known inverse temperature β>0. Anshu, Arunachalam, Kuwahara, and Soleimanifar gave an algorithm to learn a Hamiltonian on n qubits to precision є with only polynomially many copies of the Gibbs state, but which takes exponential time. Obtaining a computationally efficient algorithm has been a major open problem, with prior work only resolving this in the limited cases of high temperature or commuting terms. We fully resolve this problem, giving a polynomial time algorithm for learning H to precision є from polynomially many copies of the Gibbs state at any constant β > 0. Our main technical contribution is a new flat polynomial approximation to the exponential function, and a translation between multi-variate scalar polynomials and nested commutators. This enables us to formulate Hamiltonian learning as a polynomial system. We then show that solving a low-degree sum-of-squares relaxation of this polynomial system suffices to accurately learn the Hamiltonian.

Publisher's Version
An Efficient Quantum Parallel Repetition Theorem and Applications
John Bostanci, Luowen Qian, Nicholas Spooner, and Henry Yuen
(Columbia University, USA; Boston University, USA; University of Warwick, United Kingdom; New York University, USA)
We prove a tight parallel repetition theorem for 3-message computationally-secure quantum interactive protocols between an efficient challenger and an efficient adversary. We also prove under plausible assumptions that the security of 4-message computationally secure protocols does not generally decrease under parallel repetition. These mirror the classical results of Bellare, Impagliazzo, and Naor. Finally, we prove that all quantum argument systems can be generically compiled to an equivalent 3-message argument system, mirroring the transformation for quantum proof systems. As immediate applications, we show how to derive hardness amplification theorems for quantum bit commitment schemes (answering a question of Yan), EFI pairs (answering a question of Brakerski, Canetti, and Qian), public-key quantum money schemes (answering a question of Aaronson and Christiano), and quantum zero-knowledge argument systems. We also derive an XOR lemma for quantum predicates as a corollary.

Publisher's Version
The Power of Adaptivity in Quantum Query Algorithms
Uma Girish, Makrand Sinha, Avishay Tal, and Kewen Wu
(Princeton University, USA; University of Illinois at Urbana-Champaign, USA; University of California at Berkeley, USA)
Motivated by limitations on the depth of near-term quantum devices, we study the depth-computation trade-off in the query model, where depth corresponds to the number of adaptive query rounds and the computation per layer corresponds to the number of parallel queries per round. We achieve the strongest known separation between quantum algorithms with r versus r−1 rounds of adaptivity. We do so by using the k-fold Forrelation problem introduced by Aaronson and Ambainis (SICOMP’18). For k = 2r, this problem can be solved using an r-round quantum algorithm with only one query per round, yet we show that any (r−1)-round quantum algorithm needs an exponential (in the number of qubits) number of parallel queries per round. Our results are proven following the Fourier analytic machinery developed in recent works on quantum-classical separations. The key new components in our result are bounds on the Fourier weights of quantum query algorithms with a bounded number of rounds of adaptivity. These may be of independent interest, as they distinguish the polynomials that arise from such algorithms from arbitrary bounded polynomials of the same degree.

Publisher's Version
On the Pauli Spectrum of QAC0
Shivam Nadimpalli, Natalie Parham, Francisca Vasconcelos, and Henry Yuen
(Columbia University, USA; University of California at Berkeley, USA)
The circuit class QAC0 was introduced by Moore (1999) as a model for constant depth quantum circuits where the gate set includes many-qubit Toffoli gates. Proving lower bounds against such circuits is a longstanding challenge in quantum circuit complexity; in particular, showing that polynomial-size QAC0 cannot compute the parity function has remained an open question for over 20 years. In this work, we identify a notion of the Pauli spectrum of QAC0 circuits, which can be viewed as the quantum analogue of the Fourier spectrum of classical AC0 circuits. We conjecture that the Pauli spectrum of QAC0 circuits satisfies low-degree concentration, in analogy to the famous Linial, Mansour, Nisan (LMN) theorem on the low-degree Fourier concentration of AC0 circuits. If true, this conjecture immediately implies that polynomial-size QAC0 circuits cannot compute parity. We prove this conjecture for the class of depth-d, polynomial-size QAC0 circuits with at most n^{O(1/d)} auxiliary qubits. We obtain new circuit lower bounds and learning results as applications: this class of circuits cannot correctly compute the n-bit parity function on more than a (1/2 + 2^{−Ω(n^{1/d})})-fraction of inputs, and the n-bit majority function on more than a (1/2 + O(n^{−1/4}))-fraction of inputs. Additionally, we show that this class of QAC0 circuits with limited auxiliary qubits can be learned with quasipolynomial sample complexity, giving the first learning result for QAC0 circuits. More broadly, our results add evidence that “Pauli-analytic” techniques can be a powerful tool in studying quantum circuits.

Publisher's Version
Approaching the Quantum Singleton Bound with Approximate Error Correction
Thiago Bergamaschi, Louis Golowich, and Sam Gunn
(University of California at Berkeley, USA)
It is well known that no quantum error correcting code of rate R can correct adversarial errors on more than a (1−R)/4 fraction of symbols. But what if we only require our codes to approximately recover the message?
In this work, we construct efficiently-decodable approximate quantum codes against adversarial error rates approaching the quantum Singleton bound of (1−R)/2, for any constant rate R. Specifically, for every R ∈ (0,1) and γ>0, we construct codes of rate R, message length k, and alphabet size 2^{O(1/γ^5)}, that are efficiently decodable against a (1−R−γ)/2 fraction of adversarial errors and recover the message up to inverse-exponential error 2^{−Ω(k)}.
At a technical level, we use classical robust secret sharing and quantum purity testing to reduce approximate quantum error correction to a suitable notion of quantum list decoding. We then instantiate our notion of quantum list decoding by (i) introducing folded quantum Reed-Solomon codes, and (ii) applying a new, quantum version of distance amplification.

Publisher's Version

8D

Counting Small Induced Subgraphs with Edge-Monotone Properties
Simon Döring, Dániel Marx, and Philip Wellnitz
(MPI-INF, Germany; Saarbrücken Graduate School of Computer Science, Saarbrücken, Germany; Saarland Informatics Campus, Saarbrücken, Germany; CISPA Helmholtz Center for Information Security, Germany)
We study the parameterized complexity of #IndSub(Φ), where given a graph G and an integer k, the task is to count the number of induced subgraphs on k vertices that satisfy the graph property Φ. Focke and Roth [STOC 2022] completely characterized the complexity for each Φ that is a hereditary property (that is, closed under vertex deletions): #IndSub(Φ) is #W[1]-hard except in the degenerate cases when every graph satisfies Φ or only finitely many graphs satisfy Φ. We complement this result with a classification for each Φ that is edge-monotone (that is, closed under edge deletions): #IndSub(Φ) is #W[1]-hard except in the degenerate case when there are only finitely many integers k such that Φ is nontrivial on k-vertex graphs. Our result generalizes earlier results for specific properties Φ that are related to the connectivity or density of the graph. Further, we extend the #W[1]-hardness result by a lower bound which shows that #IndSub(Φ) cannot be solved in time f(k) · |V(G)|^{o(√(log k) / log log k)} for any function f, unless the Exponential-Time Hypothesis (ETH) fails. For many natural properties, we even obtain a tight bound of f(k) · |V(G)|^{o(k)}; for example, this is the case for every property Φ that is nontrivial on k-vertex graphs for each k greater than some k_0.
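
Concretely, the brute-force algorithm for #IndSub(Φ) enumerates all k-subsets of vertices and tests Φ on each induced subgraph, taking |V(G)|^{O(k)} time; that baseline is what the lower bounds above are measured against. A sketch (ours) with the edge-monotone property Φ = bipartiteness, which is closed under edge deletions:

    from itertools import combinations

    def is_bipartite(vertices, edges):
        # Edge-monotone property: deleting edges preserves bipartiteness.
        color = {}
        for s in vertices:
            if s in color:
                continue
            color[s] = 0
            stack = [s]
            while stack:
                u = stack.pop()
                for a, b in edges:
                    if u in (a, b):
                        v = b if u == a else a
                        if v not in color:
                            color[v] = 1 - color[u]
                            stack.append(v)
                        elif color[v] == color[u]:
                            return False
        return True

    def count_ind_sub(phi, k, vertices, edges):
        # Brute-force #IndSub(phi): enumerate all k-subsets and test phi on
        # the induced subgraph -- |V(G)|^{O(k)} time.
        count = 0
        for sub in combinations(vertices, k):
            s = set(sub)
            induced = [e for e in edges if set(e) <= s]
            if phi(sub, induced):
                count += 1
        return count

    V = [0, 1, 2, 3]
    E = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]     # a 4-cycle plus a chord
    print(count_ind_sub(is_bipartite, 3, V, E))      # 2 bipartite 3-subsets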

Publisher's Version
Near-Optimal Streaming Ellipsoidal Rounding for General Convex Polytopes
Yury Makarychev, Naren Sarayu Manoj, and Max Ovsiankin
(Toyota Technological Institute, Chicago, USA)
We give near-optimal algorithms for computing an ellipsoidal rounding of a convex polytope whose vertices are given in a stream. The approximation factor is linear in the dimension (as in John's theorem) and only loses an excess logarithmic factor in the aspect ratio of the polytope. Our algorithms are nearly optimal in two senses: first, their runtimes nearly match those of the most efficient known algorithms for the offline version of the problem. Second, their approximation factors nearly match a lower bound we show against a natural class of geometric streaming algorithms. In contrast to existing works in the streaming setting that compute ellipsoidal roundings only for centrally symmetric convex polytopes, our algorithms apply to general convex polytopes. We also show how to use our algorithms to construct coresets from a stream of points that approximately preserve both the ellipsoidal rounding and the convex hull of the original set of points.

Publisher's Version
Almost-Linear Time Parameterized Algorithm for Rankwidth via Dynamic Rankwidth
Tuukka Korhonen and Marek Sokołowski
(University of Bergen, Norway; University of Warsaw, Poland)
We give an algorithm that, given a graph G with n vertices and m edges and an integer k, in time Ok(n1+o(1)) + O(m) either outputs a rank decomposition of G of width at most k or determines that the rankwidth of G is larger than k; the Ok(·)-notation hides factors depending on k. Our algorithm also returns a (2k+1−1)-expression for cliquewidth, yielding a (2k+1−1)-approximation algorithm for cliquewidth with the same running time. This improves upon the Ok(n2) time algorithm of Fomin and Korhonen [STOC 2022].
The main ingredient of our algorithm is a fully dynamic algorithm for maintaining rank decompositions of bounded width: We give a data structure that for a dynamic n-vertex graph G that is updated by edge insertions and deletions maintains a rank decomposition of G of width at most 4k under the promise that the rankwidth of G never grows above k. The amortized running time of each update is Ok(2√logn loglogn). The data structure furthermore can maintain whether G satisfies some fixed CMSO1 property within the same running time. We also give a framework for performing “dense” edge updates inside a given set of vertices X, where the new edges inside X are described by a given CMSO1 sentence and vertex labels, in amortized Ok(|X| · 2√logn loglogn) time. Our dynamic algorithm generalizes the dynamic treewidth algorithm of Korhonen, Majewski, Nadara, Pilipczuk, and Sokołowski [FOCS 2023].

Publisher's Version
Flip-Breakability: A Combinatorial Dichotomy for Monadically Dependent Graph Classes
Jan Dreier, Nikolas Mählmann, and Szymon Toruńczyk
(TU Wien, Austria; University of Bremen, Bremen, Germany; University of Warsaw, Poland)
A conjecture in algorithmic model theory predicts that the model-checking problem for first-order logic is fixed-parameter tractable on a hereditary graph class if and only if the class is monadically dependent. Originating in model theory, this notion is defined in terms of logic, and encompasses nowhere dense classes, monadically stable classes, and classes of bounded twin-width. Working towards this conjecture, we provide the first two combinatorial characterizations of monadically dependent graph classes. This yields the following dichotomy. On the structure side, we characterize monadic dependence by a Ramsey-theoretic property called flip-breakability. This notion generalizes the notions of uniform quasi-wideness, flip-flatness, and bounded grid rank, which characterize nowhere denseness, monadic stability, and bounded twin-width, respectively, and played a key role in their respective model checking algorithms. Natural restrictions of flip-breakability additionally characterize bounded treewidth and cliquewidth, as well as bounded treedepth and shrubdepth. On the non-structure side, we characterize monadic dependence by explicitly listing a few families of forbidden induced subgraphs. This result is analogous to the characterization of nowhere denseness via forbidden subdivided cliques, and allows us to resolve one half of the motivating conjecture: First-order model checking is AW[*]-hard on every hereditary graph class that is monadically independent. The result moreover implies that hereditary graph classes which are small, have almost bounded twin-width, or have almost bounded flip-width, are monadically dependent. Lastly, we lift our result to also obtain a combinatorial dichotomy in the more general setting of monadically dependent classes of binary structures.

Publisher's Version
A Strongly Polynomial Algorithm for Linear Programs with At Most Two Nonzero Entries per Row or Column
Daniel Dadush, Zhuan Khye Koh, Bento Natura, Neil Olver, and László A. Végh
(CWI, Amsterdam, Netherlands; Georgia Institute of Technology, USA; London School of Economics, United Kingdom)
We give a strongly polynomial algorithm for minimum cost generalized flow, and hence for optimizing any linear program with at most two non-zero entries per row, or at most two non-zero entries per column. Primal and dual feasibility were shown by Végh (MOR ’17) and Megiddo (SICOMP ’83), respectively. Our result can be viewed as progress towards understanding whether all linear programs can be solved in strongly polynomial time, also referred to as Smale’s 9th problem. Our approach is based on the recent primal-dual interior point method (IPM) by Allamigeon, Dadush, Loho, Natura, and Végh (FOCS ’22). The number of iterations needed by the IPM is bounded, up to a polynomial factor in the number of inequalities, by the straight line complexity of the central path. Roughly speaking, this is the minimum number of pieces of any piecewise linear curve that multiplicatively approximates the central path. As our main contribution, we show that the straight line complexity of any minimum cost generalized flow instance is polynomial in the number of arcs and vertices. By applying a reduction of Hochbaum (ORL ’04), the same bound applies to any linear program with at most two non-zeros per column or per row. To be able to run the IPM, one requires a suitable initial point. For this purpose, we develop a novel multistage approach, where each stage can be solved in strongly polynomial time given the result of the previous stage. Beyond this, substantial work is needed to ensure that the bit complexity of each iterate remains bounded during the execution of the algorithm. For this purpose, we show that one can maintain a representation of the iterates as a low complexity convex combination of vertices and extreme rays. Our approach is black-box and can be applied to any log-barrier path-following method.
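The problem class itself is easy to write down. The sketch below (an invented three-node instance, solved with SciPy's weakly polynomial HiGHS backend rather than the paper's method) encodes minimum-cost generalized flow: each arc contributes exactly two nonzeros to its column of the flow-conservation matrix, −1 at the tail and +γ at the head for gain factor γ.

```python
# A tiny minimum-cost generalized flow LP (invented instance). Each column of
# the constraint matrix has exactly two nonzeros -- the structure for which the
# paper proves a strongly polynomial bound.
import numpy as np
from scipy.optimize import linprog

# Arcs: (tail, head, gain, cost, capacity).
arcs = [(0, 1, 1.0, 1.0, 4.0),
        (1, 2, 0.5, 1.0, 4.0),
        (0, 2, 1.0, 3.0, 4.0)]
n_nodes = 3

A = np.zeros((n_nodes, len(arcs)))
for j, (u, v, gain, _cost, _cap) in enumerate(arcs):
    A[u, j] = -1.0    # flow leaving the tail
    A[v, j] = gain    # flow arriving at the head, scaled by the gain factor

b = np.array([-2.0, 0.0, 2.0])   # ship 2 units from node 0 to node 2
c = np.array([arc[3] for arc in arcs])
res = linprog(c, A_eq=A, b_eq=b, bounds=[(0.0, arc[4]) for arc in arcs])
print(res.x, res.fun)
```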

Publisher's Version Info

9A (Best Student Papers)

Shaving Logs via Large Sieve Inequality: Faster Algorithms for Sparse Convolution and More
Ce Jin and Yinzhan Xu
(Massachusetts Institute of Technology, USA)
In sparse convolution-type problems, a common technique is to hash the input integers modulo a random prime p∈ [Q/2,Q] for some parameter Q, which reduces the range of the input integers while preserving their additive structure. However, this hash family suffers from two drawbacks, which led to bottlenecks in many state-of-the-art algorithms: (1) The collision probability of two elements from [N] is O(logN/Q) rather than O(1/Q); (2) It is difficult to derandomize the choice of p; known derandomization techniques lead to super-logarithmic overhead [Chan, Lewenstein STOC’15].
In this paper, we partially overcome these drawbacks in certain scenarios, via novel applications of the large sieve inequality from analytic number theory. Consequently, we obtain the following improved algorithms for various problems (in the standard word RAM model):
Sparse Nonnegative Convolution: We obtain an O(tlogt)-time Las Vegas algorithm that computes the convolution A ⋆ B of two nonnegative integer vectors A,B, where t is the output sparsity ||A ⋆ B||0. Moreover, our algorithm terminates in O(tlogt) time with 1−1/poly(t) probability. This simultaneously improves the O(tlogt loglogt)-time Las Vegas algorithm [Bringmann, Fischer, Nakos SODA’22] and the Monte Carlo O(tlogt)-time algorithm with failure probability 2−√logt [Bringmann, Fischer, Nakos STOC’21].
Text-to-Pattern Hamming Distances: Given a length-m pattern P and a length-n text T, we obtain an O(nmloglogm)-time deterministic algorithm that exactly computes the Hamming distance between P and every length-m substring of T. This improves the previous O(nm(logmloglogm)1/4)-time deterministic algorithm [Chan, Jin, Vassilevska Williams, Xu FOCS’23] and nearly matches their O(nm)-time Las Vegas algorithm.
Sparse General Convolution: For sparse convolution with possibly negative input, all previous approaches required Ω(tlog2 t) time, where t is the maximum of input and output sparsity, and an important question left open by [Bringmann, Fischer, Nakos STOC’21] is whether this can be improved. We make partial progress towards solving this question by giving a Monte Carlo O(tlogt) time algorithm in the restricted case where the length N of the input vectors satisfies N ≤ t1.99.
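For readers unfamiliar with the hashing technique discussed above, here is a minimal sketch (ours, with invented toy inputs): reducing indices modulo a random prime p preserves additive structure, since (i + j) mod p = ((i mod p) + (j mod p)) mod p, at the price of the collisions whose probability the large sieve inequality is used to control.

```python
# Hashing indices modulo a random prime folds a huge index range down to
# [0, p) while preserving sums. Quadratic-time toy sketch; sympy's randprime
# is used only for convenience.
from collections import defaultdict
from sympy import randprime

def hashed_convolution(A, B, Q):
    """A, B: dicts index -> nonnegative value. Convolve after hashing mod p."""
    p = randprime(Q // 2, Q)
    C = defaultdict(int)
    for i, a in A.items():
        for j, b in B.items():
            C[(i + j) % p] += a * b   # distinct sums may collide
    return dict(C), p

A = {0: 1, 10**9: 2}
B = {5: 3, 10**9: 1}
print(hashed_convolution(A, B, Q=1000))
```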

Publisher's Version
Relaxed Local Correctability from Local Testing
Vinayak M. Kumar and Geoffrey Mon
(University of Texas at Austin, USA)
We construct the first asymptotically good relaxed locally correctable codes with polylogarithmic query complexity, bringing the upper bound polynomially close to the lower bound of Gur and Lachish (SICOMP 2021). Our result follows from showing that a high-rate locally testable code can boost the block length of a smaller relaxed locally correctable code, while preserving the correcting radius and incurring only a modest additive cost in rate and query complexity. We use the locally testable code's tester to check if the amount of corruption in the input is low; if so, we can “zoom-in” to a suitable substring of the input and recurse on the smaller code’s local corrector. Hence, iterating this operation with a suitable family of locally testable codes due to Dinur, Evra, Livne, Lubotzky, and Mozes (STOC 2022) yields asymptotically good codes with relaxed local correctability, arbitrarily large block length, and polylogarithmic query complexity.
Our codes asymptotically inherit the rate and distance of any locally testable code used in the final invocation of the operation. Therefore, our framework also yields nonexplicit relaxed locally correctable codes with polylogarithmic query complexity that have rate and distance approaching the Gilbert–Varshamov bound.

Publisher's Version

10A

On Optimal Coreset Construction for Euclidean (k,z)-Clustering
Lingxiao Huang, Jian Li, and Xuan Wu
(Nanjing University, China; Tsinghua University, China)
Constructing small-sized coresets for various clustering problems in different metric spaces has attracted significant attention for the past decade. A central problem in the coreset literature is to understand what is the best possible coreset size for (k,z)-clustering in Euclidean space. While there has been significant progress in the problem, there is still a gap between the state-of-the-art upper and lower bounds. For instance, the best known upper bound for k-means (z=2) is min{O(k3/2 ε−2),O(k ε−4)} [Cohen-Addad, Larsen, Saulpic, Schwiegelshohn, Sheikh-Omar, NeurIPS’22], while the best known lower bound is Ω(kε−2) [Cohen-Addad, Larsen, Saulpic, Schwiegelshohn. STOC’22]. In this paper, we make significant progress on both upper and lower bounds. For a large range of parameters (i.e., ε, k), we have a complete understanding of the optimal coreset size. In particular, we obtain the following results: (1) We present a new coreset lower bound Ω(k ε−z−2) for Euclidean (k,z)-clustering when ε ≥ Ω(k−1/(z+2)). In view of the prior upper bound Õz(k ε−z−2) [Cohen-Addad, Larsen, Saulpic, Schwiegelshohn. STOC’22], the bound is optimal. The new lower bound is surprising since Ω(kε−2) [Cohen-Addad, Larsen, Saulpic, Schwiegelshohn. STOC’22] was “conjectured” to be the correct bound in some recent works (see e.g., [Cohen-Addad, Larsen, Saulpic, Schwiegelshohn. STOC’22; Cohen-Addad, Larsen, Saulpic, Schwiegelshohn, Sheikh-Omar, NeurIPS’22]). Our new lower bound instance is a delicate construction with multiple clusters of points, which is a significant departure from the previous construction in [Cohen-Addad, Larsen, Saulpic, Schwiegelshohn. STOC’22] that contains a single cluster of points. The new lower bound also implies improved lower bounds for (k,z)-clustering in doubling metrics. (2) For the upper bound, we provide efficient coreset construction algorithms for (k,z)-clustering with improved or optimal coreset sizes in several metric spaces. In particular, we provide an Õz(k(2z+2)/(z+2) ε−2)-sized coreset, with a unified analysis, for (k,z)-clustering for all z≥ 1 in Euclidean space. This upper bound improves upon the Õz(k2ε−2) upper bound by [Cohen-Addad, Larsen, Saulpic, Schwiegelshohn. STOC’22] (when k≤ ε−1), and matches the recent independent results [Cohen-Addad, Larsen, Saulpic, Schwiegelshohn, Sheikh-Omar, NeurIPS’22] for k-median and k-means (z=1,2) and extends them to all z≥ 1.
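As background, the generic sensitivity-sampling template that underlies much of the coreset literature fits in a few lines; the sketch below is a simplified k-means illustration with invented data, not the paper's optimal construction (practical variants also mix in a uniform sampling term, omitted here, and assume the rough solution has positive cost).

```python
# Sensitivity-style sampling for k-means (simplified): sample points with
# probability proportional to their cost under a rough center set, and
# reweight by inverse probability so the coreset cost is unbiased.
import numpy as np

def kmeans_coreset(X, centers, m, rng):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1).min(axis=1)
    prob = d2 / d2.sum()
    idx = rng.choice(len(X), size=m, p=prob)
    weights = 1.0 / (m * prob[idx])   # E[weighted coreset cost] = true cost
    return X[idx], weights

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(size=(500, 2)), rng.normal(size=(500, 2)) + 6.0])
centers = np.array([[0.0, 0.0], [6.0, 6.0]])
C, w = kmeans_coreset(X, centers, m=50, rng=rng)
print(C.shape, w.sum())
```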

Publisher's Version
Understanding the Cluster Linear Program for Correlation Clustering
Nairen Cao, Vincent Cohen-Addad, Euiwoong Lee, Shi Li, Alantha Newman, and Lukas Vogl
(Boston College, USA; Google Research, France; University of Michigan, USA; Nanjing University, China; CNRS - Université Grenoble Alpes, France; EPFL, Lausanne, Switzerland)
In the classic Correlation Clustering problem introduced by Bansal, Blum, and Chawla (FOCS 2002), the input is a complete graph where edges are labeled either + or −, and the goal is to find a partition of the vertices that minimizes the number of + edges across parts plus the number of − edges within parts. In recent years, Chawla, Makarychev, Schramm and Yaroslavtsev (STOC 2015) gave a 2.06-approximation by providing a near-optimal rounding of the standard LP, and Cohen-Addad, Lee, Li, and Newman (FOCS 2022, 2023) finally bypassed the integrality gap of 2 for this LP, giving a 1.73-approximation for the problem. While introducing new ideas for Correlation Clustering, their algorithm is more complicated than typical approximation algorithms in the following two aspects: (1) It is based on two different relaxations with separate rounding algorithms connected by the round-or-cut procedure. (2) Each of the rounding algorithms has to separately handle seemingly inevitable correlated rounding errors, coming from correlated rounding of Sherali-Adams and other strong LP relaxations. In order to create a simple and unified framework for Correlation Clustering similar to those for typical approximate optimization tasks, we propose the cluster LP as a strong linear program that might tightly capture the approximability of Correlation Clustering. It unifies all the previous relaxations for the problem. It is exponential-sized, but we show that it can be (1+є)-approximately solved in polynomial time for any є > 0, providing the framework for designing rounding algorithms without worrying about correlated rounding errors; these errors are handled uniformly in solving the relaxation. We demonstrate the power of the cluster LP by presenting a simple rounding algorithm, and providing two analyses, one analytically proving a 1.49-approximation and the other solving a factor-revealing SDP to show a 1.437-approximation. Both proofs introduce principled methods by which to analyze the performance of the algorithm, resulting in a significantly improved approximation guarantee. Finally, we prove an integrality gap of 4/3 for the cluster LP, showing that our 1.437 upper bound cannot be drastically improved. Our gap instance directly inspires an improved NP-hardness of approximation with a ratio of 24/23 ≈ 1.043; no explicit hardness ratio was known before.

Publisher's Version Info
Combinatorial Correlation Clustering
Vincent Cohen-Addad, David Rasmussen Lolck, Marcin Pilipczuk, Mikkel Thorup, Shuyi Yan, and Hanwen Zhang
(Google Research, France; University of Copenhagen, Copenhagen, Denmark; University of Warsaw, Poland)
Correlation Clustering is a classic clustering objective arising in numerous machine learning and data mining applications. Given a graph G=(V,E), the goal is to partition the vertex set into clusters so as to minimize the number of edges between clusters plus the number of edges missing within clusters. The problem is APX-hard and the best known polynomial time approximation factor is 1.73 by Cohen-Addad, Lee, Li, and Newman [FOCS’23]. They use an LP with |V|1/єΘ(1) variables for some small є. However, due to the practical relevance of correlation clustering, there has also been great interest in getting more efficient sequential and parallel algorithms. The classic combinatorial pivot algorithm of Ailon, Charikar and Newman [JACM’08] provides a 3-approximation in linear time. Like most other algorithms discussed here, this uses randomization. Recently, Behnezhad, Charikar, Ma and Tan [FOCS’22] presented a (3+є)-approximate solution to the problem in a constant number of rounds in the Massively Parallel Computation (MPC) setting. Very recently, Cao, Huang, Su [SODA’24] provided a 2.4-approximation in a polylogarithmic number of rounds in the MPC model and in Õ(|E|1.5) time in the classic sequential setting. They asked whether it is possible to get a better-than-3 approximation in near-linear time. We resolve this problem with an efficient combinatorial algorithm providing a drastically better approximation factor. It achieves an approximation factor of roughly 2−2/13 < 1.847 in sub-linear (Õ(|V|)) sequential time or in sub-linear (Õ(|V|)) space in the streaming setting, and it uses only a constant number of rounds in the MPC model.
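For contrast with the new algorithm, the 3-approximate Pivot baseline of Ailon, Charikar, and Newman [JACM’08] mentioned above is short enough to state in full; the toy instance below is ours.

```python
# Randomized Pivot for Correlation Clustering: repeatedly pick a random
# remaining vertex and cluster it with its remaining +neighbors.
import random

def pivot(vertices, plus_neighbors, seed=0):
    rng = random.Random(seed)
    remaining = set(vertices)
    clusters = []
    while remaining:
        v = rng.choice(sorted(remaining))                 # random pivot
        cluster = {v} | (plus_neighbors[v] & remaining)
        clusters.append(cluster)
        remaining -= cluster
    return clusters

plus = {1: {2, 3}, 2: {1}, 3: {1}, 4: set()}
print(pivot([1, 2, 3, 4], plus))
```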

Publisher's Version Info
Random-Order Contention Resolution via Continuous Induction: Tightness for Bipartite Matching under Vertex Arrivals
Calum MacRury and Will Ma
(Columbia University, USA)
We introduce a new approach for designing Random-order Contention Resolution Schemes (RCRS’s) via exact solution in continuous time. Given a function c(y):[0,1] → [0,1], we show how to select each element which arrives at time y ∈ [0,1] with probability exactly c(y). We provide a rigorous algorithmic framework for achieving this, which discretizes the time interval and also needs to sample its past execution to ensure these exact selection probabilities. We showcase our framework in the context of online contention resolution schemes for matching with random-order vertex arrivals. For bipartite graphs with two-sided arrivals, we design a (1+e−2)/2 ≈ 0.567-selectable RCRS, which we also show to be tight. Next, we show that the presence of short odd-length cycles is the only barrier to attaining a (tight) (1+e−2)/2-selectable RCRS on general graphs. By generalizing our bipartite RCRS, we design an RCRS for graphs with odd-length girth g which is (1+ e−2)/2-selectable as g → ∞. This convergence happens very rapidly: for triangle-free graphs (i.e., g ≥ 5), we attain a 121/240 + 7/(16e2) ≈ 0.563-selectable RCRS. Finally, for general graphs we improve on the 8/15 ≈ 0.533-selectable RCRS of (Fu et al., 2021) and design an RCRS which is at least 0.535-selectable. Due to the reduction of (Ezra et al., 2020), our bounds yield a 0.535-competitive (respectively, (1+ e−2)/2-competitive) algorithm for prophet secretary matching on general (respectively, bipartite) graphs under vertex arrivals.

Publisher's Version Info
Prize-Collecting Steiner Tree: A 1.79 Approximation
Ali Ahmadi, Iman Gholami, MohammadTaghi Hajiaghayi, Peyman Jabbarzade, and Mohammad Mahdavi
(University of Maryland, USA)
Prize-Collecting Steiner Tree (PCST) is a generalization of the Steiner Tree problem, a fundamental problem in computer science. In the classic Steiner Tree problem, we aim to connect a set of vertices known as terminals using the minimum-weight tree in a given weighted graph. In this generalized version, each vertex has a penalty, and there is flexibility to decide whether to connect each vertex or pay its associated penalty, making the problem more realistic and practical.
Both the Steiner Tree problem and its Prize-Collecting version had long-standing 2-approximation algorithms, matching the integrality gap of the natural LP formulations for both. This barrier for both problems has been surpassed, with algorithms achieving approximation factors below 2. While research on the Steiner Tree problem has led to a series of reductions in the approximation ratio below 2, culminating in a ln(4)+є approximation by Byrka, Grandoni, Rothvoß, and Sanità [STOC’10], the Prize-Collecting version has not seen improvements in the past 15 years since the work of Archer, Bateni, Hajiaghayi, and Karloff [FOCS’09, SIAM J. Comput.’11], which reduced the approximation factor for this problem from 2 to 1.9672. Interestingly, even the Prize-Collecting TSP approximation, which was first improved below 2 in the same paper, has seen several advancements since then (see, e.g., Blauth and Nägele [STOC’23]).
In this paper, we reduce the approximation factor for the PCST problem substantially to 1.7994 via a novel iterative approach.
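The objective is easy to specify exactly, which the following brute-force sketch does (ours, exponential time, with an invented three-vertex instance): enumerate candidate vertex sets, span each by a minimum spanning tree, and charge penalties for the vertices left out. The paper's contribution is of course a polynomial-time 1.7994-approximation, not this enumeration.

```python
# Brute-force Prize-Collecting Steiner Tree via subset enumeration.
from itertools import combinations

import networkx as nx

def pcst_brute_force(G, penalty):
    best = (sum(penalty.values()), [])   # empty tree: pay every penalty
    for r in range(1, len(G) + 1):
        for S in combinations(G.nodes, r):
            H = G.subgraph(S)
            if not nx.is_connected(H):
                continue
            T = nx.minimum_spanning_tree(H, weight="weight")
            cost = T.size(weight="weight") + sum(penalty[v] for v in G if v not in S)
            best = min(best, (cost, list(T.edges)), key=lambda t: t[0])
    return best

G = nx.Graph()
G.add_weighted_edges_from([("a", "b", 2), ("b", "c", 2), ("a", "c", 5)])
print(pcst_brute_force(G, {"a": 3, "b": 1, "c": 3}))
```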

Publisher's Version

10B

Reconfiguration of Basis Pairs in Regular Matroids
Kristóf Bérczi, Bence Mátravölgyi, and Tamás Schwarcz
(Eötvös Loránd University, Hungary; ETH Zurich, Switzerland)
In recent years, combinatorial reconfiguration problems have attracted great attention due to their connection to various topics such as optimization, counting, enumeration, or sampling. One of the most intriguing open questions concerns the exchange distance of two matroid basis sequences, a problem that appears in several areas of computer science and mathematics. In 1980, White proposed a conjecture for the characterization of two basis sequences being reachable from each other by symmetric exchanges, which also received significant interest in algebra due to its connection to toric ideals and Gröbner bases. In this work, we verify White’s conjecture for basis sequences of length two in regular matroids, a problem that was formulated as a separate question by Farber, Richter, and Shank, and by Andres, Hochstättler, and Merkel. Most previous work on White’s conjecture has not considered the question from an algorithmic perspective. We study the problem from an optimization point of view: our proof implies a polynomial algorithm for determining a sequence of symmetric exchanges that transforms a basis pair into another, thus providing the first polynomial upper bound on the exchange distance of basis pairs in regular matroids. As a byproduct, we verify a conjecture of Gabow from 1976 on the serial symmetric exchange property of matroids for the regular case.

Publisher's Version
Sparsifying Generalized Linear Models
Arun Jambulapati, James R. Lee, Yang P. Liu, and Aaron Sidford
(Simons Institute for the Theory of Computing, Berkeley, USA; University of Washington, USA; Institute for Advanced Study, Princeton, USA; Stanford University, USA)
We consider the sparsification of sums F : ℝn → ℝ+ where F(x) = f1(⟨ a1,x⟩) + ⋯ + fm(⟨ am,x⟩) for vectors a1,…,am ∈ ℝn and functions f1,…,fm : ℝ → ℝ+. We show that (1+ε)-approximate sparsifiers of F with support size n2 (logn/ε)O(1) exist whenever the functions f1,…,fm are symmetric, monotone, and satisfy natural growth bounds. Additionally, we give efficient algorithms to compute such a sparsifier assuming each fi can be evaluated efficiently. Our results generalize the classical case of ℓp sparsification, where fi(z) = |z|p, for p ∈ (0, 2], and give the first near-linear size sparsifiers in the well-studied setting of the Huber loss function and its generalizations, e.g., fi(z) = min{|z|p, |z|2} for 0 < p ≤ 2. Our sparsification algorithm can be applied to give near-optimal reductions for optimizing a variety of generalized linear models including ℓp regression for p ∈ (1, 2] to high accuracy, via solving (logn)O(1) sparse regression instances with m ≤ n(logn)O(1), plus runtime proportional to the number of nonzero entries in the vectors a1, …, am.

Publisher's Version Video
Sampling Balanced Forests of Grids in Polynomial Time
Sarah Cannon, Wesley Pegden, and Jamie Tucker-Foltz
(Claremont McKenna College, USA; Carnegie Mellon University, USA; Harvard University, USA)
We prove that a polynomial fraction of the set of k-component forests in the m × n grid graph have equal numbers of vertices in each component, for any constant k. This resolves a conjecture of Charikar, Liu, Liu, and Vuong, and establishes the first provably polynomial-time algorithm for (exactly or approximately) sampling balanced grid graph partitions according to the spanning tree distribution, which weights each k-partition according to the product, across its k pieces, of the number of spanning trees of each piece. Our result follows from a careful analysis of the probability a uniformly random spanning tree of the grid can be cut into balanced pieces. Beyond grids, we show that for a broad family of lattice-like graphs, we achieve balance up to any multiplicative (1 ± ε) constant with constant probability. More generally, we show that, with constant probability, components derived from uniform spanning trees can approximate any given partition of a planar region specified by Jordan curves. This implies polynomial-time algorithms for sampling approximately balanced tree-weighted partitions for lattice-like graphs. Our results have applications to understanding political districtings, where there is an underlying graph of indivisible geographic units that must be partitioned into k population-balanced connected subgraphs. In this setting, tree-weighted partitions have interesting geometric properties, and this has stimulated significant effort to develop methods to sample them.
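The object being analyzed can be sampled by rejection, as in the sketch below (our illustration, using networkx's random_spanning_tree): draw a uniform spanning tree of the grid, delete k−1 uniformly random tree edges, and accept if the components are balanced. The paper's theorem is precisely that, on grids, the acceptance probability is at least inverse-polynomial.

```python
# Rejection sampling for balanced partitions derived from a uniform spanning
# tree of the m x n grid (assumes k divides m*n).
import random

import networkx as nx

def sample_balanced_partition(m, n, k, tries=10_000, seed=0):
    rng = random.Random(seed)
    G = nx.grid_2d_graph(m, n)
    target = (m * n) // k
    for _ in range(tries):
        T = nx.random_spanning_tree(G, seed=rng.randrange(2**31))
        T.remove_edges_from(rng.sample(list(T.edges), k - 1))
        parts = list(nx.connected_components(T))
        if all(len(P) == target for P in parts):
            return parts
    return None

print(sample_balanced_partition(4, 4, 2))
```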

Publisher's Version
Sampling Proper Colorings on Line Graphs Using (1+o(1))Δ Colors
Yulin Wang, Chihao Zhang, and Zihan Zhang
(Shanghai Jiao Tong University, China; SOKENDAI, Japan)
We prove that the single-site Glauber dynamics for sampling proper q-colorings mixes in OΔ(nlogn) time on line graphs with n vertices and maximum degree Δ when q>(1+o(1))Δ. The main tool in our proof is the matrix trickle-down theorem developed by Abdolazimi, Liu and Oveis Gharan (FOCS, 2021).
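The chain in question is simple to state; the sketch below (ours, on a small line graph, with q chosen comfortably above Δ so the greedy start and every update succeed) runs single-site Glauber dynamics: pick a uniformly random vertex and recolor it uniformly among the colors not used by its neighbors.

```python
# Single-site Glauber dynamics for proper q-colorings, run on a line graph.
import random

import networkx as nx

def glauber(G, q, steps, seed=0):
    rng = random.Random(seed)
    color = {}
    for v in G:                       # greedy proper initial coloring
        used = {color[u] for u in G[v] if u in color}
        color[v] = next(c for c in range(q) if c not in used)
    nodes = list(G)
    for _ in range(steps):
        v = rng.choice(nodes)         # uniformly random site
        free = sorted(set(range(q)) - {color[u] for u in G[v]})
        color[v] = rng.choice(free)   # recolor uniformly among proper colors
    return color

L = nx.line_graph(nx.complete_graph(6))   # a line graph with Delta = 8
print(glauber(L, q=10, steps=10_000))
```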

Publisher's Version
Hypergraph Unreliability in Quasi-Polynomial Time
Ruoxu Cen, Jason Li, and Debmalya Panigrahi
(Duke University, USA; Carnegie Mellon University, USA)
The hypergraph unreliability problem asks for the probability that a hypergraph gets disconnected when every hyperedge fails independently with a given probability. For graphs, the unreliability problem has been studied over many decades, and multiple fully polynomial-time approximation schemes are known starting with the work of Karger (STOC 1995). In contrast, prior to this work, no non-trivial result was known for hypergraphs (of arbitrary rank). In this paper, we give quasi-polynomial time approximation schemes for the hypergraph unreliability problem. For any fixed ε ∈ (0, 1), we first give a (1+ε)-approximation algorithm that runs in mO(logn) time on an m-hyperedge, n-vertex hypergraph. Then, we improve the running time to m· nO(log2 n) with an additional exponentially small additive term in the approximation.
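Naive Monte Carlo, sketched below with union-find connectivity (our illustration, invented instance), estimates the unreliability well only when the disconnection probability is not tiny; the regime of very small failure probabilities is what makes the approximation schemes above nontrivial.

```python
# Monte Carlo estimate of hypergraph unreliability: each hyperedge fails
# independently with probability p; check connectivity of the survivors.
import random

def disconnection_prob(n, hyperedges, p, trials=100_000, seed=0):
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]   # path halving
                x = parent[x]
            return x
        for e in hyperedges:
            if rng.random() >= p:               # hyperedge survives
                for u in e[1:]:
                    parent[find(u)] = find(e[0])  # merge all vertices of e
        hits += len({find(v) for v in range(n)}) > 1
    return hits / trials

print(disconnection_prob(4, [(0, 1, 2), (2, 3), (0, 3)], p=0.5))
```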

Publisher's Version

10C

Memory Checking Requires Logarithmic Overhead
Elette Boyle, Ilan Komargodski, and Neekon Vafa
(Reichman University, Israel; NTT Research, USA; Hebrew University of Jerusalem, Israel; Massachusetts Institute of Technology, USA)
We study the complexity of memory checkers with computational security and prove the first general tight lower bound.
Memory checkers, first introduced over 30 years ago by Blum, Evans, Gemmell, Kannan, and Naor (FOCS ’91, Algorithmica ’94), allow a user to store and maintain a large memory on a remote and unreliable server by using small trusted local storage. The user can issue instructions to the server and after every instruction, obtain either the correct value or a failure (but not an incorrect answer) with high probability. The main complexity measure of interest is the size of the local storage and the number of queries the memory checker makes upon every logical instruction. The most efficient known construction has query complexity O(logn/loglogn) and local space proportional to a computational security parameter, assuming one-way functions, where n is the logical memory size. Dwork, Naor, Rothblum, and Vaikuntanathan (TCC ’09) showed that for a restricted class of “deterministic and non-adaptive” memory checkers, this construction is optimal, up to constant factors. However, going beyond the small class of deterministic and non-adaptive constructions has remained a major open problem.
In this work, we fully resolve the complexity of memory checkers by showing that any construction with local space p and query complexity q must satisfy
p ≥ n/(logn)O(q).
This implies, as a special case, that q ≥ Ω(logn/loglogn) in any scheme, assuming that p ≤ n1−ε for some ε>0. The bound applies to any scheme with computational security, completeness 2/3, and inverse polynomial in n soundness (all of which make our lower bound only stronger). We further extend the lower bound to schemes where the read complexity qr and write complexity qw differ. For instance, we show the tight bound that if qr=O(1) and p ≤ n1−ε for some ε>0, then qw ≥ nΩ(1). This is the first lower bound, for any non-trivial class of constructions, showing a read-write query complexity trade-off.
Our proof is via a delicate compression argument showing that a “too good to be true” memory checker can be used to compress random bits of information. We draw inspiration from tools recently developed for lower bounds for relaxed locally decodable codes. However, our proof itself significantly departs from these works, necessitated by the differences between settings.
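The construction being matched is the classic Merkle-tree memory checker, sketched below (our simplified version: n a power of two, trusted state consisting of a single root hash): every read and write touches the O(log n) hashes along a root-to-leaf path.

```python
# Merkle-tree memory checker: O(log n) queries per operation, O(1) trusted
# storage (the root hash). self.tree models the untrusted server.
import hashlib

def H(left, right):
    return hashlib.sha256(left + b"|" + right).digest()

class MerkleChecker:
    def __init__(self, n):                    # n leaves (power of two), all b"\x00"
        self.n = n
        self.tree = [b"\x00"] * (2 * n)       # leaves live at indices n .. 2n-1
        for i in range(n - 1, 0, -1):
            self.tree[i] = H(self.tree[2 * i], self.tree[2 * i + 1])
        self.root = self.tree[1]              # the only trusted local state

    def read(self, leaf):
        i = self.n + leaf
        value = h = self.tree[i]              # claimed leaf value
        while i > 1:                          # recompute the root from the path
            sib = self.tree[i ^ 1]
            h = H(h, sib) if i % 2 == 0 else H(sib, h)
            i //= 2
        if h != self.root:
            raise ValueError("server response failed verification")
        return value

    def write(self, leaf, value):
        self.read(leaf)                       # authenticate the old path first
        i = self.n + leaf
        self.tree[i] = value
        while i > 1:                          # rehash the path bottom-up
            sib = self.tree[i ^ 1]
            up = H(self.tree[i], sib) if i % 2 == 0 else H(sib, self.tree[i])
            self.tree[i // 2] = up
            i //= 2
        self.root = self.tree[1]              # update trusted root

mc = MerkleChecker(8)
mc.write(3, b"hello")
print(mc.read(3))
```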

Publisher's Version
Perfect Zero-Knowledge PCPs for #P
Tom Gur, Jack O'Connor, and Nicholas Spooner
(University of Cambridge, United Kingdom; University of Warwick, United Kingdom; New York University, USA)
We construct perfect zero-knowledge probabilistically checkable proofs (PZK-PCPs) for every language in #P. This is the first construction of a PZK-PCP for any language outside BPP. Furthermore, unlike previous constructions of (statistical) zero-knowledge PCPs, our construction simultaneously achieves non-adaptivity and zero knowledge against arbitrary (adaptive) polynomial-time malicious verifiers. Our construction consists of a novel masked sumcheck PCP, which uses the combinatorial Nullstellensatz to obtain antisymmetric structure within the hypercube and randomness outside of it. To prove zero knowledge, we introduce the notion of locally simulatable encodings: randomised encodings in which every local view of the encoding can be efficiently sampled given a local view of the message. We show that the code arising from the sumcheck protocol (the Reed–Muller code augmented with subcube sums) admits a locally simulatable encoding. This reduces the algebraic problem of simulating our masked sumcheck to a combinatorial property of antisymmetric functions.

Publisher's Version
One-Way Functions and Zero Knowledge
Shuichi Hirahara and Mikito Nanashima
(National Institute of Informatics, Tokyo, Japan; Tokyo Institute of Technology, Tokyo, Japan)
The fundamental theorem of Goldreich, Micali, and Wigderson (J. ACM 1991) shows that the existence of a one-way function is sufficient for constructing computational zero knowledge (CZK) proofs for all languages in NP. We prove its converse, thereby establishing characterizations of one-way functions based on the worst-case complexities of zero knowledge. Specifically, we prove that the following are equivalent: - A one-way function exists. - NP ⊆ CZK and NP is hard in the worst case. - CZK is hard in the worst case and the problem GapMCSP of approximating circuit complexity is in CZK. The characterization above also holds for statistical and computational zero-knowledge argument systems. We further extend this characterization to a proof system with knowledge complexity O(logn). In particular, we show that the existence of a one-way function is characterized by the worst-case hardness of CZK if GapMCSP has a proof system with knowledge complexity O(logn). We complement this result by showing that NP admits an interactive proof system with knowledge complexity ω(logn) under the existence of an exponentially hard auxiliary-input one-way function (which is a weaker primitive than an exponentially hard one-way function). We also characterize the existence of a robustly-often nonuniformly computable one-way function by the nondeterministic hardness of CZK under the weak assumption that PSPACE ⊄ AM. We present two applications of our results. First, we simplify the proof of the recent characterization of a one-way function by NP-hardness of a meta-computational problem and the worst-case hardness of NP given by Hirahara (STOC’23). Second, we show that if NP has a laconic zero-knowledge argument system, then there exists a public-key encryption scheme whose security can be based on the worst-case hardness of NP. This improves previous results, which assume the existence of indistinguishability obfuscation.

Publisher's Version
Tight Time-Space Tradeoffs for the Decisional Diffie-Hellman Problem
Akshima, Tyler Besselman, Siyao Guo, Zhiye Xie, and Yuping Ye
(NYU Shanghai, China; East China Normal University, China)
In the (preprocessing) Decisional Diffie-Hellman (DDH) problem, we are given a cyclic group G of prime order N with a generator g, and want to prepare some advice of size S, such that we can later efficiently distinguish (gx,gy,gxy) from (gx,gy,gz) in time T, for uniformly and independently chosen x,y,z from [N]. This is a central cryptographic problem whose computational hardness underpins many widely deployed schemes such as the Diffie–Hellman key exchange protocol.
We prove that any generic preprocessing DDH algorithm (operating in any cyclic group) achieves advantage at most O(ST2/N). This bound matches the best known attack up to poly-log factors, and confirms that DDH is as secure as the (seemingly harder) discrete logarithm problem against preprocessing attacks. Our result resolves an open question by Corrigan-Gibbs and Kogan (EUROCRYPT 2018), which proved optimal bounds for many variants of discrete logarithm problems except DDH (with an O(√ST2/N) bound).
We obtain our results by adopting and refining the approach of Gravin, Guo, Kwok, Lu (SODA 2021) and of Yun (EUROCRYPT 2015). Along the way, we significantly simplify and extend the above techniques, which may be of independent interest. The highlights of our techniques are as follows:
1. We obtain a simpler reduction from decisional problems against S-bit advice to their S-wise XOR lemmas against zero-advice, recovering the reduction by Gravin, Guo, Kwok and Lu (SODA 2021).
2. We show how to reduce generic hardness of decisional problems to their variants in the simpler hyperplane model proposed by Yun (EUROCRYPT 2015). This is the first work analyzing a decisional problem in Yun’s model, answering an open problem proposed by Auerbach, Hoffman, and Pascual-Perez (TCC 2023).
3. We prove an S-wise XOR lemma of DDH in Yun’s model. As a corollary, we obtain the generic hardness of the S-XOR DDH problem.
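The distinguishing task itself is easy to set up, as in the toy sketch below (our illustration, with a deliberately tiny order-233 group; deployed groups have order around 2^256): the preprocessing attacker gets S bits of advice about the fixed group, then must tell (g^x, g^y, g^xy) from (g^x, g^y, g^z).

```python
# Sampling the two DDH distributions in a toy group. G is the order-233
# subgroup of squares modulo the prime 467, generated by g = 4.
import random

p, g, N = 467, 4, 233          # 233 = (467 - 1) / 2 is prime; 4 has order 233

def sample(ddh, rng):
    x, y, z = (rng.randrange(1, N) for _ in range(3))
    third = pow(g, x * y, p) if ddh else pow(g, z, p)
    return pow(g, x, p), pow(g, y, p), third

rng = random.Random(0)
print(sample(True, rng), sample(False, rng))
```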

Publisher's Version
SNARGs under LWE via Propositional Proofs
Zhengzhong Jin, Yael Kalai, Alex Lombardi, and Vinod Vaikuntanathan
(Northeastern University, USA; Microsoft Research, USA; Massachusetts Institute of Technology, USA; Princeton University, USA)
We construct a succinct non-interactive argument (SNARG) system for every NP language L that has a propositional proof of non-membership, i.e., a propositional proof of x ∉ L. The soundness of our SNARG system relies on the hardness of the learning with errors (LWE) problem. The common reference string (CRS) in our construction grows with the space required to verify the propositional proof, and the size of the proof grows poly-logarithmically in the length of the propositional proof. Unlike most of the literature on SNARGs, our result implies SNARGs for languages L with proof length shorter than logarithmic in the deterministic time complexity of L. Our SNARG improves over prior SNARGs for such “hard” NP languages (Sahai and Waters, STOC 2014, Jain and Jin, FOCS 2022) in several ways: 1) For languages with polynomial-length propositional proofs of non-membership, our SNARGs are based on a single, polynomial-time falsifiable assumption, namely LWE. 2) Our construction handles super-polynomial length propositional proofs, as long as they have bounded space, under the subexponential LWE assumption. 3) Our SNARGs have a transparent setup, meaning that no private randomness is required to generate the CRS. Moreover, our approach departs dramatically from these prior works: we show how to design SNARGs for hard languages without publishing a program (in the CRS) that has the power to verify NP witnesses. The key new idea in our construction is what we call a “locally unsatisfiable extension” of the NP verification circuit {Cx}x. We say that an NP verifier has a locally unsatisfiable extension if for every x ∉ L, there exists an extension Ex of Cx that is not even locally satisfiable in the sense of a local assignment generator [Paneth-Rothblum, TCC 2017]. Crucially, we allow Ex to depend arbitrarily on x rather than being efficiently constructible. In this work, we show – via a “hash-and-BARG” for a hidden, encrypted computation – how to build SNARGs for all languages with locally unsatisfiable extensions. We additionally show that propositional proofs of unsatisfiability generically imply the existence of locally unsatisfiable extensions, which allows us to deduce our main results. As an illustrative example, our results imply a SNARG for the decisional Diffie-Hellman (DDH) language under the LWE assumption.

Publisher's Version

10D

On the Communication Complexity of Approximate Pattern Matching
Tomasz Kociumaka, Jakob Nogler, and Philip Wellnitz
(MPI-INF, Germany; Saarland Informatics Campus, Saarbrücken, Germany; ETH Zurich, Switzerland)
The decades-old Pattern Matching with Edits problem, given a length-n string T (the text), a length-m string P (the pattern), and a positive integer k (the threshold), asks to list all fragments of T that are at edit distance at most k from P. The one-way communication complexity of this problem is the minimum amount of space needed to encode the answer so that it can be retrieved without accessing the input strings P and T.
The closely related Pattern Matching with Mismatches problem (defined in terms of the Hamming distance instead of the edit distance) is already well understood from the communication complexity perspective: Clifford, Kociumaka, and Porat [SODA 2019] proved that Ω(n/m · k log(m/k)) bits are necessary and O(n/m · klog(m|Σ|/k)) bits are sufficient; the upper bound allows encoding not only the occurrences of P in T with at most k mismatches but also the substitutions needed to make each k-mismatch occurrence exact.
Despite recent improvements in the running time [Charalampopoulos, Kociumaka, and Wellnitz; FOCS 2020 and 2022], the communication complexity of Pattern Matching with Edits remained unexplored, with a lower bound of Ω(n/m · klog(m/k)) bits and an upper bound of O(n/m · k3logm) bits stemming from previous research. In this work, we prove an upper bound of O(n/m · k log2 m) bits, thus establishing the optimal communication complexity up to logarithmic factors. We also show that O(n/m · k logm log(m|Σ|)) bits allow encoding, for each k-error occurrence of P in T, the shortest sequence of edits needed to make the occurrence exact. Our result further emphasizes the close relationship between Pattern Matching with Mismatches and Pattern Matching with Edits.
We leverage the techniques behind our new result on the communication complexity to obtain quantum algorithms for Pattern Matching with Edits: we demonstrate a quantum algorithm that uses O(n1+o(1)/m · √km) queries and O(n1+o(1)/m · (√km + k3.5)) quantum time. Moreover, when determining the existence of at least one occurrence, the algorithm uses O(√n1+o(1)/m · √km) queries and O(√n1+o(1)/m · (√km + k3.5)) time. For both cases, we establish corresponding lower bounds to demonstrate that the query complexity is optimal up to sub-polynomial factors.
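As a baseline for what the encodings above summarize, the textbook dynamic program (Sellers' variant of the edit-distance DP, sketched below with a toy input of ours) lists the endpoints of all k-error occurrences: the first DP row is all zeros, so an occurrence may start at any text position.

```python
# Pattern Matching with Edits, brute-force DP: report ending positions j such
# that some fragment of T ending at j is within edit distance k of P.
def occurrences_with_edits(P, T, k):
    m = len(P)
    prev = list(range(m + 1))          # DP column for the empty text prefix
    ends = []
    for j, tc in enumerate(T, 1):
        cur = [0]                      # zero top row: matches may start anywhere
        for i, pc in enumerate(P, 1):
            cur.append(min(prev[i] + 1,                 # delete from P
                           cur[i - 1] + 1,              # insert into P
                           prev[i - 1] + (pc != tc)))   # substitute / match
        if cur[m] <= k:
            ends.append(j)             # a k-error occurrence of P ends here
        prev = cur
    return ends

print(occurrences_with_edits("abc", "xxabcxaxbcxx", k=1))
```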

Publisher's Version
Local Borsuk-Ulam, Stability, and Replicability
Zachary Chase, Bogdan Chornomaz, Shay Moran, and Amir Yehudayoff
(Technion, Israel; Google Research, Israel; University of Copenhagen, Copenhagen, Denmark)
We use and adapt the Borsuk-Ulam Theorem from topology to derive limitations on list-replicable and globally stable learning algorithms. We further demonstrate the applicability of our methods in combinatorics and topology. We show that, besides trivial cases, both list-replicable and globally stable learning are impossible in the agnostic PAC setting. This is in contrast with the realizable case where it is known that any class with a finite Littlestone dimension can be learned by such algorithms. In the realizable PAC setting, we sharpen previous impossibility results and broaden their scope. Specifically, we establish optimal bounds for list replicability and global stability numbers in finite classes. This provides an exponential improvement over previous works and implies an exponential separation from the Littlestone dimension. We further introduce lower bounds for weak learners, i.e., learners that are only marginally better than random guessing. Lower bounds from previous works apply only to stronger learners. To offer a broader and more comprehensive view of our topological approach, we prove a local variant of the Borsuk-Ulam theorem in topology and a result in combinatorics concerning Kneser colorings. In combinatorics, we prove that if c is a coloring of all non-empty subsets of [n] such that disjoint sets have different colors, then there is a chain of subsets that receives at least 1+ ⌊ n/2⌋ colors (this bound is sharp). In topology, we prove, for example, that for any open antipodal-free cover of the d-dimensional sphere, there is a point x that belongs to at least t=⌈(d+3)/2⌉ sets.

Publisher's Version
A New Information Complexity Measure for Multi-pass Streaming with Applications
Mark Braverman, Sumegha Garg, Qian Li, Shuo Wang, David P. Woodruff, and Jiapeng Zhang
(Princeton University, USA; Rutgers University, USA; Shenzhen Research Institute of Big Data, China; Shanghai Jiao Tong University, China; Carnegie Mellon University, USA; University of Southern California, USA)
We introduce a new notion of information complexity for multi-pass streaming problems and use it to resolve several important questions in data streams.
In the coin problem, one sees a stream of n i.i.d. uniformly random bits and one would like to compute the majority with constant advantage. We show that any constant-pass algorithm must use Ω(logn) bits of memory, significantly extending an earlier Ω(logn) bit lower bound for single-pass algorithms of Braverman-Garg-Woodruff (FOCS, 2020). This also gives the first Ω(logn) bit lower bound for the problem of approximating a counter up to a constant factor in worst-case turnstile streams for more than one pass.
In the needle problem, one either sees a stream of n i.i.d. uniform samples from a domain [t], or there is a randomly chosen needle α ∈[t] for which each item independently is chosen to equal α with probability p, and is otherwise uniformly random in [t]. The problem of distinguishing these two cases is central to understanding the space complexity of the frequency moment estimation problem in random order streams. We show tight multi-pass space bounds for this problem for every p < 1/(√n log3 n), resolving an open question of Lovett and Zhang (FOCS, 2023); even for 1-pass our bounds are new. To show optimality, we improve both lower and upper bounds from existing results.
Our information complexity framework significantly extends the toolkit for proving multi-pass streaming lower bounds, and we give a wide number of additional streaming applications of our lower bound techniques, including multi-pass lower bounds for ℓp-norm estimation, ℓp-point query and heavy hitters, and compressed sensing problems.
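A useful contrast for the counter lower bound is Morris's classic approximate counter, sketched below (our illustration): in the insertion-only setting it maintains a constant-factor approximation in O(loglog n) bits, which is exactly what the turnstile, multi-pass lower bound above rules out.

```python
# Morris's approximate counter: store only X ~ log2(count), so O(loglog n)
# bits suffice for a constant-factor estimate of n increments.
import random

def morris_count(n_increments, rng):
    X = 0
    for _ in range(n_increments):
        if rng.random() < 2.0 ** -X:   # increment X with probability 2^-X
            X += 1
    return 2 ** X - 1                  # unbiased estimator of the true count

print([morris_count(10_000, random.Random(s)) for s in range(5)])
```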

Publisher's Version
New Graph and Hypergraph Container Lemmas with Applications in Property Testing
Eric Blais and Cameron Seth
(University of Waterloo, Canada)
The graph and hypergraph container methods are powerful tools with a wide range of applications across combinatorics. Recently, Blais and Seth (FOCS 2023) showed that the graph container method is particularly well-suited for the analysis of the natural canonical tester for two fundamental graph properties: having a large independent set and k-colorability. In this work, we show that the connection between the container method and property testing extends further along two different directions.
First, we show that the container method can be used to analyze the canonical tester for many other properties of graphs and hypergraphs. We introduce a new hypergraph container lemma and use it to give an upper bound of O(kq3/є) on the sample complexity of є-testing satisfiability, where q is the number of variables per constraint and k is the size of the alphabet. This is the first upper bound for the problem that is polynomial in all of k, q and 1/є. As a corollary, we get new upper bounds on the sample complexity of the canonical testers for hypergraph colorability and for every semi-homogeneous graph partition property.
Second, we show that the container method can also be used to study the query complexity of (non-canonical) graph property testers. This result is obtained by introducing a new container lemma for the class of all independent set stars, a strict superset of the class of all independent sets. We use this container lemma to give a new upper bound of O(ρ5/є7/2) on the query complexity of є-testing the ρ-independent set property. This establishes for the first time the non-optimality of the canonical tester for a non-homogeneous graph partition property.
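A canonical tester, the object both halves of the paper analyze, looks as follows (our toy sketch for the ρ-independent-set property, with an arbitrary sample size; in the actual results the sample size is a specific function of ρ and є): sample a handful of vertices and decide based only on the induced subgraph.

```python
# Canonical tester: sample q vertices, inspect only the induced subgraph.
from itertools import combinations
import random

import networkx as nx

def canonical_test_independent_set(G, rho, q, seed=0):
    rng = random.Random(seed)
    S = rng.sample(list(G.nodes), q)
    H = G.subgraph(S)
    need = max(1, int(rho * q))
    # Accept iff the sample contains an independent set on a rho-fraction.
    return any(H.subgraph(C).number_of_edges() == 0
               for C in combinations(S, need))

G = nx.erdos_renyi_graph(200, 0.5, seed=1)
print(canonical_test_independent_set(G, rho=0.3, q=12))
```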

Publisher's Version
Exponential Quantum Space Advantage for Approximating Maximum Directed Cut in the Streaming Model
John Kallaugher, Ojas Parekh, and Nadezhda Voronova
(Sandia National Laboratories, USA; Boston University, USA)
While the search for quantum advantage typically focuses on speedups in execution time, quantum algorithms also offer the potential for advantage in space complexity. Previous work has shown such advantages for data stream problems, in which elements arrive and must be processed sequentially without random access, but these have been restricted to specially-constructed problems [Le Gall, SPAA ’06] or to polynomial advantage [Kallaugher, FOCS ’21]. We show an exponential quantum space advantage for the maximum directed cut problem. This is the first known exponential quantum space advantage for any natural streaming problem. This also constitutes the first unconditional exponential quantum resource advantage for approximating a discrete optimization problem in any setting.
Our quantum streaming algorithm 0.4844-approximates the value of the largest directed cut in a graph stream with n vertices using polylog(n) space, while previous work by Chou, Golovnev, and Velusamy [FOCS ’20] implies that obtaining an approximation ratio better than 4/9 ≈ 0.4444 requires Ω(√n) space for any classical streaming algorithm. Our result is based on a recent O(√n) space classical streaming approach by Saxena, Singer, Sudan, and Velusamy [FOCS ’23], with an additional improvement in the approximation ratio due to recent work by Singer [APPROX ’23].

Publisher's Version

11A

Proof of the Density Threshold Conjecture for Pinwheel Scheduling
Akitoshi Kawamura
(Kyoto University, Kyoto, Japan)
In the pinwheel scheduling problem, each task i is associated with a positive integer ai called its period, and we want to (perpetually) schedule one task per day so that each task i is performed at least once every ai days. An obvious necessary condition for schedulability is that the density, i.e., the sum of the reciprocals 1/ai, not exceed 1. We prove that all instances with density not exceeding 5/6 are schedulable, as was conjectured by Chan and Chin in 1993. Like some of the known partial progress towards the conjecture, our proof involves computer search for schedules for a large but finite set of instances. A key idea in our reduction to these finite cases is to generalize the problem to fractional (non-integer) periods in an appropriate way. As byproducts of our ideas, we obtain a simple proof that every instance with two distinct periods and density at most 1 is schedulable, as well as a fast algorithm for the bamboo garden trimming problem with approximation ratio 4/3.
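Schedulability of a concrete instance can be decided exactly by searching the finite state space of deadline vectors, in the spirit of the computer search the proof leverages; the sketch below (ours) first applies the density test and then prunes dead-end states. The instance (2,3,6) illustrates that density at most 1 alone does not suffice.

```python
# Exact pinwheel schedulability via reachability plus dead-end pruning.
# A state records, for each task, the days remaining before its deadline.
from fractions import Fraction

def schedulable(periods):
    if sum(Fraction(1, a) for a in periods) > 1:
        return False                            # density > 1: infeasible
    n = len(periods)
    start = tuple(periods)
    succ, frontier = {}, [start]
    while frontier:                             # forward reachability
        s = frontier.pop()
        if s in succ:
            continue
        succ[s] = []
        for i in range(n):                      # perform task i today
            t = tuple(periods[j] if j == i else s[j] - 1 for j in range(n))
            if min(t) > 0:                      # no other task missed its deadline
                succ[s].append(t)
                frontier.append(t)
    alive = set(succ)
    while True:                                 # prune states with no way forward
        dead = {s for s in alive if not any(t in alive for t in succ[s])}
        if not dead:
            break
        alive -= dead
    return bool(alive)                          # nonempty iff a schedule runs forever

print(schedulable([2, 4, 8]), schedulable([2, 3, 6]))   # True, False
```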

Publisher's Version
Constrained Submodular Maximization via New Bounds for DR-Submodular Functions
Niv Buchbinder and Moran Feldman
(Tel Aviv University, Israel; University of Haifa, Israel)
Submodular maximization under various constraints is a fundamental problem studied continuously, in both computer science and operations research, since the late 1970s. A central technique in this field is to approximately optimize the multilinear extension of the submodular objective, and then round the solution. The use of this technique requires a solver able to approximately maximize multilinear extensions. Following a long line of work, Buchbinder and Feldman (2019) described such a solver guaranteeing 0.385-approximation for down-closed constraints, while Oveis Gharan and Vondrák (2011) showed that no solver can guarantee better than 0.478-approximation. In this paper, we present a solver guaranteeing 0.401-approximation, which significantly reduces the gap between the best known solver and the inapproximability result. The design and analysis of our solver are based on a novel bound that we prove for DR-submodular functions. This bound improves over a previous bound due to Feldman et al. (2011) that is used by essentially all state-of-the-art results for constrained maximization of general submodular/DR-submodular functions. Hence, we believe that our new bound is likely to find many additional applications in related problems, and to be a key component for further improvement.
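The multilinear extension itself is typically accessed by sampling, as in the sketch below (our illustration with a small coverage function): F(x) is the expected value of f on a random set that includes each element i independently with probability x_i.

```python
# Monte Carlo access to the multilinear extension F(x) = E[f(R_x)].
import random

def multilinear_extension(f, x, samples=10_000, seed=0):
    rng = random.Random(seed)
    n, total = len(x), 0.0
    for _ in range(samples):
        R = {i for i in range(n) if rng.random() < x[i]}
        total += f(R)
    return total / samples

sets = [{"a", "b"}, {"b", "c"}, {"c"}]
cover = lambda S: len(set().union(*(sets[i] for i in S)))  # coverage is submodular
print(multilinear_extension(cover, [0.5, 0.5, 0.5]))       # expected value: 2.0
```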

Publisher's Version
Optimal Online Discrepancy Minimization
Janardhan Kulkarni, Victor Reis, and Thomas Rothvoss
(Microsoft Research, USA; Institute for Advanced Study, Princeton, USA; University of Washington, USA)
We prove that there exists an online algorithm that, for any sequence of vectors v1,…,vT ∈ ℝn with ||vi||2 ≤ 1, arriving one at a time, decides random signs x1,…,xT ∈ { −1,1} so that for every t ≤ T, the prefix sum ∑i=1t xivi is 10-subgaussian. This improves over the work of Alweiss, Liu and Sawhney, who kept prefix sums O(√log(nT))-subgaussian, and gives an O(√logT) bound on the discrepancy maxt≤T ||∑i=1t xi vi||. Our proof combines a generalization of Banaszczyk’s prefix balancing result to trees with a cloning argument to find distributions rather than single colorings. We also show a matching Ω(√logT) lower bound via an oblivious adversary strategy.
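For context, the prior algorithm being improved, the self-balancing walk of Alweiss, Liu, and Sawhney, fits in a few lines; in the sketch below (ours; the clipping stands in for their low-probability failure event, and the constant c plays the role of their Θ(log(nT)) threshold) each sign is biased against the current correlation.

```python
# Self-balancing walk: sign v_t against the running sum's correlation <w, v_t>.
import numpy as np

def self_balancing_walk(vectors, c=30.0, seed=0):
    rng = np.random.default_rng(seed)
    w = np.zeros(vectors.shape[1])
    for v in vectors:
        corr = np.clip(w @ v, -c, c)
        x = 1 if rng.random() < 0.5 - corr / (2 * c) else -1
        w += x * v
    return w

rng = np.random.default_rng(1)
vecs = rng.normal(size=(10_000, 50))
vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)   # enforce ||v_i||_2 <= 1
print(np.linalg.norm(self_balancing_walk(vecs), np.inf))
```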

Publisher's Version
Supermodular Approximation of Norms and Applications
Thomas Kesselheim, Marco Molinaro, and Sahil Singla
(University of Bonn, Bonn, Germany; PUC-Rio, Brazil; Georgia Institute of Technology, USA)
Many classical problems in theoretical computer science involve norms, even if implicitly; for example, both XOS functions and downward-closed sets are equivalent to some norms. The last decade has seen a lot of interest in designing algorithms beyond the standard ℓp norms ||· ||p. Despite notable advancements, many existing methods remain tailored to specific problems, leaving a broader applicability to general norms less understood. This paper investigates the intrinsic properties of ℓp norms that facilitate their widespread use and seeks to abstract these qualities to a more general setting. We identify supermodularity—often reserved for combinatorial set functions and characterized by monotone gradients—as a defining feature beneficial for ||·||pp. We introduce the notion of p-supermodularity for norms, asserting that a norm is p-supermodular if its pth power function exhibits supermodularity. The association of supermodularity with norms offers a new lens through which to view and construct algorithms. Our work demonstrates that for a large class of problems p-supermodularity is a sufficient criterion for developing good algorithms. This is either by reframing existing algorithms for problems like Online Load-Balancing and Bandits with Knapsacks through a supermodular lens, or by introducing novel analyses for problems such as Online Covering, Online Packing, and Stochastic Probing. Moreover, we prove that every symmetric norm can be approximated by a p-supermodular norm. Together, these recover and extend several existing results, and support p-supermodularity as a unified theoretical framework for optimization challenges centered around norm-related problems.

Publisher's Version
Ghost Value Augmentation for k-Edge-Connectivity
D. Ellis Hershkowitz, Nathan Klein, and Rico Zenklusen
(Brown University, USA; Institute for Advanced Study, Princeton, USA; ETH Zurich, Switzerland)
We give a poly-time algorithm for the k-edge-connected spanning subgraph (k-ECSS) problem that returns a solution of cost no greater than the cheapest (k+10)-ECSS on the same graph. Our approach enhances the iterative relaxation framework with a new ingredient, which we call ghost values, that allows for high sparsity in intermediate problems. Our guarantees improve upon the best-known approximation factor of 2 for k-ECSS whenever the optimal value of (k+10)-ECSS is close to that of k-ECSS. This is a property that holds for the closely related problem k-edge-connected spanning multi-subgraph (k-ECSM), which is identical to k-ECSS except edges can be selected multiple times at the same cost. As a consequence, we obtain a 1+O(1/k)-approximation algorithm for k-ECSM, which resolves a conjecture of Pritchard and improves upon a recent 1+O(1/√k)-approximation algorithm of Karlin, Klein, Oveis Gharan, and Zhang. Moreover, we present a matching lower bound for k-ECSM, showing that our approximation ratio is tight up to the constant factor in O(1/k), unless P=NP.

Publisher's Version

11B

Breaking the VLB Barrier for Oblivious Reconfigurable Networks
Tegan Wilson, Daniel Amir, Nitika Saran, Robert Kleinberg, Vishal Shrivastav, and Hakim Weatherspoon
(Cornell University, USA; Purdue University, USA)
In a landmark 1981 paper, Valiant and Brebner gave birth to the study of oblivious routing and, simultaneously, introduced its most powerful and ubiquitous method: Valiant load balancing (VLB). By routing messages through a randomly sampled intermediate node, VLB lengthens routing paths by a factor of two but gains the crucial property of obliviousness: it balances load in a completely decentralized manner, with no global knowledge of the communication pattern. Forty years later, with datacenters handling workloads whose communication pattern varies too rapidly to allow centralized coordination, oblivious routing is as relevant as ever, and VLB continues to take center stage as a widely used — and in some settings, provably optimal — way to balance load in the network obliviously to the traffic demands. However, the ability of the network to rapidly reconfigure its interconnection topology gives rise to new possibilities.
In this work we revisit the question of whether VLB remains optimal in the novel setting of reconfigurable networks. Prior work showed that VLB achieves the optimal tradeoff between latency and guaranteed throughput. In this work we show that a strictly superior latency-throughput tradeoff is achievable when the throughput bound is relaxed to hold with high probability. The same improved tradeoff is also achievable with guaranteed throughput under time-stationary demands, provided the latency bound is relaxed to hold with high probability and that the network is allowed to be semi-oblivious, using an oblivious (randomized) connection schedule but demand-aware routing. We prove that the latter result is not achievable by any fully-oblivious reconfigurable network design, marking a rare case in which semi-oblivious routing has a provable asymptotic advantage over oblivious routing. Our results are enabled by a novel oblivious routing scheme that improves VLB by stretching routing paths the minimum possible amount — an additive stretch of 1 rather than a multiplicative stretch of 2 — yet still manages to balance load with high probability when either the traffic demand matrix or the network’s interconnection schedule are shuffled by a uniformly random permutation. To analyze our routing scheme we prove an exponential tail bound which may be of independent interest, concerning the distribution of values of a bilinear form on an orbit of a permutation group action.

Publisher's Version
Lenzen’s Distributed Routing Generalized: A Full Characterization of Constant-Time Routability
Mohsen Ghaffari and Brandon Wang
(Massachusetts Institute of Technology, USA)
A celebrated and widely used result of Lenzen and Wattenhofer [STOC’11, PODC’13] gives a constant-round deterministic distributed routing algorithm for the complete-graph network: if each node is the source or destination of at most Θ(n) packets, all packets can be routed to their destinations in a constant number of rounds. We study generalizations of this result to arbitrary network graphs and show a necessary and sufficient condition for the network to route any such demand in constant rounds distributedly. One can easily see that, just for the existence of a constant-round routing for all such demands, it is necessary that any cut’s size, when normalized by the number of possible edges in that cut, is lower bounded by a positive constant. That is, for any partition of the nodes with exactly k ∈ [1, n/2] nodes on one side, the cut should have at least Θ(kn) edges. We call such a graph one with a positive minimum normalized cut, or a positive graph for short. We show that this necessary condition is also sufficient. In particular, by tightening the Leighton-Rao multicommodity max-flow min-cut theorem for positive graphs, we show the existence of a constant-round routing in positive graphs (assuming the network graph is known globally). Then, as the main technical contribution of this paper, we show that there is a deterministic distributed algorithm that computes such a constant-round routing in constant rounds in these graphs. This result allows us to vastly relax the conditions of the well-studied congested clique model of distributed computing: any distributed algorithm for the congested clique model can be run on any positive graph network, without any asymptotic slow-down. Our results are in fact more general: they give a distributed routing bound for any network, as a function of its minimum normalized cut size (without assuming it is a constant), within a polynomial of the relevant lower bound.
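A minimal brute-force sketch (our own illustration, feasible only for tiny graphs) of the quantity in question, the minimum normalized cut:

```python
from itertools import combinations

def min_normalized_cut(n, edges):
    """min over nonempty S with |S| <= n/2 of
    |E(S, V \\ S)| / (|S| * (n - |S|)): the cut size divided by the
    number of possible edges crossing it. Exponential time; for
    illustration on tiny graphs only."""
    best = 1.0
    for k in range(1, n // 2 + 1):
        for S in combinations(range(n), k):
            Sset = set(S)
            cut = sum(1 for u, v in edges if (u in Sset) != (v in Sset))
            best = min(best, cut / (k * (n - k)))
    return best

# K_5: every cut with k nodes on one side has k*(5-k) edges, so
# every normalized cut equals 1 and the graph is "positive".
K5 = [(u, v) for u in range(5) for v in range(u + 1, 5)]
print(min_normalized_cut(5, K5))  # 1.0
```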

Publisher's Version
Work-Efficient Parallel Derandomization II: Optimal Concentrations via Bootstrapping
Mohsen Ghaffari and Christoph Grunau
(Massachusetts Institute of Technology, USA; ETH Zurich, Switzerland)
In this paper, we present an efficient parallel derandomization method for randomized algorithms that rely on concentration bounds such as the Chernoff bound. This settles a classic problem in parallel derandomization, which dates back to the 1980s. Concretely, consider the set balancing problem, where m sets of size at most s are given in a ground set of size n, and we should partition the ground set into two parts such that each set is split evenly up to a small additive (discrepancy) bound. A random partition achieves a discrepancy of O(√(s log m)) in each set, by the Chernoff bound. We give a deterministic parallel algorithm that matches this bound, using near-linear work Õ(m + n + ∑_{i=1}^{m} |S_i|) and polylogarithmic depth poly(log(mn)). Previous results were weaker in the discrepancy and/or work bounds: Motwani, Naor, and Naor [FOCS’89] and Berger and Rompel [FOCS’89] achieve discrepancy s^ε · O(√(s log m)), for any constant ε > 0, with work Õ(m + n + ∑_{i=1}^{m} |S_i|) · m^{Θ(1/ε)} and polylogarithmic depth; the discrepancy was optimized to O(√(s log m)) in later work, e.g., by Harris [Algorithmica’19], but the work bound remained prohibitively high at Õ(m^4 n^3). Notice that these would require a large polynomial number of processors to even match the near-linear runtime of the sequential algorithm. Ghaffari, Grunau, and Rozhon [FOCS’23] achieve discrepancy s/poly(log(nm)) + O(√(s log m)) with near-linear work and polylogarithmic depth. Notice that this discrepancy is nearly quadratically larger than the desired bound and barely sublinear with respect to the trivial bound of s. Our method is different from prior work. It can be viewed as a novel bootstrapping mechanism that uses crude partitioning algorithms as a subroutine and sharpens their discrepancy to the optimal bound. In particular, we solve the problem recursively: we use the crude partition in each iteration to split the variables into many smaller parts, and then we find a constraint for the variables in each part such that we reduce the overall number of variables in the problem. The scheme relies crucially on an interesting application of the multiplicative weights update method to control the variance losses in each iteration. Our result applies to the much more general lattice approximation problem, thus providing an efficient parallel derandomization of the randomized rounding scheme for linear programs.
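The randomized baseline that the theorem matches deterministically is the following routine calculation (standard background, sketched here for completeness). Assign each ground-set element an independent uniform sign x_j ∈ {−1, +1}. For a set S_i with |S_i| ≤ s, the Chernoff–Hoeffding bound gives
\[
\Pr\Bigl[\,\bigl|\textstyle\sum_{j \in S_i} x_j\bigr| > t\,\Bigr] \;\le\; 2e^{-t^2/(2s)},
\]
so taking t = √(2s·ln(4m)) makes each failure probability at most 1/(2m), and a union bound over the m sets leaves every set split to within O(√(s log m)) with probability at least 1/2.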

Publisher's Version
No Distributed Quantum Advantage for Approximate Graph Coloring
Xavier Coiteux-Roy, Francesco d'Amore, Rishikesh Gajjala, Fabian Kuhn, François Le Gall, Henrik Lievonen, Augusto Modanese, Marc-Olivier Renou, Gustav Schmid, and Jukka Suomela
(TU Munich, Germany; Munich Center for Quantum Science and Technology, Germany; Aalto University, Finland; Bocconi University, Italy; Indian Institute of Science, India; University of Freiburg, Freiburg, Germany; Nagoya University, Nagoya, Japan; Inria, France; Université Paris-Saclay, France; Institut Polytechnique de Paris, France)
We give an almost complete characterization of the hardness of c-coloring χ-chromatic graphs with distributed algorithms, for a wide range of models of distributed computing. In particular, we show that these problems do not admit any distributed quantum advantage. To do that:
We give a new distributed algorithm that finds a c-coloring in χ-chromatic graphs in Õ(n^{1/α}) rounds, with α = ⌊(c−1)/(χ−1)⌋, and we prove that any distributed algorithm for this problem requires Ω(n^{1/α}) rounds (a concrete instantiation of this formula appears below).
Our upper bound holds in the classical, deterministic LOCAL model, while the near-matching lower bound holds in the non-signaling model. This model, introduced by Arfaoui and Fraigniaud in 2014, captures all models of distributed graph algorithms that obey physical causality; this includes not only classical deterministic LOCAL and randomized LOCAL but also quantum-LOCAL, even with a pre-shared quantum state.
We also show that similar arguments can be used to prove that, e.g., 3-coloring 2-dimensional grids or c-coloring trees remain hard problems even for the non-signaling model, and in particular do not admit any quantum advantage. Our lower-bound arguments are purely graph-theoretic at heart; no background on quantum information theory is needed to establish the proofs.
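To make the tradeoff concrete (our own arithmetic on the stated formula): for 3-coloring bipartite graphs we have χ = 2 and c = 3, so
\[
\alpha = \left\lfloor \frac{c-1}{\chi-1} \right\rfloor = \left\lfloor \frac{2}{1} \right\rfloor = 2,
\]
and the round complexity is n^{1/2} up to lower-order factors; allowing more colors, say c = 5, increases α to 4 and drops the complexity to roughly n^{1/4}.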

Publisher's Version
Optimal Communication Bounds for Classic Functions in the Coordinator Model and Beyond
Hossein Esfandiari, Praneeth Kacham, Vahab Mirrokni, David P. Woodruff, and Peilin Zhong
(Google, United Kingdom; Carnegie Mellon University, USA; Google Research, USA)
In the coordinator model of communication with s servers, given an arbitrary non-negative function f, we study the problem of approximating the sum ∑_{i∈[n]} f(x_i) up to a 1 ± ε factor. Here the vector x ∈ ℝ^n is defined to be x = x^{(1)} + ⋯ + x^{(s)}, where x^{(j)} ≥ 0 denotes the non-negative vector held by the j-th server. A special case of the problem is when f(x) = x^k, which corresponds to the well-studied problem of F_k moment estimation in the distributed communication model. We introduce a new parameter c_f[s] which captures the communication complexity of approximating ∑_{i∈[n]} f(x_i), and for a broad class of functions f, which includes f(x) = x^k for k ≥ 2 and other robust functions such as the Huber loss, we give a two-round protocol that uses total communication c_f[s]/ε^2 bits, up to polylogarithmic factors. For this broad class of functions, our result improves upon the communication bounds achieved by Kannan, Vempala, and Woodruff (COLT 2014) and Woodruff and Zhang (STOC 2012), obtaining the optimal communication up to polylogarithmic factors in the minimum number of rounds. We show that our protocol can also be used for approximating higher-order correlations. Our results are part of a broad framework for optimally sampling from a joint distribution in terms of the marginal distributions held on individual servers. Apart from the coordinator model, algorithms for other graph topologies in which each node is a server have been extensively studied. We argue that directly lifting protocols from the coordinator model to other graph topologies will require some nodes in the graph to send a lot of communication. Hence, a natural question is what type of problems can be efficiently solved in general graph topologies. We address this question by giving communication-efficient protocols in the so-called personalized CONGEST model for solving linear regression and low-rank approximation by designing composable sketches. Our sketch construction may be of independent interest and can implement any importance sampling procedure that has a monotonicity property.
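The quantity being approximated, stated concretely (problem setup only; this is not the paper's protocol, and the aggregation is centralized here purely for illustration):

```python
import numpy as np

def target_sum(server_vectors, f):
    """Coordinator-model target quantity: servers hold non-negative
    vectors x^(1), ..., x^(s); with x = x^(1) + ... + x^(s), the goal
    is a (1 +/- eps)-approximation of sum_i f(x_i) with little
    communication. Computed exactly and centrally here, purely to fix
    the object being approximated."""
    x = np.sum(server_vectors, axis=0)
    return float(np.sum(f(x)))

# F_2 moment estimation is the special case f(x) = x^2.
rng = np.random.default_rng(0)
servers = [rng.integers(0, 5, size=8) for _ in range(3)]
print(target_sum(servers, lambda x: x ** 2))
```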

Publisher's Version

11C

Sum-of-Squares Lower Bounds for Independent Set on Ultra-Sparse Random Graphs
Pravesh K. Kothari, Aaron Potechin, and Jeff Xu
(Carnegie Mellon University, USA; University of Chicago, USA)
We prove that for every D ∈ ℕ and every large enough constant d ∈ ℕ, with high probability over the choice of G ∼ G(n, d/n), the Erdős–Rényi random graph distribution, the canonical degree-2D Sum-of-Squares relaxation fails to certify that the largest independent set in G is of size o(n/(√d · D^4)). In particular, degree-D Sum-of-Squares strengthening can reduce the integrality gap of the classical theta SDP relaxation by at most an O(D^4) factor. This is the first lower bound for Sum-of-Squares (SoS) relaxations of degree greater than 4 for any problem on ultra-sparse random graphs (i.e., of average degree an absolute constant). Such ultra-sparse graphs were a known barrier for previous methods and were explicitly identified as a major open direction. Indeed, the only other example of an SoS lower bound on ultra-sparse random graphs was a degree-4 lower bound for Max-Cut. Our main technical result is a new method to obtain spectral norm estimates on graph matrices (a class of low-degree matrix-valued polynomials in G(n, d/n)) that are accurate to within an absolute constant factor. All prior works lose log n factors that trivialize any lower bound on random graphs of degree o(log n). We combine these new bounds with several upgrades to the machinery for analyzing lower-bound witnesses constructed by pseudo-calibration, so that our analysis does not lose any ω(1) factors that would trivialize our results. In addition to other SoS lower bounds, we believe that our methods for establishing spectral norm estimates on graph matrices will be useful in the analyses of numerical algorithms on average-case inputs.
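For orientation, the integrality gap in question can be quantified with standard facts about sparse random graphs (background we supply, not claims from the paper): with high probability the maximum independent set in G ∼ G(n, d/n) has size ≈ (2 ln d / d)·n, while the Lovász theta function, and hence the basic SDP bound, is Θ(n/√d). The basic relaxation thus overshoots by a factor of roughly √d/(2 ln d), and the result above says that degree-D SoS can shave at most an O(D^4) factor off this gap.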

Publisher's Version
Semidefinite Programming and Linear Equations vs. Homomorphism Problems
Lorenzo Ciardo and Stanislav Živný
(University of Oxford, United Kingdom)
We introduce a relaxation for homomorphism problems that combines semidefinite programming with linear Diophantine equations, and propose a framework for the analysis of its power based on the spectral theory of association schemes. We use this framework to establish an unconditional lower bound against the semidefinite programming + linear equations model, by showing that the relaxation does not solve the approximate graph homomorphism problem and thus, in particular, the approximate graph colouring problem.

Publisher's Version
How Random CSPs Fool Hierarchies
Siu On Chan, Hiu Tsun Ng, and Sijin Peng
(Unaffiliated, Hong Kong, China; Chinese University of Hong Kong, China; Tsinghua University, China)
Relaxations for the constraint satisfaction problem (CSP) include bounded width, linear programming (LP), semidefinite programming (SDP), affine integer programming (AIP), and the combined LP+AIP of Brakensiek, Guruswami, Wrochna, and Živný (SICOMP 2020). Tightening relaxations systematically leads to hierarchies and stronger algorithms. For the LP+AIP hierarchy, a constant-level lower bound for approximate graph coloring was given by Ciardo and Živný (STOC 2023). We prove the first linear-level (and hence optimal) lower bound for LP+AIP and its stronger variant, SDP+AIP. For each hierarchy, our bound holds for random instances of a broad class of CSPs that we call 𝜏-wise neutral. We extend to other hierarchies the LP lower bound techniques of Benabbas, Georgiou, Magen, and Tulsiani (ToC 2012) and Kothari, Mori, O’Donnell, and Witmer (STOC 2017), and simplify the SDP solution construction of the latter.

Publisher's Version
Swap Cosystolic Expansion
Yotam Dikstein and Irit Dinur
(Institute for Advanced Study, Princeton, USA; Weizmann Institute of Science, Israel)
We introduce and study swap cosystolic expansion, a new expansion property of simplicial complexes. We prove lower bounds for the swap coboundary expansion of spherical buildings and use them to lower bound the swap cosystolic expansion of the LSV Ramanujan complexes. Our motivation is the recent work (in a companion paper) showing that swap cosystolic expansion implies agreement theorems. Together, the two works show that these complexes support agreement tests in the low acceptance regime. We also study the closely related swap coboundary expansion. Swap cosystolic expansion is defined by considering, for a given complex X, its faces complex, whose vertices are the r-faces of X and where two vertices are connected if their disjoint union is also a face of X. The faces complex is a derandomization of the product of X with itself r times. The graph underlying the faces complex is the swap walk of X, known to have excellent spectral expansion. The swap cosystolic expansion of X is defined to be the cosystolic expansion of its faces complex. Our main result is an exp(−O(√r)) lower bound on the swap coboundary expansion of the spherical building and the swap cosystolic expansion of the LSV complexes. For more general coboundary expanders we show a weaker lower bound of exp(−O(r)).
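A minimal sketch (our own toy construction, not the paper's) of the swap-walk graph underlying the faces complex — vertices are the r-faces of X, adjacent when their disjoint union is again a face of X:

```python
from itertools import combinations

def swap_walk_graph(faces, r):
    """Vertices: the r-faces of a complex X (faces given as a set of
    frozensets, closed under taking subsets; here an r-face has r+1
    vertices). Edges: pairs of disjoint r-faces whose disjoint union
    is again a face of X."""
    r_faces = [f for f in faces if len(f) == r + 1]
    edges = [(a, b) for a, b in combinations(r_faces, 2)
             if not (a & b) and (a | b) in faces]
    return r_faces, edges

# Toy X: the complete complex on 6 vertices up to dimension 3 (all
# subsets of size <= 4), so 1-faces are pairs, adjacent iff disjoint.
faces = {frozenset(s) for k in range(1, 5)
         for s in combinations(range(6), k)}
verts, edges = swap_walk_graph(faces, 1)
print(len(verts), len(edges))  # 15 vertices, 45 edges
```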

Publisher's Version
Agreement Theorems for High Dimensional Expanders in the Low Acceptance Regime: The Role of Covers
Yotam Dikstein and Irit Dinur
(Institute for Advanced Study, Princeton, USA; Weizmann Institute of Science, Israel)
Let X be a family of k-element subsets of [n], and let {f_s : s → Σ : s ∈ X} be an ensemble of local functions, each defined over a subset s ⊂ [n]. Is there a global function G : [n] → Σ such that f_s = G|_s for all s ∈ X? An agreement test is a randomized property tester for this question.
One such test is the V-test, which chooses a random pair of sets s_1, s_2 ∈ X with a prescribed intersection size and accepts if f_{s_1} and f_{s_2} agree on the elements in s_1 ∩ s_2 (a minimal executable sketch of this test appears below).
The low acceptance (or 1%) regime is concerned with the situation where the test succeeds with low but non-negligible probability, Agree({f_s}) ≥ ε > 0. A “classical” low acceptance agreement theorem says

Agree({f_s}) > ε ⟹ ∃ G : [n] → Σ such that Pr_s[f_s ≈_{0.99} G|_s] ≥ poly(ε),   (*)

where f_s ≈_{0.99} G|_s means that f_s and G|_s agree on at least a 0.99 fraction of s.
Such statements are motivated by PCP questions. The case X = {s ⊆ [n] : |s| = k} is well studied and known as “direct product testing”, which is related to the parallel repetition theorem. Finding sparser families X that satisfy (*) is known as derandomized direct product testing. Prior to this work, the sparsest family satisfying (*) had |X| ≈ n^{25}; we exhibit X with |X| ≈ n^2.
We study the general behavior of high dimensional expanders with respect to agreement tests in the low acceptance regime. High dimensional expanders, even very sparse ones with |X|=O(n), are known to satisfy the high acceptance variant (where ε =1−o(1)). It has been an open challenge to analyze the low acceptance regime.
Surprisingly, topological covers of X play an important role. We show that:
If X has no connected covers, then (*) holds, provided that X satisfies an additional expansion property, called swap cosystolic expansion.
If X has a connected cover, then (*) fails.
If X has a connected cover (and swap cosystolic expansion), we replace (*) by a statement that takes covers into account:

Agree({f_s}) > ε ⟹ ∃ a cover ρ : Y ↠ X and G : Y(0) → Σ such that Pr_{s̃ ↠ s}[f_s ≈_{0.99} G|_{s̃}] ≥ poly(ε),   (**)

where s̃ ↠ s means that ρ(s̃) = s.
Our main result is a proof of (**) for complexes X whose links are spherical buildings.
The property of swap cosystolic expansion holds for quotients of Bruhat–Tits buildings. As a corollary, we derive (*) for X a spherical building, yielding a derandomized family with |X| ≈ n^2.
We also derive (**) for LSV complexes X, for which |X|=O(n).
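The V-test referenced above, as a minimal executable sketch (our own illustration; the family, the intersection size, and the rejection sampling are toy placeholders, not the paper's parameters):

```python
import random
from itertools import combinations

def v_test(X, f, trials=200, t=1):
    """V-test: repeatedly sample two sets s1, s2 in X with the
    prescribed intersection size t, and accept iff the local
    functions f[s1] and f[s2] agree on all of s1 & s2. Returns the
    empirical acceptance probability Agree({f_s})."""
    accepted, done = 0, 0
    while done < trials:
        s1, s2 = random.sample(X, 2)
        inter = s1 & s2
        if len(inter) != t:
            continue  # rejection-sample the intersection size
        done += 1
        if all(f[s1][x] == f[s2][x] for x in inter):
            accepted += 1
    return accepted / trials

# Toy instance: X = all 3-subsets of [6]; local functions are the
# restrictions of one global G, so the test accepts w.p. 1.
G = {i: i % 2 for i in range(6)}
X = [frozenset(s) for s in combinations(range(6), 3)]
f = {s: {x: G[x] for x in s} for s in X}
print(v_test(X, f))  # 1.0
```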

Publisher's Version
Characterizing Direct Product Testing via Coboundary Expansion
Mitali Bafna and Dor Minzer
(Massachusetts Institute of Technology, USA)
A d-dimensional simplicial complex X is said to support a direct product tester if any locally consistent function defined on its k-faces (where k ≪ d) necessarily comes from a function over its vertices. More precisely, a direct product tester has a distribution µ over pairs of k-faces (A, A′), and given query access to F : X(k) → {0,1}^k it samples (A, A′) ∼ µ and checks that F[A]|_{A∩A′} = F[A′]|_{A∩A′}. The tester should have (1) the “completeness property”, meaning that any assignment F which is a direct product assignment passes the test with probability 1, and (2) the “soundness property”, meaning that if F passes the test with probability s, then F must be correlated with a direct product function. Dinur and Kaufman showed that a sufficiently good spectrally expanding complex X admits a direct product tester in the “high soundness” regime, where s is close to 1. They asked whether there are high-dimensional expanders that support direct product tests in the “low soundness” regime, where s is close to 0. We give a characterization of high-dimensional expanders that support a direct product tester in the low soundness regime. We show that spectral expansion is insufficient, and the complex must additionally satisfy a variant of coboundary expansion, which we refer to as “Unique-Games coboundary expansion”. Conversely, we show that this property is also sufficient to get direct product testers. This property can be seen as a high-dimensional generalization of the standard notion of coboundary expansion over non-Abelian groups for 2-dimensional complexes. It asserts that any locally consistent Unique-Games instance obtained using the low-level faces of the complex must admit a good global solution.

Publisher's Version

11D

Symmetric Exponential Time Requires Near-Maximum Circuit Size
Lijie Chen, Shuichi Hirahara, and Hanlin Ren
(University of California at Berkeley, USA; National Institute of Informatics, Tokyo, Japan; University of Oxford, United Kingdom)
We show that there is a language in S_2E/1 (symmetric exponential time with one bit of advice) with circuit complexity at least 2^n/n. In particular, this also implies the same near-maximum circuit lower bounds for the classes Σ_2E, (Σ_2E ∩ Π_2E)/1, and ZPE^{NP}/1. Previously, only “half-exponential” circuit lower bounds for these complexity classes were known, and the smallest complexity class known to require exponential circuit complexity was Δ_3E = E^{Σ_2P} (Miltersen, Vinodchandran, and Watanabe COCOON’99).
Our circuit lower bounds are corollaries of an unconditional zero-error pseudodeterministic algorithm with an NP oracle and one bit of advice (FZPP^{NP}/1) that solves the range avoidance problem infinitely often. This algorithm also implies unconditional infinitely-often pseudodeterministic FZPP^{NP}/1 constructions for Ramsey graphs, rigid matrices, two-source extractors, linear codes, and K^{poly}-random strings with nearly optimal parameters. Our proofs relativize. The two main technical ingredients are (1) Korten’s P^{NP} reduction from the range avoidance problem to constructing hard truth tables (FOCS’21), which was in turn inspired by a result of Jeřábek on provability in Bounded Arithmetic (Ann. Pure Appl. Log. 2004); and (2) the recent iterative win-win paradigm of Chen, Lu, Oliveira, Ren, and Santhanam (FOCS’23).
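For context, the range avoidance problem that both ingredients revolve around, together with the counting argument that makes it total (standard background, not specific to this paper):
\[
\textsc{Avoid}:\ \text{given a circuit } C : \{0,1\}^n \to \{0,1\}^{n+1},\ \text{output some } y \notin \mathrm{Range}(C).
\]
Since |Range(C)| ≤ 2^n < 2^{n+1}, a solution always exists, and a uniformly random y avoids the range with probability at least 1/2; the entire difficulty, and the source of the circuit lower bounds above, is producing such a y deterministically (here, single-valuedly with an NP oracle and one bit of advice).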

Publisher's Version
Symmetric Exponential Time Requires Near-Maximum Circuit Size: Simplified, Truly Uniform
Zeyong Li
(National University of Singapore, Singapore)
In a recent breakthrough, Chen, Hirahara, and Ren prove that S_2E/1 ⊄ SIZE[2^n/n] by giving a single-valued FS_2P algorithm for the Range Avoidance Problem (Avoid) that works for infinitely many input sizes n. Building on their work, we present a simple single-valued FS_2P algorithm for Avoid that works for all input sizes n. As a result, we obtain the circuit lower bound S_2E ⊄ i.o.-SIZE[2^n/n] and many other corollaries: 1. An almost-everywhere near-maximum circuit lower bound for Σ_2E ∩ Π_2E and ZPE^{NP}. 2. Pseudodeterministic FZPP^{NP} constructions for combinatorial objects such as Ramsey graphs, rigid matrices, pseudorandom generators, two-source extractors, linear codes, hard truth tables, and K^{poly}-random strings.

Publisher's Version
Random (log 𝑛)-CNF Are Hard for Cutting Planes (Again)
Dmitry Sokolov
(EPFL, Lausanne, Switzerland)
The random Δ-CNF model is one of the most important distributions over Δ-SAT instances. It is closely connected to various areas of computer science and statistical physics, and serves as a benchmark for satisfiability algorithms. Fleming, Pankratov, Pitassi, and Robere, and independently Hrubeš and Pudlák, showed that when Δ = Θ(log n), any Cutting Planes proof for a random Δ-CNF on n variables requires size 2^{n/polylog(n)} in the regime where the number of clauses guarantees that the formula is unsatisfiable with high probability. In this paper we show a tight lower bound of 2^{Ω(n)} on the size of Cutting Planes proofs for random (log n)-CNF formulas. Moreover, our proof is much simpler and self-contained, in contrast with previous results based on Jukna’s lower bound for monotone circuits.

Publisher's Version
Hardness Condensation by Restriction
Mika Göös, Ilan Newman, Artur Riazanov, and Dmitry Sokolov
(EPFL, Lausanne, Switzerland; University of Haifa, Israel)
Can every n-bit boolean function with deterministic query complexity k ≪ n be restricted to O(k) variables such that the query complexity remains Ω(k)? That is, can query complexity be condensed via restriction? We study such hardness condensation questions in both query and communication complexity, proving two main results. Negative: query complexity cannot be condensed in general: there is a function f with query complexity k such that any restriction of f to O(k) variables has query complexity O(k^{3/4}). Positive: randomised communication complexity can be condensed for the sink-of-xor function. This yields a quantitatively improved counterexample to the log-approximate-rank conjecture, achieving parameters conjectured by Chattopadhyay, Garg, and Sherif (2021). Along the way we show the existence of Shearer extractors — a new type of seeded extractor whose output bits satisfy prescribed dependencies across distinct seeds.

Publisher's Version
Explicit Codes for Poly-Size Circuits and Functions That Are Hard to Sample on Low Entropy Distributions
Ronen Shaltiel and Jad Silbak
(University of Haifa, Israel; Northeastern University, USA)
Codes for poly-size circuits: Guruswami and Smith (J. ACM 2016) considered codes for channels that are poly-size circuits which modify at most a p-fraction of the bits of the codeword. This class of channels is significantly stronger than Shannon’s binary symmetric channel (BSC), but weaker than Hamming’s channels, which are computationally unbounded. The goal of this direction is to construct explicit codes (namely, codes with poly-time encoding and decoding algorithms) with rate R(p) = 1 − H(p), matching the capacity of the BSC and beating the capacity of codes for Hamming’s channels. This goal implies circuit lower bounds, and specifically that E = DTIME(2^{O(n)}) does not have poly-size circuits (and therefore explicit constructions need to be based on hardness assumptions). We give the first explicit construction of such codes for poly-size channels. Specifically, for every 0 ≤ p < 1/4, there are explicit codes with rate R(p) = 1 − H(p), assuming E does not have size 2^{Ω(n)} nondeterministic circuits. This hardness assumption was introduced in the context of hardness vs. randomness tradeoffs, and is by now standard in complexity theory. Our result builds on, and improves, the previous work of Guruswami and Smith, and of Shaltiel and Silbak (FOCS 2022); these works gave a randomized Monte-Carlo construction, rather than explicit codes. Functions that are hard to sample on low entropy distributions: A key component in our codes (which may be of independent interest) is a new complexity-theoretic notion of hard to sample functions (HTS): we say that a function f on n bits is an HTS for circuits of size n^c if there exists a constant c′ > c such that for every randomized circuit A of size n^c that samples a distribution (X, Y) with H_∞(X) ≥ c′ · log n, it holds that Pr[Y = f(X)] ≤ 1/n^c. This is inspired by works of Viola on the complexity of distributions (SICOMP 2012, 2020), in which X is the uniform distribution. Here, we allow A to choose any distribution X (except for distributions X with very low min-entropy), and note that a circuit A of size n^c may be hardwired with ≈ n^c outputs of f, and can therefore easily produce pairs (X, f(X)) for a distribution X with H_∞(X) ≈ c · log n. Building on classical works on “hardness amplification” (and using many additional tools and ideas from pseudorandomness), we show that if E does not have size 2^{Ω(n)} nondeterministic circuits, then for every constant c, there is an HTS that is computable in time poly(n^c). Our codes are obtained by using our HTS (as well as additional tools and ideas) to achieve explicit constructions (under the hardness assumption) of several components in the code of Shaltiel and Silbak, replacing previously obtained randomized Monte-Carlo constructions of these components. We then need to revisit the codes of Shaltiel and Silbak, and significantly modify the construction and analysis, so that they work with the weaker components that we are able to explicitly construct.

Publisher's Version
Opening Up the Distinguisher: A Hardness to Randomness Approach for BPL=L That Uses Properties of BPL
Dean Doron, Edward Pyne, and Roei Tell
(Ben-Gurion University of the Negev, Israel; Massachusetts Institute of Technology, USA; University of Toronto, Canada)
We provide compelling evidence for the potential of hardness-vs.-randomness approaches to make progress on the long-standing problem of derandomizing space-bounded computation. Our first contribution is a derandomization of bounded-space machines from hardness assumptions for classes of uniform deterministic algorithms, for which strong (but non-matching) lower bounds can be unconditionally proved. We prove one such result for showing that BPL=L “on average”, and another similar result for showing that BPSPACE[O(n)]=DSPACE[O(n)]. Next, we significantly improve the main results of prior works on hardness-vs.-randomness for logspace. As one of our results, we relax the assumptions needed for derandomization with minimal memory footprint (i.e., showing BPSPACE[S]⊆ DSPACE[c · S] for a small constant c), by completely eliminating a cryptographic assumption that was needed in prior work. A key contribution underlying all of our results is non-black-box use of the descriptions of space-bounded Turing machines, when proving hardness-to-randomness results. That is, the crucial point allowing us to prove our results is that we use properties that are specific to space-bounded machines.

Publisher's Version
