Show simple item record

dc.contributor.author: Jin, Ce
dc.contributor.author: Xu, Yinzhan
dc.date.accessioned: 2024-07-18T15:41:14Z
dc.date.available: 2024-07-18T15:41:14Z
dc.date.issued: 2024-06-10
dc.identifier.isbn: 979-8-4007-0383-6
dc.identifier.uri: https://hdl.handle.net/1721.1/155705
dc.description: STOC ’24, June 24–28, 2024, Vancouver, BC, Canada
dc.description.abstract: In sparse convolution-type problems, a common technique is to hash the input integers modulo a random prime p ∈ [Q/2, Q] for some parameter Q, which reduces the range of the input integers while preserving their additive structure. However, this hash family suffers from two drawbacks, which led to bottlenecks in many state-of-the-art algorithms: (1) the collision probability of two elements from [N] is O((log N)/Q) rather than O(1/Q); (2) it is difficult to derandomize the choice of p; known derandomization techniques lead to super-logarithmic overhead [Chan, Lewenstein STOC’15]. In this paper, we partially overcome these drawbacks in certain scenarios, via novel applications of the large sieve inequality from analytic number theory. Consequently, we obtain the following improved algorithms for various problems (in the standard word RAM model):

Sparse Nonnegative Convolution: We obtain an O(t log t)-time Las Vegas algorithm that computes the convolution A ⋆ B of two nonnegative integer vectors A, B, where t is the output sparsity ‖A ⋆ B‖₀. Moreover, our algorithm terminates in O(t log t) time with 1 − 1/poly(t) probability. This simultaneously improves the O(t log t log log t)-time Las Vegas algorithm [Bringmann, Fischer, Nakos SODA’22] and the Monte Carlo O(t log t)-time algorithm with failure probability 2^{−√(log t)} [Bringmann, Fischer, Nakos STOC’21].

Text-to-Pattern Hamming Distances: Given a length-m pattern P and a length-n text T, we obtain an O(n √(m log log m))-time deterministic algorithm that exactly computes the Hamming distance between P and every length-m substring of T. This improves the previous O(n √m (log m log log m)^{1/4})-time deterministic algorithm [Chan, Jin, Vassilevska Williams, Xu FOCS’23] and nearly matches their O(n √m)-time Las Vegas algorithm.

Sparse General Convolution: For sparse convolution with possibly negative input, all previous approaches required Ω(t log² t) time, where t is the maximum of input and output sparsity, and an important question left open by [Bringmann, Fischer, Nakos STOC’21] is whether this can be improved. We make partial progress towards solving this question by giving a Monte Carlo O(t log t)-time algorithm in the restricted case where the length N of the input vectors satisfies N ≤ t^{1.99}.
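The abstract's starting point — hashing integers modulo a random prime p ∈ [Q/2, Q] — can be illustrated with a minimal sketch. This is not the paper's algorithm; the parameter `Q`, the sample values, and the helper functions are illustrative choices. The sketch shows the two properties the abstract relies on: the hash preserves additive structure (h(a) + h(b) ≡ h(a + b) mod p), and two distinct inputs collide only when p divides their difference, which is the source of the O((log N)/Q) collision bound.

```python
import random

def is_prime(n):
    # Simple trial-division primality test (fine for small illustrative Q).
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def random_prime(Q):
    # Sample a uniformly random prime p in [Q/2, Q] by rejection sampling.
    while True:
        p = random.randint(Q // 2, Q)
        if is_prime(p):
            return p

Q = 1000
p = random_prime(Q)

# Additive structure is preserved: if a + b = c, then
# (a mod p) + (b mod p) ≡ c (mod p).
a, b = 123456, 654321
assert (a % p + b % p) % p == (a + b) % p

# Two distinct inputs x != y collide exactly when p divides x - y.
# Since x - y < N has at most O(log N / log Q) prime factors in [Q/2, Q],
# a random p collides with probability O((log N)/Q) -- the drawback (1)
# that the paper's large-sieve technique addresses.
x, y = 999, 999 + p  # deliberately constructed collision
assert x % p == y % p
```

A usage note: in sparse convolution algorithms this hash maps the (possibly huge) support of the input vectors into a range of size about Q, after which a dense convolution of length O(Q) can be computed cheaply; the collision probability governs how large Q must be chosen.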
dc.publisher: ACM | Proceedings of the 56th Annual ACM Symposium on Theory of Computing
dc.relation.isversionof: https://doi.org/10.1145/3618260.3649605
dc.rights: Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.
dc.source: Association for Computing Machinery
dc.title: Shaving Logs via Large Sieve Inequality: Faster Algorithms for Sparse Convolution and More
dc.type: Article
dc.identifier.citation: Jin, Ce and Xu, Yinzhan. 2024. "Shaving Logs via Large Sieve Inequality: Faster Algorithms for Sparse Convolution and More."
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.identifier.mitlicense: PUBLISHER_POLICY
dc.eprint.version: Final published version
dc.type.uri: http://purl.org/eprint/type/ConferencePaper
eprint.status: http://purl.org/eprint/status/NonPeerReviewed
dc.date.updated: 2024-07-01T07:46:24Z
dc.language.rfc3066: en
dc.rights.holder: The author(s)
dspace.date.submission: 2024-07-01T07:46:24Z
mit.license: PUBLISHER_POLICY
mit.metadata.status: Authority Work and Publication Information Needed

