DSpace@MIT


Better Hardness via Algorithms, and New Forms of Hardness versus Randomness

Author(s)
Chen, Lijie
Download: Thesis PDF (2.336 MB)
Advisor
Williams, Ryan
Terms of use
In Copyright - Educational Use Permitted Copyright MIT http://rightsstatements.org/page/InC-EDU/1.0/
Abstract
One central theme of complexity theory is the rich interplay between hardness (the existence of functions that are hard to compute) and pseudorandomness (procedures that convert randomized algorithms into equivalent deterministic algorithms). In one direction, from the classic works of Nisan-Wigderson and Impagliazzo-Wigderson, we know that certain hardness hypotheses (circuit lower bounds) imply that all randomized algorithms can be derandomized with polynomial overhead. In the other direction, a decade ago, Williams proved that certain circuit lower bounds follow from non-trivial derandomization. In this thesis we establish many new connections between hardness and pseudorandomness, strengthening and refining the classic works mentioned above.
  • New circuit lower bounds from non-trivial derandomization. Following Williams' algorithmic method, we prove several new circuit lower bounds using various non-trivial derandomization algorithms, including almost-everywhere and strongly average-case lower bounds against ACC0 circuits and a new construction of rigid matrices.
  • Superfast and non-black-box derandomization from plausible hardness assumptions. Under plausible hardness hypotheses, we obtain almost optimal worst-case derandomization of both randomized algorithms and constant-round Arthur-Merlin protocols. We also propose a new framework for non-black-box derandomization and demonstrate its usefulness by showing that (1) it connects derandomization to a new type of hardness assumption against uniform algorithms, and (2) from plausible assumptions, it yields derandomization of both randomized algorithms and constant-round doubly efficient proof systems with almost no overhead, such that no polynomial-time adversary can find a mistake.
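To illustrate the hardness-to-pseudorandomness direction concretely: derandomization with polynomial overhead amounts to replacing a randomized algorithm's true random bits with all outputs of a pseudorandom generator (PRG) of logarithmic seed length, then taking a majority vote over the seeds. The sketch below is only illustrative; the `prg` function here is a hypothetical stand-in (a real Nisan-Wigderson generator is built from the truth table of a hard function), and `randomized_alg` is an assumed black-box decision procedure.

```python
from itertools import product

def prg(seed, n):
    """Hypothetical stand-in PRG: expands a short seed to n bits by tiling.
    This is NOT pseudorandom; it only fixes the interface for the sketch."""
    return [seed[i % len(seed)] for i in range(n)]

def derandomize(randomized_alg, x, seed_len, n):
    """Deterministic simulation: run randomized_alg(x, r) on the PRG output
    for every seed in {0,1}^seed_len and return the majority answer.
    With seed_len = O(log n), the 2^seed_len enumeration is polynomial in n."""
    accepting = 0
    total = 0
    for seed in product([0, 1], repeat=seed_len):
        r = prg(list(seed), n)
        if randomized_alg(x, r):
            accepting += 1
        total += 1
    return 2 * accepting > total  # majority vote over all seeds

# Toy usage: a "randomized" test whose answer ignores its random bits.
result = derandomize(lambda x, r: x % 2 == 0, 10, seed_len=4, n=16)
```

The point of the sketch is the quantifier structure: the simulation is black-box in `randomized_alg`, which is exactly the setting the thesis's non-black-box framework departs from.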
Date issued
2022-09
URI
https://hdl.handle.net/1721.1/147560
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology

Collections
  • Doctoral Theses

Content created by the MIT Libraries, CC BY-NC unless otherwise noted. Notify us about copyright concerns.