Electrical Engineering and Computer Sciences - Ph.D. / Sc.D.
http://hdl.handle.net/1721.1/7815
Tue, 28 Mar 2017 08:16:38 GMT
Graphical model driven methods in adaptive system identification
http://hdl.handle.net/1721.1/107499
Graphical model driven methods in adaptive system identification
Yellepeddi, Atulya
Identifying and tracking an unknown linear system from observations of its inputs and outputs is a problem at the heart of many different applications. Due to the complexity and rapid variability of modern systems, there is extensive interest in solving the problem with as little data and computation as possible. This thesis introduces the novel approach of reducing problem dimension by exploiting statistical structure on the input. By modeling the input to the system of interest as a graph-structured random process, it is shown that a large parameter identification problem can be reduced to several smaller pieces, making the overall problem considerably simpler. Algorithms that leverage this property to either improve the performance or reduce the computational complexity of the estimation problem are developed. The first of these, termed the graphical expectation-maximization least squares (GEM-LS) algorithm, can utilize the reduced-dimensional problems induced by the structure to improve the accuracy of system identification in the low-sample regime over conventional methods for linear learning with limited data, including regularized least squares methods. Next, a relaxation of the GEM-LS algorithm termed the relaxed approximate graph structured least squares (RAGS-LS) algorithm is obtained that exploits structure to perform highly efficient estimation. The RAGS-LS algorithm is then recast into a recursive framework termed the relaxed approximate graph structured recursive least squares (RAGS-RLS) algorithm, which can be used to track time-varying linear systems with low complexity while achieving tracking performance comparable to much more computationally intensive methods. The performance of the algorithms developed in the thesis in applications such as channel identification, echo cancellation, and adaptive equalization demonstrates that the gains admitted by the graph framework are realizable in practice.
The methods have wide applicability, and in particular show promise as the estimation and adaptation algorithms for a new breed of fast, accurate underwater acoustic modems. The contributions of the thesis illustrate the power of graphical model structure in simplifying difficult learning problems, even when the target system is not directly structured.
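The thesis's GEM-LS and RAGS-RLS algorithms are not reproduced here, but the baseline they are measured against, exponentially weighted recursive least squares (RLS) for identifying and tracking an unknown linear (FIR) system from input/output data, can be sketched as follows. The function name, filter order, forgetting factor, and initialization are illustrative choices, not values from the thesis.

```python
import numpy as np

def rls_identify(x, d, order=4, lam=0.99, delta=100.0):
    """Estimate the taps of an unknown FIR system from its input x and
    observed output d using exponentially weighted recursive least squares."""
    w = np.zeros(order)           # running estimate of the system taps
    P = delta * np.eye(order)     # inverse of the input correlation matrix
    for n in range(order - 1, len(x)):
        u = x[n - order + 1:n + 1][::-1]    # regressor, most recent sample first
        k = P @ u / (lam + u @ P @ u)       # gain vector
        e = d[n] - w @ u                    # a priori prediction error
        w = w + k * e                       # tap update
        P = (P - np.outer(k, u @ P)) / lam  # update inverse correlation
    return w
```

Choosing lam < 1 forgets old data and allows the filter to track a time-varying system; the RAGS-RLS algorithm described above targets comparable tracking accuracy at lower cost by exploiting the graph structure of the input.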
Thesis: Ph. D., Joint Program in Applied Ocean Science and Engineering (Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science; and the Woods Hole Oceanographic Institution), 2016.; This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 209-225).
Fri, 01 Jan 2016 00:00:00 GMT
Games, protocols, and quantum entanglement
http://hdl.handle.net/1721.1/107364
Games, protocols, and quantum entanglement
Yuen, Henry, Ph. D. Massachusetts Institute of Technology
Quantum entanglement has evolved from being "spooky action at a distance" to being a fundamental information-theoretic resource, extending the frontiers of what is possible in communications, computation, and cryptography. It gives rise to non-local correlations that can be harnessed to perform tasks such as certified randomness generation and classical verification of quantum computation. However, these same non-local correlations also pose a challenge when analyzing complexity-theoretic or cryptographic protocols in a quantum world: the soundness or security of the protocol may no longer hold in the presence of entangled adversaries. This thesis presents several results involving games and protocols with entangled parties; in each result, we introduce new techniques and methods to analyze soundness against adversaries that can manipulate quantum entanglement. First, we present a protocol wherein a classical verifier interacts with eight non-communicating quantum devices, and for every integer N the verifier can statistically certify that the devices have produced N bits of randomness that are ε-close to uniform, while only using O(log³(1/ε)) bits of seed randomness. We call this an infinite randomness expansion protocol, because the amount N of certified output randomness is independent of the verifier's seed length. Entanglement is both a blessing and a curse for this protocol: on one hand, the devices need entanglement in order to successfully generate randomness and pass the protocol. On the other hand, the devices may try to use entanglement to cheat and pass the protocol without producing additional randomness. We show that the monogamous nature of entanglement prevents this from happening. Next, this thesis studies the parallel repetition of games with entangled players.
Raz's classical parallel repetition theorem (SICOMP 1998) is an influential result in complexity theory showing that the maximum success probability of unentangled players in a two-player game must decrease exponentially when the game is repeated in parallel. Its proof is highly non-trivial, and a major open question is whether it extends to the case of entangled players. We make progress on this question in several ways. First, we present an efficient transformation on games called "anchoring" that converts any k-player game G into a k-player game G^⊥ such that the entangled value of its n-fold parallel repetition, (G^⊥)^n, is exponentially small in n (provided that the entangled value of G is less than 1). Furthermore, the transformation is completeness preserving: if the entangled value of G is 1, then the entangled value of (G^⊥)^n is also 1. This yields the first gap amplification procedure for general entangled games that achieves exponential decay. We also show that parallel repetition of a game causes the entangled value to decrease at a polynomial rate with the number of repetitions. In particular, this gives the first proof that the entangled value of a parallel repeated game converges to 0 for all games whose entangled value is less than 1. The third result of this thesis on entangled parallel repetition is an improved analysis of the parallel repetition of free games with entangled players. Free games are those where the players' questions are independent of each other. We show how to use the fact that the DISJOINTNESS problem of size N can be solved with O(√N) qubits of quantum communication to speed up the rate of decay of the parallel repetition: given a free game G with entangled value 1 - ε, its n-fold parallel repetition G^n has entangled value at most (1 - ε^(3/2))^Ω(n/s), where s is the length of the players' answers in G.
In contrast, the best parallel repetition theorem for free games with unentangled players, due to Barak et al. (RANDOM 2009), shows that for a free game G with classical value 1 - ε, the classical value of G^n is at most (1 - ε²)^Ω(n/s), which is a slower rate of decay. This suggests a separation between the behavior of entangled games and unentangled games under parallel repetition. In the final part of this thesis, we examine message authentication in a quantum world. Message authentication is a fundamental task in cryptography that ensures data integrity when communicating over an insecure channel. We consider two settings: classical authentication against quantum attacks, and total quantum authentication of quantum data. We give a new class of security definitions for both modes of message authentication. Our definitions capture and strengthen several existing definitions, including that of Boneh-Zhandry (EUROCRYPT 2013), which pertains to superposition attacks on classical authentication schemes, as well as the definition of Barnum et al. (FOCS 2002), which addresses total authentication of quantum data. Our definitions give strong characterizations of what a quantum adversary is able to do in a message authentication protocol, even when the adversary has quantum side information that is entangled with the message state. We argue that, in the "one time" setting, our definitions are the strongest possible. We prove that our security definition for total quantum authentication has some surprising implications, such as the ability to reuse the key whenever verification is successful, and a conceptually simple quantum key distribution protocol.
We then give several constructions of protocols that satisfy our security definitions: (1) we show that the classical Wegman-Carter scheme with 3-universal hashing is secure against quantum adversaries with quantum side information; (2) we present a protocol based on unitary designs that achieves total quantum authentication; and (3) we show that using the classical Wegman-Carter scheme to authenticate in complementary bases yields a form of total quantum authentication, with bounded key leakage.
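The behavior of games under parallel repetition can be illustrated, in the much simpler classical (unentangled) setting, by brute-forcing all deterministic strategies for the CHSH game and for its two-fold parallel repetition. This toy computation is ours, not from the thesis; the entangled case analyzed above is far harder precisely because strategies range over shared quantum states rather than finite function tables.

```python
from itertools import product

def chsh_wins(x, y, a, b):
    # CHSH predicate: players win iff a XOR b == x AND y
    return (a ^ b) == (x & y)

def classical_value():
    # deterministic strategies are functions f, g : {0,1} -> {0,1}
    best = 0.0
    for f in product((0, 1), repeat=2):
        for g in product((0, 1), repeat=2):
            wins = sum(chsh_wins(x, y, f[x], g[y])
                       for x in (0, 1) for y in (0, 1))
            best = max(best, wins / 4)
    return best

def classical_value_2fold():
    # two-fold parallel repetition: each player sees a question PAIR and
    # answers a PAIR; the players win only if both coordinates are won
    pairs = list(product((0, 1), repeat=2))
    best = 0.0
    for f in product(pairs, repeat=4):       # Alice: question pair -> answer pair
        for g in product(pairs, repeat=4):   # Bob: question pair -> answer pair
            wins = 0
            for i, (x1, x2) in enumerate(pairs):
                for j, (y1, y2) in enumerate(pairs):
                    if (chsh_wins(x1, y1, f[i][0], g[j][0]) and
                            chsh_wins(x2, y2, f[i][1], g[j][1])):
                        wins += 1
            best = max(best, wins / 16)
    return best
```

Parallel repetition theorems bound how fast such values decay with the number of repetitions n; the subtlety, classically and with entanglement alike, is that the repeated value can exceed the naive product val(G)^n, because a player's answer in one coordinate may depend on its questions in the others.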
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2016.; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 177-184).
Fri, 01 Jan 2016 00:00:00 GMT
Amplifier and data converter techniques for low power sensor interfaces
http://hdl.handle.net/1721.1/107360
Amplifier and data converter techniques for low power sensor interfaces
Yaul, Frank M
Sensor interface circuits are integral components of wireless sensor nodes, and improvements to their energy efficiency help enable long-term medical and industrial monitoring applications. This thesis explores both analog and algorithmic energy-saving techniques in the sensor interface signal chain. First, a data-dependent successive-approximation algorithm is developed and demonstrated in a low-power analog-to-digital converter (ADC) implementation. When averaged over many samples, the energy per conversion and the number of bit-cycles per conversion used by this algorithm both scale logarithmically with the activity of the input signal, with each N-bit conversion using between 2 and 2N+1 bit-cycles, compared to N for conventional binary successive approximation. This algorithm reduces ADC power consumption when sampling signals with low mean activity, and its effectiveness is demonstrated on an electrocardiogram signal. With a 0.6 V supply, the 10-bit ADC test chip has a maximum sample rate of 16 kHz and an effective number of bits (ENOB) of 9.73. The ADC's Walden figure of merit (FoM) ranges from 3.5 to 20 fJ/conversion-step depending on the input signal activity. Second, an ultra-low-supply-voltage amplifier stage is developed and used to create an energy-efficient low-noise instrumentation amplifier (LNIA). This chopper LNIA uses a 0.2 V-supply inverter-based input stage followed by a 0.8 V-supply folded-cascode common-source stage. The high input-stage current needed to reduce the input-referred noise is drawn from the 0.2 V supply, significantly reducing power consumption. The 0.8 V stage provides high gain and signal swing, improving linearity. Biasing and common-mode rejection techniques for the 0.2 V stage are also presented. The analog front-end (AFE) test chip incorporating the chopper LNIA achieves a power efficiency factor (PEF) of 1.6 with an input noise of 0.94 μV RMS, integrated from 0.5 to 670 Hz. Human biopotential signals are measured using the AFE.
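The chip's exact bit-cycling logic is not reproduced here, but the flavor of a data-dependent successive approximation search can be sketched as follows: rather than always binary-searching the full code range, the converter starts from the previous output code and widens a window around it until the new sample is bracketed, so quiet inputs cost few comparator cycles. The widening strategy and cycle accounting below are our illustrative guesses, not the thesis's design.

```python
def sar_binary(sample, nbits=10):
    # conventional binary-search SAR: always uses nbits comparator cycles
    code = 0
    for bit in range(nbits - 1, -1, -1):
        trial = code | (1 << bit)
        if sample >= trial:
            code = trial
    return code, nbits

def sar_data_dependent(sample, prev_code, nbits=10):
    full = 1 << nbits
    lo, hi = prev_code, prev_code + 1
    step, cycles = 1, 0
    # phase 1: widen a window around the previous code until it brackets
    # the new sample; cost grows with how far the signal has moved
    while not (lo <= sample < hi):
        cycles += 2                        # test both window edges
        lo = max(0, prev_code - step)
        hi = min(full, prev_code + step)
        step <<= 1
    # phase 2: ordinary binary search inside the bracketed window
    while hi - lo > 1:
        cycles += 1
        mid = (lo + hi) // 2
        if sample >= mid:
            lo = mid
        else:
            hi = mid
    return lo, cycles
```

On a slowly varying signal such as an electrocardiogram baseline, most conversions land near the previous code and finish in a handful of cycles, which is the mechanism behind the activity-dependent energy scaling described above.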
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2016.; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 151-157).
Fri, 01 Jan 2016 00:00:00 GMT
MSL : a synthesis enabled language for distributed high performance computing implementations
http://hdl.handle.net/1721.1/107359
MSL : a synthesis enabled language for distributed high performance computing implementations
Xu, Zhilei
SPMD-style (single program, multiple data) parallel programming, usually done with MPI, is dominant in high-performance computing on distributed-memory machines. This thesis outlines a new methodology to aid the development of SPMD-style high-performance programs. The methodology is supported by a new language called MSL, which combines ideas from generative programming and software synthesis to simplify the development process and to let programmers package complex implementation strategies behind clean, high-level, reusable abstractions. The thesis presents the key new language features of MSL and new analyses that support synthesis and equivalence checking for SPMD-style programs, together with empirical evaluations of the methodology.
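MSL itself is not shown here; as minimal context, the SPMD pattern it targets can be imitated in plain Python by running the same "program" once per simulated rank, each rank owning a slice of the data, then combining partial results the way a reduction would. The rank count and partition scheme below are arbitrary choices for illustration.

```python
def spmd_dot(xs, ys, nprocs=4):
    """Single program, multiple data: every 'rank' runs the same local
    computation on its own partition, then the results are combined."""
    def rank_program(rank):
        # block-cyclic partition: rank r owns elements r, r+P, r+2P, ...
        return sum(x * y for x, y in zip(xs[rank::nprocs], ys[rank::nprocs]))
    partials = [rank_program(r) for r in range(nprocs)]  # ranks run "in parallel"
    return sum(partials)                                 # stands in for MPI_Allreduce
```

In a real MPI program each rank would execute in its own process and the final sum would be an MPI_Allreduce; checking that such a distributed implementation is equivalent to a simple sequential specification is exactly the kind of reasoning MSL's synthesis and equivalence-checking machinery is meant to automate.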
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2016.; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 131-139).
Fri, 01 Jan 2016 00:00:00 GMT