Theses - Electrical Engineering and Computer Sciences
http://hdl.handle.net/1721.1/7814
2015-08-07T03:24:48Z
Physical limitations on free-field microphone calibration
http://hdl.handle.net/1721.1/97951
Physical limitations on free-field microphone calibration
Cox, Jerome R
Thesis (Sc. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering, 1954.; Vita.; Includes bibliographical references (leaves 200-204).
1954-01-01T00:00:00Z
Distribution network use-of-system charges under high penetration of distributed energy resources
http://hdl.handle.net/1721.1/97942
Distribution network use-of-system charges under high penetration of distributed energy resources
Bharatkumar, Ashwini
Growing integration of distributed energy resources (DER) presents the electric power sector with the potential for significant changes to technical operations, business models, and industry structure. New physical components, control and information architecture, markets, and policies are required as the power system transitions from one of centralized generation and passive load to a network of increasingly decentralized generation and diverse system users. Price signals will play a crucial role in shaping the interactions between the physical components and users of the electric power system. Distribution network use-of-system (DNUoS) charges signal to network users how their utilization of the distribution system impacts system costs and each user's share of those costs. Distribution utilities cover network operation and maintenance costs and recover infrastructure investments through DNUoS charges applied to network users. This thesis develops a framework for the design of DNUoS charges that addresses the challenge of distribution network cost allocation under growing penetration of DER. The proposed framework comprises 1) the use of a reference network model (RNM) to identify the key drivers of distribution system costs and their relative shares of total costs, and 2) the allocation of those costs according to network utilization profiles that capture each network user's contribution to and share of total system costs. The resulting DNUoS charges are highly differentiated among network users according to the impact that network use behaviors have on system costs. This is a substantial departure from existing methods of distribution network cost allocation and thus presents implementation challenges and implications that may be addressed in a range of ways to achieve varying regulatory objectives.
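The cost-allocation step described in the abstract can be illustrated with a minimal sketch. The function name, user labels, and numbers below are hypothetical, and proportional allocation by demand at the coincident system peak is only one simple form of utilization-based charging, not the thesis's full framework.

```python
# Minimal sketch (hypothetical numbers): splitting a known network cost
# across users in proportion to each user's demand (kW) at the
# coincident system peak, a simple utilization-based DNUoS charge.

def allocate_network_cost(total_cost, peak_contributions):
    """Allocate total_cost proportionally to each user's peak-coincident demand."""
    total_peak = sum(peak_contributions.values())
    return {user: total_cost * kw / total_peak
            for user, kw in peak_contributions.items()}

# Three hypothetical users; user "c" has low peak demand (e.g. rooftop PV).
charges = allocate_network_cost(
    total_cost=1000.0,
    peak_contributions={"a": 5.0, "b": 3.0, "c": 2.0})
print(charges)  # {'a': 500.0, 'b': 300.0, 'c': 200.0}
```

A richer utilization profile would differentiate by time of use and network location rather than a single peak value, which is where the reference network model's cost drivers enter.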
Thesis: S.M. in Technology and Policy, Massachusetts Institute of Technology, Engineering Systems Division, 2015.; Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2015.; This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 66-69).
2015-01-01T00:00:00Z
A sampling technique based on LDPC codes
http://hdl.handle.net/1721.1/97821
A sampling technique based on LDPC codes
Zhang, Xuhong, S.M. Massachusetts Institute of Technology
Given an inference problem, it is common that exact inference algorithms are computationally intractable and one has to resort to approximate inference algorithms. Monte Carlo methods, which rely on repeated sampling of the target distribution to obtain numerical results, are a powerful and popular way to tackle difficult inference problems. In order to use Monte Carlo methods, a good sampling scheme is vital. This thesis proposes a new sampling scheme based on low-density parity-check (LDPC) codes and compares it with existing sampling techniques. The proposed sampling scheme works for discrete variables only, but makes no further assumptions about the structure of the target distribution. The main idea of the proposed sampling method relies on the concept of typicality. By definition, a strongly typical sequence with respect to a distribution closely approximates that distribution; in other words, if we can find a strongly typical sequence, the symbols in the sequence can be used as samples from the distribution. According to asymptotic analysis, the set of typical sequences dominates the probability mass, and all typical sequences are roughly equiprobable. Thus samples from the distribution can be obtained by associating each typical sequence with an index, uniformly randomly picking an index, and finding the typical sequence that corresponds to the chosen index. The symbols in that sequence are the desired samples. To simulate this process in practice, an LDPC code is introduced. Its parity-check values are uniformly randomly generated and can be regarded as the index. We then look for the most likely sequence that satisfies all the parity checks; it will be proved that this sequence is typical with high probability if the LDPC code has an appropriate rate. If the most likely sequence found is typical, it can be regarded as the one corresponding to the chosen index. In practice, finding the most likely sequence can be computationally intractable.
Thus the belief propagation algorithm is used to perform approximate simulation of the sampling process. The proposed LDPC-based sampling scheme is formally defined first. After proving its correctness under maximum-likelihood simulation, we empirically examine the performance of the scheme on several distributions, namely Markov chain sources, single-loop sources, and two-dimensional Ising models. The results show that the proposed scheme produces good-quality samples for all of these distributions.
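The syndrome-as-index idea in the abstract can be illustrated at toy scale. The sizes, the i.i.d. Bernoulli target, and the brute-force search below are our illustrative choices, not the thesis's actual code construction or decoder: we draw a parity-check matrix H, pick a syndrome s (the "index"), and return the most likely length-n sequence satisfying Hx = s over GF(2).

```python
# Toy brute-force sketch of LDPC-style sampling for an i.i.d. binary
# target. Exhaustive search stands in for the maximum-likelihood
# decoder (belief propagation in the thesis) and is feasible only
# because n is tiny here.
import itertools
import random

n, k = 8, 4            # block length and number of parity checks (toy sizes)
p = 0.3                # target distribution: i.i.d. Bernoulli(p) bits
rng = random.Random(0)

# Random parity-check matrix H (k x n) over GF(2).
H = [[rng.randint(0, 1) for _ in range(n)] for _ in range(k)]

def syndrome(x):
    return [sum(h * b for h, b in zip(row, x)) % 2 for row in H]

# Pick the syndrome as H applied to a uniform random x0: this guarantees
# H x = s is solvable, and when H has full row rank s is itself uniform.
x0 = [rng.randint(0, 1) for _ in range(n)]
s = syndrome(x0)

def likelihood(x):
    ones = sum(x)
    return p ** ones * (1 - p) ** (n - ones)

# Most likely sequence whose parity checks equal the chosen index s;
# its symbols serve as the samples.
candidates = [x for x in itertools.product([0, 1], repeat=n)
              if syndrome(list(x)) == s]
sample = max(candidates, key=likelihood)
print(sample)
```

With an appropriately rated code, the returned sequence is typical with high probability, which is exactly the property the thesis proves for the maximum-likelihood version of the scheme.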
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2015.; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 111-113).
2015-01-01T00:00:00Z
An evaluation of concurrency control with one thousand cores
http://hdl.handle.net/1721.1/97820
An evaluation of concurrency control with one thousand cores
Yu, Xiangyao
Computer architectures are moving toward an era dominated by many-core machines with dozens or even hundreds of cores on a single chip. This unprecedented level of on-chip parallelism introduces a new dimension to scalability that current database management systems (DBMSs) were not designed for. In particular, as the number of cores increases, the problem of concurrency control becomes extremely challenging. With hundreds of threads running in parallel, the complexity of coordinating competing accesses to data will likely diminish the gains from increased core counts. To better understand just how unprepared current DBMSs are for future CPU architectures, we performed an evaluation of concurrency control for on-line transaction processing (OLTP) workloads on many-core chips. We implemented seven concurrency control algorithms in a main-memory DBMS and, using computer simulations, scaled our system to 1024 cores. Our analysis shows that all of the algorithms fail to scale to this magnitude, but for different reasons. In each case, we identify fundamental bottlenecks that are independent of the particular database implementation and argue that even state-of-the-art DBMSs suffer from these limitations. We conclude that rather than pursuing incremental solutions, many-core chips may require a completely redesigned DBMS architecture that is built from the ground up and tightly coupled with the hardware.
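To make the kind of algorithm being evaluated concrete, here is a minimal sketch of backward-validation optimistic concurrency control, one broad class of the algorithms such studies compare. The class, function, and field names are our own illustration, not the thesis's implementation: a committing transaction aborts if any item it read was overwritten by a transaction that committed after it started.

```python
# Toy backward-validation OCC: abort a committing transaction if a
# later committer wrote any item in its read set (stale read).

class Transaction:
    def __init__(self, start_ts):
        self.start_ts = start_ts
        self.reads = set()
        self.writes = set()

committed = []   # (commit_ts, write_set) of every committed transaction

def try_commit(txn, commit_ts):
    # Backward validation: conflict if someone committed after we
    # started and wrote an item we read.
    for ts, wset in committed:
        if ts > txn.start_ts and wset & txn.reads:
            return False                     # abort: read is stale
    committed.append((commit_ts, txn.writes))
    return True

t1 = Transaction(start_ts=1); t1.reads = {"x"}; t1.writes = {"x"}
t2 = Transaction(start_ts=1); t2.reads = {"x"}; t2.writes = {"y"}
assert try_commit(t1, commit_ts=2)   # first committer wins
print(try_commit(t2, commit_ts=3))   # False: t2's read of "x" is stale
```

The scalability problem the thesis studies arises because structures like the `committed` list (or lock tables, or timestamp allocators) become shared hot spots when hundreds of threads validate concurrently.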
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2015.; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 71-75).
2015-01-01T00:00:00Z